Pipeline Risk Management Manual
Ideas, Techniques, and Resources
Third Edition

W. Kent Muhlbauer


AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

ELSEVIER


Gulf Professional Publishing is an imprint of Elsevier
200 Wheeler Road, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK

Copyright © 2004, Elsevier Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting "Customer Support" and then "Obtaining Permissions."


Recognizing the importance of preserving what has been written, Elsevier prints its books on acid-free paper whenever possible.

Library of Congress Cataloging-in-Publication Data
Muhlbauer, W. Kent.
Pipeline risk management manual : a tested and proven system to prevent loss and assess risk / by W. Kent Muhlbauer.-3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-7506-7579-9
1. Pipelines-Safety measures-Handbooks, manuals, etc. 2. Pipelines-Reliability-Handbooks, manuals, etc. I. Title.
TJ930.M84 2004
2003058315

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: 0-7506-7579-9

For information on all Gulf Professional Publishing publications visit our Web site at www.gulfpp.com

03 04 05 06 07 08 09   10 9 8 7 6 5 4 3 2 1

Printed in the United States of America

Contents

Acknowledgments
Preface
Introduction
Risk Assessment at a Glance

Chapter 1   Risk: Theory and Application
Chapter 2   Risk Assessment Process
Chapter 3   Third-party Damage Index
Chapter 4   Corrosion Index
Chapter 5   Design Index
Chapter 6   Incorrect Operations Index
Chapter 7   Leak Impact Factor
Chapter 8   Data Management and Analyses
Chapter 9   Additional Risk Modules
Chapter 10  Service Interruption Risk
Chapter 11  Distribution Systems
Chapter 12  Offshore Pipeline Systems
Chapter 13  Stations and Surface Facilities
Chapter 14  Absolute Risk Estimates
Chapter 15  Risk Management

Appendix A  Typical Pipeline Products
Appendix B  Leak Rate Determination
Appendix C  Pipe Strength Determination
Appendix D  Surge Pressure Calculations
Appendix E  Sample Pipeline Risk Assessment Algorithms
Appendix F  Receptor Risk Evaluation
Appendix G  Examples of Common Pipeline Inspection and Survey Techniques

Glossary
References
Index

Acknowledgments

As in the last edition, the author wishes to express his gratitude to the many practitioners of formal pipeline risk management who have improved the processes and shared their ideas. The author also wishes to thank reviewers of this edition who contributed their time and expertise to improving portions of this book, most notably Dr. Karl Muhlbauer and Mr. Bruce Beighle.

Preface

The first edition of this book was written at a time when formal risk assessments of pipelines were fairly rare. To be sure, there were some repair/replace models out there, some maintenance prioritization schemes, and the occasional regulatory approval study, but, generally, those who embarked on a formal process for assessing pipeline risks were doing so for very specific needs and were not following a prescribed methodology.

The situation is decidedly different now. Risk management is increasingly being mandated by regulations. A risk assessment seems to be the centerpiece of every approval process and every pipeline litigation. Regulators are directly auditing risk assessment programs. Risk management plans are increasingly coming under direct public scrutiny.

While risk has always been an interesting topic to many, it is also often clouded by preconceptions of requirements of huge databases, complex statistical analyses, and obscure probabilistic techniques. In reality, good risk assessments can be done even in a data-scarce environment. This was the major premise of the earlier editions. The first edition even had a certain sense of being a risk assessment cookbook-"Here are the ingredients and how to combine them." Feedback from readers indicates that this was useful to them.

Nonetheless, there also seems to be an increasing desire for more sophistication in risk modeling. This is no doubt the result of more practitioners than ever before-pushing the boundaries-as well as the more widespread availability of data and the more powerful computing environments that make it easy and cost effective to consider many more details in a risk model. Initiatives are currently under way to generate more

complete and useful databases to further our knowledge and to support detailed risk modeling efforts.

Given this as a backdrop, one objective of this third edition is to again provide a simple approach to help a reader put together some kind of assessment tool with a minimum of aggravation. However, the primary objective of this edition is to provide a reference book for concepts, ideas, and maybe a few templates covering a wider range of pipeline risk issues and modeling options. This is done with the belief that an idea and reference book will best serve the present needs of pipeline risk managers and anyone interested in the field.

While I generally shy away from technical books that get too philosophical and are weak in specific how-to's, it is just simply not possible to adequately discuss risk without getting into some social and psychological issues. It is also doing a disservice to the reader to imply that there is only one correct risk management approach. Just as an engineer will need to engage in a give-and-take process when designing the optimum building or automobile, so too will the designer of a risk assessment/management process. Those embarking on a pipeline risk management process should realize that, once some basic understanding is obtained, they have many options in specific approach. This should be viewed as an exciting feature, in my opinion. Imagine how mundane would be the practice of engineering if there were little variation in problem solving. So, my advice to the beginner is simple: arm yourself with knowledge, approach this as you would any significant engineering project, and then enjoy the journey!

Introduction

As with previous editions of this book, the chief objective of this edition is to make pipelines safer. This is hopefully accomplished by enhancing readers' understanding of pipeline risk issues and equipping them with ideas to measure, track, and continuously improve pipeline safety.

We in the pipeline industry are obviously very familiar with all aspects of pipelining. This familiarity can diminish our sensitivity to the complexity and inherent risk of this undertaking. The transportation of large quantities of sometimes very hazardous products over great distances through a pressurized pipeline system, often with zero-leak tolerance, is not a trivial thing. It is useful to occasionally step back and re-assess what a pipeline really is, through fresh eyes. We are placing a very complex, carefully engineered structure into an enormously variable, ever-changing, and usually hostile environment. One might reply, "Complex!? It's just a pipe!" But the underlying technical issues can be enormous. Metallurgy, fracture mechanics, welding processes, stress-strain reactions, soil interface, mechanical properties of the coating as well as their critical electrochemical properties, soil chemistry, every conceivable geotechnical event creating a myriad of forces and loadings, sophisticated computerized SCADA systems, and we're not even to rotating equipment or the complex electrochemical reactions involved in corrosion prevention yet! A pipeline is indeed a complex system that must coexist with all of nature's and man's frequent lack of hospitality.

The variation in this system is also enormous. Material and environmental changes over time are of chief concern. The pipeline must literally respond to the full range of possible ambient conditions of today as well as events of months and years past that are still impacting water tables, soil chemistry, land movements, etc. Out of all this variation, we are seeking risk 'signals.' Our measuring of risk must therefore identify and properly consider all of the variables in such a way that we can indeed pick out risk signals from all of the background 'noise' created by the variability.

Underlying most meanings of risk is the key issue of 'probability.' As is discussed in this text, probability expresses a degree of belief. This is the most compelling definition of probability because it encompasses statistical evidence as well as interpretations and judgment. Our beliefs should be firmly rooted in solid, old-fashioned engineering judgment and reasoning. This does not mean ignoring statistics-rather, using data appropriately-for diagnosis; to test hypotheses; to

uncover new information. Ideally, the degree of belief would also be determined in some consistent fashion so that any two estimators would arrive at the same conclusion given the same evidence. This is the purpose of this book-to provide frameworks in which a given set of evidence consistently leads to a specific degree of belief regarding the safety of a pipeline.

Some of the key beliefs underpinning pipeline risk management, in this author's view, include:

• Risk management techniques are fundamentally decision support tools.
• We must go through some complexity in order to achieve "intelligent simplification."
• In most cases, we are more interested in identifying locations where a potential failure mechanism is more aggressive, rather than predicting the length of time the mechanism must be active before failure occurs.
• Many variables impact pipeline risk. Among all possible variables, choices are required to strike a balance between a comprehensive model (one that covers all of the important stuff) and an unwieldy model (one with too many relatively unimportant details).
• Resource allocation (or reallocation) towards reduction of failure probability is normally the most effective way to practice risk management.

(The complete list can be seen in Chapter 2.)

The most critical belief underlying this book is that all available information should be used in a risk assessment. There are very few pieces of collected pipeline information that are not useful to the risk model. The risk evaluator should expect any piece of information to be useful until he absolutely cannot see any way that it can be relevant to risk or decides its inclusion is not cost effective. Any and all experts' opinions and thought processes can and should be codified, thereby demystifying their personal assessment processes. The experts' analysis steps and logic processes can be duplicated to a large extent in the risk model. A very detailed model should ultimately be smarter than any single individual or group of individuals operating or maintaining the pipeline, including that retired guy who knew everything. It is often useful to think of the model building process as 'teaching the model' rather than 'designing the model.' We are training the model to 'think'


like the best experts and giving it the collective knowledge of the entire organization and all the years of record-keeping.

Changes from Previous Editions

This edition offers some new example assessment schemes for evaluating various aspects of pipeline risk. After several years of use, some changes are also suggested for the model proposed in previous editions of this book. Changes reflect the input of pipeline operators, pipeline experts, and changes in technology. They are thought to improve our ability to measure pipeline risks in the model. Changes to risk algorithms have always been anticipated, and every risk model should be regularly reviewed in light of its ability to incorporate new knowledge and the latest information.

Today's computer systems are much more robust than in past years, so short-cuts, very general assumptions, and simplistic approximations to avoid costly data integrations are less justifiable. It was more appropriate to advocate a very simple approach when practitioners were picking this up only as a 'good thing' to do, rather than as a mandated and highly scrutinized activity. There is certainly still a place for the simple risk assessment. As with the most robust approach, even the simple techniques support decision making by crystallizing thinking, removing much subjectivity, helping to ensure consistency, and generating a host of other benefits. So, the basic risk assessment model of the second edition is preserved in this edition, although it is tempered with many alternative and supporting evaluation ideas.

The most significant changes for this edition are seen in the Corrosion Index and Leak Impact Factor (LIF). In the former, variables have been extensively re-arranged to better reflect those variables' relationships and interactions. In the case of LIF, the math by which the consequence variables are

combined has been made more intuitive. In both cases, the variables to consider are mostly the same as in previous editions. As with previous editions, the best practice is to assess major risk variables by evaluating and combining many lesser variables, generally available from the operator's records or public domain databases. This allows assessments to benefit from direct use of measurements or at least qualitative evaluations of several small variables, rather than a single, larger variable, thereby reducing subjectivity.

For those who have risk assessment systems in place based on previous editions, the recommendation is simple: retain your current model and all its variables, but build a modern foundation beneath those variables (if you haven't already done so). In other words, bolster the current assessments with more complete consideration of all available information. Work to replace the high-level assessments of 'good,' 'fair,' and 'poor' with evaluations that combine several data-rich subvariables such as pipe-to-soil potential readings, house counts, ILI anomaly indications, soil resistivities, visual inspection results, and all the many other measurements taken. In many cases, this allows your 'as-collected' data and measurements to be used directly in the risk model-no extra interpretation steps required. This is straightforward and will be a worthwhile effort, yielding gains in efficiency and accuracy.

As risks are re-assessed with new techniques and new information, the results will often be very similar to previous assessments. After all, the previous higher-level assessments were no doubt based on these same subvariables, only informally. If the new processes do yield different results than the previous assessments, then some valuable knowledge can be gained. This new knowledge is obtained by finding the disconnect-the basis of the differences-and learning why one of the approaches was not 'thinking' correctly. In the end, the risk assessment has been improved.
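To make the 'data-rich subvariable' idea above concrete, here is a minimal illustrative sketch (in Python, not an algorithm prescribed by this book) of how an as-collected pipe-to-soil potential reading might be mapped onto a 0-10 sub-variable score instead of a 'good/fair/poor' label. The point scale and all breakpoints except the widely used -850 mV (vs. Cu/CuSO4) criterion are assumptions invented for the example.

    def cp_effectiveness_score(potential_mv_cse: float) -> float:
        """Map a pipe-to-soil potential reading (mV vs. Cu/CuSO4, negative values)
        onto a 0-10 sub-variable score. Thresholds are illustrative assumptions,
        not values prescribed by this book."""
        if potential_mv_cse <= -950:      # strongly polarized: full credit
            return 10.0
        if potential_mv_cse <= -850:      # meets the common -850 mV criterion
            # linear credit between -850 mV (8 pts) and -950 mV (10 pts)
            return 8.0 + 2.0 * (-850 - potential_mv_cse) / 100.0
        if potential_mv_cse <= -700:      # marginal protection
            return 8.0 * (-700 - potential_mv_cse) / 150.0
        return 0.0                        # little or no effective protection

    # Example: a reading of -880 mV earns most, but not all, of the available points.
    print(round(cp_effectiveness_score(-880), 1))   # 8.6

The point is only that the raw measurement feeds the score directly, with no intermediate subjective label.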

Disclaimer

The user of this book is urged to exercise judgment in the use of the data presented here. Neither the author nor the publisher provides any guarantee, expressed or implied, with regard to the general or specific application of the data, the range of errors that may be associated with any of the data, or the appropriateness of using any of the data. The author accepts no responsibility for damages, if any, suffered by any reader or user of this book as a result of decisions made or actions taken on information contained herein.

Risk Assessment at a Glance

The following is a summary of the risk evaluation framework described in Chapters 3 through 7. It is one of several approaches to basic pipeline risk assessment in which the main consequences of concern are related to public health and safety, including environmental considerations. Regardless of the risk assessment methodology used, this summary can be useful as a checklist to ensure that all risk issues are addressed.

Figure 0.1 Risk assessment model flowchart.


Relative Risk Rating = (Index Sum) ÷ (Leak Impact Factor)
Index Sum = (Third Party) + (Corrosion) + (Design) + (Incorrect Operations)

Third-party Index
A. Minimum Depth of Cover ............. 0-20 pts    20%
B. Activity Level ..................... 0-20 pts    20%
C. Aboveground Facilities ............. 0-10 pts    10%
D. Line Locating ...................... 0-15 pts    15%
E. Public Education ................... 0-15 pts    15%
F. Right-of-way Condition ............. 0-5 pts      5%
G. Patrol ............................. 0-15 pts    15%
Total ................................. 0-100 pts  100%

Corrosion Index
A. Atmospheric Corrosion .............. 0-10 pts    10%
   A1. Atmospheric Exposure ........... 0-5 pts
   A2. Atmospheric Type ............... 0-2 pts
   A3. Atmospheric Coating ............ 0-3 pts
B. Internal Corrosion ................. 0-20 pts    20%
   B1. Product Corrosivity ............ 0-10 pts
   B2. Internal Protection ............ 0-10 pts
C. Subsurface Corrosion ............... 0-70 pts    70%
   C1. Subsurface Environment ......... 0-20 pts
       Soil Corrosivity ............... 0-15 pts
       Mechanical Corrosion ........... 0-5 pts
   C2. Cathodic Protection ............ 0-25 pts
       Effectiveness .................. 0-15 pts
       Interference Potential ......... 0-10 pts
   C3. Coating ........................ 0-25 pts
       Fitness ........................ 0-10 pts
       Condition ...................... 0-15 pts
Total ................................. 0-100 pts  100%

Design Index
A. Safety Factor ...................... 0-35 pts    35%
B. Fatigue ............................ 0-15 pts    15%
C. Surge Potential .................... 0-10 pts    10%
D. Integrity Verifications ............ 0-25 pts    25%
E. Land Movements ..................... 0-15 pts    15%
Total ................................. 0-100 pts  100%

Incorrect Operations Index
A. Design ............................. 0-30 pts    30%
   A1. Hazard Identification .......... 0-4 pts
   A2. MAOP Potential ................. 0-12 pts
   A3. Safety Systems ................. 0-10 pts
   A4. Material Selection ............. 0-2 pts
   A5. Checks ......................... 0-2 pts


B. Construction ....................... 0-20 pts    20%
   B1. Inspection ..................... 0-10 pts
   B2. Materials ...................... 0-2 pts
   B3. Joining ........................ 0-2 pts
   B4. Backfill ....................... 0-2 pts
   B5. Handling ....................... 0-2 pts
   B6. Coating ........................ 0-2 pts

C. Operation .......................... 0-35 pts    35%
   C1. Procedures ..................... 0-7 pts
   C2. SCADA/Communications ........... 0-3 pts
   C3. Drug Testing ................... 0-2 pts
   C4. Safety Programs ................ 0-2 pts
   C5. Surveys/Maps/Records ........... 0-5 pts
   C6. Training ....................... 0-10 pts
   C7. Mechanical Error Preventers .... 0-6 pts

D. Maintenance ........................ 0-15 pts    15%
   D1. Documentation .................. 0-2 pts
   D2. Schedule ....................... 0-3 pts
   D3. Procedures ..................... 0-10 pts

Total Index Sum 0-400 pts

Leak Impact Factor
Leak Impact Factor = Product Hazard (PH) x Leak Volume (LV) x Dispersion (D) x Receptors (R)

A. Product Hazard (Acute + Chronic Hazards) ... 0-22 pts
   A1. Acute Hazards
       a. Nf .......................... 0-4 pts
       b. Nr .......................... 0-4 pts
       c. Nh .......................... 0-4 pts
       Total (Nf + Nr + Nh) ........... 0-12 pts
   A2. Chronic Hazard (RQ) ............ 0-10 pts
B. Leak Volume (LV)
C. Dispersion (D)
D. Receptors (R)
   D1. Population Density (Pop)
   D2. Environmental Considerations (Env)
   D3. High-Value Areas (HVA)
   Total (Pop + Env + HVA)
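The following is a minimal sketch (in Python) of the arithmetic summarized above, assuming the framework in which higher index points reflect more risk mitigation and the index sum is divided by the Leak Impact Factor; all segment values are hypothetical and purely for illustration.

    # Minimal sketch of the relative-risk arithmetic summarized above, using
    # hypothetical segment scores (all values are invented for illustration).
    def relative_risk_score(third_party: float, corrosion: float,
                            design: float, incorrect_ops: float,
                            leak_impact_factor: float) -> float:
        """Index Sum (0-400, where higher points reflect more mitigation)
        divided by the Leak Impact Factor. A higher result therefore
        indicates a relatively safer segment in this framework."""
        index_sum = third_party + corrosion + design + incorrect_ops
        return index_sum / leak_impact_factor

    # Hypothetical segment: moderately good index scores, moderate consequences.
    score = relative_risk_score(third_party=62, corrosion=55, design=70,
                                incorrect_ops=68, leak_impact_factor=25)
    print(score)  # 255 / 25 = 10.2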

Risk: Theory and Application

Contents
I.   The science and philosophy of risk
     Embracing paranoia
     The scientific method
     Modeling
II.  Basic concepts
     Hazard
     Risk
     Failure
     Probability
     Frequency, statistics, and probability
     Failure rates
     Consequences
     Risk assessment
     Risk management
     Experts
III. Uncertainty
IV.  Risk process-the general steps

I. The science and philosophy of risk

Embracing paranoia

One of Murphy's famous laws states that "left to themselves, things will always go from bad to worse." (Murphy's laws are famous parodies of scientific laws, humorously pointing out all the things that can and often do go wrong in science and life.) This humorous prediction is, in a way, echoed in the second law of thermodynamics. That law deals with the concept of entropy. Stated simply, entropy is a measure of the disorder of a system. The thermodynamics law states that "entropy must always increase in the universe and in any hypothetical isolated system within it" [34]. Practical application of this law says that to offset the effects of entropy, energy must be injected into any system. Without adding energy, the system becomes increasingly disordered. Although the law was intended to be a statement of a scientific property, it was seized upon by "philosophers" who defined system to mean a car, a house, economics, a civilization, or anything that became disordered. By this extrapolation, the law explains why a desk or a garage becomes increasingly cluttered until a cleanup (injection of energy) is initiated.


Gases diffuse and mix in irreversible processes, unmaintained buildings eventually crumble, and engines (highly ordered systems) break down without the constant infusion of maintenance energy. Here is another way of looking at the concept: "Mother Nature hates things she didn't create." Forces of nature seek to disorder man's creations until the creation is reduced to the most basic components. Rust is an example-metal seeks to disorder itself by reverting to its original mineral components.

If we indulge ourselves with this line of reasoning, we may soon conclude that pipeline failures will always occur unless an appropriate type of energy is applied. Transport of products in a closed conduit, often under high pressure, is a highly ordered, highly structured undertaking. If nature indeed seeks increasing disorder, forces are continuously at work to disrupt this structured process. According to this way of thinking, a failed pipeline with all its product released into the atmosphere or into the ground, or equipment and components decaying and reverting to their original premanufactured states, represents the less ordered, more natural state of things.

These quasi-scientific theories actually provide a useful way of looking at portions of our world. If we adopt a somewhat paranoid view of forces continuously acting to disrupt our creations, we become more vigilant. We take actions to offset those forces. We inject energy into a system to counteract the effects of entropy. In pipelines, this energy takes the forms of maintenance, inspection, and patrolling; that is, protecting the pipeline from the forces seeking to tear it apart.

After years of experience in the pipeline industry, experts have established activities that are thought to directly offset specific threats to the pipeline. Such activities include patrolling, valve maintenance, corrosion control, and all of the other actions discussed in this text. Many of these activities have been mandated by governmental regulations, but usually only after their value has been established by industry practice. Where the activity has not proven to be effective in addressing a threat, it has eventually been changed or eliminated. This evaluation process is ongoing. When new technology or techniques emerge, they are incorporated into operations protocols. The pipeline activity list is therefore being continuously refined.

A basic premise of this book is that a risk assessment methodology should follow these same lines of reasoning. All activities that influence, favorably or unfavorably, the pipeline should be considered-even if comprehensive, historical data on the effectiveness of a particular activity are not yet available. Industry experience and operator intuition can and should be included in the risk assessment.

The scientific method

This text advocates the use of simplifications to better understand and manage the complex interactions of the many variables that make up pipeline risk. This approach may appear to some to be inconsistent with their notions about scientific process. Therefore, it may be useful to briefly review some pertinent concepts related to science, engineering, and even philosophy.

The results of a good risk assessment are in fact the advancement of a theory. The theory is a description of the expected behavior, in risk terms, of a pipeline system over some future period of time. Ideally, the theory is formulated from a risk assessment technique that conforms with appropriate scientific

methodologies and has made appropriate use of information and logic to create a model that can reliably produce such theories. It is hoped that the theory is a fair representation of actual risks. To be judged a superior theory by the scientific community, it will use all available information in the most rigorous fashion and be consistent with all available evidence. To be judged a superior theory by most engineers, it will additionally have a level of rigor and sophistication commensurate with its predictive capability; that is, the cost of the assessment and its use will not exceed the benefits derived from its use. If the pipeline actually behaves as predicted, then everyone's confidence in the theory will grow, although results consistent with the predictions will never "prove" the theory.

Much has been written about the generation and use of theories and the scientific method. One useful explanation of the scientific method is that it is the process by which scientists endeavor to construct a reliable and consistent representation of the world. In many common definitions, the methodology involves hypothesis generation and testing of that hypothesis:

1. Observe a phenomenon.
2. Hypothesize an explanation for the phenomenon.
3. Predict some measurable consequence that your hypothesis would have if it turned out to be true.
4. Test the predictions experimentally.

Much has also been written about the fallacy of believing that scientists use only a single method of discovery and that some special type of knowledge is thereby generated by this special method. For example, the classic methodology shown above would not help much with investigation of the nature of the cosmos. No single path to discovery exists in science, and no one clear-cut description can be given that accounts for all the ways in which scientific truth is pursued [56, 88].

Common definitions of the scientific method note aspects such as objectivity and acceptability of results from scientific study. Objectivity indicates the attempt to observe things as they are, without altering observations to make them consistent with some preconceived world view. From a risk perspective, we want our models to be objective and unbiased (see the discussion of bias later in this chapter). However, our data sources often cannot be taken at face value. Some interpretation and, hence, alteration is usually warranted, thereby introducing some subjectivity. Acceptability is judged in terms of the degree to which observations and experimentations can be reproduced. Of course, the ideal risk model will be accurate, but accuracy may only be verified after many years. Reproducibility is another characteristic that is sought and immediately verifiable. If multiple assessors examine the same situation, they should come to similar conclusions if our model is acceptable.

The scientific method requires both inductive reasoning and deductive reasoning. Induction or inference is the process of drawing a conclusion about an object or event that has yet to be observed or occur on the basis of previous observations of similar objects or events. In both everyday reasoning and scientific reasoning regarding matters of fact, induction plays a central role. In an inductive inference, for example, we draw conclusions about an entire group of things, or a population, on the basis of data about a sample of that group or population; or we predict the occurrence of a future event on the basis of observations of similar past events; or we attribute a property to a nonobserved thing on the grounds that all observed things of

the same kind have that property; or we draw conclusions about causes of an illness based on observations of symptoms. Inductive inference permeates almost all fields, including education, psychology, physics, chemistry, biology, and sociology [56]. The role of induction is central to many of our processes of reasoning. At least one application of inductive reasoning in pipeline risk assessment is obvious-using past failures to predict future performance. A more narrow example of inductive reasoning for pipeline risk assessment would be: "Pipeline ABC is shallow and fails often, therefore all pipelines that are shallow fail more often." Deduction, on the other hand, reasons forward from established rules: "All shallow pipelines fail more frequently; pipeline ABC is shallow; therefore pipeline ABC fails more frequently."

As an interesting aside to inductive reasoning, philosophers have struggled with the question of what justification we have to take for granted the common assumptions used with induction: that the future will follow the same patterns as the past; that a whole population will behave roughly like a randomly chosen sample; that the laws of nature governing causes and effects are uniform; or that we can presume that a sufficiently large number of observed objects gives us grounds to attribute something to another object we have not yet observed. In short, what is the justification for induction itself? Although it is tempting to try to justify induction by pointing out that inductive reasoning is commonly used in both everyday life and science, and its conclusions are, by and large, proven to be correct, this justification is itself an induction and therefore it raises the same problem: Nothing guarantees that simply because induction has worked in the past it will continue to work in the future. The problem of induction raises important questions for the philosopher and logician whose concern it is to provide a basis for assessment of the correctness and the value of methods of reasoning [56, 88].

Beyond the reasoning foundations of the scientific method, there is another important characteristic of a scientific theory or hypothesis that differentiates it from, for example, an act of faith: A theory must be "falsifiable." This means that there must be some experiment or possible discovery that could prove the theory untrue. For example, Einstein's theory of relativity made predictions about the results of experiments. These experiments could have produced results that contradicted Einstein, so the theory was (and still is) falsifiable [56]. On the other hand, the existence of God is an example of a proposition that cannot be falsified by any known experiment. Risk assessment results, or "theories," will predict very rare events and hence not be falsifiable for many years. This implies an element of faith in accepting such results. Because most risk assessment practitioners are primarily interested in the immediate predictive power of their assessments, many of these issues can largely be left to the philosophers. However, it is useful to understand the implications and underpinnings of our beliefs.

Modeling

As previously noted, the scientific method is a process by which we create representations or models of our world. Science and engineering (as applied science) are and always have been concerned with creating models of how things work.

As it is used here, the term model refers to a set of rules that are used to describe a phenomenon. Models can range from very simple screening tools (i.e., "if A and not B, then risk = low") to enormously complex sets of algorithms involving hundreds of variables that employ concepts from expert systems, fuzzy logic, and other artificial intelligence constructs. Model construction enables us to better understand our physical world and hence to create better engineered systems. Engineers actively apply such models in order to build more robust systems. Model building and model application/evaluation are therefore the foundation of engineering. Similarly, risk assessment is the application of models to increase the understanding of risk, as discussed later in this chapter.

In addition to the classical models of logic, logic techniques are emerging that seek to better deal with uncertainty and incomplete knowledge. Methods of measuring "partial truths"-when a thing is neither completely true nor completely false-have been created based on fuzzy logic originating in the 1960s from the University of California at Berkeley as techniques to model the uncertainty of natural language. Fuzzy logic or fuzzy set theory resembles human reasoning in the face of uncertainty and approximate information. Questions such as "To what degree is it safe?" can be addressed through these techniques. They have found engineering application in many control systems ranging from "smart" clothes dryers to automatic trains.
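As a toy illustration of the "partial truth" idea (a sketch in Python, not drawn from any particular risk model), the following expresses "the pipeline is safe from excavation damage" as a fuzzy membership value driven by depth of cover; the breakpoints are invented for the example.

    def membership_safe(depth_of_cover_in: float) -> float:
        """Toy fuzzy membership function for the statement 'the pipeline is safe
        from excavation damage', driven by depth of cover in inches.
        Breakpoints are invented for illustration only."""
        if depth_of_cover_in <= 12:
            return 0.0            # clearly not 'safe' by this single measure
        if depth_of_cover_in >= 48:
            return 1.0            # clearly 'safe' by this single measure
        # partial truth: degree of safety rises linearly between 12 and 48 inches
        return (depth_of_cover_in - 12) / 36.0

    print(membership_safe(30))    # 0.5 -> 'safe to degree 0.5'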

II. Basic concepts

Hazard

Underlying the definition of risk is the concept of hazard. The word hazard comes from al zahr, the Arabic word for "dice" that referred to an ancient game of chance [10]. We typically define a hazard as a characteristic or group of characteristics that provides the potential for a loss. Flammability and toxicity are examples of such characteristics. It is important to make the distinction between a hazard and a risk because we can change the risk without changing a hazard.

When a person crosses a busy street, the hazard should be clear to that person. Loosely defined, it is the prospect that the person must place himself in the path of moving vehicles that can cause him great bodily harm were he to be struck by one or more of them. The hazard is therefore injury or fatality as a result of being struck by a moving vehicle. The risk, however, is dependent on how that person conducts himself in the crossing of the street. He most likely realizes that the risk is reduced if he crosses in a designated traffic-controlled area and takes extra precautions against vehicle operators who may not see him. He has not changed the hazard-he can still be struck by a vehicle-but his risk of injury or death is reduced by prudent actions. Were he to encase himself in an armored vehicle for the trip across the street, his risk would be reduced even further-he has reduced the consequences of the hazard.

Several methodologies are available to identify hazards and threats in a formal and structured way. A hazard and operability (HAZOP) study is a technique in which a team of system experts is guided through a formal process in which imaginative scenarios are developed using specific guide words and analyzed by the team. Event-tree and fault-tree analyses are other tools. Such techniques underlie the identified threats to pipeline integrity that are presented in this book. Identified


threats can be generally grouped into two categories: time-dependent failure mechanisms and random failure mechanisms, as discussed later. The phrases threat assessment and hazard identification are sometimes used interchangeably in this book when they refer to identifying mechanisms that can lead to a pipeline failure with accompanying consequences.

Risk

Risk is most commonly defined as the probability of an event that causes a loss and the potential magnitude of that loss. By this definition, risk is increased when either the probability of the event increases or when the magnitude of the potential loss (the consequences of the event) increases. Transportation of products by pipeline is a risk because there is some probability of the pipeline failing, releasing its contents, and causing damage (in addition to the potential loss of the product itself). The most commonly accepted definition of risk is often expressed as a mathematical relationship:

Risk = (event likelihood) x (event consequence)

As such, a risk is often expressed in measurable quantities such as the expected frequency of fatalities, injuries, or economic loss. Monetary costs are often used as part of an overall expression of risk; however, the difficult task of assigning a dollar value to human life or environmental damage is necessary in using this as a metric. Related risk terms include acceptable risk, tolerable risk, risk tolerance, and negligible risk, in which risk assessment and decision making meet. These are discussed in Chapters 14 and 15.

A complete understanding of the risk requires that three questions be answered:

1. What can go wrong?
2. How likely is it?
3. What are the consequences?

By answering these questions, the risk is defined.
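A tiny worked example (in Python) of the relationship Risk = (event likelihood) x (event consequence), using invented numbers purely to show how the terms combine into an expected annual loss:

    # Toy example of Risk = (event likelihood) x (event consequence).
    likelihood = 2.0e-4        # expected failures per mile-year (hypothetical)
    consequence = 350_000.0    # expected cost per failure, dollars (hypothetical)
    miles = 120                # length of the hypothetical pipeline segment

    risk_per_year = likelihood * consequence * miles
    print(risk_per_year)       # 8400.0 dollars of expected loss per year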

Failure

Answering the question of "what can go wrong?" begins with defining a pipeline failure. The unintentional release of pipeline contents is one definition. Loss of integrity is another way to characterize pipeline failure. However, a pipeline can fail in other ways that do not involve a loss of contents. A more general definition is failure to perform its intended function. In assessing the risk of service interruption, for example, a pipeline can fail by not meeting its delivery requirements (its intended purpose). This can occur through blockage, contamination, equipment failure, and so on, as discussed in Chapter 10.

Further complicating the quest for a universal definition of failure is the fact that municipal pipeline systems like water and wastewater and even natural gas distribution systems tolerate some amount of leakage (unlike most transmission pipelines). Therefore, they might be considered to have failed only when the leakage becomes excessive by some measure. Except in the

case of service interruption discussed in Chapter 10, the general definition of failure in this book will be excessive leakage. The term leakage implies that the release of pipeline contents is unintentional. This lets our definition distinguish a failure from a venting, de-pressuring, blow down, flaring, or other deliberate product release. Under this working definition, a failure will be clearer in some cases than others. For most hydrocarbon transmission pipelines, any leakage (beyond minor, molecular-level emissions) is excessive, so any leak means that the pipeline has failed. For municipal systems, determination of failure will not be as precise for several reasons, such as the fact that some leakage is only excessive-that is, a pipe failure-after it has continued for a period of time.

Failure occurs when the structure is subjected to stresses beyond its capabilities, resulting in its structural integrity being compromised. Internal pressure, soil overburden, extreme temperatures, external forces, and fatigue are examples of stresses that must be resisted by pipelines. Failure or loss of strength leading to failure can also occur through loss of material by corrosion or from mechanical damage such as scratches and gouges.

The answers to what can go wrong must be comprehensive in order for a risk assessment to be complete. Every possible failure mode and initiating cause must be identified. Every threat to the pipeline, even the more remotely possible ones, must be identified. Chapters 3 through 6 detail possible pipeline failure mechanisms grouped into the four categories of Third Party, Corrosion, Design, and Incorrect Operations. These roughly correspond to the dominant failure modes that have been historically observed in pipelines.

Probability

By the commonly accepted definition of risk, it is apparent that probability is a critical aspect of all risk assessments. Some estimate of the probability of failure will be required in order to assess risks. This addresses the second question of the risk definition: "How likely is it?"

Some think of probability as inextricably intertwined with statistics. That is, "real" probability estimates arise only from statistical analyses-relying solely on measured data or observed occurrences. However, this is only one of five definitions of probability offered in Ref. 88. It is a compelling definition since it is rooted in aspects of the scientific process and the familiar inductive reasoning. However, it is almost always woefully incomplete as a stand-alone basis for probability estimates of complex systems. In reality, there are no systems beyond very simple, fixed-outcome-type systems that can be fully understood solely on the basis of past observations-the core of statistics. Almost any system of a complexity beyond a simple roll of a die, spin of a roulette wheel, or draw from a deck of cards will not be static enough or allow enough trials for statistical analysis to completely characterize its behavior. Statistics requires data samples-past observations from which inferences can be drawn. More interesting systems tend to have fewer available observations that are strictly representative of their current states. Data interpretation becomes more and more necessary to obtain meaningful estimates. As systems become more complex, more variable in nature, and where trial observations are less available, the historical frequency approach

will often provide answers that are highly inappropriate estimates of probability. Even in cases where past frequencies lead to more reliable estimates of future events for populations, those estimates are often only poor estimates of individual events. It is relatively easy to estimate the average adulthood height of a class of third graders, but more problematic when we try to predict the height of a specific student solely on the basis of averages. Similarly, just because the national average of pipeline failures might be 1 per 1,000 mile-years, the 1,000-mile-long ABC pipeline could be failure free for 50 years or more. The point is that observed past occurrences are rarely sufficient information on which to base probability estimates.

Many other types of information can and should play an important role in determining a probability. Weather forecasting is a good example of how various sources of information come together to form the best models. The use of historical statistics (climatological data-what has the weather been like historically on this date) turns out to be a fairly decent forecasting tool (producing probability estimates), even in the absence of any meteorological interpretations. However, a forecast based solely on what has happened in previous years on certain dates would ignore knowledge of frontal movements, pressure zones, current conditions, and other information commonly available. The forecasts become much more accurate as meteorological information and expert judgment are used to adjust the base case climatological forecasts [88].

Underlying most of the complete definitions of probability is the concept of degree of belief: a probability expresses a degree of belief. This is the most compelling interpretation of probability because it encompasses the statistical evidence as well as the interpretations and judgment. Ideally, the degree of belief could be determined in some consistent fashion so that any two estimators would arrive at the same conclusion given the same evidence. It is a key purpose of this book to provide a framework by which a given set of evidence consistently leads to a specific degree of belief regarding the safety of a pipeline. (Note that the terms likelihood, probability, and chance are often used interchangeably in this text.)
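One common way (though by no means the only one) to formalize "degree of belief" is a simple Bayesian update, in which an industry-wide failure rate acts as a prior that pipeline-specific experience then adjusts. The Python sketch below illustrates that general idea with a gamma-Poisson update; it is not a method prescribed by this book, and every number is hypothetical.

    # Gamma-Poisson updating: industry experience supplies a prior belief about the
    # failure rate; pipeline-specific observations then shift that belief.
    prior_failures = 1.0        # pseudo-failures behind the industry prior (hypothetical)
    prior_exposure = 1000.0     # pseudo mile-years behind the prior (hypothetical)

    observed_failures = 0       # this pipeline's own record
    observed_exposure = 500.0   # mile-years of its own service (hypothetical)

    posterior_rate = (prior_failures + observed_failures) / (prior_exposure + observed_exposure)
    print(posterior_rate)       # ~0.00067 failures per mile-year, pulled below the
                                # 0.001 industry prior by the leak-free local record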

Frequency, statistics, and probability

As used in this book, frequency usually refers to a count of past observations; statistics refers to the analyses of the past observations; and the definition of probability is "degree of belief," which normally utilizes statistics but is rarely based entirely on them. A statistic is not a probability. Statistics are only numbers or methods of analyzing numbers. They are based on observations-past events. Statistics do not imply anything about future events until inductive reasoning is employed. Therefore, a probabilistic analysis is not only a statistical analysis. As previously noted, probability is a degree of belief. It is influenced by statistics (past observations), but only in rare cases do the statistics completely determine our belief. Such a rare case would be where we have exactly the same situation as that from which the past observations were made and we are making estimates for a population exactly like the one from which the past data arose-a very simple system.

Historical failure frequencies-and the associated statistical values-are normally used in a risk assessment. Historical

data, however, are not generally available in sufficient quantity or quality for most event sequences. Furthermore, when data are available, it is normally rare-event data-one failure in many years of service on a specific pipeline, for instance. Extrapolating future failure probabilities from small amounts of information can lead to significant errors. However, historical data are very valuable when combined with all other information available to the evaluator.

Another possible problem with using historical data is the assumption that the conditions remain constant. This is rarely true, even for a particular pipeline. For example, when historical data show a high occurrence of corrosion-related leaks, the operator presumably takes appropriate action to reduce those leaks. His actions have changed the situation and previous experience is now weaker evidence. History will foretell the future only when no offsetting actions are taken. Although important pieces of evidence, historical data alone are rarely sufficient to properly estimate failure probabilities.

Failure rates

A failure rate is simply a count of failures over time. It is usually first a frequency observation of how often the pipeline has failed over some previous period of time. A failure rate can also be a prediction of the number of failures to be expected in a given future time period. The failure rate is normally divided into rates of failure for each failure mechanism.

The ways in which a pipeline can fail can be loosely categorized according to the behavior of the failure rate over time. When the failure rate tends to vary only with a changing environment, the underlying mechanism is usually random and should exhibit a constant failure rate as long as the environment stays constant. When the failure rate tends to increase with time and is logically linked with an aging effect, the underlying mechanism is time dependent. Some failure mechanisms and their respective categories are shown in Table 1.1. There is certainly an aspect of randomness in the mechanisms labeled time dependent and the possibility of time dependency for some of the mechanisms labeled random. The labels point to the probability estimation protocol that seems to be most appropriate for the mechanism.

The historical rate of failures on a particular pipeline system may tell an evaluator something about that system. Figure 1.1 is a graph that illustrates the well-known "bathtub" shape of failure rate changes over time. This general shape represents the failure rate for many manufactured components and systems over their lifetimes. Figure 1.2 is a theorized bathtub curve for pipelines.

Table 1.1 Failure rates vs. failure mechanisms

Failure mechanism        | Nature of mechanism                            | Failure rate tendency
Corrosion                | Time dependent                                 | Increase
Cracking                 | Time dependent                                 | Increase
Third-party damage       | Random                                         | Constant
Laminations/blistering   | Random                                         | Constant
Earth movements          | Random (except for slow-acting instabilities)  | Constant
Material degradation     | Time dependent                                 | Increase
Material defects         | Random                                         | Constant


Figure 1.1 Common failure rate curve (bathtub curve).

Some pieces of equipment or installations have a high initial rate of failure. This first portion of the curve is called the burn-in phase or infant mortality phase. Here, defects that developed during initial manufacture of a component cause failures. As these defects are eliminated, the curve levels off into the second zone. This is the so-called constant failure zone and reflects the phase where random accidents maintain a fairly constant failure rate. Components that survive the burn-in phase tend to fail at a constant rate. Failure mechanisms that are more random in nature-third-party damages or most land movements, for example-tend to drive the failure rate in this part of the curve. Far into the life of the component, the failure rate may begin to increase. This is the zone where things begin to wear out as they reach the end of their useful service life. Where a time-dependent failure mechanism (corrosion or fatigue) is involved, its effects will be observed in this wear-out phase of the curve. An examination of the failure data of a particular system may suggest such a curve and theoretically tell the evaluator what stage the system is in and what can be expected. Failure rates are further discussed in Chapter 14.
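The bathtub shape can be imitated numerically by superimposing a decaying burn-in rate, a constant random rate, and a growing wear-out rate, as in the schematic Python sketch below; the parameter values are invented and carry no engineering significance.

    import math

    def bathtub_rate(t_years: float) -> float:
        """Schematic failure-rate curve: burn-in + constant random + wear-out.
        All parameters are invented for illustration only."""
        burn_in = 0.05 * math.exp(-t_years / 2.0)        # early defects die out
        random = 0.01                                    # third-party damage, etc.
        wear_out = 0.002 * math.exp((t_years - 40) / 5)  # corrosion/fatigue late in life
        return burn_in + random + wear_out

    for t in (1, 10, 30, 50):
        print(t, round(bathtub_rate(t), 4))   # high early, flat mid-life, rising late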

Consequences

Inherent in any risk evaluation is a judgment of the potential consequences. This is the last of the three risk-defining questions: If something goes wrong, what are the consequences? Consequence implies a loss of some kind. Many of the aspects of potential losses are readily quantified. In the case of a major hydrocarbon pipeline accident (product escaping, perhaps causing an explosion and fire), we could quantify losses such as damaged buildings, vehicles, and other property; costs of service interruption; cost of the product lost; cost of the cleanup; and so on.

Consequences are sometimes grouped into direct and indirect categories, where direct costs include:

• Property damages
• Damages to human health
• Environmental damages
• Loss of product
• Repair costs
• Cleanup and remediation costs

Indirect costs can include litigation, contract violations, customer dissatisfaction, political reactions, loss of market share, and government fines and penalties.

Figure 2.1 Simple risk matrix.

analyses. Initiating events such as equipment failure and safety system malfunction are flowcharted forward to all possible concluding events, with probabilities being assigned to each branch along the way. Failures are backward flowcharted to all possible initiating events, again with probabilities assigned to all branches. All possible paths can then be quantified based on the branch probabilities along the way. Final accident probabilities are achieved by chaining the estimated probabilities of individual events. This technique is very data intensive. It yields absolute risk assessments of all possible failure events.

These more elaborate models are generally more costly than other risk assessments. They are technologically more demanding to develop, require trained operators, and need extensive data. A detailed PRA is usually the most expensive of the risk assessment techniques. The output of a PRA is usually in a form whereby its output can be directly compared to other risks such as motor vehicle fatalities or tornado damages. However, in rare-event occurrences, historical data present an arguably blurred view.

The PRA methodology was first popularized through opposition to various controversial facilities, such as large chemical plants and nuclear reactors [88]. In addressing the concerns, the intent was to obtain objective assessments of risk that were grounded in indisputable scientific facts and rigorous engineering analyses. The technique therefore makes extensive use of failure statistics of components as foundations for estimates of future failure probabilities. However, statistics paints an incomplete picture at best, and many probabilities must still be based on expert judgment. In attempts to minimize subjectivity, applications of this technique became increasingly comprehensive and complex, requiring thousands of probability estimates and like numbers of pages to document. Nevertheless, variation in probability estimates remains, and the complexity and cost of this method does not seem to yield commensurate increases in accuracy or applicability [88]. In addition to sometimes widely differing results from "duplicate" PRAs performed on the same system by different evaluators, another criticism


includes the perception that underlying assumptions and input data can easily be adjusted to achieve some predetermined result. Of course, this latter criticism can be applied to any process involving much uncertainty and the need for assumptions.

PRA-type techniques are required in order to obtain estimates of absolute risk values, expressed in fatalities, injuries, property damages, etc., per specific time period. This is the subject of Chapter 14. Some guidance on evaluating the quality of a PRA-type technique is also offered in Chapter 14.
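As a toy numeric illustration (in Python) of the probability "chaining" mentioned earlier for event-tree analysis, the event names and values below are invented and are not taken from any actual PRA.

    # Toy event-tree arithmetic: multiply probabilities along one branch,
    # then compare the branches that end in different outcomes.
    p_release = 1.0e-3          # initiating event: loss of containment (hypothetical)
    p_ignition = 0.05           # branch: immediate ignition given a release (hypothetical)
    p_no_ignition = 1 - p_ignition

    p_fire = p_release * p_ignition                  # 5.0e-5 per year
    p_unignited_release = p_release * p_no_ignition  # 9.5e-4 per year
    print(p_fire, p_unignited_release)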

Indexing models

Perhaps the most popular pipeline risk assessment technique in current use is the index model or some similar scoring technique. In this approach, numerical values (scores) are assigned to important conditions and activities on the pipeline system that contribute to the risk picture. This includes both risk-reducing and risk-increasing items, or variables. Weightings are assigned to each risk variable. The relative weight reflects the importance of the item in the risk assessment and is based on statistics where available and on engineering judgment where data are not available. Each pipeline section is scored based on all of its attributes. The various pipe segments may then be ranked according to their relative risk scores in order to prioritize repairs, inspections, and other risk-mitigating efforts.

Among pipeline operators today, this technique is widely used and ranges from a simple one- or two-factor model (where only factors such as leak history and population density are considered) to models with hundreds of factors considering virtually every item that impacts risk. Although each risk assessment method discussed has its own strengths and weaknesses, the indexing approach is especially appealing for several reasons:

• Provides immediate answers
• Is a low-cost analysis (an intuitive approach using available information)
• Is comprehensive (allows for incomplete knowledge and is easily modified as new information becomes available)
• Acts as a decision support tool for resource allocation modeling
• Identifies and places values on risk mitigation opportunities

An indexing-type model for pipeline risk assessment is a recommended feature of a pipeline risk management program and is fully described in this book. It is a hybrid of several of the methods listed previously. The great advantage of this technique is that a much broader spectrum of information can be included; for example, near misses as well as actual failures are considered. A drawback is the possible subjectivity of the scoring. Extra efforts must be employed to ensure consistency in the scoring and the use of weightings that fairly represent real-world risks.

It is reasonable to assume that not all variable weightings will prove to be correct in any risk model. Actual research and failure data will doubtlessly demonstrate that some were initially set too high and some too low. This is the result of modelers misjudging the relative importance of some of the variables. However, even if the quantification of the risk factors is imperfect, the results nonetheless will usually give a reliable picture

of places where risks are relatively lower (fewer “bad” factors present) and where they are relatively higher (more “bad” factors are present). An indexing approach to risk assessment is the emphasis of much ofthis book.
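To make the scoring idea concrete, the sketch below sums weighted variable scores into a single section score. The variable names, weights, and the 0-10 scoring scale are illustrative assumptions, not values prescribed by this text.

# Minimal sketch of an index-type relative risk score for one pipeline section.
# Variable names, weights, and the 0-10 scale are illustrative assumptions only.

WEIGHTS = {            # relative importance of each variable (sums to 1.0)
    "depth_of_cover": 0.20,
    "one_call_activity": 0.15,
    "patrol_frequency": 0.15,
    "coating_condition": 0.25,
    "cathodic_protection": 0.25,
}

def index_score(scores: dict) -> float:
    """Combine 0-10 variable scores (10 = safest) into a 0-10 section score."""
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS)

section = {"depth_of_cover": 7, "one_call_activity": 4, "patrol_frequency": 8,
           "coating_condition": 5, "cathodic_protection": 6}
print(round(index_score(section), 2))   # higher score = more safety, less relative risk

Missing variables default to a score of 0 here, consistent with the conservative treatment of uncertainty discussed later in this chapter.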

Further discussion on scoring-type risk assessments

Scoring-type techniques are in common use in many applications. They range from judging sports and beauty contests to medical diagnosis and credit card fraud detection, as discussed later. Any time we need to consider many factors simultaneously and our knowledge is incomplete, a scoring system becomes practical. Done properly, it combines the best of all other approaches because critical variables are identified from scenario-based approaches and weightings are established from probabilistic concepts when possible. The genesis of scoring-type approaches is readily illustrated by the following example. As operators of motor vehicles, we generally know the hazards associated with driving as well as the consequences of vehicle accidents. At one time or another, most drivers have been exposed to driving accident statistics as well as pictures or graphic commentary of the consequences of accidents. Were we to perform a scientific quantitative risk analysis, we might begin by investigating the accident statistics of the particular make and model of the vehicle we operate. We would also want to know something about the crash survivability of the vehicle. Vehicle condition would also have to be included in our analysis. We might then analyze various roadways for accident history, including accident severity. We would naturally have to compensate for newer roads that have had less opportunity to accumulate an accident frequency base. To be complete, we would have to analyze driver condition as it contributes to accident frequency or severity, as well as weather and road conditions. Some of these variables would be quite difficult to quantify scientifically. After a great deal of research and using a number of critical assumptions, we may be able to build a system model to give us an accident probability number for each combination of variables. For instance, we may conclude that, for vehicle type A, driven by driver B, in condition C, on roadway D, during weather and road conditions E, the accident frequency for an accident of severity F is once for every 200,000 miles driven. This system could take the form of a scenario approach or a scoring system. Does this now mean that until 200,000 miles are driven, no accidents should be expected? Does 600,000 miles driven guarantee three accidents? Of course not. What we do believe from our study of statistics is that, given a large enough data set, the accident frequency for this set of variables should tend to move toward once every 200,000 miles on average, if our underlying frequencies are representative of future frequencies. This may mean an accident every 10,000 miles for the first 100,000 miles followed by no accidents for the next 1,900,000 miles; the average is still once every 200,000 miles. What we are perhaps most interested in, however, is the relative amount of risk to which we are exposing ourselves during a single drive. Our study has told us little about the risk of this drive until we compare this drive with other drives. Suppose we change weather and road conditions to state G from state F and find that the accident frequency is now once every 190,000


miles. This finding now tells us that condition G has increased the risk by a small amount. Suppose we change roadway D to roadway H and find that our accident frequency is now once every 300,000 miles driven. This tells us that by using road H we have reduced the risk quite substantially compared with using road D. Chances are, however, we could have made these general statements without the complicated exercise of calculating statistics for each variable and combining them for an overall accident frequency. So why use numbers at all? Suppose we now make both variable changes simultaneously. The risk reduction obtained by road H is somewhat offset by the increased risk associated with road and weather condition G, but what is the result when we combine a small risk increase with a substantial risk reduction? Because all of the variables are subject to change, we need some method to see the overall picture. This requires numbers, but the numbers can be relative, showing only that variable H has a greater effect on the risk picture than does variable G. Absolute numbers, such as the accident frequency numbers used earlier, are not only difficult to obtain, they also give a false sense of precision to the analysis. If we can only be sure of the fact that change X reduces the risk and it reduces it more than change Y does, it may be of little further value to say that a once in 200,000 frequency has been reduced to a once in 210,000 frequency by change X and only a once in 205,000 frequency by change Y. We are ultimately most interested in the relative risk picture of change X versus change Y. This reasoning forms the basis of the scoring risk assessment. The experts come to a consensus as to how a change in a variable impacts the risk picture, relative to other variables in the risk picture. If frequency data are available, they are certainly used, but they are used outside the risk analysis system. The data are used to help the experts reach a consensus on the importance of the variable and its effects (or weighting) on the risk picture. The consensus is then used in the risk analysis. As previously noted, scoring systems are common in many applications. In fact, whenever information is incomplete and many aspects or variables must be simultaneously considered, a scoring system tends to emerge. Examples include sporting events that have some difficult-to-measure aspects such as artistic expression, complexity, form, or aggressiveness. These include gymnastics, figure skating, boxing, and karate and other martial arts. Beauty contests are another application. More examples are found in the financial world. Many economic models use scoring systems to assess current conditions and forecast future conditions and market movements. Credit card fraud assessment is another example, where some purchases trigger a model that combines variables such as purchase location, the card owner's purchase history, items

purchased, time of day, and other factors to rate the probability of a fraudulent card use. Scoring systems are also used for psychological profiles, job applicant screening, career counseling, medical diagnostics, and a host of other applications.

Choosing a risk assessment approach

Any or all of the above-described techniques might have a place in risk assessment/management. Understanding the strengths and weaknesses of the different risk assessment methodologies gives the decision-maker the basis for choosing one. A case can be made for using each in certain situations. For example, a simple matrix approach helps to organize thinking and is a first step toward formal risk assessment. If the need is to evaluate specific events at any point in time, a narrowly focused probabilistic risk analysis might be the tool of choice. If the need is to weigh immediate risk trade-offs or perform inexpensive overall assessments, indexing models might be the best choice. These options are summarized in Table 2.1.

Uncertainty

It is important that a risk assessment identify the role of uncertainty in its use of assumptions and also identify how the state of "no information" is assessed. The philosophy behind uncertainty and risk is discussed in Chapter 1. The recommendation from Chapter 1 is that a risk model generally assumes that things are "bad" until data show otherwise. So, an underlying theme in the assessment is that "uncertainty increases risk." This is a conservative approach requiring that, in the absence of meaningful data or the opportunity to assimilate all available data, risk should be overestimated rather than underestimated. So, lower ratings are assigned, reflecting the assumption of reasonably poor conditions, in order to accommodate the uncertainty. This results in a more conservative overall risk assessment. As a general philosophy, this approach to uncertainty has the added long-term benefit of encouraging data collection via inspections and testing. Uncertainty also plays a role in scoring aspects of operations and maintenance. Information should be considered to have a life span because users must realize that conditions are always changing and recent information is more useful than older information. Eventually, certain information has little value at all in the risk analysis. This applies to inspections, surveys, and so on. The scenarios shown in Table 2.2 illustrate the relative value of several knowledge states for purposes of evaluating risk where uncertainty is involved.

Table 2.1 Choosing a risk assessment technique

When the need is to... / A technique to use might be:
- Study specific events, perform post-incident investigations, compare risks of specific failures, calculate specific event probabilities / Event trees, fault trees, FMEA, PRA, HAZOP
- Obtain an inexpensive overall risk model, create a resource allocation model, model the interaction of many potential failure mechanisms, study or create an operating discipline / Indexing model
- Better quantify a belief, create a simple decision support tool, combine several beliefs into a single solution, document choices in resource allocation / Matrix

Table 2.2 Uncertainty and risk assessment

Action / Inspection results / Risk relevance:
- Timely and comprehensive inspection performed / No risk issues identified / Least risk
- Timely and comprehensive inspection performed / Some risk issues or indications of flaw potential identified; root cause analysis and proper follow-up to mitigate risk / More risk
- No timely and comprehensive inspection performed / High uncertainty regarding risk issues / More risk
- Timely and comprehensive inspection performed / Some risk issues or indications of flaw potential identified; uncertain reactions, uncertain mitigation of risk / Most risk

Some assumptions and "reasonableness" are employed in setting risk scores in the absence of data; in general, however, worst-case conditions are conservatively used for default values. Uncertainty also arises in using the risk assessment model itself, since there are inaccuracies inherent in any measuring tool. A signal-to-noise ratio analogy is a useful way to look at the tool and highlights precautions in its use. This is discussed in Chapter 1.

Sectioning or segmenting the pipeline

It is generally recognized that, unlike most other facilities that undergo a risk assessment, a pipeline usually does not have a constant hazard potential over its entire length. As conditions along the line's route change, so too does the risk picture. Because the risk picture is not constant, it is efficient to examine a long pipeline in shorter sections. The risk evaluator must decide on a strategy for creating these sections in order to obtain an accurate risk picture. Each section will have its own risk assessment results. Breaking the line into many short sections increases the accuracy of the assessment for each section, but may result in higher costs of data collection, handling, and maintenance (although higher costs are rarely an issue with modern computing capabilities). Longer sections (fewer in number), on the other hand, may reduce data costs but also reduce accuracy, because average or worst-case characteristics must govern if conditions change within the section.

Fixed-length approach

A fixed-length method of sectioning, based on rules such as "every mile" or "between pump stations" or "between block valves," is often proposed. While such an approach may be initially appealing (perhaps for reasons of consistency with existing accounting or personnel systems), it will usually reduce accuracy and increase costs. Inappropriate and unnecessary break points limit the model's usefulness: risk hot spots are hidden if conditions are averaged within the section, or risks are exaggerated if worst-case conditions are applied to the entire length. A fixed-length scheme will also interfere with an otherwise efficient ability of the risk model to identify risk mitigation projects. Many pipeline projects are done in very specific locations, as is appropriate. The risk of such specific locations is often lost under a fixed-length sectioning scheme.

Dynamic segmentation approach

The most appropriate method for sectioning the pipeline is to insert a break point wherever significant risk changes occur. A significant condition change must be determined by the evaluator with consideration given to data costs and desired accuracy. The idea is for each pipeline section to be unique, from a risk perspective, from its neighbors. So, within a pipeline section, we recognize no differences in risk, from beginning to end. Each foot of pipe is the same as any other foot, as far as we know from our data. But we know that the neighboring sections do differ in at least one risk variable. It might be a change in pipe specification (wall thickness, diameter, etc.), soil conditions (pH, moisture, etc.), population, or any of dozens of other risk variables, but at least one aspect is different from section to section. Section length is not important as long as characteristics remain constant. There is no reason to subdivide a 10-mile section of pipe if no real risk changes occur within those 10 miles. This type of sectioning is sometimes called dynamic segmentation. It can be done very efficiently using modern computers. It can also be done manually, of course, and the manual process might be suitable for setting up a high-level screening assessment.
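A minimal sketch of this idea, under the assumption that each attribute is supplied as a list of (station, value) change points, is shown below; the attribute names and stations are hypothetical.

# Dynamic segmentation sketch: break wherever any attribute changes value.
# Input: per-attribute lists of (start_station, value) change points, stations in feet.

def dynamic_segments(line_length, attribute_changes):
    """Return (start, end) sections: one break at every point where any attribute changes."""
    breaks = {0, line_length}
    for changes in attribute_changes.values():
        for station, _value in changes:
            if 0 < station < line_length:
                breaks.add(station)
    stations = sorted(breaks)
    return list(zip(stations[:-1], stations[1:]))

attribute_changes = {
    "wall_thickness": [(0, 0.312), (18000, 0.250)],
    "soil_type":      [(0, "clay"), (9500, "sand"), (26000, "clay")],
    "population":     [(0, "rural"), (15000, "suburban")],
}
print(dynamic_segments(52800, attribute_changes))
# -> [(0, 9500), (9500, 15000), (15000, 18000), (18000, 26000), (26000, 52800)]

Every attribute is constant within each resulting section, which is the defining property of a dynamically segmented model.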

Manually establishing sections

With today's common computing environments, there is really no reason to follow the relatively inefficient option of manually establishing pipeline sections. However, envisioning the manual process of segmentation might be helpful for obtaining a better understanding of the concept. The evaluator should first scan Chapters 3 through 7 of this text to get a feel for the types of conditions that make up the risk picture. He should note those conditions that are most variable in the pipeline system being studied and rank those items with regard to magnitude of change and frequency of change. This ranking will be rather subjective and perhaps incomplete, but it will serve as a good starting point for sectioning the line(s). An example of a short list of prioritized conditions is as follows:

1. Population density
2. Soil conditions
3. Coating condition
4. Age of pipeline

In this example, the evaluator(s) foresees the most significant changes along the pipeline route to be population density, followed by varying soil conditions, then coating condition, and pipeline age. This list was designed for an aging 60-mile pipeline in Louisiana that passes close to several rural communities and alternates between marshland (clay) and sandy soil conditions. Furthermore, the coating is in various states of deterioration (maybe roughly corresponding to the changing soil


conditions) and the line has had sections replaced with new pipe during the last few years. Next, the evaluator should insert break points for the sections based on the top items on the prioritized list of condition changes. This produces a trial sectioning of the pipeline. If the number of sections resulting from this process is deemed to be too large, the evaluator needs merely to reduce the list (eliminating conditions from the bottom of the prioritized list) until an appropriate number of sections is obtained. This trial-and-error process is repeated until a cost-effective sectioning has been completed.

Example 2.1: Sectioning the Pipeline

Following this philosophy, suppose that the evaluator of this hypothetical Louisiana pipeline decides to section the line according to the following rules he has developed:

- Insert a section break each time the population density along a 1-mile section changes by more than 10%. These population section breaks will not occur more often than each mile, and as long as the population density remains constant, a section break is unwarranted.
- Insert a section break each time the soil corrosivity changes by 30%. In this example, data are available showing the average soil corrosivity for each 500-ft section of line. Therefore, section breaks may occur a maximum of 10 times (5280 ft per mile divided by 500-ft sections) for each mile of pipeline.
- Insert a section break each time the coating condition changes significantly. This will be measured by the corrosion engineer's assessment. Because this assessment is subjective and based on sketchy data, such section breaks may occur as often as every mile.
- Insert a section break each time a difference in age of the pipeline is seen. This is measured by comparing the installation dates. Over the total length of the line, six new

sections have been installed to replace unacceptable older sections.

Following these rules, the evaluator finds that his top listed condition causes 15 sections to be created. By applying the second condition rule, he has created an additional 8 sections, bringing the total to 23 sections. The third rule yields an additional 14 sections, and the fourth causes an additional 6 sections. This brings the total to 43 sections in the 60-mile pipeline. The evaluator can now decide if this is an appropriate number of sections. As previously noted, factors such as the desired accuracy of the evaluation and the cost of data gathering and analysis should be considered. If he decides that 43 sections is too many for the company's needs, he can reduce the number of sections by first eliminating the additional sectioning caused by application of his fourth rule. Elimination of these 6 sections caused by age differences in the pipe is appropriate because it had already been established that this was a lower-priority item. That is, it is thought that the age differences in the pipe are not as significant a factor as the other conditions on the list. If the section count (now down to 37) is still too high, the evaluator can eliminate or reduce sectioning caused by his third rule. Perhaps combining the corrosion engineer's "good" and "fair" coating ratings would reduce the number of sections from 14 to 8. In the preceding example, the evaluator has roughed out a plan to break down the pipeline into an appropriate number of sections. Again, this is an inefficient way to section a pipeline and leads to further inefficiencies in risk assessment. This example is provided only for illustration purposes. Figure 2.2 illustrates a piece of pipeline sectioned based on population density and soil conditions.

Figure 2.2 Sectioning of the pipeline (sections 4, 5, and 6 shown along a pipeline passing near a town).

For many items in this evaluation (especially in the incorrect operations index), new section lines will not be created. Items such as training or procedures are generally applied uniformly across the entire pipeline system or at least within a single operations area. This should not be universally assumed, however, during the data-gathering step.
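The trial-and-error pruning in Example 2.1 lends itself to a simple routine like the sketch below, which drops the lowest-priority sectioning rules until the section count is acceptable; the rule names and break stations are hypothetical stand-ins for the example's data.

# Sketch of the Example 2.1 trial-and-error process: each rule contributes break points;
# low-priority rules are dropped until the section count is acceptable.

def prune_rules(rule_breaks, max_sections):
    """rule_breaks: list of (rule_name, set_of_interior_break_stations), highest priority first."""
    active = list(rule_breaks)
    while active:
        breaks = set().union(*(b for _, b in active))
        n_sections = len(breaks) + 1          # n interior breaks -> n + 1 sections
        if n_sections <= max_sections:
            return [name for name, _ in active], n_sections
        active.pop()                          # drop the lowest-priority rule and retry
    return [], 1

rules = [
    ("population density", {5, 12, 20, 31, 44, 51}),
    ("soil corrosivity",   {7, 18, 26, 39}),
    ("coating condition",  {3, 15, 23, 35, 48}),
    ("pipeline age",       {10, 29, 41}),
]
print(prune_rules(rules, max_sections=12))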


Persistence of segments Another decision to make is how often segment boundaries will be changed. Under a dynamic segmentation strategy, segments are subject to change with each change of data. This results in the best risk assessments, but may create problems when tracking changes in risk over time. Difficulties can be readily overcome by calculating cumulative risks (see Chapter 15) or tracking specific points rather than tracking segments.

Results roll-ups

The pipeline risk scores represent the relative level of risk that each point along the pipeline presents to its surroundings. The score is insensitive to length. If two pipeline segments, say, 100 and 2600 ft, respectively, have the same risk score, then each point along the 100-ft segment presents the same risk as does each point along the 2600-ft length. Of course, the 2600-ft length presents more overall risk than does the 100-ft length because it has many more risk-producing points. A cumulative risk calculation adds the length aspect so that a 100-ft length of pipeline with one risk score can be compared against a 2600-ft length with a different risk score. As noted earlier, dividing the pipeline into segments based on any criteria other than all risk variables will lead to inefficiencies in risk assessment. However, it is common practice to report risk results in terms of fixed lengths such as "per mile" or "between valve stations," even if a dynamic segmentation protocol has been applied. This "rolling up" of risk assessment results is often thought to be necessary for summarization and perhaps for linking to other administrative systems such as accounting. To minimize the masking effect that such roll-ups might create, it is recommended that several measures be simultaneously examined to ensure a more complete use of information. For instance, when an average risk value is reported, a worst-case risk value, reflecting the worst length of pipe in the section, can be simultaneously reported. Length-weighted averages can also be used to better capture information, but those too must be used with caution. A very short but very risky stretch of pipe is still of concern, even if the rest of the pipeline shows low risks. In Chapter 15, a system of calculating cumulative risk is offered. This system takes into account the varying section lengths and offers a way to examine and compare the effects of various risk mitigation efforts. Other aspects of data roll-ups are discussed in Chapters 8 and 15.
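A sketch of such a roll-up is shown below. For a group of dynamically segmented sections reported as one fixed length, it returns a length-weighted average, the worst-case section score, and a crude cumulative measure (score times length); the segment data are hypothetical, the cumulative figure is only a placeholder for the Chapter 15 treatment, and higher scores are assumed here to mean higher risk.

# Roll-up sketch: summarize dynamically segmented results over a fixed reporting length.
# Each segment: (length_ft, relative_risk_score), higher score assumed to mean more risk.

def roll_up(segments):
    total_len = sum(length for length, _ in segments)
    weighted_avg = sum(length * score for length, score in segments) / total_len
    worst = max(score for _, score in segments)
    cumulative = sum(length * score for length, score in segments)  # crude length-weighted total
    return {"length_weighted_avg": weighted_avg, "worst_case": worst, "cumulative": cumulative}

mile_1 = [(100, 82), (2600, 35), (2580, 41)]   # a short, high-risk stretch plus two longer ones
print(roll_up(mile_1))

Reporting the worst-case value alongside the average keeps the short, high-risk stretch from being masked by the roll-up.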

IV. Designing a risk assessment model

A good risk model will be firmly rooted in engineering concepts and be consistent with experience and intuition. This leads to the many similarities in the efforts of many different modelers examining many different systems at many different times. Beyond compatibility with engineering and experience, a model can take many forms, especially in differing levels of detail and complexity. Appendix E shows some samples of risk algorithms. Readers will find a review of some database design concepts to be useful (see Chapter 8).

Data first or framework first?

There are two possible scenarios for beginning a relative risk assessment. In one, a risk model (or at least a framework for a model) has already been developed, and the evaluator takes this model and begins collecting data to populate her model's variables. In the second possibility, the modeler compiles a list of all available information and then puts this information into a framework from which risk patterns emerge and risk-based decisions can be made. The difference between these two approaches can be summarized in a question: Does the model drive data collection or does data availability drive model development? Ideally, each will be the driver at various stages of the process. One of the primary intents of risk assessment is to capture and use all available information and identify information gaps. Having data drive the process ensures complete usage of all data, while having a predetermined model allows data gaps to be easily identified. A blend of both is therefore recommended, especially considering possible pitfalls of taking either exclusively. Although a predefined set of risk algorithms defining how every piece of data is to be used is attractive, it has the potential to cause problems, such as:

- Rigidity of approach. Difficulty is experienced in accepting new data, data in an unexpected format, or information that is loosely structured.
- Relative scoring. Weightings are set in relation to the types of information to be used. Weightings would need to be adjusted if unexpected data become available.

On the other hand, a pure custom development approach (building a model exclusively from available data) suffers from lack of consistency and inefficiency. An experienced evaluator or a checklist is required to ensure that significant aspects of the evaluation are not omitted as a result of lack of information. Therefore, the recommendation is to begin with lists of standard higher level variables that comprise all of the critical aspects of risk. Chapters 3 through 7 provide such lists for common pipeline components, and Chapters 9 through 13 list additional variables that might be appropriate for special situations. Then, use all available information to evaluate each variable. For example, the higher level variable of activity (as one measure of third-party damage potential) might be created from data such as number of one-call reports, population density, previous third-party damages, and so on. So, higher level variable selection is standardized and consistent, yet the model is flexible enough to incorporate any and all information that is available or becomes available in the future. The experienced evaluator, or any evaluator armed with a comprehensive list of higher level variables, will quickly find many useful pieces of information that provide evidence on many variables. She may also see risk variables for which no information is available. Similar to piecing together a puzzle, a picture will emerge that readily displays all knowledge and knowledge gaps.
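One way such a higher level variable might be assembled is sketched below: whatever evidence is available is scaled and weighted, and a missing data source conservatively defaults to the worst case. The thresholds and weights are hypothetical.

# Sketch: build a higher-level "activity" variable (third-party damage exposure)
# from whatever evidence is available; missing evidence defaults to the worst case.

def activity_level(one_call_per_mi_yr=None, pop_per_sq_mi=None, prior_hits_per_10yr=None):
    """Return 0-10 where 10 = highest third-party activity. Thresholds are illustrative."""
    def scale(value, worst, best):
        if value is None:
            return 10.0                      # no information -> assume worst case
        frac = (value - best) / (worst - best)
        return 10.0 * min(max(frac, 0.0), 1.0)

    evidence = [
        (0.5, scale(one_call_per_mi_yr, worst=50, best=0)),
        (0.3, scale(pop_per_sq_mi, worst=2000, best=0)),
        (0.2, scale(prior_hits_per_10yr, worst=3, best=0)),
    ]
    return sum(w * s for w, s in evidence)

print(round(activity_level(one_call_per_mi_yr=12, pop_per_sq_mi=400), 1))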


Risk factors

Types of information

Central to the design of a risk model are the risk factors or variables (these terms are used interchangeably in this text) that will be included in the assessment. A complete list of risk factors, those items that add to or subtract from the amount of risk, can be readily identified for any pipeline system. There is widespread agreement on failure mechanisms and underlying factors influencing those mechanisms. Setting up a risk assessment model involves trade-offs between the number of factors to be considered and the ease of use of the model. Including all possible factors in a decision support system, however, can create a somewhat unwieldy system. So, the important variables are widely recognized, but the number to be considered in the model (and the depth of that consideration) is a matter of choice for the model developers. In this book, lists of possible risk indicators are offered based on their ability to provide useful risk signals. Each item's specific ability to contribute without adding unnecessary complexities will be a function of a user's specific system, needs, and ability to obtain the required data. The variables and the rationale for their possible inclusion are described in the following chapters. It is usually the case that some data impact several different aspects of risk. For example, pipe wall thickness is a factor in almost all potential failure modes: It determines time to failure for a given corrosion rate, partly determines ability to survive external forces, and so on. Population density is a consequence variable as well as a third-party damage indicator (as a possible measure of potential activity). Inspection results yield evidence regarding current pipe integrity as well as possibly active failure mechanisms. A single detected defect can yield much information. It could change our beliefs about coating condition, CP effectiveness, pipe strength, and overall operating safety margin, and maybe even provide new information about soil corrosivity, interference currents, third-party activity, and so on. All of this arises from a single piece of data (evidence). Many companies now avoid the use of casings. But casings were put in place for a reason. The presence of a casing is a mitigation measure for external force damage potential, but is often seen to increase corrosion potential. The risk model should capture both of the risk implications from the presence of a casing. Numerous other examples can be shown. A great deal of information is usually available in a pipeline operation. Information that can routinely be used to update the risk assessment includes

- All survey results, such as pipe-to-soil voltage readings, leak surveys, patrols, depth of cover, population density, etc.
- Documentation of all repairs
- Documentation of all excavations
- Operational data, including pressures and flow rates
- Results of integrity assessments
- Maintenance reports
- Updated consequence information
- Updated receptor information (new housing, high-occupancy buildings, changes in population density or environmental sensitivities, etc.)
- Results of root cause analyses and incident investigations
- Availability and capabilities of new technologies

Attributes and preventions

Because the ultimate goal of the risk assessment is to provide a means of risk management, it is sometimes useful to make a distinction between two types of risk variables. As noted earlier, there is a difference between a hazard and a risk. We can usually do little to change the hazard, but we can take actions to affect the risk. Following this reasoning, the evaluator can categorize each index risk variable as either an attribute or a prevention. The attributes correspond loosely to the characteristics of the hazard, while the preventions reflect the risk mitigation measures. Attributes reflect the pipeline's environment: characteristics that are difficult or impossible to change. They are characteristics over which the operator usually has little or no control. Preventions are actions taken in response to that environment. Both impact the risk, but a distinction may be useful, especially in risk management analyses. Examples of aspects that are not routinely changed, and are therefore considered attributes, include

- Soil characteristics
- Type of atmosphere
- Product characteristics
- The presence and nature of nearby buried utilities

The other category, preventions, includes actions that the pipeline designer or operator can reasonably take to offset risks. Examples of preventions include

- Pipeline patrol frequency
- Operator training programs
- Right-of-way (ROW) maintenance programs

The above examples of each category are pretty clear-cut. The evaluator should expect to encounter some gray areas of distinction between an attribute and a prevention. For instance, consider the proximity of population centers to the pipeline. In many risk assessments, this impacts the potential for third-party damage to the pipeline. This is obviously not an unchangeable characteristic because rerouting of the line is usually an option. But in an economic sense, this characteristic may be unchangeable due to unrecoverable expenses that may be incurred to change the pipeline's location. Another example would be the pipeline depth of cover. To change this characteristic would mean a reburial or the addition of more cover. Neither of these is an uncommon action, but the practicality of such options must be weighed by the evaluator as he classifies a risk component as an attribute or a prevention. Figure 2.3 illustrates how some of the risk assessment variables are thought to appear on a scale with preventions at one extreme and attributes at the other. The distinction between attributes and preventions is especially useful in risk management policy making. Company standards can be developed to require certain risk-reducing actions to be taken in response to certain harsh environments. For example, more patrols might be required in highly populated areas, or more corrosion-prevention verifications might be required under certain soil conditions. Such a procedure would provide for assigning a level of preventions based on the level of attributes. The standards can be predefined and programmed into a database program to automatically adjust the standards to


the environment of the section: harsh conditions require more preventions to meet the standard.

Figure 2.3 Example items on attributes-preventions scale (items such as depth of cover arrayed along an axis from conditions to actions).
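Encoded as rules, such a standard might look like the sketch below, where a required patrol frequency is derived from the section's attribute levels; the thresholds and frequencies are illustrative assumptions, not recommendations.

# Sketch: derive required preventions from attribute levels (a simple policy table).
# Thresholds and required actions are illustrative assumptions.

def required_patrols_per_month(population_density, soil_corrosivity):
    """Harsher attribute levels trigger more frequent patrols."""
    patrols = 1                                  # baseline for rural, benign conditions
    if population_density > 50:                  # people per square mile
        patrols = 2
    if population_density > 500:
        patrols = 4
    if soil_corrosivity == "high":
        patrols += 1                             # extra pass where corrosion evidence is more likely
    return patrols

for section in [{"population_density": 20, "soil_corrosivity": "low"},
                {"population_density": 800, "soil_corrosivity": "high"}]:
    print(section, "->", required_patrols_per_month(**section), "patrols/month")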

Model scope and resolution

Assessment scope and resolution issues further complicate model design. Both involve choices of the ranges of certain risk variables. The assessment of relative risk characteristics is especially sensitive to the range of possible characteristics in the pipeline systems to be assessed. If only natural gas transmission pipelines are to be assessed, then the model does not necessarily have to capture liquid pipeline variables such as surge potential. The model designer can either keep this variable and score it as "no threat" or she can redistribute the weighting points to other variables that do impact the risk. As another example, earth movements often pose a very localized threat on relatively few stretches of pipeline. When the vast majority of a pipeline system to be evaluated is not exposed to any land movement threats, risk points assigned to earth movements will not help to make risk distinctions among most pipeline segments. It may seem beneficial to reassign them to other variables, such as those that warrant full consideration. However, without the direct consideration of this variable, comparisons with the small portions of the system that are exposed, or with future acquisitions of systems that have the threat, will be difficult. Model resolution (the signal-to-noise ratio discussed in Chapter 1) is also sensitive to the characteristics of the systems to be assessed. A model that is built for parameters ranging from, say, a 40-inch, 2000-psig propane pipeline to a 1-inch, 20-psig fuel oil pipeline will not be able to make many risk distinctions between a 6-inch natural gas pipeline and an 8-inch natural gas pipeline. Similarly, a model that is sensitive to differences between a pipeline at 1100 psig and one at 1200 psig might have to treat all lines above a certain pressure/diameter threshold as the same. This is an issue of modeling resolution. Common risk variables that should have a range established as part of the model design include

- Diameter range
- Pressure range
- Products to be included

The range should include the smallest to largest values in systems to be studied as well as future systems to be acquired or other systems that might be used as comparisons. Given the difficulties in predicting future uses of the model, a more generic model-widely applicable to many different pipeline systems-might be appropriate.

Special Risk Factors

Two possible risk factors deserve special consideration since they have a general impact on many other risk considerations.

Age as a risk variable

Some risk models use age as a risk variable. It is a tempting choice since many man-made systems experience deterioration that is proportional to their years in service. However, age itself is not a failure mechanism; at most it is a contributing factor. Using it as a stand-alone risk variable can detract from the actual failure mechanisms and can also unfairly penalize portions of the system being evaluated. Recall the discussion on time-dependent failure rates in Chapter 1, including the concept of the bathtub failure rate curve. Penalizing a pipeline for its age presupposes knowledge of that pipeline's failure rate curve. Age alone is not a reliable indicator of pipeline risk, as is evidenced by some pipelines found in excellent operating condition even after many decades of service. A perception that age always causes an inevitable, irreversible process of decay is not an appropriate characterization of pipeline failure mechanisms. Mechanisms that can threaten pipe integrity exist but may or may not be active at any point on the line. Integrity threats are well understood and can normally be counteracted with a degree of confidence. Possible threats to pipe integrity are not necessarily strongly correlated with the passage of time, although the "area of opportunity" for something to go wrong obviously does increase with more time. The ways in which the age of a pipeline can influence the potential for failures are through specific failure mechanisms such as corrosion and fatigue, or in consideration of changes in manufacturing and construction methods since the pipeline was built. These age effects are well understood and can normally be countered by appropriate mitigation measures.

Experts believe that there is no effect of age on the microcrystalline structure of steel such that the strength and ductility properties of steel pipe are degraded over time. The primary metal-related phenomena are the potential for corrosion and the development of cracks from fatigue stresses. In the case of certain other materials, mechanisms of strength degradation might be present and should be included in the assessment. Examples include creep and UV degradation possibilities in certain plastics and concrete deterioration when exposed to certain chemical environments. In some situations, a slow-acting earth movement could also be modeled with an age component. Such special situations are discussed in Chapters 4 and 5. Manufacturing and construction methods have changed over time, presumably improving and reflecting learning experiences from past failures. Hence, more recently manufactured and constructed systems may be less susceptible to failure mechanisms of the past. This can be included in the risk model and is discussed in Chapter 5. The recommendation here is that age not be used as an independent risk variable, unless the risk model is only a very high-level screening application. Preferably, the underlying mechanisms and mitigations should be evaluated to determine if there are any age-related effects.

Inspection age

Inspection age should play a role in assessments that use the results of inspections or surveys. Since conditions should not be assumed to be static, inspection data becomes increasingly less valuable as it ages. One way to account for inspection age is to make a graduated scale indicating the decreasing usefulness of inspection data over time. This measure of information degradation can be applied to the scores as a percentage. After a predetermined time period, scores based on previous inspections degrade to some predetermined value. An example is shown in Table 2.3. In this example, the evaluator has determined that a previous inspection yields no useful information after 5 years and that the usefulness degrades 20% per year. By this scale, point values based on inspection results will therefore change by 20% per year. A more scientific way to gauge the time degradation of integrity inspection data is shown in Chapter 5.

Interview data

Collecting information via an interview will often require the use of qualitative descriptive terms. Such verbal labeling has some advantages, including ease of explanation and familiarity. (In fact, most people prefer verbal responses when replying to rating tasks.) It is therefore useful for capturing expert judgments. However, these advantages are at least partially offset by inferior measurement quality, especially regarding consistency. Some emerging techniques for artificial intelligence systems seek to make better use of human reasoning to solve problems involving incomplete knowledge and the use of descriptive terms. In mirroring human decision making, fuzzy logic interprets and makes use of natural language in ways similar to our risk models. Much research can be found regarding transforming verbal expressions into quantitative or numerical probability values. Most studies conclude that there is relatively consistent usage of terms. This is useful when polling experts, weighing evidence, and devising quantitative measures from subjective judgments. For example, Table 2.4 shows the results of a study in which certain expressions, obtained from interviews of individuals, were correlated against numerical values. Using relationships like those shown in Table 2.4 can help bridge the gap between interview or survey results and numerical quantification of beliefs.
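In a model, a mapping of this kind can be applied as a simple lookup, as sketched below; the few entries shown are illustrative values drawn loosely from Table 2.4 rather than a complete or authoritative list.

# Sketch: translate an expert's verbal likelihood qualifier into a numeric probability
# using a Table 2.4-style lookup (a few illustrative entries only).

VERBAL_TO_PROBABILITY = {       # median probability equivalents, percent (illustrative)
    "almost certain": 90,
    "likely": 70,
    "even chance": 50,
    "unlikely": 15,
    "very unlikely": 5,
}

def probability_from_phrase(phrase, default=50):
    """Return a percent probability for a verbal qualifier; unknown phrases get the default."""
    return VERBAL_TO_PROBABILITY.get(phrase.strip().lower(), default)

print(probability_from_phrase("Likely"))          # 70
print(probability_from_phrase("no real chance"))  # 50 (unmapped phrase falls back to default)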

Table 2.4 Assigning numbers to qualitative assessments

Expression / Median probability equivalent (%) / Range:
- Almost certain / 90 / 90-99.5
- Very high chance / 90 / 85-99
- Very likely / 85 / 75-90
- High chance / 80 /
- Very probable / 80 / 75-92
- Very possible / 80 / 70-87.5
- Likely / 70 / 65-85
- Probable / 70 / 60-75
- Even chance / 50 / 45-55
- Medium chance / 50 / 40-60
- Possible / 40 / 40-70
- Low chance / 15 /
- Unlikely / 15 /
- Improbable / 10 / 5-20
- Very low chance / 10 / 5-15
- Very unlikely / 5 / 1-15
- Very improbable / 2 /
- Almost impossible / 1 / 0-5

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.

Table 2.3 Example of inspection degradations

Inspection age (years) / Adjustment (degradation) factor (%) / Notes:
- 0 / 100 / Fresh data; no degradation
- 1 / 80 / Inspection data is 1 year old and less representative of actual conditions
- 2 / 60 /
- 3 / 40 / Inspection data is now 3 years old and current conditions might now be significantly different
- 4 / 20 /
- 5 / 0 / Inspection results assumed to no longer yield useful information
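Applied in a model, the Table 2.3 scale reduces inspection-based points by 20% per year of inspection age and removes them entirely after five years, as in the short sketch below.

# Sketch: degrade inspection-based points with inspection age per the Table 2.3 scale
# (20% lost per year, no credit after 5 years).

def degraded_points(inspection_points, age_years):
    """Scale the points earned from an inspection by its remaining information value."""
    factor = max(0.0, 1.0 - 0.20 * age_years)    # 1.0 fresh, 0.0 at 5 years or older
    return round(inspection_points * factor, 2)

for age in range(0, 7):
    print(age, "years ->", degraded_points(10, age), "of 10 points retained")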


Additional studies have yielded similar correlations with terms relating to quality and frequency. In Tables 2.5 and 2.6, some test results are summarized using the median numerical value for all qualitative interpretations along with the standard deviation. The former shows the midpoint of responses (equal number of answers above and below this value) and the latter indicates how much variability there is in the answers. Terms that have more variability suggest wider interpretations of their meanings. The terms in the tables relate quality to a 1- to 10-point numerical scale.

Table 2.5 Expressions of quality

Term / Median / Standard deviation:
- Outstanding / 9.9 / 0.4
- Excellent / 9.7 / 0.6
- Very good / 8.5 / 0.7
- Good / 7.2 / 0.8
- Satisfactory / 5.9 / 1.2
- Adequate / 5.6 / 1.2
- Fair / 5.2 / 1.1
- Medium / 5 / 0.6
- Average / 4.9 / 0.5
- Not too bad / 4.6 / 1.3
- So-so / 4.5 / 0.7
- Inadequate / 1.9 / 1.2
- Unsatisfactory / 1.8 / 1.3
- Poor / 1.5 / 1.1
- Bad / 1 / 1

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.

Table 2.6 Expressions of frequency

Term / Median / Standard deviation:
- Always / 10 / 0.2
- Very often / 8.3 / 0.9
- Mostly / 8 / 1.3
- Frequently / 7.4 / 1.2
- Often / 6.6 / 1.2
- Fairly often / 6.1 / 1.1
- Moderately often / 5.7 / 1.2
- Sometimes / 3.6 / 1
- Occasionally / 3.2 / 1.1
- Seldom / 1.7 / 0.7
- Rarely / 1.3 / 0.6
- Never / 0 / 0.1

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.

Variable grouping

The grouping or categorizing of failure modes, consequences, and underlying factors is a model design decision that must be made. Use of variables and subvariables helps understandability when variables are grouped in a logical fashion, but also creates intermediate calculations. Some view this as an attractive

aspect of a model, while others might merely see it as an unnecessary complication. Without categories of variables, the model takes on the look of a flat file, in a database design analogy. When using categories that look more like those of a relational database design, the interdependencies are more obvious.

Weightings

The weightings of the risk variables, that is, their maximum possible point values or adjustment factors, reflect the relative importance of each item. Importance is based on the variable's role in adding to or reducing risk. The following examples illustrate the way weightings can be viewed. Suppose that the threat of AC-induced corrosion is thought to represent 2% of the total threat of corrosion. It is a relatively rare phenomenon. Suppose further that all corrosion conditions and activities are thought to be worst case: the pipeline is in a harsh environment with no mitigation (no coatings, no cathodic protection, etc.) and atmospheric, internal, and buried metal corrosion are all thought to be imminent. If we now addressed all AC corrosion concerns only, then we would be adding 2% safety, reducing the threat of corrosion of any kind by 2% (and reducing the threat of AC-induced corrosion by 100%). As another example, if public education is assumed to carry a weight of 15 percent of the third-party threat, then doing public education as well as it can be done should reduce the relative failure rate from third-party damage scenarios by 15%. Weightings should be continuously revisited and modified whenever evidence shows that adjustments are appropriate. The weightings are especially important when absolute risk calculations are being performed. For example, if an extra foot of cover is assumed, via the weightings assigned, to reduce failure probability by 10%, but an accumulation of statistical data suggests the effect is closer to 20%, obviously the predictive power of the model is improved by changing the weightings accordingly. In actuality, it is very difficult to extract the true influence of a single variable from the confounding influence of the multitude of other variables that are influencing the scenario simultaneously. In the depth of cover example, the reality is probably that the extra foot of cover impacts risk by 10% in some situations, 50% in others, and not at all in still others. (See also Chapter 8 for a discussion of sensitivity analysis.) The issue of assigning weightings to overall failure mechanisms also arises in model development. In a relative risk model with failure mechanisms of substantially equivalent orders of magnitude, a simplification can be used. The four indexes shown in Chapters 3 through 6 correspond to common failure modes and have equal 0-100 point scales; all failure modes are weighted equally. Because accident history (with regard to cause of failures) is not consistent from one company to another, it does not seem logical to rank one index over another on an accident history basis. Furthermore, if index weightings are based on a specific operator's experience, that accident experience will probably change with the operator's changing risk management focus. When an operator experiences many corrosion failures, he will presumably take actions to specifically reduce corrosion potential. Over time, a different mechanism may consequently become the chief failure cause. So, the weightings would need to change periodically, making the tracking of risk difficult. Weightings should, however, be used

to reflect beliefs about the frequency of certain failure types when linking relative models to absolute calculations or when there are large variations in expected failure frequencies among the possible failure types.
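The arithmetic behind such statements can be made explicit, as in the sketch below, where fully addressing a variable removes a share of the modeled threat equal to that variable's weight; apart from the 15% public-education weight quoted above, the weights and values are hypothetical.

# Sketch: a variable's weight bounds how much fully addressing it can move the result.
# Example: public education at 15% of the third-party threat, done perfectly,
# reduces the modeled third-party failure likelihood by 15%.

def mitigated_threat(base_threat, weights, mitigation_effectiveness):
    """Each variable removes (weight * effectiveness) of the base threat; 1.0 = done as well as possible."""
    reduction = sum(weights[v] * mitigation_effectiveness.get(v, 0.0) for v in weights)
    return base_threat * (1.0 - reduction)

third_party_weights = {"public education": 0.15, "patrol": 0.20, "depth of cover": 0.25,
                       "one-call program": 0.25, "ROW condition": 0.15}
print(mitigated_threat(1.0, third_party_weights, {"public education": 1.0}))   # 0.85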

Risk scoring

Direction of point scale

In a scoring-type relative risk assessment, one of two point schemes is possible: either increasing or decreasing scores can be used to represent increased risk. Either can be effectively used and each has advantages. As a risk score, it makes sense that higher numbers mean more risk. However, in analogy to a grading system and most sports and games (except golf), others prefer higher numbers being better: more safety and less risk. Perhaps the most compelling argument for the "increasing points = increasing safety" protocol is that it instills a mind-set of increasing safety. "Increasing safety" has a meaning subtly different from, and certainly more positive than, "lowering risks." The implication is that additional safety is layered onto an already safe system as points are acquired. This latter protocol also has the advantage of corresponding to certain common expressions such as "the risk situation has deteriorated" = "scores have decreased" and "the risk situation has improved" = "scores have increased." While this book uses an "increasing points = increasing safety" scale in all examples of failure probability, note that this choice can cause a slight complication if the relative risk assessments are linked to absolute risk values. The complication arises since the indexes actually represent relative probability of survival, and in order to calculate a relative probability of failure and link that to failure frequencies, an additional step is required. This is discussed in Chapter 14.

Where to assign weightings

In previous editions of this model, it is suggested that point values be set equal to weightings. That is, when a variable has a point value of 3, it represents 3% of the overall risk. The disadvantage of this system is that the user does not readily see what possible values that variable could take. Is it a 5-point variable, in which case a value of 3 means it is scoring midrange? Or is it a 15-point variable, for which a score of 3 means it is relatively low? An alternative point assignment scheme scores all variables on a fixed scale such as 0-10 points. This has the advantage of letting the observer know immediately how "good" or "bad" the variable is. For example, a 2 always means 20% from the bottom and a 7 always means 70% of the maximum points that could be assigned. The disadvantage is that, in this system, weightings must be used in a subsequent calculation. This adds another step to the calculation and still does not make the point scale readily apparent. The observer does not know what the 70% variable score really means until he sees the weightings assigned. A score of 7 for a variable weighted at 20% is quite different from a score of 7 for a variable weighted at 5%. In one case, the user must see the point scale to know that a score of, say, 4 points represents the maximum level of mitigation. In the alternate case, the user knows that 10 always represents the maximum level of mitigation, but does not know how important the risk will be until she sees the weighting of that variable. Confusion can also arise in some models when the same variable is used in different parts of the model and has a location-specific scoring scheme. For instance, in the offshore environment, water depth is a risk reducer when it makes anchoring damage less likely. It is a risk increaser when it increases the chance of buckling. So the same variable, water depth, is a "good" thing in one part of the model and a "bad" thing somewhere else.

Combining variables

An additional modeling design feature involves the choice of how variables will be combined. Because some variables will indicate increasing risk and others decreasing risk, a sign convention (positive versus negative) must be established. Increasing levels of preventions should lead to decreased risks, while many attributes will be adding risks (see the earlier discussion of preventions and attributes). For example, the prevention of performing additional inspections should improve risk scores, while risk scores deteriorate as more soil corrosivity indications (moisture, pH, contaminants, etc.) are found. Another aspect of combining variables involves the choice of multiplication versus addition. Each has advantages. Multiplication allows variables to independently have a great impact on a score. Addition better illustrates the layering of adverse conditions or mitigations. In formal probability calculations, multiplication usually represents the AND operation: If corrosion prevention = "poor" AND soil corrosivity = "high" then risk = "high." Addition usually represents the OR operation: If depth of cover = "good" OR activity level = "low" then risk = "low."

Option 1

Risk variable = (sum of risk increasers) - (sum of risk reducers)

where the point scales for each are in the same direction. For example,

Corrosion threat = (environment) - [(coating) + (cathodic protection)]

Option 2

Risk variable = (sum of risk increasers) + (sum of risk reducers)

Point scales for risk increasers are often opposite from the scale of risk reducers. For example, in an "increasing points means increasing risk" scheme,

Corrosion threat = (environment) + [(coating) + (cathodic protection)]

where actual point values might be

(corrosion threat) = (24) + (-5 + -2) = 17

Option 3

In this approach, we begin with an assessment of the threat level and then consider mitigation measures as adjustment factors. So, we begin with a risk and then adjust the risk downward (if increasing points = increasing risk) as mitigation is added:

Risk variable = (threat) x (sum of % threat reduction through mitigations)


Example: Corrosion threat = (environment) x [(coating) + (cathodic protection)]

Option 3 avoids the need to create codes for interactions of variables. For example, a scoring rule such as "cathodic protection is not needed = 10 pts" would not be needed in this scheme. It would be needed in other scoring schemes to account for a case where risk is low not through mitigation but through absence of threat. The scoring should also attempt to define the interplay of certain variables. For example, if one variable can be done so well as to make certain others irrelevant, then the scoring protocol should allow for this. For example, if patrol (perhaps with a nominal weight of 20% of the third-party damage potential) can be done so well that we do not care about any other activity or condition, then other pertinent variables (such as public education, activity level, and depth of cover) could be scored as NA (the best possible numerical score) and the entire index is then based solely on patrol. In theory, this could be the case for a continuous security presence in some situations. A scoring regime that uses multiplication rather than addition is better suited to capturing this nuance. The variables shown in Chapters 3 through 6 use a variation of option 2. All variables start at a value of 0, the highest risk. Then safety points are awarded for knowledge of less threatening conditions and/or the presence of mitigations. Any of the options can be effective as long as a point assignment manual is available to ensure proper and consistent scoring.

Variable calculations

Some risk assessment models in use today combine risk variables using only simple summations. Other mathematical relationships might be used to create variables before they are added to the model. The designer has the choice of where in the process certain variables are created. For instance, D/t (pipe diameter divided by wall thickness) is often thought to be related to crack potential or strength or some other risk issue. A variable called D/t can be created during data collection and its value added to other risk variables. This eliminates the need to divide D by t in the actual model. Alternatively, data for diameter and wall thickness could be made directly available to the risk model's algorithm, which would calculate the variable D/t as part of the risk scoring. Given the increased robustness of computer environments, the ability to efficiently model more complex relationships is leading to risk assessment models that take advantage of this ability. Conditional statements ("if X then Y"), including comparative relationships ["if (pop density) > 2 then (design factor) = 0.6, ELSE (design factor) = 0.72"], are becoming more prevalent. The use of these more complex algorithms to describe aspects of risk tends to mirror human reasoning and decision-making patterns. They are not unlike very sophisticated efforts to create expert systems and other artificial intelligence applications based on many simple rules that represent our understanding. Examples of more complex algorithms are shown in the following chapters and in Appendix E.
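A short sketch of these more algorithmic variables is given below: a derived D/t value, a conditional design-factor rule like the one just quoted, and an Option 3 style threat-times-mitigation combination in which mitigation fractions scale the threat downward; the specific numbers are illustrative.

# Sketch: derived variables, a conditional rule, and an Option 3 style combination.
# Numbers are illustrative, not prescribed values.

def d_over_t(diameter_in, wall_in):
    return diameter_in / wall_in                       # derived variable computed inside the model

def design_factor(pop_density_class):
    return 0.6 if pop_density_class > 2 else 0.72      # conditional rule, as in the quoted example

def corrosion_threat(environment, mitigation_fractions):
    """Option 3 style: start from the unmitigated threat, then scale it down by the summed mitigation."""
    reduction = min(sum(mitigation_fractions.values()), 1.0)
    return environment * (1.0 - reduction)

print(round(d_over_t(12.75, 0.250), 1))                                      # 51.0
print(design_factor(3))                                                      # 0.6
print(corrosion_threat(80, {"coating": 0.4, "cathodic protection": 0.3}))    # 24.0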

Direct evidence adjustments

Risk evaluation is done primarily through the use of variables that provide indirect evidence of failure potential. This includes knowledge of pipe characteristics, measurements of environmental conditions, and results of surveys. From these, we infer the potential presence of active failure mechanisms or failure potential. However, active failure mechanisms are directly detected by in-line inspection (ILI), pressure testing, and/or visual inspections, including those that might be prompted by a leak. Pressure testing is included here as a direct means because it will either verify that failure mechanisms, even if present, have not compromised structural integrity or it will prompt a visual inspection. If direct evidence appears to be in conflict with risk assessment results (based on indirect evidence), then one of three scenarios is true:

1. The risk assessment model is wrong; an important variable has been omitted or undervalued, or some interaction of variables has not been properly modeled.
2. The data used in the risk assessment are wrong; actual conditions are not as thought.
3. There actually is no conflict; the direct evidence is being interpreted incorrectly, or it represents an unlikely but statistically possible event that the risk assessment had discounted due to its very low probability.

It is prudent to perform an investigation to determine which scenario is the case. The first two each have significant implications regarding the utility of the risk management process. The last is a possible learning opportunity. Any conclusions based on previously gathered indirect evidence should be adjusted or overridden, when appropriate, by direct evidence. This reflects common practice, especially for time-dependent mechanisms such as corrosion: best efforts produce an assessment of corrosion potential, but that assessment is periodically validated by direct observation. The recommendation is that, whenever direct evidence of failure mechanisms is obtained, assessments should assume that these mechanisms are active. This assumption should remain in place until an investigation, preferably a root cause analysis (discussed later in this chapter), demonstrates that the causes underlying the failure mechanisms are known and have been addressed. For example, an observation of external corrosion damage should not be assumed to reflect old, already-mitigated corrosion. Rather, it should be assumed to represent active external corrosion unless the investigation concludes otherwise. Direct or confirmatory evidence includes leaks, breaks, anomalies detected by ILI, damages detected by visual inspection, and any other information that provides a direct indication of pipe integrity, if only at a very specific point. The use of ILI results in a risk assessment is discussed in Chapter 5. The evidence should be captured in at least two areas of the assessment: pipe strength and failure potential. If reductions are not severe enough to warrant repairs, then the wall loss or strength reduction should be considered in the pipe strength evaluation (see Chapter 5). If repairs are questionable (use of nonstandard materials or practices), then the repair itself

This includes a repair's potential to cause unwanted stress concentrations. If complete and acceptable repairs that restored full component strength have been made, then risk assessment "penalties" can be removed. Regardless of repair, evidence still suggests the potential for repeat failures in the same area until the root cause identification and elimination process has been completed.

Whether or not a root cause analysis has been completed, direct evidence can be compiled in various ways for use in a relative risk assessment. A count of incidences or a density of incidences (leaks per mile, for example) will be an appropriate use of information in some cases, while a zone-of-influence or anomaly-specific approach might be better suited in others. When such incidences are rather common (occurring regularly or clustering in locations), the density or count approaches can be useful. For example, the density of ILI anomalies of a certain type and size in a transmission pipeline or the density of nuisance leaks in a distribution main are useful risk indications (see Chapters 5 and 11). When direct evidence is rare in time and/or space, a more compelling approach is to assign a zone of influence around each incident. For example, a transmission pipe leak incident is rare and often directly affects only a few square inches of pipe. However, it yields evidence about the susceptibility of neighboring sections of pipeline. Therefore, a zone of influence, X number of feet on either side of the leak event, can be assigned around the leak. The length of pipeline within this zone of influence is then conservatively treated as having leaked and containing conditions that might suggest increased leak susceptibility in the future.

The recommended process for incorporating direct evidence into a relative risk assessment is as follows:

A. Use all available leak history and ILI results, even when root cause investigations are not available, to help evaluate and score appropriate risk variables. Conservatively assume that damage mechanisms are still active. For example, the detection of pipe wall thinning due to external corrosion implies:
The existence of a corrosive environment
Failure of both coating and cathodic protection systems, or a special mechanism at work such as AC-induced corrosion or microbially induced corrosion
A pipe wall thickness that is not as thought; pipe strength must be recalculated
Scores should be assigned accordingly. The detection of damaged coating, gouges, or dents suggests previous third-party damages or substandard installation practices. This implies that:
Third-party damage activity is significant, or at least was at one time in the past
Errors occurred during construction
Pipe strength must be recalculated
Again, scores can be assigned accordingly.

B. Use new direct evidence to directly validate or adjust risk scores. Compare actual coating condition, pipe wall thickness, pipe support condition, soil corrosivity, etc., with the corresponding risk variables' scores. Compare the relative likelihood of each failure mode with the direct evidence.

How does the model's implied corrosion rate compare with wall loss observations? How does third-party damage likelihood compare with dents and gouges on the top or side of pipe? Is the design index measure of land movement potential consistent with observed support condition or evidence of deformation?

C. If disagreement is apparent (the direct evidence says something is actually "good" or "bad" while the risk model says the opposite), then perform an investigation. Based on the investigation results, do one or more of the following:
Modify risk algorithms based on new knowledge.
Modify previous condition assessments to reflect new knowledge. For example, "coating condition is actually bad, not fair as previously thought" or "cathodic protection levels are actually inadequate, despite 3-year-old close interval survey results."
Monitor the situation carefully. For example, "existing third-party damage preventions are very protective of the pipe and this recent detection of a top side dent is a rare exception or old and not representative of the current situation. Rescoring is not appropriate unless additional evidence is obtained suggesting that third-party damage potential is actually higher than assumed." Note that this example is a nonconservative use of information and is not generally recommended.

Role of leak history in risk assessment

Pipeline failure data often come at a high cost: an accident happens. We can benefit from this unfortunate acquisition of data by refining our model to incorporate the new information. In actual practice, it is a common belief, which is sometimes backed by statistical analysis, that pipeline sections that have experienced previous leaks are more likely to have additional leaks. Intuitive reasoning suggests that conditions that promote one leak will most likely promote additional leaks in the same area.

Leak history should be a part of any risk assessment. It is often the primary basis of risk estimations expressed in absolute terms (see Chapter 14). A leak is strong evidence of failure-promoting conditions nearby such as soil corrosivity, inadequate corrosion prevention, problematic pipe joints, failure of the one-call system, active earth movements, or any of many others. It is evidence of future leak potential. This evidence should be incorporated into a relative risk assessment because, hopefully, the evaluator's "degree of belief" has been impacted by leaks. Each risk variable should always incorporate the best available knowledge of conditions and possibilities for promoting failure. Where past leaks have had no root cause analysis and/or corrective action applied, risk scores for the type of failure can be adjusted to reflect the presence of higher failure probability factors. A zone of influence around the leak site can be established (see Chapter 8) to penalize nearby portions of the system.

In some pipelines, such as distribution systems (see Chapter 11) where some leak rate is routinely seen, the determination as to whether a section of pipeline is experiencing a higher frequency of leaks must be made on a relative basis. This can be


done by making comparisons with similar sections owned by the company or with industry-wide leak rates, as well as by benchmarking against specific other companies or by a combination of these. Note that an event history is only useful in predicting future events to the extent that conditions remain unchanged. When corrective actions are applied, the event probability changes. Any adjustment for leak frequency should therefore be reanalyzed periodically.
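One possible mechanization of the zone-of-influence idea is sketched below. The stationing units, the 500-ft half-width, and the 5-point penalty are assumptions chosen only for illustration; actual values should reflect the evaluator's degree of belief about how far leak-promoting conditions extend.

```python
# Sketch of a zone-of-influence adjustment around known leak sites.
# The zone half-width and the penalty are illustrative assumptions.

ZONE_HALF_WIDTH_FT = 500   # X feet either side of a leak (assumed value)
LEAK_PENALTY_PTS = 5       # points removed from the relevant index (assumed value)

def leak_penalty(segment_start_ft, segment_end_ft, leak_stations_ft):
    """Return the penalty for a segment that overlaps the zone of
    influence of any recorded leak; 0 if no leak zone touches the segment."""
    for leak_ft in leak_stations_ft:
        zone_start = leak_ft - ZONE_HALF_WIDTH_FT
        zone_end = leak_ft + ZONE_HALF_WIDTH_FT
        if segment_start_ft <= zone_end and segment_end_ft >= zone_start:
            return LEAK_PENALTY_PTS
    return 0

if __name__ == "__main__":
    leaks = [12_300]  # station of a past leak, ft
    print(leak_penalty(11_900, 12_100, leaks))   # 5 -> inside the zone
    print(leak_penalty(20_000, 25_000, leaks))   # 0 -> outside the zone
```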

Visual inspections

A visual inspection of an internal or external pipe surface may be triggered by an ILI anomaly investigation, a leak, a pressure test, or routine maintenance. If a visual inspection detects pipe damage, then the respective failure mode score for that segment of pipe should reflect the new evidence. Points can be reassigned only after a root cause analysis has been done and demonstrates that the damage mechanism has been permanently removed.

For risk assessment purposes, a visual inspection is often assumed to reflect conditions for some length of pipe beyond the portions actually viewed. A conservative zone some distance either side of the damage location can be assumed. This should reflect the degree of belief and be conservative. For instance, if poor coating condition is observed in one site, then poor coating condition should be assumed for as far as those conditions (coating type and age, soil conditions, etc.) might extend. As noted earlier, penalties from visual inspections are removed through root cause analysis and removal of the root cause.

Historical records of leaks and visual inspections should be included in the risk assessment even if they do not completely document the inspection, leak cause, or repair, as is often the case. Because root cause analyses for events long ago are problematic, and their value in a current condition assessment is arguable, the weighting of these events is often reduced, perhaps in proportion to the event's age.

Root cause analyses

Pipeline damage is very strong evidence of failure mechanisms at work. This should be captured in the risk assessment. However, once the cause of the damage has been removed, if it can be, then the risk assessment should reflect the now safer condition. Determining and removing the cause of a failure mechanism is not always easy. Before the evidence provided by actual damage is discounted, the evaluator should ensure that the true underlying cause has been identified and addressed. There are no rules for determining when a thorough and complete investigation has been performed. To help the evaluator make such a judgment, the following concepts regarding root cause analyses are offered [32].

A root cause analysis is a specialized type of incident investigation process that is designed to find the lowest level contributing causes to the incident. More conventional investigations often fail to arrive at this lowest level. For example, assume that a leak investigation reveals that a failed coating contributed to a leak. The coating is subsequently repaired and the previously assigned leak penalty is removed from the risk assessment results. But then, a few years later, another leak appears at the same location. It turns out that the

main root cause was actually soil movements that will damage any coating, eventually leading to a repeat leak (discounting the role of other corrosion preventions; see Chapter 3). In this case, the leak penalty in the risk assessment should have been removed only after addressing the soil issue, not simply the coating repair. This example illustrates that the investigators stopped the analysis too early by not determining the causes of the damaged coating.

The root is often a system of causes that should be defined in the analysis step. The very basic understanding of cause and effect is that every effect has causes (plural). There is rarely only one root cause. The focus of any investigation or risk assessment is ultimately on effective solutions that prevent recurrence. These effective solutions are found by being very diligent in the analysis step (the causes).

A typical indication of an incomplete analysis is missing evidence. Each cause-and-effect relationship should be validated with evidence. If we do not have evidence, then the cause-and-effect relationship cannot be validated. Evidence must be added to all causes in the analysis step. In the previous example, the investigators were missing the additional causes and their evidence to causally explain why the coating was damaged. If the investigators had evidence of coating damage, then the next question should have been "Why was the coating damaged?" A thorough analysis addresses the system of causes. If investigators cannot explain why the coating was damaged, then they have not completed the investigation. Simply repairing the coating is not going to be an effective solution.

Technically, there is no end to a cause-and-effect chain; there is no end to the "Why?" questions. Common terminology includes root cause, direct cause, indirect cause, main cause, primary cause, contributing cause, proximate cause, physical cause, and so on. It is also true that between any cause-and-effect relationship there are more causes that can be added; we can always ask more "Why?" questions between any cause and effect. This allows an analysis to dig into whatever level of detail is necessary. The critical point here is that the risk evaluator should not discount strong direct evidence of damage potential unless there is also compelling evidence that the damage-causing mechanisms have been permanently removed.

V. Lessons learned in establishing a risk assessment program

As the primary ingredient in a risk management system, a risk assessment process or model must first be established. This is no small undertaking and, as with any undertaking, is best accomplished with the benefit of experience. The following paragraphs offer some insights gained through development of many pipeline risk management programs for many varied circumstances. Of course, each situation is unique and any rules of thumb are necessarily general and subject to many exceptions to the rules. To some degree, they also reflect a personal preference, but nonetheless are offered here as food for thought for those embarking on such programs. These insights include some key points repeated from the first two chapters of this book.

The general lessons learned are as follows:

Work from general to specific.
Think "organic."
Avoid complexity.
Use computers wisely.
Build the program as you would build a new pipeline.
Study your results.

Avoid complexity

Every single component of the risk model should yield more benefits than the cost it adds in terms of complexity and data-gathering efforts. Challenge every component of the risk model for its ability to genuinely improve the risk knowledge at a reasonable cost. For example:

Don't include an exotic variable unless that variable is a useful risk factor.
Don't use more significant digits than is justified.
Don't use exponential notation numbers if a relative scale can be appropriately used.
Don't duplicate existing databases; instead, access information from existing databases whenever possible. Duplicate data repositories will eventually lead to data inconsistencies.
Don't use special factors that are only designed to change numerical scales. These tend to add more confusion than their benefit in creating easy-to-use numbers.
Avoid multiple levels of calculations whenever possible.
Don't overestimate the accuracy of your results, especially in presentations and formal documentation. Remember the high degree of uncertainty associated with this type of effort.

We now take a look at the specifics of these lessons learned.

Work from general to specific

Get the big picture first. This means "Get an overview assessment done for the whole system rather than getting every detail for only a portion of the system." This has two advantages:

1. No matter how strongly the project begins, things may change before project completion. If an interruption does occur, at least a general assessment has been done and some useful information has been generated.
2. There are strong psychological benefits to having results (even if very preliminary; caution is needed here) early in the process. This provides incentives to refine and improve preliminary results. So, having the entire system evaluated to a preliminary level gives timely feedback and should encourage further work.

It is easy to quickly assess an entire pipeline system by limiting the number of risk variables in the assessment. Use only a critical few, such as population density, type of product, operating pressure, perhaps incident experience, and a few others. The model can then later be "beefed up" by adding the variables that were not used in the first pass. Use readily available information whenever possible.

Think "organic"

Imagine that the risk assessment process and even the model itself are living, breathing entities. They will grow and change over time. There is the fruit: the valuable answers that are used to directly improve decision making. The ideal process will continuously produce ready-to-eat fruit that is easy to "pick" and use without any more processing. There are also the roots: the behind-the-scenes techniques and knowledge that create the fruit. To ensure the fruit is good, the roots must be properly cared for. Feed and strengthen the roots by using HAZOPS, statistical analysis, FMEA, event trees, fault trees, and other specific risk tools occasionally. Such tools provide the underpinnings for the risk model. Allow for growth because new inspection data, new inspection techniques, new statistical data sets to help determine weightings, missed risk indicators, new operating disciplines, and so on will arise. Plan for the most flexible environment possible. Make changes easy to incorporate. Anticipate that regardless of where the program begins and what the initial focus was, eventually, all company personnel might be visiting and "picking the fruit" provided by this process.

Use computers wisely

Too much reliance on computers is probably more dangerous than too little. In the former, knowledge and insight can be obscured and even convoluted. In the latter, the chief danger is that inefficiencies will result, an undesirable, but not critical, event. Regardless of potential misuse, however, computers can greatly increase the strength of the risk assessment process, and no modern program is complete without extensive use of them. The modern software environment is such that information is easily moved between applications.

In the early stages of a project, the computer should serve chiefly as a data repository. Then, in subsequent stages, it should house the algorithm: how the raw information such as wall thickness, population density, soil type, etc., is turned into risk information. In later stages of the project, data analysis and display routines should be available. Finally, computer routines to ensure ease and consistency of data entry, model tweaking, and generation of required output should be available. Software use in risk modeling should always follow program development, not lead it.

Early stage. Use pencil and paper or simple graphics software to sketch preliminary designs of the risk assessment system. Also use project management tools if desired to plan the risk management project.

Intermediate stages. Use software environments that can store, sort, and filter moderate amounts of data and generate new values from arithmetic and logical (if...then...else) combinations of input data. Choices include modern spreadsheets and desktop databases.

Later stages. Provide for larger quantity data entry, manipulation, query, display, etc., in a long-term, secure, and user-friendly environment. If spatial linking of information is desired, consider migrating to geographical information systems (GIS) platforms. If multiuser access is desired, consider robust database environments.


Computer usage in pipeline risk assessment and management is further discussed in Chapter 8.

Build the program as you would build a new pipeline

A useful way to view the establishment of a risk management program, and in particular the risk assessment process, is to consider a direct analogy with new pipeline construction. In either case, a certain discipline is required. As with new construction, failures in risk modeling occur through inappropriate expectations and poor planning, while success happens through thoughtful planning and management. Below, the project phases of a pipeline construction are compared to a risk assessment effort.

I. Conceptualization and scope creation phase:
Pipeline: Determine the objective, the needed capacity, the delivery parameters and schedule.
Risk assessment: Several questions to the pipeline operator may better focus the effort and direct the choice of a formal risk assessment technique: What data do you have? What is your confidence in the predictive value of the data? What are the resource demands (and availability) in terms of costs, man-hours, and time to set up and maintain a risk model? What benefits do you expect to accrue, in terms of cost savings, reduced regulatory burdens, improved public support, and operational efficiency? Subsequent defining questions might include: What portions of your system are to be evaluated? Pipeline only? Tanks? Stations? Valve sites? Mainlines? Branch lines? Distribution systems? Gathering systems? Onshore/offshore? To what level of detail? Estimate the uses for the model, then add a margin of safety because there will be unanticipated uses. Develop a schedule and set milestones to measure progress.

II. Route selection/ROW acquisition:
Pipeline: Determine the optimum routing, begin the process of acquiring needed ROW.
Risk assessment: Determine the optimum location for the model and expertise. Centrally done from corporate headquarters? Field offices maintain and use information? Unlike the pipeline construction analogy, this aspect is readily changed at any point in the process and does not have to be finally decided at this early stage of the project.

III. Design:
Pipeline: Perform detailed design hydraulic calculations; specify equipment, control systems, and materials.
Risk assessment: The heart of the risk assessment will be the model or algorithm, that component which takes raw information such as wall thickness, population density, soil type, etc., and turns it into risk information. Successful risk modeling involves a balancing between various issues including:
Identifying an exhaustive list of contributing factors versus choosing the critical few to incorporate in a model (complex versus simple)

Hard data versus engineering judgment (how to incorporate widely held beliefs that do not have supporting statistical data)
Uncertainty versus statistics (how much reliance to place on predictive power of limited data)
Flexibility versus situation-specific model (ability to use same model for a variety of products, geographical locations, facility types, etc.)

It is important that all risk variables be considered even if only to conclude that certain variables will not be included in the final model. In fact, many variables will not be included when such variables do not add significant value but reduce the usability of the model. These "use or don't use" decisions should be done carefully and with full understanding of the role of the variables in the risk picture. Note that many simplifying assumptions are often made, especially in complex phenomena like dispersion modeling, fire and explosion potentials, etc., in order to make the risk model easy to use and still relatively robust. Both probability variables and consequence variables are examined in most formal risk models. This is consistent with the most widely accepted definition of risk:

Event risk = (event probability) x (event consequence)

(See also "VI. Commissioning" for more aspects of a successful risk model design.)

IV. Material procurement:
Pipeline: Identify long-delivery-time items, prepare specifications, determine delivery and quality control processes.
Risk assessment: Identify data needs that will take the longest to obtain and begin those efforts immediately. Identify data formats and level of detail. Take steps to minimize subjectivity in data collection. Prepare data collection forms or formats and train data collectors to ensure consistency.

V. Construction:
Pipeline: Determine number of construction spreads, material staging, critical path schedule, inspection protocols.
Risk assessment: Form the data collection team(s), clearly define roles and responsibilities, create a critical path schedule to ensure timely data acquisition, schedule milestones, and take steps to ensure quality assurance/quality control.

VI. Commissioning:
Pipeline: Testing of all components, start-up programs completed.
Risk assessment: Use statistical analysis techniques to partially validate model results from a numerical basis. Perform a sensitivity analysis and some trial "what-ifs" to ensure that model results are believable and consistent. Perform validation exercises with experienced and knowledgeable operating and maintenance personnel. It is hoped that the risk assessment characteristics were earlier specified in the design and concept phase of the project, but here is a final place to check to ensure the following:

All failure modes are considered.
All risk elements are considered and the most critical ones are included.
Failure modes are considered independently as well as in aggregate.
All available information is being appropriately utilized.
Provisions exist for regular updates of information, including new types of data.
Consequence factors are separable from probability factors.
Weightings, or other methods to recognize relative importance of factors, are established.
The rationale behind weightings is well documented and consistent.
A sensitivity analysis has been performed.
The model reacts appropriately to failures of any type.
Risk elements are combined appropriately ("and" versus "or" combinations).
Steps are taken to ensure consistency of evaluation.
Risk assessment results form a reasonable statistical distribution (outliers?).
There is adequate discrimination in the measured results (signal-to-noise ratio).
Comparisons can be made against fixed or floating standards or benchmarks.

VII. Project completion:
Pipeline: Finalize manuals, complete training, ensure maintenance protocols are in place, and turn system over to operations.
Risk assessment: Carefully document the risk assessment process and all subprocesses, especially the detailed workings of the algorithm or central model. Set up administrative processes to support an ongoing program. Ensure that control documents cover the details of all aspects of a good administrative program, including:
Defining roles and responsibilities
Performance monitoring and feedback
Process procedures
Management of change
Communication protocols
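Looking back at the commissioning checks above, a sensitivity analysis can be as simple as the one-at-a-time perturbation sketched below. The variable names, base scores, and the +/-10 percent perturbation are assumptions for illustration only; the intent is merely to confirm that the model responds to each input in a believable, proportionate way.

```python
# One-at-a-time sensitivity check for a simple additive index (illustrative only).
# Variable names, weights, and the perturbation size are assumptions for the sketch.

base = {"depth_of_cover": 14, "activity_level": 8, "patrol": 10, "public_education": 12}

def index_score(scores):
    return sum(scores.values())

def sensitivity(scores, perturbation=0.10):
    """Perturb each variable up and down by the given fraction and report the swing in the index."""
    results = {}
    for name, value in scores.items():
        up = dict(scores, **{name: value * (1 + perturbation)})
        down = dict(scores, **{name: value * (1 - perturbation)})
        results[name] = index_score(up) - index_score(down)
    return results

if __name__ == "__main__":
    for name, delta in sorted(sensitivity(base).items(), key=lambda kv: -kv[1]):
        print(f"{name:18s} swing = {delta:.1f} pts")
```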

Study the results

This might seem obvious, but it is surprising how many owners really do not appreciate what they have available after completing a thorough risk assessment. Remember that your final risk numbers should be completely meaningful in a practical, real-world sense. They should represent everything you know about that piece of pipe (or other system component): all of the collective years of experience of your organization, all the statistical data you can gather, all your gut feelings, all your sophisticated engineering calculations. If you can't really believe your numbers, something is wrong with the model. When, through careful evaluation and much experience, you can really believe the numbers, you will find many ways to use them that you perhaps did not foresee. They can be used to

Design an operating discipline
Assist in route selection
Optimize spending
Strengthen project evaluation
Determine project prioritization
Determine resource allocation
Ensure regulatory compliance

VI. Examples of scoring algorithms

Sample relative risk model

The relative risk assessment model outlined in Chapters 3 through 7 is designed to be a simple and straightforward pipeline risk assessment model that focuses on potential consequences to public safety and environment preservation. It provides a framework to ensure that all critical aspects of risk are captured. Figure 2.4 shows a flowchart of this model. This framework is flexible enough to accommodate any level of detail and data availability. For most variables, a sample point-scoring scheme is presented. In many cases, alternative scoring schemes are also shown. Additional risk assessment examples can be found in the case studies of Chapter 14 and in Appendix E.

The pipeline risk picture is examined in two general parts. The first part is a detailed itemization and relative weighting of all reasonably foreseeable events that may lead to the failure of a pipeline: "What can go wrong?" and "How likely is it to go wrong?" This highlights operational and design options that can change the probability of failure (Chapters 3 through 6). The second part is an analysis of the potential consequences should a failure occur (Chapter 7). The two general parts correspond to the two factors used in the most commonly accepted definition of risk:

Risk = (event likelihood) x (event consequence)

The failure potential component is further broken into four indexes (see Figure 2.4). The indexes roughly correspond to categories of reported pipeline accident failures. That is, each index reflects a general area to which, historically, pipeline accidents have been attributed. By considering each variable in each index, the evaluator arrives at a numerical value for that index. The four index values are then summed to a total value (called the index sum) representing the overall failure probability (or survival probability) for the segment evaluated. The individual variable values, not just the total index score, are preserved, however, for detailed analysis later.

The primary focus of the probability part of the assessment is the potential for a particular failure mechanism to be active. This is subtly different from the likelihood of failure. Especially in the case of a time-dependent mechanism such as corrosion, fatigue, or slow earth movements, the time to failure is related to factors beyond the presence of a failure mechanism. These include the resistance of the pipe material, the aggressiveness of the failure mechanism, and the time of exposure. These, in turn, can be further examined. For instance, the material resistance is a function of material strength; dimensions, most notably pipe wall thickness; and the stress level. The additional aspects leading to a time-to-fail estimate are usually more appropriately considered in specific investigations.

2/40 Risk Assessment Process

In the second part of the evaluation, an assessment is made of the potential consequences of a pipeline failure. Product characteristics, pipeline operating conditions, and the pipeline surroundings are considered in arriving at a consequence factor. The consequence score is called the leak impact factor and includes acute as well as chronic hazards associated with product releases. The leak impact factor is combined with the index sum (by dividing) to arrive at a final risk score for each section of pipeline. The end result is a numerical risk value for each pipeline section. All of the information incorporated into this number is preserved for a detailed analysis, if required. The higher-level variables of the entire process can be seen in the flowchart in Figure 2.4.
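The combination just described reduces to a few lines of code. The sketch below assumes each of the four indexes is scored 0-100 (index sum 0-400) and divides by the leak impact factor; the example input values are invented for illustration and higher results indicate greater relative safety.

```python
# Sketch of the final relative risk calculation: index sum divided by leak impact factor.
# The example index values and leak impact factor below are illustrative only.

def relative_risk(third_party, corrosion, design, incorrect_ops, leak_impact_factor):
    """Higher index values mean greater safety, so a higher result means lower relative risk."""
    index_sum = third_party + corrosion + design + incorrect_ops  # 0-400 scale
    return index_sum / leak_impact_factor

if __name__ == "__main__":
    score = relative_risk(third_party=72, corrosion=65, design=80,
                          incorrect_ops=70, leak_impact_factor=55)
    print(round(score, 1))  # ~5.2 on this illustrative scale
```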

Basic assumptions

Some general assumptions are built into the relative risk assessment model discussed in Chapters 3 through 7. The user, and especially the customizer, of this system should be aware of these and make changes where appropriate.

Independence  Hazards are assumed to be additive but independent. Each item that influences the risk picture is considered separately from all other items; it independently influences the risk. The overall risk assessment combines all of the independent factors to get a final number. The final number reflects the "area of opportunity" for a failure mechanism to be active because the number of independent factors is believed to be directly proportional to the risk. For example, if event B can only occur if event A has first occurred, then event B is given a lower weighting to reflect the fact that there is a lower probability of both events happening. However, the example risk model does not normally stipulate that event B cannot happen without event A.

Worst case  When multiple conditions exist within the same pipeline segment, it is recommended that the worst-case condition for a section govern the point assignment.


The rationale for this is discussed in Chapter 1. For instance, if a 5-mile section of pipeline has 3 ft of cover for all but 200 ft of its length (which has only 1 ft of cover), the section is still rated as if the entire 5-mile length has only 1 ft of cover. The evaluator can work around this through his choice of section breaks (see Sectioning of the Pipeline section earlier in this chapter). Using modern segmentation strategies, there is no reason to have differing risk conditions within the same pipeline segment.

Relative  Unless a correlation to absolute risk values has been established, point values are meaningful only in a relative sense. A point score for one pipeline section only shows how that section compares with other scored sections. Higher point values represent increased safety (decreased probability of failure) in all index values (Chapters 3 through 6). Absolute risk values can be correlated to the relative risk values in some cases as is discussed in Chapter 14.

Judgment based  The example point schedules reflect experts' opinions based on their interpretations of pipeline industry experience as well as personal pipelining experience. The relative importance of each item (reflected in the weighting of the item) is similarly a matter of the experts' judgment. If sound, statistical data are available, they are incorporated into these judgments. However, in many cases, useful frequency-of-occurrence data are not available. Consequently, there is an element of subjectivity in this approach.

Public  Threats to the general public are of most interest here. Risks specific to pipeline operators and pipeline company personnel can be included as an expansion to this system, but only with great care since a careless addition may interfere with the objectives of the evaluation. In most cases, it is believed that other possible consequences will be proportional to public safety risks, so the focus on public safety will usually fairly represent most risks.


Figure 2.4 Flowchart of relative risk index system.

Mitigations  It is assumed that mitigations never completely erase the threat. This is consistent with the idea that the condition of "no threat" will have less risk than the condition of "mitigated threat," regardless of the robustness of the mitigation measures. It also shows that even with much prevention in place, the hazard has not been removed.

Other examples

See Appendix E for examples of other risk scoring algorithms for pipelines in general. Additional examples are included in several other chapters, notably in Chapters 9 through 13, where discussions involve the assessments of special situations.


Third-party Damage Index

A. Minimum Depth of Cover       0-20 pts     20%
B. Activity Level               0-20 pts     20%
C. Aboveground Facilities       0-10 pts     10%
D. Line Locating                0-15 pts     15%
E. Public Education Programs    0-15 pts     15%
F. Right-of-way Condition       0-5 pts       5%
G. Patrol Frequency             0-15 pts     15%
Third-party Damage Index        0-100 pts   100%

This table lists some possible variables and weightings that could be used to assess the potential for third-party damages to a typical transmission pipeline (see Figures 3.1 and 3.2).
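As a minimal sketch of how the table's components might be combined, the following caps each component at the maximum points listed above and sums them into the 0-100 index. The example component scores are invented for illustration.

```python
# Sketch of the third-party damage index as the sum of its capped components.
# Maxima follow the sample table above; the example scores are illustrative.

MAX_PTS = {"cover": 20, "activity": 20, "aboveground": 10, "line_locating": 15,
           "public_education": 15, "row_condition": 5, "patrol": 15}

def third_party_index(scores):
    """Cap each component at its maximum and sum to a 0-100 index
    (higher = safer, per the convention used in this model)."""
    return sum(min(scores.get(name, 0), cap) for name, cap in MAX_PTS.items())

if __name__ == "__main__":
    example = {"cover": 12.7, "activity": 8, "aboveground": 6, "line_locating": 10,
               "public_education": 7, "row_condition": 4, "patrol": 9}
    print(round(third_party_index(example), 1))  # 56.7
```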

Background

Pipeline operators usually take steps to reduce the possibility of damage to their facilities by others. The extent to which mitigating steps are necessary depends on how readily the system can be damaged and how often the chance for damage occurs.

Third-party damage, as the term is used here, refers to any accidental damage done to the pipe as a result of activities of personnel not associated with the pipeline. This failure mode is also sometimes called outside force or external force, but those descriptions would presumably include damaging earth movements. We use third-party damage as the descriptor here to focus the analyses specifically on damage caused by people not associated with the pipeline. Potential earth movement damage is addressed in the design index discussion of Chapter 5. Intentional damages are covered in the sabotage module (Chapter 9). Accidental damages done by pipeline personnel and contractors are covered in the incorrect operations index chapter (Chapter 6).

U.S. Department of Transportation (DOT) pipeline accident statistics indicate that third-party intrusions are often the leading cause of pipeline failure. Some 20 to 40 percent of all pipeline failures in most time periods are attributed to third-party damages. In spite of these statistics, the potential for third-party damage is often one of the least considered aspects of pipeline hazard assessment.

The good safety record of pipelines has been attributed in part to their initial installation in sparsely populated areas and


Figure 3.1 Basic risk assessment model.

Figure 3.2 Assessing third-party damage potential: sample of data used to score the third-party damage index.
Minimum depth of cover: soil cover; type of soil (rock, clay, sand, etc.); pavement type (asphalt, concrete, none, etc.); warning tape or mesh; water depth.
Activity level: population density; stability of the area (construction, renovation, etc.); one calls; other buried utilities; anchoring, dredging.
Aboveground facilities: vulnerability (distance, barriers, etc.); threats (traffic volume, traffic type, aircraft, etc.).
One-call system: mandated; response by owner; well-known and used.
Public education: methods (door-to-door, mail, advertisements, etc.); frequency.
Right-of-way condition: signs (size, spacing, lettering, phone numbers, etc.); markers (air vs. ground, size, visibility, spacing, etc.); overgrowth; undergrowth.
Patrol: ground patrol frequency; ground patrol effectiveness; air patrol frequency; air patrol effectiveness.


their burial 2.5 to 3 feet deep. However, encroachments of population and land development activities are routinely threatening many pipelines today. In the period from 1983 through 1987, eight deaths, 25 injuries, and more than $14 million in property damage occurred in the hazardous liquid pipeline industry due solely to excavation damage by others. These types of pipeline failures represent 259 accidents out of a total of 969 accidents from all causes. This means that 26.7% of all hazardous liquid pipeline accidents were caused by excavation damage [87]. In the gas pipeline industry, a similar story emerges: 430 incidents from excavation damage were reported in the 1984-1987 period. These accidents resulted in 26 deaths, 148 injuries, and more than $18 million in property damage. Excavation damage is thought to be responsible for 10.5% of incidents reported for distribution systems, 22.7% of incidents reported for transmission/gathering pipelines, and 14.6% of all incidents in gas pipelines [87]. European gas pipeline experience, based on almost 1.2 million mile-years of operations in nine Western European countries, shows that third-party interference represents approximately 50% of all pipeline failures [44].

Exposure

To quantify the risk exposure from excavation damage, an estimate of the total number of excavations that present a chance for damage can be made. Reference [64] discusses the Gas Research Institute's (GRI's) 1995 study that makes an effort to determine risk exposure for the gas industry. The study surveyed 65 local distribution companies and 35 transmission companies regarding line hits. The accuracy of the analysis was limited by the response: less than half (41%) of the companies responded, and several major gas-producing states were poorly represented (only one respondent from Texas and one from Oklahoma). The GRI estimate was determined by extrapolation and may be subject to a large degree of error because the data sample was not representative. Based on survey responses, however, GRI calculated an approximate magnitude of exposure. For those companies that responded, a total of 25,123 hits to gas lines were recorded in 1993; from that, the GRI estimated total U.S. pipeline hits in 1993 to be 104,128. For a rate of exposure, this number can be compared to pipeline miles: For 1993, using a reported 1,778,600 miles of gas transmission, main, and service lines, the calculated exposure rate was 58 hits per 1000 line miles. Transmission lines had a substantially lower experience: a rate of 5.5 hits per 1000 miles, with distribution lines suffering 71 hits per 1000 miles [64]. All rates are based on limited data. Because the risk of excavation damage is associated with digging activity rather than system size, "hits per digs" is a useful measure of risk exposure. For the same year that GRI conducted its survey, one-call systems collectively received more than an estimated 20 million calls from excavators. (These calls generated 300 million work-site notifications for participating members to mark many different types of underground systems.) Using GRI's estimate of hits, the risk exposure rate for 1993 was 5 hits per 1000 notifications to dig [64].
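The quoted exposure rates can be reproduced with simple arithmetic, as sketched below. The hit and mileage figures are the 1993 estimates cited above; the notification count is an assumed round figure consistent with "more than an estimated 20 million calls."

```python
# Back-of-the-envelope reproduction of the exposure rates quoted above.
# hits_1993 and miles_1993 come from the text; the call count is an assumption.

hits_1993 = 104_128              # estimated total U.S. pipeline hits
miles_1993 = 1_778_600           # gas transmission, main, and service line miles
notifications_1993 = 20_800_000  # assumed; text says "more than 20 million calls"

hits_per_1000_miles = hits_1993 / miles_1993 * 1000
hits_per_1000_notifications = hits_1993 / notifications_1993 * 1000

print(round(hits_per_1000_miles, 1))          # ~58.5, close to the quoted 58
print(round(hits_per_1000_notifications, 1))  # ~5.0, matching the quoted rate
```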

Risk variables

Many mitigation measures are in place in most Western countries to reduce the threat of third-party damages to pipelines. Nonetheless, recent experience in most countries shows that this remains a major threat, despite often mandatory systems such as one-call systems. Reasons for continued third-party damage, especially in urban areas, include:

Smaller contractors ignorant of permit or notification process
No incentive for excavators to avoid damaging the lines when repair cost (to damaging party) is smaller than avoidance cost
Inaccurate maps/records
Imprecise locations by operator.

Many of these situations are evaluated as variables in the suggested risk assessment model. The pipeline designer and, perhaps to an even greater extent, the operator can affect the probability of damage from third-party activities. As an element of the total risk picture, the probability of accidental third-party damage to a facility depends on:

The ease with which the facility can be reached by a third party
The frequency and type of third-party activities nearby.

Possible offenders include:

Excavating equipment
Projectiles
Vehicular traffic
Trains
Farming equipment
Seismic charges
Fenceposts
Telephone posts
Wildlife (cattle, elephants, birds, etc.)
Anchors
Dredges.

Factors that affect the susceptibility of the facility include:

Depth of cover
Nature of cover (earth, rock, concrete, paving, etc.)
Man-made barriers (fences, barricades, levees, ditches, etc.)
Natural barriers (trees, rivers, ditches, rocks, etc.)
Presence of pipeline markers
Condition of right of way (ROW)
Frequency and thoroughness of patrolling
Response time to reported threats.

The activity level is often judged by items such as:

Population density
Construction activities nearby
Proximity and volume of rail or vehicular traffic
Number of other buried utilities in the area.


Serious damage to a pipeline is not limited to actual punctures of the line. A mere scratch on a coated steel pipeline damages the corrosion-resistant coating. Such damage can lead to accelerated corrosion and ultimately a corrosion failure perhaps years in the future. If the scratch is deep enough to have removed enough metal, a stress concentration area (see Chapter 5) could be formed, which again, perhaps years later, may lead to a failure from fatigue, either alone or in combination with some form of corrosion-accelerated cracking. This is one reason why public education plays such an important role in damage prevention. To the casual observer, a minor dent or scratch in a steel pipeline may appear insignificant, certainly not worthy of mention. A pipeline operator knows the potential impact of any disturbance to the line. Communicating this to the general public increases pipeline safety.

Several variables are thought to play a critical role in the threat of third-party damages. Measuring these variables can therefore provide an assessment of the overall threat. Note that in the approach described here, this index measures the potential for third-party damage, not the potential for pipeline failure from third-party damages. This is a subtle but important distinction. If the evaluator wishes to measure the latter in a single assessment, additional variables such as pipe strength, operating stress level, and characteristics of the potential third-party intrusions (such as equipment type and strength) would need to be added to the assessment. What are believed to be the key variables to consider in assessing the potential for third-party damage are discussed in the following sections. Weightings reflect the relative percentage contribution of the variable to the overall threat of third-party damage.

Assessing third-party damage potential

A. Minimum depth of cover (weighting: 20%)

The minimum depth of cover is the amount of earth, or equivalent cover, over the pipeline that serves to protect the pipe from third-party activities. A schedule or simple formula can be developed to assign point values based on depth of cover. In this formula, increasing points indicate a safer condition; this convention is used throughout this book. A sample formula for depth of cover is as follows:

(Amount of cover in inches) / 3 = point value, up to a maximum of 20 points

For instance:
42 in. of cover = 42 / 3 = 14 points
24 in. of cover = 24 / 3 = 8 points

Points should be assessed based on the shallowest location within the section being evaluated. The evaluator should feel confident that the depth of cover data are current and accurate; otherwise, the point assessments should reflect the uncertainty. Experience and logic indicate that less than one foot of cover may actually do more harm than good. It is enough cover to conceal the line but not enough to protect the line from even shallow earth moving equipment (such as agricultural equipment).

Three feet of cover is a common amount of cover required by many regulatory agencies for new construction.

Credit should also be given for comparable means of protecting the line from mechanical damage. A schedule can be developed for these other means, perhaps by equating the mechanical protection to an amount of additional earth cover that is thought to provide equivalent protection. For example:

2 in. of concrete coating = 8 in. of additional earth cover
4 in. of concrete coating = 12 in. of additional earth cover
Pipe casing = 24 in. of additional cover
Concrete slab (reinforced) = 24 in. of additional cover
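The cover formula and equivalent-cover credits lend themselves to a simple scoring function, sketched below. The dictionary keys and function names are arbitrary; the equivalences follow the sample values given in this section (including the warning tape and mesh credits described just below).

```python
# Sketch of depth-of-cover scoring with equivalent-cover credits.
# Equivalences follow the sample schedule in this section; names are illustrative.

EQUIVALENT_COVER_IN = {
    "2in_concrete_coating": 8,
    "4in_concrete_coating": 12,
    "casing": 24,
    "reinforced_concrete_slab": 24,
    "warning_tape": 6,
    "warning_mesh": 18,
}

def cover_points(cover_in, protections=(), max_pts=20):
    """(actual cover + equivalent cover) / 3, capped at the category maximum."""
    equivalent = cover_in + sum(EQUIVALENT_COVER_IN[p] for p in protections)
    return min(equivalent / 3, max_pts)

if __name__ == "__main__":
    print(round(cover_points(42), 1))               # 14.0 pts
    print(round(cover_points(14, ["casing"]), 1))   # (14 + 24) / 3 = 12.7 pts
```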

Using the example formula above, a pipe section that has 14 in. of cover and is encased in a casing pipe would have an equivalent earth cover of 14 + 24 = 38 in., yielding a point value of 38 / 3 = 12.7.

Burial of a warning tape, a highly visible strip of material with warnings clearly printed on it, may help to avert damage to a pipeline (Figure 3.3). Such flagging or tape is commercially available and is usually installed just beneath the ground surface directly over the pipeline. Hopefully, an excavator will discover the warning tape, cease the excavation, and avoid damage to the line. Although this early warning system provides no physical protection, its benefit from a failure-prevention standpoint can be included in this model. A derivative of this system is a warning mesh where, instead of a single strip of low-strength tape, a tough, high-visibility plastic mesh, perhaps 30 to 36 in. wide, is used. This provides some physical protection because most excavation equipment will have at least some minor difficulty penetrating it. It also provides additional protection via the increased width, reducing the likelihood of the excavation equipment striking the pipe before the warning mesh. Either system can be valued in terms of an equivalent amount of earth cover. For example:

Warning tape = 6 in. of additional cover
Warning mesh = 18 in. of additional cover

As with all items in this risk assessment system, the evaluator should use his company's best experience or other available information to create his point values and weightings. Common situations that may need to be addressed include rocks in one region, sand in another (is the protection value equivalent?) and pipelines under different roadway types (concrete versus asphalt versus compacted stone, etc.). The evaluator need only remember the goal of consistency and the intent of assessing the amount of real protection from mechanical damage. If the wall thickness is greater than what is required for anticipated pressures and external loadings, the extra thickness is available to provide additional protection against failure from external damage or corrosion. Mechanical protection that may be available from extra pipe wall material is accounted for in the design index (Chapter 5).

In the case of pipelines submerged at water crossings, the intent is the same: Evaluate the ease with which a third party can physically access and damage the pipe. Credit should be given for water depth, concrete coatings, depth below seafloor, extra damage protection coatings, etc. A point schedule for submerged lines in navigable waterways might look something like the following:


Figure 3.3 Minimum depth of cover (diagram showing ground surface, warning tape, and pipeline).

Depth below water surface:
0-5 ft                              0 pts
5 ft to maximum anchor depth        3 pts
> Maximum anchor depth              7 pts

Depth below bottom of waterway (add these points to the points from depth below water surface):
0-2 ft                              0 pts
2-3 ft                              3 pts
3-5 ft                              5 pts
5 ft to maximum dredge depth        7 pts
> Maximum dredge depth              10 pts

Concrete coating (add these points to the points assigned for water depth and burial depth):
None                                0 pts
Minimum 1 in.                       5 pts

The total for all three categories may not exceed 20 pts if a weighting of 20% is used.


The above schedule assumes that water depth offers some protection against third-party damage. This may not be a valid assumption in every case; such an assumption should be confirmed by the evaluator. Point schedules might also reflect the anticipated sources of damage. If only small boats can anchor in the area, perhaps this results in less vulnerability and the point scores can reflect this. Reported depths must reflect the current situation because sea or riverbed scour can rapidly change the depth of cover.

The use of water crossing surveys to determine the condition of the line, especially the extent of its exposure to external force damage, indirectly impacts the risk picture (Figure 3.4). Such a survey may be the only way to establish the pipeline depth and extent of its exposure to boat traffic, currents, floating debris, etc. Because conditions can change dramatically when flowing water is involved, the time since the last survey is also a factor to be considered. Such surveys are considered in the incorrect operations index chapter (Chapter 6). Points can be adjusted to reflect the evaluator's confidence that cover information is current, with the recommendation to penalize (show increased risk) wherever uncertainty is higher. (See also Chapter 12 on offshore pipeline systems.)

Figure 3.4 River crossing survey (showing river bank and previous survey profile).


Example 3.1: Scoring the depth of cover

In this example, a pipeline section has burial depths of 10 and 30 in. In the shallowest portions, a concrete slab has been placed over and along the length of the line. The 4-in. slab is 3 ft wide and reinforced with steel mesh. Using the above schedule, the evaluator calculates points for the shallow sections with additional protection and for the sections buried with 30 in. of cover. For the shallow case: 10 in. of cover + 24 in. of additional (equivalent) cover due to the slab = (10 + 24)/3 pts = 11.3 pts. Second case: 30 in. of cover = 30/3 = 10 pts. Because the sections with 30 in. of cover yield the lower point value, the evaluator uses the 10-pt score as the worst case and, hence, the governing point value for this section. A better solution to this example would be to separate the 10-in. and 30-in. portions into separate pipeline sections for independent assessment.

In this section, a submerged line lies unburied on a river bottom, 30 ft below the surface at the river midpoint, rising to the water surface at shore. At the shoreline, the line is buried with 36 in. of cover. The line has 4 in. of concrete coating around it throughout the entire section. Points are assessed as follows: The shore approaches are very shallow; although boat anchoring is rare, it is possible. No protection is offered by water depth, so 0 pts are given here. The 4 in. of concrete coating yields 5 pts. Because the pipe is not buried beneath the river bottom, 0 pts are awarded for cover.

Total score = 0 + 5 + 0 = 5 pts
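For the submerged-line case, the sample schedule can be coded in the same spirit. The sketch below reproduces the 5-point result of Example 3.1; the anchor-depth and dredge-depth comparisons are simplified, and the point breaks follow the sample schedule as reconstructed above, so treat it as illustrative only.

```python
# Sketch of the sample submerged-line point schedule (illustrative only).

def water_depth_pts(depth_ft, max_anchor_depth_ft):
    if depth_ft <= 5:
        return 0
    return 3 if depth_ft <= max_anchor_depth_ft else 7

def burial_pts(burial_ft, max_dredge_depth_ft):
    if burial_ft < 2:
        return 0
    if burial_ft < 3:
        return 3
    if burial_ft < 5:
        return 5
    return 7 if burial_ft <= max_dredge_depth_ft else 10

def coating_pts(concrete_coating_in):
    return 5 if concrete_coating_in >= 1 else 0

if __name__ == "__main__":
    # Shore approach: shallow water (anchoring possible), unburied, 4-in. concrete coating
    total = min(water_depth_pts(3, 20) + burial_pts(0, 10) + coating_pts(4), 20)
    print(total)  # 5 pts, matching Example 3.1
```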

B. Activity level (weighting: 20%)

Fundamental to any risk assessment is the area of opportunity. For an analysis of third-party damage potential, the area of opportunity is strongly affected by the level of activity near the pipeline. It is intuitively apparent that more digging activity near the line increases the opportunity for a line strike. Excavation occurs frequently in the United States. The excavation notification system in the state of Illinois recorded more than 100,000 calls during the month of April 1997. New Jersey's one-call system records 2.2 million excavation markings per year, an average of more than 6000 per day [64]. As noted previously, it is estimated that gas pipelines are accidentally struck at the rate of 5 hits per every 1000 one-call notifications.

DOT accident statistics for gas pipelines indicate that, in the 1984-1987 period, 35% of excavation damage accidents occurred in Class 1 and 2 locations, as defined by DOT gas pipeline regulations [87]. These are the less populated areas. This tends to support the hypothesis that a higher population density means more accident potential. Other considerations include nearby rail systems and high volumes of nearby traffic, especially where heavy vehicles such as trucks or trains are prevalent or speeds are high. Aboveground facilities and even buried pipe are at risk because an automobile or train wreck has tremendous destructive energy potential. In some areas, wildlife damage is common. Heavy animals such as elephants, bison, and cattle can damage instrumentation and pipe coatings, if not the pipe itself.

Birds and other smaller animals and even insects can also cause damage by their normal activities. Again, coatings and instrumentation of aboveground facilities are usually most threatened. Where such activity presents a threat of external force damage to the pipeline, it can be assessed as a contributor to activity level here.

The activity level item is normally a risk variable that may change over time, but is relatively unchangeable by the pipeline operator. Relocation is usually the only means for the pipeline operator to change this variable, and relocation is not normally a routine risk mitigation option.

The evaluator can create several classifications of activity levels for risk scoring purposes. She does this by describing sufficient conditions such that an area falls into one of her classifications. The following example provides a sample of some of the conditions that may be appropriate. Further explanation follows the example classifications.

High activity level (0 points)  This area is characterized by one or more of the following:

Class 3 population density (as defined by DOT CFR 49 Part 192)
High population density as measured by some other scale
Frequent construction activities
High volume of one-call or reconnaissance reports (>2 per week)
Rail or roadway traffic that poses a threat
Many other buried utilities nearby
Frequent damage from wildlife
Normal anchoring area when offshore
Frequent dredging near the offshore line.

Medium activity level (8 points)  This area is characterized by one or more of the following:

Class 2 population density (as defined by DOT)
Medium population density nearby, as measured by some other scale
No routine construction activities that could pose a threat
Few one-call or reconnaissance reports (
60'F tad-wall thickness > 0.5 in.

thermal relief devices
thermal relief valves-inspection/maintenance
torque specs/torque inspections
traffic exposures-air/marine
traffic exposures-ground outside station
traffic exposures-overall susceptibility
traffic exposures-preventions
traffic exposures-ground within station
traffic patterns/routing/flow
training-completeness of subject matter
training-job needs analysis
training-testing, certification, and retesting
use of colors/signs/locks/"idiot-proofing"
use of temporary workers
UST-material of construction
UST pressure
UST volume
UST-number of independent walls
vacuum truck(s)
vessel level safety systems
vibration
vibration: antivibration actions
wall thickness
walls < 6 ft high
walls > 6 ft high
water bodies nearby
water body type (river, stream, creek, lake, etc.)
water intakes nearby
weather events-floods
weather events-freeze
weather events-hail/ice/snow loading
weather events-lightning
weather events-potential
weather events-windstorm
wetlands nearby
workplace ergonomics
workplace human stress environment


Absolute Risk Estimates


1. Introduction

As noted in Chapter 1, risks can be expressed in absolute terms, for example, "number of fatalities per mile-year for permanent residents within one-half mile of the pipeline." Also common is the use of relative risk measures, whereby hazards are prioritized such that the examiner can distinguish which aspects of the facilities pose more risk than others. The former is a frequency-based measure that estimates the probability of a specific type of failure consequence. The latter is a comparative measure of current risks, in terms of both failure likelihood and consequence. A criticism of the relative scale is its inability to compare risks from dissimilar systems (pipelines versus highway transportation, for example) and its inability to provide direct failure predictions. Criticisms of the absolute scale include its heavy reliance on historical data, particularly for rare events that are extremely difficult to quantify, and the unwieldy numbers that often generate a negative reaction from the public.


The absolute scale also often implies a precision that is usually not available to any risk assessment method. So, the "absolute scale" offers the benefit of comparability with other types of risks, whereas the "relative scale" offers the advantage of ease of use and customization to the specific risk being studied. Note that the two scales are not mutually exclusive. A relative risk ranking is converted into an absolute scale by equating previous accident histories with their respective relative risk values. This conversion is discussed in Section IV on page 298. Absolute risk estimates are converted into relative numbers by simple mathematical relationships. Each scale has advantages, and a risk analysis that marries the two approaches may be the best approach. A relative assessment of the probability of failure can efficiently capture the many details that impact this probability. That estimate can then be used in post-failure event sequences that determine absolute risk values. (Also see Chapter 1 for discussion of issues such as objectivity and qualitative versus quantitative risk models.)
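One hedged way to picture the relative-to-absolute conversion described above: if segments with known relative scores also have observed incident histories, a simple regression can tie score to absolute failure frequency. The paired data points and the log-linear model form below are illustrative assumptions, not values or methods taken from the text.

# Sketch: correlate relative risk scores with historical failure frequencies so a
# new segment's relative score can be mapped to an absolute estimate.
# The paired data and the log-linear fit are illustrative assumptions only.
import math

# (relative probability-of-failure score, observed failures per mile-year)
history = [(120, 3.0e-3), (200, 8.0e-4), (280, 2.0e-4), (350, 6.0e-5)]

# Fit ln(rate) = a + b * score by ordinary least squares.
n = len(history)
xs = [score for score, _ in history]
ys = [math.log(rate) for _, rate in history]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def absolute_rate(relative_score: float) -> float:
    """Estimated failures per mile-year for a given relative score."""
    return math.exp(a + b * relative_score)

print(f"{absolute_rate(250):.2e} failures per mile-year")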


Although risk management can be efficiently practiced exclusively on the basis of relative risks, occasionally it becomes desirable to deal in absolute risks. This chapter provides some guidance and examples for risk assessments requiring absolute results (risk estimates expressed in fatalities, injuries, property damages, or some other measure of damage, in a certain time period) rather than relative results. This requires concepts commonly seen in probabilistic risk assessments (PRAs), also called numerical risk assessments (NRAs) or quantitative risk assessments (QRAs). These techniques have their strengths and weaknesses, as discussed on pages 23-25, and they are heavily dependent on historical failure frequencies.

Several sources of failure data are cited and their data presented in this chapter. In most instances, details of the assumptions employed and the calculation procedures used to generate these data are not provided. Therefore, it is imperative that data tables not be used for specific applications unless the user has determined that such data appropriately reflect that application. The user must decide what information may be appropriate to use in any particular risk assessment. Case studies are also presented to further illustrate possible approaches to the generation of absolute risk values. This chapter therefore becomes a compilation of ideas and data that might be helpful in producing risk estimates in absolute terms.

The careful reader may conclude several things about the generation of absolute risk values for pipelines:

- Results are very sensitive to data interpretation.
- Results are very sensitive to assumptions.
- Much variation is seen in the level of detail of analyses.
- A consistency of approach is important for a given level of detail of analysis.

II. Absolute risks

As noted in Chapter 1, any good risk evaluation will require the generation of scenarios to represent all possible event sequences that lead to all possible damage states (consequences). To estimate the probability of any particular damage state, each event in the sequence is assigned a probability. The probabilities can be assigned either in absolute terms or, in the case of a relative risk assessment, in relative terms, showing which events happen relatively more often than others. In either case, the probability assigned should be based on all available information. In a relative assessment, these event trees are examined and critical variables with their relative weighting (based on probabilities) are extracted as part of the model design. In a risk assessment expressing results in absolute numbers, the probabilities are assigned as part of the evaluation process.

Absolute risk estimates require the predetermination of a damage state or consequence level of interest. Most common is the use of human fatalities as the consequence measure. Most risk criteria are also based on fatalities (see page 305) and are often shown on FN curves (see Figure 14.1 and Figure 15.1), where the relationship between event frequency and severity (measured by number of fatalities) is shown. Other options for consequence measures include:

- Human injuries
- Environmental damages
- Property damages
- Thermal radiation levels
- Overpressure levels from explosions
- Total consequences expressed in dollars

If the damage state of interest is more than a "stress" level such as a thermal radiation level or blast overpressure level, then a hazard area or hazard zone will also need to be defined. The hazard area is an estimate of the physical distances from the pipeline release that are potentially exposed to the threat. Hazard areas are often based on the "stress" levels just noted and will vary in size depending on the scenario (product type, hole size, pressure, etc.) and the assumptions (wind, temperature, topography, soil infiltration, etc.). Hazard areas are discussed later in this chapter and also in Chapter 7.

Receptors within the defined hazard area must be characterized. All exposure pathways to potential receptors, as discussed in Chapter 7, should be considered. Population densities, both permanent and transient (vehicle traffic, time-of-day, day-of-week, and seasonal considerations, etc.); environmental sensitivities; property types; land use; and groundwater are some of the receptors typically characterized. The receptor's vulnerability will often be a function of exposure time, which is a function of the receptor's mobility, that is, its ability to escape the area.

The event sequences are generated for all permutations of many parameters. For a hazardous substance pipeline, important parameters will generally involve:

- Chance of failure
- Chance of failure hole size
- Spill size (considering leak detection and reaction scenarios)
- Chance of immediate ignition
- Spill dispersion
- Chance of delayed ignition
- Hazard area size (for each scenario)
- Chance of receptor(s) being in hazard area
- Chance of various damage states to various receptors

A frequency of occurrence must be assigned to the selected damage state: how often might this potential consequence occur? This frequency involves first an estimate of the probability of failure of the pipeline, which is most often derived in part from historical data as discussed below. Then, given that failure has occurred, the probability of subsequent, consequence-influencing events is assessed. This often provides a logical breakpoint where the risk analysis can be enhanced by combining a detail-oriented assessment of the relative probability of failure with an absolute-type consequence assessment that is sensitive to the potential chains of events.
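The chain of conditional events listed above multiplies out to a frequency for the damage state of interest. The sketch below shows that bookkeeping for a small set of scenarios; all probabilities and the failure rate are illustrative assumptions, not values from the text.

# Sketch: combine a failure frequency with conditional event probabilities to
# estimate how often a chosen damage state occurs. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Scenario:
    p_hole_size: float         # P(this hole size | failure)
    p_ignition: float          # P(ignition | this release)
    p_receptor_present: float  # P(receptor inside the hazard area)
    p_damage_state: float      # P(damage state of interest | exposure)

    def conditional_probability(self) -> float:
        return (self.p_hole_size * self.p_ignition *
                self.p_receptor_present * self.p_damage_state)

failure_rate = 5.5e-4  # failures per km-year (an illustrative generic rate)
scenarios = [
    Scenario(0.70, 0.04, 0.20, 0.05),   # small leak
    Scenario(0.25, 0.07, 0.20, 0.30),   # hole
    Scenario(0.05, 0.15, 0.20, 0.80),   # rupture
]

damage_state_frequency = failure_rate * sum(s.conditional_probability() for s in scenarios)
print(f"{damage_state_frequency:.2e} events per km-year")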

III. Failure rates

Pipeline failure rates are required starting points for determining absolute risk values. Past failures on the pipeline of interest are naturally pertinent. Beyond that, representative data from other pipelines are sought. Failure rates are commonly derived from historical failure rates of similar pipelines in similar environments. That derivation is by no means a straightforward exercise. In most cases, the evaluator must first find a general pipeline failure database and then make assumptions regarding the best "slice" of data to use. This involves attempts to extract, from an existing database of pipeline failures, a subset that approximates the characteristics of the pipeline being evaluated. Ideally, the evaluator desires a subset of pipelines with similar products, pressures, diameters, wall thicknesses, environments, age, operations and maintenance protocols, etc. It is very rare to find enough historical data on pipelines with enough similarities to provide data that can lead to confident estimates of future performance for a particular pipeline type. Even if such data are found, estimating the performance of the individual from the performance of the group presents another difficulty. In many cases, the results of the historical data analysis will only provide starting points or comparison points for the "best" estimates of future failure frequency. The evaluator will usually make adjustments to the historical failure frequencies in order to more appropriately capture a specific situation. The assumptions and adjustments required often put this risk assessment methodology on par with a relative risk assessment in terms of accuracy and predictive capabilities. This underlies the belief that, given some work in correlating the two scales, absolute and relative risks can be related and used interchangeably. This is discussed below.


Figure 14.1 FN curve for risk characterization (vertical axis: frequency, 1.00E-07 to 1.00E-02; horizontal axis: number of fatalities, N).
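For orientation (not from the text), an FN curve simply plots, for each fatality count N, the cumulative frequency of events causing N or more fatalities. A minimal construction from scenario (frequency, fatalities) pairs is sketched below; the scenario values are invented for illustration.

# Sketch: build FN-curve points (N, frequency of N or more fatalities) from a
# list of scenarios. Scenario frequencies and fatality counts are illustrative.
scenarios = [
    (1.0e-4, 1),    # (events per year, fatalities per event)
    (2.0e-5, 3),
    (5.0e-6, 10),
    (8.0e-7, 30),
]

fatality_counts = sorted({fatalities for _, fatalities in scenarios})
fn_points = [(n, sum(freq for freq, fatalities in scenarios if fatalities >= n))
             for n in fatality_counts]

for n, freq in fn_points:
    print(f"N >= {n:>2}: {freq:.2e} per year")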


General failure data

As a common damage state of interest, fatality rates are a subset of pipeline failure rates. Very few failures result in a fatality. A rudimentary frequency-based assessment will simply identify the number of fatalities or injuries per incident and use this ratio to predict future human effects. For example, even in a database with much missing detail (as is typically the case in pipeline failure databases), one can extract an overall failure rate and the number of fatalities per length-time (i.e., mile-year or km-year). From this, a "fatalities per failure" ratio can be calculated. These values can then be scaled to the length and design life of the subject pipeline to obtain some very high-level risk estimates for that pipeline. A sample of high-level data that might be useful in frequency estimates for failure and fatality rates is given in Tables 14.1 through 14.4.

A recent study [67] of pipeline risk assessment methodologies in Australia recommends that the generic failure rates shown in Table 14.5 be used. These are based on U.S., European, and Australian gas pipeline failure rates and are presumably recommended for gas transmission pipelines (although the report addresses both gas and liquid pipelines). Using the rates from Table 14.5 and additional assumptions, this study produces the more detailed Table 14.6, a table of failure rates related to hole size and wall thickness. (Note: Table 14.6 is also a basis for the results shown later in this chapter for Case Study B.)

As discussed in earlier chapters, there is a difference between "frequency" and "probability," even though in some uses they are somewhat interchangeable. At very low frequencies of occurrence, the probability of failure will be numerically equal to the frequency of failure. However, the actual relationship between failure frequency and failure probability is often

Table 14.1 Compilation of pipeline failure data for frequency estimates

Location

Trpe

Canada USA USA USA USA USA USA USA USA Western Europe

Oiligas Oiligas Oil Gas Gas transmission Refinedproducts Hazardousliquids Crude oil Hazardousliquid Gas

Period 1989-92 1987-91 1982-91 1987-9 1 1986-2002 1975-1999 1975-1999 1975-1999 1986-2002

Fatality rate (no.perfailure)

Length

Failure rate

294,030 km 1,725,156 km 344,649 km 1,382,105 !an 300,000 miles

0.16h-year 0.25 0.55 0.17 0.267 failures/lOOO mile-year 0.6811000 mile-year 0.8911000 mile-year 0.11ilOOOmile-year

1.2 million mile-year

Re$

0.025 0.043 0.01 0.07

95 95 95 95

0.0086 0.0049 0.0024

86 86 86

0.29 /lo00 mile-year

44

Table 14.2 U.S. national hazardous liquids spill data (1975-1999)

Event category

Crude oil reportable rate

Refinedproducts reportable rate

Crude oil + refinedproducts reportable rate

Spill frequency Deaths Injuries

1 . 1 x 10-3 2.4 x 10-3 2.0 x 10-2

6.8 x IO4 8.6 x lo-' 6.1 x

8 . 9 IO" ~ 4.9 x 10-3 3.6~

Units Spillsiyearimile Deathsiincidents Injuriesiincidents

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for U.S. EPA and DOT, September 2000.

modeled by assuming a Poisson distribution of actual frequencies. The Poisson equation relating spill probability and frequency for a pipeline segment is

P(X)_SPILL = [(f * t)^X / X!] * exp(-f * t)

where

P(X)_SPILL = probability of exactly X spills
f = the average spill frequency for the segment of interest (spills/year)
t = the time period for which the probability is sought (years)
X = the number of spills for which the probability is sought, in the pipeline segment of interest.

The probability of one or more spills is evaluated as

P(one or more)_SPILL = 1 - P(X)_SPILL, with X = 0.
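A small numerical sketch of these two relations follows; the spill frequency (taken from the Table 14.3 national average applied to an assumed 10-mile segment) and the exposure period are illustrative choices.

# Sketch: Poisson spill probabilities for a pipeline segment.
# The segment length and time period are illustrative assumptions.
import math

def p_exactly_x_spills(f: float, t: float, x: int) -> float:
    """P(exactly x spills) for average frequency f (spills/yr) over t years."""
    return (f * t) ** x / math.factorial(x) * math.exp(-f * t)

def p_one_or_more_spills(f: float, t: float) -> float:
    """P(one or more spills) = 1 - P(zero spills)."""
    return 1.0 - p_exactly_x_spills(f, t, 0)

f = 8.6e-4 * 10      # 0.86 spills/yr/1000 mi (Table 14.3) applied to a 10-mile segment
t = 20.0             # 20-year exposure period
print(f"P(0 spills)  = {p_exactly_x_spills(f, t, 0):.4f}")
print(f"P(>=1 spill) = {p_one_or_more_spills(f, t):.4f}")
print(f"f * t        = {f * t:.4f}  (close to P(>=1) only when f*t is small)")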

Table 14.4 Comparison of common failure causes for U.S. hazardous liquid pipelines

Outside forces Corrosion Equipmentfailure(meta1fatigue, seal, gasket, age) Weld failure (all welds except longitudinal seam welds) Incorrect operation Unknown Repairiinstall Other Seam split Total

Table 14.3 Average U.S. national hazardous liquid spill volumes and frequencies (1990-1997)

U.S. national average

Pipe spill frequency                 0.86 spills/year/1000 miles
Pipe spill volume                    0.70 bbl/year/mile
Pipe and station spill frequency     1.3 spills/year/mile
Pipe and station spill volume        0.94 bbl/year/mile

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline." report prepared for U.S. EPA and DOT, September 2000.

Percent of total

Cause

25 25 6

5 7 14 7

I 5 100

Table 14.5 Generic failure rates recommended in Australia

Cause of failure      Failure rate (per km-year)
External force        3.00E-4
Corrosion             1.00E-4
Material defect       1.00E-4
Other                 5.00E-5
Total                 5.50E-4

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.

Table 14.6 Failure rates related to hole size and wall thickness

Hole size (mm)

Wall thickness (mm)

Impuct facto?

CorrosionfactoP

I.3

2 0.95 0 2 0.95 0 2 0.95 0 2 0.95 0 2 0.95 0

10

0.04 I.3 0.36 0.04 I .3 0.36 0.04 I .3 0.36 0.04 1.3 0.36 0.04

I0

I0 16 6-10 >I0 I0

IO0

I50


Externalforce Ifraction)

Corrosion Ifraction)

Material defect Other (fraction) (fraction) Failures

2.08E4

0.125

0.5

0.34

1.20E4

0.5

6.05E-5 2.08E4 1.20E4 6.05E-5

0.125

0.5

0.34

0.5

0.285

0

0

0

3.08E-5 3.42E4

0.285

0

0

0

0.18

0

0

0

3.088-5 3.42E-6 7.02E-5 1.94E-5 2.16E-h

I.IIE4

l.llE-4

3 00E-4

Generic failure ratesb(overall = 5.50E")

1 .0E4

1.0E4

5.0E-5

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)." Standards Australia ME-038.01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd.. April 2002. a See wall thickness adjustments, Table 14.8. These are the study-recommended generic failure rates to use for QRA in Australia (see Table 14.5).

Additional failure data

A limited amount of data is also available to help make distinctions for pipeline characteristics such as wall thickness, diameter, depth of cover, and potential failure hole size. Several studies estimate the benefits of particular mitigation measures or design characteristics. These estimates are based on statistical analyses in some cases; often they are merely the historical failure rate of pipelines with a particular characteristic, such as a particular wall thickness, diameter, or depth of cover. This type of analysis must isolate the factor from other confounding factors and should also produce a rationale for the observation. For example, if data suggest that a larger diameter pipe ruptures less often on a per-length, per-year basis, is there a plausible explanation? In that particular case, higher strength due to geometrical factors, better quality control, and a higher level of attention by operators are plausible explanations, so the premise could be tentatively accepted. In other cases, the benefit from a mitigation is derived from engineering models or simply from logical analysis with assumptions. Some observations from various studies are discussed next. The European Gas Pipeline Incident Group database (representing nine Western European countries and 1.2 million mile-years of operations as of this writing) gives the relative frequencies of failure shown in Table 14.7.


Table 14.8 Suggested wall thickness adjustments

Wall thickness (mm)

External force coefficient

10

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd , ADril2002.

Table 14.7 European Gas Pipeline Incident Group database relative frequency of failure data

Cause                       Failure rate (mile-year)^-1    Percent of total failure rate
Third-party interference    1.50E-04                       50
Construction defect         5.30E-05                       18
Corrosion                   4.40E-05                       15
Land movement               1.80E-05                       6
Other/unknown               3.20E-05                       11
Total                       2.90E-04                       100

400 mm)

0.027 0.019 0.099 0.235

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002. Derived from the European Gas pipeline Incident data Group (EGIG) data for onshore pipelines from 1970 to 1992. Note that these findings are based on hole size and not on release rate, which will vary with pipeline pressure.

One study uses 12% as the ignition probability of NGL (natural gas liquids, referring to highly volatile liquids such as propane) based on U.S. data [43]. Another study concludes that the overall ignition probability for natural gas pipeline accidents is about 3.2% [95]. A more extensive model of natural gas risk assessment, called GRI (Gas Research Institute) PIMOS [33], estimates ignition probabilities for natural gas leaks and ruptures under various conditions. This model is discussed in the following paragraphs.

In the GRI model, the nominal natural gas leak ignition probabilities range from 3.1 to 7.2%, depending on accumulation potential and proximity to structures (confinement). The higher value occurs for accumulations in or near buildings: there is a 30% chance of accumulation following a leak, a 30% chance of that accumulation being in or near a building given that accumulation has occurred, and an 80% chance of ignition when the accumulation is near or in a building. Hence, that scenario leads to a 7.2% chance of ignition (30% x 30% x 80% = 7.2%).

Table 14.16 Estimates of ignition probabilities for various products

Gasoline Gasoline and crude oil

Above and below ground

Below ground only

Crude oil Diesel oil Fuel oil Gasoline Kerosene Jet fuel Oil and gasoline

3.1 1.8 2 6 0 4.5 3.4

2

All

3.6


Table 14.15 Estimates of ignition probabilities of natural gasfora range of hole sizes (European onshore pipelines)

Product

Ignition probability Pi)

Ignition probahilir);

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd.. April 2002..

Failure mode

Table 14.17 Estimates of ignition probabilities for various products above and below grade

lgnition probahiliiy 5(%)

4-6 3

Source-Table created from statements in Ref. [86], which cites various sources for these probabilities.


1.5 0 3.1 0 38

0 2.1

The other extreme scenario is (30% chance of accumulation) x (70% chance of not being near a building) x (15% chance of ignition when not near a building) = 3.1%. For ruptures, the ignition probabilities nominally range from about 4 to 15%, with the higher probability occurring when ignition occurs immediately at the rupture location. Given a rupture, the probability of subsequent ignition at the rupture location is given a value of 15%. If ignition does not occur at the rupture (85% chance of no ignition at rupture), then the probability of subsequent ignition is 5%. So, the latter leads to a probability estimate of 85% x 5% = 4.3%. In both the leak and rupture scenarios, these estimates are referred to as base case probabilities. They can be subsequently adjusted by the factors shown in Tables 14.19 and 14.20. These probabilities are reportedly derived from U.S. gas transmission pipeline incident rates (U.S. Department of Transportation,
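A minimal sketch of the leak and rupture ignition event trees just described, using the base-case branch probabilities quoted in the text (the adjustment factors of Tables 14.19 and 14.20 are not applied here):

# Sketch: base-case ignition event trees for the GRI PIMOS model as described in
# the text. Adjustment factors from Tables 14.19 and 14.20 are not applied.

# Leak tree: accumulation -> proximity to a building -> ignition
p_accumulation = 0.30
p_near_building_given_accum = 0.30
p_ignite_near_building = 0.80
p_ignite_away_from_building = 0.15

leak_near_building = p_accumulation * p_near_building_given_accum * p_ignite_near_building
leak_away = p_accumulation * (1 - p_near_building_given_accum) * p_ignite_away_from_building
print(f"leak, accumulation near a building:     {leak_near_building:.3f}")  # text quotes 7.2%
print(f"leak, accumulation away from buildings: {leak_away:.3f}")           # text quotes ~3.1%

# Rupture tree: immediate ignition at the rupture, else a chance of later ignition
p_ignite_at_rupture = 0.15
p_ignite_later_given_no_immediate = 0.05

rupture_immediate = p_ignite_at_rupture
rupture_delayed = (1 - p_ignite_at_rupture) * p_ignite_later_given_no_immediate
print(f"rupture, immediate ignition: {rupture_immediate:.3f}")              # 15%
print(f"rupture, delayed ignition:   {rupture_delayed:.3f}")                # text quotes ~4.3%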

Table 14.18 Estimates of ignition probabilities for below-grade gasoline pipelines

                          Ignition probability (%)
Location                  Rupture    Hole     Leak
Overall      Rural        3.1        3.1      0.62
             Urban        6.2        6.2      1.24
Immediate    Rural        1.55       1.55     0.31
             Urban        3.1        3.1      0.62
Delayed      Rural        1.55       1.55     0.31
             Urban        3.1        3.1      0.62

Source: Morgan, B., et al., "An Approach to the Risk Assessment of Gasoline Pipelines," presented at Pipeline Reliability Conference, Houston, TX, November 1996. Notes: U.S. experience is approximately 1.5 times higher than CONCAWE (data shown above are from CONCAWE). Assumes the urban rate is 2x base rates and that base rates reflect mostly rural experience. Leak ignition probability is 20% of that for ruptures or holes. Immediate and delayed ignitions occur with equal likelihood. Rupture is defined as 0.5 diameter or larger. Hole is >10 mm, but less than the rupture. Leak is 30 in., and pressures > 1000 psig


660 800 1000

660 1000

In cases of HVL pipeline modeling, default distances of 1000 to 1500 ft are commonly seen, depending on pipeline diameter, pressure, and product characteristics. HVL release cases are very sensitive to weather conditions and carry the potential for unconfined vapor cloud explosions, each of which can greatly extend impact zones to more than a mile. (See also the discussion of land-use issues in a following section for thoughts on setback distances that are logically related to hazard zones.)

A draft Michigan regulatory document suggests setback distances for buried high-pressure gas pipelines based on the HUD guideline thermal radiation criteria. The proposed setback distances are tabulated for pipeline diameters (from 4 to 26 in.) and pressures (from 400 to 1800 psig in 100-psig increments). The end points of the various tables are shown in Table 14.35. It is not known if these distances will be codified into regulations. In some cases, the larger distances might cause repercussions regarding alternative land uses for existing pipelines. Land use regulations can have significant social, political, and economic ramifications, as discussed in Chapter 15.

The U.S. Coast Guard (USCG) provides guidance on the safe distance for people and wooden buildings from the edge of a burning spill in its Hazard Assessment Handbook, Commandant Instruction Manual M16465.13. Safe distances range widely depending on the size of the burning area, which is assumed to be on open water. For people, the distances vary from 150 to 10,100 ft, whereas for buildings the distances vary from 32 to 1900 ft for the same size spill. The spill radii for these distances range between 10 and 2000 ft [35]. A summary of setback distances was published in a consultant report and is shown in Table 14.36.

Table 14.35 Sample proposed setback distances

Minimum setback (ft)

Facility

Multifamily developments (10,000 Btu/hr-ft2 criteria); Elderly and handicapped units; Unprotected areas of congregation (450 Btu/hr-ft2 criteria); Primary egress

4-in. pipeline at 400 psig

26-in. pipeline at 1800 psig

Table 14.36 Summary of setback requirements in codes, standards, and other guides

- IFC 2000 (adopted in Alaska and proposed in Municipality of Anchorage): setback for tank from public 5-175 ft; variables: tank size and type of adjacent use
- UFC 2000 (pre-2001 in Alaska): setback 5-175 ft; variables: tank size and type of adjacent use
- UFC 1997: setback 50-75 ft; variables: type of adjacent use
- APA: performance standard; variables: site specific and process driven
- HUD: buildings 130-155 ft, people 650-775 ft; variables: product and tank size
- USCG (open-water fire): 150 to >10,000 ft; variables: diameter of spill

Source: Golder and Associates, "Report on Hazard Study for the Bulk POL Facilities in the POA Area," prepared for Municipality of Anchorage POL Task Force, August 9, 2002.
Notes: APA, American Planning Association; USCG, U.S. Coast Guard; HUD, Department of Housing and Urban Development. The National Fire Protection Association (NFPA) publishes NFPA Code 30, Flammable and Combustible Liquids Code, 2000 Edition. The International Code Council publishes the International Fire Code 2000 (IFC). The Western Fire Chiefs Association publishes the Uniform Fire Code 2000 Edition (UFC).

Any time default hazard zone distances replace situation-specific calculations, the defaults should be validated by actual calculations to ensure that they encompass most, if not all, possible release scenarios for the pipeline systems being evaluated.

XI. Case studies

The following case studies illustrate some techniques that are more numerically rigorous in producing absolute risk estimates. These are all extracted from public domain documents readily obtained from Internet sources and/or proceedings from regulatory approval processes. Company names and locations have been changed since the focus here is solely on illustrating the technique. Other minor modifications to the extracted materials include changing table, figure, and reference numbering to correspond to the sequencing in this book.

Case Study A: natural gas

Quantitative risk calculations for XYZ pipeline

67

318 772

147

3164

40

1489

40

The following case study illustrates the estimation of risk using calculated hazard zones and frequency-based failure estimates for a natural gas pipeline. Portions of this discussion were extracted from or are based on Ref. [18], in which a proposed high-pressure gas pipeline, having both onshore and offshore components, was being evaluated. For this example, the proposed


pipeline name is XYZ and the owner/operator company will be called ACME. In this case, a relative risk assessment has been performed, but it is to be supplemented by an evaluation of risks presented in absolute terms. The introduction very briefly describes the purpose and scope of the analysis.

This document presents preliminary estimates of risks to the public that might be created by the proposed operation of the XYZ pipeline. The additional risk calculations build on the worst case estimates already provided in the regulatory application and will be used for emergency response planning. This analysis is preliminary and requires verification and review before use in connection with emergency planning.

A frequency of failures, fatalities, and injuries is estimated based on available data sets. As used here, "failure" refers to an incident that triggers the necessity of filing a report to the governing regulatory agency, so failure counts are counts of "reportable incidents." The failure frequency estimates are also later used with hazard area calculations.

Normalized frequency-based probabilistic risk estimates

Risk is examined in two parts: probability of a pipeline failure and consequences of a failure. In order to produce failure probabilities for a specific pipeline that is not yet operational, a failure frequency estimate based on other pipeline experience is required. Four sets of calculations, each based on a different underlying failure frequency, have been performed to produce four risk estimates for the proposed XYZ pipeline. The estimates rely on frequencies of reportable incidents, fatalities, and injuries as recorded in the referenced databases. The incident rate is used to calculate the probability of failure, and the fatality/injury rates are used to estimate consequences. The frequency estimates that underlie each of the four cases are generally described as follows:

Case 1. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" Acme-owned (ACME) gas transmission pipeline. For this case, ACME system leak experiences are used to predict future performance of the subject pipeline.

Case 2. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" Canadian gas transmission pipeline. In this case, the Canadian Transportation Safety Board historical leak frequency is used to predict future performance of the subject pipeline.

Case 3. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" U.S. gas transmission pipeline. In this case, the U.S. historical leak frequency is used to predict future performance of the subject pipeline.

Case 4. The subject pipeline is assumed to behave like some U.S. gas transmission pipelines, in particular those with similar diameter, age, stress level, burial depth, and integrity verification protocols. In this case, the U.S. historical leak frequency is used as a starting point to predict future performance of the subject pipeline.

In all cases, failures are as defined by the respective regulations ("reportable accidents") using regulatory criteria for reportable incidents. The calculation results for the four cases applied to the proposed 37.3 miles (60.0 km) of XYZ pipeline are shown in Table 14.37.

The preceding part of this analysis illustrates a chief issue regarding the use of historical incident frequencies. In order for past frequencies to appropriately represent future frequencies, the past frequencies must be from a population of pipelines that is similar to the subject pipeline. As is seen in the table, basing the future fatality and injury rate on the experiences of the first two populations of pipelines results in an estimate of zero future such events, since none have occurred in the past. The last column presents annual probability numbers for individuals. Such numbers are often desired so that risks can be compared to other risks to which an individual might be exposed. In this application, the individual risk was assumed to be the risk from 2000 ft of pipeline, 1000 ft either side of a hypothetical leak location.
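The individual-risk column of Table 14.37 can be approximately reproduced by prorating the pipeline's annual fatality estimate down to the 2000-ft exposure length. The sketch below uses the Case 3 values; the proration approach itself is an assumption about how the tabulated figures were built, not a method stated in the text.

# Sketch: prorate an annual fatality estimate for the whole line down to the
# 2000-ft exposure length assumed for an individual (an assumed reconstruction
# of the individual-risk column, using the Case 3 row of Table 14.37).
PIPELINE_MILES = 37.3
EXPOSURE_MILES = 2000.0 / 5280.0      # 1000 ft either side of a hypothetical leak

fatalities_per_year_whole_line = 0.00044          # Case 3 value from Table 14.37
fatalities_per_mile_year = fatalities_per_year_whole_line / PIPELINE_MILES
annual_individual_risk = fatalities_per_mile_year * EXPOSURE_MILES

print(f"{annual_individual_risk:.1e} per year")   # on the order of the 4.8E-06 tabulated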

Case 4 discussion

Case 4 produces the best point estimate of risk for the XYZ pipeline. Note that all estimates suggest that the XYZ pipeline will experience no reportable failures during its design life. Probabilities of injuries and/or fatalities are extremely low in all cases. The U.S. DOT database of pipeline failures provides the best set of pertinent data from which to infer a failure frequency. It is used to support calculations for Cases 3 and 4 above. Primarily basing failure calculations on U.S. statistics, rather than Canadian, is appropriate because:

- More complete data are available (larger historical failure database and better-characterized data).
- There is strong influence by a major U.S. operator on design, operations, and maintenance.
- The regulatory codes, pipeline environments, and failure experiences are similar.
- The failure experience between the countries is apparently similar.

Table 14.37 Calculations for Cases 1 through 4

Comparison criteria        Failures per year   Injuries per year   Fatalities per year   Years to fail   Years to injury   Years to fatality   Annual probability of an individual fatality (5)
Case 1: ACME (1)           0.01055             0                   0                     100.4           Never             Never               0
Case 2: Canada (2)         0.01200             0                   0                     83.3            Never             Never               0
Case 3: U.S. (3)           0.01015             0.00167             0.00044               98.6            600.2             2278.8              4.8E-06
U.S. liquid (3)            0.04344             0.00348             0.00050               23.0            287.4             1987.6              4.7E-06
Case 4: U.S. adjusted (4)  0.00507             0.00084             0.00022               197.26          1,200.4           4557.6              2.4E-06

Notes: (1) ACME: all Acme gas transmission systems, 1986-2000. (2) TSB: Canadian gas transmission pipelines, 1994-1998; only one fatality (in 1985, third-party excavation) reported for NEB-jurisdictional pipelines since 1959; a significant change in the definition of reportable incidents occurred in 1989. (3) OPS: U.S. gas transmission pipelines, 1986-2002. (4) Adjusted by assuming the failure rate of the subject pipeline is ~50% of the U.S. gas transmission average, by the rationale discussed. (5) Assumes an individual is threatened by 2000 ft of pipe (directly over the pipeline, 1000 ft either side, 24/7 exposure); 2000 ft is chosen as a conservative length based on hazard zone calculations.




Since the combined experience of all U.S. pipelines cannot realistically represent this pipeline's future performance (it may "encompass" this pipeline, but not represent it), a suitable comparison subset of the data is desired. Variables that tend to influence failure rates, and hence are candidates for criteria by which to divide the data, include time period, location, age, diameter, stress level, wall thickness, product type, depth of cover, etc. Unfortunately, no database can be found that is complete enough to allow such characterization of a subset. Therefore, it is reasonable to supplement the statistical data with adjustment factors to account for the more significant differences between the subject pipeline and the population of pipelines from which the statistics arise. Rationale supporting the adjustment factors is as follows:

- Larger diameter is 40% of failures in the complete database (a 90+% benefit from larger diameter is implied by the database, but only a 25% reduction in failures is assumed).
- Lower stress decreases the failure rate by 10% (assumption based on the role of stress in many failure mechanisms).
- New coating decreases the failure rate by 5% (assumption; note the well-documented problem with PE tape coatings in Canada).
- New IMP (integrity management program) procedures decrease the failure rate by 10% (assumption based on judgment of the IMP's ability to interrupt an incident event sequence).
- Deeper cover: 2 ft of additional depth is estimated to be worth a 30% reduction in third-party damages according to one European study, so a 10% reduction in overall failures is assumed.
- A more challenging offshore environment leads to a 10% increase in failures (somewhat arbitrary assumption; conservative since there are no known unresolved offshore design issues).

Combining these factors leads to the use of a ~50% reduction from the average U.S. gas transmission failure rate; a sketch of one such combination follows the list below. This is conservative, accepting a bias on the side of over-predicting the failure frequency. Additional conservatism comes from the omission of other factors that logically would suggest lower failure frequencies. Such factors include:

- The initial failure frequency is derived from pipelines that are predominantly pre-1970 construction; there are more stringent practices in current pipe and coating manufacture and pipeline construction.
- Better one-call systems (more often mandated, better publicized, in more common use).
- Better continuing public education.
- The pipeline is designed and mostly operated to Class 3 requirements, and Class 3 pipelines have lower failure rates compared to the other classes from which baseline failure rates have been derived.
- Leaks versus ruptures (leaks are less damaging, but are counted if reporting criteria are triggered).
- Company employee fatalities are included in the frequency data, even though general public fatalities/injuries are being estimated.
- Knowledge that the frequency data do not represent the event of "one or more fatalities," even though that is the event being estimated.
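The text does not state how the individual adjustments were combined; a simple multiplicative combination, sketched below, lands near the ~50% figure and illustrates why the chosen reduction can be described as conservative. The baseline rate used is illustrative.

# Sketch: combine the stated adjustment factors multiplicatively (an assumption;
# the text gives only the individual percentages and the resulting ~50% figure).
adjustments = {
    "larger diameter": -0.25,
    "lower stress": -0.10,
    "new coating": -0.05,
    "new IMP procedures": -0.10,
    "deeper cover": -0.10,
    "offshore environment": +0.10,
}

multiplier = 1.0
for change in adjustments.values():
    multiplier *= (1.0 + change)

print(f"combined multiplier on the baseline failure rate: {multiplier:.2f}")  # ~0.57
baseline = 1.0e-4   # illustrative U.S. gas transmission rate, failures per mile-year
print(f"adjusted rate: {baseline * multiplier:.2e} failures per mile-year")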

Model-based failure consequence estimates

An analysis of consequence, beyond the use of the historical fatality/injury rate described above, has also been undertaken. The severity of consequences (solely from a public safety perspective) associated with a pipeline's failure depends on the extent of the product release, thermal effects from potential ignition of the released product, and the nature of any damage receptors within the affected area. The area affected is primarily a function of the pipeline's diameter, pressure, and weather conditions at the time of the event. Secondary considerations include characteristics of the area, including topography, terrain, vegetation, and structures.

Failure discussion

The potential consequences from a pipeline release will depend on the failure mode (e.g., leak versus rupture), discharge configuration (e.g., vertical versus inclined jet, obstructed versus unobstructed), and the time to ignition (e.g., immediate versus delayed). For natural gas pipelines, the possibility of a significant flash fire or vapor cloud explosion resulting from delayed remote ignition is extremely low due to the buoyant nature of gas, which prevents the formation of a persistent flammable vapor cloud near common ignition sources.

ACME applied a "Model of Sizing High Consequence Areas (HCAs) Associated with Natural Gas Pipelines" [83] to determine the potential worst case ACME Pipeline failure impacts on surrounding people and property. The Gas Research Institute (GRI) funded the development of this model for U.S. gas transmission lines in 2000, in association with the U.S. Office of Pipeline Safety (OPS), to help define and size HCAs as part of new integrity management regulations. This model uses a conservative and simple equation that calculates the size of the affected worst case failure release area based on the pipeline's diameter and operating pressure.
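The GRI/C-FER relation referred to here is commonly quoted as a potential impact radius proportional to diameter times the square root of pressure. The specific coefficient used below (0.69 for natural gas, with radius in feet, diameter in inches, and pressure in psig) is the widely published form and is offered here only as an illustration; it should be verified against the cited report [83] before use.

# Sketch: potential impact radius for a natural gas pipeline, using the widely
# published GRI/C-FER form r = 0.69 * d * sqrt(p) (r in ft, d in in., p in psig).
# Treat the coefficient as an assumption to be checked against Ref. [83].
import math

def potential_impact_radius_ft(diameter_in: float, pressure_psig: float) -> float:
    return 0.69 * diameter_in * math.sqrt(pressure_psig)

print(f"{potential_impact_radius_ft(16, 2220):.0f} ft")   # e.g., a 16-in. line at 2220 psig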

Failure scenarios

There is an infinite number of possible failure scenarios encompassing all possible combinations of failure parameters. For evaluation purposes, nine different scenarios are examined, involving permutations of three failure (hole) sizes and three possible pressures at the time of failure. These are used to represent the complete range of possibilities so that all probabilities sum to 100%. Probabilities of each hole size and pressure are assigned, as are probabilities for ignition in each case. For each of the nine cases, four possible damage ranges (resulting from thermal effects) are calculated. Parameters used in the nine failure scenarios are shown in Table 14.38.

Table 14.38 Parameters for the nine failure scenarios under discussion

Probability of occurrence (%)
Hole size (in.): 50% to full-bore rupture (8-16); 0.5-8 ... 1800 psig would not be normal.

For ACME Pipeline release modeling, a worst case rupture is assumed to be a guillotine-type failure, in which the hole size is equal to the pipe diameter, at the pipeline's 15,305-kPa (2220-psig) maximum allowable operating pressure (MAOP). This worst case rupture is further assumed to include a double-ended gas release that is almost immediately ignited and becomes a trench fire. Note that the majority of the ACME Pipeline will normally operate well below its post-installation, pressure-tested MAOP in Canada. Anticipated normal operating pressures in Canada are in the range of 800 to 1100 psig, even though this range is given only a 40% probability and all other scenarios conservatively involve higher pressures. Therefore the worst case release modeling assumptions are very conservative and cover all operational scenarios up to the 15,305-kPa (2220-psig) MAOP at any point along the pipeline.

Other parameters used in the failure scenario cases are ignition probability and thermal radiation intensity (Table 14.39). Ignition probability estimates usually fall in the range of 5 to 12% based on pipeline industry experience; 65% is conservatively used in this analysis. The four potential damage ranges that are calculated for each of the nine failure scenarios are a function of thermal radiation intensity. The thresholds were chosen to represent specific potential damages that are of interest; they are described generally in Table 14.40.

Reference [83] recommends the use of 5000 Btu/hr-ft2 as a heat intensity threshold for defining a "high consequence area." It is chosen because it corresponds to a level below which:

- Property, as represented by a typical wooden structure, would not be expected to burn;
- People located indoors at the time of failure would likely be afforded indefinite protection; and
- People located outdoors at the time of failure would be exposed to a finite but low chance of fatality.

Note that these thermal radiation intensity levels only imply damage states. Actual damages are dependent on the quantity and types of receptors that are potentially exposed to these levels. A preliminary assessment of structures has been performed, identifying the types of buildings and distances from the pipeline. This information is not yet included in these calculations but will be used in emergency planning.

Table 14.40 Four potential damage ranges for each of the nine failure scenarios under discussion

Thermal radiation level (Btu/hr-ft2)    Description
12,000                                  100% mortality in ~30 sec
5,000                                   1% mortality in ~30 sec
4,000                                   Eventual wood ignition
1,600                                   Onset of injury in ~30 sec

Role of leak detection in consequence reduction

The nine failure scenarios analyzed represent the vast majority of all possible failure scenarios. Leak detection plays a relatively minor role in minimizing hazards to the public in most of these possible scenarios. Therefore, the analysis presented is not significantly impacted by any assumptions relative to leak detection capabilities. This is especially true since the damage states use an exposure time of ~30 seconds in the analysis.

Results

Results of calculations involving the nine failure scenarios and four damage (consequence) states, as measured by potential thermal radiation intensity, are shown in Table 14.41. The nine cases are shown graphically in Figure 14.3. The right-most end of each bar represents the total distance of any consequence type. The farthest extent of each damage type is shown by the right-most end point of the consequence type's color. These nine cases can also be grouped into three categories, as shown in Figure 14.4, which illustrates that 11% of all possible failure scenarios would not have any of the specified damages beyond 29 ft from the failure point. Of all possible failure scenarios, 55% (44% + 11%) would not have any specified damages beyond 457 ft. No failure scenario is envisioned that would produce the assessed damage states beyond 913 ft. In these groupings, the worst case (largest distance) is displayed. The specific damage types can be interpreted from the chart as follows: Given a pipeline failure, 100% (~44% + ~44% + ~11%) of the possible damage scenarios have a fatality range of 333 ft or less (the longest bar). There is also a 56% chance that, given a pipeline failure, the fatality range would be 167 ft or less (the second longest bar).

Table 14.39 Additional parameters for the nine failure scenarios under discussion

Hole size (in.)                        Ignition probability, given failure has occurred (%)
50% to full-bore rupture (8-16)        40
0.5-8

Case Study B: natural gas