ADVANCES IN HEALTH CARE MANAGEMENT

Series Editors: John D. Blair, Myron D. Fottler and Grant T. Savage

Recent Volumes:

Volume 1:
Advances in Healthcare Management, Edited by John D. Blair, Myron D. Fottler, and Grant T. Savage
Volume 2:
Advances in Healthcare Management, Edited by Myron D. Fottler, Grant T. Savage, and John D. Blair
Volume 3:
Advances in Healthcare Management, Edited by Grant T. Savage, John D. Blair, and Myron D. Fottler
Volume 4:
Bioterrorism, Preparedness, Attack and Response, Edited by J. D. Blair, M. D. Fottler, and A. C. Zapanta
Volume 5:
International Healthcare Management, Edited by Grant T. Savage, Jon A. Chilingerian, and Michael Powell
Volume 6:
Strategic Thinking and Entrepreneurial Action in the Health Care Industry, Edited by John D. Blair, Myron D. Fottler, Eric W. Ford, and G. Tyge Payne
ADVANCES IN HEALTH CARE MANAGEMENT
VOLUME 7
PATIENT SAFETY AND HEALTH CARE MANAGEMENT

EDITED BY
GRANT T. SAVAGE University of Missouri, USA
ERIC W. FORD Texas Tech University, USA
United Kingdom – North America – Japan – India – Malaysia – China
JAI Press is an imprint of Emerald Group Publishing Limited
Howard House, Wagon Lane, Bingley BD16 1WA, UK

First edition 2008

Copyright © 2008 Emerald Group Publishing Limited

Reprints and permission service
Contact: [email protected]

No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without either the prior written permission of the publisher or a licence permitting restricted copying issued in the UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center. No responsibility is accepted for the accuracy of information contained in the text, illustrations or advertisements. The opinions expressed in these chapters are not necessarily those of the Editor or the publisher.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-1-84663-954-8
ISSN: 1474-8231 (Series)
Awarded in recognition of Emerald’s production department’s adherence to quality systems and processes when preparing scholarly journals for print
LIST OF CONTRIBUTORS

Amy Abbott
Creighton Health Service Research Program, School of Nursing, Creighton University, USA
Donde Batten
Batten Consulting, Houston, TX, USA
Brian Binck
Department of Anesthesia & Critical Care Medicine, Alfred I. duPont Hospital for Children, Wilmington, DE, USA
James D. Bramble
Creighton Health Service Research Program, School of Pharmacy and Health Professions, Creighton University, USA
Racquel M. Calderon
Department of Respiratory Services, Totally Kids Specialty Healthcare, Loma Linda, CA, USA Department of Cardiopulmonary Sciences, School of Allied Health Professions, Loma Linda University, Loma Linda, CA, USA
Marsha Chan
St. Francis Medical Center, Lynwood, CA, USA
Bartholomew E. Clark
Pharmacy Sciences Department, Creighton Health Service Research Program, School of Pharmacy and Health Professions, Creighton University, USA
Richard A. Culbertson
Department of Health Systems Management, School of Public Health and Tropical Medicine, Tulane University, USA
Patricia R. DeLucia
Department of Psychology, Texas Tech University, TX, USA
Ellen S. Deutsch
Division of Otolaryngology, Department of Surgery, Alfred I. duPont Hospital for Children, Wilmington, DE, USA
Susan M. Distefano
Texas Children’s Hospital, Houston, TX, USA
Andjela Drincic
Creighton Health Service Research Program, School of Medicine, Creighton University, USA
Eric W. Ford
Jerry S. Rawls College of Business, Center for Innovation, Education & Research, Texas Tech University, TX, USA
Kimberly A. Galt
Creighton Health Service Research Program, School of Pharmacy and Health Professions, Creighton University, USA
Gerald Goodman
Department of Health Care Administration, College of Health Sciences, Texas Woman’s University, TX, USA
Alexia Green
Center for Patient Safety, School of Nursing, Texas Tech University Health Sciences Center, TX, USA
Julia A. Hughes
Department of Health Systems Management, School of Public Health and Tropical Medicine, Tulane University, USA
Ben-Tzion Karsh
Department of Industrial and Systems Engineering, College of Engineering, University of Wisconsin, Madison, USA
Jeff F. Lewis
Kosair Charities Pediatric Convalescent Center, Louisville, KY
Ann Scheck McAlearney
Division of Health Services Management and Policy, College of Public Health, Ohio State University, Columbus, OH, USA
Gina Moore
QCS and Regulatory Management, Alfred I. duPont Hospital for Children, Wilmington, DE, USA
Tammy E. Ott
Department of Psychology, Texas Tech University, TX, USA
Patrick A. Palmieri
Duke University School of Nursing and Duke Health Technology Solutions, Duke University, USA
Karen A. Paschal
Creighton Health Service Research Program, School of Pharmacy and Health Professions, Creighton University, USA
Lori T. Peterson
Jerry S. Rawls College of Business, Texas Tech University, TX, USA
Karlene H. Roberts
Walter A. Haas School of Business, University of California, Berkeley, CA, USA
Louis Rubino
Department of Health Sciences, California State University, Northridge, CA, USA
Ann M. Rule
Medical Liaison Alliance, Outreach Purdue Pharma, L. P. Stamford, CT, USA
Cynthia K. Russell
Acute & Chronic Care Department, College of Nursing, University of Tennessee Health Science Center, USA
Grant T. Savage
Health Management and Informatics Department, School of Medicine, University of Missouri, USA
Mark V. Siracuse
Creighton Health Service Research Program, School of Pharmacy and Health Professions, Creighton University, USA
Michal Tamuz
Department of Preventive Medicine, College of Medicine, University of Tennessee Health Science Center, USA
Eric J. Thomas
The University of Texas Medical School at Houston, Houston, TX, USA
Daved W. van Stralen
Department of Pediatrics, School of Medicine, Loma Linda University, Loma Linda, CA, USA and Children’s Subacute Center, Community Hospital of San Bernardino, San Bernardino, CA, USA
Eric S. Williams
Culverhouse College of Commerce and Business Administration, Management and Marketing Department, University of Alabama, AL, USA
LIST OF REVIEWERS

David A. Fleming University of Missouri
Nir Menachemi University of Alabama-Birmingham
Mark E. Frisse Vanderbilt University
Lori T. Peterson Texas Tech University
Timothy R. Huerta Texas Tech University
Douglas S. Wakefield University of Missouri
Ann Scheck McAlearney Ohio State University
Eric S. Williams University of Alabama
PREFACE: SOME OBSERVATIONS ON PATIENT SAFETY

The Constitution of the United States refers to the obligation of the government to safeguard the people’s health and welfare. For those engaged in patient care, the health and safety of the patient is uppermost, but the safety of the patient is influenced by a multitude of factors. These factors include not only the actions of the providers of care and the patient’s environment, but also the patient’s own actions. Understandably, actions by caregivers can readily pose a risk to patient safety. When one considers the millions of contacts with patients for testing, treatment, or nursing care, there are ample opportunities for mishaps to occur.

Modern medicine increasingly relies on technology to protect and ensure the safety of patients. On one hand, telehealth makes oral, visual, and auditory communication feasible wherever the patient may be located. These applications not only protect the patient, but also contribute to the quality of care. One of the hallmarks of such applications is that they provide essential communication between patients and caregivers, whether the patient is receiving home care, ambulatory care, or intensive hospital care. For example, intensive care telehealth, implemented with success at the Lehigh Valley Health System, contributes materially to ensuring a safe environment for the patient. On the other hand, modern medical diagnostic tests, whether invasive or non-invasive, have built-in modalities designed to prevent harm to the patient or, for that matter, the caregiver. Injection needles and syringes are not only sterile; they are disposable and have safety features that protect against accidental injury. Most if not all medical instruments that depend on electric power contain automatic shut-off mechanisms. These mechanisms are an integral part of the instrument and are effective in preventing potentially lethal complications.
Wherever health care is provided, the patient’s wellbeing is influenced by the environmental conditions that surround him or her. Again, modern health care relies on science and technology to enhance patient safety. For example, the design of medical facilities, their color, lighting, temperature, humidity, and noise all have an effect on the patient. Moreover, proper
ergonomics for furniture, such as beds, chairs, and examining tables, helps minimize patient safety risks such as pressure ulcers.

A safety-related issue that deserves considerable attention is the patient’s own responsibility in safeguarding his or her safety. A major factor that impedes patient safety is non-compliance. Every day millions of people take either too much or too little of the medications that were prescribed, often with serious consequences. For example, patients who receive more than one medication, especially when the medications are prescribed by multiple physicians, are apt to have problems. Clear identification and coordination are a must when patients take more than one medication. However, a well-informed patient can relate more intelligently and rationally with caregivers. This rapport makes for better coordination and cooperation and, ultimately, results in better outcomes.

This review would not be complete without some remarks about the development of health information technology and its impact on medicine and, in turn, on patient safety. One outstanding feature is the implementation of electronic medical records, which have the potential to eliminate most of the errors made in the dispensing of medications. That feature alone, when properly and sufficiently implemented, will save thousands of lives annually.

This special volume on patient safety and health care management refers to a multitude of steps, actions, and behaviors that alone or in combination affect the health and safety of a patient. It is apparent that many of these measures are under an individual’s control, while others depend on the application of tools and techniques that are primarily instrumental or mechanical. Either way, attention to all these issues is not only important, but it also affords genuine protection and safeguards the safety of patients.
In short, this volume illustrates that the provision of safe care is ensured by close attention to organizational structures and processes and the safety features of which they are an essential part.

Leo van der Reis, M.D.
Quincy Foundation for Medical Research – Charitable Trust and University of Missouri, USA
PATIENT SAFETY: STATE-OF-THE-ART IN HEALTH CARE MANAGEMENT AND FUTURE DIRECTIONS

Eric W. Ford and Grant T. Savage

Patient Safety and Health Care Management
Advances in Health Care Management, Volume 7, 1–14
Copyright © 2008 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1474-8231/doi:10.1016/S1474-8231(08)07001-8

ABSTRACT

The needs for health system change and improved patient safety have been pointed out by policymakers, researchers, and managers for several decades. Patient safety is now widely accepted as being fundamental to all aspects of health care. The question motivating this special volume on patient safety is: How can the increased emphasis on patient safety among health care managers be more effectively translated into better policy and reduced clinical risk? The 12 contributions in this volume are divided into four sections: (1) theoretical perspectives on managing patient safety; (2) top management perspectives on patient safety; (3) health information technology (HIT) perspectives on patient safety; and (4) organizational behavior and change perspectives on patient safety. Patient safety is a topic that provides a fertile niche for management researchers to test existing theories and develop new ones. For example, the patient safety goals of reducing medical errors while maximizing health outcomes draw upon the tenets of evidence-based medicine (EBM), as well as the managerial theories of human relations,
organizational culture, organizational development, organizational learning, organizational structure, quality improvement, and systems thinking. Indeed, these and other managerial theories are drawn upon and applied in different ways by the various contributors. Overall, the authors of this volume demonstrate that the future of patient safety for health care management requires health care professionals and managers who can successfully engage in multi-faceted projects that are socially and technically complex.
Patient safety is now widely accepted as being fundamental to all aspects of health care. Therefore, professionals in the policy and administration fields need to be even more vigilant about the potential risks in delivering clinical care. It is important for policymakers, administrators, and clinicians to understand the wide variety of system features that must be correctly aligned to ensure successful patient management. Further, developing common mental models among these key stakeholders will facilitate both the sharing of information and the aligning of incentives with desired outcomes. The overarching question motivating this special volume on patient safety is: How can the increased emphasis on patient safety among health care managers be more effectively translated into better policy and reduced clinical risk? Health care managers are at the fulcrum of balancing policy imperatives and practice practicalities. Therefore, conducting research from the managerial perspective that looks in both directions is necessary. Further, research teams drawn from a wide variety of disciplines should ensure frequent assessment and analysis of their ongoing theoretical and empirical work in relationship to system features that harm patients or create the potential for patient harm. Health management theories and methods will vary depending upon the researchers’ areas of expertise and the clinical practice or policy issues being studied. This special volume presents a collection of health care management articles that look at patient safety change efforts ranging from boardroom strategies (e.g., Rubino & Chan and Culbertson & Hughes) to discrete patient exchanges on the floor (e.g., Deutsch et al., 2008). In addition, we assess the state-of-the-art and future directions for health care management theory and research on patient safety. We believe the primary benefit of a special volume is that it goes beyond the sum of its parts.
Further, synergistic benefits arise from the questions created by papers presented in juxtaposition to one another. Therefore, we frame and pose an
initial set of questions for health care management practitioners and researchers to consider as they move forward in making health systems both safer and more effective.
ASSESSING THE STATE-OF-THE-ART IN PATIENT SAFETY RESEARCH

The needs for health system change and improved patient safety have been highlighted by policymakers, researchers, and managers. As early as 1991, the Institute of Medicine (IOM) was calling for the universal adoption of electronic medical records (EMRs) to control costs and provide actionable data for quality improvement (Institute of Medicine, 1991). However, these alarms went largely unnoticed by policymakers and the public until the publication of To err is human: Building a safer health system (Kohn, Corrigan, & Donaldson, 1999) and its widespread dissemination of research estimating that 44,000–98,000 avoidable fatalities occurred annually in U.S. hospitals. Since this seminal report, researchers from numerous disciplines have brought their theories, methods, and experiences to bear on the causes of poor quality in health care.
Theories, Models, and Methods for Managing Patient Safety

As an emerging field of research, the relatively recent recognition of and interest in patient safety provides a fertile niche for management researchers to test existing theories and develop new ones. An indication of the emerging nature of patient safety is the makeup of the organizations that have formed to address the topic. While the Institute for Healthcare Improvement (www.ihi.org) began relatively early, other organizations, such as Johns Hopkins’ Center for Innovation and Quality in Patient Care (www.hopkinsquality.com) and Vanderbilt’s Center for Better Health (www.mc.vanderbilt.edu/vcbh/), did not begin until 2002 and 2001, respectively. Each of these organizations has a clear agenda related to policy and clinical practice, but it is difficult to discern any discrete programs targeting the improvement of management practices. Taken together, the papers in this volume of Advances in Health Care Management are intended to aid management researchers in formulating both individual and collaborative agendas to address the issue of patient safety.
The first chapter in this volume discusses evidence-based medicine (EBM), its application to patient safety, and its limits for improving safety in health service delivery (Savage and Williams). Many initiatives to improve patient safety are based on EBM, without recognition of either its key assumptions or its inherent limits for improving patient safety. Savage and Williams address three research questions: (1) How does EBM contribute to patient safety? (2) How and why is EBM limited in improving patient safety? and (3) How can patient safety be maximized, given the limitations of EBM? Currently, EBM contributes to patient safety both by educating clinicians on the value and use of empirical evidence for medical practice and via large-scale initiatives to improve care processes. Attempts to apply EBM to individual patient care are limited, in part, because EBM relies on biostatistical and epidemiological reasoning to assess whether a screening, diagnostic, or treatment process produces desired health outcomes for a population – not for an individual. Health care processes that are most amenable to EBM are those that can be standardized or routinized; non-routine processes, such as diagnosing and treating a person with both acute and chronic co-morbidities, are cases where EBM has limited applicability. A first step in bridging the gap between EBM and management research is the development of models that help to identify how to fit such work into larger organizational frameworks. For example, to improve patient safety, health care organizations should not rely solely on EBM, but also recognize the need to foster mindfulness within the medical professions and develop patient-centric organizational systems and cultures. The second manuscript presents a model designed to help managers think about the nature of errors (Palmieri et al., 2008).
The ‘Swiss Cheese’ model promotes a systems-thinking approach to identifying the multiple causes that underlie the worst errors in health care. In particular, the role of organizational-level policies in retarding and promoting medical errors is critical. In many health care settings, the role of policies in outcomes is often ignored because the organizational levels between top managers and patients are populated with professionals who assume full responsibility for the patient’s care. Therefore, when a failure or error occurs, the individuals charged with executing untenable or impracticable policies are blamed. The greatest promise of the ‘Swiss Cheese’ model is that it will diminish these types of attribution errors – blaming an individual for a systemic failure – which are the current mode of operation in many health care organizations. Nurses in particular bear the brunt of fundamental attribution error because they work at the end point of the patient care system. To the extent
that it is possible to mitigate ineffective or poor policies, front-line nurse managers take on this task. The separation of policy input, operating authority, and outcome responsibility places these professionals in untenable situations on a daily basis. The third article by Tamuz, Russell, and Thomas describes this phenomenon using a series of case studies. Drawing on interviews with 20 nurse managers from three tertiary care hospitals, their study identifies key exemplars that illustrate how managers monitor nursing errors. The exemplars examine how nurse managers: (1) sent mixed messages to staff nurses about incident reporting, (2) kept two sets of books for recording errors, and (3) developed routines for classifying potentially harmful errors into non-reportable categories. These exemplars highlight two tensions: first, applying bureaucratic rule-based standards to professional tasks; and, second, maintaining accountability for errors while also learning from them. These fundamental tensions influence organizational learning and suggest theoretical and practical research questions. While nurse managers are engaged in important forms of organizational learning to improve patient safety, they cannot address the core issue that many problems have their roots at the other end of the organizational chart – the boardroom.
Top Management Perspectives on Patient Safety

The commitments necessary to build high-reliability health organizations that are safe take many forms. For example, the requisite financial investment to bring the latest information technology into the system is typically a significant percentage of an organization’s budget. Another common problem is that facilities are often designed in ways that make workflows inefficient and ineffective, and they need to be remodeled or replaced. Because the costs of changes to an organization’s physical plant are so high, such decisions invariably require the approval of the board of directors. However, simply committing money is not enough. The biggest challenge that health care organization boards face is changing their own cultures and those of their organizations to put safety at the forefront of the care agenda. The case study by Rubino and Chan details how the Board of Directors at St. Francis Medical Center took on the task of improving patient safety. They provide a set of tools that other boards can adapt to their institutions in order to pursue similar goals. For example, they use a ‘Balanced Scorecard’ approach that is familiar to many hospital administrators and
board members. The use of scorecards reduces an important barrier to adoption by allowing board members to fold their patient safety efforts into existing quality assurance and improvement activities in the hospital. While such incremental steps are valuable, they do not provide a holistic theoretical framework from which to institute change, nor do they address the role of medical professionals in changing the delivery of care. The involvement of hospital boards in change processes is critical, and there is growing pressure to hold board members accountable for organizational outcomes – including those of a clinical nature. Herein lies a dilemma, as physicians are the profession with the authority and responsibility for directing patient care. To that end, most hospitals rely on physicians to ensure the quality of care delivered. The article by Culbertson and Hughes considers this problem using the organizational structure theory first put forth by Mintzberg (1979), which views the hospital as a case of a professional bureaucracy. Physicians, as professional staff, are thus responsible for standard setting and regulation. However, trustees are now asked to examine reports identifying physician compliance in attaining safety standards without education in the practices supporting those standards. Physician board members, whose numbers have increased in the past decade, are often sought to take the lead on interpretation of patient safety standards and results. The very public nature of patient safety reporting, and its reflection on the reputation of the organization for which the trustee is ultimately accountable, creates a new level of tension and workload that challenges the dominant voluntary model of trusteeship in the U.S. health system. Culbertson and Hughes offer some advice about how board composition and duties might be configured to include physicians, but not exclude them from other policymaking activities.
The roles of nurses, physicians, and boards are undoubtedly critical foci in any fully formed theory related to patient safety; however, they do not, as individual areas of research or even in combination, address the systemic nature of the problem. Health care delivery takes place in a community context. Norms and standards of care have significant regional components that go beyond individual institutions. Many health services purchasers hope to activate these local market forces using tools such as pay-for-performance (P4P) to help improve care and control costs. The nursing shortage has already empowered that profession to activate market forces and negotiate better wage packages. Further, nurses are asserting their own professional autonomy to redefine inter-professional relationships with physicians and managers. The power of community action can yield remarkable changes in the way people within organizations behave.
The last article in the Top Management Team section provides an example of a regional effort to change the way care is delivered. The Houston–Galveston region created an aggressive approach to this issue by forming an unusual coalition of business, university, and hospital leaders and using a quality-improvement approach. Batten, Goodman, and Distefano’s findings indicate that shifting the focus away from individual employee behaviors to meaningful management change had a far more profound effect, one that stretched across an entire community. The project has achieved over 40% participation among hospitals in the 13-county region, and it includes 50 hospitals employing approximately 15,000 registered nurses. The data collected by this collaborative to date suggest that hospitals are taking action to improve outcomes by modifying their key initiatives to address the attributed causes of poor work environments. From 2004 to 2005, executives of top-performing hospitals increasingly attributed successful work environment outcomes to an emphasis on management development and executive-driven initiatives, de-emphasizing specific employee behavior, process, and outcome-based initiatives. The admonition to physicians to ‘heal themselves’ may be one that health care administrators ought to take to heart when launching efforts to improve patient safety. Administrators need to develop management systems that do more than create policies and track claims data. Managers need to synthesize information into actionable forms that can be used to redesign work processes at levels that cannot be readily changed through written policies.
Health Information Technology Perspectives on Patient Safety

Health Information Technology (HIT) has been held out as a ‘silver bullet’ solution to all of the problems that cause the U.S. system to suffer lapses in patient safety and medical errors. In particular, mandatory universal adoption of EMRs has been suggested as a means to document encounters, coordinate care among providers, monitor compliance with clinical guidelines, and provide decision support to physicians. These are ambitious goals, and the EMR products currently available in the marketplace are a quantum leap away from achieving the level of functionality necessary to realize them. Further, even if a particular system possesses such features, it is unclear whether it would be capable of effectively interfacing with another manufacturer’s system.
In recent years there have been significant efforts to make systems interoperable, and the U.S. Government formed the Certification Commission for Healthcare Information Technology (CCHIT) to promote and coordinate this goal (Classen, Avery, & Bates, 2007). Beginning in 2006, several products were certified as being interoperable to the HL7 standard. However, even among those products, it is unclear whether it is possible to effectively match patients and share medical information from one to the next. Patient identification is particularly important for two reasons. First, there is the clinical imperative to correctly identify the person being treated. In emergency situations, having incorrect medical information such as blood type, allergies, immunizations, and pre-existing conditions can be far worse than having no information at all. The second issue is the consumer’s right to privacy. Fundamental questions about who owns and controls medical information are potentially more problematic than the technological challenges of interoperability and accurate individual identification. Coordinating the safeguarding and sharing of health information is a governance issue. Frequently, a local market’s competitive dynamics make it nearly impossible for organizations to harmonize their policies and procedures in a way that allows for meaningful interchange. A nationwide effort to assess state and regional stakeholders’ views on the issues surrounding health information and privacy was begun in 2005 under the auspices of the Department of Health and Human Services. One goal of the program was to identify current legal and regulatory standards that needed to be harmonized so that patient information sharing within and across communities could occur without fear of violating laws. Galt and her colleagues examine how the process played out in the state of Nebraska.
They conducted an in-depth case study to explore the knowledge, understanding, and awareness of 25 health board/facility oversight managers and 20 health professional association directors about privacy and security issues important to achieving Health Information Exchange (HIE). The case analysis revealed that health board/facility oversight managers were unaware of key elements of the federal agenda; their concerns about privacy encompassed broad definitions both of what constituted a ‘health record’ and of ‘regulations centeredness.’ In contrast, health professional association leaders were keenly aware of national initiatives. Despite concerns about HIE, they supported information exchange, believing that patient care quality and safety would improve. The analysis also revealed a perceptual disconnect between board/facility oversight managers and professional association leaders; however, both favored HIE.
Licensure and facility boards at the state level are likely to have a major role in the assurance of patient protections through facility oversight and provider behavior. Similarly, professional associations are the major vehicles for post-graduate education of practicing health professionals. Their engagement is essential to maintaining health professions knowledge. States will need to understand and engage both of these key stakeholders to make substantial progress in moving the HIE agenda forward. In addition to the challenges in these efforts, one theme that emerges is the large number and transient nature of the umbrella organizations that are charged with conducting these projects. The list of acronyms labeling the organizations charged with solving the Gordian Knot-like dilemma of HIT interoperability and exchange is now legion. The American Health Information Community (AHIC) has already come and gone. Its successor, AHIC.2 – a private–public partnership – is still in the formative stages. The contract with the Research Triangle Institute (RTI) to manage the Health Information Security and Privacy Collaboration (HISPC) is near its end – but what has come of it? Will it merely be another set of recommendations that no one has the authority or wherewithal to implement? Perhaps the most telling story is that of the Santa Barbara County Data Exchange (Brailer, 2007; Frohlich, Karp, Smith, & Sujansky, 2007; Holmquest, 2007; Miller & Miller, 2007). Although it is not chronicled in this volume, its legacy is inescapable for those hoping to build a national system for sharing patient information. As one of the first and highest-profile Regional Health Information Organizations (RHIOs), the Santa Barbara organization was held out as a model for other communities. Its leader, Dr. David Brailer, became the first National Coordinator for Health Information Technology (ONCHIT).
In the end, its peer-to-peer model collapsed because participating organizations could not interface their data systems and no sustainable business plan was in place. One element common to all of the large-scale efforts has been the underlying philosophy that ‘if you build it, organizations will come.’ The separation of costs and benefits can be profound in information exchanges: the providers and their parent organizations bear the costs, while the benefits accrue to others (Menachemi & Brooks, 2006). The impact of this misalignment is most keenly felt in smaller organizations, which face all the fixed costs of adoption but lack the economies of scale to make the financing work in their favor. Physicians in small practices are at the pointy end of the HIT adoption stick (Ford, Menachemi, & Phillips, 2006). As much as any other group of
professionals, it is physicians' workflows and pocketbooks that are likely to feel the major impact of moving to EMR systems. The paper by Bramble and his colleagues describes how physicians characterize the barriers to HIT implementation. Content analysis of qualitative interviews revealed three barrier themes: time, technology, and environment. Interviews also revealed two other major concerns: the compatibility of the HIT with the physician's patient mix, and the physician's own attitude toward the use of HIT. The axiom that ‘time is money’ is well illustrated: reduced productivity is a major concern among physicians considering HIT adoption. When reward systems are not aligned with the policy goals of HIT advocates, the reward system will dominate decision making and the status quo will prevail. Changing individuals' and organizations' behaviors is one of the most difficult tasks that managers face, and it lies at the heart of many patient safety programs.
Organizational Behavior and Change Perspectives on Patient Safety
Efforts to change organizations' structures, processes, and cultures to improve patient safety-related outcomes are proliferating rapidly. Health organizations are drawing on Lean (Manos, Sattler, & Alukal, 2006) and Six Sigma (Lazarus & Novicoff, 2004; Revere & Black, 2003) manufacturing principles to redesign care delivery processes. New facilities are being built to create healing and family-centered environments (Fottler, Ford, Roberts, Ford, & Spears, 2000; Towill & Christopher, 2005). While some companies are achieving impressive results, three out of four reengineering programs fail (Manganelli & Klein, 1994). Those leading reengineering efforts often make bold promises to transform organizations, but the hard part is putting the theory into practice. Changing conventional thinking and traditional practices pits managers against the status quo. To overcome these barriers, not only is senior management sponsorship essential, but the leadership itself needs exposure to external change agents and ideas that can reshape its views. The article by McAlearney describes a mechanism that allows organization leaders to gather new ideas and skills – the corporate university. As McAlearney points out, the rise of corporate universities in other industries has been steady over the past 20 years (Anonymous, 2005). The corporate university is organized to allow health leaders to stay in their current jobs while gathering the skills and knowledge to implement innovations. Aside from the human resources motivation for wanting to
develop and retain talent, corporate universities allow innovation and change to originate from within the organization. The hope is that the corporate university will foster a culture of change as a shared value among leaders. In health care organizations, organizational development (OD) programs can serve an important institutional function by providing a framework through which patient safety can be emphasized as an organizational priority, with patient safety training delivered as part of OD efforts. In addition, organizations committed to creating a patient-focused safety culture can use OD initiatives strategically to support organizational culture change. McAlearney's paper describes different approaches to including patient safety in an OD framework, drawing from both management theory and practice. Findings from three extensive qualitative studies of leadership development and corporate universities in health care provide specific examples of how health care organizations discuss patient safety improvement using this alternative approach. The article by Deutsch and her colleagues discusses a common operating procedure that is anything but standardized in most health care organizations – the patient handoff. A ‘handoff’ occurs so that patient-specific medical information can be provided to the medical professional(s) assuming responsibility for that patient. Providing an appropriate summary supports safe, high-quality, effective medical care; inadequate or incorrect information may create risk for the patient. A handoff approach was developed to facilitate this process, using the mnemonic START (S: situation; T: therapies; A: anticipated course; R: reconciliation; T: transfer). Surveys of handoffs occurring before and after introduction of the START system demonstrate several areas with potential for process improvement.
Contrasted with the McAlearney paper, which deals with the macro-level of organizational change, Deutsch et al.'s paper takes us to one of the most granular of organizational activities. Nevertheless, the lack of common techniques for ensuring that vital patient information is effectively communicated from one caregiver to the next is a weak link in the system that often breaks. Indeed, the results of such breaks create the ‘holes’ in the continuity-of-care processes described by Palmieri and his colleagues in the first section of this volume. Nonetheless, there may not be one best way to conduct every handoff, since the intensive care unit (ICU) has different needs than subacute care facilities (SCFs). As van Stralen and his colleagues note, there often are no common standards even within the same unit at many such facilities. Specifically, the last chapter in this compendium describes how a nursing home was transformed into a pediatric SCF. The transformation entailed not only
making information flows more effective, but also empowering the personnel to make the SCF a high reliability organization (HRO). To achieve these goals, the health care team implemented change in four behavioral areas: (1) risk awareness and acknowledgement; (2) defining care; (3) thinking and making decisions; and (4) information flow. The team focused on five reliability enhancement issues that emerged from previous research on banking institutions: (1) process auditing; (2) the reward system; (3) quality degradation; (4) risk awareness and acknowledgement; and (5) command and control. Additional HRO processes also emerged, centered on high trust and on building a high reliability culture based on values and beliefs. The case demonstrates that HRO processes can reduce costs, improve safety, and aid in developing new markets. Key to van Stralen et al.'s findings was that every organization must tailor its processes to fit its own situation. Further, organizations need managers with the skills and training to be flexible and adaptive in applying HRO principles.
CONCLUDING COMMENTS: THE FUTURE OF PATIENT SAFETY
Much of the rhetoric promoting patient safety in the U.S. has focused on technical solutions – such as computerized physician order entry (CPOE) for prescriptions – and over-simplifies the challenges facing health care managers and professionals. The authors of this volume – all of whom are from the U.S. – demonstrate that the future of patient safety for health care management requires health care professionals and managers who can successfully engage in multi-faceted projects that are socially and technically complex. These challenges, more often than not, involve changing the social structures and cultures of health care organizations. Improvements in patient safety thus require long-term commitments from health care managers and professionals, as well as competencies in managing complexity. What should health care management researchers do to improve patient safety? Two paths are suggested by the work published in this volume. First, we believe that health care management researchers can help clinical and managerial practitioners improve patient safety by engaging in (a) multi-level research that evaluates organizational change efforts; (b) institutional-level research on inter-organizational and public–private collaborations; and (c) socio-technical system evaluations of HIT and other
technical implementations. These three general areas of research explore the complex and multi-faceted nature of health care organizations, and they will better ensure that research findings inform policymakers as well as health care managers and professionals. Second, health care management researchers should look beyond the U.S. and its organizational and institutional landscape to investigate ways to improve patient safety. Efforts underway in Europe, Asia, and elsewhere represent naturally occurring experiments in patient safety. International comparative research holds the promise of illuminating new facets of the complex challenges of improving patient safety, while hastening the dissemination of best practices throughout the world.
REFERENCES
Anonymous. (2005). The corporate university: Riding the third wave. Development and Learning in Organizations, 19(6), 16.
Brailer, D. J. (2007). From Santa Barbara to Washington: A person's and a nation's journey toward portable health information. Health Affairs, 26(5), w581–w588.
Classen, D. C., Avery, A. J., & Bates, D. W. (2007). Evaluation and certification of computerized provider order entry systems. Journal of the American Medical Informatics Association, 14(1), 48–55.
Ford, E. W., Menachemi, N., & Phillips, M. T. (2006). Predicting the adoption of electronic health records by physicians: When will healthcare be paperless? Journal of the American Medical Informatics Association, 13(1), 106–112.
Fottler, M. D., Ford, R. C., Roberts, V., Ford, E. W., & Spears, J. D. (2000). Creating a healing environment: The importance of the service setting in the new consumer-oriented healthcare system/practitioner application. Journal of Healthcare Management, 45(2), 91.
Frohlich, J., Karp, S., Smith, M. D., & Sujansky, W. (2007). Retrospective: Lessons learned from the Santa Barbara project and their implications for health information exchange. Health Affairs, 26(5), w589–w591.
Holmquest, D. L. (2007). Another lesson from Santa Barbara. Health Affairs, 26(5), w592–w594.
Institute of Medicine. (1991). The computer-based patient record: An essential technology for healthcare. Washington, DC: National Academy Press.
Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (Eds.). (1999). To err is human: Building a safer health system. Washington, DC: National Academy Press.
Lazarus, I. R., & Novicoff, W. M. (2004). Six Sigma enters the healthcare mainstream. Managed Healthcare Executive, 14(1), 26.
Manganelli, R. L., & Klein, M. M. (1994). A framework for reengineering (Part I). Management Review, 83(6), 10.
Manos, A., Sattler, M., & Alukal, G. (2006). Make healthcare lean. Quality Progress, 39(7), 24.
Menachemi, N., & Brooks, R. G. (2006). Reviewing the benefits and costs of electronic health records and associated patient safety technologies. Journal of Medical Systems, 30(3), 159–168.
Miller, R. H., & Miller, B. S. (2007). The Santa Barbara County Care Data Exchange: What happened? Health Affairs, 26(5), w568–w580.
Mintzberg, H. (1979). The structuring of organizations. Englewood Cliffs, NJ: Prentice-Hall.
Revere, L., & Black, K. (2003). Integrating six sigma with total quality management: A case example for measuring medication errors. Journal of Healthcare Management, 48(6), 377.
Towill, D. R., & Christopher, M. (2005). An evolutionary approach to the architecture of effective healthcare delivery systems. Journal of Health Organization and Management, 19(2), 130.
EVIDENCE-BASED MEDICINE AND PATIENT SAFETY: LIMITATIONS AND IMPLICATIONS
Grant T. Savage and Eric S. Williams
Patient Safety and Health Care Management. Advances in Health Care Management, Volume 7, 17–31. Copyright © 2008 by Emerald Group Publishing Limited. All rights of reproduction in any form reserved. ISSN: 1474-8231/doi:10.1016/S1474-8231(08)07002-X
ABSTRACT
A fundamental assumption by the Institute of Medicine (IOM) is that evidence-based medicine (EBM) improves the effectiveness of medical diagnosis and treatment and, thus, the safety of patients. However, EBM remains controversial, especially its links to patient safety. This chapter addresses three research questions: (1) How does EBM contribute to patient safety? (2) How and why is EBM limited in improving patient safety? and (3) How can patient safety be maximized, given the limitations of EBM? Currently, EBM contributes to patient safety both by educating clinicians on the value and use of empirical evidence for medical practice and via large-scale initiatives to improve care processes. Attempts to apply EBM to individual patient care are limited, in part, because EBM relies on biostatistical and epidemiological reasoning to assess whether a screening, diagnostic, or treatment process produces desired health outcomes for a general population. Health care processes that are most amenable to EBM are those that can be standardized or routinized; non-routine processes, such as diagnosing and treating a person with both acute and chronic co-morbidities, are cases where EBM has limited applicability. To improve patient safety, health care
organizations should not rely solely on EBM, but also recognize the need to foster mindfulness within the medical professions and develop patient-centric organizational systems and cultures.
The Institute of Medicine (IOM) report, To Err is Human, made the U.S. public both aware and alarmed that medical errors are estimated to kill between 44,000 and 98,000 Americans each year (Kohn, Corrigan, & Donaldson, 1999). The follow-up report, Crossing the Quality Chasm, advocated reforming the entire health care system – including the organization, delivery, and financing of care – to improve quality while containing costs (Institute of Medicine, 2001a). These reports have generated a wave of funding, research, and organizational changes, including a renewed emphasis on a variety of approaches to improving both patient safety and the quality of care. A fundamental assumption by the IOM, as well as by many other health care providers and health services researchers, is that evidence-based medicine (EBM) improves the effectiveness of medical diagnosis and treatment and, thus, the safety of patients (Berwick, 2002). However, EBM remains controversial, especially its connection to individual patient safety (Miles & Loughlin, 2006; Miles, Polychronis, & Grey, 2006). Hence, this chapter addresses three research questions:
1. How does EBM contribute to patient safety?
2. How and why is EBM limited in improving patient safety?
3. How can patient safety be maximized, given the limitations of EBM?
The chapter is divided into six sections. The first section provides a definition of EBM and patient safety, and establishes the relationship between these two concepts. Section two examines the contributions of EBM to patient safety. The third section reviews the criticisms of EBM and its limitations for improving patient safety. Section four explores these limitations by examining the heterogeneity of treatment effects (HTE), uncertainty, and the tenets of statistical quality control and improvement. The fifth section addresses how patient safety can be maximized given the limitations of EBM, while the sixth and concluding section focuses on the research challenges this perspective raises for health care organizations and health services researchers.
DEFINING EVIDENCE-BASED MEDICINE AND PATIENT SAFETY
The term evidence-based medicine (EBM) was proposed in the early 1990s by various academic physicians – first in Canada, then the United Kingdom, and, lastly, the United States – as a new way of teaching the practice of medicine (Cohen, Stavri, & Hersh, 2004; Evidence-Based Medicine Working Group, 1992). These physicians believed that a paradigm shift was occurring, from ‘‘intuition, unsystematic clinical experience and pathophysiological rationale’’ to an emphasis on ‘‘evidence from clinical research’’ (Evidence-Based Medicine Working Group, 1992). While there are several definitions of the term, Davidoff and his colleagues (Davidoff, Haynes, Sackett, & Smith, 1995) have articulated the most comprehensive and useful one for our discussion:
[E]vidence-based medicine is rooted in five linked ideas: firstly, clinical decisions should be based on the best available scientific evidence; secondly, the clinical problem – rather than habits or protocols – should determine the type of evidence to be sought; thirdly, identifying the best evidence means using epidemiological and biostatistical ways of thinking; fourthly, conclusions derived from identifying and critically appraising evidence are useful only if put into action in managing patients or making health care decisions; and, finally, performance should be constantly evaluated. (p. 1085)
Compare the above definition of EBM with the definitions of both patient safety and safe care as promulgated by the IOM. On the one hand, patient safety is assured through ‘‘[t]he prevention of harm caused by errors of commission and omission’’ (Aspden, Corrigan, Wolcott, & Erickson, 2004). On the other hand, ‘‘[s]afe care involves making evidence-based clinical decisions to maximize the health outcomes of an individual and to minimize the potential for harm. Both errors of commission and omission should be avoided’’ (Aspden et al., 2004). Clearly, the IOM definition of patient safety is deeply rooted in Western medical tradition, with the dual notions of providing benefit and preventing harm echoing key passages in the Hippocratic Oath (Smith, 1996; von Staden, 1996). Also, the definition of safe care draws explicitly on EBM as the basis for both maximizing health outcomes for, and minimizing harm to, the patient. Moreover, the above definition aligns EBM with several tenets of quality improvement, including making decisions based on empirical data, using statistics to disclose variations in process outcomes, acting to improve processes, and continuous evaluation. EBM’s explicit connection to both safe care and quality
improvement, as we will demonstrate, has significant implications for delimiting EBM.
CONTRIBUTIONS OF EVIDENCE-BASED MEDICINE TO PATIENT SAFETY
EBM has made two large contributions to the improvement of patient safety since its introduction in the early 1990s. On the one hand, EBM and its offshoots – evidence-based nursing, evidence-based practice, etc. – have become fundamental components of medical, nursing, and allied health professional education. Currently, in most academic health centers practicing Western medicine, medical, nursing, and allied health students are exposed to the best empirical evidence for diagnosis and treatment, are taught how to find and use EBM guidelines, and are encouraged to maintain their knowledge and application of EBM once they enter practice (Gerhardt, Schoettker, Donovan, Kotagal, & Muething, 2007; McCluskey & Lovarini, 2005; McConnell, Lekan, Hebert, & Leatherwood, 2007; Nail-Chiwetalu & Ratner, 2006; Sinclair, 2004; Slawson & Shaughnessy, 2005; Stone & Rowles, 2007; Wanvarie et al., 2006; Weberschock et al., 2005). While concerns remain about the effectiveness of the dissemination of EBM practice guidelines and the quality of the evidence for those guidelines, there has been a clear shift toward the use of EBM (Grimshaw, 2004a, 2004b, 2006). On the other hand, EBM informs many of the pay-for-performance and other projects that attempt to improve health care quality in hospitals, medical groups, and nursing homes. Leading examples in the U.S. include Medicare's Hospital Compare (http://www.hospitalcompare.hhs.gov/) and the 5 Million Lives Campaign (http://www.ihi.org/IHI/Programs/Campaign/) sponsored by the Institute for Healthcare Improvement. Currently, Hospital Compare includes 21 process-of-care measures: eight related to heart attack care, four related to heart failure care, seven related to pneumonia care, and two related to surgical infection prevention. All of these care measures are derived from EBM.
In turn, the 5 Million Lives Campaign includes multiple EBM interventions, from preventing ventilator-associated pneumonia and pressure ulcers to reducing surgical complications and delivering evidence-based care for acute myocardial infarction and congestive heart failure.
CRITICISMS OF EVIDENCE-BASED MEDICINE AND ITS LIMITATIONS FOR PATIENT SAFETY
Criticisms of EBM include its (1) narrow definition of evidence, for example, its elevation of randomized controlled trials; (2) reliance on empiricism; (3) lack of evidence of its effectiveness; (4) limited usefulness for individual patients; and (5) threats to physician autonomy and patient relations (Cohen et al., 2004). The first two criticisms have their basis in disputes over the philosophy of science, while the last criticism is disputed on both political and ethical grounds. Pragmatically, the most damaging of these criticisms are the lack of evidence for EBM's effectiveness and its limited usefulness for individual patients. Indeed, few rigorous empirical studies measure the impact of EBM on patient safety and safe care (see, however, Patkar et al., 2006; Shojania, Duncan, McDonald, Wachter, & Markowitz, 2001). Thus, its effectiveness in reducing errors and improving health outcomes remains largely untested (Buetow, Upshur, Miles, & Loughlin, 2006). Moreover, since its conception, EBM has been faulted for its provider-centric focus (Bensing, 2000) and its limited benefit for the individual patient (Marshall, 2006).
HETEROGENEITY OF TREATMENT EFFECTS, UNCERTAINTY, AND QUALITY IMPROVEMENT
The arguments of Kravitz and his colleagues regarding HTE provide, perhaps, the most constructive way to understand why EBM has had limited benefit for individual patients (Kravitz, Duan, & Braslow, 2004). At the same time, the uncertainty associated with individual patient differences creates a common-cause source of variance that limits the applicability of EBM. From a quality improvement perspective, EBM is least applicable to non-routine health care processes. These points are explicated in the three parts of this section of the chapter.
Heterogeneity of Treatment Effects
HTE occurs when the same treatment, given to different patients within a population, generates varying outcomes. In statistical terms, HTE ‘‘is the magnitude of the variation of individual treatment effects across a population’’ (Kravitz et al., 2004, p. 664). Four dimensions of treatment-effect heterogeneity contribute to how an individual may respond to medical
treatment: (1) risk of disease without treatment; (2) responsiveness to treatment; (3) vulnerability to adverse effects; and (4) utility or preferences for different outcomes. Variation in one or more of these dimensions creates treatment-effect heterogeneity. HTE has important implications for clinical trials, which may be characterized as controlled experiments that test a population sample. Unless extraordinary efforts are made to test a random sample of the population stratified into statistically meaningful subgroups, a clinical trial typically tests treatments on a single subgroup within the population. Furthermore, Kravitz and his colleagues caution that the trend toward including women, minorities, and children, as well as men, in a single trial may ‘‘do nothing but ensure that the estimates for any one subgroup are unreliable due to small numbers’’ (p. 677). Given either approach, the average treatment effect reported from a clinical trial may drastically underestimate the HTE for a treatment once it is widely disseminated into practice. ‘‘When HTE is present, the modest benefit ascribed to many treatments in clinical trials can be misleading because modest average effects may reflect a mixture of substantial benefits for some, little benefit for many, and harm for a few’’ (Kravitz et al., 2004, p. 662).
Uncertainty
While Kravitz and his colleagues, for the most part, focus on the HTE associated with randomized controlled trials for drugs, their arguments highlight how EBM has to deal with four types of uncertainties: risk without treatment, responsiveness to treatment, vulnerability to adverse effects, and utility for different outcomes. From a statistical quality-control perspective (Shewhart, 1939), these uncertainties derive from differences among patients, not from the medical treatment per se. In other words, patients, as key inputs to the care process, represent a ‘‘common-cause’’ source of variation.
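Kravitz et al.'s warning about averages can be made concrete with a small numerical sketch; the subgroup sizes and effect sizes below are invented purely for illustration and carry no clinical meaning.

```python
# Hypothetical illustration of masked heterogeneity of treatment effects:
# a modest *average* effect hides substantial benefit for some, little
# benefit for many, and harm for a few. All numbers are invented.

# (subgroup label, individual treatment effect) for a population of 100
population = (
    [("substantial benefit", 8.0)] * 10   # large benefit for a few
    + [("little benefit", 0.5)] * 80      # little benefit for many
    + [("harm", -3.0)] * 10               # harm for a few
)

effects = [effect for _, effect in population]
average_effect = sum(effects) / len(effects)

# A trial reporting only the mean would show a modest positive effect...
print(f"average treatment effect: {average_effect:.2f}")   # 0.90

# ...even though the subgroup-level picture is very different.
for group in ("substantial benefit", "little benefit", "harm"):
    sub = [e for g, e in population if g == group]
    print(f"{group}: n={len(sub)}, mean effect={sum(sub)/len(sub):+.1f}")
```

The reported average of 0.90 would suggest a mildly beneficial treatment, yet one patient in ten in this hypothetical population is actively harmed by it.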
In contrast, EBM attempts to eliminate the ‘‘special-cause’’ variation associated with different types of screenings, diagnostic tests, and treatments, as well as the variance introduced if different protocols are used for administering the same type of screening, diagnostic test, or treatment (McNeil, 2001).
Quality Improvement
From a quality improvement perspective, EBM works best on care processes with low to moderate uncertainty. As uncertainty within a care
process increases, the utility of methods that reduce special-cause variance, such as EBM, decreases. White (2004) discusses this observation in terms of EBM and patient safety: For EBM, the challenge is not just systematizing a vast and rapidly expanding knowledge base, but it is also to support its application to situations involving complex clinical judgments. In such situations, the usefulness of standardized guidelines may be limited, especially for seriously ill patients who account for a disproportionately large share of total expenditures. (p. 863)
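The common-cause/special-cause distinction invoked in this section comes from Shewhart's statistical quality control and can be sketched with a simple three-sigma control chart. The daily counts below are invented, and the sigma estimate is deliberately simplified (production control charts usually estimate sigma from moving ranges rather than the raw standard deviation).

```python
# Minimal sketch of Shewhart-style control limits: points inside the
# three-sigma band are treated as common-cause variation (inherent to the
# process, e.g. patient differences); points outside it signal special
# causes of the kind that methods such as EBM try to eliminate.
import statistics

# Hypothetical daily counts of some process deviation.
observations = [4, 5, 3, 6, 4, 5, 4, 3, 5, 14, 4, 5]

mean = statistics.mean(observations)
sigma = statistics.pstdev(observations)   # simplified sigma estimate
upper = mean + 3 * sigma
lower = max(0.0, mean - 3 * sigma)        # counts cannot go below zero

special_causes = [x for x in observations if not lower <= x <= upper]
print(f"control limits: [{lower:.1f}, {upper:.1f}]")
print(f"special-cause signals: {special_causes}")  # [14]
```

Only the spike of 14 falls outside the band; the day-to-day wobble between 3 and 6 is common-cause variation that no amount of protocol standardization will remove.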
Moreover, the limited research conducted on the successes and failures of various quality initiatives has found that most successes occur when interventions are applied to processes with greater rather than lesser certainty (Easton & Jarrel, 1998; Flynn, Schroeder, & Sadao, 1995). Drawing upon the preceding studies and related research on quality improvement, Lillrank and his colleagues (Lillrank, 2002, 2003; Lillrank & Liukko, 2004) offer a useful metaphor, the quality broom, that has clear implications for EBM and patient safety. Fig. 1 shows a traditional broom with a hard thin handle, a broader middle section linking the handle and bristles, and the bristles themselves. This metaphor characterizes the three categories of care processes and their increasing level of uncertainty: standard, routine, and non-routine.
[Fig. 1 schematic: three process categories arrayed along an increasing level of uncertainty – STANDARD (identical repetition; compliance; procedures; error), ROUTINE (similar but not identical repetition; selection; clinical guidelines; deviation), and NON-ROUTINE (non-repetitive; interpretation; intuition; failure) – with quality systems governing the standard end and quality culture the non-routine end.]
Fig. 1. The Quality Broom. Source: Lillrank and Liukko (2004, p. 44).
Standard care processes (the thin, hard handle) have little uncertainty, and can be repeated without significant deviation. The administration of inactivated influenza vaccines via a flu shot is an example of a standard care process. In contrast, routine types of care processes (the middle section combining the handle and bristles) are bundles of standard sub-processes intermixed with patient-based variation. Routine care processes call upon methods for (a) assessing a patient’s risk of illness without treatment; (b) specifying decision rules for generating an appropriate treatment based on the patient’s responsiveness, vulnerability to side effects, and preferences; and (c) implementing a treatment to eliminate or alleviate the patient’s illness. ‘‘The essential thing in managing routine processes is not mindless, defect-free repetition (as in standard processes), but assessment and classification of input [i.e., patients], and selection from a finite set of alternative algorithms and actions [i.e., EBM]’’ (Lillrank & Liukko, 2004, p. 41). Lastly, some health care processes are non-routine (the bristles themselves). In many such instances, the inputs provided by the patients’ symptoms are unclear and not easily diagnosed (Lillrank, 2002, 2003; Lillrank & Liukko, 2004). In other instances, given a confirmed diagnosis, the efficacy of various treatments for the disease may be uncertain, with no clear understanding of the possible outcomes (McNeil, 2001). Non-routine processes, Lillrank and Liukko (2004) argue, ‘‘are best managed by indirect means, such as competence, improvement and professional values, visions and missions’’ (p. 44).
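Lillrank and Liukko's notion of a routine process – classify the input, then select from a finite set of alternatives, escalating to a non-routine process when the rules run out – can be sketched as a simple decision procedure. The attributes, thresholds, and action labels below are entirely hypothetical and carry no clinical meaning; the fourth HTE dimension, patient preferences, is omitted for brevity.

```python
# Illustrative sketch of a "routine" care process in Lillrank & Liukko's
# sense: assess the input (the patient), then select from a finite set of
# alternative actions, handing off to a non-routine process when the
# decision rules do not apply. All thresholds and labels are hypothetical.
def select_action(risk_without_treatment: float,
                  responsive: bool,
                  vulnerable_to_side_effects: bool) -> str:
    # (a) assess the patient's risk of illness without treatment
    if risk_without_treatment < 0.1:
        return "watchful waiting"
    # (b) decision rules based on responsiveness and vulnerability
    if responsive and not vulnerable_to_side_effects:
        return "first-line therapy"
    if responsive and vulnerable_to_side_effects:
        return "reduced-dose therapy"
    # (c) rules exhausted: escalate to a non-routine (expert) process
    return "refer for non-routine assessment"

print(select_action(0.05, True, False))  # watchful waiting
print(select_action(0.40, True, True))   # reduced-dose therapy
print(select_action(0.40, False, True))  # refer for non-routine assessment
```

The final branch is the essential point: a routine process is defined as much by its finite rule set as by its explicit hand-off to non-routine handling when classification fails.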
IMPLICATIONS FOR MAXIMIZING PATIENT SAFETY
While EBM reduces much of the ‘‘special-cause’’ variance in clinical decision-making processes, its impact on patient safety is limited by the ‘‘common-cause’’ variance represented by patient differences. For routine health care processes, Fig. 1 suggests that a mixture of EBM and clinical judgment is necessary to prevent the errors that result from improper diagnoses or the selection of improper treatments or therapies. For non-routine care processes, both clinical judgment and organizational culture come to the foreground, while EBM recedes to the background. Interestingly, sound clinical judgment is best supported by a strong culture of safety and quality – developed through health professionals' clinical training (Bosk, 1979) and reinforced by organizational and institutional values
(Nieva & Sorra, 2003). Indeed, the second IOM report, Crossing the Quality Chasm, lays out a compelling vision for reorganizing the health care delivery system and improving patient safety and care quality (Institute of Medicine, 2001b). The aspects of this vision most relevant for maximizing patient safety vis-à-vis EBM involve two key activities: (1) sharpening physicians' abilities to operate in an increasingly complex and interdependent health system, and (2) sharpening health care organizations' ability to support physicians' decision-making and handling of routine and non-routine care processes.
Sharpening the Physician
Sharpening the physician as an instrument of clinical judgment requires changing the models of medical education and continuing medical education (CME). The current model of medical education focuses on biomedical education and practice, and has its roots in the Flexner Report of 1910 (Flexner, 1972, c1910). Given the changes called for in the IOM report (Institute of Medicine, 2001b), two substantial changes to medical education may be in order. The first involves multidisciplinary training. If physicians are to work effectively within integrated care teams to manage non-routine care, they must have some exposure to other health professionals during their clinical training. Physicians-in-training should work with students from nursing, occupational therapy, physical therapy, health care management, and other disciplines on a variety of case studies, applied projects, and/or simulations. Ideally, such multidisciplinary training should begin early in medical school and take place across the four years of medical training. Ultimately, a new model for integrated health care education may emerge that includes medicine, nursing, health care management, and various therapeutic disciplines. The second change involves training physicians to work effectively with issues beyond the clinical practice of medicine. Given the increasingly complex, technological, and multidisciplinary nature of health care, physicians-in-training should be exposed to coursework and practical experience in such diverse topics as quality improvement tools and methods, health care management, and health informatics. Ideally, these courses should be integrated into the four-year curriculum of medical school. However, there may be enough coursework to require an additional year of medical school, as well as post-graduate fellowships for those physicians most interested in applying the tools of quality improvement, health informatics, and management.
26
GRANT T. SAVAGE AND ERIC S. WILLIAMS
Physician education, however, does not end with medical school and residency; rather, it moves into the professionally controlled, fairly unstructured system of CME. Given the necessity of sharpening physicians' ability to operate in a highly integrated system, the current system of CME needs substantial revamping. While CME might continue to be controlled by professional boards and societies, both IOM reports suggest that CME needs to be more rigorous and systematic. The model of training and retraining used by airlines has been suggested (Gaba, 2000). As applied to physicians, such training may involve periodic training and evaluation to maintain licensure or board certification. Such a system would also help to remove physicians who become less competent through age, stress, or other conditions. We believe the key to gaining physician acceptance of such a system would be to allow the continuation of professional control over licensure and board certification.
Sharpening the Organization

To sharpen the abilities of health care organizations, the IOM report suggests a number of system changes (Institute of Medicine, 2001b). Here, the focus is on two ways of improving the organization's capacity to support clinicians and improve patient safety: (1) creating learning organizations, and (2) developing high-reliability organizations.
Creating Learning Organizations

Physicians are highly skilled and expensive workers. If they are engaged in standard care processes, they typically are being underutilized. Depending upon their degree of specialization, physicians contribute the most value to routine and, especially, non-routine care processes. The key is to provide an organizational context that supports them and allows them to add the maximum amount of value.

One element in improving physicians' ability to manage non-routine processes is to improve an organization's capacity for problem solving and learning. The learning organization framework (Argyris & Schön, 1978; Senge, 1990) offers substantial insight into this process of learning at two levels (Senge, 1990). Single-loop learning involves incremental advances in existing practices. For example, most quality-improvement programs operate by making incremental improvements in existing practices. Double-loop learning emerges when organizations examine their mental models and the underlying assumptions inherent in their care processes. Such an approach to quality improvement involves radically reengineering routine care processes (Hammer & Champy, 1993). A learning organization emerges both as an organization comes to understand each type of learning and as continual learning becomes part of the organization's culture and operations.
Developing High-Reliability Organizations

In discussing the current state of high-reliability organization theory, Weick, Sutcliffe, and Obstfeld (1999) suggest that the principal factors include a "strategic prioritization of safety, careful attention to design and procedures, a limited degree of trial-and-error learning, redundancy, decentralized decision-making, continuous training often through simulation, and strong cultures that create broad vigilance for and responsiveness to potential accidents." Many of these factors have found their way into health care applications. For example, redundancy has been used in surgery to avoid wrong-site surgeries: surgeons mark the limb or area to be operated on, and the mark is verified by the patient and others. Strong organizational cultures, particularly safety cultures, have also gained substantial credibility in the patient safety literature (Nieva & Sorra, 2003). However, while there are numerous success stories, Resar (2006) suggests that we need to learn "to walk before running in creating high-reliability organizations."
SUMMARY AND CONCLUSIONS

We addressed three research questions in this chapter, first asking, "How does EBM contribute to patient safety?" We showed that EBM contributes to patient safety, both by educating clinicians on the value and use of empirical evidence for medical practice and via large-scale initiatives to improve care processes. Next, we addressed the question, "How and why is EBM limited in improving patient safety?" While there are five basic criticisms of EBM, we focused on its pragmatic shortcomings, especially its difficulty in application for individual patients. On one hand, EBM reduces much of the "special-cause" variance in clinical decision-making processes; on the other hand, its impact is limited by the "common-cause" variance represented by patient differences. Hence, EBM works best on care processes with low to moderate uncertainty.
As uncertainty within a care process increases, the utility of methods that reduce special-cause variance, such as EBM, decreases. Finally, we asked, "How can patient safety be maximized, given the limitations of EBM?" Health care processes that are most amenable to EBM are those that can be standardized or routinized; for non-routine processes, such as diagnosing and treating a person with both acute and chronic co-morbidities, EBM has limited applicability. To improve patient safety, health care organizations should not rely solely on EBM, but should also recognize the need to foster mindfulness within the medical professions and to develop patient-centric systems within high-reliability and learning organization cultures.

The recognition that health care organizations adopting EBM should also strengthen an organizational culture of patient safety highlights a potential arena for research. High-reliability health care organizations focus on developing methods to translate effective non-routine care processes into routine care processes. This transition is akin to the research and development process. As a complement to EBM, health service researchers should focus on creating systems for assessing non-routine care process innovations that reduce patient-based variation. Nonetheless, in health care, the diffusion of innovations is notoriously slow (Balas & Boren, 2000). Thus, an applied research opportunity lies in developing systems for training in and diffusing both EBM and complementary medical innovations, once they are deemed to be safe and effective.

While preparing this manuscript, we came to recognize that clinicians and those working to improve patient safety and care quality via EBM see most care processes in different ways. In essence, they view quality improvement through different lenses. Physicians, attuned to the medical uncertainties embodied by the individual patient, see most care processes as non-routine. In contrast, health service researchers, viewing population health through the lenses of epidemiology and biostatistics, see most care processes as routine. This work shows both to be right … and wrong.
REFERENCES

Argyris, C., & Schön, D. A. (1978). Organizational learning. Reading, MA: Addison-Wesley.
Aspden, P., Corrigan, J. M., Wolcott, J., & Erickson, S. M. (Eds.). (2004). Patient safety: Achieving a new standard of care. Washington, DC: National Academies Press.
Balas, E. A., & Boren, S. A. (2000). Managing clinical knowledge for health care improvement. Yearbook of Medical Informatics, 65–70.
Evidence-Based Medicine
29
Bensing, J. (2000). Bridging the gap: The separate worlds of evidence-based medicine and patient-centered medicine. Patient Education and Counseling, 39(1), 17–25.
Berwick, D. M. (2002). A user's manual for the IOM's 'Quality Chasm' report. Health Affairs, 21(3), 80–90.
Bosk, C. L. (1979). Forgive and remember: Managing medical failure. Chicago: University of Chicago Press.
Buetow, S., Upshur, R., Miles, A., & Loughlin, M. (2006). Taking stock of evidence-based medicine: Opportunities for its continuing evolution. Journal of Evaluation in Clinical Practice, 12(4), 399–404.
Cohen, A. M., Stavri, P. Z., & Hersh, W. R. (2004). A categorization and analysis of the criticisms of evidence-based medicine. International Journal of Medical Informatics, 73(1), 35–43.
Davidoff, F., Haynes, B., Sackett, D., & Smith, R. (1995). Evidence based medicine. British Medical Journal, 310(6987), 1085–1086.
Easton, G. S., & Jarrel, S. (1998). The effects of total quality management on corporate performance: An empirical investigation. Journal of Business, 71, 253–307.
Evidence-Based Medicine Working Group. (1992). Evidence-based medicine: A new approach to teaching the practice of medicine. JAMA, 268(17), 2420–2425.
Flexner, A. (1972, c1910). Medical education in the United States and Canada: A report to the Carnegie Foundation for the Advancement of Teaching. New York: Arno Press.
Flynn, B. B., Schroeder, R. G., & Sadao, S. (1995). The impact of quality management practices on performance and competitive advantage. Decision Science, 11, 611–629.
Gaba, D. M. (2000). Structural and organizational issues in patient safety: A comparison of health care to other high-hazard industries. California Management Review, 43(1), 83–102.
Gerhardt, W. E., Schoettker, P. J., Donovan, E. F., Kotagal, U. R., & Muething, S. E. (2007). Putting evidence-based clinical practice guidelines into practice: An academic pediatric center's experience. Joint Commission Journal on Quality and Patient Safety, 33(4), 226–235.
Grimshaw, J. (2004a). Implementing clinical guidelines: Current evidence and future implications. Journal of Continuing Education in the Health Professions, 24(Suppl. 1), S31–S37.
Grimshaw, J. M. (2004b). Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technology Assessment, 8(6), iii–iv, 1–72.
Grimshaw, J. (2006). Toward evidence-based quality improvement: Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966–1998. Journal of General Internal Medicine, 21(Suppl. 2), S14–S20.
Hammer, M., & Champy, J. (1993). Reengineering the corporation: A manifesto for business revolution (1st ed.). New York, NY: HarperBusiness.
Institute of Medicine. (2001a). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Institute of Medicine. (2001b). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (1999). To err is human: Building a safer health system. Washington, DC: National Academy Press.
Kravitz, R. L., Duan, N., & Braslow, J. (2004). Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. The Milbank Quarterly, 82(4), 661–687.
Lillrank, P. (2002). The broom and nonroutine processes: A metaphor for understanding variability in organizations. Knowledge and Process Management, 9(3), 143–148.
Lillrank, P. (2003). The quality of standard, routine, and nonroutine processes. Organization Studies, 34(1), 215–233.
Lillrank, P., & Liukko, M. (2004). Standard, routine and non-routine processes in health care. International Journal of Health Care Quality Assurance, 17(1), 39–46.
Marshall, J. C. (2006). Surgical decision-making: Integrating evidence, inference, and experience. The Surgical Clinics of North America, 86(1), 201–215, xii.
McCluskey, A., & Lovarini, M. (2005). Providing education on evidence-based practice improved knowledge but did not change behaviour: A before and after study. BMC Medical Education, 5, 40.
McConnell, E. S., Lekan, D., Hebert, C., & Leatherwood, L. (2007). Academic-practice partnerships to promote evidence-based practice in long-term care: Oral hygiene care practices as an exemplar. Nursing Outlook, 55(2), 95–105.
McNeil, B. J. (2001). Hidden barriers to improvement in the quality of care. New England Journal of Medicine, 345(22), 1612–1620.
Miles, A., & Loughlin, M. (2006). Continuing the evidence-based health care debate in 2006: The progress and price of EBM. Journal of Evaluation in Clinical Practice, 12(4), 385–398.
Miles, A., Polychronis, A., & Grey, J. E. (2006). The evidence-based health care debate – 2006: Where are we now? Journal of Evaluation in Clinical Practice, 12(3), 239–247.
Nail-Chiwetalu, B. J., & Ratner, N. B. (2006). Information literacy for speech-language pathologists: A key to evidence-based practice. Language, Speech, and Hearing Services in Schools, 37(3), 157–167.
Nieva, V. F., & Sorra, J. (2003). Safety culture assessment: A tool for improving patient safety in healthcare organizations. Quality and Safety in Health Care, 12(Suppl. II), i17–i23.
Patkar, V., Hurt, C., Steele, R., Love, S., Purushotham, A., Williams, M., Thomson, R., & Fox, J. (2006). Evidence-based guidelines and decision support services: A discussion and evaluation in triple assessment of suspected breast cancer. British Journal of Cancer, 95(11), 1490–1496.
Resar, R. K. (2006). Making noncatastrophic health care processes reliable: Learning to walk before running in creating high-reliability organizations. Health Services Research, 41(4, Pt. 2), 1677–1689.
Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday/Currency.
Shewhart, W. A. (1939). Statistical method from the viewpoint of quality control. Washington, DC: The Graduate School of the Department of Agriculture.
Shojania, K. G., Duncan, B. W., McDonald, K. M., Wachter, R. M., & Markowitz, A. J. (2001). Making health care safer: A critical analysis of patient safety practices. Evidence Report/Technology Assessment No. 43. Rockville, MD: Agency for Healthcare Research and Quality.
Sinclair, S. (2004). Evidence-based medicine: A new ritual in medical teaching. British Medical Bulletin, 69(1), 179–196.
Slawson, D. C., & Shaughnessy, A. F. (2005). Teaching evidence-based medicine: Should we be teaching information management instead? Academic Medicine, 80(7), 685–689.
Smith, D. C. (1996). The Hippocratic Oath and modern medicine. Journal of the History of Medicine and Allied Sciences, 51(4), 484–500.
Stone, C., & Rowles, C. J. (2007). Nursing students can help support evidence-based practice on clinical nursing units. Journal of Nursing Management, 15(3), 367–370.
von Staden, H. (1996). "In a pure and holy way": Personal and professional conduct in the Hippocratic Oath? Journal of the History of Medicine and Allied Sciences, 51(4), 404–437.
Wanvarie, S., Sathapatayavongs, B., Sirinavin, S., Ingsathit, A., Ungkanont, A., & Sirinan, C. (2006). Evidence-based medicine in clinical curriculum. Annals of the Academy of Medicine, Singapore, 35(9), 615–618.
Weberschock, T. B., Ginn, T. C., Reinhold, J., Strametz, R., Krug, D., Bergold, M., & Schulze, J. (2005). Change in knowledge and skills of year 3 undergraduates in evidence-based medicine seminars. Medical Education, 39(7), 665–671.
Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for high reliability: Processes of collective mindfulness. Research in Organizational Behavior, 21, 81–123.
White, W. D. (2004). Reason, rationalization, and professionalism in the era of managed care. Journal of Health Politics, Policy and Law, 29(4–5), 853–868; discussion 1005–1019.
THE ANATOMY AND PHYSIOLOGY OF ERROR IN ADVERSE HEALTH CARE EVENTS

Patrick A. Palmieri, Patricia R. DeLucia, Lori T. Peterson, Tammy E. Ott and Alexia Green

ABSTRACT

Recent reports by the Institute of Medicine (IOM) signal a substantial yet unrealized deficit in patient safety innovation and improvement. With the aim of addressing this deficit, we provide an introductory account of clinical error resulting from poorly designed systems by reviewing the relevant health care, management, psychology, and organizational accident sciences literature. First, we discuss the concept of health care error and describe two approaches to analyzing error proliferation and causation. Next, by applying transdisciplinary evidence and knowledge to health care, we detail the attributes fundamental to constructing safer health care systems as embedded components within the complex adaptive environment. Then, the Health Care Error Proliferation Model explains the sequence of events typically leading to adverse outcomes, emphasizing the role that organizational and external cultures play in error identification, prevention, mitigation, and defense construction. Subsequently, we discuss the critical contribution health care leaders can make to address error as they strive to position their institution as a high reliability organization (HRO). Finally, we conclude that the future of patient safety depends on health care leaders adopting a system philosophy of error management, investigation, mitigation, and prevention. This change is accomplished when leaders apply the basic organizational accident and health care safety principles within their respective organizations.

Patient Safety and Health Care Management
Advances in Health Care Management, Volume 7, 33–68
Copyright © 2008 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1474-8231/doi:10.1016/S1474-8231(08)07003-1
INTRODUCTION

The Institute of Medicine (IOM) established the contemporary starting line for the national patient safety movement with the seminal report, To Err is Human: Building a Safer Health System. The IOM estimated that 98,000 patients die annually in American hospitals as a result of medical errors (Kohn, Corrigan, & Donaldson, 2000). Subsequent research using additional resources indicated that the number of preventable deaths was closer to 200,000 each year (Zhan & Miller, 2003). Based on its analysis, the IOM depicted the overall state of health care as a system that frequently harms and routinely fails to deliver the appropriate standard of care (Davis et al., 2002).

In the years following the IOM report, tremendous public and political pressures motivated health care organizations to identify and attempt to reduce adverse events (Berta & Baker, 2004; Wachter, 2004). Indeed, patient safety advocates point to the IOM report as "galvanizing a dramatically expanded level of conversation and concern about patient injuries in health care both in the United States and abroad" (Leape & Berwick, 2005, p. 2384). Patient safety became "a national problem that became increasingly difficult for providers to ignore" (Devers, Pham, & Lui, 2004, p. 103). Regulatory (e.g., CMS), accreditation (e.g., Joint Commission), and quality-improvement organizations (e.g., NCQA) develop and advocate patient safety standards derived primarily from expert panel recommendations and opinions, the majority of which lack evidentiary support (Agency for Healthcare Research and Quality, 2001; Institute of Medicine, 2001). Although health care organizations acknowledged the importance of To Err is Human, the majority of leaders "expressed different levels of commitment to patient safety" (Devers et al., 2004, p. 111), reflected by increasing patient morbidity and mortality related to adverse events (Institute of Medicine, 2004; Zhan & Miller, 2003).
Devers and colleagues
characterized the impact of the patient safety movement on improving health care systems as "occurring relatively slowly and incrementally" (2004, p. 114). By most accounts, the patient safety movement requires transformation in the areas of error identification, process improvement, and cultural renovation to defend patients from harm (Committee on the Future of Rural Healthcare, 2005; Institute of Medicine, 2001, 2003, 2004, 2007a, 2007b; Kohn et al., 2000).
Reducing Errors

The purpose of this chapter is to describe the current state of the often unreliable health care system and to provide an introductory account of error resulting from poorly designed systems. In doing so, we review the relevant health care, management, psychology, and organizational accident science literature and synthesize an error framework from a transdisciplinary perspective. First, we discuss the concept of error and the complexities associated with errors within the complex adaptive health care system. Second, we describe two approaches to analyzing error causation and discuss the associated implications. Next, we summarize the Swiss Cheese Model of adverse events (Reason, 1990, 2000), advocating modifications to emphasize both the impact of the complex adaptive system on health care professionals and the role error serves in distracting leaders and clinicians from system improvements. Through this transdisciplinary approach, we present our Health Care Error Proliferation Model. This adaptation of the Swiss Cheese Model improves and updates the applicability of the general structural elements specifically to health care. Finally, we discuss the critical role health care leadership serves in proactively addressing error causation. Ultimately, the emphasis on error reduction and system defense strategies will lead to a minimal number of adverse events, an attribute associated with high reliability organizations (HROs) and industries.
PHYSIOLOGY OF ERROR

In this section, we review two approaches to dissecting the root cause of error: the person approach and the system approach. These two approaches represent distinct philosophies of error causation, which lead to divergent methods for managing liability and reducing errors.
36
PATRICK A. PALMIERI ET AL.
Person Approach

The person approach advocates identifying the culpable party as the root cause of an adverse event (Reason, 1998). Historically, health care error investigation has focused on "who did it" instead of "why did it happen" (Kohn et al., 2000; Rasmussen, 1999; Reason, 2000). The person approach is commonly preferred because "blaming individuals is emotionally more satisfying than targeting institutions" (Reason, 2000, p. 70). A leading rationale for this person focus is attribution theory, a body of knowledge about how people explain events by either internal or external factors (Heider, 1958; Weiner, 1972). Generally, attribution theory describes the process whereby leaders seek: (1) to understand the cause of an event (Heider, 1958), (2) to assess responsibility for the outcomes (Martinko & Gardner, 1982), and (3) to appraise the personal attributes of the involved parties (Weiner, 1995b). These three steps allow leaders to rapidly form determinations about employee behavior and performance (Harvey & Weary, 1985).

Attribution theory explains leaders' and managers' thinking processes as they reach conclusions about the development and causation of an event (Martinko, Douglas, & Harvey, 2006; Martinko & Thomson, 1998). Essential to attribution theory is the method by which leaders form opinions about workers' performance (Martinko & Thomson, 1998; Weiner, 1995a). Attribution is synonymous with explaining "why did something occur" (Green & Mitchell, 1979) in the context of the complex adaptive environment. As such, attributing error to a particular clinician is a quick, easy, and efficient way for leaders to formulate immediate conclusions. Although swift in generating answers, error attribution is of doubtful accuracy within complex systems, given the plethora of variables, processes, and functions (Dorner, 1996).
Moreover, following an accident, speed often matters more than accuracy, because attributing error to the clinician protects the organization from needing to admit culpability during the mitigation phase of the risk management process. With further scrutiny, however, we discover that the majority of errors leading to patient harm are not related to incompetent or substandard clinician care (Cook & Woods, 1994; Kohn et al., 2000; Reason, Carthey, & de Leval, 2001). Rather, errors more frequently reflect an inability of clinicians and other health care workers to cope with multiple gaps produced by system complexity (Dorner, 1996; Wiegmann & Shappell, 1999; Woods & Cook, 2002). Aviation is an industry that has achieved a remarkably low error rate; reports attribute 80–90% of all errors to system complexity (Wiegmann &
Shappell, 1999). Experts hypothesize that this complexity is similar for the health care industry (Helmreich, 2000; Helmreich & Davies, 2004; Kohn et al., 2000). In other words, the root causes of error are not located primarily at the individual level but tend to be a system property requiring cause-and-effect analysis to elucidate "the causes" (Gano, 2003; Reason, 2000). Often, these system flaws, including obvious and dangerous conditions, remain undiscovered, hidden, or invisible until a sentinel event results. Described simply as an "error with sad consequences" (Cherns, 1962), most accidents result from a complex chain reaction with a triggering cause or causes. In health care, a serious accident is called a sentinel event. Sentinel events are characterized as unanticipated adverse events resulting in an outcome, or potential outcome, of serious injury or death not related to a patient's expected illness trajectory (Joint Commission Resources, 2007).

By supporting the person approach, clinical professionals, health care leaders, professional boards, and even the public frequently consider practitioners involved in errors to be "at fault" for neglecting to protect the patient. In fact, the National Council of State Boards of Nursing (NCSBN) could further exacerbate the assignment of blame under the person approach by describing the types and sources of nursing error through an inductive process called "practice breakdowns" (National Council of State Boards of Nursing, 2007). One universal consequence of adopting the person approach to error management is the culture of fear it engenders (Kohn et al., 2000; Reason, 2000; Rosenthal, 1994). The term "practice breakdowns" is defined as "the disruption or absence of any of the aspects of good practice" (Benner et al., 2006, p. 53).
By categorizing error in terms of "practice breakdowns" without specifically describing or defining what the disruption to, or absence of, good practice means, error will continue to be adjudicated by examining the performance closest to the adverse event, at the clinician practice level, as a component separate and removed from the greater complex adaptive system. Perrow (1984) rejected the term "operator breakdown" in describing an accident because it blames the worker. The IOM (Kohn et al., 2000, p. 43) likewise identifies the emphasis on the "individual provider issue," rather than on failures in the process of providing care in a complex system, as problematic to improving health care delivery. In short, the term "practice breakdowns" carries a "clinician fault" connotation and may slow the improvement of health care.

Furthermore, public blame for clinician error, and subsequent punishment for adverse events due to error, are expected by the majority of health care leaders regardless of the root causes (Vincent, 1997). The result of this punitive culture is that doctors and nurses hide "practice
breakdown" situations (Kohn et al., 2000; Lawton & Parker, 2002) and mistakes, to avoid reprisal and punishment (Gibson & Singh, 2003; Kohn et al., 2000). Speaking in opposition to the person approach, Gibson and Singh (2003, p. 24) stated, "When a health care professional reports a medical error, they suffer intimidation. They lose their standing, their status and are ostracized. An impenetrable culture of silence has developed…"

This person approach was illustrated by a recent sentinel event at a Midwestern hospital. An experienced labor and delivery nurse mistakenly delivered an epidural drug to a young late-term pregnant patient through the intravenous line (Institute for Safe Medication Practices, 2006). The patient died as a direct result of this sentinel event error at the sharp end, near the bedside. However, closer analysis revealed that the fatal nursing error was probably only one in a cascade of events plagued with "hidden" or contributory errors. As is typically observed in medication delivery systems, errors originate proximally, at the system level, and cascade distally through the process to clinical practice (Smetzer & Cohen, 2006). Due to the collective inability to recognize the nursing error in the context of a complex adaptive system with abundant dormant conditions, the nurse was subjected to significant public blame, creating humiliation and hardship, as well as criminal prosecution (Institute for Safe Medication Practices, 2006; State of Wisconsin, 2006). The nurse faced serious punishment despite the presence of other significant contributory factors (Institute for Safe Medication Practices, 2006).
Numerous professional and quality-improvement organizations reacted with position statements objecting to what they considered a miscarriage of justice, one contrary to the spirit of To Err is Human (Institute for Safe Medication Practices, 2006; Kohn et al., 2000; Wisconsin Hospital Association, 2006; Wisconsin Medical Society, 2006). In fact, the continual application of blame, punishment, and shame to address health care error facilitates the cultural evolution of learned helplessness (Seligman, 1965; Garber & Seligman, 1980; Seligman, Maier, & Geer, 1968). Learned helplessness is a gradual process whereby clinicians become passive professionals as a consequence of repeated punishment for errors, punishment that makes success seem unlikely even following organizational change (Abramson, Garber, & Seligman, 1980; Martinko & Gardner, 1982). Repeated punishment for clinical errors may be internalized (Garber & Seligman, 1980) as practitioners frequently witness the adjudication of active error via "blame and shame," with chastisement and character assassination. Outside the organization's walls, sensationalized media reports only aggravate this dilemma. As a result, clinicians may develop a lack
of self-confidence in their ability to perform without punishment for mistakes, resulting in a deterioration of performance (Peterson, Maier, & Seligman, 1993). Shaped by this sharp-end focus, the condition may be exacerbated by quasi-legal entities, such as the NCSBN, that adjudicate error using the person approach and view error as a professional practice characteristic rather than a common system attribute.
System Approach

The system approach stands in contrast to the person approach in its philosophy of error and adverse event adjudication, investigation, and mitigation. According to Reason (2000), the system approach to error management unearths concealed (latent) errors and exposes vulnerability through intensive system evaluation (Reason et al., 2001), while discounting the visible (active) errors caused by being human. The system approach holds that while individual practitioners "must be responsible for the quality of their work, more errors will be eliminated by focusing on systems than on individuals" (Leape et al., 1995, p. 40). Consequently, this approach relies on investigative techniques and transdisciplinary analysis of both latent and active errors as threats to the system (Helmreich, 2000).

Systems thinking and proactive process improvement on the part of health care organizations remain a significant opportunity for advancement (Amalberti, Auroy, Berwick, & Barach, 2005). The vast majority of errors that contribute to accidents result from poorly designed systems (Cook, Render, & Woods, 2000; Rasmussen, 1990; Reason, 2000) rather than from the carelessness or neglect of professionals working within the environment (Cook & Woods, 1994; Helmreich & Davies, 2004). The IOM recognized this approach in the report Crossing the Quality Chasm (2001, p. 4), stating, "Trying harder will not work. Changing systems of care will." Even so, the responsiveness of clinicians and hospital leaders to the numerous patient safety calls has been slow to emerge (Leape & Berwick, 2005). In response to the IOM's call for comprehensive system transformation, Millenson (2003, p. 104) identified a barrier to system improvement, stating that "the IOM's focus on 'system' improvement ignores the repeated refusal by physicians and hospital leaders to adopt [better] systems" in their effort to improve patient safety.
Most clinicians believe they are already working to improve the system when they are not. This is supported by Devers et al.’s (2004, p. 111) categorization of physicians as ‘‘barrier[s] failing to buy into the magnitude of the [safety] problem.’’
40
PATRICK A. PALMIERI ET AL.
Deeply embedded but masked features, or latent factors (Dorner, 1996; Perrow, 1984; Reason, 2000), are the primary leadership focus subsequent to an adverse event (Reason, 1990). Although no system can completely eliminate error (Perrow, 1984; Rasmussen, 1990), the system approach is an instrumental philosophy for tackling potential hazards, aimed at reducing risk, increasing reliability, and improving quality (Kohn et al., 2000; Reason, 2000; Smetzer & Cohen, 1998). HROs are prime examples of the system approach put into organizational practice (Reason, 2000; Roberts, 2002). These organizations operate in high-risk industries, such as airlines and nuclear power plants, yet their system design, organizational culture, and leadership commitment to safety facilitate highly reliable, low-accident systems (Roberts, 1990; Weick & Sutcliffe, 2001). The achievement and maintenance of exceptionally low operational process variation is the key distinguishing feature of HROs (Roberts, 1990). Although health care is a high-risk industry (Helmreich & Davies, 2004; Kohn et al., 2000), by most accounts there is excessive process variation (Institute of Medicine, 2004; Kohn et al., 2000; Reason, 2000) and an unacceptably large number of adverse events (Institute of Medicine, 2004; Kohn et al., 2000; Zhan & Miller, 2003). As such, the majority of hospitals operate as low-reliability organizations. Removing ‘‘blame and shame’’ from the equation may encourage health care professionals to embrace the system approach and participate in error reporting for the express purpose of system improvement. For example, one study found that 41% of physicians are discouraged from, or not encouraged to, report medical errors (Blendon et al., 2001). Shifting to a system philosophy diminishes the appropriateness of the current physician hierarchy, which emphasizes the assumption of personal responsibility and accountability for failures (Helmreich, 2000).
Given the properties of HROs, the system philosophy is the ideal approach to address error within the complex adaptive health care system. Weick and Sutcliffe (2001) describe five ‘‘mindful’’ organizational attributes summarizing the typical HRO system approach to safe and effective operations. Reliable organizations characteristically: (1) stay preoccupied with reducing failures; (2) remain reluctant to simplify interpretations of errors; (3) maintain a heightened sensitivity to latent failures; (4) remain committed to learning from failures; and (5) defer decisions to experts at various levels. The IOM often cites and discusses these elements in calling upon the health care industry to accept the system approach to error prevention and investigation (Institute of Medicine, 2003, 2004, 2007b; Kohn et al., 2000).
41
Error in Health Care
ANATOMY OF ERROR
With technological advances signaling the new millennium of health care, we must concede that extraordinary damage results from ordinary errors (Chiles, 2002). Unmistakably, medical errors are a significant reality (Kohn et al., 2000; Rasmussen, 1999; Reason, 2000) and a frequent product of poorly constructed health care delivery systems (Helmreich, 2000; Kohn et al., 2000; Reason et al., 2001). David Eddy, a nationally recognized physician patient safety expert, concisely summarized the impact modern demands have created with technological improvement and knowledge generation by stating, ‘‘the complexity of modern medicine exceeds the inherent limitations of the unaided human mind’’ (Millenson, 1997, p. 75). In an evaluation of accidents within the context of a system, Reason and Hobbs (2003, p. 39) hold that errors result from ‘‘the failure of planned actions to achieve their desired goal, where this occurs without some unforeseeable or chance interventions.’’ Reason et al. (2001) describe this failure as a type of ‘‘vulnerable system syndrome’’ and emphasize early identification, evidence-based treatment, and prevention strategies to address process and system failures. To facilitate meaningful discussion, errors first need to be defined and accurately described (Benner et al., 2006; Wiegmann & Shappell, 2003). Broadly stated, there are two types of errors – latent and active (Reason, 1990, 2000). Both latent and active errors disrupt systems, damaging both patients and clinicians by contributing to adverse events and poor outcomes. As systems fail in a ‘‘step-by-step’’ fashion analogous to the cracking of metal under intense stress (Chiles, 2002), the evolution of a sentinel event is a series of contributory fractures. To explain this complex system of layers, fractures, and pressures, we offer the Health Care Error Proliferation Model illustrated in Fig. 1.
This model, which incorporates several important elements of Reason’s work, depicts the health care system segregated into defensive layers within the complex adaptive system, as well as part of the global health care environment. Organization leaders are positioned at the blunt-end, while the clinician works closest to patient bedsides and resides at the sharp-end. The holes in each layer provide opportunity for error to manifest when health care professionals are unable to defend these system gaps at various organizational levels. Vigilant clinician error defenses are analogous to systematic survival techniques developed through experiential learning and reflection. Frequently, these defenses derive from localized workarounds, described as clinical improvisation (Hanley & Fenton, 2007), intended to
Important Concepts:
- Holes in any layer increase the vulnerability of the entire system; the size of a hole is proportional to the significance of the vulnerability.
- It is virtually impossible to eliminate all holes.
- It is important to understand the whole system rather than fragments, and to continuously monitor the health of the whole system.
- Error closest to the patient is the sharpest; error furthest away is the bluntest.
Fig. 1. Health Care Error Proliferation Model. Source: Based on Concepts from Reason (1990, 1998). Artistic and Graphic Assistance by Larry Reising.
repair gaps produced by actions, changes, and adjustments fashioned at higher defensive layers.
Latent Conditions and Errors
The least visible but most frequent type of error can be described as latent (Rasmussen, 1990, 1999; Reason, 1990, 2000). Latent errors can be defined as those less apparent failures of processes that are precursors to the
occurrence of errors or permit them to cause patient harm. Latent conditions exist distant from the delivery of patient care. For example, commonly observed latent conditions include operational glitches caused by poorly designed surgical time-out procedures, faults created by flawed patient identification policies, and inadequate resource allocation, such as staffing, equipment, and supplies. Problematic or latent conditions may remain dormant for extended periods of time (Reason, 1990), as errors do not manifest themselves until the right circumstances arise (Reason, 2000; Wiegmann & Shappell, 2003). Latent conditions have been characterized as situations placed in the system as part of its design or caused by actions taken by decision makers removed from the direct provision of care (Sasou & Reason, 1999). ‘‘Latent conditions are to technological organizations what resident pathogens are to the human body’’ (Reason, 1998, p. 10). In the absence of attention and treatment, latent errors are literally ‘‘accidents waiting to happen’’ (Whittingham, 2004). As such, latent conditions are present as hidden malfunctions in organizational systems (Reason, 1990), contributing to the occurrence of adverse clinical events that are potentially harmful to patients (Rasmussen, 1990; Reason, 1998, 2000). Latent errors provide early and perhaps repetitive warnings of imminent accidents of consequence (Reason, 1998). Thus, in relation to health care systems, latent error identification and intervention provide an important adverse event prevention strategy (Cook & O’Connor, 2005; Cook & Woods, 1994; Reason et al., 2001).
Active Conditions and Errors
Active errors are actual breaches in the system defenses, arising when dormant latent conditions are energized by some type of stimulus (Reason, 1990). Revealed through immediate feedback (Reason, 1998), active errors are usually related to clinical tasks, manifesting out of patient care activities such as administering an intravenous antibiotic or performing a knee replacement surgery. Often, active errors result when latent strategic and operational decisions made at the highest organizational levels (Reason, 1998; Reason et al., 2001) shape a potentially dangerous setting that clinicians must navigate. Active errors act like small holes in a water container. Occasionally, improvisation is utilized: moving a single finger over a hole offers a temporary workaround and the water stops leaking (Hanley & Fenton, 2007). Over time, however, more holes will likely materialize and soon the professionals can no longer manage the workaround. Although the circumstance
necessitating a workaround can temporarily alleviate local symptoms of a systemic failure, this localized improvisation can worsen or potentiate the overall decline of a complex system. As in other industries, health care professionals can either manage unexpected events ineffectually (Weick & Sutcliffe, 2001), as if covering a hole with a finger, leading to future accidents, or they can pull the system into correction, averting future debacles. Similar to other industries, sentinel-like events are virtually impossible to predict and even more difficult to study scientifically (Reason & Mycielska, 1982). Even though active errors appear more common due to their immediate results, latent errors are actually more prevalent (Reason, 1998) but less distinguishable without diligent surveillance activities (Lawton & Parker, 2002), such as robust occurrence reporting, failure mode and effects analysis (FMEA), or root cause analysis (RCA). Hence, focused efforts to correct active errors provide little benefit to the system going forward (Rasmussen, 1990; Reason et al., 2001), especially from the future patient’s perspective. While most health care organizations find surveillance activities difficult to master, those with a sensitivity to operations and a demonstrated concern for discovering the unexpected prove successful in receiving active reporting of discrepancies (Weick & Sutcliffe, 2001). The practitioner’s perspective on error and error causation makes a difference in the robustness of surveillance. By providing practitioners with clear guidance for detecting, discussing, and reporting error, as well as frequently soliciting feedback, system issues become easier to identify. This perspective is important for the early identification of two situations that might arise when clinicians become aware of an error but may choose not to report it.
The first situation is the ‘‘error of judgment and planning,’’ in which clinician performance progresses as intended but the overall plan is flawed, leading to an error. The second situation is the ‘‘action not as planned,’’ which describes poor clinical execution that occurs despite the presence of a good plan (Reason & Mycielska, 1982). When patient outcome is not impacted by either of these situations, clinicians may not report the error even though the chances of recurrence are significant. Several explanations have been suggested for why clinicians elect not to report near-errors or actual errors where patient harm was averted. The clinician’s perception of what information should be reported is discussed in the literature. For example, when an active error causes injury, practitioners generally believe it most appropriate to report ‘‘what happened’’ but not necessarily ‘‘how it happened’’ (Espin, Lingard, Baker, & Regehr, 2006). Surveillance activities that highlight active errors,
or the ‘‘what happened,’’ have been favored over process analyses of the ‘‘how it happened,’’ leading to forfeited improvement prospects. This active error emphasis perpetuates system volatility (Reason et al., 2001). Attention to correcting latent errors directly correlates with error reduction and systemic improvement (Institute of Medicine, 2004; Kohn et al., 2000; Reason, 1990, 2000), frequently attributed to reliable system processes (Weick & Sutcliffe, 2001).
The Proximity of Errors to the Adverse Event: Blunt and Sharp-Ends
Latent and active errors are described, in part, by the system element from which they arise, as well as by their proximity to an adverse event. As illustrated in Fig. 1, latent errors usually remain distal to the adverse event site (Reason, 2000; Reason et al., 2001). Within the complex health care system, distinct failure causation and event prevention opportunities exist at varying levels of the organization (represented in Fig. 1 as the layers of defenses). There are four defensive layers in the model. As these layers are discussed in the following section, it is important to remember that each layer may contain multiple sublayers and other complex attributes not specifically discussed in this chapter. At the first macroscopic layer (layer 1), leaders make organizational decisions about policy, procedure, and clinical function. These decisions potentially change, create, and/or eliminate holes at different levels within the complex adaptive system. The second layer (layer 2) represents the supervisory and management role in the context of localized operations. These managers direct and organize the localized operations with varied strategies. Next, layer 3 represents the zone where policy, procedural, and environmental imbalances impact clinical practice. The resulting practices may contribute to the systematic error trajectory and interactions with system defenses. Finally, the last defensive layer, layer 4, is the proverbial ‘‘rubber meets the road’’ layer where unsafe clinical acts can result in or lead to adverse events. At this microscopic level, clinicians work at the sharp-end, often without realizing they are protecting the patient from the unintended consequences created by multiple failsafe breakdowns. The large triangle in Fig. 1 represents the complex adaptive health care system.
Complex adaptive systems characteristically demonstrate self-organization as diverse agents interact spontaneously in nonlinear relationships (Anderson, Issel, & McDaniel, 2003; Cilliers, 1998), where professionals act as information processors (Cilliers, 1998; McDaniel &
Driebe, 2001) and co-evolve with the environment (Casti, 1997). Health care professionals function in the system as diverse actors within the complex environment, utilizing different methods to process information (Coleman, 1999) and solve systemic problems within and across organizational layers (McDaniel & Driebe, 2001). Self-organization emerges as clinicians adjust, revise, and rearrange their behavior and practice to manage changing internal and external environmental system demands (Anderson et al., 2003; Cilliers, 1998), utilizing experiential knowledge (McDaniel & Driebe, 2001) and improvisation (Hanley & Fenton, 2007). Health care environments supporting and accepting self-organization produce better patient outcomes (Anderson et al., 2003). As such, care delivery system self-organization attributes, represented by the triangle in Fig. 1, impact the recognition, mitigation, and prevention of latent and active errors. Next, the large square reflects the health care environment, including the influences and forces outside the organization such as regulatory boards, consumers, payers, legislators, and others. The impact of payment systems, litigation trends, and evidence-based practice changes all contribute to the external complexity that impacts individual health care organizations (Davis et al., 2002; Kohn et al., 2000). Although we describe some of these features within this chapter, the specific application of this external environment attribute of the model is better suited to another discussion. A particularly valuable and integral aspect of our model is the attention to both the blunt-end and the sharp-end of error causation. Latent errors tend to reside closest to the triangle’s blunt-end (left side) and represent organizational-level attributes. Errors manifesting at the clinician level develop as active errors or possibly near-events.
A near-miss event is the nomenclature used to describe those errors that come close to injuring a patient, with harm averted by a ‘‘last minute’’ defensive action close to the bedside. At the other end of the triangle and adjacent to the accident, active errors intimately link to an adverse event at or closest to the patient (Cook & Woods, 1994). The apex or pointed-end of the triangle resides closest to the accident. If the triangle were metal, the pointed-end would be sharp; hence, the term ‘‘sharp-end’’ is used to describe active errors. Also, the term sharp-end error is metaphorically analogous to the blame and punishment frequently exacted on health care professionals for human error (Kohn et al., 2000; Reason, 2000). The specific relationship of the sharp-end and blunt-end concepts to adverse events and the defensive layers will be further developed in the sections that follow. Active and latent errors are impacted and influenced, directly and indirectly, by the internal defensive layers as well as the entire complex adaptive health care system.
HEALTH CARE ERROR PROLIFERATION MODEL
In this section, we adapt the Swiss Cheese Model (Reason, 1990) to health care organizations, a version we call the Health Care Error Proliferation Model (Fig. 1). The Swiss Cheese Model likens the complex adaptive system to multiple hole-infested slices of Swiss cheese positioned side-by-side (Reason, 1990, 2000). The cheese slices are dubbed defensive layers to describe their role and function as the system locations outfitted with features capable of intercepting and deflecting hazards. The layers represent discrete locations or organizational levels potentially populated with errors permitting error progression. The four layers are: (1) organizational leadership, (2) risky supervision, (3) situations for unsafe practices, and (4) unsafe performance. The Swiss Cheese Model portrays hospitals as having multiple operational defensive layers outfitted with the essential elements necessary to maintain key defensive barricades (Cook & O’Connor, 2005; Reason, 2000). By examining the defensive layers’ attributes and prospective locales of failure, the etiology of accidents might be revealed (Leape et al., 1995). Experts have discussed the importance of examining these layers within the context of the complex adaptive health care system (Kohn et al., 2000; Wiegmann & Shappell, 2003), as illustrated in our Health Care Error Proliferation Model. The contribution that the complex system makes to error was suggested in a 2001 survey in which 76% of nurses indicated that their inability to deliver safe health care was attributable to impediments created by unsafe working conditions (American Nurses Association, 2006). These data reflect the general presence of unsafe working conditions in health care facilities across the nation. There is probably an operational disconnect at many organizational levels and between multiple disciplines, created by expansive system complexity under intense financial, human, and political pressures.
The holes in the cheese represent actual areas where potential breakdowns or gaps in the defensive layers permit hazardous error progression through the system (Cook et al., 2000; Reason, 1990). These holes continuously change size and shift locations (Reason, 2000). When gaps at each layer sequentially align, the system fails. In other words, sentinel events occur when multiple individual, yet small, faults come together to create the circumstances for a hole alignment sufficient to produce total system failure (Cook & O’Connor, 2005). This unobstructed hole alignment occurs infrequently. In the vast majority of situations, health care clinicians control these holes and defend the system integrity (Cook et al., 2000) to
stop error progression. Only when the practitioner is unable to anticipate, detect, and impede hazards from passing through the holes will an adverse event manifest (Reason, 1998). However, given the substantial number of activities involved in delivering patient care (Institute of Medicine, 2004), coupled with the heavy workloads of clinicians (Joint Commission on the Accreditation of Healthcare Organizations, 2007), individual holes or gaps within specific layers represent the normal consequence of complexity and operational demands. The alignment of the holes to create an adverse or sentinel event is analogous to the concept of the ‘‘perfect storm.’’ Using the concepts illustrated in Fig. 1, we discuss in the following sections the organization, leader, manager, and clinician contributions to strengthening and protecting the defensive layers along the continuum from the blunt-end to the sharp-end of adverse events. When an accident manifests, the presence of latent factors may be revealed through careful and unbiased examination of the system (Farrokh, Vang, & Laskey, 2007) by methodically evaluating each defensive layer (Reason, 2000). A hazard blocked within or at any one of the first three defensive layers is termed a latent error (Reason, 2000), while a failure blocked at the fourth layer, virtually at the patient’s bedside, is termed a near-event (Cook et al., 2000; Cook & Woods, 1994; Reason, 2000). The culmination of error proliferation depends heavily not only on the interaction of the layers but also on the general culture of the health care system, both internally, at the organization level, and externally, at the professional, legal, and delivery system levels.
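The rarity of full hole alignment can be made concrete with a toy probability sketch. Assuming (purely for illustration; the per-layer probabilities below are hypothetical and not drawn from the patient safety literature) that each of the four defensive layers independently presents an open hole with some small probability, a hazard reaches the patient only when it finds an open hole in every layer:

```python
import random

# Toy sketch of the Swiss Cheese Model (Reason, 1990): an adverse event
# manifests only when a hazard passes through a hole in every defensive
# layer. All probabilities are hypothetical illustrations.
LAYER_HOLE_PROBS = [0.10, 0.08, 0.05, 0.02]  # layers 1-4, blunt- to sharp-end

def hazard_reaches_patient(rng: random.Random) -> bool:
    """One hazard: does it find an open hole in every layer?"""
    return all(rng.random() < p for p in LAYER_HOLE_PROBS)

def simulate(n_hazards: int, seed: int = 42) -> float:
    """Fraction of hazards that penetrate all four layers."""
    rng = random.Random(seed)
    events = sum(hazard_reaches_patient(rng) for _ in range(n_hazards))
    return events / n_hazards

# With independent layers the probabilities multiply:
# 0.10 * 0.08 * 0.05 * 0.02 = 8e-6, so even leaky layers,
# stacked in depth, make total system failure rare.
analytic = 1.0
for p in LAYER_HOLE_PROBS:
    analytic *= p
```

The sketch also shows why the independence assumption matters: when holes are correlated (for example, understaffing that simultaneously weakens supervision and bedside vigilance), the true failure probability can be far higher than the product suggests.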
Layer 1: Organizational Leadership
The first layer, most distant from the adverse event, is leadership at the highest organizational level. This level of defense is the most challenging to alter in health care organizations (Nolan, Resar, Haraden, & Griffin, 2004) because the current ‘‘blame and shame’’ person approach is frequently derived from an attribution-like process (Reason, 1998; Sasou & Reason, 1999). Leaders often ascribe responsibility for faulty processes or adverse events to the clinical professional’s lack of effort (Reason, 2000), inability (Vincent, 2003), incompetence (Rasmussen, 1990, 1999), or absence of vigilance (Reason & Hobbs, 2003). As such, attribution of poor performance, or ‘‘practice breakdowns,’’ results in actions aimed at clinicians instead of at the decisions produced by leaders. Effective leadership is necessary to maintain safe systems (Joint Commission Resources, 2007). Leaders are accountable for ‘‘engineering a
just culture’’ as a critical aspect of providing safe patient care (Reason, 2000). The Institute of Medicine (2004) speaks to three vital aspects of a leader’s responsibility in shaping an institutional culture of safety. First, leaders need to recognize and accept that the majority of errors are created by the system that they cultivate and direct (Institute of Medicine, 2004; Reason, 2000). Second, the role of leadership should include perceptible daily support for practitioners (Institute of Medicine, 2004; Weick & Sutcliffe, 2001). Third, leaders should sincerely embrace and inculcate continuous organizational learning (Hofmann & Stetzer, 1998; Institute of Medicine, 2004). In combination, these three IOM-recommended qualities not only create efficient organizations but also help reduce latent and active errors. Accordingly, the Institute of Medicine (2004) suggests several focus areas for strengthening the organization. These measures include adopting evidence-based and transformational leadership practices, increasing workforce capability, enhancing workplace design to be error resistant, and creating a sustainable safety culture as necessary elements for vigorous safety defenses. Developing a robust error reporting system and incorporating prospective process reviews by quality teams are just two examples of leaders positively influencing organizational acceptance of a patient safety culture. In order to better understand the behavior of individuals, leaders must attempt to determine what subordinates are thinking about situations (Pfeffer, 1977). Green and Mitchell (1979) developed a model to study attributional leader traits; in this model, leader behavior is a consequence of how the leader interprets subordinate performance. In attempting to understand how subordinate performance affects leader reaction, it is vital to determine the cause of subordinate performance, whether good or poor.
There are four causes that may or may not be within the subordinate’s control: competence, effort, chance, and other uncontrollable external causes. Both competence and effort are internal causes of performance, while chance and other uncontrollable causes are outside the subordinate’s control (Green & Mitchell, 1979). Causality is attributed more to the subordinate than to the situation if the subordinate has had a history of poor performance. Additionally, if the effects of the poor performance are severe, it could be determined that the subordinate was at fault (Mitchell & Wood, 1979). In this situation, the leader would focus more on remedial action towards the subordinate, rather than on the situation (Bass, 1990). Continued punishment may lead to a downward spiral in performance, producing learned helplessness, which would be dealt with even more punitively if a
subordinate was previously viewed as a poor performer (James & White, 1983). Unfortunately, this punitive response can lead to even greater declines in subordinate performance (Peterson, 1985) and possibly greater system instability. Developing and supporting a culture of excellence begins at the leadership level by embracing a preoccupation with preventing failure (Weick & Sutcliffe, 2001). Chiles (2002, p. 15) describes a leader’s preoccupation with system cracks as the ‘‘chess master spend[ing] more time thinking about the board from his opponent’s perspective than he does from his own.’’ Prior to initiating systemic change, leaders ought to seek assistance from clinicians and other health care professionals with expertise in detecting and extracting the subtle signals of impending issues from the constant noise of routine workflow. These experts can provide the feedback necessary to proactively identify and correct system abnormalities with minimal disturbance, given that even the best-intended system changes can be quite disruptive to the institution.
Layer 2: Risky Supervision
The second defensive layer is management, specifically those supervisors at various organizational levels directly responsible to the leadership team for system metrics and outcomes. When management is ineffective in correcting known gaps or holes, patient welfare is endangered. Thus, management involvement is critical in maintaining a defense against adverse events (Reason, 1990). Risky supervision arises when management decisions lead to circumventing or not enforcing written policies and procedures (Reason & Hobbs, 2003), overlooking known issues, or failing to take corrective action (Leduc, Rash, & Manning, 2005) while employees engage in potentially consequential unsafe activity. Practitioner deviations from written policies and procedures significantly contribute to adverse events (Vaughn, 1996). As such, management’s enforcement of policies is critical to the safe functioning of any organization. Workload pressures created by understaffing or overly stressful work assignments can lead to poor outcomes (Aiken et al., 2001; Aiken, Clarke, Sloane, Sochalski, & Silber, 2002; Stone et al., 2007). Performance pressures caused by any number of issues that create a difficult and error-prone workplace are examples of risky supervision. In 2002, the Joint Commission reported that the lack of nursing staff contributed to nearly 25% of the unanticipated issues resulting in patient death or injury (Joint Commission on Accreditation of
Healthcare Organizations, 2007). Tucker and Spear (2006) reported inadequate nursing task times, multiple unplanned changes in task, and frequent interruptions mid-task throughout a typical shift. These reported issues may result in harm to patients, especially in complex care situations such as intensive care or perioperative environments. In addition, unpaid overtime at the end of the shift is a well-known phenomenon (Tucker & Spear, 2006), with nurses reporting on average greater than one hour per shift (Rogers, Wang, Scott, Aiken, & Dinges, 2004; Tucker & Spear, 2006). The intensity of the work, coupled with the inability of nurses to complete their work within a scheduled shift, indicates poor job design and inadequate support, possibly at the level of immediate supervision. These work pressures are not unique to nursing, as other health care professionals face similar conditions (Institute of Medicine, 2004; Kohn et al., 2000). In short, managers, both clinical and administrative, are directly responsible for organizing, implementing, and evaluating policies and procedures to guide the safe supervision and management of the complex adaptive care delivery system. By considering system design, the establishment of boundaries, and the expectations under which clinicians practice, management can foster safer performance through more supportive organizational norms and values (Institute of Medicine, 2004; Kohn et al., 2000; Rasmussen, 1999; Reason et al., 2001). Supervisors should foster a safe environment by actively identifying and addressing deficiencies among practitioners, equipment, processes, and training. Safe practices include correcting issues, utilizing progressive discipline, and providing on-the-spot performance coaching that immediately addresses unacceptable clinical behavior while promoting adherence to policies and procedures.
When considering the situations related to unsafe practices, supervisory-level staff are integral to maintaining sensitivity to operations and commitment to the resiliency of the organization (Weick & Sutcliffe, 2001). Imperfect environmental conditions, represented in the next layer as preconditions for unsafe practices, can be revealed by vigilant supervisory and management participation. Thus, managers’ situational awareness of their respective environment in relation to the ‘‘big picture’’ is vital to their impact on the next layer.
Layer 3: Situations for Unsafe Practices
The third layer represents the precursors to practices that are not safe. Careful attention to the substance of error at the situations-for-unsafe-acts level is
necessary to protect the system and the clinicians from mistaken error attribution. Substandard conditions and substandard practices are the two situational circumstances responsible for the presence of unsafe acts in practice (Reason, 1998). In fact, these two synergistic conditions each fuel the other, increasing the likelihood of an unsafe act (Reason, 1990). The inability of local systems to provide practitioners with reliable support, including information, equipment, and supplies, has been described as ‘‘operational failures’’ (Tucker, 2004), which create substandard conditions for practitioners attempting to execute their patient care responsibilities. Tucker and Spear (2006) observed that nurses experience an average of 8.4 operational failures per shift. An example of a serious, but often hidden, operational failure is the ‘‘down-time’’ associated with the interface that links an automated pharmaceutical dispensing machine to the pharmacy. When this operational failure occurs, the dispensing machine is unable to profile each patient’s personal medication schedule. When a patient-specific medication profile is absent, a notable defensive hole is created. This issue increases the probability of error because the dispensing machine’s entire pharmaceutical contents are readily available, and the selection is not limited to only those medications specifically ordered for each patient. Notably, a clinician is able to remove the wrong medication or the wrong medication dose from the dispensing unit, an act that is less likely when the pharmacy-machine interface is properly functioning. As such, this situation creates the precondition for a potentially serious event, as delivering the wrong medication or dosage in this example results in the occurrence of an unsafe practice that may lead to patient harm or death.
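The defensive hole opened by dispensing-interface down-time can be sketched in code. The class, field, and method names below are hypothetical illustrations of the gating logic, not any vendor’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class DispensingCabinet:
    """Hypothetical model of an automated dispensing cabinet's profile gate."""
    formulary: set[str]                       # every drug stocked in the machine
    profiles: dict[str, set[str]] = field(default_factory=dict)  # patient -> ordered meds
    interface_up: bool = True                 # link to the pharmacy system

    def available_medications(self, patient_id: str) -> set[str]:
        # Normal operation: the pharmacy-verified profile limits selection
        # to only those medications ordered for this patient.
        if self.interface_up and patient_id in self.profiles:
            return self.formulary & self.profiles[patient_id]
        # Interface down: the latent defensive hole opens. The entire
        # formulary becomes selectable, enabling a wrong-drug or
        # wrong-dose active error at the sharp-end.
        return set(self.formulary)
```

The design point the sketch makes explicit is that the profile check is itself a defensive layer: when the interface fails, the system degrades to an unguarded state rather than a safe one, which is precisely the precondition for an unsafe practice described above.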
Other failures, or errors, at the clinician level result from difficulties in task partitioning, exemplified by postponing a task until missing supplies arrive; the operational failure (Tucker & Spear, 2006) acts as a precondition that actualizes clinician error. With task partitioning, clinicians experience work interruptions (such as being unable to administer an urgent or stat medication) linked to an unsafe precondition (recurring delays in the pharmacy providing stat medications to the clinical unit), thereby leading to substandard practice (omission of the medication, possibly due to memory, or deterioration in the patient's condition, due to time). These examples illustrate the complexity of error at the clinician level that might be considered substandard practice or even a "practice breakdown" (Benner et al., 2006). System-focused organizations highlight the reluctance to accept simplifications like "practice breakdowns" in the pursuit of error causation. Informed organizations use strategies such as tracing
preconditions through the complex adaptive system, layer by layer, prior to the clinical practice level, seeking to determine why the event manifested rather than who was responsible (Weick & Sutcliffe, 2001). In a concerted effort to universally improve the situations for safer clinical practices, many agencies and organizations have advocated best practices and suggested system attributes aimed at addressing errors related to substandard practices. The Joint Commission, through the creation of the National Patient Safety Goals (NPSG), and the Institute for Safe Medication Practices (ISMP), through its medication delivery recommendations, have positively impacted health care by emphasizing the importance of patient safety. For example, sound-alike, look-alike drugs (SALAD) and high-alert medications have been identified as areas prone to compromising patient safety, especially when clinician vigilance is degraded by other complex adaptive attributes, such as fatigue and workload. Strategies have been adopted to address issues related to substandard medication practices caused by unsafe situations. Recommendations can quickly become accreditation standards, such as removing dangerous highly concentrated drugs like potassium chloride from patient care units, adopting formal patient identification requirements, and mandating surgical instrument counts as an essential step at the end of surgical procedures. On the horizon, automated pharmaceutical dispensing devices, bar-scanning technologies incorporated into the medication delivery process, and electronic medical records all promise to reduce opportunities for unsafe practice.
Layer 4: Unsafe Performance

The fourth layer represents unsafe practices, which are usually linked to the health care professional's performance at the "sharp-end" of care leading to an adverse event. In the simplest form, unsafe practices consist of errors and violations (Reason, 2000). Errors frequently involve perceptual (Wiegmann & Shappell, 2001), decision-making (Norman, 1988; Perrow, 1984; Rasmussen, 1990), and/or skill-based lapses (Norman, 1988; Reason, 1990). Unsafe practices are the tip of the adverse event iceberg (Reason, 1998): the number of these acts tends to be rather large but discrete (Reason, 1990, 1998), and they are usually the most noticeable following a sentinel event (Perrow, 1984; Reason, 1998, 2000; Wiegmann & Shappell, 2003). Although the error and sentinel event numbers have yet to be
accurately quantified, the magnitude of unsafe practices is put into perspective by the IOM's (2007b) finding that, on average, each hospitalized patient experiences one medication error per day. Violations are errors that are more serious in nature but occur less frequently (Reason, 1990). Wiegmann and Shappell (1997) describe violations as the willful disregard of the policies, procedures, and rules governing health care professionals at both the organizational and regulatory levels. Because violations are willful, they lack the sincere intention to improve a problematic situation through improvisation. Reason (1998) attributes violations to issues related to motivation, attitude, and culture. These acts amount to deviations from safe operating practices (Vincent, Taylor-Adams, & Stanhope, 1998), possibly even resulting from a lack of training or education (Institute of Medicine, 2004). Examples of clearly willful clinician violations amounting to "practice breakdowns" are purposely falsifying documents, failing to execute life-saving procedures when observably necessary, and stating untruthfully that actions, events, or treatments were completed. At this layer, clinicians develop capabilities to detect, contain, and report unsafe situations or errors. Expert clinicians maintain their distance from the sharp-end of an adverse event through heightened situational improvisation (Hanley & Fenton, 2007), generating a protective barrier despite numerous complex care management issues (Kohn et al., 2000; Vincent, 2003). This protection essentially results from experience and familiarity with improvisation, which mitigate issues created by the lack of clear protocols for a situation, high-stress clinical scenarios, and fatigue from heavy workloads and long working hours (Hanley & Fenton, 2007; Helmreich & Davies, 2004; Vincent, 2003).
In novice clinician hands, however, improvisation may lead to unfavorable consequences, because important experiential knowledge has not yet developed. It has been suggested that error is the symptom of a practice breakdown (Benner et al., 2006). The complexity embedded within a situation in which an error is unsuccessfully defended by an expert clinician at the sharp-end is substantial. Indeed, incorporating improvisation as a normal practice attribute might render the system better capable of defending against error. Health care professionals should be expected to draw upon their prior experiences (Weick, 1993) in order to positively impact troublesome situations with the resources at hand (Hanley & Fenton, 2007), even if it requires them not to follow policies and procedures. When the system's opportunity for failure exceeds the clinician's ability to work around and minimize error, the risk of a system meltdown is substantial (Chiles, 2002;
Kohn et al., 2000). Finding expert health care professionals at or near the sharp-end of a sentinel event signals the likely presence of a system overload that has rendered the experts' improvisation skills ineffective. Weick and Sutcliffe suggest that organizations embrace "the reluctance to accept error simplifications responsible for creating more complete and nuanced pictures" as an important HRO characteristic (2001, p. 11). There are times when occurrences considered to be violations are actually issues related to motivation, attitude, and culture (Reason, 1998). Some acts amount to deviations from safe operating practices (Vincent et al., 1998), while others probably result from a lack of training or education (Institute of Medicine, 2004). Some might argue that leaders and managers responsible for safe staffing and correcting hazards are responsible for the commission of a violation when patients are harmed by known but uncorrected issues (Reason, 1990, 1998). Willful practice violations assume the presence of an informed and trained professional with an unobstructed understanding of the policies and procedures (Amalberti, Vincent, Auroy, & de Saint Maurice, 2006). The professional must be working in an environment where the specific policy and/or procedure is enforced, and the professional must knowingly and deliberately disregard the rule or rules (Reason, 1998). However, health care education and training specific to patient safety is lacking in many curricula (Institute of Medicine, 2003), making it important to view adverse events in the context of error versus willful violation. As such, the significant issue facing organizations, and even society in general, is that they do not recognize errors as an unavoidable aspect of the limitations of human performance (Reason et al., 2001).
Despite the negative impact upon the complex adaptive system when human error manifests, the overall benefit of excellent patient care produced by well-intended clinicians should outweigh the consequences of unintentional error.
Defending the Patient from System Errors

Although problem solving and short-term fixes correct errors that arise in the normal course of task completion (Tucker & Edmondson, 2003), these fixes are not typically reported, as clinicians improvise and adapt in the normal course of work. Consider the situation created when important medications are unavailable to nurses, yet the patient's condition requires them. The result is a clear gap in the medication delivery system.
By improvising, the clinician could borrow the same medication from a different patient's medication bin (which could be considered a violation of policy). Time permitting, the nurse could leave the busy unit in order to walk to the pharmacy. This fills the existing gap, or need, but may create yet another if a different patient deteriorates while the nurse is off the unit. It is important to recognize that eliminating active failures stemming from unsafe acts prevents only one adverse event (Reason, 2000) and does not construct better defenses for removing system gaps (Cook et al., 2000). Therefore, examining and adjusting the system at the organizational and managerial layers to "manage error" is a more effective method for eliminating a larger variety of latent errors (Amalberti et al., 2006), subsequently limiting the manifestation of active errors (Reason, 1998). Preventing adverse events that result directly from unsafe acts requires blocking holes or trapping hazards within the clinician's controlled defensive layer. The most effective method to mitigate any error-prone process is to create and support an occurrence or incident reporting system (Cook & O'Connor, 2005; Department of Veterans Affairs, 2002). Sasou and Reason (1999, p. 3) state, "failures to detect are more frequent than failures to indicate and correct." However, leaders must support the removal of blame and punishment if error reporting and detection are to become routine in practice (Hofmann & Stetzer, 1998; Kohn et al., 2000). In an effort to protect the system and the patient from error, it is perhaps even more important to review the organizational process when near-events occur, those situations where an adverse event is avoided at the very last moment.
Contrary to contemporary patient safety knowledge, professional and quasi-legal bodies appear less concerned with errors that result in near-events, those nearly harmful to patients, instead favoring situations that result in actual patient injury. This focus might be explained by the regulatory bodies' predisposition to allow the consequences of an accident to "color and even distort" the perception of the system attributes and the actions of the clinicians leading up to the event (Reason & Mycielska, 1982). These almost-accidents, or near-events, present significant evidence of an approaching sentinel event, like the tremor felt prior to an impending earthquake. Reason (2000) considers near misses free lessons that preview potential and preventable future adverse events. Organizations must be alert as well as vigilant in identifying these near misses in order to correct, change, or modify policies, procedures, and practices to prevent future errors that could result in a sentinel event.
THE ROLE OF HEALTH CARE LEADERSHIP IN PATIENT SAFETY

Since its formalization in 2000, the patient safety movement has gained both national attention and increased resources dedicated to creating safer health care delivery systems. In 2003, the Joint Commission established the initial NPSG. These initiatives helped patient safety advocates to actively transform health care based on an understanding drawn from the organizational accident and human factors sciences. This transdisciplinary knowledge transfer assists leaders in understanding the path to develop health care into a high reliability, or high risk but low accident, industry (Institute of Medicine, 2003; Kohn et al., 2000). Developing and maintaining an optimal patient safety culture while concurrently managing overall organizational performance metrics is challenging (Joyce, Boaden, & Esmail, 2005; Reason & Hobbs, 2003). In a 2004 survey, hospital administrators ranked financial challenges at the top of their list of 11 priorities, while patient safety ranked 10th and patient satisfaction ranked 11th (American College of Healthcare Executives, 2005). Over the last decade, health care organizations attempted to eliminate error through the zero-deficit culture (Institute of Medicine, 2004) with modest success (Nolan et al., 2004; Roberts, 2002). As a direct result of the flawed human approach to error management, health care professionals hide errors in order to protect themselves from reprisal and punishment (Kohn et al., 2000). Unreported latent errors persist partly due to the organizational inertia (Palmieri, Godkin, & Green, 2007) created by fear (Kohn et al., 2000; Millenson, 2003), and remain free in the environment until a system meltdown results in an adverse event (Reason, 1998; Reason et al., 2001; Vaughn, 1996, 1999).
While the NCSBN seeks to establish attentiveness and surveillance as the standard of nursing practice associated with watchful vigilance in preventing the failure to rescue (Benner et al., 2006), health care leaders are familiar with clinicians' inability to effectively monitor patients within the complex adaptive system due to a variety of pressures. The increased complexity of the typical hospital unit, coupled with reduced nursing hours due to financial, human resource, and supply constraints and with increased physician workload, is problematic for effective and safe hospital operations. As a result, health care leaders should recognize that patient safety experts are unable to uniformly define a realistic "one case fits all" threshold for system error versus professional neglect in the contemporary health care system.
Professional and governing organizations for pharmacy, medicine, and nursing play pivotal roles in supporting all constituents in advocating the system approach over their traditional person approach to event investigation and adjudication. Health care leaders are well positioned to support clinical professionals and improve patient safety by vigorously advocating that error causation investigations related to adverse events embrace the system approach. Some organizations remain attached to the human approach, as evidenced by statements such as, "when nursing practice falls below this minimal acceptable threshold sanctions are necessary to protect the public" (Benner et al., 2006, p. 59). The differentiation between willful and accidental error is an essential point for consideration in any RCA investigation. Learning about near-events and latent errors necessitates an open, honest, and sincere organizational culture. The Institute of Medicine (2004, p. 226) states, "although near-miss events are much more common than adverse events – as much as 7 to 100 times more frequent – reporting systems for such events are much less common." In order to turn the potential lemon of near misses into the lemonade of error prevention, leaders should consider defining what a near miss is in their organization, talk openly about near-events when they occur, and preach that near-events are positive signs of a system with functioning safeguards despite vulnerability (Weick & Sutcliffe, 2001). Preoccupied with probing their potential and actual failures related to nearly averting disaster (Weick & Sutcliffe, 2001), HROs realize that these nearly missed events create future success for all parties. Leaders should also recognize that the vast majority of patient safety knowledge, including the NPSGs, is based on expert opinions, recommendations, and experiences as well as anecdotal knowledge (Agency for Healthcare Research and Quality, 2001).
The Agency for Healthcare Research and Quality (AHRQ) analyzed the current state of patient safety practices, reviewing 79 practices for the strength of the research evidence supporting current recommendations. After review, only 11 patient safety practices were supported by a high level of reliable evidence. These 11 practices were clinical in nature and did not include crew resource management, computerized physician order entry, simulation use, or bar-coding technology. In addition, AHRQ found the literature insufficient to support rational decisions about organizational nursing environments. A number of recommended practices with longstanding success outside of health care were not included in the analysis because of the general weakness of the health care evidence. These practices included incident reporting, application of human factors principles, utilization of
communication models, and promotion of cultures of safety. Further research, especially for practices drawn from industries outside of health care, is needed to fill "the substantial gaps in the evidentiary base" of patient safety practices (Agency for Healthcare Research and Quality, 2001).
Health Care Leaders and High Reliability Organizations

Through "problemistic search" (Cyert & March, 1963, p. 120), managers and leaders scan the environment for quick solutions to immediate problems regardless of complexity. Reason (2000, p. 770) states, "most managers … attribute human unreliability to unwanted variability and strive to eliminate it." Effective error management includes decreasing the incidence of critical error (Reason, 2000) and developing improved systems capable of containing dangerous error (Reason et al., 2001). When management demonstrates positive organizational safety practices, subordinates' willingness to accept safety as an essential job responsibility considerably improves (Dunbar, 1975; Katz-Navon, Naveh, & Stern, 2005; Kohn et al., 2000). This approach, reflective of HROs, focuses attention on learning and awareness at all levels of the organization (Reason, 1997), including the individual, the department, and leadership. Preoccupied with probing their potential and actual failures related to nearly averting disaster, HROs frequently avoid sentinel events (Weick & Sutcliffe, 2001), as these nearly missed events create future successes from a single realization. In fact, an organization should develop a policy specifically endorsing the system philosophy of error, with emphasis on the non-punitive consequences of unintentional errors (Helmreich & Davies, 2004). HROs exhibit resilient systems and superior hazard intelligence (Reason, 2000), with practitioners employing positive adaptive behaviors (Mallak, 1998) to immediately defend the patient from potentially harmful situations, thereby sustaining the reliability of the process (Carroll & Rudolph, 2006). Linking an actual error to organizational factors is difficult (Wiegmann & Shappell, 2001), as system failure results from the misguided dynamics and imperfect interactions of the care delivery system (Institute of Medicine, 2004).
Recognizing the importance of developing safe and reliable health care delivery systems (Institute of Medicine, 2004) necessitates active involvement by multiple health care stakeholders, such as practitioners, administrators, regulators, and even patients (Katz-Navon et al., 2005; Leonard, Frankel, & Simmonds, 2004). The reliability of an organization is a measurement of process capability (Reason et al., 2001) within a complex
system (Cook & O'Connor, 2005) under the stresses of both ordinary and extraordinary conditions (Berwick & Nolan, 2006). An elevated level of organizational vigilance is reflective of HROs, as they generally support a cultural predisposition that prevents the materialization of adverse events (Weick & Sutcliffe, 2001). Hence, the investigatory work of organizations embracing the system approach deemphasizes active errors in order to scrutinize system malfunctions (Reason, 1990) in the quest to discover latent conditions and rejuvenate processes (Rapala & Kerfoot, 2005). Rigid hierarchies create a vulnerability to error, and leaders should first seek to eliminate error at their own defensive layer. In doing so, leaders prevent its culmination with lower-level errors by deferring decisions to those with the expertise (skills, knowledge, and experience), consequently improving the overall system (Kohn et al., 2000; Reason, 2000; Weick & Sutcliffe, 2001). Leaders focused on eliminating the majority of common error from their respective organizations should recognize and understand the elements of Reason's work and our subsequent adaptation of it in the Health Care Error Proliferation Model. Low incidences of accidents are observed and reported in HROs, most commonly found in the aerospace (Nolan et al., 2004) and nuclear power generation industries (Cook, Woods, & Miller, 1998). HROs experience fewer accidents than other organizations (Reason, 2000), as HROs recognize the importance of limiting human variability in their pursuit of safety (Roberts, 2002). In addition, HROs remain vigilant in recognizing the possibility of failures (Reason, 2000; Weick & Sutcliffe, 2001) and discover solutions in the work of human factors and organizational accident experts (Helmreich & Davies, 2004; Kohn et al., 2000). This increased diligence in examining systems for error is important.
In recent literature, the concept of technological iatrogenesis (Palmieri & Peterson, 2008) and the resulting e-iatrogenic error (Campbell, Sittig, Ash, Guappone, & Dykstra, 2007; Weiner, Kfuri, Chan, & Fowles, 2007) have been suggested as significant future contributors to health care adverse events. As the health care technology enterprise rapidly expands, new and previously unseen error typologies are likely to impact systems of care (Harrison, Koppel, & Bar-Lev, 2007; Palmieri, Peterson, & Ford, 2007). This new error formation presents significant challenges for the system, as Lyons et al. (2005) found that administrators, physicians, and nurses hold different beliefs about the general assistance offered by, and the specific obstacles presented by, the overall information system as a clinical benefit. The most important distinguishing feature of HROs is their collective preoccupation with identifying and eliminating system failure (Reason,
2000; Weick & Sutcliffe, 2001). The notion that HROs are preoccupied with proactively identifying system flaws as an effective methodology for organizational improvement is not universally accepted, however. Some health care experts suggest "a systems approach is based upon a post hoc analysis and redesign of a system based upon unsafe performance" (Benner et al., 2006, p. 51). As the proactive FMEA increases in popularity with continued organizational interest in developing HRO attributes, the system approach to preventing failures will become universally recognized. Health care leaders remain ideally positioned to make the system approach a reality by transforming their institutions into HROs.
CONCLUSION

The Health Care Error Proliferation Model is based on the premise that health care organizations struggle to effectively integrate existing research, novel approaches, practice innovations, and new knowledge about patient safety into their complex care delivery systems (Gray, 2001; Kohn et al., 2000; Nolan et al., 2004). Health care clinicians and leaders are beginning to recognize that more serious problems will probably arise should the prevailing "punitive cultures and a focus on 'bad apples' instead of poorly constructed systems" be permitted to continue to impede progress (Devers et al., 2004, p. 114). We reviewed a diverse body of knowledge and provided a transdisciplinary analysis of health care errors and events. Then, we discussed the relevant literature pertaining to the conceptualization of errors, approaches to identifying the causes of errors, and the utility of the Swiss Cheese Model of adverse events in reconsidering the existing health care leadership approach to developing organizational cultures of safety. Finally, we discussed the critical role of health care leadership in building defenses against adverse events by demonstrating the attributes of HROs. We conclude that, within the discipline of health care leadership, the value of learning and applying basic organizational accident principles (Leape et al., 1998) to the complex adaptive system of health care organizations is paramount to preserving patient safety (Helmreich & Davies, 2004; Reason, 2000). Embracing the system philosophy of error management is vitally important for health care leaders desiring to achieve HRO status. Health care professionals should support the patient safety movement by focusing on systems rather than people as the fundamental strategy for developing into a high reliability industry.
We agree with Bennis (1989), who argued that organizational cultures are created, supported, and sustained through leadership role modeling of acceptable organizational conduct. Leaders create and sustain organizational safety by: (1) role modeling acceptable behaviors; (2) embracing the system approach to managing critical organizational incidents; (3) rewarding desired and productive behaviors; and (4) eliminating the use of blame, shame, and punishment to address professional error. Understanding the Health Care Error Proliferation Model as a strategic overview for identifying latent and active error will improve complex health care delivery systems. The responsibility for organizational improvement rests in the capable hands of health care leaders, who must promote discovery, motivate and advocate transformation, and protect the integrity of systems and people in the pursuit of patient safety. As leaders continue to emphasize support for the system approach and push to reduce the person-focused "blame and shame" approach, errors will decline. As errors decline, hospitals will continue their progress toward demonstrating the characteristics associated with HROs. In the future, the staggering number of patients who die each year due to health care errors will decline, and the entire system will successfully improve the quality and safety of patient care.
ACKNOWLEDGMENTS

We would like to thank Larry Reising for his thorough content review as an expert in the area of industrial and aviation safety, Ruth Anderson for her assistance with editing aspects related to complex adaptive systems, and Barbara Cherry and Rodney Hicks for their extensive and helpful editorial suggestions and comments.
REFERENCES

Abramson, L. Y., Garber, J., & Seligman, M. E. P. (1980). Learned helplessness in humans: An attributional analysis. New York: Academic Press.
Agency for Healthcare Research and Quality. (2001). Making health care safer: A critical analysis of patient safety practices, AHRQ Publication No. 01-E058. Rockville, MD: Agency for Healthcare Research and Quality.
Aiken, L. H., Clarke, S. P., Sloane, D. M., Sochalski, J. A., Busse, R., Clarke, H., Giovannetti, P., Hunt, J., Rafferty, A. M., & Shamian, J. (2001). Nurse reports on hospital care in five countries. Health Affairs, 20(3), 43–53.
Aiken, L. H., Clarke, S. P., Sloane, D. M., Sochalski, J. A., & Silber, J. H. (2002). Hospital nurse staffing and patient mortality, nurse burnout and job dissatisfaction. Journal of the American Medical Association, 288(16), 1987–1993.
Amalberti, R., Auroy, Y., Berwick, D., & Barach, P. (2005). Five system barriers to achieving ultrasafe health care. Annals of Internal Medicine, 142(9), 756–764.
Amalberti, R., Vincent, C., Auroy, Y., & de Saint Maurice, G. (2006). Violations and migrations in health care: A framework for understanding and management. Quality and Safety in Health Care, 15(S1), 66–71.
American College of Healthcare Executives. (2005). Top issues confronting hospitals: 2004; Available at http://www.ache.org/pugs/research/ceoissues; January 8.
American Nurses Association. (2006). The American nurses association comments on the Wisconsin department of justice decision to pursue criminal charges against an RN in Wisconsin; Available at http://nursingworld.org/ethics/wis11-20-06.pdf; December 10.
Anderson, R. A., Issel, M. L., & McDaniel, R. R. (2003). Nursing homes as complex adaptive systems: Relationship between management practice and resident outcomes. Nursing Research, 52(1), 12–21.
Bass, B. M. (1990). Bass and Stogdill's handbook of leadership: Theory, research and managerial applications (3rd ed.). New York: The Free Press.
Benner, P., Malloch, K., Sheets, V., Bitz, K., Emrich, L., Thomas, M. B., Bowen, K., Scott, K., Patterson, L., Schwed, K., & Farrel, M. (2006). TERCAP: Creating a national database on nursing errors. Harvard Health Policy Review, 7(1), 48–63.
Berta, W. B., & Baker, R. (2004). Factors that impact the transfer and retention of best practices for reducing error in hospitals. Health Care Management Review, 29(2), 90–97.
Berwick, D., & Nolan, T. (2006). High reliability healthcare; Available at http://www.ihi.org/ihi/topics/reliability/reliabilitygeneral/emergingcontent/highreliabilityhealthcarepresentation.htm; November 22.
Blendon, R. J., Schoen, C., Donelan, K., Osborn, R., DesRoches, C. M., Scoles, K., Davis, K., Binns, K., & Zapert, K. (2001). Physicians' views on quality of care: A five-country comparison. Health Affairs, 20(3), 233–243.
Campbell, E. M., Sittig, D. F., Ash, J. S., Guappone, K. P., & Dykstra, R. H. (2007). In reply to: e-Iatrogenesis: The most critical consequence of CPOE and other HIT. Journal of the American Medical Informatics Association, 14(3), 389–390.
Carroll, J. S., & Rudolph, J. W. (2006). Design of high reliability organizations in health care. Quality and Safety in Health Care, 15(Suppl. 1), i4–i9.
Casti, J. L. (1997). Would-be worlds. New York: Wiley.
Cherns, A. B. (1962). Accidents at work. In: A. T. Welford, M. Argyle, D. V. Glass & J. N. Morris (Eds), Society: Problems and methods of study. London: Routledge and Kegan Paul.
Chiles, J. R. (2002). Inviting disaster: Lessons from the edge of technology. New York: HarperCollins Publishers.
Cilliers, P. (1998). Complexity and post modernism: Understanding complex systems. New York: Routledge.
Coleman, H. J. (1999). What enables self-organizing behavior in business. Emergence, 1(1), 33–48.
Committee on the Future of Rural Healthcare. (2005). Quality through collaboration: The future of rural healthcare. Washington, DC: National Academies Press.
Cook, R. I., & O'Connor, M. F. (2005). Thinking about accidents and systems. In: H. Manasse & K. Thompson (Eds), Improving medication safety. Bethesda, MD: American Society for Health-System Pharmacists.
Cook, R. I., Render, M., & Woods, D. D. (2000). Gaps in the continuity of care and progress on patient safety. British Medical Journal, 320(7237), 791–794.
Cook, R. I., & Woods, D. D. (1994). Operating at the sharp end: The complexity of human error. In: M. S. Bogner (Ed.), Human error in medicine (pp. 285–310). Hinsdale, NJ: Lawrence Erlbaum.
Cook, R. I., Woods, D. D., & Miller, C. (1998). A tale of two stories: Contrasting views of patient safety. Report from a workshop on assembling the scientific basis for progress on patient safety: 1–86. Chicago: National Health Care Safety Council of the National Patient Safety Foundation at the AMA.
Cyert, R., & March, J. G. (1963). A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.
Davis, K., Schoenbaum, S. C., Collins, K. S., Tenney, K., Hughes, D. L., & Audet, A. J. (2002). Room for improvement: Patients report on the quality of their health care. New York: The Commonwealth Fund.
Department of Veterans Affairs. (2002). The Veterans Health Administration national patient safety improvement handbook. Washington, DC: U.S. Department of Veterans Affairs.
Devers, K. J., Pham, H. H., & Lui, G. (2004). What is driving hospitals' patient-safety efforts? Health Affairs, 23(3), 103–115.
Dorner, D. (1996). The logic of failure: Recognizing and avoiding error in complex situations. New York: Metropolitan Books.
Dunbar, R. L. M. (1975). Manager's influence on subordinates' thinking about safety. Academy of Management Journal, 18(2), 364–369.
Espin, S., Lingard, L., Baker, G. R., & Regehr, G. (2006). Persistence of unsafe practice in everyday work: An exploration of organizational and psychological factors constraining safety in the operating room. Quality and Safety in Health Care, 15(3), 165–170.
Farrokh, F., Vang, J., & Laskey, K. (2007). Root cause analysis. In: F. Alemi & D. H. Gustafson (Eds), Decision analysis for healthcare managers. Chicago: Health Administration Press.
Gano, D. L. (2003). Apollo root cause analysis: A new way of thinking (2nd ed.). Yakima, WA: Apollonian Publications.
Garber, J., & Seligman, M. E. P. (1980). Human helplessness: Theory and applications. New York: Academic Press.
Gibson, R., & Singh, J. P. (2003). Wall of silence: The untold story of the medical mistakes that kill and injure millions of Americans. Washington, DC: Lifeline Press.
Gray, J. A. M. (2001). Evidence-based healthcare: How to make health policy and management decisions. London: Churchill Livingstone.
Green, S. G., & Mitchell, T. R. (1979). Attributional processes of leaders in leader-member interaction. Organizational Behavior and Human Performance, 23, 429–458.
Hanley, M. A., & Fenton, M. V. (2007). Exploring improvisation in nursing. Journal of Holistic Nursing, 25(2), 126–133.
Harrison, M. I., Koppel, R., & Bar-Lev, S. (2007). Unintended consequences of information technologies in health care: An interactive sociotechnical analysis. Journal of the American Medical Informatics Association, 14(5), 542–549.
Harvey, J. H., & Weary, G. (1985). Attribution: Basic issues and applications. San Diego, CA: Academic Press.
Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.
Helmreich, R. L. (2000). On error management: Lessons from aviation. British Medical Journal, 320, 781–785.
Error in Health Care
65
Helmreich, R. L., & Davies, J. M. (2004). Culture, threat, and error: Lessons from aviation. Canadian Journal of Anesthesia, 51(6), R1–R6.
Hofmann, D. A., & Stetzer, A. (1998). The role of safety climate and communication in accident interpretation: Implications for learning from negative events. Academy of Management Journal, 41(6), 644–657.
Institute for Safe Medication Practices. (2006). Since when is it a crime to be human? Available at http://www.ismp.org/pressroom/viewpoints/julie.asp; November 30.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Institute of Medicine. (2003). Health professions education: A bridge to quality. Washington, DC: National Academy Press.
Institute of Medicine. (2004). Patient safety: Achieving a new standard for care. Washington, DC: National Academy Press.
Institute of Medicine. (2007a). Frequently asked questions. Available at http://www.iom.edu/CMS/6008.aspx; 8 June.
Institute of Medicine. (2007b). Preventing medication errors: Quality chasm series. Washington, DC: National Academies Press.
James, L. R., & White, J. F. (1983). Cross-situational specificity in managers' perceptions of subordinate performance, attributions and leader behavior. Personnel Psychology, 36, 809–856.
Joint Commission on Accreditation of Healthcare Organizations. (2007). Health care at the crossroads: Strategies for addressing the nursing crisis. Available at http://www.jointcommission.org/publicpolicy/nurse_staffing.htm; 18 March.
Joint Commission Resources. (2007). Front line of defense: The role of nurses in preventing sentinel events (2nd ed.). Oakbrook Terrace, IL: Joint Commission Resources.
Joyce, P., Boaden, R., & Esmail, A. (2005). Managing risk: A taxonomy of error in health policy. Health Care Analysis, 13(4), 337–346.
Katz-Navon, T., Naveh, E., & Stern, Z. (2005). Safety climate in health care organizations: A multidimensional approach. Academy of Management Journal, 48(6), 1075–1089.
Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (Eds). (2000). To err is human: Building a safer health system. Washington, DC: National Academy Press.
Lawton, R., & Parker, D. (2002). Barriers to incident reporting in a healthcare system. Quality and Safety in Health Care, 11(1), 15–18.
Leape, L. L., Bates, D. W., Cullen, D. J., Cooper, J., Demonaco, H. J., Gallivan, T. R. H., Ives, J., Laird, N., Laffel, G., Nemeskal, R., Peterson, L. A., Porter, K., Servi, D., Shea, B. F., Small, S. D., Sweitzer, B. J., Thompson, B. T., & van der Vliet, M. (1995). Systems analysis of adverse drug events. ADE Prevention Study Group. Journal of the American Medical Association, 274(1), 35–43.
Leape, L. L., & Berwick, D. M. (2005). Five years after "To err is human": What have we learned? Journal of the American Medical Association, 293(19), 2384–2390.
Leduc, P. A., Rash, C. E., & Manning, M. S. (2005). Human factors in UAV accidents. Special Operations Technology, 3(8). Available at http://www.special-operations-technology.com/article.cfm?DocID=1275. Accessed on May 18, 2006.
Leonard, M. L., Frankel, A., & Simmonds, T. (2004). Achieving safe and reliable healthcare: Strategies and solutions. Chicago: Health Administration Press.
Lyons, S. S., Tripp-Reimer, T., Sorofman, B. A., Dewitt, J. E., Bootsmiller, B. J., Vaughn, T. E., & Doebbeling, B. N. (2005). VA QUERI informatics paper: Information technology for clinical guideline implementation: Perceptions of multidisciplinary stakeholders. Journal of the American Medical Informatics Association, 12(10), 64–71.
Mallak, L. (1998). Putting organizational resilience to work. Industrial Management, 40(6), 8–13.
Martinko, M. J., Douglas, S. C., & Harvey, P. (2006). Attribution theory in industrial and organizational psychology: A review. In: G. P. Hodgkinson & K. J. Ford (Eds), International review of industrial and organizational psychology (Vol. 21, pp. 127–187). Chichester: Wiley.
Martinko, M. J., & Gardner, W. L. (1982). Learned helplessness: An alternative explanation for performance deficits. Academy of Management Review, 7(2), 195–204.
Martinko, M. J., & Thomson, N. F. (1998). A synthesis and extension of the Weiner and Kelley attribution models. Basic and Applied Social Psychology, 20(4), 271–284.
McDaniel, R. R., & Driebe, D. J. (2001). Complexity science and health care management. Advances in Health Care Management, 2, 11–36.
Millenson, M. L. (1997). Demanding medical excellence. Chicago: University of Chicago Press.
Millenson, M. L. (2003). The silence. Health Affairs, 22(2), 103–112.
Mitchell, T. R., & Wood, R. E. (1979). An empirical test of an attributional model of leaders' responses to poor performance. Paper presented at the Symposium on Leadership, Duke University, Durham, NC.
National Council of State Boards of Nursing. (2007). Practice and discipline: TERCAP. Available at https://www.ncsbn.org/441.htm; April 18.
Nolan, T., Resar, R., Haraden, C., & Griffin, F. A. (2004). Innovation series: Improving the reliability of health care. Cambridge, MA: Institute for Healthcare Improvement.
Norman, D. (1988). The design of everyday things. New York: Doubleday.
Palmieri, P. A., Godkin, L., & Green, A. (2007). Organizational inertia: Patient safety movement slows as organizations struggle with cultural transformation. Lubbock, TX: Texas Tech University.
Palmieri, P. A., & Peterson, L. T. (2008). Technological iatrogenesis: An expansion of the medical nemesis framework to improve modern healthcare organization performance. Proceedings of the Annual Meeting of the Western Academy of Management, March 26–29, 2008, Oakland, California.
Palmieri, P. A., Peterson, L. T., & Ford, E. W. (2007). Technological iatrogenesis: New risks necessitate heightened management awareness. Journal of Healthcare Risk Management, 27(4), 19–24.
Perrow, C. (1984). Normal accidents: Living with high-risk technologies. New York: Basic Books.
Peterson, C., Maier, S., & Seligman, M. E. P. (1993). Learned helplessness: A theory for the age of personal control. New York: Oxford University Press.
Peterson, M. F. (1985). Paradigm struggles in leadership research: Progress in the 1980s. Paper presented at the Academy of Management, San Diego, CA.
Pfeffer, J. (1977). The ambiguity of leadership. Academy of Management Review, 2, 104–112.
Rapala, K., & Kerfoot, K. M. (2005). From metaphor to model: The Clarian safe passage program. Nursing Economics, 23(4), 201–204.
Rasmussen, J. (1990). The role of error in organizing behavior. Ergonomics, 33, 1185–1199.
Rasmussen, J. (1999). The concept of human error: Is it useful for the design of safe systems in health care? In: C. Vincent & B. deMoll (Eds), Risk and safety in medicine (pp. 31–47). London: Elsevier.
Reason, J. (2000). Human error: Models and management. British Medical Journal, 320(7237), 768–770.
Reason, J. T. (1990). Human error. New York: Cambridge University Press.
Reason, J. T. (1997). Managing the risks of organizational accidents. Aldershot: Ashgate Publishing.
Reason, J. T. (1998). Managing the risks of organizational accidents. Aldershot, England: Ashgate.
Reason, J. T., Carthey, J., & de Leval, M. R. (2001). Diagnosing "vulnerable system syndrome": An essential prerequisite to effective risk management. Quality in Health Care, 10(S2), 21–25.
Reason, J. T., & Hobbs, A. (2003). Managing maintenance error: A practical guide. Aldershot, England: Ashgate.
Reason, J. T., & Mycielska, K. (1982). Absent-minded? The psychology of mental lapses and everyday errors. Englewood Cliffs, NJ: Prentice-Hall Inc.
Roberts, K. (1990). Some characteristics of one type of high reliability organization. Organization Science, 1(2), 160–176.
Roberts, K. H. (2002). High reliability systems. Report to the Institute of Medicine Committee on Data Standards for Patient Safety, September 23, 2003.
Rogers, A., Wang, W. T., Scott, L. D., Aiken, L. H., & Dinges, D. F. (2004). The working hours of hospital staff nurses and patient safety. Health Affairs, 23(4), 202–212.
Rosenthal, M. M. (1994). The incompetent doctor: Behind closed doors. London: Open University Press.
Sasou, K., & Reason, J. T. (1999). Team errors: Definitions and taxonomy. Reliability Engineering and System Safety, 65(1), 1–9.
Seligman, M. E. P., Maier, S. F., & Geer, J. (1968). The alleviation of learned helplessness in dogs. Journal of Abnormal Psychology, 73, 256–262.
Smetzer, J. L., & Cohen, M. R. (1998). Lessons from Denver medication error/criminal negligence case: Look beyond blaming individuals. Hospital Pharmacy, 33, 640–656.
Smetzer, J. L., & Cohen, M. R. (2006). Lessons from Denver. In: P. Aspden, J. Wolcott, J. L. Bootman & L. R. Cronenwett (Eds), Preventing medication errors (pp. 43–104). Washington, DC: National Academy Press.
State of Wisconsin. (2006). Criminal complaint: State of Wisconsin versus Julie Thao. Wisconsin: Circuit Court of Dane County.
Stone, P. W., Mooney-Kane, C., Larson, E. L., Horn, T., Glance, L. G., Zwanziger, J., & Dick, A. W. (2007). Nurse working conditions and patient safety outcomes. Medical Care, 45, 571–578.
Tucker, A. L. (2004). The impact of organizational failures on hospital nurses and their patients. Journal of Operations Management, 22(2), 151–169.
Tucker, A. L., & Edmondson, A. (2003). Why hospitals don't learn from failures: Organizational and psychological dynamics that inhibit system change. California Management Review, 45(2), 1–18.
Tucker, A. L., & Spear, S. J. (2006). Operational failures and interruptions in hospital nursing. Health Services Research, 41(3), 643–662.
Vaughan, D. (1996). The Challenger launch decision. Chicago: University of Chicago Press.
Vaughan, D. (1999). The dark side of organizations: Mistake, misconduct, and disaster. In: J. Hagan & K. S. Cook (Eds), Annual review of sociology (pp. 271–305). Palo Alto, CA: Annual Reviews.
Vincent, C. (1997). Risk, safety and the dark side of quality. British Medical Journal, 314, 1175–1176.
Vincent, C. (2003). Understanding and responding to adverse events. New England Journal of Medicine, 348(11), 1051–1056.
Vincent, C., Taylor-Adams, S., & Stanhope, N. (1998). Framework for analyzing risk and safety in clinical medicine. British Medical Journal, 316(7138), 1154–1157.
Wachter, R. M. (2004). The end of the beginning: Patient safety five years after "To err is human" [web exclusive]. Health Affairs, W4, 534–545.
Weick, K. E. (1993). The collapse of sensemaking in organizations: The Mann Gulch disaster. Administrative Science Quarterly, 38, 628–652.
Weick, K. E., & Sutcliffe, K. M. (2001). Managing the unexpected: Assuring high performance in an age of complexity. San Francisco: Jossey-Bass.
Weiner, B. (1972). Theories of motivation. Chicago: Markham Publishing Company.
Weiner, B. (1995a). Judgment of responsibility: A foundation for a theory of social conduct. New York: Guilford Press.
Weiner, B. (1995b). Attribution theory in organizational behavior: A relationship of mutual benefit. In: M. J. Martinko (Ed.), Attribution theory: An organizational perspective (pp. 3–6). Delray Beach, FL: St. Lucie Press.
Weiner, J. P., Kfuri, T., Chan, K., & Fowles, J. B. (2007). "e-Iatrogenesis": The most critical unintended consequence of CPOE and other HIT. Journal of the American Medical Informatics Association, 14(3), 387–389.
Whittingham, R. B. (2004). The blame machine: Why human error causes accidents. Amsterdam: Elsevier Butterworth-Heinemann.
Wiegmann, D. A., & Shappell, S. A. (1997). Human factor analysis of post-accident data: Applying theoretical taxonomies of human error. The International Journal of Aviation Psychology, 7(4), 67–81.
Wiegmann, D. A., & Shappell, S. A. (1999). Human error and crew resource management failures in naval aviation mishaps: A review of U.S. Naval Safety Center data, 1990–1996. Aviation Space Environmental Medicine, 70(12), 1147–1151.
Wiegmann, D. A., & Shappell, S. A. (2001). Human error perspectives in aviation. The International Journal of Aviation Psychology, 11(4), 341–357.
Wiegmann, D. A., & Shappell, S. A. (2003). A human error approach to aviation accident analysis: The human factors analysis and classification system. Aldershot: Ashgate Publishing Company.
Wisconsin Hospital Association. (2006). Hospital association statement regarding legal actions against nurse. Available at http://www.wha.org/newscenter/pdf/nr11-2-06crimchargestmt.pdf; 10 November.
Wisconsin Medical Society. (2006). Position statement regarding attorney general charges filed today. Available at http://www.wisconsinmedicalsociety.org/member_resources/insider/files/reaction_to_ag_changes.pdf; 7 November.
Woods, D. D., & Cook, R. I. (2002). Nine steps to move forward from error. Cognition, Technology & Work, 4(2), 137–144.
Zhan, C., & Miller, M. R. (2003). Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. Journal of the American Medical Association, 290, 1868–1874.
PROMOTING PATIENT SAFETY BY MONITORING ERRORS: A VIEW FROM THE MIDDLE

Michal Tamuz, Cynthia K. Russell and Eric J. Thomas

ABSTRACT

Hospital nurse managers are in the middle. Their supervisors expect that they will monitor and discipline nurses who commit errors, while also asking them to create a culture that fosters reporting of errors. Their staff nurses expect the managers to support them after errors occur. Drawing on interviews with 20 nurse managers from three tertiary care hospitals, the study identifies key exemplars that illustrate how managers monitor nursing errors. The exemplars examine how nurse managers: (1) sent mixed messages to staff nurses about incident reporting, (2) kept two sets of books for recording errors, and (3) developed routines for classifying potentially harmful errors into non-reportable categories. These exemplars highlight two tensions: the application of bureaucratic rule-based standards to professional tasks, and maintaining accountability for errors while also learning from them. We discuss how these fundamental tensions influence organizational learning and suggest theoretical and practical research questions and a conceptual framework.
Patient Safety and Health Care Management
Advances in Health Care Management, Volume 7, 69–99
Copyright © 2008 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1474-8231/doi:10.1016/S1474-8231(08)07004-3
69
70
MICHAL TAMUZ ET AL.
INTRODUCTION

The influential Institute of Medicine (IOM) report on medical errors recommends that health care organizations "implement mechanisms of feedback and learning from error" (Kohn, Corrigan, & Donaldson, 2000, p. 181). Researchers demonstrate how accountability measures reduce frontline health care providers' willingness to disclose potential problems and medical errors – information necessary for learning. They call for enacting a non-punitive environment (e.g., Kohn et al., 2000) or a "just culture" (Marx, 2001, 2003) in hospitals. Nurse managers are in the middle. Their supervisors expect that they will monitor and discipline nurses who commit errors, while also asking them to create a culture that fosters reporting of errors. Their staff nurses expect the managers to support them after errors occur and to understand that errors are often due to circumstances beyond their control. But from their positions in the middle, nurse managers also play a pivotal role in gathering information about and learning from medical errors. Drawing on interviews with nurse managers from three hospitals, we present three exemplars that illustrate how nurse managers cope with the demands of monitoring nursing errors and we explore the implications of these coping strategies for organizational learning. Understanding the roles of nurse managers is important because they can influence the availability of information about errors to other parts of the organization. Nurses are expected to report errors directly to their managers and managers also discover events that nurses do not report. Nurse managers can then pass on information to relevant parts of the hospital or sequester it. Finally, the tension between ensuring accountability and fostering learning is generalizable to many safety-reporting systems (Tamuz, 2001), and examining the roles of nurse managers may also inform our understanding of safety monitoring in high-hazard industries.
Nursing Roles Nurses play key roles in enabling the hospital to learn from its experience with errors and in maintaining accountability for mistakes. Nurses are well positioned to observe and receive feedback about ‘‘small failures’’ in the hospital (e.g., Edmondson, 2004) because they serve in a liaison role; they coordinate between the patient and other diagnostic and treatment units in the hospital (e.g., pharmacy, laboratory, and imaging). Moreover, nurses
Promoting Patient Safety
71
usually serve as the last safety barrier before patients receive medications. This position in an organization is often referred to as the ‘‘sharp end,’’ and leaders frequently blame those working at the sharp end when something goes wrong (Cook & Woods, 1994). Nurses are in a position to observe and be involved in medical errors, but they may be reluctant to report them. Although many hospitals have means for staff nurses to report errors independent of their managers, survey research indicates that nurses consistently attribute their failure to report errors to fear of negative repercussions. They are wary of being subject to disciplinary action and punitive responses by management (Wakefield et al., 1999; Osborne, Blais, & Hayes, 1999; Wolf, Serembus, Smetzer, Cohen, & Cohen, 2000). Wakefield et al. (1999) also found that nurses attributed their non-reporting to disagreement about whether an error occurred, the effort required to submit reports, and concerns about appearing incompetent. However, nurses surveyed expressed their intent to report errors if they were life threatening to the patient (Walker & Lowe, 1998) or if the patient had been injured, even at a minimal level of injury (Throckmorton & Etchegaray, 2007). The relationship between nurse managers and the nurses they supervise also influences whether nurses will call attention to medical errors and threats to patient safety. Edmondson (1996, 1999) found that nurse managers could be instrumental in encouraging staff nurses to report errors and problems; managers can create an environment of psychological safety that fosters nurses’ error reporting. In a survey of hospital nurses, Vogus and Sutcliffe (2007) also found that nurses’ trust in their managers was important. However, trust interacted with other safety-promoting activities to reduce medication error reporting when nurses also described a high level of safety-related activities in their unit. 
These studies illustrate the difficulty of untangling error-reporting behavior from the actual underlying error rate. The relationship between nurses and their managers can also affect informal reporting of potential threats to patient safety. Individual nurses demonstrate their professional skills by problem solving. When they devise workarounds to solve immediate problems, they rarely inform nurse managers about the underlying issues; and thus, the lack of information hinders organizational learning (Tucker, Edmondson, & Spear, 2002; Tucker & Edmondson, 2003). The roles of nurse managers differ from those of staff nurses. Nurse managers act as a communication and administrative conduit between the staff nurses and the health care providers and administrators outside the patient care unit and thus, can be instrumental in maintaining accountability and promoting learning. Researchers have extensively studied the role of
middle level managers in business (e.g., Balogun, 2006; Fenton-O’Creevy, 1998; Kanter, 1982; Mair, 2005) and in health care organizations (e.g., Ashmos, Huonker, & McDaniel, 1998; Brewer & Lok, 1995; Carney, 2004). Although considerable research has focused on the role of nursing middle management in hospitals, more research is needed on the roles of middle level nurse managers in promoting (or hindering) organizational learning.
Organizational Learning in Hospitals Research on organizational learning in hospitals and related health care organizations has focused on how organizations learn (or fail to learn) from an array of events that vary in their outcomes. Researchers discuss organizational learning from small failures (e.g., Edmondson, 2004; Sitkin, 1992), near misses (e.g., Callum et al., 2001), and preventable patient deaths. Some of these events resulted from errors, while others did not. In yet other cases, health care providers did not recognize that their actions contributed to adverse outcomes (e.g., Weick & Sutcliffe, 2003). Hospitals formally designate decision-making forums for organizational learning from preventable injuries, near injuries, and deaths. These include physicians' Morbidity and Mortality sessions (e.g., Lipshitz & Popper, 2000) and hospital-wide Root Cause Analysis meetings (e.g., Bagian et al., 2002). Nurses and nurse managers routinely participate in Root Cause Analysis sessions; however, they do not have a tradition of reviewing significant deaths and injuries comparable to that of the physicians. Nurses engage in organizational learning activities at the unit level, and these activities can be distributed unevenly among various patient care units in a single hospital (e.g., Edmondson, 2004). In the current study, organizational learning is defined as a process in which decision makers weigh the organization's experience as a basis for changing the routines that will guide future behavior (Levitt & March, 1988; March, 1999). This definition emphasizes that learning is a process, but it does not necessarily result in change or improvement.
METHODS

Using qualitative research methods (Miles & Huberman, 1994), we examined interviews with nurse managers who worked in four critical care units located in three tertiary care teaching hospitals. The research design
used semi-structured interviews supplemented by workplace observations of health care providers, allowing us to focus on the details of each hospital. We contextualized our research design primarily through rich description (Rousseau & Fried, 2001). We concentrated on examining the medication process from distinct professional viewpoints; in this study the focus is on nurse managers. This allowed us to gather rich, nuanced descriptions (e.g., Weick, 2007) of the methods that hospitals used to monitor nurses' medication errors, as seen from the nurse managers' perspectives. Specifically, we identified key exemplars that illustrate the role of nurse managers in monitoring and assessing nurses' errors. The purpose of discussing these exemplars is to raise theoretical and practical questions that can guide further research.
Study Sample This study is part of a research project that examines how three hospitals learn from their experience in order to improve medication safety. The larger research project focuses on drug-related errors because adverse events (i.e., involving patient harm) resulting from medications are some of the most common and costly errors (Aspden, Wolcott, Bootman, & Cronenwett, 2007; Bates et al., 1995; Thomas et al., 2000). The investigators used three criteria to select hospitals and sub-units within them (the hospital pharmacies and four critical care patient units). The three hospitals had a high volume of medication usage and a high potential for adverse events due to the complexity of their patients. The hospitals also had implemented medication safety programs, including reporting systems, indicating their efforts to reduce errors. The pseudonyms for the three hospitals are West Hospital (WH), South Hospital (SH), and North Hospital (NH). The larger research project was based on interviews with study participants who worked at different levels in the organizational hierarchy and represented diverse professional groups. The study was designed to interview a purposeful sample of key hospital administrators and middle managers and a "purposeful random sample" (Patton, 2002, p. 141) of health care providers, including nurses, physicians, and pharmacists. The objective was to reduce bias and enhance the credibility of the small sample, not to generalize from it. The larger research included 341 participants, representing 86 participants in WH, 98 in SH, and 157 in NH. In the three hospitals, 18 people declined to participate and two interviews were excluded due to recording malfunctions.
The study reported here focuses on interviews with a purposeful sample of 20 nursing middle managers from four critical care patient units in three hospitals. The sample was designed to include all of the managers who were responsible for supervising nurses in the four critical care units. The study is based on interviews with 6 middle managers from a unit in WH, 3 from a unit in SH, and 11 from two units in NH, including 6 responsible for one critical care unit and 5 responsible for another. All of the managers who met the purposeful sample criteria were interviewed and no one declined to participate. The nurse managers served in middle management positions between the top hospital administrators, such as the Vice President of Nursing, and the frontline staff nurses, who were almost all registered nurses. The highest level middle management positions were Directors of Nursing, who were responsible for several patient care units (including the critical care unit we studied), and who reported to a hospital vice president. The second level middle management positions were unit managers who were responsible for the nursing staff in one critical care unit. The third level positions were assistant managers who reported to the unit manager and worked as frontline supervisors on each shift, in a specific critical care unit. To de-identify the study participants, we use the term "nurse manager" when referring to all three levels of middle management nursing positions. The study reported here does not include "charge nurses" (i.e., experienced lead nurses) because many of the registered nurses rotated positions, acting alternately as charge nurses or staff nurses.
Data Collection and Analysis We used a general interview guide (Patton, 2002) in which we asked participants to describe hospital reporting systems and programs for monitoring medication safety as well as to discuss their own experiences with medication problems. Interviews with the nurse managers averaged about 90 min, ranging in length from approximately 1–6 h (across multiple interview sessions). Nurse managers who gave longer interviews were not over-represented because the study did not calculate the frequency of medication errors or other related events. We examined how the nurse managers described their roles in identifying, responding to, and making decisions about possible medication errors. Interviews were audio-recorded, transcribed, and checked for accuracy of transcription. Field notes,
document review, and observations of routine activities supplemented the interviews. In the study reported here, we conducted a "framework analysis" of the interviews with nurse managers to identify the nurse managers' roles in and their methods of making decisions about monitoring and assessing nursing errors (Green & Thorogood, 2004). Specifically, we sought to identify key exemplars that could illustrate these roles and decision-making processes. Applying framework analysis, two research team members familiarized themselves with the data by reading the transcripts and discussing field notes. They constructed a thematic framework by focusing on a priori study objectives and previous research as well as themes that emerged from the data. Specifically, they identified specific issues that the study participants raised (e.g., mixed messages) and took note of patterns in the activities that the study participants described (e.g., decision-making guidelines). The thematic framework was also based on key concepts derived from previous research: production pressures, accountability, and learning. Research findings suggest that staff nurses refrain from disclosing errors because of fears of reprimand (i.e., accountability) and lack of time (i.e., production pressures) (e.g., Wakefield et al., 1999). Unit-level nurse managers' actions can intensify (or alleviate) the nurses' concerns and thus, influence the disclosure of error data necessary for organizational learning (e.g., Edmondson, 1996, 1999). The investigators used the thematic framework to select and analyze excerpts from the transcripts that illustrated error monitoring. The research team also looked for counter-examples that contradicted the coding categories in the thematic framework. The team analyzed the data by comparing within and across the exemplars. Two researchers grouped the data (i.e., transcript excerpts) into exemplars and looked for common and distinctive themes in each of the exemplars.
A third team member reviewed the preliminary analysis and interpretation of the exemplars to assess whether the data supported the interpretation and to assure deidentification. The purpose of the data analysis was to refine and generate research questions and propose a conceptual framework to guide future studies. To protect the confidentiality of the data, the investigators removed details that could identify the research site and study participants. Pseudonyms disguise the names of the hospitals, patient care units, and the specific position titles. We also use a pseudonym for the External Review Committee (ERC) and do not specify the regulatory agency that enacted
the committee. Although the nurses and nurse managers included men and women, in the results, male pronouns are used to refer to all study participants. In excerpts from the interview transcripts, the text in brackets was changed to de-identify participants or added to clarify the contents. The Institutional Review Boards designated by the hospitals and the universities approved the study.
RESULTS

Error-Monitoring Systems

In the three study hospitals, nurse managers were responsible for holding the staff nurses accountable for their errors. The emphasis on maintaining accountability in nursing came from multiple sources. The hospital expected nurse managers to ensure that nurses reported errors through the hospital's voluntary incident reporting system. Nursing norms guided staff nurses to report errors to their charge nurse – usually an experienced lead nurse who assumed administrative duties for a shift in a particular unit. After consulting with (or being directed by) the charge nurse, the staff nurse might file a report with the hospital incident reporting system. Consultation with the charge nurse was customary, but not required. In each of the hospitals, a nurse could file an incident report directly to the reporting system. Indeed in SH, health care providers could file reports anonymously. Nurse managers were also expected to follow the mandates of two disciplinary systems: one system based on professional norms and standards and another separate due process system established by the hospital and common to many bureaucracies (Edelman, 1990). At the hospital level, the human resource department implemented the customary due process measures to assure a fair and equitable disciplinary system. For example, before dismissing an employee, the hospital must demonstrate that it provided adequate verbal and written warnings. This system not only applies to deviations from standard hospital routines (e.g., tardiness), but also to nurses' medication errors. State-level regulators also implemented an ERC designed to protect patient safety by holding nurses accountable for their mistakes. Hospitals were required to report a nurse to the ERC if he made three minor errors or one major error. According to the ERC classification system, the category of minor errors included medication errors in which the patient
Promoting Patient Safety
was unharmed. Before the hospital reported a nurse to the ERC, the hospital convened an Internal Review Committee (IRC). The committee, composed of representatives of the hospital’s nurses, assessed the particular circumstances of the case and decided whether the nurse should be referred to the ERC.

Although the nurse managers were bound by sometimes overlapping sets of bureaucratic rules, the hospital enabled them to use professional discretion when applying these rules. Nurse managers described how they could choose one or more of the following options: record the event in the nurse’s personnel file, relay the report to the hospital incident reporting system, report the event to the hospital disciplinary system, or initiate an internal review by the IRC for possible referral to the ERC.

Each of the following three exemplars illustrates how nurse managers monitored errors while buffering and balancing the pressures to hold nurses accountable for their actions. The first exemplar focuses on the strategies that nurse managers used to deal with hospital incident reporting requirements; the second examines how some of these managers kept two sets of books; and the third considers how managers responded to pressures from the ERC.
Exemplars

This section includes three exemplars based on the interviews with nurse managers. Each exemplar begins with a description of the error-monitoring practice from the perspective of the nurse managers; the exemplar concludes with the investigators’ analyses of the themes related to production pressures, accountability, and learning, where applicable, as well as the description of the roles of the nurse managers.
Exemplar 1: Promoting Incident Reporting

The first exemplar focuses on managers’ strategies for responding to hospital incident reporting requirements. The nurse managers developed idiosyncratic criteria for classifying which medication errors constituted reportable incidents. Some managers inadvertently generated mixed messages about whether nurses could be disciplined for their mistakes, while others filtered the incident reports that were relayed from the nursing unit to the hospital incident reporting system.
MICHAL TAMUZ ET AL.
Mixed Messages

Nurse managers expressed their awareness of the staff nurses’ concerns about reporting their own errors or the errors of fellow nurses. They also addressed the efforts by hospital administrators to allay the nurses’ fears and explain the hospital’s non-punitive policy towards making errors. A nurse manager vividly described this situation in WH:

They’re [the nurses] scared. I mean, because, when you make an incident … when you do something wrong … that really should be written up; but you know what? You think to yourself, ‘‘Gosh! I’m really scared. You know, it didn’t do anything to the [patient], nobody knows, I’m just not gonna do it. I’m scared. Maybe I could get fired …’’ So they don’t want to report themselves, either.
According to the managers, the nurses’ fear of reporting also extended to reporting their nursing co-workers: They don’t want to get the person in trouble; they don’t want to send the person to [Internal Review Committee] or think a person could potentially be terminated. You know, the only thing, and I’m not trying to be mean, but it’s ignorance. That is why they would not write somebody up.
As the interview continued, the manager explained further. He described the institution’s challenges in encouraging nurses to document nursing errors officially so that the hospital could understand the full range of errors:

Because, if they truly understood what the process was, they would know that it has nothing to do [with punitive actions] … because, the hospital has tried over and over again. We’ve given in-services; we’ve had Risk Management come … to try to get over to them that the incident report is not anything punitive. I mean, occasionally, once you follow up, and if there is a trend, there may be a punitive thing that does happen to the nurse.
This illustrates how, during the lengthy interviews, nurse managers contradicted themselves in describing the potential punitive consequences of nurses reporting mistakes. Indeed, nurse managers explained that they usually initiated disciplinary measures if a nurse repeated similar mistakes. One nurse manager was acutely aware of the mixed message that he and other health care providers had been sending to the nurses:

It’s kind of double-talk or something, because then one side is saying it’s not punitive, but the other side you’re saying, ‘‘Well, they can get counseling, so that can get referred to [the IRC]’’ … It does sometimes not make sense, because there is situations where it could become punitive.
These comments suggest that the nurse manager inadvertently sent mixed messages to the staff nurses regarding the punitive response to incident reporting. The hospitals publicly espoused a non-punitive policy towards incident reporting. But if nurses repeated similar mistakes, they could be subject to internal and external disciplinary procedures. These disciplinary measures gave nurses disincentives for reporting their own errors and those of other nurses. They also created the conditions for sending mixed messages to the staff nurses. On one hand, nurse managers instructed nurses that all incidents should be reported so that the hospital could have a better understanding of the range of incidents occurring. On the other, the managers also noted that there was an unspoken range of acceptable and unacceptable frequencies and types of incidents that could jeopardize a nurse’s career.

Filtering

The managers indirectly filtered the events that staff nurses reported to the hospital by excluding those errors that the managers considered to be non-reportable. Nurse managers developed their own idiosyncratic criteria for which errors should be classified as reportable or non-reportable incidents. The managers’ various rules of thumb for classifying reportable incidents illustrate how nurse managers reinterpreted the hospital guidelines for incident reporting. Based on the managers’ descriptions of reportable incidents, the investigators identified criteria that included the potential and actual severity of the patient injury, specific types of medication errors, how quickly the error was detected and corrected, and the involvement of others in the event.

Nurse managers considered the potential harm to the patient. An NH manager stated, ‘‘It depends on the severity.’’ An SH manager clarified further, ‘‘There are classes of mistakes that I don’t bat an eye at.
They’re too low risk to be of any real concern to me.’’ However, the managers’ classification criteria also extended beyond concerns about the potential harm to the patient. One of the NH managers focused on specific infractions that constituted a reportable incident: ‘‘If they omitted some medication […] and it’s in the order, then they have to make an incident report that it was omitted.’’ An SH manager focused on how quickly the nurse noticed and corrected his mistake:

If you hang a brand new bag, and you look at it and then you say, ‘‘Oh my goodness, this is the wrong concentration, let me change this real quick in the pump.’’ No, they wouldn’t [report] that. If it’s something that is not rectified pretty much immediately, and that they didn’t catch immediately, then they [file an incident report about] it.
An SH manager focused on the involvement of other health care providers in the error. If nurses on another shift were involved in an error, he classified the event as a reportable incident: ‘‘If they [the nurses] come in and it’s a change of shift issue and they find something that happened on the prior shift, they have to [file an incident report on] that.’’ Similarly, if physicians were informed about the error, it was a reportable incident: ‘‘If it’s a situation where intervention was required, that physicians had to be notified, then absolutely they must [file an incident report on] that.’’ In this case, physician involvement also signaled that the event was serious. Furthermore, an NH manager used patient involvement as an event classification criterion:

Because a nurse would come to me and would say, ‘‘Gosh I did this’’ and they really feel bad about it then I say, ‘‘Then we need to do [an incident report]. It’s not to get you in trouble but we need to in case something happens, especially if the patient knows.’’ If the patient knows that a mistake was made, we need to do [an incident report].
All of these examples reflect the managers’ expectations of what constituted a reportable incident. They are also consistent with the staff nurses’ accounts that they often consulted a nursing supervisor before submitting an incident report. Even once officially documented, not all incident reports were relayed from the nursing unit to the hospital reporting system. A nurse manager described how a nursing management team directly filtered an incident report:

I know a nurse who wrote himself up, because he had a delay on a [hazardous drug] dose. He was supposed to give it like 9 o’clock in the morning and he didn’t give it until like 4 o’clock in the afternoon, and he wrote himself up. I was like, ‘‘What are you doing? Why did you [report] yourself on that?’’ He was like, ‘‘Well, because I was late with it; and it was wrong, and if something happens to that patient, if he gets a DVT [deep vein thrombosis] or something, you know, it’s my responsibility.’’ I am like, ‘‘Okay, you are just a straight up [person], you know.’’
When asked what would happen to the incident report written by that nurse, the manager indicated that it would be given to the nursing management team; they would make the ultimate call on whether to send it on to the hospital’s incident reporting system. The manager noted, however, that it would likely go no further than the team and that it would not go to the hospital’s reporting system. In contrast, nurse managers in the same hospital stated that if an incident was filed on someone in their unit by another
department or if a nurse from their unit filed an incident on someone in another department, then the incident ‘‘has to go through the proper channels.’’

Analysis

In the first exemplar, production pressures did not appear to be salient. However, accountability and learning issues were significant. The nurse managers also engaged in buffering and balancing roles.

Accountability

The case of the mixed messages highlights the nurse managers’ responsibility to monitor their staff nurses’ performance. The hospitals declared a non-punitive policy towards incident reporting, but hospital administrators continued to hold the nurse managers accountable for ensuring that their staff followed the rules. Nurse managers developed informal guidelines specifying the conditions under which they would hold staff nurses accountable for reportable incidents. As could be expected, the managers were more likely to hold nurses accountable if the patient actually was harmed or could have been harmed under slightly different circumstances. Public disclosure of the error also seemed to increase the likelihood that nurse managers would hold nurses accountable for an error and require that they submit an incident report.

Learning

This exemplar underscores several possible impediments to learning. When nurses hear mixed messages about filing an incident report, they might continue to be fearful of reporting their own errors and those of other nurses. Thus, the hospital would have fewer cases from which to learn. Furthermore, because the official hospital guidelines for reporting incidents differed from the nurse managers’ idiosyncratic reporting criteria, nurses could find it difficult to learn when incident reporting was required. When the nurse managers directly or indirectly filtered event reports, they reduced the hospital’s capacity for organizational learning.
Because the nurses and nurse managers did not report all relevant errors, the hospital had less data from which to observe patterns or draw conclusions.

Nurse Manager Roles

The use of mixed messages clearly illustrates the nurse manager’s role in balancing contradictory hospital-level policies. On one hand, the hospital
maintained a non-punitive policy towards incident reporting, while on the other, hospital administrators expected nurse managers to enforce compliance with hospital rules and procedures. Nurse managers translated these policies into an informal guideline in which a nurse usually would be disciplined after repeating the same mistake – but not as the result of an isolated error.

Nurse managers saw themselves as protecting staff nurses when they filtered out what they perceived as non-hazardous infractions. In doing so, the nurse managers buffered their staff from the possible future intensification of disciplinary measures. If a nurse conscientiously reported every error he thought to be important, he would strengthen his local professional reputation within the patient care unit, but could undermine his official standing in the hospital. In the future, if he were to be involved in an error that caused harm to a patient, his history of previous mistakes might increase the severity of the hospital’s disciplinary measures.

The nurse managers also served as informal consultants to the staff nurses. Although the nurses were not required to obtain approval from the charge nurse or a nurse manager before filing a report, the staff nurses often conferred with them before submitting an incident report. This informal consulting also occurred in SH, even though health care providers there had the option of filing anonymous reports.

Consider that the incident reporting systems in the three hospitals were not computerized at the time of the interviews. Computerized incident reporting may have reduced the opportunities for nurse management teams to directly filter out the incident reports they perceived as inconsequential. However, the electronic filing of incident reports would be unlikely to eliminate the informal consulting between staff nurses and nurse managers when deciding whether an event warrants submitting a formal report.
Exemplar 2: Keeping Two Sets of Records

The second exemplar examines how nurse managers kept two sets of error records – a formal one, available to hospital administrators, and an informal one for the nurse managers’ own use. The formal, official file contained a subset of a nurse’s errors and related disciplinary measures; the informal file included a complete record of all of the nurse’s (reported) mistakes. Nurse managers developed idiosyncratic guidelines for determining which errors should be placed in the official permanent record and which in the informal one.
Members of the nurse management teams in SH and WH spontaneously described remarkably similar processes for keeping two sets of records. Nurse managers could decide when and how to discipline nurses for making medication errors. As a nurse manager at WH explained: I don’t think that every time something happens with a medication error that I have to write them up. I think I have the option to, you know, choose how I’m going to handle the situation.
An SH nurse manager shared the view that ‘‘every medication error is different.’’ However, nurses and managers in all three hospitals consistently stated that in the case of patient harm, nurses must report the injury to their supervisors and their supervisors must formally record the injury. ‘‘There’s some things that you have no option, that you would have to write them [the nurses] up if there was adverse reaction to the patient or something,’’ a WH nurse manager explained.

The nurse managers developed guidelines or rules of thumb for deciding whether or not they would take formal disciplinary measures (i.e., ‘‘writing up’’ a nurse). Nurse managers in different hospitals developed comparable guidelines. For example, a WH nurse manager specified one of his guidelines:

I would not write somebody up if it was like a circumstance-type thing. I mean, obviously, if. I guess I might have to write somebody up if it was like a major thing, and then it caused adverse reaction to the [patient], I would have no option.
A nurse manager in SH also took into account the potential danger to the patient and was wary of errors ‘‘that can affect a patient so drastically; they can kill someone.’’ On the other hand, the same nurse manager was also attentive to the circumstances under which the error occurred: So you didn’t tell me you gave your medication an hour late, big deal. I think that’s going overboard to tell me that you gave something an hour late. Sometimes we’re three hours late in there. There’s just no getting to it. You can’t get to it and you’ve got to prioritize.
When assessing an error, nurse managers also took into consideration conditions that were beyond the control of the nurse: That’s when you would write in your investigation that this was a totally unreasonable assignment, or it was easy and then it just fell apart, and we didn’t have enough staff. There was nothing they could do; they did the best job they could do.
When deciding how to record an error, the nurse managers emphasized the importance of taking into consideration not only the patient outcomes, but also the circumstances that could have contributed to a nursing error. Some managers explained that the perceived severity of the infraction influenced their decision whether or not to include a report in the nurse’s permanent file. For example, if the nurse manager considered the nurse’s infraction to be inconsequential, then he did not record the counseling session in the nurse’s file. To illustrate, an SH manager recalled his conversation with a nurse:

[Nurse:] ‘‘I gave my [medication] one hour late; I can’t believe I missed it.’’ [Manager:] ‘‘Please try and write yourself a note, let yourself know what times all your meds are due …’’ That’s not going in their permanent file.
However, if the error could potentially harm the patient, this same SH manager would not only talk with the nurse, but also record the event in the nurse’s file:

It depends on what happened. Sometimes I just call them in and talk to them and tell them that this is the situation, this is what happened. It depends. If they gave 40 of [hazardous medication] instead of 20 of [hazardous medication], and they realized it […] There was no intervention required with the patient, everything was okay, then it just goes in their file as, ‘‘This was a medication error.’’ It’s like a verbal warning and they have to be conscientious of that. And then usually you find that they’re much more careful.
These examples show how nurse managers differentiated between formally choosing to record a warning in a nurse’s file and using informal methods to record a nurse’s errors. When nurse managers monitored mistakes that did not warrant a formal warning, they devised informal methods to remember whom they had coached or counseled about mistakes. The nurse managers, however, differed in the informal strategies they used. One nurse manager explained that his management team developed a spreadsheet to keep an informal record of all of the nurses’ errors: If I looked at it [the event] and I saw, I thought it was extremely unreasonable, I would not write them up. I would probably put it on the spreadsheet, which the spreadsheet is just ours; nobody sees that, I don’t send it out to anybody, that’s just for the management team so we can just glance at it and see: ‘‘Is there people that we need to be following up with?’’ that kind of stuff.
The nurse managers also used informal methods to keep track of counseling sessions, as a WH nurse manager explained: ‘‘You might write it on a paper towel and put it in their [the nurse’s] file, so it’s an unofficial thing.’’ Indeed, some managers described relying on their memories: ‘‘If something happens and say it’s little bitty tiny things, and, ‘Oops I caught this, Oops I caught this’ – and it happens several times in a row, sure you remember that.’’ They referred to their memories to keep track of positive events (i.e., nurses catching something before it became a problem) as well as more negative ones (i.e., nurses who had made minor errors that did not warrant an incident report).

These examples illustrate how the nurse managers kept two sets of books regarding medication errors. They kept an informal set of records composed of all of the nurse’s (reported) medication errors, including apparently insignificant infractions. They also formally assembled, in the nurse’s permanent file, a record of counseling sessions and written warnings about potentially significant errors.

The nurse managers were concerned about the possible consequences of recording all of a nurse’s errors in his permanent file. An SH nurse manager noted that the formal documentation of all medication errors could discourage nurses from reporting important, potentially dangerous events:

If you have a file full of verbal warnings because of little bitty incidents that have happened, I think that would be discouraging to them [the nurses] that later they might not want to report something that should actually be reported.
Furthermore, a nurse’s conscientious error reporting could also undermine his professional reputation and misrepresent his history of previous errors, should the nurse be reported to the IRC: Their file goes to [the IRC]. So if there’s other incidents where they’ve made medication errors in their file, that’s reported in the [IRC]. And so they [the IRC] know: Okay, this is their fourth time that they’ve had a medication error; or this is their first.
If the nurse were involved in additional errors, the nurse managers and hospital administrators would use the official file to establish the nurse’s history. Thus, this nurse manager was aware of the long-term implications of formally recording an error.

Analysis

In this section, we examine the managers’ concerns with production pressures, accountability, and learning. The nurse managers’ roles are also considered.
Production Pressures

This exemplar illustrates how nurse managers balanced the demands for stringent rule enforcement and accountability against the production pressures on staff nurses caring for patients. The managers’ concern for production pressures is reflected in their attentiveness to the circumstances under which nurses made errors. Consider the SH nurse manager who explained that nurses might not be able to administer drugs on schedule because ‘‘You can’t get to it and you’ve got to prioritize.’’ When deciding how to respond to a nurse’s error, the managers took into consideration the extent to which work conditions contributed to the error. Thus, if the error occurred due to circumstances beyond the nurse’s control, that fact would influence the manager’s decision whether or not to record it officially.

Accountability

Keeping two sets of records clearly involves accountability issues. But it also raises the question: accountability to whom? Managers seemed almost always to hold a nurse accountable to himself – the managers considered it important to inform nurses about the mistakes they had made. However, the managers developed finely calibrated guidelines for deciding whether a nurse should be held accountable only to the nursing unit’s management team, or also to the hospital, and possibly to the ERC.

Learning

The nurse managers did not explicitly discuss organizational learning; however, they emphasized that it was important to let a nurse know that he had made a mistake, implicitly so that he could learn from his experience. Therefore, an individual nurse could learn from his error regardless of whether it was recorded in the official file. By keeping informal records, the nurse managers not only kept a complete history of an individual nurse’s reported errors, but also could discover problems that were common to other nurses. Indeed, several managers mentioned keeping a running tab in their memory of mistakes that had occurred in the unit.
When they noticed that different nurses made the same mistake, the pattern of mistakes called their attention to the problem, and they began investigating whether there was a system problem. Thus, the ability of the management team to learn from their experience within the patient care unit did not appear to be hampered by keeping two sets of records.
The practice of keeping two sets of records limited how the hospital learned from nursing mistakes. The hospital could learn only from nursing mistakes that were reported to the incident reporting system or that the nurse managers discussed with risk management or other hospital administrators. If the nurse management team kept informal tabs on mistakes within the unit, this information was not available to other nursing units or at the hospital level. Thus, the lack of information could hinder the discovery of emerging error patterns, especially for events that occurred rarely in any one patient care unit but could crop up across the hospital.
Nurse Manager Roles

The investigators observed that the nurse managers fulfilled three key roles. First, the nurse managers developed decision-making guidelines for classifying which events should be handled formally or informally. The guidelines included one set of criteria focused on the patient and a second focused on the nurse and other health care providers. On one hand, the managers assessed the actual and potential harm to the patient and whether the error required that providers give the patient additional treatment. On the other, they considered mitigating circumstances, such as the nurse’s workload and whether the nurse noticed and corrected his own error.

Second, they buffered the nurses from the hospital’s disciplinary procedures. If managers had officially recorded all of a nurse’s benign mistakes, his official file would be full of warnings. If the nurse were later involved in an adverse drug event, in which a patient suffered a preventable injury from a medication error, then this history of warnings might prompt the hospital to take severe disciplinary action.

Third, the nurse managers balanced multiple, and sometimes conflicting, goals espoused by the hospital administrators: to ensure that employees followed hospital and professional procedures, to hold employees accountable for their mistakes, and to refrain from blaming health care providers for system failures. By keeping dual records, nurse managers could comply with these different objectives by addressing them one at a time. To illustrate, they could maintain accountability when they classified events as reportable and warranting a formal disciplinary notice. They could refrain from blame when they informally recorded an error and fostered the nurse’s learning. They could address system failures when they tracked patterns in informal records of circumstance-based errors and thus promoted organizational learning within the patient care unit.
Exemplar 3: The External Review Committee

In the third exemplar, nurse managers responded to pressures from the ERC. Cases of serious patient harm were invariably reported to the ERC. For less harmful mishaps, the managers developed routines for classifying mishaps into reportable or non-reportable categories. If the nurse managers considered an event reportable, they were required to report it to the hospital’s IRC for possible referral to the ERC. When managers classified a significant patient safety-related event into a non-reportable category, they did not report the event to the IRC or ERC. Tamuz and Thomas (2006), in their previous analysis of the ERC, described this situation as ‘‘classifying away dangers.’’ The following discussion draws upon Tamuz and Thomas’s description of the ERC, while adding detail about how nurses responded to it and providing a more extensive analysis.

The ERC regulations specified that hospitals report every time a nurse made a major error or a series of three minor errors; however, they did not provide detailed instructions for implementing the regulations. Nurse management teams could exercise discretion in determining which situations were reportable, as a nurse manager explained:

Because I don’t think there’s anything really clearly spelled out. I mean, there’s not a policy that says, ‘‘If this happens, this is a minor, you must do this; if this happens, it’s a major, you must do this.’’ So it’s kind of left up to you as a leadership team to decide how are you gonna handle these situations and what are you gonna do? (Tamuz & Thomas, 2006, p. 931).
Nurse managers met as a team to discuss how to respond to particular errors. A nurse manager explained, ‘‘Usually we discuss what’s the best way to handle it, depending … I mean, every [incident] is different.’’ A different management team followed a similar routine, as a manager noted: ‘‘Most of the time when we have incidents like this, the managers and I will discuss them.’’ Thus, nurse management teams could decide how to classify and respond to each error report.

The managers not only assessed the circumstances surrounding the event, but also considered the nurses who were involved. ‘‘It depends on the circumstances surrounding it. It depends on the significance of the error. It depends on a lot of things, a lot of factors,’’ a manager noted. A manager in a different hospital described how the team worked together ‘‘to protect this nurse and educate this nurse and never have this happen again.’’ Consider an instance in which a nurse administered the wrong drug intravenously and, after repeated medical interventions, the patient recovered from the error.
The management team gave the nurse a verbal warning, assigned him to complete a study plan, and required that he sign off particular medication orders with another nurse for six months. Thus, they reported the event through the hospital disciplinary system (e.g., verbal warning), but not to the ERC. A manager emphasized, ‘‘Everybody deserves one chance’’ and this nurse ‘‘is a very conscientious person.’’ This example illustrates that the management teams could exercise considerable discretion in deciding how to maintain accountability.

Nurse management teams followed guidelines for classifying events as reportable (or non-reportable) to the ERC; however, the teams developed different classification criteria. To illustrate, NH team guidelines concentrated on nursing process, while SH guidelines were based on system issues. An NH management team did not consider an error reportable to the ERC if the nurse had followed the appropriate process. For example, a manager recounted how a nurse gave a patient the wrong dose of medication and the patient was unharmed:

She had juxtaposed the numbers. This nurse had charted as she went. She looked at this, she printed, she highlighted […] She had gone through all her steps, but she had made a mistake (Tamuz & Thomas, 2006, p. 932).
The nurse manager continued to explain how he queried the nurse to decide whether he was following accepted procedures: ‘‘Tell me what you were thinking. Talk me through your process.’’

The stuff where they [the nurses] have logical process and it came to a bad outcome – they’re comfortable that they will get support. If their process was sound (Tamuz & Thomas, 2006, p. 932).
Thus, if a nurse went through the correct process and it resulted in a benign error, the NH management team classified the event as non-reportable to the ERC.

An SH nurse management team considered errors resulting from system issues as non-reportable to the ERC. However, if the team attributed an error to an individual nurse’s lack of skill or knowledge, then they classified the event as reportable. Therefore, the team investigated events to decide whether they were caused by a nurse’s inadequacy (e.g., a lack of knowledge or skill) or by a system failure (e.g., a pneumatic tube malfunction or understaffing). To decide whether an error could be attributed to system problems, team members asked: ‘‘If any other nurse had been in this situation, would he have made the same mistake?’’ If the team concluded that anybody could do this and it could happen again, then they classified the mistake ‘‘as a system problem’’ and ‘‘not an individual problem.’’ Because the managers
MICHAL TAMUZ ET AL.
concluded that the error resulted from a system problem, they did not consider it reportable to the ERC.

Analysis

The nurse managers' methods of classifying away events relate to issues of production pressures, accountability, and learning. They also illustrate the managers' roles in balancing conflicting objectives.

Production Pressures

The nurse managers mentioned production pressures as a possible cause of system-based errors. For example, an insufficient number of nurses on duty may have contributed to errors. However, the context leads the investigators to speculate that production pressures also might underlie some of the managers' reluctance to report a "conscientious nurse" to the ERC. There was a nationwide nursing shortage in the US, and these nurse managers may have been concerned about retaining their "good nurses."

Accountability

Maintaining accountability is a central theme in this exemplar. It seems like it should be objective and rule-based, but it was subjective in practice – based in part on nurse managers' estimates of who was a "good nurse." Furthermore, nurse managers sought to hold the staff nurses accountable for their mistakes without reporting them to the ERC, unless it was unavoidable. The nurse managers also had to maintain their own personal accountability. As an NH nurse manager emphasized, he was required to report nurses to the ERC and had to carry out his responsibility; ERC reporting was mandatory. However, in another NH unit, the nurse managers rarely classified an event as reportable to the ERC. The staff nurses in this unit usually did not mention the ERC during the research interviews, and when the interviewer directly questioned some nurses about it, they did not express concerns. These examples suggest that by classifying away errors as non-reportable, nurse managers technically could follow the rules and fulfill their own responsibility as managers.
Learning

Classifying away potentially dangerous events did not prevent learning within a nursing unit, but it also did not promote organizational learning. Some nurse managers fostered learning, while others did not. In both NH and SH, even if the nurse managers did not classify the event as an error
Promoting Patient Safety
according to ERC criteria, the data remained available to them. The nurse and the nursing unit could learn from the event and, as SH managers described, change nursing practices within the unit. Although classifying away dangers did not preclude learning within the management team, the relevant information usually was not shared with other nursing units in the hospital, limiting the potential for organizational learning at the hospital level.

Nurse Manager Roles

Nurse managers clearly were buffering the staff nurses from pressures from the ERC. For example, some nurse managers thought it was unrealistic for the ERC regulations to stipulate that every medication error, regardless of potential severity, constituted a minor error, and that a nurse who made three minor errors should be reported to the ERC. If this regulation had been followed precisely, one manager ventured, then he would have had to report almost all the nurses to the ERC. The ERC example also illustrates how nurse managers aimed to achieve multiple hospital objectives. They balanced the hospital goal of ensuring that employees follow hospital procedures against the goal of maintaining a non-punitive environment for health care providers who report their mistakes. The nurse managers also confronted the dual objectives of protecting patient safety while protecting nurses from being blamed for errors that primarily were caused by hospital shortcomings. In addition, the state-level regulators expected nurse managers to meet their obligation to refer a nurse to the ERC if he made three minor errors in a year. To avoid breaking the rules, it appears that the nurse managers reinterpreted how to apply them. They did not fail to report nursing errors; they simply classified these errors in non-reportable categories.
DISCUSSION

Nurse managers are charged with fulfilling multiple, and sometimes conflicting, objectives. They are expected to hold staff nurses accountable for medical errors and also to learn from the nurses' mistakes. These objectives come into conflict when the same data are used as a basis for disciplining nurses and as a source of learning. The conflict is also expressed in the difference between the espoused goal of maintaining a non-punitive environment and the expectation that managers will discipline nurses who repeatedly make mistakes.
Nurse managers grappled with this tension in different ways. The first exemplar focuses on the incident reporting system. It illustrates how some nurse managers sent mixed messages to the staff nurses. Other managers directly and indirectly filtered the events that were reported. In the second and third exemplars, the nurse managers developed classification systems for evaluating each situation and determining whether it was a reportable event. Based on the event classifications, the nurse managers enacted routines for handling the situations. In the second exemplar, managers created idiosyncratic classification schemes to determine whether an event was reportable to the hospital disciplinary system. The classification criteria focused primarily on whether the patient was harmed, whether additional medical intervention was required, and whether the nurse noticed his own mistake. Based on these criteria, managers handled the event formally or informally, resulting in two sets of records. In the third exemplar, nurse management teams constructed classification guidelines to determine whether an event was potentially reportable to the ERC. Events considered non-reportable were handled within the unit; they were not reported to the hospital-level IRC that could determine whether the event warranted reporting to the ERC. When nurse managers classified away potential threats to patient safety as non-reportable, they did not preclude learning within the unit. However, they limited the potential for organizational learning at the hospital level. Together, these exemplars highlight two critical tensions that influence nurse managers' efforts to promote patient safety: (1) applying bureaucratic rule-based standards to control professionals' behavior; and (2) using the same information as a basis for both maintaining accountability and learning.
Bureaucratic Rules and Professional Tasks

Relatively little attention has been focused on the consequences of, and conflicts inherent in, using bureaucratic rule enforcement to guide and control professional nursing behavior. The lack of attention is especially puzzling given the extensive social science literature on this topic (e.g., Blau & Scott, 1962; Hall, 1968; Scott, 1965, 1969). Researchers have long recognized the consequences of subjecting professionals to bureaucratic controls. Furthermore, similar issues were addressed in classic studies of alternative means of assessing employee performance in organizations, such as monitoring performance by rule compliance, processes, or outcome measures (e.g., Scott, 1977).
This raises research questions related to both theory and practice. Perhaps some of the difficulties of determining "what is an error" should not be attributed to disagreements over semantics and standards, but rather to the inappropriate use of rules to guide professional activities. Medication administration, while seemingly straightforward, depends on multiple contingencies – such as patient condition, type of medication, and triaging needs (i.e., determining priorities among multiple critically ill patients). Perhaps the ineffectiveness of using simple rules to guide contingent nursing tasks can help explain why nurses redefine medication errors based on the situations in which they occurred (Baker, 1997), and why the nurse managers in this study generated their own criteria for classifying reportable incidents. Furthermore, it raises questions about specifying the conditions under which nurse managers protect nurses by reclassifying errors. Practice-related research questions might include: Are there alternative methods of guiding and monitoring nursing behavior – in particular, in the administration of medications? Is it time to take a fresh look at the traditional "five rights" of medication administration – and not just by adding a sixth (Wilson & DiVito-Thomas, 2004)? In this study, we focus on registered nurses in critical care units of tertiary care hospitals. However, nursing researchers could explore whether there should be gradated methods of guiding nursing practice, depending on nurses' levels of education, training, and experience.
Data for Maintaining Accountability and Learning

The three exemplars illustrate some of the difficulties that occur when organizations use the same error data both for disciplining employee behavior and for learning how to improve the organization's performance (Tamuz, 2001). As organizational researchers have long known (e.g., Lawler & Rhode, 1976), organizations give individual employees disincentives to reveal information about errors when information from their self-disclosures can be used to judge their performance. The disincentives for self-reporting errors and for disclosing co-workers' errors are pervasive in organizations and can partially account for the widespread underreporting of patient safety-related incidents in hospitals (e.g., Kohn et al., 2000). If a hospital uses error data for disciplining employees, the interviews suggest that there might be a complicated set of countervailing pushes and
pulls that can alternately foster and hinder error reporting. For example, if another health care provider or the patient knows about an error, a nurse might report the error because failure to report it could damage his professional reputation, given the strong professional norms for taking individual responsibility for mistakes (e.g., Witt, 2007). Alternatively, nurses may be reluctant to reveal their errors for fear of being reported to the ERC. However, when nurse managers buffer staff nurses from the ERC, such concerns may be less salient. Furthermore, nurse managers may also contribute to underreporting by classifying away errors into non-reportable categories (Tamuz, Thomas, & Franchois, 2004). The ERC exemplar also contributes to the discussion on how nurse managers influence error reporting by staff nurses (Edmondson, 1996, 1999; Vogus & Sutcliffe, 2007). It suggests that managers may build trust, in part, by buffering the staff nurses from a punitive regulatory environment, and thus foster the nurses' disclosure of errors to their direct supervisors. We propose a conceptual framework that reflects the influence of these fundamental tensions on event classification and how the nurse managers chose to respond to them. The tension between maintaining accountability and promoting learning intensifies when the same data are used not only as a basis for learning, but also as an indicator for assessing the performance of individuals. Nurse managers delivered mixed messages when the espoused objectives conflicted with actual behavior. The dual use of data for disciplinary and learning purposes generated disincentives for nurses to report errors and incentives for nurse managers to carefully classify events into reportable and non-reportable categories.
Furthermore, based on these exemplars, we posit an interaction effect in nursing because the bureaucratic rules used to assess performance are often ill suited to reflect the professional judgment that nurses use to assess complex patients, prioritize their care, and cope with unexpected workplace conditions. Thus, it is difficult to classify reportable events simply as deviations from procedures, without specifying the context in which the procedures were performed. Hence, these two fundamental tensions can interact to influence event classification. The classification of events, in turn, influences the choice of organizational response routines. The choice of routines can influence the information about errors that managers report to the hospital and thus, affect the hospital’s ability to learn from its experience. To illustrate, in exemplar two, the classification of events as reportable or non-reportable to the hospital disciplinary system resulted in formal or informal recordkeeping responses. In exemplar three, the classification of errors as
reportable or non-reportable to the ERC resulted respectively in hospital-level or unit-level responses. When managers classified errors as non-reportable, they reduced the error data available to the hospital and thus constrained the capacity for organizational learning at the hospital level. Further research could explore how the organizational capacity to learn from errors is shaped by these two underlying tensions and the strategies that managers developed for coping with them.
Practical Applications

The study results underscore the widely accepted (and often ignored) recommendation that data from voluntary reporting systems should not be used as a proxy for the "true" event rate. Error and adverse event rates derived from these systems are biased and cannot be used to measure progress in safety or to compare hospitals or clinical areas (Thomas & Petersen, 2003; Pronovost, Berenholtz, & Needham, 2007). For instance, the study results document why and how nurse managers influence whether an event is reported; these influences are unrelated to the true rate of events. Using event data as safety indicators to compare units and hospitals compounds the disincentives for disclosure and confounds the hospital's ability to learn from experience. The study underscores that even when hospitals state that they want to learn, it is difficult to foster open reporting. The results might also encourage hospital leaders to re-examine how they apply bureaucratic rules and standards to nursing care, and to try to dissociate disciplinary and learning processes.
STUDY LIMITATIONS

This is an exploratory study, limited by its sample and study design. The study is based on interviews with 20 nurse managers from four critical care units in three tertiary care hospitals. We do not claim to generalize from this small sample; rather, through rich description, we attempt to gain insights into the roles of nurse managers in promoting patient safety by monitoring nursing errors. We propose a conceptual framework that can be examined through further research. Furthermore, we present three exemplars that are related and not necessarily mutually exclusive. They reflect the nurse managers' experience with multiple, partially overlapping systems for monitoring medication
errors. Because we inferred the nurse managers' decision-making processes from the examples they provided, we cannot be certain that the event classification criteria described in the first exemplar, for instance, refer exclusively to incident reports.
CONCLUSION

We present three exemplars of how nurse managers met external demands for accountability while buffering the staff nurses from these pressures. First, they influenced staff nurses' reporting to the hospital's voluntary incident reporting system by inadvertently sending mixed messages to the nurses about the consequences of incident reporting. Nurse managers also indirectly and directly filtered the incidents that were reported. Second, nurse managers kept two sets of error records. Third, they classified potentially harmful events into non-reportable categories. In practice, these methods enabled nurse managers to hold nurses accountable for their performance. However, these methods tended to reduce reporting to the hospital and the ERC, and thus reduced the information available to the hospital for learning from its mistakes. By examining error monitoring from the point of view of nurse managers, we explore how the nurse managers juggled multiple objectives. On the one hand, they pursued the objective of ensuring that nurses followed hospital procedures, and they held nurses accountable if the nurses inadvertently varied from the procedures. On the other, the nurse managers sought to protect the staff nurses from inappropriately harsh discipline. The nurse managers also grappled with applying rule-based performance measures that were ill suited to assessing the complexities of tasks requiring professional judgment. This exploratory study raises research questions regarding theory and practice. Further research on these questions might eventually enable hospitals to maintain compliance with their procedures while learning from their mistakes.
REFERENCES

Ashmos, D. P., Huonker, J. W., & McDaniel, R. R., Jr. (1998). Participation as a complicating mechanism: The effect of clinical professional and middle manager participation on hospital performance. Health Care Management Review, 23(4), 7–20.
Aspden, P., Wolcott, J. A., Bootman, J. L., & Cronenwett, L. R. (Eds.). (2007). Preventing medication errors. Washington, DC: The National Academies Press. Bagian, J. P., Gosbee, J., Lee, C. Z., Williams, L., McKnight, S. D., & Mannos, D. M. (2002). The Veterans Affairs root cause analysis system in action. Joint Commission Journal on Quality Improvement, 28(10), 531–545. Baker, H. M. (1997). Rules outside the rules for administration of medication: A study in New South Wales, Australia. IMAGE: Journal of Nursing Scholarship, 29(2), 155–158. Balogun, J. (2006). Managing change: Steering a course between intended strategies and unanticipated outcomes. Long Range Planning, 39(1), 29–49. Bates, D. W., Cullen, D. J., Laird, N., Petersen, L. A., Small, S. D., Servi, D., Laffel, G., Sweitzer, B. J., Shea, B. F., & Hallisey, R. (1995). Incidence of adverse drug events and potential adverse drug events: Implications for prevention. ADE Prevention Study Group. JAMA, 274(1), 29–34. Blau, P. M., & Scott, W. R. (1962). Formal organizations. San Francisco: Chandler. Brewer, A. M., & Lok, P. (1995). Managerial strategy and nursing commitment in Australian hospitals. Journal of Advanced Nursing, 21(4), 789–799. Callum, J. L., Kaplan, H. S., Merkley, L. L., Pinkerton, P. H., Rabin Fastman, B., Romans, R. A., Coovadia, A. S., & Reis, M. D. (2001). Reporting of near-miss events for transfusion medicine: Improving transfusion safety. Transfusion, 41, 1204–1211. Carney, M. (2004). Perceptions of professional clinicians and non-clinicians on their involvement in strategic planning in health care management: Implications for interdisciplinary involvement. Nursing & Health Sciences, 6(4), 321–328. Cook, R. I., & Woods, D. D. (1994). Operating at the sharp end: The complexity of human error. In: M. S. Bogner (Ed.), Human error in medicine. Hillsdale, NJ: Lawrence Erlbaum. Edelman, L. B. (1990).
Legal environments and organizational governance: The expansion of due process in the American workplace. American Journal of Sociology, 95(6), 1401–1440. Edmondson, A. C. (1996). Learning from mistakes is easier said than done: Group and organizational influences on the detection and correction of human error. Journal of Applied Behavioral Science, 32, 5–32. Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(4), 350–383. Edmondson, A. C. (2004). Learning from failure in health care: Frequent opportunities, pervasive barriers. Quality and Safety in Health Care, 13, ii3–ii9. Fenton-O'Creevy, M. (1998). Employee involvement and the middle manager: Evidence from a survey of organizations. Journal of Organizational Behavior, 19(1), 67–84. Green, J., & Thorogood, N. (2004). Qualitative methods for health research. Thousand Oaks, CA: Sage Publications. Hall, R. H. (1968). Professionalization and bureaucratization. American Sociological Review, 33, 92–104. Kanter, R. M. (1982). The middle manager as innovator. Harvard Business Review, 60(4), 95–105. Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (Eds.). (2000). To err is human: Building a safer health system. Washington, DC: National Academy Press, Institute of Medicine. Lawler, E., & Rhode, J. (1976). Information and control in organizations. Santa Monica: Goodyear. Levitt, B., & March, J. G. (1988). Organizational learning. Annual Review of Sociology, 14, 319–340.
Lipshitz, R., & Popper, M. (2000). Organizational learning in a hospital. Journal of Applied Behavioral Science, 36(3), 345–361. Mair, J. (2005). Exploring the determinants of unit performance: The role of middle managers in stimulating profit growth. Group and Organization Management, 30(3), 263–288. March, J. G. (1999). The pursuit of organizational intelligence. Malden, MA: Blackwell. Marx, D. (2001). Patient safety and the "just culture": A primer for health care executives. New York, NY: Columbia University. Available at: http://www.mers-tm.net/support/Marx_Primer.pdf Marx, D. (2003). How building a 'just culture' helps an organization learn from errors. OR Manager, 19(5), 1, 14–15, 20. Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage Publications. Osborne, J., Blais, K., & Hayes, J. S. (1999). Nurses' perceptions: When is it a medication error? Journal of Nursing Administration, 29(4), 33–38. Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications. Pronovost, P. J., Berenholtz, S. M., & Needham, D. M. (2007). A framework for health care organizations to develop and evaluate a safety scorecard. JAMA, 298, 2063–2065. Rousseau, D. M., & Fried, Y. (2001). Location, location, location: Contextualizing organizational research. Journal of Organizational Behavior, 22, 1–13. Scott, W. R. (1965). Reactions to supervision in a heteronomous professional organization. Administrative Science Quarterly, 10, 65–81. Scott, W. R. (1969). Professional employees in a bureaucratic structure: Social work. In: A. Etzioni (Ed.), The semi-professions and their organization (pp. 82–144). New York: Free Press. Scott, W. R. (1977). Effectiveness of organizational effectiveness studies. In: P. S. Goodman & J. M. Pennings (Eds.), New perspectives on organizational effectiveness (pp. 63–95). San Francisco: Jossey-Bass. Sitkin, S. B. (1992).
Learning through failure: The strategy of small losses. Research in Organizational Behavior, 14, 231–266. Tamuz, M. (2001). Learning disabilities for regulators: The perils of organizational learning in the air transportation industry. Administration and Society, 33(3), 276–302. Tamuz, M., & Thomas, E. J. (2006). Classifying and interpreting threats to patient safety in hospitals: Insights from aviation. Journal of Organizational Behavior, 27, 919–940. Tamuz, M., Thomas, E. J., & Franchois, K. (2004). Defining and classifying medical error: Lessons for reporting systems. Quality and Safety in Health Care, 13, 13–20. Thomas, E. J., & Petersen, L. A. (2003). Measuring errors and adverse events in healthcare. Journal of General Internal Medicine, 18, 61–67. Thomas, E. J., Studdert, D. M., Burstin, H. R., Orav, E. J., Zeena, T., Williams, E. J., Howard, K. M., Weiler, P. C., & Brennan, T. A. (2000). Incidence and types of adverse events and negligent care in Utah and Colorado in 1992. Medical Care, 38(3), 261–271. Throckmorton, T., & Etchegaray, J. (2007). Factors affecting incident reporting by registered nurses: The relationship of perceptions of the environment for reporting errors, knowledge of the Nursing Practice Act, and demographics on intent to report errors. Journal of PeriAnesthesia Nursing, 22(6), 400–412.
Tucker, A., & Edmondson, A. (2003). Why hospitals don’t learn from failures: Organizational and psychological dynamics that inhibit system change. California Management Review, 45(2), 55–72. Tucker, A. L., Edmondson, A. C., & Spear, S. J. (2002). When problem solving prevents organizational learning. Journal of Organizational Change Management, 15(2), 122–137. Vogus, T. J., & Sutcliffe, K. M. (2007). The impact of safety organizing, trusted leadership, and care pathways on reported medication errors in hospital nursing units. Medical Care, 45(10), 997–1002. Wakefield, D. S., Wakefield, B. J., Uden-Holman, T., Borders, T., Blegen, M., & Vaughn, T. (1999). Understanding why medication administration errors may not be reported. American Journal of Medical Quality, 14(2), 81–88. Walker, S. B., & Lowe, M. J. (1998). Nurses’ views on reporting medication incidents. International Journal of Nursing Practice, 4(2), 97–102. Weick, K. E. (2007). The generative properties of richness. Academy of Management Journal, 50(1), 14–19. Weick, K. E., & Sutcliffe, K. M. (2003). Hospitals as cultures of entrapment: A re-analysis of the Bristol Royal Infirmary. California Management Review, 45(2), 73–84. Wilson, D., & DiVito-Thomas, P. (2004). The sixth right of medication administration: Right response. Nurse Educator, 29(4), 131–132. Witt, C. L. (2007). Holding ourselves accountable. Advances in Neonatal Care, 7(2), 57–58. Wolf, Z. R., Serembus, J. F., Smetzer, J., Cohen, H., & Cohen, M. (2000). Responses and concerns of healthcare providers to medication errors. Clinical Nurse Specialist, 14(6), 278–287.
QUALITY AND PATIENT SAFETY FROM THE TOP: A CASE STUDY OF ST. FRANCIS MEDICAL CENTER GOVERNING BOARD'S CALL TO ACTION

Louis Rubino and Marsha Chan

ABSTRACT

The Institute for Healthcare Improvement (IHI) has broadened its campaign focus to include protecting hospital patients from five million incidents of medical harm through 2008. A critical component of this campaign is the engagement of governance in the process, noting evidence of better patient outcomes for hospitals whose governing boards spend at least 25% of their time on quality and safety. St. Francis Medical Center (SFMC), a 384-bed hospital in Southeast Los Angeles serving a high number of uninsured and underinsured patients and a population characterized by significant poverty, has initiated, through a top-down approach, an aggressive plan to improve the care at its facilities through a call to action by its board of directors. In this article, innovative methods are shared, tools are provided, and the initial positive results achieved are reported, showing how a cultural change is occurring regarding
Patient Safety and Health Care Management Advances in Health Care Management, Volume 7, 103–126 Copyright © 2008 by Emerald Group Publishing Limited All rights of reproduction in any form reserved ISSN: 1474-8231/doi:10.1016/S1474-8231(08)07005-5
LOUIS RUBINO AND MARSHA CHAN
quality and patient safety (QPS) at this hospital’s organizational and delivery system level.
A CULTURE OF QUALITY AND PATIENT SAFETY

The United States health care system must improve the quality of care. A recent RAND national study found quality deficits in acute as well as preventive and chronic levels of care (Goldman & McGlynn, 2005). Hospital governing boards should see quality of care as one of their top priorities. With the recent oversight emphasis on financial management and managerial ethics, prompted by the issues at companies such as Enron and WorldCom and at health care organizations such as Columbia/HCA and Tenet Healthcare, governance attention has been diverted from core operations. Dr. Donald Berwick, President and CEO of the Institute for Healthcare Improvement (IHI), stated in a recent roundtable interview, "The majority of American hospitals or health systems still haven't centered on improvement as a strategy. We have serious governance gaps. The board of trustees are not yet feeling the stewardship of quality of care as a fundamental board duty …" (Molpus et al., 2006, p. 1). While not synonymous, quality health care and patient safety go hand in hand. Without a commitment to quality, an organization would struggle to implement the proper patient safety practices. Similarly, without a concerted patient safety effort, care cannot be of the highest quality. Within the last decade, agencies have heralded the importance of both quality and patient safety (QPS) (see Table 1). Hospital governing bodies have quality oversight as one of their core responsibilities (Griffith & White, 2006). Much is discussed at board meetings about QPS, but sometimes the conversations rest on retrospective reviews of problematic peer-reviewed cases, malpractice claims, incident reports, and root cause analyses of sentinel events. In fact, many experts cite the "passive" role governing bodies play in overseeing quality of patient care (National Quality Forum, 2005).
Lister (2003) states that board members are unsure how their board can effectively review and monitor QPS data. CEOs also voice frustration that their boards "are supportive but passive, don't really understand clinical information, get mired in the details and can't see the big picture" (Bader, 2006, p. 64). Even with the best of systems and tools in place, one challenge trumps all others: changing the culture of the organization. This is finally being
Table 1. Chronology of Quality and Patient Safety Initiatives.

1991 – Institute for Healthcare Improvement founded. Initiative: Improving healthcare worldwide. Finding: Aim for no needless deaths, no needless pain/suffering, no helplessness in those served/serving, no unwanted waiting, no waste. Leadership's call: To collaborate, set the vision, motivate, innovate, and drive change to get results.

1999 – Institute of Medicine. Initiative: To err is human. Finding: 98,000 people die in U.S. hospitals due to preventable medical error. Leadership's call: Create safety as a system property.

2000 – American Hospital Association. Initiative: Committee on governance. Finding: Need for direct quality measurement and error reporting. Leadership's call: "Approach the door, Open the door, Ask the right questions."

2000 – Leapfrog Group. Initiative: Getting health care right. Finding: Mobilizing health purchasers can reduce errors, save lives, and save costs. Leadership's call: Reduce preventable medical mistakes, quality, and affordability through initiating "leaps."

2001 – Institute of Medicine. Initiative: Crossing the quality chasm. Finding: Six aims: safe, effective, patient-centered, timely, efficient, and equitable. Leadership's call: Understand importance of system and processes.

2002 – Hospital Quality Alliance. Initiative: Public accessibility. Finding: Endorsed national voluntary consensus standards. Leadership's call: Adopt standards established by experts.

2002 – National Quality Forum. Initiative: Serious reportable events. Finding: Need to publicly report occurrence of 27 "never" events. Leadership's call: Coordinate event reporting for public accountability.

2003 – JCAHO*. Initiative: Leadership standards for patient safety. Finding: Need more strategic direction by "leaders of the organization." Leadership's call: Ensure implementation of integrated patient safety program.

2003 – JCAHO*. Initiative: National patient safety goals. Finding: Patient identification, communication, surgery, medication use, critical alarms, and infusion pumps. Leadership's call: Align safety improvement efforts with others in health care industry.

2003 – Centers for Medicare and Medicaid Services. Initiative: Medicare pay for performance. Finding: Improve quality of care through financial rewards and transparency of performance data. Leadership's call: Focus on improvements to clinical delivery systems to improve patient outcomes and reimbursement.

2004 – Institute for Healthcare Improvement. Initiative: 100,000 Lives Campaign. Finding: Adoption of six strategies to reduce preventable deaths in U.S. hospitals. Leadership's call: Commitment to implement strategies and report mortality data.

2004 – National Quality Forum. Initiative: Call to responsibility. Finding: Outlined role of the hospital trustee in quality improvement. Leadership's call: Develop quality literacy, oversee participation and performance.

2005 – JCAHO*. Initiative: International patient safety center. Finding: Mission to continuously improve patient safety in all health care settings. Leadership's call: Continuous learning and focus on patient safety.

2006 – Institute for Healthcare Improvement. Initiative: Protecting five million lives from harm. Finding: Board's active involvement will lead to improvements (getting boards on board). Leadership's call: Spend at least 25% of meeting time on quality and safety issues.

Sources: Adapted from Schyve (2003), DeLashmutt et al. (2003), www.leapfroggroup.org, www.qualityforum.org, www.cms.hhs.gov, www.IHI.org, www.JointCommission.org.
*From 1988–2006, the Joint Commission was known as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO).
recognized by today's hospitals (Bader & O'Malley, 2006). Hospital leadership can only be successful in improving quality when it is truly committed to a culture of patient safety. As this is not often the case, all trustees should declare that health care quality and safety are not yet at the level they should be.
LEADERSHIP RESPONSIBILITY FOR CULTURE CHANGE

Governing boards might feel vulnerable, though, if they make this declaration. Yet they must acknowledge the need to improve QPS if they are determined to change the culture of their hospitals. A veil of secrecy is often drawn over poor quality, and a sense of protectionism may be displayed not only for the hospital but also for specific physicians, other clinicians, and even some clinical departments. Maintaining compliance with regulatory requirements, rather than striving for the intrinsic and real rewards stemming from improved quality of care, is frequently found to drive hospitals' patient safety initiatives (Devers, Hoangmai, & Liu, 2004). A recent study by the Agency for Healthcare Research and Quality (AHRQ) demonstrated that improvements to patient safety have a positive financial benefit for hospitals based on their Medicare payment history (Zhan, Friedman, Mosso, & Pronovost, 2006). Taking a proactive approach to addressing quality issues is still the exception rather than the norm. Hospitals should recognize that effective quality improvement efforts will yield benefits beyond financial upswings, positively affecting the overall performance of the institution.

Little research has been conducted regarding the role of hospital governance in quality efforts (Joshi & Hines, 2006). There is some evidence that hospital governing boards adopting a proactive philosophy and actively engaging in governance work are more likely to perform well overall, using the Solucient Center for Health Improvement's 100 top hospitals for correlation (Prybil, 2006). Descriptive studies of hospital governing boards acting specifically on patient quality and safety issues are needed as best practice models. One hospital governing board taking this approach through active engagement in QPS improvement is that of St. Francis Medical Center (SFMC), a 384-bed acute care facility in Lynwood, California.
This nonprofit hospital serves a high number of patients who lack health insurance and access to primary health care
services, and thus faces financial constraints similar to those of many other American hospitals (Reinhardt, 2000). Yet, guided by its value of compassionate service, this health care ministry's mission to serve the poor and sick is led by a governing body that states QPS is its number one priority.
INSTITUTE FOR HEALTHCARE IMPROVEMENT

Five years after the Institute of Medicine's (IOM) report "To Err is Human" (IOM, 2000), there is greater acknowledgment of patient safety issues, but few infrastructure changes have been implemented to measurably decrease medical errors (Wachter, 2004). One organization that has mobilized health care leaders to address the problems identified in the IOM's landmark report is the IHI. Over 3,000 hospitals adopted the IHI's initial six strategies to reduce needless deaths as part of its "100,000 Lives Campaign," but many still waver and are not participating (Gosfield & Reinertsen, 2005). This prompted the IHI to launch a new campaign, "Protecting 5 Million Lives from Harm." The campaign calls for six additional interventions intended to advance patient safety, one specific to leadership.

The IHI recognizes the key role governance plays in advancing a hospital's agenda for enhanced QPS by challenging boards to set and prioritize goals for error reduction and to monitor the progress (Robeznieks, 2006). The IHI recommends hospital boards "get on board" by defining and initiating new processes for hospital board directors so that they can become more effective in accelerating the improvement of care at their facilities (www.IHI.org/ihi/programs/campaign). A Framework for Leadership of Improvement has been presented by the IHI and provides direction through its steps to (1) set direction; (2) establish the foundation; (3) build will; (4) generate ideas; and (5) execute change, as illustrated in Fig. 1 (IHI, 2006a, 2006b). Although the quality of an organization's core business should be of primary concern, a recent article citing the top 10 nonprofit governance trends did not address quality oversight (Peregrine, 2007). The IHI's advocacy for hospitals to recognize the need for change beyond clinical interventions does not come without support.
There is evidence that board involvement can lead to improvements in care quality. Outcomes are better in hospitals where (1) the board spends more than 25% of its time on quality and safety; (2) the board receives a formal quality measurement report; (3) the board bases senior executives' compensation in part on quality improvement performance; and (4) there is a high level of interaction between the board and the medical staff on quality strategy (Vaughn et al., 2006). Through IHI's call for hospital boards' involvement, six specific strategies (see Table 2) were published for boards to adopt in an effort to improve QPS (www.IHI.org).

Fig. 1. IHI's Framework for Leadership of Improvement (Reproduced with Permission from IHI).
1. Set Direction: Mission, Vision and Strategy (PUSH: make the status quo uncomfortable; PULL: make the future attractive)
2. Establish the Foundation: Prepare Personally; Choose and Align the Senior Team; Build Relationships; Develop Future Leaders; Reframe Operating Values; Build Improvement Capability
3. Build Will: Plan for Improvement; Set Aims/Allocate Resources; Measure System Performance; Provide Encouragement; Make Financial Linkages; Learn Subject Matter
4. Generate Ideas: Understand Organization as a System; Read and Scan Widely, Learning from Other Industries & Disciplines; Benchmark to Find Ideas; Listen to Patients; Invest in Research & Development; Manage Knowledge
5. Execute Change: Use Model for Improvement for Design and Redesign; Review and Guide Key Initiatives; Spread Ideas; Communicate Results; Sustain Improved Levels of Performance

Table 2. IHI's Six Strategies for a Hospital Board to Improve Quality and Patient Safety (from www.IHI.org).
- Set aims: Make an explicit, public commitment to measurable improvement
- Seek data and personal stories: Audit at least 20 randomly chosen patient charts for all types and levels of injury, and conduct a "deep dive" investigation of one major incident, including interviewing the affected patient, family, and staff
- Establish and monitor system-level measures: Track organization-wide progress by installing and overseeing crucial system-level metrics of clinical quality, such as medical harm per 1,000 patient days or risk-adjusted mortality rates over time
- Change the environment, policies, and culture: Require respect, communication, disclosure, transparency, resolution, and all the elements of an organization fully committed to quality and safety
- Encourage learning, starting with yourself: Identify the capabilities and achievements of the best hospital boards and apply that standard to yourself and all staff
- Establish accountability: Set the agenda for improvement by linking executive performance and compensation

The IHI took the unusual step of instructing boards to gather data on sentinel events and actually learn the stories by putting a "human face" on the data. Boards are encouraged to set a goal of interacting with at least one patient (or family member of a patient) who sustained injury from a sentinel event at their institution within the last year. This strategy proved effective, as shown in the PBS documentary "Remaking American Medicine: Health Care for the 21st Century," the story of Mrs. Sorrel King, whose 18-month-old daughter's death as a result of medical errors sparked a partnership with Johns Hopkins Medical Center to improve the quality of care and safety for other patients (Public Broadcasting Service, 2006). The IHI recognizes the unique role that hospital governing boards and senior leaders have in patient safety, citing this role as critical and nondelegable (Botwinick, Bisognano, & Haraden, 2006). This leadership role is to establish the value system in the organization; set strategic goals for activities to be undertaken; align efforts within the organization to achieve these goals; provide resources for the creation, spread, and sustainability of effective systems; remove obstacles to improvements for clinicians and staff; and require adherence to known practices that will promote patient safety.
The IHI believes the culture of the organization will begin to change as these processes focus on what happened rather than on who is to blame.
BARRIERS TO EFFECTIVE BOARD QUALITY OVERSIGHT

Many have called for leadership to place QPS at the forefront of the hospital's operations (Spath, 2002; Mohr, Abelson, & Barach, 2002; Gautam, 2005), yet this request remains unfulfilled. There are numerous reasons for this situation.

First, a board may be comprised of any combination of hospital managers, community members, and practitioners. Some board members may not possess the health care knowledge needed to fully understand the QPS matters discussed at the board level. A hospital governing board comprised of many non-clinical members may not be attuned to these matters in the same way as a committee of only practicing professionals. Therefore, non-clinical board members may not completely engage in deliberations as the board examines important QPS issues. It is not necessarily indifference about quality matters but rather a concern that diminishes the farther one is removed from the actual situation. As a result, leadership may be less aware of patient safety issues than the front-line workers (Singer, Gaba, Geppert, Sinaiko, & Park, 2003). As one goes further up the organizational chart, individuals tend to have a higher perception of the organization's quality; for example, boards perceive quality at their hospitals to be better than their CEOs do (Sandrick, 2007). This blindness may easily affect the priority level the governing board places on quality and safety measures.

At times, a governing body delegates its quality oversight responsibility to the medical staff, acknowledging that the medical staff has the clinical expertise for medical care evaluation. However, the perceived disincentives in peer review and the excessive demands on physicians' time may impede their leading a true quality improvement effort (Marren, 2004).
Research has demonstrated that active staff physician involvement in governance can have a significant positive effect on board activity in quality improvement (Weiner, Alexander, & Shortell, 1996). Active physicians are therefore a good addition to boards to push the quality agenda, but the board should not rely solely on them for overseeing the quality at their facilities.
When physicians are involved, care must be taken to ensure the appropriate information is transferred for the board to make effective decisions regarding its quality oversight. Yet communication between administration and medical staff is often poor, preventing effective information flow. Measures by management are needed to ensure the correct information flow is occurring. One way to create this concerted effort is by establishing a QPS subcommittee of the board that has active physician membership.

Finally, organizations may not have the infrastructure in place to fully assess and evaluate QPS. This takes a commitment by leadership to allocate the appropriate resources in personnel, systems, and information technology to collect, display, and analyze the needed data in an easy-to-understand manner.

The size or type of the hospital should not be an issue when initiating a culture change. Large systems have embraced a commitment to quality and safety, as demonstrated by the best practices of the U.S. Department of Veterans Affairs (VA). Leaders at the VA made a public commitment to address its past system failures and to establish a culture of safety (Beyea, 2002). The VA developed patient safety centers of inquiry that each research a different aspect of safety and identify problems in patient care processes. The VA has provided monetary and recognition awards to employees who design and implement solutions to patient safety issues. In addition, the VA has incorporated patient safety into its performance measurement system, instituted mandatory and voluntary reporting systems, and implemented specific safety programs throughout the system.
BALANCED SCORECARD APPROACH

To apply the IHI-suggested practice of board engagement in QPS, a hospital framework needs to be selected to assess the care being provided and to monitor its improvement. The Malcolm Baldrige Criteria (Fisher, 1996) and Six Sigma (Christianson, Warrick, Howard, & Vollum, 2005) are two modern approaches hospitals use to bolster their quality programs and to ensure alignment with their strategic goals. Spath (2007a) recognizes that measurement has advanced the quality of patient care. She further expresses the need for senior leaders to actively support an ever-evolving structure that can tame the "measurement monster" with sufficient flexibility to meet current and future requirements.
One popular method used to summarize volumes of quality data into a well-organized format and highlight important focus areas is the balanced scorecard, or dashboard. Nebraska Medical Center in Omaha supported the use of balanced scorecards, indicating that the approach allows the board to be more effective by holding management more accountable for outcomes: "Using the balanced scorecard forced us to focus on what we were doing and reprioritize what areas of the hospital we should be focusing on" (Meyers, 2004, p. 14).

Introduced by Kaplan and Norton (1992), the balanced scorecard approach is a rational planning model that can guide a hospital board to improve patient safety and quality. Experience from other industries shows that the balanced scorecard can be an effective method to drive the process of change (Kaplan & Norton, 1993). High-level leaders can draw on the report to direct initiatives for improvement from the top down. Recent research asserts this high-level engagement trickles down to front-line staff and cements care quality as an organization-wide priority (Advisory Board, 2006). This does not mean that board members micro-manage the process. One research study of 562 hospitals suggested that boards in top-performing hospitals take explicit actions that are significantly correlated with high organization-wide performance measures (Lockee, 2006). These hospitals work with medical staff and executives to set standards, using traditional board oversight powers to achieve goals rather than involving themselves in implementation and execution. This is the key approach taken by SFMC.
ST. FRANCIS MEDICAL CENTER: A CASE STUDY

The Daughters of Charity Health System (DCHS) is a regional health care system of six hospitals and medical centers spanning the California coast from the Bay Area to Los Angeles. The health care ministry exists to support the Mission of its sponsors, the Daughters of Charity of St. Vincent de Paul, by providing holistic, spiritually centered care to the sick and the poor. DCHS was formed on January 1, 2002. Facilities within DCHS include O'Connor Hospital, San Jose; Seton Medical Center, Daly City; Seton Coastside, Moss Beach; SFMC, Lynwood; Saint Louise Regional Hospital, Gilroy; and St. Vincent Medical Center, Los Angeles.

Recognizing the need to formulate a system-wide strategy to ensure the delivery of the highest-quality and safest patient care at each of its six facilities, the DCHS Board appointed a Quality Task Force charged with developing a QPS strategy. The strategy, driven by the DCHS Mission, Vision, and Values, was detailed in the QPS Statement (Fig. 2). From the strategy, the Task Force developed a series of recommendations adopted in 2004 by the DCHS Board of Directors. This governing body called for implementation of the recommendations over the subsequent two years, including the creation of a board committee dedicated to QPS endeavors at each of the hospitals. In a recent extensive survey conducted by the Health Research and Educational Trust (HRET), only 52% of hospital governing boards had a quality improvement committee, demonstrating that this focused subgroup has not yet reached its full potential (Margolin, Hawkins, Alexander, & Prybil, 2006). In 2006, the SFMC QPS Committee of the Board of Directors was formed.

Fig. 2. DCHS Quality and Patient Safety Statement (2004):

The Daughters of Charity Health System quality and patient safety efforts are guided by our Mission and Values to serve the sick and poor and emphasize our commitment to "comprehensive, excellent health care that is compassionate and attentive to the whole person: body, mind and spirit". We live this commitment by striving to provide:
- Caring and compassionate service, respectfully and ethically delivered by competent individuals
- Error-free processes, systems and technologies and a state-of-the-art environment that is safe
- Patient outcomes that are consistently excellent and validated by objective, national comparative data

Guided by the DCHS Value of "Inventiveness to Infinity", the quality and patient safety efforts are system-wide and enhance the local health ministries (LHM) and our system's evolving clinical interdependence through:
- Establishing system-wide priorities for excellence that are jointly pursued by each LHM
- Trending and comparing hospital data and outcomes (unblended)
- Providing clinicians with opportunities for sharing expertise and resources across the system
- Sharing organizational and operational knowledge and resources (i.e. replicating best practices)
- Embracing a common performance improvement model (PDCA) across all LHM
- Learning from experiences, practices and outcomes, both positive and negative, in an environment committed to ongoing excellence and individual and corporate responsibility and accountability, and
- Celebrating our success together as a unified system
BOARD QUALITY AND PATIENT SAFETY LITERACY

One of the first commitments of the Board's Quality and Patient Safety Committee was to self-educate about key quality and patient safety
initiatives, metrics, the medical staff credentialing and re-appointment process, and patient satisfaction. The members recognized the need to identify best practices, including the value of learning about strategies that have been successful elsewhere. Seeking to identify and adopt best practices, representatives of the board and the QPS Committee visited St. Mary's Hospital and Medical Center in Grand Junction, Colorado, a recognized top performer in the Hospital Quality Incentive (HQI) Demonstration Project. This delegation of highest-level leadership met with St. Mary's executive team, key medical staff leaders, and nursing and quality leaders. The St. Mary's team provided an overview of their medical staff credentialing and peer review process, the medical staff and quality infrastructure, and their approach to clinical quality improvement, including patient satisfaction. The St. Mary's approaches to the HQI Demonstration Project and the IHI's 100,000 Lives Campaign were presented, and the St. Francis representatives attended a surgical care infection prevention (SCIP) team meeting and interacted with rapid response team leaders and clinical unit directors throughout the visit.

The site visit was an invaluable experience for the SFMC team, providing the QPS Committee and leadership team with greater understanding of the level of commitment, accountability, and focus required to effect significant and sustainable improvement. The experience and best practices from St. Mary's Hospital and Medical Center are continually referenced by the SFMC QPS Committee and used as a benchmark for the organization.
QUALITY AND PATIENT SAFETY COMMITTEE

The QPS Committee is comprised of seven voting members: four members of the SFMC Governing Board of Directors (a physician, a health care administration university professor, and two Daughters of Charity), a community representative, and the Chief and Chief-elect of the Medical Staff. The hospital President/CEO and the system Vice President of Quality serve as ex-officio members of the committee, which is further supported by other hospital leaders (administration, nursing, quality, patient satisfaction, risk, and medical staff services). On a monthly basis, the committee reviews recommendations made and actions taken by the Medical Executive Committee (MEC), such as credentialing, peer review, reappointment, and policies and procedures. The quality scorecard includes data reflecting the hospital's clinical quality and process performance in Joint Commission
Core Measures, the Centers for Medicare and Medicaid Services (CMS) HQI Demonstration Project, patient safety, and the 100,000 Lives Campaign initiatives (see Fig. 3). It is reviewed along with any adverse/sentinel events and patient throughput metrics. The committee reviews patient satisfaction data, risk reports, and environment of care metrics on a quarterly basis, and licensing and accreditation issues on an as-needed basis. The reporting frequency is depicted in Table 3. The committee establishes oversight and accountability for the results, setting the direction for continuously improving quality and safety. On a monthly basis, the QPS chairman provides the SFMC Board of Directors with a summary report of progress made and recommendations for board action, as necessary.
CLINICAL UNIT/DEPARTMENT ROUNDS

Through a unique hands-on approach, members of the QPS Committee initiated rounds to clinical units/departments prior to each committee meeting in order to gain first-hand knowledge of the patient care delivery system. Walk-around rounds are recommended as a way to positively impact QPS (Spath, 2007b; Bisognano, Lloyd, & Schummers, 2006). The visits are pre-announced to the unit manager/director, who is invited to provide the board members with an introductory overview and tour of the unit. While rounding, the board members interact with staff, physicians, and patients on the unit. Staff members are asked to share any quality of care and/or patient safety concerns and to discuss their department's resource wish list.

The unit visits are intended to evaluate compliance with clinical quality processes, accreditation and licensing, appropriateness of care, and medical record documentation. Examples of issues addressed during board rounds are listed in Table 4. Furthermore, the rounds provide the committee members the opportunity to validate information they have received at previous QPS meetings and to identify additional issues warranting further review. A summary report of the unit visit and findings is then reported to the whole committee. These rounds serve as a springboard as the QPS Committee identifies additional improvement areas. The rounds are an effective tool for creating real dialogue between the board and hospital staff, further reflecting the board's level of engagement in the provision of high-quality and safe patient care.
Fig. 3. Quality Scorecard.
Table 3. Reporting Frequency.

Monthly:
- Medical staff peer review activities
- Medical staff appointments, credentialing, and reappointment
- Quality scorecard (clinical quality, patient safety, patient throughput)
- Policies and procedures

Quarterly:
- Patient satisfaction
- Risk management
- Environment of care

Annually:
- Human resource reports (staffing effectiveness, competency, associate retention, associate satisfaction)
- Annual plans (plan for the provision of patient care, plan for improving organizational performance, patient safety, risk management, management of human resources)

As needed:
- Licensing and accreditation
Peer Review

While the majority of peer review cases are reviewed and scored for care variations at medical staff department meetings, it is not unusual for some cases to require repeated reviews. Prior to the formation of the QPS Committee, delays in rendering peer review decisions were common. In order to support timely adjudication, the board requested the MEC to ensure that reviews are completed within 90–120 days from initial screening to final assignment of a peer review score. Peer review timeliness was regularly reported as a metric to each department. With this increased focus, the past six months reflect 100% compliance in case reviews being scored.

In addition, the QPS Committee emphasized the need for the medical staff to be self-critical and thorough in the peer review process. If reviews and peer review scorings are not reflective of the care rendered and the resultant patient outcome, the QPS Committee recommends the board return the case to
Table 4. Examples of Issues Evaluated During Board Rounds.

Systems/processes evaluated, each assessed against one or more of four dimensions (clinical quality processes, accreditation and licensing, appropriateness of care, and medical record documentation):
- Thrombosis risk screening
- National patient safety goals
- Excess length of stay
- Legibility
- Fall precautions
- Medication reconciliation
- Patient throughput
- Restraint use
- Immunization screening
- Patient satisfaction
- Advance health care directive and informed consent
the MEC to reconsider the score assignment. Since October 2005, the board has rejected the peer review decision in six cases and sent each back to the MEC for a more thorough and critical review. In the period since the QPS Committee was formed, peer review scores have been more reflective of the actual care rendered. These board-level actions served to reinforce medical staff accountability in the peer review process, and the thoroughness and overall timeliness of the reviews have improved.
MEDICAL STAFF CREDENTIALING AND RE-APPOINTMENT

The medical staff utilizes a tiered approach to categorize medical staff appointment and reappointment applications. The designated tiers, or categories, reflect the physician's record from the standpoint of professional liability settlements/judgments, disciplinary action/licensure status, references, and verification of information for initial appointment (Fig. 4). For reappointment, additional data reflecting the physician's case volume as well as quality performance are reviewed. Medical record documentation
Fig. 4. Medical Staff Appointment and Reappointment Categories.

Appointment: Level One (Fast Track). Practitioner has:
- Satisfactory references,
- Satisfied all criteria for membership and privileges requested,
- No record of malpractice payments within the past 5 years,
- No disciplinary actions, no licensure restrictions,
- No problems verifying information,
- No indications of investigations or potential problems,
- Information returned in a timely manner that contains nothing suggesting the practitioner is anything other than highly qualified in all areas.

Reappointment: Level One. Practitioner meets all criteria for Level One Appointment, plus:
- Practitioner is not requesting new privileges, or satisfies all criteria for any new privileges requested,
- No record of malpractice payments within the past 2 years,
- CME is sufficient in volume and relates to privileges,
- Practitioner meets all criteria and is currently competent to perform privileges requested,
- Practitioner's specific profile indicates that performance has been satisfactory in all areas (clinical practice, behavior, etc.) and is absent problematic trends or patterns,
- No problems have been identified regarding the practitioner's ability to perform the privileges requested.

Appointment: Level Two. Applicant fails to meet one or more of the criteria identified in Level One above; however, after careful review by the Credentials Committee and department, the applicant has been recommended for appointment.

Reappointment: Level Two. Practitioner fails to meet one or more of the criteria identified in Level One Appointment or Level One Reappointment above; however, after careful review by the Credentials Committee and department, the practitioner has been recommended for a two-year reappointment.

Appointment: Level Three. Applicant fails to meet one or more of the criteria identified in Level One above and, after careful review by the Credentials Committee and department, has NOT been recommended for appointment.

Reappointment: Level Three. Practitioner fails to meet one or more of the criteria identified in Level One Appointment or Level One Reappointment above; however, after careful review by the Credentials Committee and department, the practitioner has been recommended for a limited reappointment. Limited reappointment is for a time period of less than two years.

Reappointment: Level Four. Practitioner fails to meet one or more of the criteria identified in Level One Appointment or Level One Reappointment above and, after careful review by the Credentials Committee and department, has NOT been recommended for reappointment. Note: Level Four Reappointment decisions based on medical disciplinary cause or reason must be forwarded to the Board for final action.
Reappointment Criteria.

- Medical Record Suspension (cumulative days): Level I, 0–15 days; Level II, 16–60 days; Level III, more than 60 days
- Illegible documentation (number of incidents referred to Medical Records or P&T Committee for action): Level I, 2; Level II, 3–5; Level III, 6 or more
- Malpractice Settlements: Level I, 0; monetary trend/settlement included in peer review, handled on a case-by-case basis
- Complaints (number of patient or family complaints per number of inpatient discharges)
- Avoidable Days, Admission Criteria Not Met: Level I, 0; Level II, 10% of total days for that doctor; Level III, 20% of total days for that doctor
- Avoidable Days, Physician Delay: Level I, 0; Level II, 10% of total days for that doctor; Level III, 20% of total days for that doctor
- Medi-Cal Denied Days: Level I, 0; Level II, 10% of total days for that doctor; Level III, 20% of total days for that doctor
- Medication Use Pharmacy Interventions Compliance: Level I, 75–100%; Level II, 50–74%