SIX SIGMA AND BEYOND
The Implementation Process


SIX SIGMA AND BEYOND
A series by D.H. Stamatis

Volume I    Foundations of Excellent Performance
Volume II   Problem Solving and Basic Mathematics
Volume III  Statistics and Probability
Volume IV   Statistical Process Control
Volume V    Design of Experiments
Volume VI   Design for Six Sigma
Volume VII  The Implementation Process


D. H. Stamatis

SIX SIGMA AND BEYOND
The Implementation Process

ST. LUCIE PRESS
A CRC Press Company
Boca Raton  London  New York  Washington, D.C.


Library of Congress Cataloging-in-Publication Data

Stamatis, D. H., 1947–
    Six sigma and beyond : foundations of excellent performance / Dean H. Stamatis.
        p. cm. -- (Six Sigma and beyond series)
    Includes bibliographical references.
    ISBN 1-57444-314-3
    1. Quality control--Statistical methods. 2. Production management--Statistical methods. 3. Industrial management. I. Title. II. Series.
    TS156 .S73 2001
    658.5′62--dc21    2001041635

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.

The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

Visit the CRC Press Web site at www.crcpress.com

© 2003 by CRC Press LLC
St. Lucie Press is an imprint of CRC Press LLC

No claim to original U.S. Government works
International Standard Book Number 1-57444-314-3
Library of Congress Card Number 2001041635
Volume VII: The Implementation Process ISBN 1-57444-316-X
Printed in the United States of America 1 2 3 4 5 6 7 8 9 0
Printed on acid-free paper


To John and Helen Chalapis (my teachers, mentors, friends and Koumbari)



Preface

In the first six volumes of this series, we followed a traditional writing style to explain, modify, and elaborate on points, concepts, and issues as they pertained to the Six Sigma methodology. This volume is quite different and may be considered somewhat unorthodox. There are at least two reasons for this drastic shift in style. First, the material has already been presented in detail in the individual volumes, so there is no need to repeat it. Second, this is an implementation volume, which means the information is geared toward helping readers formalize their own training, whatever that may be. We are cognizant that a variety of organizations exist, each seeking its own application of the Six Sigma methodology; that is why we have developed the material in such a way as to help readers address their own needs.

The first major division of the material is Part I, in which we provide material essential to readers developing their own training. All human learning is about skills, knowledge, and attitudes (SKAs). Because we believe these three attributes are the drivers of learning, we have spent several chapters making sure that SKAs are identified and planned for in the training. We believe that with the basic tools described in these chapters, anyone can create a system for training adults that is much faster, has high user validity, and is exceptionally adaptable. (This is very important because part of the implementation process requires that black belts cascade the training to green belts.)

The second division is Part II, in which we give the reader a prescriptive approach to training. We start by identifying the executives, champions, master black belts, black belts, and green belts and providing a general overview of the Six Sigma methodology. Within each grouping we also identify the objectives for which each category should be responsible, and then gradually present the training outline for three options: 1) transactional, 2) technical, and 3) manufacturing. The outline is somewhat thematic in nature and in some places repeats what is contained in earlier chapters, due to overlap in the knowledge requirements of the various levels of leadership. The reader is encouraged to return to the main volumes to extract more material and examples (although we would encourage readers to generate personalized examples from their own processes). In some cases, the outline merely identifies a topic without further explanation, for example, FMEA, TRIZ, QFD, control charts, capability, capability indices, DOE, Taguchi, and others. The reason for such laconic treatment is that we have gone to great lengths to explain these terms in earlier volumes. On the other hand, there are situations in which we find it necessary to elaborate, explain, or reemphasize certain issues, even though we have already covered them, because these items are central to the process of Six Sigma.


The third division is Part III, in which we discuss the training for DFSS and certification; this part also contains an epilog. The approach is the same as that described above. In addition, the reader will notice that the objectives are identified in such a way that transactional, technical, and manufacturing executives, champions, master black belts, black belts, and green belts may all benefit from the information. They are grouped together in their respective categories. (For example, the objectives for the black belt are one entity covering the transactional, technical, and manufacturing areas.) The difference is in the selection process for each, which will depend on the background of the individual group and the organization's needs.

In the actual outlines, we have tried to make that distinction. The reader will nevertheless notice a great overlap in content. This is not incidental; it is by design, because all groups must have virtually the same understanding of what Six Sigma is all about. The difference is in the depth of that knowledge. Furthermore, as already mentioned, the outlines for each training track are short, in the sense that they are not very elaborate. Again, this is by design: we have tried to provide the structure of the content, and we hope that the reader will turn to Volumes I through VI to obtain detailed information as needed. We also have tried not to give any examples or simulations in the outlines, because we hope that readers will generate their own examples as they relate to their organizations. (If you need ready-made examples, you may draw on the individual volumes of this series.)

Specifically, this volume contains the following chapters:

Chapter 1   Understanding the Learner and Instruction
Chapter 2   Front-End Analysis
Chapter 3   Design of Instruction
Chapter 4   Development of Material and Evaluation
Chapter 5   Delivery of Material and Evaluation
Chapter 6   Contract Training
Chapter 7   Six Sigma for Executives
Chapter 8   Six Sigma for Champions
Chapter 9   Six Sigma for Master Black Belts
Chapter 10  Six Sigma for Black Belts
Chapter 11  Six Sigma for Green Belts
Chapter 12  Six Sigma for General Orientation
Chapter 13  DFSS Training
Chapter 14  Six Sigma Certification



Acknowledgments

We have come to the end of this series on Six Sigma and Beyond, and I am indebted to the many individuals who have helped, directly or indirectly, along the way. A series of volumes of this magnitude is necessarily based on a wide variety of original sources. While I have made original contributions in some specific areas of analysis, and certainly in the conceptual framework of the topic, the bulk of the material (i.e., SPC, DOE, Taguchi, project management, reliability, statistics and probability, value analysis, and so many other topics) is based on or expanded from the contributions of others. I have very carefully shown the sources of these materials at the points where they are discussed. I hope I have made no omissions.

I am indebted to The Six Sigma Academy, Biometrika, the Institute of Mathematical Statistics, CRC Press, Tennessee Associates, Marketing News, McGraw-Hill, John Wiley & Sons, Prentice Hall, Ford Motor Company, Thomson Learning, Houghton Mifflin Company, the American Supplier Institute, Mr. D. R. Bothe, and Dr. E. Buffa for granting me permission to use their material throughout these volumes. Special thanks to the people at CRC for helping me throughout this project in making the material presentable. You all are great!

Thanks also to the hundreds of seminar participants and graduate students at Central Michigan University who over the years have helped define some of my thoughts and clarify others. These two sources have indeed been the laboratory for many of my thoughts and approaches; based on their contributions I have modified and changed quite a few items for the better. I am indeed grateful.

My special thanks, however, are reserved for my family and especially my wife. Her support and encouragement allowed me to complete this project without reservation or difficulty. Thank you.



About the Author

D. H. Stamatis, Ph.D., ASQC Fellow, CQE, CMfgE, is president of Contemporary Consultants in Southgate, Michigan. He received his B.S. and B.A. degrees in marketing from Wayne State University, his master's degree from Central Michigan University, and his Ph.D. in instructional technology and business/statistics from Wayne State University. Dr. Stamatis is a certified quality engineer with the American Society for Quality Control, a certified manufacturing engineer with the Society of Manufacturing Engineers, and a graduate of BSI's ISO 9000 lead assessor training program.

He is a specialist in management consulting, organizational development, and quality science and has taught these subjects at Central Michigan University, the University of Michigan, and the Florida Institute of Technology. With more than 30 years of experience in management, quality training, and consulting, Dr. Stamatis has served and consulted for numerous industries in the private and public sectors. His consulting extends across the United States, Southeast Asia, Japan, China, India, and Europe.

Dr. Stamatis has written more than 60 articles and presented many speeches at national and international conferences on quality. He is a contributing author to several books and the sole author of 20 books. In addition, he has performed more than 100 automotive-related audits and 25 preassessment ISO 9000 audits and has helped several companies attain certification. He is an active member of the Detroit Engineering Society, the American Society for Training and Development, the American Marketing Association, and the American Research Association, and a fellow of the American Society for Quality Control.



Tables

Table 1.1  Instructional events as they relate to the five types of learned capability
Table 1.2  Typical delivery systems and related information
Table 1.3  Standard verbs to describe learning capabilities
Table 1.4  Desirable sequence characteristics associated with five types of learning outcome
Table 1.5  Decision cycle
Table 1.6  Different routes to organizational payoff
Table 1.7  Kirkpatrick's evaluation with several examples

Table 2.1  Data collection techniques
Table 2.2  Contributing factors to problems
Table 2.3  Front-end analysis report information
Table 2.4  Front-end analysis formative evaluation checklist
Table 2.5  Information about essential tasks
Table 2.6  Task analysis formative evaluation checklist

Table 3.1  Example of content outline – changing a tire (terminal objective)
Table 3.2  Example of instructional plan
Table 3.3  Types of instructional media
Table 3.4  Learning principles
Table 3.5  Design of formative evaluation checklist

Table 4.1  Development principles
Table 4.2  Example of rough draft of text – changing a tire (terminal objective)
Table 4.3  Rough draft evaluation form
Table 4.4  Development of materials – formative evaluation checklist
Table 4.5  Evaluation: pilot testing – formative evaluation checklist

Table 5.1  A typical delivery plan
Table 5.2  Delivery of materials – formative evaluation checklist
Table 5.3  On-the-job application – formative evaluation checklist
Table 5.4  Post-instructional data collection tools
Table 5.5  Self-evaluation measurement tool
Table 5.6  Research design action plan
Table 5.7  Evaluation: post-instruction – formative evaluation checklist

Table 6.1  Criteria for evaluating products
Table 6.2  Design and development principles
Table 6.3  Development principles
Table 6.4  Forms of rough drafts
Table 6.5  Typical audience's response questionnaire
Table 6.6  Learner/supervisor post-instructional agreement
Table 6.7  Research design action plan


Frequent Abbreviations in Six Sigma Methodology

ANOVA      Analysis of variance
COPQ       Cost of poor quality
COQ        Cost of quality
Cp         Short-term process capability
Cpk        Long-term process capability
CT         Critical to (matrix)
CTC        Critical to customer
CTD        Critical to delivery
CTP        Critical to process
CTQ        Critical to quality
CTS        Critical to satisfaction
CTX        Critical to process
CTY        Critical to product
D          Observed defects
df         Degrees of freedom
DOE        Design of experiment
DPO        Defects per opportunity
DPU        Defects per unit
DVP        Design validation (verification) plan
EVOP       Evolutionary operation
EVP        Engineering validation plan
EWMA       Exponential weighted moving average
FMA        Failure mode analysis
FMEA       Failure mode and effect analysis
GR&R       Gage repeatability and reproducibility
J          Units scrapped
KPIV       Key process input data
KPOV       Key process output data
LCL        Lower control limit
LSL        Lower specification limit
m          Opportunities per unit
MCP        Manufacturing control process
m.tot      Opportunities submitted
MTBF       Mean time between failure
OA         Orthogonal array
P-diagram  Parameter diagram
PLEX       Planning experiment
PPM        Parts per million
PTAR       Plan-Train-Apply-Review
PVP        Process validation plan
QFD        Quality function deployment
R          Units repaired
RSS        Root sum of squares
S          Units passed
SIPOC      Supplier-Input-Process-Operation-Customer
SOP        Standard operating procedures
SPC        Statistical process control
SPM        Statistical process monitoring
SS         Sum of squares
SSBB       Six Sigma black belt
SSC        Six Sigma champion
SSGB       Six Sigma green belt
SSMBB      Six Sigma master black belt
TDPO       Total defect per unit
TOP        Total opportunities
U          Units submitted
UCL        Upper control limit
USL        Upper specification limit
WIP        Work in process
Y.A        Annual rate of improvement
Y.final    Final throughput
Y.ft       First time yield
Y.M        Monthly rate of improvement
Y.m        Yield per opportunity
Y.normal   Normalized yield
Y.rt       Rolled-throughput yield
Z.lt       Long-term sigma
Z.shift    Shift factor
Z.st       Short-term sigma
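Several of these abbreviations are related by simple arithmetic: D, U, and m yield DPU and DPO; DPO scaled to a million opportunities gives the DPM (PPM) figure; and DPM converts to the long- and short-term sigma values (Z.lt, Z.st) through the standard normal distribution, with Z.shift conventionally taken as 1.5. The following Python sketch is our own minimal illustration of those relationships, not code from this series; it assumes the conventional 1.5-sigma shift and the Poisson approximation Y.ft ≈ e^(−DPU), both of which are developed in the earlier volumes.

```python
from math import exp
from statistics import NormalDist  # standard library, Python 3.8+


def dpu(defects: int, units: int) -> float:
    """DPU: observed defects (D) per unit (U)."""
    return defects / units


def dpo(defects: int, units: int, opportunities: int) -> float:
    """DPO: defects per opportunity, where m = opportunities per unit."""
    return defects / (units * opportunities)


def dpm(defects: int, units: int, opportunities: int) -> float:
    """DPM (PPM): defects per million opportunities."""
    return dpo(defects, units, opportunities) * 1_000_000


def first_time_yield(dpu_value: float) -> float:
    """Y.ft via the Poisson approximation: Y = e**(-DPU)."""
    return exp(-dpu_value)


def z_long_term(dpm_value: float) -> float:
    """Z.lt: the standard-normal quantile of the non-defective fraction."""
    return NormalDist().inv_cdf(1 - dpm_value / 1_000_000)


def z_short_term(dpm_value: float, z_shift: float = 1.5) -> float:
    """Z.st = Z.lt + Z.shift, using the conventional 1.5-sigma shift."""
    return z_long_term(dpm_value) + z_shift


# Worked example: 3.4 DPM is roughly 4.5 sigma long term,
# or 6 sigma short term once the 1.5 shift is added back.
if __name__ == "__main__":
    print(round(z_long_term(3.4), 2))   # ~4.5
    print(round(z_short_term(3.4), 2))  # ~6.0
```

Rolled-throughput yield (Y.rt), for instance, is then conventionally the product of the first-time yields of the individual process steps; the earlier volumes develop these metrics in detail.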


Table of Contents Part I Understanding Adult Training and Instructional Design ................................................... 1 Introduction..............................................................................................................3 What Is Diffusion? ....................................................................................................3 Characteristics of Innovations ...................................................................................4 The Process of Six Sigma Diffusion in the Organization ........................................5 Reference ...................................................................................................................6 Chapter 1

Understanding the Learner and Instruction .........................................7

Expectations for Participants.....................................................................................7 Prepare for Successful Learning ...............................................................................7 Prepare for Each Training Course.............................................................................8 Assume an Active Role in the Learning Environment .............................................9 Understanding Adult Learners.................................................................................10 To Start With, We Must Recognize That Adults Are Interested In.................10 Principles of Instructional Design...........................................................................14 Stages or Phases of Design .....................................................................................14 Conditions of Learning ....................................................................................15 Desirable Sequence Characteristics Associated with Five Types of Learning Outcome...............................................................................................20 References................................................................................................................24 Chapter 2

Front-End Analysis ............................................................................25

Introduction..............................................................................................................25 Problem-Solving Front-End Analysis .....................................................................26 Task Analysis ...........................................................................................................31 Steps in Task Analysis.............................................................................................33 References................................................................................................................39 Selected Bibliography..............................................................................................39 Chapter 3

Design of Instruction .........................................................................41

Preparation ...............................................................................................................41 Steps in Design of Instruction.................................................................................41 References................................................................................................................48 Selected Bibliography..............................................................................................48 © 2003 by CRC Press LLC

SL316X FMFrame Page 18 Wednesday, October 2, 2002 8:24 AM

Chapter 4

Development of Material and Evaluation..........................................51

Steps in Development of Materials .........................................................................51 Planning ............................................................................................................51 Implementation ........................................................................................................52 Evaluation: Pilot Testing .........................................................................................55 Steps in Pilot Testing...............................................................................................57 Planning ............................................................................................................57 Implementation ........................................................................................................59 References................................................................................................................61 Selected Bibliography..............................................................................................61 Chapter 5

Delivery of Material and Evaluation .................................................63

Steps in Delivery of Materials ................................................................................63 Planning ............................................................................................................63 Preparation.................................................................................................63 Implementation ........................................................................................................66 On-the-Job Application............................................................................................69 Steps in On-the-Job Application .............................................................................69 Planning ............................................................................................................69 Preparation.................................................................................................69 Implementation ........................................................................................................71 Before Training........................................................................................................71 After Training ..........................................................................................................72 Evaluation: Post-Instruction ....................................................................................73 Steps in Post-Instructional Evaluation ....................................................................74 Planning ............................................................................................................74 Implementation ........................................................................................................77 References................................................................................................................79 Selected Bibliography..............................................................................................79 Chapter 6

Contract Training ...............................................................................81

Front-End Analysis ..................................................................................................81 Task Analysis ...........................................................................................................82 Design of Instruction ...............................................................................................83 Design of Job Aids ..................................................................................................84 Development of Materials .......................................................................................85 Evaluation: Pilot Testing .........................................................................................88 Delivery of Materials...............................................................................................88 On-the-Job Application............................................................................................90 Evaluation: Post-Instruction ....................................................................................90 References................................................................................................................92 Selected Bibliography..............................................................................................92

© 2003 by CRC Press LLC

SL316X FMFrame Page 19 Wednesday, October 2, 2002 8:24 AM

Part II Training for the DMAIC Model................................ 93 Chapter 7

Six Sigma for Executives...................................................................95

Instructional Objectives — Executives.................................................................... 95 Recognize Customer Focus..............................................................................95 Business Metrics...............................................................................................95 Six Sigma Fundamentals..................................................................................96 Define Nature of Variables ...............................................................................98 Opportunities for Defects .................................................................................98 CTX Tree ..........................................................................................................98 Process Mapping ..............................................................................................98 Process Baselines..............................................................................................99 Six Sigma Projects ...........................................................................................99 Six Sigma Deployment.....................................................................................99 Measure.............................................................................................................99 Scales of Measure .....................................................................................99 Data Collection.................................................................................................99 Measurement Error....................................................................................99 Statistical Distributions ...........................................................................100 Static Statistics ........................................................................................100 Dynamic Statistics...................................................................................100 Analyze Six Sigma Statistics ..................................................................100 Process Metrics .......................................................................................101 Diagnostic Tools......................................................................................101 Simulation Tools......................................................................................101 Statistical Hypotheses .............................................................................101 Continuous Decision Tools ............................................................................101 Discrete Decision Tools..................................................................................101 Improve Experiment Design Tools ................................................................101 Robust Design Tools ......................................................................................102 Empirical Modeling Tools..............................................................................102 Tolerance Tools...............................................................................................102 Risk Analysis Tools ........................................................................................102 DFSS Principles .............................................................................................102 Control Precontrol Tools ................................................................................102 Continuous SPC 
Tools....................................................................................102 Discrete SPC Tools.........................................................................................102 Outline of Actual Executive Training Content — 1 Day .....................................102 Maximize Customer Value .............................................................................103 Minimize Process Costs .................................................................................103 Six Sigma Leadership............................................................................................103 The Six Sigma DMAIC Model......................................................................103

© 2003 by CRC Press LLC

SL316X FMFrame Page 20 Wednesday, October 2, 2002 8:24 AM

How Six Sigma Fits .......................................................................................103 Leadership Prerequisites.................................................................................104 Deployment Infrastructure..............................................................................104 Sustaining the Gains.......................................................................................104 Project Review Guidelines .............................................................................104 Alternative Six Sigma Executive Training — 2 Days ..........................................105 Measurement...................................................................................................105 Maximizing the Customer Supplier Relationship..........................................106 The Classical vs. the Six Sigma Perspective of Yield...................................106 Traditional Yield View....................................................................................106 The Two Types of Defect Models..................................................................106 Process Characterization ................................................................................106 The Focus of Six Sigma — Customer Satisfaction and Organizational Profitability .....................................................................................................106 Definition of a Problem..................................................................................107 Roles and Responsibilities .............................................................................107 Roles of a Champion......................................................................................107 Roles of the Master Black Belt......................................................................107 Roles of the Black Belt ..................................................................................108 There Are Five Actions That Have Proven Critical to Continued Six Sigma Breakthrough ..........................................................109 Six Sigma Breakthrough ................................................................................109 Define..............................................................................................................110 Purpose ...........................................................................................................110 Questions to Be Answered .............................................................................110 A Typical Checklist for the Define Phase .....................................................110 Tools................................................................................................................111 Measure...........................................................................................................111 Purpose ...........................................................................................................111 Questions to Be Answered .............................................................................111 Typical Checklist for the Measure Phase ......................................................112 Tools................................................................................................................112 Analyze ...........................................................................................................112 Purpose 
...........................................................................................................112 Questions to Be Answered .............................................................................112 Typical Checklist for the Analyze Phase .......................................................113 Tools................................................................................................................113 Improve...........................................................................................................113 Purpose ...........................................................................................................113 Questions to Be Answered .............................................................................113 Typical Checklist for the Improve Phase.......................................................114 Tools................................................................................................................114 Control ............................................................................................................114 Purpose ...........................................................................................................114 Questions to Be Answered .............................................................................114 Typical Checklist for the Control Phase ........................................................115 © 2003 by CRC Press LLC

SL316X FMFrame Page 21 Wednesday, October 2, 2002 8:24 AM

Tools................................................................................................................115 Six Sigma — The Initiative............................................................................115 Process — Systematic Approach to Reducing Defects That Affect What Is Important to the Customer ...............................................................115 Six Sigma... the Practical Sense .............................................................116 Foundation of the Tools .................................................................................116 Getting to Six Sigma......................................................................................116 The Standard Deviation..................................................................................116 Chapter 8

Six Sigma for Champions................................................................117

Curriculum Objectives for Champion Training ....................................................118 Recognize .......................................................................................................118 Customer Focus.......................................................................................118 Business Metrics .....................................................................................118 Six Sigma Fundamentals.........................................................................118 Define..............................................................................................................120 Nature of Variables..................................................................................120 Opportunities for Defects........................................................................120 CTX Tree.................................................................................................121 Process Mapping .....................................................................................121 Process Baselines ....................................................................................121 Six Sigma Projects ..................................................................................121 Six Sigma Deployment ...........................................................................121 Measure...........................................................................................................122 Scales of Measure ...................................................................................122 Data Collection........................................................................................122 Measurement Error..................................................................................122 Statistical Distributions ...........................................................................122 Static Statistics ........................................................................................122 Dynamic Statistics...................................................................................123 Analyze ...........................................................................................................123 Six Sigma Statistics.................................................................................123 Process Metrics .......................................................................................124 Diagnostic Tools......................................................................................124 Simulation Tools......................................................................................124 Statistical Hypotheses .............................................................................125 Continuous Decision Tools .....................................................................125 Discrete Decision Tools ..........................................................................126 Improve...........................................................................................................126 Experiment Design Tools........................................................................126 Robust Design Tools ...............................................................................127 Empirical Modeling Tools.......................................................................127 Tolerance Tools .......................................................................................127 Risk Analysis 
Tools.................................................................................127 DFSS Principles ......................................................................................127 © 2003 by CRC Press LLC

SL316X FMFrame Page 22 Wednesday, October 2, 2002 8:24 AM

Control ............................................................................................................128 Precontrol Tools ......................................................................................128 Continuous SPC Tools ............................................................................128 Discrete SPC Tools .................................................................................128 Six Sigma Project Champion Transactional (General Business and Service — Nonmanufacturing) Training .........................128 Six Sigma Breakthrough Goal .......................................................................129 Six Sigma Goal ..............................................................................................129 Comparison between Three Sigma and Six Sigma Quality..........................129 Short Historical Background..........................................................................129 Overview of the Big Picture ..........................................................................130 Identify Customer...........................................................................................132 The DMAIC Process ......................................................................................133 Detailed Model Explanation ..........................................................................135 Performance Metrics Reporting .....................................................................135 Establish Customer Focus ..............................................................................135 Define Variables: Key Questions Are.............................................................136 The Focus of Six Sigma.................................................................................136 Process Optimization......................................................................................136 Process Baseline: Key Questions Are ............................................................136 Process Mapping ............................................................................................137 Cause and Effect.............................................................................................138 The Approach to C&E Matrix .......................................................................138 Links of C&E Matrix to Other Tools ............................................................138 Basic Statistics................................................................................................138 Converting DPM to a Z Equivalent ...............................................................139 Basic Graphs...................................................................................................139 Analyze ...........................................................................................................140 Improve...........................................................................................................140 Control ............................................................................................................140 Six Sigma Project Champion — Technical Training............................................140 Six Sigma Breakthrough Goal .......................................................................141 Six Sigma Goal ..............................................................................................141 Comparison between Three Sigma and Six Sigma Quality..........................142 Short Historical 
Background..........................................................................142 Overview of the Big Picture ..........................................................................142 Identify Customer...........................................................................................145 The DMAIC Process ......................................................................................146 Detailed Model Explanation ..........................................................................148 Performance Metrics Reporting .....................................................................148 Establish Customer Focus ..............................................................................148 Define Variables: Key Questions Are.............................................................148 The Focus of Six Sigma.................................................................................149 Process Optimization......................................................................................149

© 2003 by CRC Press LLC

SL316X FMFrame Page 23 Wednesday, October 2, 2002 8:24 AM

Process Baseline .............................................................................................149 Process Mapping ............................................................................................150 Cause and Effect.............................................................................................151 The Approach to C&E Matrix .......................................................................151 Links of C&E Matrix to Other Tools ............................................................151 Basic Statistics................................................................................................151 Converting DPM to a Z Equivalent ...............................................................152 Basic Graphs...................................................................................................152 Analyze ...........................................................................................................152 Improve...........................................................................................................153 Control ............................................................................................................153 Six Sigma Project Champion Training — Manufacturing ...................................153 Exploring Our Values ............................................................................................153 Short Overview...............................................................................................153 Six Sigma Manufacturing Champion Training — Getting Started ......................155 Tips on Success for Six Sigma Manufacturing Champion...................................165 Champion Issues....................................................................................................166 Project Report Out.................................................................................................169 Project Presentation Milestone Requirements — Week 1 Training .....................174 Presentation Goals ..........................................................................................174 Presentation Notes ..........................................................................................175 Project Presentation — Week 2.............................................................................175 Presentation Goals ..........................................................................................175 Presentation Notes ..........................................................................................175 Project Presentation – Week 3...............................................................................175 Presentation Goals ..........................................................................................175 Presentation Notes ..........................................................................................176 Project Presentation – Week 4...............................................................................176 Presentation Goals ..........................................................................................176 Presentation Notes ..........................................................................................176 Typical Champion’s Questions for the Project Review........................................177 In the Define Phase ........................................................................................177 Have You 
.................................................................................................177 For Each Individual Project, Have You: .................................................177 In the Measure Phase .....................................................................................177 Typical Questions at This Phase Should Be:..........................................177 In the Analyze Phase......................................................................................178 Typical Questions in This Phase Should Be: .........................................178 In the Improve Phase......................................................................................178 Typical Questions in This Phase Should Be: .........................................178 In the Control Phase.......................................................................................179 Typical Questions in This Phase Should Be: .........................................179 Reference ...............................................................................................................179 Selected Bibliography............................................................................................179

© 2003 by CRC Press LLC

SL316X FMFrame Page 24 Wednesday, October 2, 2002 8:24 AM

Chapter 9

Six Sigma for Master Black Belts...................................................181

Instructional Objectives — Shogun (Master Black Belt) .....................................181 Recognize .......................................................................................................181 Customer Focus.......................................................................................181 Business Metrics .....................................................................................181 Six Sigma Fundamentals.........................................................................182 Define..............................................................................................................184 Nature of Variables..................................................................................184 Opportunities for Defects........................................................................184 CTX Tree.................................................................................................184 Process Mapping .....................................................................................184 Process Baselines ....................................................................................185 Six Sigma Projects ..................................................................................185 Six Sigma Deployment ...........................................................................185 Measure...........................................................................................................186 Scales of Measure ...................................................................................186 Data Collection........................................................................................186 Measurement Error..................................................................................186 Statistical Distributions ...........................................................................186 Static Statistics ........................................................................................187 Dynamic Statistics...................................................................................187 Analyze ...........................................................................................................188 Six Sigma Statistics.................................................................................188 Process Metrics...............................................................................................188 Diagnostic Tools......................................................................................189 Simulation Tools......................................................................................189 Statistical Hypotheses .............................................................................189 Continuous Decision Tools .....................................................................190 Discrete Decision Tools ..........................................................................191 Improve...........................................................................................................192 Experiment Design Tools........................................................................192 Robust Design Tools ...............................................................................194 Empirical Modeling Tools..............................................................................194 Tolerance Tools .......................................................................................194 Risk Analysis 
Tools.................................................................................195 DFSS Principles ......................................................................................195 Control ............................................................................................................195 Precontrol Tools ......................................................................................195 Continuous SPC Tools ............................................................................196 Discrete SPC Tools .................................................................................196 Training...........................................................................................................196 Chapter 10 Six Sigma for Black Belts ...............................................................199 Instructional Objectives — Black Belt..................................................................199 © 2003 by CRC Press LLC

SL316X FMFrame Page 25 Wednesday, October 2, 2002 8:24 AM

Recognize
    Customer Focus
    Business Metrics
    Six Sigma Fundamentals
Define
    Nature of Variables
    Opportunities for Defects
    CTX Tree
    Process Mapping
    Process Baselines
    Six Sigma Projects
    Six Sigma Deployment
Measure
    Scales of Measure
    Data Collection
    Measurement Error
    Statistical Distributions
    Static Statistics
    Dynamic Statistics
Analyze
    Six Sigma Statistics
    Process Metrics
    Diagnostic Tools
    Simulation Tools
    Statistical Hypotheses
    Continuous Decision Tools
    Discrete Decision Tools
Improve
    Experiment Design Tools
    Robust Design Tools
    Empirical Modeling Tools
    Tolerance Tools
    Risk Analysis Tools
    DFSS Principles
Control
    Precontrol Tools
    Continuous SPC Tools
    Discrete SPC Tools
Content of Black Belt Training — Outline
Transactional Training — 4-Week Training
    Week 1
    Week 2
        Key Questions from Week 1
    Week 3
    Week 4


Technical Training — 4 Weeks
    Week 1
    Week 2
        Hypothesis Testing Introduction
        Parameters vs. Statistics
        Formulating Hypotheses
    Week 3
    Week 4
        Fractional Factorials
        Control Plans
Manufacturing Training — 4 Weeks
    Week 1
    Week 2
        Hypothesis Testing Introduction
    Week 3
        DOE Introduction
    Week 4
        Fractional Factorials
        SPC Flowchart
        Control Plans

Chapter 11 Six Sigma for Green Belts
Instructional Objectives — Green Belt
    Recognize
        Customer Focus
        Business Metrics
        Six Sigma Fundamentals
    Define
        Nature of Variables
        Opportunities for Defects
        CTX Tree
        Process Mapping
        Process Baselines
        Six Sigma Projects
        Six Sigma Deployment
    Measure
        Scales of Measure
        Data Collection
        Measurement Error
        Statistical Distributions
        Static Statistics
        Dynamic Statistics
    Analyze
        Six Sigma Statistics
        Process Metrics


        Diagnostic Tools
        Simulation Tools
        Statistical Hypotheses
        Continuous Decision Tools
        Discrete Decision Tools
    Improve
        Experiment Design Tools
        Robust Design Tools
        Empirical Modeling Tools
        Tolerance Tools
        Risk Analysis Tools
        DFSS Principles
    Control
        Precontrol Tools
        Continuous SPC Tools
        Discrete SPC Tools
Six Sigma Transactional Green Belt Training
    The DMAIC Model in Detail
    The Define Phase
    Who Is the Customer?
    Measurement Phase
    Measurement Systems Analysis
    The Analysis Phase
    The Improvement Phase
    The Control Phase
        Selecting Statistical Techniques
    Hypothesis Testing Introduction
    Parameters vs. Statistics
    Introduction to Design of Experiments
    Screening Designs
    Control Plans
Six Sigma Green Belt Training — Technical
    Short Historical Background
    The DMAIC Process
    The DMAIC Model in Detail
        Define
        Measure
        Analyze
        Improve
        Control
Six Sigma Green Belt Training — Manufacturing
    Phases of Process Improvement
        The Define Phase
        The Measurement Phase
        Measurement Systems Analysis


        The Analysis Phase
        The Improvement Phase
        The Control Phase
        Selecting Statistical Techniques
    Hypothesis Testing Introduction
    Parameters vs. Statistics
    Introduction to Design of Experiments
    Screening Designs
    Control Plans
Reference

Chapter 12 Six Sigma for General Orientation
Instructional Objectives — General
    Recognize
        Customer Focus
        Business Metrics
        Six Sigma Fundamentals
    Define
        Nature of Variables
        Opportunities for Defects
        CTX Tree
        Process Mapping
        Process Baselines
        Six Sigma Projects
        Six Sigma Deployment
    Measure
        Scales of Measure
        Data Collection
        Measurement Error
        Statistical Distributions
        Static Statistics
        Dynamic Statistics
    Analyze
        Six Sigma Statistics
        Process Metrics
        Diagnostic Tools
        Simulation Tools
        Statistical Hypotheses
        Continuous Decision Tools
        Discrete Decision Tools
    Improve
        Experiment Design Tools
        Robust Design Tools
        Empirical Modeling Tools
        Tolerance Tools


        Risk Analysis Tools
        DFSS Principles
    Control
        Precontrol Tools
        Continuous SPC Tools
        Discrete SPC Tools
Outline of Content
    Process Improvement
    Define
    Measure
        Measurement
    Variation
        Sampling
        Simple Calculations and Conversions
    Analyze
        Data Analysis
        Cause-and-Effect Analysis
        Root Causes Verification
        Determine the Opportunity
    Improve

Part III Training for the DCOV Model

Chapter 13 DFSS Training
The Actual Training for DFSS
Executive DFSS Training
DFSS Champion Training
    DFSS — 2-Day Program
    DFSS Champion Training Outline — 4 Days
Project Member and BB DFSS Training
    Week 1
    DCOV Model in Detail
        The Define Phase
        The Characterize Phase
    Ideal Function and P-Diagram
    Identifying Technical Metrics
    Week 2
        The Optimize Phase
    Design for Producibility
    Deliverables/Checklist for the Optimize Phase
    The Verify Phase
    Step 1: Update/Develop Test Plan Details


    Step 2: Conduct Test
    Step 3: Analyze/Assess Results
    Step 4: Does the Design Pass Requirements?
    Step 5: Develop Failure Resolution Plan
    Step 6: Record Actions on Design Verification Plan and Report (DVP&R)
    Step 7: Complete DVP&R
Selected Bibliography

Chapter 14 Six Sigma Certification
    The Need for Certification
    General Comments Regarding Certification as It Relates to Six Sigma
    Conclusion
    References
Epilog
Glossary
Selected Bibliography



1 Understanding the Learner and Instruction

EXPECTATIONS FOR PARTICIPANTS

Education is the single greatest investment you will ever make in implementing the Six Sigma methodology. This education will give you the potential to alter your perceptions, thinking, and behaviors; it may also empower you to choose work and interests that will add meaning to your career as well as to the organization's culture. Like all endeavors paying a high dividend, the personal cost of attaining higher learning is considerable. Every participant must contribute time, resources, and energy. The following recommendations are intended to help you maximize your investment in this lengthy process.

PREPARE FOR SUCCESSFUL LEARNING

Approach learning experiences with an open mind, set challenging goals, and monitor your progress.

• Familiarize yourself with the goals and objectives of your specific program.
• Set challenging goals for your own learning:
  • Create a personalized study plan for your training program and the individual material of the course in which you enroll.
  • Periodically assess progress using self, instructor, and peer feedback (take advantage of other similar courses that may help in your training, for example, DOE, PowerPoint presentations, statistical software, and so on).
  • Schedule frequent, self-paced study sessions with fellow participants or the instructor — or even the champion and the master black belt. They are there to help you. Use them!

Take a personal approach to learning:

• Reflect on your own thinking, learning, and prior experiences.
• Relate outside activities to course material and training activities.
• Analyze how new information relates to existing knowledge.
• Clarify your thinking and the assimilation of new knowledge by asking your instructors questions and actively participating in classroom discussions, online chat rooms, or e-mail discussion lists, if available.





Seek and understand scholarly research:

• Use libraries and other information sources, including the organization's library. This is not as bad as it sounds. Sometimes we do have to do some kind of research to find information about our customers, competitors, and the market and, of course, to track down technical information.
• Develop proficiency in the use of library research databases and especially your own organization's databases for things gone wrong, things gone right, warranty, things learned, and so on.
• Acquire the ability to critically assess the quality and validity of the information sources you use.
• Actively integrate scholarly knowledge and research evidence into training discussions and course assignments; after all, one of the objectives in Six Sigma is to introduce new tools, when appropriate and applicable, to solve problems.
• Learn how to compose a research paper or complete a research project, including selection of appropriate topics and resources, a literature review, and the proper citation of references. This can be very helpful if you are assigned to a DFSS project and you are trying to identify the "ideal function."

PREPARE FOR EACH TRAINING COURSE

Know the rules:

• Review the objectives for the course and any policies regarding class participation, attendance, overdue assignments, and make-up project assignments.
• Ask how much work will be expected of you (e.g., in and out of class, assignments, projects, and so on) and arrange your work/study schedule accordingly.

Know course materials:

• Take advantage of periodic classroom reviews of previously covered content.
• Seek additional learning resources to fill any knowledge gaps and expand your understanding of course content.
• Realize that meeting course objectives often entails knowledge of material not directly mentioned by the course instructor or included in course materials — for example, DOE, software manipulation, and many specific objectives that may be required to pursue Six Sigma.




Build effective working relationships with your instructors, other black belts, master black belts, and champion, as well as other participants:

• View your instructors as "facilitators," offering guidance and feedback on your personal learning process.
• Seek additional contact and communication with your instructors, other black belts, master black belts, and champion, as well as other participants, to enrich the learning experience.
• Be tolerant of opposing views and treat others with respect and civility.
• Seek support and advice from your instructors, peers, other black belts, master black belts, and mentors.

ASSUME AN ACTIVE ROLE IN THE LEARNING ENVIRONMENT

Bring awareness and a sense of purpose:

• Expect to earn your "grade."
• Attend all class sessions, including all scheduled activities. Try to minimize, if not avoid, all double scheduling during the training hours.
• Maintain compliance with any and all project deadlines.
• Meet expectations regarding project integrity (cost, measurement, capability, etc. are some of the issues we all like to cut corners with).

Bring knowledge, perspectives, and interest:

• Actively participate in class activities.
• Complete all required reading assignments prior to each class meeting and read suggested material.
• Expect your ideas to be challenged and prepare to support them with facts, research evidence, and expert judgment, whether discussing concepts online, asynchronously, or in the classroom.

Participants are also responsible for the project assignment by:

• Meeting recommended deadlines for completing required assignments and projects.
• Participating in all scheduled virtual or asynchronous classroom discussions or working directly with the instructor to negotiate a suitable alternative.
• Taking advantage of optional learning activities, suggested readings, and opportunities for informal virtual or asynchronous communication with the course faculty and fellow students.






UNDERSTANDING ADULT LEARNERS

To be effective in any educational endeavor, one must understand the learner. However, the adult learner has quite a few idiosyncrasies that are not present with the child learner. Perhaps the most important one is the fact that the adult learner views learning as a means to an immediate end. In other words, the adult learner wants to learn things as they pertain to his work "right now." Learning is more application-driven than theoretical. As a consequence, in this part of the book we are going to provide a very general overview of some of the issues and concerns in understanding the learner, the material, and, above all, the instructional process. For more information on adult learners, see Wlodkowski (1985), Cross (1981), Wonder and Donovan (1984), Knox (1986), and Brookfield (1986).

TO START WITH, WE MUST RECOGNIZE THAT ADULTS ARE INTERESTED IN

• Enhancing proficiency at a given task (work-related)
• Development and learning — recognition that different learners learn at different paces and with different methods (learning style, change events, responding to learner's diversity, occasions for new learning)
• Influencing participation by impulsive questioning and answering

The instructor or facilitator, therefore, must make sure that the following are observed at all times, so that learning may be enhanced:

• Respect
• Reasons
• Options
• Making learning relevant to their experiences

Every one of these items may contribute to learning; however, in specific terms, all of these may be derived through appropriate and applicable tasks in the following:

• Procedures presentations
• Active learning
• Meaning
• Variety
• Stages of the program
• Affective and cognitive elements
• Interpersonal relationships
• Past and future purposes
• Support and challenge
• Models
• Self-direction for learners
• Feedback
• Flexibility





This is not as easy as it sounds. However, if one were to use the principles of Instructional Design, the instruction becomes very systematic and productive. The idea of Instructional Design is based fundamentally on the SKA model, which focuses on three areas:

• Skill (Have you…?)
• Knowledge (Do you…?)
• Ability (Can you…?)

Therefore, to bring out the "best" in a participant, the instructor or facilitator must apply managerial skills in the instruction itself. These managerial skills are known as events of instruction (Gagne and Briggs, 1979). The more these events are understood by the instructor or facilitator, the better the instruction, the better the comprehension of the participant, and the more effective the overall training. The events are:

• Gaining attention — reception of patterns of neural impulses
• Informing the learner of the objective — activating a process of executive control
• Stimulating recall of prerequisite learning — accessing working memory
• Presenting the stimulus material — emphasizing features for selective perception
• Providing "learning guidance" — encoding material semantically
• Eliciting performance — activating a response organization
• Providing feedback about performance correctness — establishing reinforcement
• Assessing performance — activating retrieval; making reinforcement possible
• Enhancing retention and transfer — providing cues and strategies for retrieval

An example of how these nine events may be used as part of the instruction is shown in Table 1.1, which associates the nine events with the five basic types of learned capabilities; a small checklist sketch based on these events follows the table. For more information on the conditions of learning, see Gagne (1977) and Travers (1982).

In the case of Six Sigma training, all of these play an important role, since the people undergoing the training may have different experiences and certainly different backgrounds and education. Because of these differences, the educational/training implications must be focused on three areas:

• Planning for learning
• Managing learning
• Instructing

TABLE 1.1
Instructional events as they relate to the five types of learned capability

Gaining attention
  All capabilities: Introduce stimulus change; variations in sensory mode

Informing learner of objectives
  Intellectual skill: Provide description and example of the performance to be expected
  Cognitive skill: Clarify the general nature of the solution expected
  Information: Indicate the kind of verbal question to be answered
  Attitude: Provide example of the kind of action choice aimed for
  Motor skill: Provide a demonstration of the performance to be expected

Stimulating recall of prerequisites
  Intellectual skill: Stimulate recall of subordinate concepts and rules
  Cognitive skill: Stimulate recall of task strategies and associated intellectual skills
  Information: Stimulate recall of context of organized information
  Attitude: Stimulate recall of relevant information, skills, and human model identification
  Motor skill: Stimulate recall of executive subroutine and part skills

Presenting the stimulus material
  Intellectual skill: Present examples of concept or rule
  Cognitive skill: Present novel problems
  Information: Present information in propositional form
  Attitude: Present human model, demonstrating choice of personal action
  Motor skill: Provide external stimuli for performance, including tools or implements

Providing learning guidance
  Intellectual skill: Provide verbal cues to proper combining sequence
  Cognitive skill: Provide prompts and hints to novel solution
  Information: Provide verbal links to a larger meaningful context
  Attitude: Provide for observation of model's choice of action, and of reinforcement received by model
  Motor skill: Provide practice with feedback of performance achievement

Eliciting the performance
  Intellectual skill: Ask learner to apply rule or concept to new examples
  Cognitive skill: Ask for problem solution
  Information: Ask for information in paraphrase, or in learner's own words
  Attitude: Ask learner to indicate choices of action in real or simulated situations
  Motor skill: Ask for execution of the performance

Providing feedback
  Intellectual skill: Confirm correctness of rule or concept application
  Cognitive skill: Confirm originality of problem solution
  Information: Confirm correctness of statement of information
  Attitude: Provide direct or vicarious reinforcement of action choice
  Motor skill: Provide feedback on degree of accuracy and timing of performance

Assessing performance
  Intellectual skill: Learner demonstrates application of concept or rule
  Cognitive skill: Learner originates a novel solution
  Information: Learner restates information in paraphrased form
  Attitude: Learner makes desired choice of personal action in real or simulated situation
  Motor skill: Learner executes performance of total skill

Enhancing retention and transfer
  Intellectual skill: Provide spaced reviews including a variety of examples
  Cognitive skill: Provide occasions for a variety of novel problem solutions
  Information: Provide verbal links to additional complexes of information
  Attitude: Provide additional varied situations for selected choice of action
  Motor skill: Learner continues skill practice
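As a practical aid only (not part of the original material), the nine events can be carried into lesson-plan reviews as a simple checklist. The following is a minimal illustrative sketch in Python; the event names come from the list above, while the sample lesson-plan entries are hypothetical.

    # The nine events of instruction (Gagne and Briggs) used as a
    # lesson-plan review checklist.
    NINE_EVENTS = [
        "Gaining attention",
        "Informing the learner of the objective",
        "Stimulating recall of prerequisite learning",
        "Presenting the stimulus material",
        "Providing learning guidance",
        "Eliciting performance",
        "Providing feedback about performance correctness",
        "Assessing performance",
        "Enhancing retention and transfer",
    ]

    def unaddressed_events(plan):
        """Return the instructional events the lesson plan does not yet cover."""
        return [event for event in NINE_EVENTS if not plan.get(event)]

    # Hypothetical plan for a one-day Measure-phase module.
    lesson_plan = {
        "Gaining attention": "Open with a warranty-cost story from the plant",
        "Informing the learner of the objective": "State the gage R&R objective",
        "Stimulating recall of prerequisite learning": "Review scales of measure",
        "Presenting the stimulus material": "Demonstrate a measurement-error study",
        "Eliciting performance": "Teams run a gage study on sample parts",
        "Providing feedback about performance correctness": "Instructor debrief",
    }

    for event in unaddressed_events(lesson_plan):
        print("Not yet addressed:", event)

Run against this hypothetical plan, the sketch would flag "Providing learning guidance," "Assessing performance," and "Enhancing retention and transfer," reminding the designer to plan those events before delivery.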




In the course of Six Sigma training, it will be necessary to consider at least two methods of instruction. The first is the group presentation, in which a facilitator or an instructor leads the presentation of the material through small or large group activities by way of lectures, discussions, simulations, etc. The second, and somewhat less frequent in Six Sigma, is the individualized approach to instruction. Here the participant may work independently, at his own pace. This approach is learner-centered and learner-determined, based on specific situations. Typical delivery systems and related information are shown in Table 1.2.

PRINCIPLES OF INSTRUCTIONAL DESIGN

Now that we have reviewed some of the instruction systems, let us examine the actual instructional design process. For extensive information on instructional design, see Briggs (1977), Richey (1986), Reigeluth (1987, 1983), Seels and Richey (1994), and Dick and Carey (1978).

• Instruction is a human undertaking whose purpose is to help people learn.
• Instruction is a set of events that affect learners in such a way that learning is facilitated.

Therefore, instruction must be planned to accomplish:

• The aiding of learning
• Immediate and long-range goals (we are focusing on transferring knowledge)
• Human development (no one is educationally disadvantaged)
• A system approach (from analysis of needs or goals to evaluation)

STAGES OR PHASES OF DESIGN

As with anything else, there is a process that one must follow to facilitate learning. That process is called instructional design, and it has ten phases:

1. Front-end analysis
2. Task analysis
3. Product survey
4. Design instruction
5. Design of job aids
6. Development of material
7. Evaluation
8. Delivery of materials
9. On-the-job application
10. Evaluation

We are going to address all of them, with the exception of phases 3 and 5. The reason for this is that in both cases the application to Six Sigma is very straightforward and contains no bottlenecks or unusual problems.




The product is already known, and the instructional aids simply consist of handouts of statistical formulas with their application or perhaps special forms. In all cases, the instructional design may also be subdivided into levels, such as system, course, and lesson, with each one having different requirements and instructional characteristics.

System level may be interpreted as a curriculum. In Six Sigma training, the system is the entire methodology — from the requirements of the executive, to the champion, to the master black belt, to the black belt, and to the green belt. The requirements under the system are to develop:

• Analysis of need, goals, and priorities
• Analysis of resources, constraints, and alternatives to the delivery system
• Determination of scope and sequence of curriculum and courses
• Delivery system design
• Trainer preparation
• Formative evaluation
• Pilot and revision
• Summative evaluation
• Installation and diffusion

Course level may be interpreted as the specific training of the executive, champion, master black belt, black belt, and green belt. Course-level requirements are:

• Determining course structure and sequence
• Analysis of course objectives

Lesson level may be interpreted as the content of each course broken down on a daily basis. Lesson-level requirements are:

• Defining performance objectives
• Preparing lesson plans (modules)
• Developing and selecting materials and media
• Assessing participant performance

CONDITIONS OF LEARNING

In order for anyone to learn, the instructor and/or facilitator must be aware of the "learning process." Some of the issues here are the following.

The association tradition associates learning with known items. This may be done through a) continuity (building on old knowledge or new knowledge systematically) or b) repetition of facts and items of interest. Repetition does not have to be boring and devoid of context. Rather, it may be conducted as a summary, review, Q and A, or in several other formats.

Trial and error is the most common and yet most inefficient form of instruction. Under this method, we try things as we go and constantly evaluate the outcome.


TABLE 1.2
Typical delivery systems and related information

Group instruction
  Possible media: books, other reading materials; charts, chalkboard, displays; teacher; guest speakers; real objects, models; parts; overheads; movies/videos.
  Learner activity: reading; listening; observing demonstrations; manipulating objects; visits; participating in simulation(s); home study; exercises and projects.
  Methods, teacher roles: lectures; discussions; demonstrations; oral quizzing; corrects papers; evaluates results; prepares reports; field trips.

Individualized instruction
  Possible media: programmed texts; books; modules; audio-visual devices.
  Learner activity: reading; responding; self-pacing; self-checking.
  Methods, teacher roles: placement testing; diagnostic testing; monitors progress; remedial instruction.

Small group
  Possible media: books; exercises; simulation activities; slide/tape presentations; sound recordings.
  Learner activity: reading to each other; performing exercises; performing simulations; discussion; watching presentations; completing team assignments.
  Methods, teacher roles: assesses level of participant progress; forms small groups for specific lessons; evaluates exercise setup and results; assesses overall progress; keeps records; introduces new projects to group(s).

Independent study
  Possible media: books; libraries; reading lists; laboratories; learning centers and associated equipment and materials.
  Learner activity: reading and independent study; conducting library searches; reviewing lab experiments; writing papers; conferring with instructor.
  Methods, teacher roles: advisor performs guidance function; suggests or assigns tasks; confers with learner upon request or as scheduled; assists in locating and using resources; conducts evaluations of progress.

Work related
  Possible media: work at specified locations; involves a variety of persons and equipment as media; any or all of the above for the study portion of the program.
  Learner activity: any assigned work function, under supervision; any or all of the above for the study portion of the program.
  Methods, teacher roles: coordinates work assignment with study portion of the program; any or all of the above for the study portion of the program.

Home study
  Possible media: books; modules; references.
  Learner activity: home study by reading, completion of exercises; communications with instructor.
  Methods, teacher roles: assigns materials, exercises, and evaluates exercises; may prepare and mail supplementary materials.




Our evaluation of achievement is based on positive reinforcement in accordance with the expectations and objectives we have set.

Conditioned response is a very common approach to instructing for "learning," as it appears simple and harmless. But it is neither. It presupposes magical powers on the part of the instructor or facilitator to a) interpret voluntary and involuntary responses from the participants and b) figure out the learner's "insight." This implies that the instructor or facilitator knows the participant's optimal learning style. The possibilities are the holistic (Gestalt learning) approach and the participant's prior learning ability and benchmark. While both approaches are acceptable, the instructor or facilitator must be cognizant of both and use them as necessary. Remember, different people learn differently, for many reasons.

The second item of concern for conditioned response is the presumption that the participant learns through verbal associates (memorization). This may be a very serious problem (in fact, a trap) in the Six Sigma methodology, especially since many formulas must be learned. We recommend that the instructor not rely on memorization but employ repetition exercises. This is a much better approach, and in the long run, the participant is better equipped to transfer the learned information outside of the classroom environment.

Miscellaneous: the primary concern here is motivation. The instructor's (external condition) as well as the participant's (internal condition) motivation has a lot to do with learning. For example, if the "learning event" is the central focus of our experience, then the external factors will be a) continuity, or the temporal arrangement of conditions, b) repetition, and c) reinforcement, or the arrangement of contingencies. Note that none of these factors is learner-dependent. In fact, each one is dependent on the instructor or facilitator. In contrast, internal factors are dependent on the learner and represent a) factual information, i.e., it may be presented or recalled from prior learning, b) intellectual skills, in that they are recalled from prior learning, and c) strategies, i.e., they are self-activated from prior practice and/or experience. Note that none of these is instructor-dependent. The learner associates previous experience and learning with current material. The richer the experiences, the more pleasant and value-added the current material and knowledge.

So why do we bother with the above items? What is their relation to Six Sigma training? It turns out that the above conditions of learning are inherently important in Six Sigma, because the Six Sigma methodology provides some very challenging items for the instructor and facilitator in the areas of "learning capability." They are:

1. Intellectual skills (demonstrating symbol use)
   • Discrimination (distinguish)
   • Concrete concept (spatial relation)
   • Defined concept (using a definition, clarification occurs)
   • Higher-order rule (combination)
2. Cognitive strategy (efficient use of recalling or solving a problem)
3. Verbal information (recall)
4. Motor skill (action)
5. Attitude




TABLE 1.3
Standard verbs to describe learning capabilities

Intellectual skill
  Discrimination (capability verb: DISCRIMINATES). Example: discriminates, by matching, French sounds of "u" and "ou".
  Concrete concept (capability verb: IDENTIFIES). Example: identifies, by naming, the root, leaf, and stem of representative plants.
  Defined concept (capability verb: CLASSIFIES). Example: classifies, by using a definition, the concept "family".
  Rule (capability verb: DEMONSTRATES). Example: demonstrates, by solving verbally stated examples, the addition of positive and negative numbers.
  Higher-order rule, problem solving (capability verb: GENERATES). Example: generates, by synthesizing applicable rules, a paragraph describing a person's actions in a situation of fear.
Cognitive strategy (capability verb: ORIGINATES). Example: originates a solution to the reduction of air pollution by applying a model of gaseous diffusion.
Information (capability verb: STATES). Example: states orally the major issues in the development of the Six Sigma methodology.
Motor skill (capability verb: EXECUTES). Example: executes backing a car into a driveway.
Attitude (capability verb: CHOOSES). Example: chooses playing golf as a leisure activity.

Table 1.3 provides some very simple examples of standard verbs to describe learning capabilities.

On the other hand, a motivating or an enthusiastic instructor plays a major role in the learning ability of the participant. Some characteristics and skills of motivating instructors are:

• They know something beneficial for adults.
• They know the subject matter well.
• They are prepared to convey their knowledge through an instructional process.
• They have a realistic understanding of learners' needs and expectations for what they are offering them to learn.
• They have adapted the instruction to the learners' level of experience and skill development.
• They continually consider the learners' perspective.
• They care about and value what they teach, both for themselves as well as for the learners.
• This commitment is expressed in the instruction with appropriate degrees of emotion, animation, and energy:
  • rapid, uplifting, varied vocal delivery
  • dancing, wide-open eyes
  • frequent, demonstrative gestures
  • varied, dramatic body movements
  • varied, emotive facial expressions




  • selection of varied words, especially adjectives
  • ready, animated acceptance of ideas and feelings
  • exuberant overall energy level

The benefits of a motivating instructor or facilitator are demonstrated in the instruction process through the creation of a positive attitude. A motivating instructor:

• Shares something of value with her learners
• Concretely indicates her cooperative intentions to help adults learn
• Reflects, to the degree authentically possible, the language, perspective, and attitudes of her learners
• Gives her rationale when issuing mandatory assignments or training requirements
• Allows for introductions
• Eliminates or minimizes any negative conditions that surround the subject
• Ensures successful learning
• Makes the first experience with the subject as positive as possible
• Positively confronts the possible erroneous beliefs, expectations, and assumptions that may underlie a negative learner attitude
• Associates the learner with other learners who are enthusiastic about the subject
• Encourages the learner
• Promotes the learner's personal control of the learning context
• Helps learners to attribute their success to their ability and effort
• Helps learners to understand that effort and persistence can overcome any obstacles when learning tasks are suitable to their ability
• Makes the learning goal as clear as possible
• Makes evaluation criteria as clear as possible
• Uses models learners can relate to when demonstrating expected learning
• Announces the expected amount of time needed for study and practice for successful learning
• Uses goal-setting methods
• Uses contracting methods

DESIRABLE SEQUENCE CHARACTERISTICS ASSOCIATED WITH FIVE TYPES OF LEARNING OUTCOME

We have been discussing the learner and some of the issues and concerns of the instructional process. In Table 1.4, we summarize some of the desirable sequence characteristics, so that the instruction may be fruitful and appreciated by the participant. In conjunction with the desirable sequence, there is also a decision cycle of training that must be developed. In the case of Six Sigma, the decision is pretty straightforward, but let us summarize some key points of the general process.





TABLE 1.4
Desirable sequence characteristics associated with five types of learning outcome

Motor skills
  Major principles of sequencing: Provide intensive practice on skills of critical importance and practice on the total skill.
  Related sequence factors: First, learn the executive routine (rule).

Verbal information
  Major principles of sequencing: For major subtopics, order of presentation is not important. Individual facts should be preceded or accompanied by meaningful context.
  Related sequence factors: Prior learning of necessary intellectual skills involved in reading, listening, etc., is usually assumed.

Intellectual skills
  Major principles of sequencing: Presentation of the learning situation for each new skill should be preceded by prior mastery of subordinate skills.
  Related sequence factors: Information relevant to the learning of each new skill should be previously learned or presented in instructions.

Attitudes
  Major principles of sequencing: Establishment of respect for the source as an initial step. Choice situations should be preceded by mastery of any skills involved in these choices.
  Related sequence factors: Information relevant to choice behavior should be previously learned or presented in instructions.

Cognitive strategies
  Major principles of sequencing: Problem situations should contain previously acquired intellectual skills.
  Related sequence factors: Information relevant to solution of problems should be previously learned or presented in instructions.

Table 1.5 shows the decision cycle, and Table 1.6 shows the different routes of payoff to the organization. These two tables are shown here so that the reader may appreciate the complexity of deciding what training is proper and what the payoff is to the organization. Specifically, under Six Sigma, the decision is generally made by top executives in the organization, and the payoff is expected to be demonstrated in increased customer satisfaction and profitability for the organization.

Perhaps one of the most contested topics in training for the last several years has been the effectiveness of training. That is, as training progresses and draws to a close, the question is asked whether the training is meeting or has met its objectives, whether it was beneficial, and to what degree. There are two basic evaluations. The first is the formative evaluation, an ongoing evaluation conducted during the development of the training to ensure that everything fulfills the objectives. The second is the summative evaluation, which is performed at the end of the training and focuses on whether or not the objectives were met. Obviously, there are many ways to evaluate training, but the classic approach is Kirkpatrick's Hierarchy of Evaluation model. What Kirkpatrick did was to separate the various outputs of training and evaluate them separately.





TABLE 1.5
Decision cycle

The logical steps, with some key decisions at each step:

1. Goals for HRD that will be worthwhile to the organization are established. Key decisions: Is there a worthwhile problem or opportunity to be addressed? Is the problem worth solving or addressing? What organizational benefits could HRD produce? Can HRD help? Is HRD the best solution? Who should receive HRD?
2. A workable program design is created. Key decisions: What SKA are needed? What learning processes will best produce the needed SKA? Is a design already available? Can an effective design be created? Is it likely to work?
3. A program design is implemented and made to work. Key decisions: What is really happening? Has the design been installed as planned? Is it working? What problems are occurring? What changes should be made?
4. Recipients exit with new SKA; enough HRD has taken place. Key decisions: Who has and has not acquired SKA? What else was learned? Are SKA sufficient to enable on-the-job usage?
5. Recipients use new SKA on the job or in personal life; reactions to HRD are sustained. Key decisions: Have HRD effects lasted? Who is using the new SKA? Which SKA are and are not being used? How are SKA being used? How well are SKA being used?
6. Usage of SKA benefits the organization; original HRD needs are sufficiently diminished. Key decisions: What benefits are occurring? What benefits are not occurring? Are any problems occurring because of new SKA use or nonuse? Should HRD be continued? Should less be done? More? Are revisions needed? Was it worth it?

Level 1 is the weakest, for it evaluates based on perceptions of "likes" and "dislikes"; in other words, it focuses on learner reactions. Level 2 focuses on learning, and level 3 focuses on job application. Level 4, on the other hand, is the strongest and most difficult to perform; it is also the most valuable. Evaluation at this level is based on the objectives of the training in relation to results. It focuses on the following questions:

• Should I conduct a level 4 evaluation? (An issue of cost and effectiveness.)
• Is a level 4 study feasible? (How would I go about validating and correlating the data of training and implemented benefit?)
• Which design should I use? (Specifically, what do I want to measure?)
• What will the training cost?
• How will I analyze the data?
• How will I report the results?
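The last three questions reduce to simple arithmetic once training costs and verified benefits have been collected. The following is a minimal sketch of that calculation in Python; all dollar figures are hypothetical and serve only to illustrate the computation, not to report actual results.

    # Level 4 evaluation arithmetic: compare the monetized benefit that can be
    # traced to the training against its fully loaded cost.
    training_cost = 250_000.00     # delivery, materials, participant time (assumed)
    verified_benefit = 400_000.00  # audited project savings credited to trained belts (assumed)

    net_benefit = verified_benefit - training_cost
    roi_percent = 100.0 * net_benefit / training_cost
    benefit_cost_ratio = verified_benefit / training_cost

    print(f"Net benefit:          ${net_benefit:,.2f}")
    print(f"Benefit/cost ratio:   {benefit_cost_ratio:.2f}")
    print(f"Return on investment: {roi_percent:.1f}%")

The hard part, of course, is not the arithmetic but the validation and correlation question above: making sure that the benefit figure is genuinely attributable to the training.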




TABLE 1.6
Different routes to organizational payoff

Safety training
  New SKA or reactions: awareness of and skill in following safety procedures.
  Behavior change: greater adherence to procedures.
  Benefits to organization: reduced injuries and lost time.

Conflict resolution
  New SKA or reactions: skill and knowledge in methods.
  Behavior change: use of techniques when called for.
  Benefits to organization: reduced conflict in the workplace; increased productivity, morale, and commitment to the organization.

Six Sigma training (black belts, green belts)
  New SKA or reactions: appropriate level of skill and knowledge in tools and methodology.
  Behavior change: use of techniques when called for.
  Benefits to organization: improved customer satisfaction and profitability.

FMEA
  New SKA or reactions: skill and knowledge in the construction and analysis of the FMEA.
  Behavior change: use of the FMEA as a preventive tool to improve design and process.
  Benefits to organization: reduced design and process defects.

Mistake proofing
  New SKA or reactions: skill and knowledge in the method of mistake proofing.
  Behavior change: use of mistake-proofing approaches to eliminate defects.
  Benefits to organization: reduction of waste through appropriate mistake-proofing devices and controls.

Project management
  New SKA or reactions: skill and knowledge in the theory and application of project management.
  Behavior change: use of project management methodology to improve budgets, delivery, and scheduling.
  Benefits to organization: reduced problems due to scheduling, budget, and delivery of specific projects and/or products.

A summary of that classification is shown in Table 1.7. Kirkpatrick's evaluation is also known as the four-level evaluation model. In the case of Six Sigma, the outputs would be:

Level 1: Were the participants satisfied with the training (material, instructor, environment, expectations, etc.)?
Level 2: Can the participants demonstrate knowledge of what they learned? (In the Six Sigma methodology, this is measured by progress toward the objective.)
Level 3: Are the skills of the Six Sigma methodology used beyond the specific assigned projects?
Level 4: Are customer satisfaction and profitability better after the training? (For most training, this level is the most difficult to measure. For Six Sigma, however, it should be straightforward, since this correlation was the driving force of the project from the beginning.)


TABLE 1.7 Kirkpatrick's evaluation with several examples

Level 4: Results (community or organizational impact)
  Job training: Does output rise?
  Nutrition education: Do hospital admissions fall?
  Adult literacy: Does public library usage increase?

Level 3: Behavior (transference of skills)
  Job training: Are skills used in work?
  Nutrition education: Do food purchasing habits change?
  Adult literacy: Do learners read at home?

Level 2: Learning (demonstration of learning)
  Job training: Do learners demonstrate their acquisition of skills?
  Nutrition education: Do participants show knowledge of good diet?
  Adult literacy: Do learners show mastery of reading and writing skills?

Level 1: Reaction (general evaluation)
  Job training: Do learners express their satisfaction with the overall program?
  Nutrition education: Do participants express their satisfaction with the program?
  Adult literacy: Do participants express their satisfaction with the program?



2 Front-End Analysis

INTRODUCTION

The purpose of front-end analysis (FEA) is to find the most effective way to stimulate needed individual or organizational change. FEA is the first step in the instructional systems design (ISD) process, because it is critically important to become clearly aware of three major things:

• WHAT problem to solve (in the case of a problem-solving FEA)
• WHAT new goals or directions to set (in the case of a planning FEA)
• HOW to achieve each of these most effectively

Traditionally, ISD has been applied to problems, or gaps in performance, that need solving. Performance gaps arise from differences between actual and desired performance, and the intent is always to close the gap between the two. FEA helps to locate any gaps between actual and desired performance. Your goal should be to reach desired performance levels. First, however, you must identify where you are. Then, you need to identify where you need to go. Finally, you must decide how to get there. FEA addresses all these issues.

FEA does not, however, assume the gap or problem is related to "training" or the solution related to "instruction." In fact, when using FEA you may find the problem is unrelated to instruction! Only when your problem can be solved using instruction or job aids will you design and develop instructional materials. FEA will clearly identify such cases for you, saving you the cost of developing unnecessary instruction.

Generally, there are two different types of FEA. The first, and most common, is the problem-solving FEA. It deals with finding performance gaps and their causes and solutions. It is a focused, short-term problem-solving approach used to isolate and address gaps that are the source of organizational problems. As such, it is very similar to the early stages of the global problem-solving process described in Volume II of this series.

FEA can also be used to identify and plan for completely new organizational goals and directions, in contrast to merely fixing existing systems. This planning FEA takes a systems approach to bringing about organizational change and in that way is similar to process improvement. It focuses on improvement of basically sound systems rather than on short-term problem solving. As such, the steps in the planning FEA duplicate the first steps in process improvement: 1) identify the opportunity and 2) define the scope (including stakeholders).



In the case of Six Sigma, we are more interested in the second approach, since we are about to embark on a completely new organizational directive. Thus, we are interested in determining our needs early so as to plan for future opportunities. The need can be addressed by using either the problem-solving FEA steps that follow or the process improvement methodology.

Since you are at the very beginning of the ISD process for the Six Sigma methodology, no formal preparation is required at this point. However, you do need to make a commitment to following systematic procedures. This is a definite departure from the more unstructured approach behind many instructional programs. Prepare yourself by eliminating any assumptions you may have about the problem, and find out whether instruction is the answer by using FEA. In FEA, a commitment to following systematic procedures means not assuming a training problem but attempting to verify what type of problem really exists.

PROBLEM-SOLVING FRONT-END ANALYSIS

The steps for conducting a problem-solving FEA are: 1) identify the problem, 2) identify potential and actual causes, 3) identify potential solutions, 4) choose the best solutions, and 5) report your findings.

1) Identify the problem: in a problem-solving FEA, you are trying to locate and remedy gaps between actual and desired performance. Collect data on gaps using the techniques described in Table 2.1, and be sure to gather information from a variety of perspectives (e.g., job incumbents, supervisors, customers) to limit collection of biased information. In addition, examine current operations and current performance levels and define desired performance levels. The difference between the two is your gap, or "problem" area. (NOTE: desired performance levels should be based on similar, "best-in-class" in-house operations, or benchmarked against similar operations outside the company.)

In defining the problem, aim to be as specific as possible. Focus on the who, what, where, and how often of the problem, and assign dollar values. For example, instead of stating "parts are being rejected too often" as the problem, be more specific:

PROBLEM: Rejection of parts from Line 5. This problem has occurred daily over the past 6 weeks, costing an estimated $7,000 per week.

Notice that a dollar value has been assigned to the problem. In addition, you know where the problem is, what is happening, and how often. This allows you to assess whether the problem is worth further analysis. When defining the problem, also consider the questions listed after Table 2.1.


TABLE 2.1 Data collection techniques

ADVISORY GROUPS: Subject matter experts brought together to discuss various issues.

BRAINSTORMING: Small group discussions formed to generate ideas about a particular topic. Rules of discussion include openness to each other's ideas, encouragement of far-fetched ideas, the more ideas the better, and zero negative evaluations.

CRITICAL INCIDENT REPORTS: Reports covering events that led to a particular event or problem. These reports offer facts, as opposed to opinions, about what happened.

DELPHI METHOD: A way of gathering information through a type of mail survey. Participants express opinions about a problem or opportunity. Opinions are collated to form a majority opinion list, which is redistributed through a series of mailings for reprioritization.

FOCUS GROUPS: Individuals brought together to discuss a particular issue. The purpose is to discover attitudes, ideas, possible barriers, etc.

INTERVIEWS (GROUP): Face-to-face question-and-answer discussions among a group of individuals. Group interviews cost less than individual interviews but allow for less depth in examining opinions and attitudes.

INTERVIEWS (INDIVIDUAL): Face-to-face question-and-answer discussions using preset questions. Individual interviews allow for in-depth examination of opinions and attitudes but are costly.

NOMINAL GROUP METHOD: Comparable to the Delphi method but in a group setting. Individuals write down and share in turn their opinions about problems, their causes, and solutions. The group rank orders opinions according to validity, etc.

OBSERVATION: A way to examine on-the-job behaviors using a preset checklist. The observer must be given direction on who, what, and how to observe. Observations can provide a wealth of information about what actually is occurring on the job.

QUESTIONNAIRES: A series of questions sent to a number of individuals seeking information on opinions, attitudes, facts, etc. about problems, causes, and potential solutions. Questionnaires take time to develop yet can reach many people in a short period of time. They cost less than interviews or observation but may not delve deeply into any one area. All questions must relate to the information being sought.

Questions to consider when defining the problem:

• Have you thoroughly identified all gaps in performance?
• What are the specific differences between actual and desired performance?
• Have you attempted to break down complex gaps?
• Have you identified the jobs, operations, employees, etc. involved in the problem?
• Have you identified problem locations? Have you determined when the problem occurs?
• Do you know the problem's impact?
• Did you gather enough information from enough people to give you insight into the problem? Do you understand the culture in which the problem exists?
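Assigning dollar values also lets you size the problem with simple arithmetic before committing to a full FEA. Below is a minimal Python sketch of the Line 5 example; the 52-week annualization is our own illustrative assumption (it presumes the problem persists unchanged).

```python
# Sizing the Line 5 rejection problem from the problem statement above.
weekly_cost = 7_000      # estimated cost of rejects per week ($)
weeks_observed = 6       # the problem has occurred daily for 6 weeks

cost_to_date = weekly_cost * weeks_observed   # $42,000 so far
annualized = weekly_cost * 52                 # $364,000 if left alone

print(f"cost so far:      ${cost_to_date:,}")
print(f"annualized cost:  ${annualized:,}")
# Against a six-figure annualized cost, a full FEA is clearly worth its price.
```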


TABLE 2.2 Contributing factors to problems

TECHNICAL/WORK ENVIRONMENT. Examine: tools, equipment, material, work distractions; workload distribution; temperature, illumination, ventilation; general environment; technical inputs (engineering, systems, etc.).

ORGANIZATION. Examine: standards, policies, practices, values; use of "systems" thinking; relationships (social, political, economic, employee, customer, supplier, etc.).

INDIVIDUAL. Examine: interpersonal skills (teamwork, handling personality conflicts and communication problems, flexibility, cooperation, agreement with organizational goals, etc.); skill and knowledge (knowledge of basic facts, concepts, principles, strategies, etc.).

SUPERVISION. Examine: interpersonal skills; skill and knowledge; standards, policies, procedures; management skills (objective setting, team building, leadership, time and stress management, etc.); support skills (recognition, feedback, reinforcement, modeling, mentoring, motivation).

In addition, determine whether the problem is random or continuous. If the problem occurs regularly, it may persist because of some cause in the organizational system; this is a continuous problem, and in such cases an FEA becomes a valuable problem-solving tool. If the problem is a one-time, random event, it would not be worth conducting an entire FEA.

2) Identify potential and actual causes: in this step, you need to identify the cause of the problem. What are the contributing factors? Consider technical, organizational, individual, and supervisory performance (see Table 2.2 for a list of possible contributing factors). Gather this information using the same techniques outlined above, drawing on a variety of techniques and sources to avoid bias. (This step is similar to the who, what, why, where step in the global problem-solving process.) After listing several contributing factors, narrow the list to the most likely causes, asking for assistance from others who understand the problem and probable causes.

For example, suppose a supervisor from one of your manufacturing plants presents you with a problem: too many parts are being rejected. You need to further define the problem. You then can begin to identify potential and actual causes.


• Identify the problem: as you attempt to define the problem more precisely, you find the rejection problem is located on Line 5. You also find the problem has occurred over the last 6 weeks on a daily basis; thus, it is not a random problem but a continuous one. You also find the cost of the rejection problem to be an estimated $7,000 per week. Because the problem is continuous and costing large sums, you decide it is worthy of further investigation.

• Identify potential causes: at this point, you can either guess at the potential causes, ask for opinions, or begin an in-depth investigation. You choose to look in depth at what has happened and begin interviewing all those involved with the problem. You find the following:
  - The in-plant handling of raw material parts appears inadequate. Raw materials are haphazardly tossed into bins.
  - The raw material supplier does not seem to be meeting the product blueprints given to them by Purchasing.
  - The manufacturing process does not seem to support the product blueprints.

• Identify actual causes: because you have gathered opinions rather than facts, you now need to confirm which opinions are accurate. This will lead you to the actual causes. You find the following. First, a random sample of raw materials is examined before entering Line 5; the raw materials are passable. Second, you find the supplier's product meets the product blueprints given to them by Purchasing; supplier products are passable. HOWEVER, reports show differences between the manufacturing process and what is required by the product blueprints. The manufacturing process must support the product blueprints if a passable product is expected; for example, machines must be tooled to meet product blueprints. You find this is not happening. You have located the actual cause.

You may need to search further, however, for "less immediate" causes. Searching further into the actual cause, you find the department creating the product blueprints (Product Engineering) and the department creating the manufacturing process blueprints (Process Engineering) do not meet or communicate on a regular basis. Thus, there is no assurance that the manufacturing process will support the product blueprints. You also find that neither department operates on a "team basis"; they are not used to working consistently and proactively with other departments.

3) Identify potential solutions: after defining the problem and finding related causes, you now need to solve the problem. Gather potential solutions through the same data collection procedures outlined earlier (in practice, you may collect data on all these questions at the same time). Often, solutions will flow directly from knowledge of causes. For example, continuing with the earlier problem, how would you solve it?

• Identify solutions: essentially, you have two different types of causes requiring two different solutions:


• The immediate cause needing a solution is the difference between the product blueprints and the manufacturing process; the two must be compatible. You find this can be solved either by making product or manufacturing changes or by redesigning the part from scratch. You would need to perform a cost-benefit analysis on each potential solution before making a decision.

• The less immediate cause needing a solution is the communication problem between the departments. (Yet this must be solved if future problems are to be eliminated.) Two solutions are available. First, biweekly meetings between department heads can be established. Second, each department can take part in a "team-building" instructional program; such a program would foster open communication with other departments. Again, you would need to perform a cost-benefit analysis on each potential solution before making a decision.

4) Choose the best solutions: how can you solve the problem? Can it be solved using education or training, or is something else needed? When choosing the best solution or solutions, you need to determine the suitability of each. Are the solutions realistic and affordable? Do they match the problem? Which is best? Consider the following factors when evaluating the feasibility of the solutions in a typical cost-benefit analysis (a scoring sketch appears at the end of this step):

• Cost of the problem
• Cost of the solution
• How well the solution will solve the problem
• Whether customers and stakeholders will accept and support the solution
• Whether the solution is acceptable to the organization's culture
• Whether the cost and time required fit the available resources
• Whether there are barriers to implementing the solution, including delineation of each barrier
• Whether the solution is consistent with long-term objectives and continuous improvement
• Potential return on investment

By comparing potential solutions against these criteria, you will be using cost-benefit analysis to link your solution to strategic business goals. Frequently, instruction will not be the optimal solution. In our example, the immediate cause did not require instruction, yet the less immediate cause could require education or training. FEA can enhance the professional image of education and training: when instructional solutions are used only where supported by data, the results will be far more positive. Too often, instructional programs are "thrown" at problems when the best solution lies in another area.

Choose the best solutions: based on your cost-benefit analysis, you can choose the best possible solution. You may find a solution is not feasible because of limited funds, timing, potential barriers, etc. When this happens, you may need to modify your chosen solutions to fit your constraints.
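One simple way to compare candidates against these criteria systematically is a weighted scoring sheet. The Python sketch below scores the two communication-problem solutions from the example; the criteria, weights, and 1-to-5 scores are hypothetical and would come from your own cost-benefit data.

```python
# Hypothetical weighted scoring of the two communication-problem solutions.
criteria_weights = {
    "solves_problem": 0.35,  # how well the solution closes the gap
    "cost_fit": 0.25,        # affordability within available resources
    "acceptance": 0.20,      # stakeholder and cultural acceptance
    "long_term_fit": 0.20,   # consistency with continuous improvement
}

# Scores on a 1 (poor) to 5 (excellent) scale, assigned from the analysis.
solutions = {
    "biweekly dept-head meetings": {"solves_problem": 3, "cost_fit": 5,
                                    "acceptance": 4, "long_term_fit": 3},
    "team-building instruction":   {"solves_problem": 4, "cost_fit": 3,
                                    "acceptance": 4, "long_term_fit": 5},
}

for name, scores in solutions.items():
    total = sum(w * scores[c] for c, w in criteria_weights.items())
    print(f"{name}: {total:.2f}")
# Weighted totals rank the options; constraints (budget, timing, barriers)
# may still force you to pick or modify a lower-ranked solution.
```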


TABLE 2.3 Front-end analysis report information

INTRODUCTION: Describe how and why the FEA process began. Describe procedures used to locate and confirm gaps, causes, and solutions.

FINDINGS: Discuss gaps identified, causes, operating consequences, personnel, jobs, and costs involved.

POSSIBLE SOLUTIONS: Describe solutions requiring no action, action involving instruction or job aids, and action not involving instruction or job aids. Compare solutions and discuss problems associated with each solution.

RECOMMENDED SOLUTIONS: Detail the chosen solution and how to measure or determine success. Give the rationale for the choice. Identify the population, jobs, and costs involved. Discuss the relationship to organizational objectives and benefits.

CONTINUOUS IMPROVEMENT: Describe how the solutions will support continuous improvement objectives.

PROJECT SCOPE AND SCHEDULE: Describe the scope of the project. Identify constraints, required resources, and estimated schedule. Identify customer and client relationships. Suggest measures or definitions of "success."

APPENDICES: Back-up correspondence, budgets, data gathering tools, raw data, outside sources, etc.

The worst scenario would be having to develop new solutions because of constraints found in the cost-benefit analysis.

5) Report your findings: report the information from your FEA in a document that includes the elements shown in Table 2.3. Proceed to the next step in the ISD process, task analysis, only if your analysis has shown instruction to be an appropriate solution. Otherwise, explore other interventions, such as organizational restructuring or organizational development.

After you have completed your FEA, evaluate the quality of your efforts by using a formative evaluation. A typical checklist for such an activity is shown in Table 2.4.

TASK ANALYSIS

Task analysis is performed when trying to determine what you want out of — and want to put into — an instructional program. Task analysis data become the foundation for the entire ISD process.


TABLE 2.4 Front-end analysis formative evaluation checklist

ASK YOURSELF THESE QUESTIONS (Y/N). Each of the following questions is addressed under major headings in this phase. Any "no" answer should serve as an alarm that your FEA process needs improvement!

1. Does your situation call for a problem-solving FEA? (If so, you should have followed the steps outlined in this phase.)
2. Does your situation call for a planning FEA? (If so, you should have followed the process improvement methodology.)
3. Have you used a variety of information-gathering techniques and sources to identify your performance gaps or problems?
4. Have you defined the problem as specifically as possible, focusing on observable, measurable outcomes with assignable dollar values when possible?
5. Have you established whether the problem is random or systematic?
6. Have you gathered information to identify potential and actual causes of performance gaps?
7. Have you generated a comprehensive list of potential solutions to the problem?
8. Have you systematically evaluated potential solutions, using a cost-benefit approach, to select the most appropriate ones?
9. Have you summarized your FEA information in a report?

Task analysis identifies everything someone would need in order to perform a particular job, skill, or function expertly. For example, consider the job of changing a tire. Task analysis would identify all the steps involved: who performs what and when, using what tools, and under what conditions. Task analysis data become vital when deciding what other performers should know if they, too, are to become "experts." Essentially, this is how instructional content is formed. In the case of the Six Sigma methodology, this is a very critical stage in the process because, depending on what level the instruction is for, the requirements will change quite drastically. This will be discussed in greater detail in Part II of this volume.

Only when your FEA has indicated a need for instruction, and only after management has approved the instruction, is task analysis started. Task analysis is a highly structured process and can often be time-consuming and expensive. To move from FEA to task analysis, make sure the appropriate preparation has taken place. A good rule of thumb is to review the following parts of the FEA.

• Confirm that instruction is an appropriate, cost-effective solution to the identified problem. In the majority of cases, instruction alone will not solve an organizational problem. For example, instruction may not address motivational or organizational issues. This may require not only design and development of education and training but also design and development of


organizational development programs. In our case, we will address issues and concerns that deal with the training portion of Six Sigma diffusion in the organization.

• Review data gathered from all sources about the nature of the problem, including contributing factors, causes, and solutions. You will use this information when developing objectives and content, choosing delivery methods, and measuring whether or not learning has occurred. (Keep in mind that the requirements for executives, champions, MBBs, BBs, and GBs are not the same; they must be treated differently.)

STEPS IN TASK ANALYSIS

Traditionally, the steps for conducting any task analysis have been to: 1) analyze your audience, 2) collect task data, 3) develop instructional objectives, 4) classify objectives by storage medium, and 5) develop assessment instruments. The same steps apply in the Six Sigma methodology.

1) Analyze your audience: analyzing your audience gives you the information you need to tailor an instructional program to a particular audience. A major goal of creating any instructional program is to make sure the audience can understand, accept, and feel comfortable with the learning experience. Factors such as reading level, previous experience, and skill and knowledge level will affect an audience's reactions and degree of learning. You can control for this by becoming as familiar as possible with your audience and planning accordingly. If you know your audience, you can provide content, materials, examples, and instructional experiences with which your audience can closely identify.

You can also use audience information to set design standards and baseline program requirements. For example, if the majority of your audience is at a ninth-grade reading level, you can design your program and set entrance requirements to that level. Thereafter, those who are not up to a ninth-grade level would need a prerequisite class, and those beyond it might take a higher-level course. On the other hand, if you expect all your participants to be graduate engineers or statisticians, the requirements would change drastically, not only in the prerequisites but also in the instructional characterization of the material.

Gather audience information from personnel records, surveys, etc. (This is very important in figuring out the content of the overview and Green Belt training.) Focus on group rather than individual characteristics, maintaining the privacy of individuals. Pay particular attention to these characteristics:

• Demographics: age, gender, culture (such as ethnicity and socioeconomic background), homogeneity of group
• Capacity: intellect, physical development


• Competence: prior skills and training, experiential background, reading ability, languages spoken, current skill and knowledge level (relative to the instructional program), level within the organization
• Attitudes: values (toward training, subject), self-concept (academic, personal, professional)
• Motivation: goals, interests, perseverance

Gather all task analysis information on two audiences: primary and secondary. The primary audience consists of those going through the instruction or using the job aid. The secondary audience includes anyone whose support is necessary for successful performance by the primary audience. (This is also significant because the results of this analysis should dictate, among other requirements, who is going to be trained as a Black Belt or a Green Belt.) Support from the secondary audience (Green Belts, in this case) is vital to achieving transfer of learning and organizational results (e.g., productivity). The best-designed instruction or job aids alone will not guarantee transfer or changes in the bottom line; these can, however, be enhanced through secondary audience support.

Generally, the secondary audience requires some instruction about what the primary audience has learned. They need to understand the value and benefits of the instruction to both the primary and secondary audiences. This is why it is strongly recommended that the cascading training to the Green Belts be done by the Black Belts. The support or secondary audience usually includes the employees' supervisors as well as anyone whose work is related to or influenced by the primary audience's performance. For example, if supervisors are being trained to manage in a more participative manner, their employees must be equipped to take on more responsible roles. In the case of the Six Sigma methodology, it would be ludicrous to assign a DOE responsibility to an operator who has no idea what DOE is or what to do.

2) Collect task data: two important benefits arise from collecting task data:

• All the tasks required to perform a particular skill, job, or function expertly are identified
• A sequence of instruction is determined

A task is a series of sequenced actions leading to a desired outcome. Outcomes include broad instructional goals like giving a presentation, building a car, or changing a tire. Within ISD, outcomes are referred to as terminal objectives. Terminal objectives come directly from the front-end analysis; essentially, they are the desired behaviors needed to solve the problem. The question you need to address is: what does an individual need to do in order to reach the desired outcome? (For example, what does an individual need to do to successfully change a tire? Or what does the Black Belt need to know to approach a project, solve it, and present the results to management?)


To answer these types of questions, you need to locate all the tasks leading to your desired outcome, that is, to one broad terminal objective (e.g., changing a tire, or the necessary knowledge of a Black Belt). Tasks include major tasks, which are refined and broken into subtasks, subsubtasks, etc. In the example of changing a tire, the following hierarchy may be developed, with major tasks, subtasks, and subsubtasks in sequence (a data-structure sketch of this hierarchy appears just before Table 2.5):

• Terminal objective: change tire
• Major task: secure car
• Subtasks: set transmission; set parking brake; block wheel
• Subsubtasks: is the car automatic? If yes, put it in park, then move to the subtask of setting the transmission; if the transmission is manual, put it in gear (first gear) and proceed to setting the transmission in the subtask.

Once you locate and sequence all these tasks, this information can be used to create your instruction. Where do you go for task data? Sources vary. Use workplace sources and processes such as these:

• Interviews with accomplished performers, subject matter experts, etc.
• Administrative checklists and flowcharts
• Locally constructed job aids
• Manufacturer suggestions and documents
• Observation of tasks being performed
• Process sheets
• Quality deployment sheets
• Research literature (periodicals, etc.)
• Surveys
• Tests
• Facilitated groups or focus groups using brainstorming, the Delphi method, the nominal group method, etc.
• Critical incident reports

When collecting task data, consider using Table 2.5 as a guide. If you gather all this information on each task step, you will end up with a thorough knowledge base about each task. You will use this information to specify conditions and standards for your instructional objectives (discussed shortly). In addition, this information will become invaluable during later ISD stages (i.e., design and development). Remember to use variety when collecting task data, gathering information from as many different sources and processes as possible; this helps minimize the risks of misinterpretation or individual bias.

3) Develop instructional objectives: task data tell you what an individual needs to do to reach a particular goal or outcome. Now you need to think in terms of what instruction is needed and how to help the learner reach the desired goal. Instructional objectives translate task data into required learner behavior. They are vital to the design and development phases; without instructional objectives, you will not know what specifics to put into your instruction.
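The tire-changing hierarchy in step 2 lends itself to a nested representation that can be flattened into teaching order. Below is a minimal Python sketch; the structure and helper function are our own illustration, not a prescribed ISD format.

```python
# Task hierarchy for the terminal objective "change tire", stored as
# nested (task, subtasks) pairs and flattened into teaching order.
task_tree = ("change tire", [
    ("secure car", [
        ("set transmission", [
            ("if automatic, put gearshift in park", []),
            ("if manual, put gearshift in first", []),
        ]),
        ("set parking brake", []),
        ("block wheel", []),
    ]),
    ("locate equipment", []),
    ("change tire", []),
])

def flatten(node, depth=0):
    """Depth-first walk: yields each task with its level, in sequence."""
    name, subtasks = node
    yield depth, name
    for sub in subtasks:
        yield from flatten(sub, depth + 1)

for depth, name in flatten(task_tree):
    print("  " * depth + name)
```

The depth-first walk mirrors how the content outline in the design phase will later be sequenced: each major task in order, refined by its subtasks.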


TABLE 2.5 Information about essential tasks

PREREQUISITE SKILLS AND KNOWLEDGE: What previously learned skills and knowledge must be present in order for the learner to understand the instruction?

TASK IMPORTANCE: How critical is the task to operations? What happens if the task is not performed?

INITIATION: When is the task performed? What is the trigger event? Look for cues, signals, and indications for action or reaction.

CONCLUSION: What is the concluding step or event in the task performance?

SUCCESSFUL COMPLETION: How is successful completion defined? Look for cues, signals, and indications that the action taken is correct and adequate.

CONSEQUENCES OF UNSUCCESSFUL COMPLETION: What will happen if improper performance occurs? Are the potential effects expensive or harmful to operations?

FOLLOW-UP TASKS: Are there related tasks that need to be performed after this particular task step?

OTHERS INVOLVED: Are other task performers involved? Is a team effort required? Who is the leader?

TOOLS, EQUIPMENT, SUPPLIES, ETC.: What tools or commodities are used or manipulated for successful performance?

SAFETY CONSIDERATIONS: Does the task pose any risks to life, limb, equipment, or supplies?

REFERENCE MATERIAL: Is reference material needed during task performance?

Develop an instructional objective for each terminal objective, major task step, subtask step (if needed), and so on. Each instructional objective has three components (Mager, 1984; 1984a):

• The desired, observable task to be performed
• The standards by which task accomplishment will be measured or evaluated for successful achievement
• The conditions or circumstances under which the task is performed

For example, an instructional objective for loosening wheel nuts when changing a tire is shown below:

• TASK: the user shall loosen the wheel nuts and raise the flat tire above the ground,
• STANDARDS: in ten minutes, without assistance, using appropriate safety procedures, without personal injury or damage to the vehicle,
• CONDITIONS: given a vehicle with a flat tire, jack and handle, wheel lug nut wrench, gloves, block, and operator's manual, under any road conditions.

Do you see how this information would help in making your instruction specific? You set the stage for the best way of learning or teaching a specific task. How you write your instructional objectives depends on the type of behavior you want the learner to demonstrate.
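Because every objective must carry the same three components, it can help to capture them in a simple record type so that none is omitted. A minimal Python sketch, using the wheel-nut example above (the class and field names are our own illustration):

```python
from dataclasses import dataclass

@dataclass
class InstructionalObjective:
    task: str        # the desired, observable behavior
    standards: str   # how successful accomplishment is measured
    conditions: str  # circumstances under which the task is performed

loosen_nuts = InstructionalObjective(
    task="Loosen the wheel nuts and raise the flat tire above the ground",
    standards=("in ten minutes, without assistance, using appropriate "
               "safety procedures, without injury or vehicle damage"),
    conditions=("given a vehicle with a flat tire, jack and handle, wheel "
                "lug nut wrench, gloves, block, and operator's manual, "
                "under any road conditions"),
)
print(loosen_nuts.task)
```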


For example, three common types of desired learner behavior are cognitive (knowledge), affective (attitude), and psychomotor (performance). The Six Sigma methodology involves all of them, but the predominant one is cognitive. When writing an instructional objective asking for cognitive (knowledge) skills, use action verbs such as recall, identify, classify, analyze, and explain. For affective (attitude) behavior, use verbs such as choose, select, and approve. For psychomotor (performance) behavior, use verbs such as loosen, locate, secure, change, and move. In the example above, we wanted the learner to perform a behavior rather than just recall how it was done; thus, the learner was asked to loosen the wheel nut, a behavioral task.

4) Classify objectives by storage medium: often, you will find your instructional objectives do not require that a learner attend an instructional program. Sometimes, all the learner needs is a job aid. Therefore, it is best to separate your instructional objectives according to "storage medium." Information can be stored either in the learner's memory or in on-the-job reference materials, such as job aids. In the case of Six Sigma, a typical "memory" item is the history of the Six Sigma methodology, whereas the formula for the normal distribution is a candidate for a job aid.

In general, only some types of information should be stored in memory; not everything a person learns is remembered or stored properly. This is in contrast to a computer, which can store and retrieve enormous amounts of data. Job aids work in the same manner: large quantities of information can be stored more effectively in job aids, and job aids often cost far less than instruction. In the tire-changing example, you would probably want the learner to remember safe practices, while the location of required tools and the specific steps might be contained in a job aid such as the operator's manual. A job aid is especially appropriate since these details might vary across vehicles.

5) Develop assessment instruments: assessment instruments (tests) measure how much learners know before and after education and training. They are developed during task analysis to assure a match between the assessment and the instructional objectives. Too often, assessments are developed late in the ISD process; when this happens, objectives are frequently "lost," and the assessment ends up measuring something other than what was initially intended. The same assessment instrument is normally used for the before (pre) and after (post) assessment. There are several reasons for using assessment instruments:

• To identify any lack of prerequisite skills or knowledge.
• To identify accomplished performers. Qualified individuals should not consume organizational resources by attending instruction on tasks they can already perform with proficiency.
• To measure gains in knowledge and skill by comparing pre- and post-assessments. This can be done both for individual learners and for groups.


TABLE 2.6 Task analysis formative evaluation checklist

ASK YOURSELF THESE QUESTIONS (Y/N). Each of the following questions is addressed under major headings in this phase. Any "no" answer should serve as an alarm that your task analysis needs improvement!

1. Have you identified important characteristics of both your primary and secondary audiences?
2. Have you collected essential task information from a variety of sources?
3. Have you developed instructional objectives that state desired task accomplishment, conditions under which task performance occurs, and standards by which performance will be evaluated?
4. Have you classified objectives according to whether they will be addressed by instruction or job aids?
5. Have you developed the necessary assessment instruments to evaluate whether or not your objectives have been met?

The learning outcomes desired from education and training should follow a 90/90 rule: 90% of the learners should learn 90% of the instructional material. The goal of the assessment instrument is to measure such accomplishment. If assessment shows that learning falls short of 90/90, you need to determine why. Perhaps the product needs modification, or learner remediation may be required until the learner reaches the 90/90 level. Another reason for developing assessment instruments during task analysis is to set expectations for program development: sponsors and champions need to accept the business objectives, organizational goals, and standards set for instructional outcomes.

Assessment instruments need to be valid (contain accurate content; validity above 80% is considered good) and reliable (repeatable; repeatability above 78% is considered good for assessment instruments). Thus, if you want to know whether a given learner can loosen wheel nuts, the test should seek this specific information. A poorly constructed test may measure something completely different from what is needed; this is not fair to the learner, and it will not help you accurately measure what is really going on. You may wish to seek professional assistance in developing assessment instruments.
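Checking the 90/90 rule reduces to simple arithmetic over post-assessment scores. A minimal Python sketch, with hypothetical scores:

```python
# 90/90 rule: at least 90% of learners must score at least 90%.
post_scores = [95, 92, 88, 97, 91, 94, 90, 85, 96, 93]  # hypothetical %

mastered = sum(1 for s in post_scores if s >= 90)
share = mastered / len(post_scores)

print(f"{share:.0%} of learners reached 90% mastery")
if share < 0.90:
    print("Below 90/90: modify the product or provide remediation.")
```

In this hypothetical class, 8 of 10 learners reach mastery (80%), so the rule is not met and either the materials or the remediation plan needs attention.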


In the case of Six Sigma training, we know of no instance where tests are given during or after completion of training. This may be because the expected outcome is to deliver "a solved problem" to management; when this happens, it is assumed that the training was successful. It must also be mentioned that there is a movement to certify Black Belts and Master Black Belts through some kind of testing. It is hypocritical to push for such certification since, as has already been mentioned, there is no agreed-upon body of knowledge. The certification, in all cases as of this writing, is useless because there is no agreement as to what a Black Belt or a Master Black Belt should know. Furthermore, not all available training is consistent with a "set" of specific knowledge. For more information, see Chapter 14.

After you have completed your task analysis, evaluate the quality of your efforts by using a formative evaluation checklist such as the one shown in Table 2.6.

REFERENCES

Mager, R. F. (1984). Goal analysis. 2nd ed. Belmont, CA: David S. Lake Publishers.
Mager, R. F. (1984a). Preparing instructional objectives. Rev. 2nd ed. Belmont, CA: David S. Lake Publishers.

SELECTED BIBLIOGRAPHY

Birnbrauer, H. (Ed.) (1985). The ASTD handbook for technical and skills training. Alexandria, VA: American Society for Training and Development.
Bloom, B. S. (Ed.) (1954). Taxonomy of educational objectives. Book 1, Cognitive domain. New York: Longman.
Craig, R. L. (1987). Training and development handbook. 3rd ed. New York: McGraw-Hill.
Hanafin, M. J. and Peck, K. L. (1988). The design, development, and evaluation of instructional software. New York: Macmillan.
Hannum, W. and Hansen, C. (1989). Instructional systems development in large organizations. Englewood Cliffs, NJ: Educational Technology Publications.
Kearsley, G. (1982). Costs, benefits and productivity in training systems. Reading, MA: Addison-Wesley.
Rossett, A. (1987). Training needs assessment. Englewood Cliffs, NJ: Educational Technology Publications.


3 Design of Instruction

"Winging it" in a training course may be challenging, but don't count on reaching high levels of success. What you need is a blueprint — a concise action plan whose objective is to assure across-the-board quality results. Design of instruction is your action plan for building quality products.

Before attempting to design instruction, you need to be aware of how learners retain information. Recall from task analysis that information can be stored in memory, in a job aid, or in a combination of the two. Some information must be stored in memory as opposed to a job aid. Storing information in memory is best facilitated through instruction, where learners attend a class, work through a computer-based instruction program, etc. The development of instruction evolves through a design phase, which focuses on finding the best ways to move information into a learner's memory for later recall and use.

How do you go about designing instruction? The first step is developing an instructional plan, comparable to a construction blueprint. It requires data from the front-end and task analyses.

PREPARATION

• Decide whether instruction is appropriate. Review Chapter 2 for things to consider when deciding whether to use instruction, job aids, or a combination.
• Assemble front-end and task analysis information, including sequenced instructional objectives, task steps, and audience characteristics.
• Do a product survey. Determine whether a suitable instructional package is already available. If so, you may not need an instructional plan, or you may need one that centers on product revisions.

STEPS IN DESIGN OF INSTRUCTION

The activities described here will take you through the design of instruction. Consider at all times what will facilitate learning, retention, and future use of what was learned. There are five basic steps: 1) develop the content outline and course strategy, 2) choose instructional methods, 3) choose instructional media, 4) choose instructional elements, and 5) plan for the remaining ISD phases.


TABLE 3.1 Example of a content outline — changing a tire (terminal objective)

I. Secure car.
   A. Set transmission.
      1. If automatic, put gearshift in park.
      2. If manual, put gearshift in first.
   B. Set parking brake.
   C. Block wheel diagonally opposite.
II. Locate equipment.
   A. Get spare tire.
   B. Get jack.
   C. Get wheel nut wrench.
III. Change tire.
   A. Remove wheel covers.
      1. If poly cast wheel ornaments, follow a different procedure.
   B. Loosen wheel lug nuts.
      1. If anti-theft wheel lug nuts, follow a different procedure.
   C. Find jack notch.
   D. Put jack in jack notch.
   E. Turn handle of jack clockwise until wheel is off ground.
   F. Raise tire completely off ground.
   G. Remove wheel lug nuts.
   H. And so on.

1) Develop content outline and course strategy: what content are you planning to put into your instructional program? Developing a content outline is the first step in formalizing the material you plan to present to your learner. Content outline data come directly from the task analysis; in fact, your task hierarchy can easily be transformed into a content outline. Table 3.1 above is an example of what a content outline might look like for the desired outcome or terminal objective "changing a tire." (Objectives and content outlines are presented in Part II for each of the Six Sigma level requirements.) In the development of materials phase, you will see how this outline is expanded into a rough draft.

From the completed content outline, formulate a course strategy. This includes deciding upon a course title, lessons, modules, etc. Using the above outline, for example, the course title could be "How to Change a Tire," and lessons within the course could include 1) securing the car, 2) locating the equipment, and 3) changing the tire. Often, major task steps become the title and subject of each lesson. Now your instructional program is beginning to take shape. See Table 3.2 for a way to condense and summarize this information in an instructional plan.


TABLE 3.2 Example of an instructional plan

An instructional plan records, for the course as a whole:
• Course title
• Target audience description
• Anticipated number of participants

and, for each lesson:
• Lesson number
• Lesson title
• Instructional objectives
• Methods
• Media
• Instructional elements
• Other

2) Choose instructional methods: how are you going to present your instructional content (from the content outline) to the learner? And what is the best way for your audience to learn what you want them to learn? Once each content outline is drafted, you can begin choosing instructional methods, that is, ways of communicating your message to the learner. When deciding which instructional methods to use, consider the following.

Learning principles: these principles are important, proven ways to increase learner attention, retention, understanding, motivation, and transfer of what was learned to the job. Choose a method that promotes interactivity, appeals to the senses, and promotes acceptance. Refer to Table 3.4 for additional information.

Objectives: each instructional objective will direct your instructional method decisions. The task, standards, and conditions determine to some degree your choice of instructional method. For example, the objective "changing a tire" may be more fully reached using lecture and practice sessions than role-play or case-study methods.

Audience: audience characteristics, described in task analysis, will influence your choice of instructional method. For example, one audience may initially need one-on-one instruction to build self-confidence, whereas another may do best on their own, using self-paced instruction.

Resources: available resources, as well as other constraints, also affect the choice of instructional methods. First determine the most appropriate instructional method given the objectives and audience. Can you afford this method? If not, develop a list of alternative methods, prioritize it, and evaluate the cost of each. You will need to compromise between the most appropriate method and affordability.


Hannum and Hansen (1989, pp. 145–148) discuss the advantages and disadvantages of instructional methods, which include the following:

• Lecture combined with practice sessions
• Group discussion, study groups
• One-on-one instruction
• Self-paced instruction, such as programmed learning or computer-assisted instruction
• Simulation or mock-up situations
• Role play
• Case study
• On-the-job training, fieldwork, internship, research
• Field trips
• Structured experience

You may vary your instructional methods within any one program. For example, you would probably want to use group discussion along with lectures, or one-on-one instruction along with on-the-job training. In addition, remember to choose instructional methods for both your primary and secondary audiences. Always try to condense and summarize your course of action for the lesson in an instructional plan. Based on learning principles, audience, and resource constraints, your task is to match each instructional objective (from task analysis) with the best instructional method. Your goal is always to enhance learning and transfer.

3) Choose instructional media: instructional media are "supplemental" ways to present your instruction or job aid to the learner. For example, a lecture may make use of print materials, visual aids, physical objects, etc. Your objective as the designer is to find method and media combinations that will enhance learner attention, retention, understanding, motivation, and use of learning. Refer to Table 3.3, which identifies and describes several media to help you make a decision. Decisions about the type of media are based on the same factors as choices of instructional methods. Keep in mind these additional three points:

• There is no perfect medium for all audiences and instructional objectives. For example, reading ability or cultural differences will influence audience response to various media.
• The effectiveness of learning from various media is somewhat unrelated to learner preferences. An example is the lack of learning that occurs with a very popular medium: television!
• Each instructional medium has its own strengths and weaknesses. For example, when comparing computer-based instruction to a lecture method, the cost of developing computer-based instruction is higher. In comparison, however, the cost of delivering live lectures can be extraordinary considering the costs of preparation, facilitator, travel, and student time.
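The cost trade-off in that last bullet can be framed as break-even arithmetic: the fixed development cost of computer-based instruction against the per-learner cost of live delivery. A minimal Python sketch with hypothetical placeholder figures; substitute your own quotes.

```python
# Break-even comparison: computer-based instruction (CBI) vs. live lecture.
# All cost figures are hypothetical placeholders.
cbi_development = 60_000    # one-time cost to build the CBI course ($)
cbi_per_learner = 20        # seat/licensing cost per learner ($)
lecture_per_learner = 320   # facilitator, travel, and student time ($)

# CBI wins once cbi_development + cbi_per_learner * n < lecture_per_learner * n.
break_even = cbi_development / (lecture_per_learner - cbi_per_learner)
print(f"CBI becomes cheaper beyond about {break_even:.0f} learners")  # ~200
```

For a small audience, the lecture's low fixed cost wins; for a large, dispersed audience, the development investment amortizes quickly.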


TABLE 3.3 Types of instructional media

PRINT: Textbooks, workbooks, manuals, programmed texts.

VISUAL AIDS: Charts, diagrams, graphs, illustrations, drawings, photographs, exhibits, projected images, overheads, slides.

AUDIO: Radio, cassettes, reel-to-reel, discs, records.

AUDIO-VISUAL: Filmstrips, television, motion pictures, video.

COMPUTERIZED: Computer-based instruction, computer-supported learning and job aids, computers, interactive video.

PHYSICAL OBJECTS: Tools, equipment, simulated environments.

AUDIENCE RESPONSE SYSTEMS: Used to promote interactivity during instruction and presentations. The audience responds to questions using a keypad.

See Table 3.2 for a way to condense and summarize media choices in an instructional plan. Your task is to match each instructional objective (from task analysis) with the best instructional medium.

4) Choose instructional elements: as previously emphasized, your goal when designing any instruction is to increase learner attention, retention, understanding, motivation, and transfer. Choice of appropriate method and medium will help you reach this goal. The following instructional elements, however, will also affect the success of your instruction or job aid. Develop specifications for what, when, where, and how each of the following should be added to your instructional content. These specifications will be followed during the development of materials phase.

• Examples
• Drill and practice sessions
• Activities
• Illustrations
• Charts and diagrams
• Exhibits
• Simulations
• Reviews
• Summaries
• Remediation
• Projects
• Exercises

See Table 3.2 for ways to condense and summarize this information in an instructional plan. Remember, integration of these instructional elements will make your instructional product interactive and interesting and will increase the chances that your learners will learn and use what they have learned. When developing specifications for instructional elements, consider the learning principles outlined in Table 3.4.


TABLE 3.4 Learning principles

SEQUENCE MATERIAL: Gain learner attention. Inform the learner of the objective. Present the desired outcome. Demonstrate the desired outcome. Ask for performance. Give feedback on performance. Insert questions within materials.

MAKE IT INTERACTIVE AND COLLABORATIVE: Devote 70% of the time to discussion formats, active practice sessions, and immediate feedback. Use role plays, games, self-discovery exercises, individual and team presentations, and physical activities such as skits or pantomime. Integrate different questioning strategies such as question cards, one-on-one, "ask the wizard," "pass the hat," etc. Link participants to one another using group-based activities such as learning games and learning projects.

KEEP IT SIMPLE: Present only one idea at a time at the appropriate level of difficulty. Use logical sequencing.

APPEAL TO THE SENSES: Plan for a natural, comfortable, relaxed, colorful delivery setting, including music, table-top displays, wall hangings, kites, flowers, etc. Use mental imagery exercises, audio tapes, flipcharts, flannel boards, videotapes, computers, physical objects, sketches, skits, colorful transparencies, and so on. Use color to draw attention and multiple delivery systems to add variety and interest. (Note: use red to point out or emphasize "the" most important item.)

PROMOTE UNDERSTANDING: Create hands-on learning experiences. Use examples, nonexamples, analogies, metaphors, contrasts, comparisons, and imagery. Use frequent previews and reviews. Elaborate on the content. Restate in greater detail or in different ways (pictorially, verbally, in writing, etc.). Introduce new concepts at the beginning and go over them in detail later. Determine and accommodate different learning styles, speeds, and needs.

PROMOTE REINFORCEMENT: Develop outlines or job aids to reinforce principles and concepts learned. Provide study guides, audiotapes of class material, board games, etc. for post-class follow-up. Use early and frequent self-assessments. Use post-class follow-up and support, such as a buddy system, meetings, newsletters, and in-person discussions.

PROMOTE ACCEPTANCE: Connect instruction to learners' personal or professional goals, interests, present job, or experiences. Combine new material with learners' current knowledge base. Stress learners' ability to be successful. Eliminate or reduce any known fears. Give learners choices regarding pace, activities, etc., if possible. Use precourse packets consisting of pamphlets, booklets, audiotapes, videotapes, computer programs, books, etc. that describe the program, emphasize learner benefits, include testimonials, and create positive visual images of the program. (Specifically, for Six Sigma this is an opportunity to provide a job aid with the most frequently used statistics and the appropriate formulas.)

PROMOTE PRACTICE: Provide numerous opportunities for learners to practice what they learned, such as exercises and other activities. Provide remediation opportunities. Ask learners to describe, out loud or to each other, what they learned.

Remember, integration of these instructional elements will make your instructional product interactive and interesting and will increase the chances that your learners will learn and use what they have learned. When developing specifications for instructional elements, consider the learning principles outlined in Table 3.4.

5) Plan for remaining ISD phases: plan for the remaining phases of instruction, that is, for ways to plan and implement the design of job aids, development of materials, delivery of materials, evaluation (pilot testing), on-the-job application, and evaluation (post-instruction). Headings in each phase show which steps are part of "planning" and which are part of "implementation." After you have completed design of instruction, evaluate the quality of your efforts by using the formative evaluation checklist in Table 3.5.


TABLE 3.5
Design of instruction — formative evaluation checklist

ASK YOURSELF THESE QUESTIONS: Each of the following questions is addressed under major headings in this phase. Answer each question Y or N; any "NO" answer should serve as an alarm that your design of instruction needs improvement!

1. Do you know the difference between the need for a job aid and the need for memorization?
2. Is instruction (vs. a job aid) appropriate for your learners' needs?
3. Have you assembled information from your front-end and task analyses, such as sequenced instructional objectives, task steps, and audience characteristics, for all of your audiences?
4. Have you used this information to verify your audience needs?
5. Did you perform a product survey to determine whether suitable instruction is already on the market?
6. Have you developed a content outline based on the sequenced objectives and task steps from the task analysis?
7. Have you chosen appropriate instructional methods given learning principles, instructional objectives, audience, and resources and constraints?
8. Have you chosen appropriate media given your learning principles, instructional objectives, audience, and resources and constraints?
9. Have you planned for instructional elements?
10. Have you planned for development of materials?
11. Have you planned for delivery of materials?
12. Have you planned for pilot testing?
13. Have you planned for on-the-job application of your instruction?
14. Have you planned for post-instructional evaluation?
15. Do you have an instructional plan for all of your audiences?

REFERENCES

Hannum, W. and Hansen, C. (1989). Instructional Systems Development in Large Organizations. Englewood Cliffs, NJ: Educational Technology Publications.

SELECTED BIBLIOGRAPHY

Briggs, L. J. and Wager, W. W. (1981). Handbook of Procedures for the Design of Instruction. Englewood Cliffs, NJ: Educational Technology Publications.
Dick, W. and Carey, L. (1986). The Systematic Design of Instruction. 2nd ed. Glenview, IL: Scott, Foresman, & Co.


Gagne, R. (1985). The Conditions of Learning. 4th ed. New York: Holt, Rinehart, & Winston.
Gagne, R. M. (Ed.) (1987). Instructional Technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum Associates.
Gill, M. J. and Meier, D. (March 1989). Accelerated learning takes off. Training and Development Journal, pp. 63–65.
Knirk, F. G. and Gustafson, K. L. (1986). Instructional Technology. New York: Holt, Rinehart, and Winston.
Mallory, W. J. (1987). Technical skills. In American Society for Training and Development's Training & Development Handbook. New York: McGraw-Hill.


4

Development of Material and Evaluation

The success of the development of materials phase depends on the following:
• Using the design plans created in phases 4 and 5
• Creating a development plan with the various development principles outlined in this phase
• Implementing the development plan

Unfortunately, design plans often do not exist, and when they do, they are frequently overlooked. The purpose of this phase is to dispel the notion that design plans are expendable. Just as a builder needs to follow a blueprint to achieve the desired results, so does the ISD expert. Development of materials requires you to follow the design plans previously outlined.

STEPS IN DEVELOPMENT OF MATERIALS

PLANNING

Preparation: before you embark on this phase of instructional design, make sure you have completed the following steps. First, do a product survey; that is, determine whether a suitable product is already on the market (see phase 3). If there is, you will not need to develop a new product. However, you may need to customize or alter the existing product; in such cases, create all design plans for customizing before beginning to develop materials. Next, gather your design plans from phases 4 and 5. Review the design plans and content outlines created in those phases, and check that each plan matches prior front-end and task analysis results.

1) Plan for development: before beginning to develop your instructional product, you need to make some development decisions. Table 4.1 outlines various development principles. For example, using the plans created in previous phases, you will now decide on specific placement of illustrations, font type and size, page layout, etc. Development plans should include brief samples of how your finished product will look, along with any written specifications. For example, provide a sample of finished text, video, audiotape, examples, scripts, illustrations, etc. The purpose of creating a brief sample is not to develop an entire, finished product but only small parts.


TABLE 4.1
Development principles

PRINT: Place illustrations close to referenced text. Label and caption all illustrations, etc. Keep "cues" (boldface, etc.) to 10% or less of text. Place critical information either first or last in sentences or lists. Use color coding for easy access. Write the procedure name at the top of each page. Indent, bullet, and number steps and substeps. Use three to four sentences per paragraph. Use the same vertical and horizontal spacing throughout. Use lots of blank space.

VISUAL AIDS: Keep visuals short and simple and text large and legible, giving details on a separate handout. Use no more than eight lines per visual and eight words per line. Use short titles, borders, and white space. Use the same fonts throughout, except for titles. Integrate graphics and color.

AUDIO: Use short pauses and change volume, pitch, and pace to make key words or phrases stand out or to maintain attention. Use short phrases and limit unwanted sounds. Make sure music does not compete or distract. Make sure narration is clear and can be heard.

AUDIO/VISUAL: Refer to the audio and visual sections. Divide information into small parts instead of having a full day's lesson plan in one session. Use neutral fashion and decor. Use bold video graphics for visibility.

COMPUTER RELATED: Program "easy access" into each lesson. Use boxes, color, and highlights to direct attention. Allow learner control of pacing. Allow adequate learner response time. Limit the amount of text on screen. Present one idea per screen, one or two sentences long.

Stakeholders can review the brief samples for making acceptance decisions, and those developing the product will have something more to follow than just a list of written specifications. (NOTE: Design and development decisions often overlap. For example, you may have already made some development decisions during phases 4 and 5. They would have been included in your design plans.)
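Some of the principles in Table 4.1 are quantitative enough to check mechanically while materials are being developed. The sketch below is a minimal, hypothetical checker for two of them: the eight-lines-by-eight-words guideline for visuals and the rule that emphasis "cues" stay at 10% or less of the text. The function names and example slide are assumptions for illustration only.

```python
# Hypothetical checks for two development principles from Table 4.1.
def check_visual(lines, max_lines=8, max_words=8):
    """Return a list of violations of the 8-lines / 8-words guideline."""
    problems = []
    if len(lines) > max_lines:
        problems.append(f"{len(lines)} lines (limit {max_lines})")
    for i, line in enumerate(lines, start=1):
        if len(line.split()) > max_words:
            problems.append(f"line {i}: {len(line.split())} words (limit {max_words})")
    return problems

def cue_ratio(total_words, cued_words):
    """Fraction of text carrying cues (boldface, etc.); keep at 0.10 or less."""
    return cued_words / total_words if total_words else 0.0

slide = ["Five steps to a rough draft", "Plan", "Sample", "Approve", "Draft", "Revise"]
print(check_visual(slide))   # [] -- within both limits
print(cue_ratio(200, 30))    # 0.15 -- over the 10% guideline
```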

IMPLEMENTATION

2) Obtain format approval: after developing brief samples, but before beginning to develop rough drafts, obtain format approval from stakeholders.


TABLE 4.2
Example of rough draft of text – changing a tire (terminal objective)

1. Make sure your car will not move or roll. If you have an automatic transaxle, put the gearshift in Park. If you have a manual transaxle, put the gearshift in First. Set the parking brake and block the wheel that is diagonally opposite the tire that you are changing. (Warning: when one front wheel is lifted off the ground, neither the automatic transaxle P (Park) position nor the manual transaxle 1 (First) position will prevent the vehicle from moving and possibly slipping off the jack, even if those positions are properly engaged. To prevent the car from moving while changing a tire, always set the parking brake fully and always block [both directions] the wheel that is diagonally opposite the wheel being changed.)
2. Get out the spare tire and jack if you haven't already done so. The jack is located in its own storage compartment on the right side of the trunk.
3. Remove the wheel covers or ornaments with the tapered end of a wheel nut wrench. Insert the handle of the wrench and twist it against the inner wheel cover flange.
4. Loosen the wheel lug nuts by pulling up on the handle of the wrench one half turn counterclockwise. Do not remove the wheel lug nuts until you raise the tire off the ground. For information about removing antitheft lug nuts, see later chapter sections.
5. Find the jack notch next to the door of the tire that you are changing. Put the jack in the notch and turn the handle of the jack clockwise until the wheel is completely off the ground.

Rough-draft development is usually a time-consuming and costly venture. You can save time and money by receiving format approval first and then proceeding with the rough draft. It is similar to an architect developing sketches from blueprints for a client. It is much easier for the client to understand the sketch than to visualize a finished product from blueprints, and after seeing the sketch, the client is in a much better position to request changes before costly development begins. In ISD, format approval is similar to an architect gaining client approval: it requires developing a brief sample of the finished product first. The brief sample is then given to stakeholders for their review and acceptance.

3) Create rough drafts: after receiving format approval (or revising the format until approval is received), expand your brief samples into complete rough drafts. Rough drafts should mirror the finished product. Once rough drafts are developed, you will be in a position to review and revise prior to creating the finished product. Some features may be too costly to produce in rough form; when this occurs, use prototypes to show what is intended. The particular type of rough draft you create will vary according to your medium. For example, when using print, your rough draft will be a written draft of any text materials. Table 6.4 shows assorted forms of rough drafts based on different media. Tables 3.1 and 4.2 show how a content outline (formed in phase 4) turns into a rough draft of text.

4) Review and revise rough drafts: after completing rough drafts, determine technical and editorial accuracy. You need to make sure the rough drafts meet all accuracy requirements before spending any time or money on final product development. For example, you may find flaws in the rough draft that would be too costly to fix after development.


TABLE 4.3
Rough draft evaluation form

Product being evaluated: ______________________
A = acceptable; NA = not acceptable

1. Can you reach your FEA goals given your content? MODIFICATIONS NEEDED:
2. Does content match objectives and audience? MODIFICATIONS NEEDED:
3. Does content match the content outline? MODIFICATIONS NEEDED:
4. Is content sequenced logically? MODIFICATIONS NEEDED:
5. Is content technically accurate? MODIFICATIONS NEEDED:
6. Is content clear and concise? MODIFICATIONS NEEDED:
7. Are instructional elements (examples, illustrations, etc.) properly placed and technically accurate? MODIFICATIONS NEEDED:
8. Do instructional elements match and clarify content? MODIFICATIONS NEEDED:
9. Is remediation provided and acceptable? MODIFICATIONS NEEDED:
10. Are instructions clear? MODIFICATIONS NEEDED:
11. Is audio/video clear, appropriate, and understandable? MODIFICATIONS NEEDED:
12. Are punctuation, grammar, etc. accurate? MODIFICATIONS NEEDED:

Use a sample of stakeholders, sponsors, subject matter experts, and potential customers to assist in rough draft review. Ask for input using a rough draft evaluation form such as the one in Table 4.3. Begin with one-on-one reviews and progress to a small group for purposes of gaining broader reactions. Set review schedules and feedback deadlines to keep the project on track. The review process may repeat itself numerous times as revisions are made and reevaluated. It is best to specify in advance how many draft revisions will be allowed to prevent endless reviews and revisions; remember, each revision is costly. You may find some reviewers will emphasize form over content. It is possible to attain quality in both areas through the use of development principles, and a project manager can act to help achieve such a balance.

5) Produce final materials: in this step, the instruction or job aid becomes a reality. The rough drafts, with all their agreed-upon revisions, become ready for pilot testing. Unless you have specialized expertise in the medium you have chosen, it is advisable to obtain professional assistance during final development (e.g., a video producer, graphic artist, typesetter, etc.). Professionals will be able to supply the most effective techniques and technology to achieve your desired objectives.


TABLE 4.4
Development of materials — formative evaluation checklist

ASK YOURSELF THESE QUESTIONS: Each of the following questions is addressed under major headings in this phase. Answer each question Y or N; any "NO" answer should serve as an alarm that your development of materials phase needs improvement!

1. Have you done a product survey to determine if a suitable product already exists?
2. If so, does the product need customizing? (If yes, go to design of instruction/design of job aids.)
3. Have you gathered all your design plans for each audience from phases 4 and 5?
4. Have you obtained format approval for your brief samples?
5. Have you considered development principles when creating samples?
6. Have you developed your rough drafts, incorporating development principles?
7. Has a review team evaluated the rough drafts for technical and editorial accuracy?
8. Have all necessary revisions been made to rough drafts?
9. Are rough drafts ready for final production?

The quality of your final product will reflect the degree of careful planning (or lack thereof!) that has gone into all earlier stages of analysis and design. The time and resources invested earlier should pay off in this development stage. After you have completed development of materials, evaluate the quality of your efforts by using the formative checklist (see Table 4.4).

EVALUATION: PILOT TESTING

Up to this point, all the steps in the ISD model have been related to planning and development. You began with analysis procedures to understand the problem, possible solutions, the audience, etc. You may have selected an off-the-shelf product if your survey indicated an appropriate one was available, or you may have designed and developed a new product to fit your needs. Now you need to begin evaluating the product or products you created or purchased. Using pilot testing, you will evaluate how well earlier ISD steps were completed. (NOTE: pilot testing is similar to the verification process required in team-oriented problem solving. It is also similar to the validation process in process improvement.)

Be aware that pilot testing is only one phase of the ISD evaluation process. The first informal evaluation began with the formative evaluation checklists provided at the end of each phase; these checklists monitor the quality of your in-process efforts. In contrast, pilot testing looks for early indications of problems after a sample audience has received the instruction or job aids. Pilot testing provides an "early warning system." It, too, is considered a formative or "in-process" evaluation. Its purpose, however, is to find problems after delivery to a sample audience, so that revisions can improve the product before delivery to the total target population.


Remember, throughout the pilot testing phase, the main focus is on the learner. The goal of any instructional product is that the learner acquire and practice new behaviors. If this does not occur, the program or the organizational environment (and not the learner) has failed in some way. Unless you pilot your products, however, you will not know if and how they work. For example, you may end up delivering the product for months, at great cost, only to find goals were never reached. Eliminate such risks from the beginning: take a "proactive approach" and support all efforts to make a quality product using pilot testing.

Because pilot testing is part of a larger evaluation process, you need to understand something about the total evaluation process. The following section provides a brief overview of evaluation levels before proceeding to an in-depth explanation of the steps in pilot testing. Evaluation is the only way to find out whether earlier analysis, design, and development phases have succeeded in meeting your objectives. The following evaluation model is based largely on the work of Donald L. Kirkpatrick (1967). It consists of a process of measuring four outcomes of instruction (referred to as levels 1 through 4):

• Level 1: Reaction (How did learners feel about the instruction/job aid?) This information generally takes the form of post-instruction questionnaires or interviews. Participants report their impressions of instructor effectiveness, curriculum, materials, facilities, and course content.
• Level 2: Learning (What facts, techniques, skills, or attitudes did learners understand and retain?) This is generally assessed with pre- and post-assessments, testing either learning or performance gains.
• Level 3: Behavior (Did the instructional product change learners' behavior in a way that impacts on-the-job performance?) This requires measuring how effectively the skills have been transferred back to the work environment.
• Level 4: Organizational changes (Did the program have an impact beyond the individual learner?) This measures whether instruction/job aids have been effective in achieving the kinds of organizational changes often intended (e.g., improving morale and teamwork, reducing turnover).

Since the four levels are listed in order of increasing complexity and cost, it is not surprising that level 1 is used most frequently. Unfortunately, level 4 is rarely evaluated. However, it is important to recognize that there is little relationship between how learners "feel" about a program and what they have learned or, more importantly, what they will do on the job because of it. Therefore, even though it is often difficult and frustrating, it is critically important to consider and address all four evaluation levels in some manner. This should be done early in the planning process to facilitate later assessment. The first two evaluation levels must be addressed before you can measure levels 3 and 4: learning must occur before it can be used on the job. Pilot testing focuses on levels 1 and 2, audience reactions and knowledge gained.


Its purpose is to test the instruction or job aid on a small, representative audience. The rest of this phase explains the steps for both planning and implementing a pilot test.
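Because all four levels should be considered early, it can help to write the evaluation plan down in a simple structure that names the outcome, instrument, and timing for each level. The sketch below is one hypothetical way to do so; the instruments and timings are examples, not prescriptions, and the last two lines simply confirm that pilot testing covers levels 1 and 2.

```python
# Illustrative evaluation plan keyed to Kirkpatrick's four levels.
# Instruments and timings are invented examples.
evaluation_plan = {
    1: {"outcome": "reaction", "instrument": "post-instruction questionnaire",
        "when": "end of pilot delivery"},
    2: {"outcome": "learning", "instrument": "pre- and post-assessments",
        "when": "before and immediately after instruction"},
    3: {"outcome": "behavior", "instrument": "supervisor observation checklist",
        "when": "back on the job"},
    4: {"outcome": "organizational change", "instrument": "morale and turnover metrics",
        "when": "periodic review"},
}

pilot_scope = [level for level in evaluation_plan if level <= 2]
print(pilot_scope)  # [1, 2] -- the levels a pilot test measures
```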

STEPS IN PILOT TESTING

PLANNING

Preparation: use the following suggestions when you plan for pilot testing:
• Assemble front-end and task analysis information. You need to be familiar with your sequenced objectives, audience characteristics, etc. Knowledge or performance assessments should have been written during task analysis; use these assessment instruments in your pilot test.
• Assemble product survey information. Even if a suitable off-the-shelf product has been found, the product will still need pilot testing. Find out whether the product has been tested. If so, what were the testing procedures and results?
• Review your design plans from phases 4 and 5. Refamiliarize yourself with how the product or products should be delivered.

There are three fundamental stages in pilot testing: 1) select a pilot test sample group, 2) assess baseline levels of knowledge or skill, and 3) plan post-delivery assessments (levels 1 and 2).

1) Select pilot test sample group: the objective of this step is to ensure that your pilot test will be conducted with an audience representative of your total population. For example, if instruction is geared to mid-level managers in finance, the pilot audience should include mid-level managers in finance. This increases the likelihood that pilot test results will mirror those expected from the target population. Ideally, the pilot audience should be randomly drawn from your total population to avoid biasing the sample in any way. If this is not possible, choose an audience that represents a mix of your audience's characteristics. For example, include in your sample an equal mix of:
• Low, average, and high achievement levels
• Male and female
• Experienced and inexperienced
• Young and mature
• Motivated and unmotivated
(A minimal sampling sketch follows.)
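The sketch below shows one minimal way to draw such a group: random sampling within strata defined by a known characteristic, so each characteristic is represented. The population records, the "experience" characteristic, and the per-stratum count are hypothetical placeholders.

```python
# Hypothetical stratified draw of a pilot group from the target population.
import random

population = [
    {"name": "A", "experience": "experienced"},
    {"name": "B", "experience": "inexperienced"},
    {"name": "C", "experience": "experienced"},
    {"name": "D", "experience": "inexperienced"},
    # ... remaining members of the target audience
]

def stratified_pilot(people, stratum_key, per_stratum):
    """Randomly draw up to per_stratum participants from each stratum."""
    strata = {}
    for person in people:
        strata.setdefault(person[stratum_key], []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

print(stratified_pilot(population, "experience", per_stratum=1))
```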


2) Assess baseline levels of knowledge or skill: in order to assess knowledge gain, it is necessary to know the learners' baseline knowledge and skill levels. The traditional way of making this assessment has been through the use of "pretests" administered prior to instruction. Unfortunately, preassessments have remained largely an ideal rather than a reality: they can be time-consuming to develop and administer, and learners tend to be threatened by tests in general. There are, however, ways to overcome such obstacles. For instance, additional development time is not required when assessment instruments have already been developed in task analysis. These assessments or "tests" also serve as a way of clarifying instructional objectives. "Testing anxiety" can be overcome by removing the major reasons for its existence. Both pre- and post-instruction assessments can be coded with a number known only to the learner; for example, learners can use the last four digits of their Social Security numbers. Using this method, results will not be linked to identifiable learners. Another method is for learners to score their own instruments, so the process becomes more of a learning experience than a test. Using the terms "pre- and post-assessments" instead of "tests" can also help eliminate testing anxiety. However, if your audience demonstrates a strong resentment toward assessments, you may need a different approach. One direction worth pursuing is to integrate instruction more closely with the work environment. Supervisors or other team members are often in a position to know what is needed and how well it is applied afterwards. Involving these individuals may provide you with information about the learners' baseline knowledge or skills.

3) Plan post-delivery assessment (levels 1 and 2): recall that the purpose of pilot testing is to measure audience reactions and knowledge gained. Gather this information using questionnaires, tests, interviews, or observations. Your choice of evaluation tools should rest on which approaches are most reliable and valid. For example, unless you develop systematic observation procedures, your observations may be biased and invalid. Consider also resources (time and money) and other practical constraints of doing research in a "field" setting; what is most effective in a laboratory is often not feasible in organizations. You also may need to train evaluators. They need knowledge and skill in using observation methods, administering a questionnaire, conducting an interview, etc. If the evaluator does not measure outcomes properly, you will not know true program results. Following is an overview of things to consider when measuring learner reactions and knowledge gains.

Assessing audience reactions to instruction: this usually includes gathering participant self-reports using questionnaires or interviews. The purpose is to find areas that were bothersome to the audience and had the potential to interfere with learning. If audience reactions reflect any difficulties in learning or resistance to the instructional material, your product may need changing. (However, some negative audience reactions may reflect problems in the organization and not problems with the product.) Examine the comments and decide how to change your product.


Assessing audience reactions to job aids: this follows the same procedures as assessing reactions to instruction, except the questions may vary slightly. For example, questionnaires or interviews might include these questions:
• Did the aid help in performing the job?
• Did the aid make your job easier?
• Did the aid help to solve any on-the-job problems?
• Was the job aid confusing?
• What improvements could be made to the job aid?
• Did the aid include enough information?
• Do you have any other reactions or comments?

Assessing learning gains: traditionally, this has been done by comparing preassessment scores to post-assessment scores. Assessments of learning gains should have been prepared during task analysis. Give any preassessments before the program, with post-assessments immediately following, and measure the difference between pre- and post-scores. Were there any general improvements? If not, improve your product by making changes to the design, development, or delivery. (A simple t-test may do the job; see the sketch below.) For example, perhaps too much information was covered in too short a time, the readability level was too high, or the method of instruction was inappropriate. The true benefit of pilot testing is being able to know what needs improvement and making those improvements. Use post-assessment scores also to certify the extent of knowledge gained, using a "90/90" measure. This means requiring that 90% of the audience learn 90% of the material upon completion of the instruction. If this does not occur, your product may need modification in the design, development, or delivery phases, or perhaps the learner needs individualized remediation.

Assessing performance gains: this requires evaluation of "hands-on" performance. For example, evaluating someone's skill in driving a truck requires seeing the person drive, as opposed to just giving her a written assessment. If hands-on performance is required, consider using an observation technique along with the 90/90 measure: 90% of the people should perform the task successfully 90% of the time. If this doesn't happen, consider making necessary changes.
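The learning-gain checks just described can be scripted directly. The hedged sketch below assumes SciPy is available, matches pre- and post-assessment records by the learner's anonymous code (such as the last four digits described earlier), and uses a score of 90 as a stand-in for "learned 90% of the material"; all scores are invented.

```python
# Sketch of a paired t-test on matched pre/post scores plus a 90/90 check.
from scipy.stats import ttest_rel

pre  = {"1234": 55, "5678": 60, "9012": 48, "3456": 70}   # code -> pre score
post = {"1234": 88, "5678": 92, "9012": 81, "3456": 95}   # code -> post score

codes = sorted(pre.keys() & post.keys())   # match records by anonymous code
pre_scores  = [pre[c] for c in codes]
post_scores = [post[c] for c in codes]

t_stat, p_value = ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# 90/90: did at least 90% of the audience learn 90% of the material?
passing = sum(1 for s in post_scores if s >= 90)
print("90/90 met" if passing / len(post_scores) >= 0.9 else "90/90 not met")
```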

IMPLEMENTATION

The following step will help you implement pilot testing:

4) Implement planning steps: once you have selected a representative audience and assessed baseline knowledge or skill, deliver the product. The purpose of pilot testing is to measure learner reactions and knowledge gained.


TABLE 4.5
Evaluation: pilot testing — formative evaluation checklist

ASK YOURSELF THESE QUESTIONS: Each of the following questions is addressed under major headings in this phase. Answer each question Y or N; any "NO" answer should serve as an alarm that your pilot test needs improvement!

1. Are you familiar with the four levels of evaluation?
2. Do you understand the purpose of pilot testing?
3. Have you assembled information from your front-end and task analyses such as sequenced objectives, audience characteristics, etc. (see phases 2 and 3)?
4. Have you gathered the assessment instruments developed in task analysis?
5. Have you refamiliarized yourself with your design plans from phases 4 and 5?
6. Have you planned for a representative sample group for the pilot?
7. Have you planned for baseline assessments of knowledge or skill using the assessment instruments developed in task analysis?
8. Have you planned for how to measure levels 1 and 2 after the product is delivered to your sample audience?
9. Has the product been delivered according to the delivery specifications outlined in phase 8?
10. Have you measured for levels 1 and 2, audience reactions and learning?
11. Have you modified the product, where feasible, according to results from your pilot test?

If the pilot audience is similar to your target audience, and the instruction achieves its objectives, you can assume it will be similarly effective with the entire target audience. Remember to deliver the instruction or job aids according to the delivery plans outlined in phase 8. For example, if delivery is individualized, deliver the instruction that way. In addition, make the audience aware they are participating in a pilot; let them know you need and want their feedback. (The next step outlines more fully the type of feedback you may want.) Consider also the need to train those delivering the instructional product. If the product is not delivered properly in the pilot test, your results will not be valid: you will not know whether the product or the delivery method is producing negative results. Using the post-delivery assessment procedures outlined in step 3, measure for levels 1 and 2. How has the audience reacted to your program or job aid? Has learning occurred? What modifications do you need to make? Make any necessary modifications and proceed to phase 8. After you have completed pilot testing, evaluate the quality of your efforts by using the formative evaluation checklist shown in Table 4.5.


REFERENCES

Kirkpatrick, D. L. (1967). Evaluation of training. In Craig, R. (Ed.), Training and Development Handbook. American Society for Training and Development.

SELECTED BIBLIOGRAPHY

Baker, E. L. (1974). Formative evaluation of instruction. In Popham, W. J. (Ed.), Evaluation in Education. Berkeley, CA: McCutchan Publishing Corporation.
Baker, E. L. and Alkin, M. C. (1973). Formative evaluation of instructional development. AV Communication Review, 21, pp. 389–418.
Briggs, L. J. and Wager, W. W. (1981). Handbook of Procedures for the Design of Instruction. 2nd ed. Englewood Cliffs, NJ: Educational Technology Publications.
Converse, J. M. and Presser, S. (1987). Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, CA: Sage Publications.
Deshler, D. (Ed.) (1984). Evaluation for Program Improvement. San Francisco: Jossey-Bass.
Dick, W. (1977). Formative evaluation. In Briggs, L. J. (Ed.), Instructional Design: Principles and Applications. Englewood Cliffs, NJ: Educational Technology Publications.
Dick, W. (1980). Formative evaluation in instructional development. Journal of Instructional Development, 3, pp. 3–6.
Dick, W. and Carey, L. (1986). The Systematic Design of Instruction. 2nd ed. Glenview, IL: Scott, Foresman, and Co.
Fink, A. and Kosecoff, J. (1985). How to Conduct Surveys: A Step by Step Guide. Thousand Oaks, CA: Sage Publications.
Gagne, R. M. (Ed.) (1987). Instructional Technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum Associates.
Herman, J. L. (Ed.) (1987). Program Evaluation Kit. 2nd ed. Thousand Oaks, CA: Sage Publications.
Phillips, J. J. (1983). Handbook of Training Evaluation and Measurement Methods. Houston: Gulf Publishing Company.
Reigeluth, C. M. (Ed.) (1983). Instructional-Design Theories and Models: An Overview of Their Current Status. Hillsdale, NJ: Lawrence Erlbaum Associates.
Ribler, R. I. (1983). Training Development Guide. Reston, VA: Reston Publishing Co.
Richey, R. (1986). The Theoretical and Conceptual Bases of Instructional Design. New York: Nichols Publishing.
Sudman, S. and Bradburn, N. M. (1985). Asking Questions: A Practical Guide to Questionnaire Design. San Francisco: Jossey-Bass.


5

Delivery of Material and Evaluation

At some time, your finished instructional products must reach your learner audience. Delivery of materials is the ISD phase in which you decide how best to attain this goal. You’ll find some delivery decisions have already been made during design of instruction and design of job aids. For example, you may have chosen to deliver instruction through a lecture method, using print and visuals as enhancers. There are additional delivery issues, however, that you also need to plan for and implement. This phase discusses these additional issues. Remember, delivery of materials can help or hinder acceptance of your instructional product. The audience will decide early if the program or job aid is of any interest or value, based on how it is delivered as well as its content.

STEPS IN DELIVERY OF MATERIALS

PLANNING

Preparation: use the following suggestions when you plan for delivery of materials:
• Review audience characteristics identified in the task analysis. Delivery success is related to the extent to which you have considered audience characteristics. For example, knowing your audience's reading and math skill levels, prior learning experiences, etc. will help in determining the best way to deliver your instruction or job aid.
• Review your design plans from previous phases. Some delivery decisions have already been made. Check your design plans for delivery requirements you now need to consider.

For optimum results, the following three steps will help you plan for delivery of materials: 1) analyze the delivery environment, 2) plan for management support, and 3) plan for audience acceptance.

1) Analyze the delivery environment: in task analysis, you were asked to analyze your learner audience. The more you found out about the audience, the better able you were to meet their needs. Now, for the same reasons, you are asked to analyze the environment where the instructional products will be delivered.


The more you know about the delivery environment, the better able you will be to tailor your instruction accordingly. Ask yourself the following questions:
• What is the environment in which the instruction or job aid will be delivered? Will your instruction be delivered in a classroom, office, laboratory, manufacturing plant, or the learner's home? Assume in your FEA you found a need to improve employee skill levels, and the solution included an instructional program. To successfully implement this solution, however, you must know something about the delivery environment. Where is your instruction going to be delivered? You may decide delivery should be within the employee's home environment; you need to know this before you can develop a delivery plan.
• What are the expected "patterns of use"? Is the instruction or job aid going to be used sporadically or on a schedule? During the day or evening hours? During what season or climate, day, week, month, or year, and for how long? You also need this information in order to plan for delivery facilities, equipment, etc. For example, instruction designed for a learner's use at home (the pattern of use) will need to be easily transportable.

Once you understand the physical delivery environment, you must also understand the social and psychological factors that can influence effective delivery of your program or job aid.

2) Plan for management support: the most effective and enjoyable learning experience is wasted if there is inadequate, or even negative, reinforcement for using the instruction or job aid back on the job. You have invested time and energy to facilitate application when you designed and developed your products. Unfortunately, this work can be undermined by conditions in the workplace. You can have some influence on these conditions, though, by considering them as part of your delivery plan.

How can you plan for management support? There are several ways to accomplish this. Make greater use of learners' management and coworkers when assessing knowledge and skill levels; in this way, management becomes involved in the process from the beginning. In addition, simply planning for the "secondary audience" (see task analysis) addresses the management support issue. For example, hold "information sessions" for the learners' management. In these sessions, management learns about the program's objectives. They gain knowledge about why the program was instituted, why the learners need to attend, and why learners need to use it back on the job. These sessions also serve to resolve any qualms or organizational conflicts on the part of management. In this way, management becomes a part of the program. In many cases, just making sure that management is not "surprised" by the content of the program will assure increased support.

A more difficult problem to address, however, is the existence of organizational systems that fail to reward, or that even penalize, learners for using what was learned.


Because such systems may be firmly embedded in the organization's culture, change may be difficult. In such cases, discussing such barriers during training allows learners to deal openly with expected difficulties and share possible solutions. This is particularly valuable when a wide range of participant backgrounds is represented in the group. Finally, as training is integrated more effectively with business strategies, such conflicts with "cultural realities" should decrease in frequency. ISD contributes to this goal by beginning with a front-end analysis that highlights, from the onset, what problems training can and cannot address.

3) Plan for audience acceptance: obviously, management support and effective design and development will increase audience acceptance. There are additional factors, however, specific to delivery that will contribute to greater audience acceptance. Consider these factors now when developing your delivery plan.

Increasing audience acceptance of instruction:
• Consider instructor characteristics. Learners are more receptive to an instructor with whom they can identify. This is more a matter of style than of demographics. The most important instructor characteristics are presentation skills, thorough knowledge of product content, enthusiasm, and empathy for the learners. "Train-the-trainer" preparation and instructor certification are ways to ensure effective instructor skills. In your pilot test, question the audience about the instructor's delivery. Make sure each instructor is properly trained and placed.
• Consider environmental characteristics. An effective learning environment is free from distractions. This helps the learner focus attention on the instructional content. For example, room lighting, temperature, seating, acoustics, accommodations, and even the lunch menu exert an influence on learner attention. Any equipment should facilitate, not distract from, learning, and it should be in excellent operating condition. An additional factor is how easy it is for the learner to get there. This involves taking into consideration such things as starting time, clear directions, and availability and ease of transportation.

Increasing audience acceptance of job aids:
• The major factor in promoting audience ability to effectively use a job aid is effective design. Audience response can also be strongly influenced by delivery considerations. For example, if the job aid came in the mail without explanation, acceptance would be low, regardless of design. The following examples offer some effective ways to deliver job aids:
• Job aids are given to learners by their supervisor or other instructor with an explanation of why the job aid is important (e.g., software manuals are distributed along with summary sheets of important items).


• Job aids are given to new employees along with the equipment they are expected to operate and its instructions. For example, a Minitab manual is given to the learners while the Minitab software is already loaded on the computer.
• Job aids are included in the package with equipment to be assembled, with easy-to-read, easy-to-follow instructions.

IMPLEMENTATION

In order to prepare for implementation of delivery of materials, consider the following:
• Review learning principles. While many of these techniques are used in the design of materials, some depend on the skills of the presenter.
• Review instructional materials. Being familiar with the material is of critical importance when carrying out your delivery phase.
• Review pilot test results. Use feedback regarding pacing of instruction, facilitator effectiveness, etc. to improve upon your original delivery techniques.

Two steps will help you implement delivery of materials: 4) finalize a delivery plan and 5) apply, evaluate, and revise your delivery plan.

4) Finalize a delivery plan: the delivery plan will result from completion of steps 1 through 3. The delivery plan form (see Table 5.1) may help you develop it. After analyzing the delivery environment and planning for management support and audience acceptance, you should have enough information to finalize a delivery plan. Detail specifics about when, where, and how to deliver each program or job aid. Include the following items within your finalized delivery plan:
• Facility and sites. Include specifics about the optimal physical place for delivering the instruction or job aid. This includes any special requirements such as a room or building of a particular size, lighting, acoustics, etc.
• Equipment. Include plans for items such as computers, blackboards, desks, tables, chairs, carrels, audiovisual equipment, projectors, screens, chalkboards, easels, calculators, etc.
• Supplies. Include plans for items such as pencils, pens, paper, visual-aid materials, workbooks, film, slides, projectors, videos, televisions, tests, exercises, handouts, etc.
• Schedule. Include a detailed list of the proposed days, months, times, etc. for delivery of the instruction or job aid.
• Instructors. Include the proposed number of facilitators needed to deliver the instruction or job aid, required prerequisite skills, education, certification, experience, etc.
• Miscellaneous. Include items such as required security clearances, special transportation, parking requirements, etc.


TABLE 5.1
A typical delivery plan

Audience characteristics: Reading skill level; math skill level; prior learning experience; attitude toward learning; attitude toward subject; handicaps; average age; gender mix; cultural mix; other considerations
Design considerations: Method; media; instructional elements
Delivery environment
Patterns of use
Management support plan
Audience acceptance plan
Facility and sites: Desired location; room size requirements; lighting requirements; acoustic requirements; arrangement requirements; electrical requirements; handicap requirements; additional facility requirements
Equipment: Computers, blackboards, desks, tables, chairs, carrels, audiovisual equipment, projectors, chalkboards, easels, calculators, flip charts, etc.
Supplies: Pencils, pens, paper, visual aids, workbooks, film, slides, videos, televisions, tests, exercises, handouts, etc.
Schedule: Proposed days, months, times, etc.
Instructors: Number, prerequisite skills, education, certification, experience, and so on
Other: Security clearances, special transportation, parking, etc.
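The items in Table 5.1 can also be carried as a simple structured record so that nothing is overlooked when the plan is finalized. The skeleton below is purely illustrative; every key and value is a placeholder to be filled from your own environment analysis.

```python
# Illustrative skeleton of a finalized delivery plan (placeholders only).
delivery_plan = {
    "facility_and_sites": {"location": "plant training room",
                           "room_size": "seats 20", "lighting": "dimmable"},
    "equipment": ["computer", "projector", "flip charts"],
    "supplies": ["workbooks", "handouts", "exercises"],
    "schedule": {"days": "Tue-Thu", "times": "8:00-12:00", "weeks": 4},
    "instructors": {"count": 2, "prerequisites": ["black belt certification"]},
    "miscellaneous": ["security clearances", "parking"],
}

for section, details in delivery_plan.items():
    print(f"{section}: {details}")
```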

5) Apply, evaluate, and revise your delivery plan: as you deliver your instruction or job aids to a wider audience, it is important to remain attentive to both audience reaction and the learning that is (or is not) occurring. Your pilot testing was based on a small, though representative, sample. It is important now to continue to test the initial conclusions of that research. You will do this in several ways:
• Observe differences in audience reaction and participation during delivery of the instruction at each time and location.


TABLE 5.2
Delivery of materials — formative evaluation checklist

ASK YOURSELF THESE QUESTIONS: Each of the following questions is addressed under major headings in this phase. Answer each question Y or N; any "NO" answer should serve as an alarm that your delivery phase needs improvement!

1. Have you considered audience characteristics as identified in the task analysis?
2. Have you reviewed your design plans from phases 3 and 4 to ensure consistency with previous delivery decisions?
3. Have you analyzed the delivery environment?
4. Have you planned for ways to increase management support?
5. Have you planned for ways to increase audience acceptance?
6. Have you reviewed design principles for effective presentation techniques to be used?
7. Have you reviewed instructional materials and feedback from the pilot test regarding delivery?
8. Have you finalized your delivery plan considering facility and sites, equipment needed, supplies, schedules, instructors, etc.?
9. Have you applied and continued to evaluate your delivery plan, making necessary revisions?

• Solicit informal feedback from participants during the program and at breaks.
• Continue to collect formal feedback (questionnaires, interviews, etc.) from participants.
• Continue to assess the amount of learning occurring, either via pre- and post-assessments or by observation.

If you gather information indicating alternative approaches are needed, experiment to see which new approaches are valid and with what audiences. Sometimes you will find you need to tailor specific examples, or even entire procedures, to various segments of your audience. A group of your manufacturing employees, for example, would respond to the use of all technical examples differently than a group of employees from finance, sales, engineering, or personnel. The objective is always to consider audience characteristics.

Such ongoing assessment and revision are more difficult to do with job aids delivered separately from instruction. It will require special effort to follow up with users to gauge audience reaction and use. This feedback is critical, however, to assuring that the job aid is not sitting on a shelf unused, or discarded because of a lack of user-friendliness! After you have completed delivery of materials, evaluate the quality of your efforts by using the formative evaluation checklist (see Table 5.2).


ON-THE-JOB APPLICATION

What should you do after your instructional product is delivered to your audience? In most cases, nothing is done. Learners go through intensive, week-long programs and are usually left on their own to use or discard what they learned. This results in a waste of organizational resources. In most cases, instruction does not come about because "it's a nice thing to do." Instruction is purposeful. The goal is for learners to learn and use what they learn, with subsequent impact on the bottom line. But when instruction is not used, everyone loses: the learner wastes a lot of time and the organization a lot of money.

So what can you do after your instructional product is delivered to your audience? Your goal should be to help the learner apply the instruction, or job aid, on the job. The process of transferring that knowledge to the job is called "on-the-job application." On-the-job application refers to Kirkpatrick's level 3, on-the-job behavior change. The success of on-the-job application rests on how well previous ISD phases were planned and executed. This includes using various learning and development principles (see phases 4, 5, 6, and 8). In addition, you need to know your secondary audience (see phase 2). Other techniques for increasing application are outlined in this phase. (NOTE: Since learners' supervisors are in a pivotal position to influence application, their role is spelled out separately from that of instructional professionals. For added convenience, "Tips for Supervisors, MBBs, BBs: How to Help Your Employees Use the Six Sigma Methodology Education and Training on the Job" is placed at the end of this chapter. This section can be used as a standalone job aid for supervisors.)

STEPS IN ON-THE-JOB APPLICATION

PLANNING

Preparation:
• Review information from FEA and task analysis. Review FEA cost-benefit analysis information about customer and stakeholder attitudes. These attitudes can influence management and coworker support of instructional solutions. Phase 2 (task analysis) provides additional information about secondary audiences. Use this information to plan for support from these audiences.
• Check design, development, and delivery plans (phases 4, 5, 6, 8) to ensure use of learning and development principles. The more closely the delivery environment simulates actual on-the-job conditions, the greater the chance of application. Also very important are provisions for feedback and remediation.

Two steps will help you plan for on-the-job application: 1) plan for application during the design, development, and delivery phases and 2) plan for secondary audience support.


1) Plan for application during design, development, and delivery phases: many of the recommendations in this step should be familiar. They were included in the learning and development principles covered in earlier phases. They bear repeating, however, because they are so important to ensuring on-the-job application.

Build in job relevance: a major way to ensure job relevance is to link instructional materials closely with resources available on the job. Do this by building instructional programs around job aids or existing tools, equipment, or written materials used on the job. Designing and using instructional materials that will not be used in the workplace will only confuse the learner (e.g., using an in-class WordPerfect text that is different from the on-the-job WordPerfect reference guide will cause problems; statistical software is a major problem in this area). Also, make use of examples that relate very closely to what is familiar to learners on their jobs.

Build in repetition, practice, feedback, and remediation: application is facilitated when material is repeated enough to be firmly embedded in learners' memories or captured in job aid reference materials. Too often, more material is covered than learners could possibly absorb. The result is little or nothing that is retained, much less applied to the job. Build in opportunities for practice and feedback by providing plenty of active learning rather than lectures or discussions. Feedback can often be built into job aids through written instructions; for example, a job aid can describe how a machine should sound and function if correctly assembled, operated, etc. Finally, be sure to do enough informal observation or measurement of learner skills during instruction. You want learners to have a chance to gain additional practice and assistance (remediation) in the areas in which they are having a difficult time.

2) Plan for secondary audience support: this planning should occur very early in the ISD process. In the task analysis phase, you analyzed important characteristics of both primary and secondary audiences. In front-end analysis, you made early assessments of expected management and coworker support. The preferred goal is to enhance primary audience learning by building in the necessary secondary audience support. You can do this by planning and conducting preinstructional awareness sessions for supervisors and coworkers or by including them in the actual training for the primary audience. You can also provide tools and job aids that make it easier for the secondary audience to give learners assistance and feedback.

Unfortunately, sometimes there are serious obstacles to application that you are unable to influence. In such cases, it probably makes more sense to adjust the training rather than try to change the work environment. For example, if an organization provides individual, rather than team, merit awards, instruction should be directed at fostering cooperation among workers in the context of individual rewards. In extreme cases, it may be preferable to reduce or eliminate an instructional program that is not supported by management or organizational systems rather than to waste valuable resources.


TABLE 5.3
On-the-job application — formative evaluation checklist

ASK YOURSELF THESE QUESTIONS: Each of the following questions is addressed under major headings in this phase. Answer each question Y or N; any "NO" answer should serve as an alarm that on-the-job application needs improvement!

1. Have you reviewed information from FEA and task analyses about secondary audiences?
2. Have you checked the design, development, and delivery plans to assure built-in opportunities for learners to practice new knowledge and skills under realistic conditions?
3. Have you linked instructional materials closely with resources available on the job?
4. Have you included enough repetition of material to ensure that it will be retained by learners long enough to apply it?
5. Have you built in plenty of opportunities for practice and feedback?
6. Have you included sufficient access to remediation during instruction?
7. Have you planned for secondary audience support, such as preinstructional awareness sessions and job aids to encourage their involvement?
8. Have you made adjustments to your instructional programs to deal with various obstacles to application?


IMPLEMENTATION

This phase is unique because implementation rests with the primary and secondary audiences. Your careful attention to planning will help them succeed. After you have completed on-the-job application, evaluate the quality of your efforts by using the formative evaluation checklist (see Table 5.3).

Tips for master black belts, black belts, and supervisors: how to help your support team and workers use the Six Sigma education and training on the job. As a master black belt, black belt, or supervisor, you are in a pivotal position to facilitate use of what your support team and employees learn through the Six Sigma education and training methodology. Following are some suggestions about what you can do before and after instructional programs to ensure that you, the participating employee, and the company get a good return on the investment made in training.

BEFORE TRAINING

Suggest relevant instruction to employees. Focus on training that relates to specific skills you feel could be enhanced. Suggestions can be made in the context of career discussions or during coaching or appraisal sessions. Since most people are interested in increasing their value to the organization, it can be helpful to put your suggestions in these terms.

Discuss employee-initiated training requests. The emphasis should be on ensuring a return on the company's investment. Demonstrate your support of employees' individual developmental needs by encouraging them to find tie-ins to their responsibilities. It is important not to reject requests for which you do not see an immediate application or to be overly stringent about job-relatedness. Doing so may squelch employees' enthusiasm for instruction that could increase their versatility and long-term value to the company.

Agree on instructional goals and application. Using Table 6.6 (the learner/supervisor preinstructional agreement) or some other document, discuss and come to agreement on what the learner should be able to do differently after instruction. This should include some discussion of necessary resources and expected obstacles. The main focus, however, should be on some observable improvement in behavior (usually a skill or demonstrated knowledge) that should occur within a reasonable time frame after instruction. Written summaries of the agreement are preferable, to avoid misunderstandings and lack of follow-up.

Schedule instruction in a timely fashion. The important guideline here is to make sure employees will have access to necessary resources (e.g., computer access, appropriate and applicable software manuals, or other items) immediately after training. Otherwise, too much forgetting will occur, without opportunities for practice and feedback on how well they are doing.

AFTER TRAINING

Assure opportunities for practice. Incorporate desired performance accomplishments into employees' objectives to reinforce their importance. Make employees responsible for training other workgroup members in what they have learned. Assure access to job aids or equipment needed to perform new behaviors.

Provide feedback. Observe the new behaviors and provide helpful information about how well the employee is applying them to the job. Be sure to concentrate heavily on positive reinforcement, to help build learners' self-confidence. Approach areas for improvement by building on what learners are doing well to get even better results. (Many times, when you provide only positive feedback, the learner will ask you what he or she could be doing better!) Make your feedback as specific and timely as possible, and do not overload the learner by dealing with more than one issue at a time.

Provide rewards and reinforcement. Often the most timely and valuable reward you can provide is recognizing that the employee is applying new skills to the job. Show appreciation for their interest in doing so. This can be as simple as a few words of encouragement. Give employees varied or more challenging assignments. This rewards them for acquiring new skills and provides more opportunities for demonstrating them.

Remove obstacles. Make it clear that doing things differently from the "old way" will not work against employees applying new behaviors. Prevent jealousy or competition among coworkers by making sure everyone gets equal access to training. (If equal access is not possible, explain the situation.) Also, have coworkers teach others what they have learned.

Provide refreshers. If possible, demonstrate the skills yourself. On-the-job role models of skills learned in training are very important in helping learners apply new behaviors. Conduct follow-up sessions to review what has been learned, especially if the entire work group has participated in the training. Simply discussing with employees what they learned can serve as a refresher or even open the door to a coaching discussion.

Follow up on preinstructional agreements. Are employees using what they have learned on the job? How have they dealt with obstacles? Have you provided needed resources? Revisit these issues periodically with employees until the new skills are being used routinely.

EVALUATION: POST-INSTRUCTION

After months of designing, developing, and delivering the instructional products, do you know whether your audience is using what they learned? Have the instructional programs had an impact on your organization? Has customer satisfaction increased, production increased, accidents declined, quality improved, or revenues multiplied as a direct result of the instruction or job aids?

The purpose of post-instructional evaluation is to measure the extent to which learners are using the instruction or job aids on the job, and how the products have impacted the organization. Recall from the pilot testing phase the four levels of evaluation. Pilot testing measures levels 1 and 2: learner reactions and how much learning has occurred. Pilot testing results are used to make improvements in the products. In contrast, post-instructional evaluation focuses on levels 3 and 4: on-the-job application of learning and organizational change. Post-instructional evaluation is performed months after the products have been delivered to the entire audience. It is a summative, or end-of-program, evaluation. Based on the results obtained, you will decide whether to continue, modify, or eliminate the instructional program or job aids.

(NOTE: On-the-job and organizational changes do not always occur immediately after instruction or introduction of job aids. It takes time for people to learn, practice, and master new ways of doing things. Therefore, you need to make sure learners have had time to use what they learned before evaluating. In general, measure change at least three months after the program.)


STEPS IN POST-INSTRUCTIONAL EVALUATION

PLANNING

Preparation: use the following suggestions when you plan for post-instructional evaluation:
• Review front-end and task analyses for decisions about desired changes. Desired individual and organizational changes should have been outlined in the FEA (described as problems and solutions). Instructional objectives (see task analysis) specify desired individual changes in more detail.
• If you are using an existing product, review previous evaluation studies. Is there evidence that the program can change on-the-job behavior? Have organizational gains resulted from use? If not, ask the supplier to perform a post-instructional evaluation to validate the program's worth. If the product already comes with claims of level 3 and 4 outcomes, you will want to confirm the results using your own population.
• Review any informal evaluation results from on-the-job application. Informal evaluation results may serve as input for developing your evaluation instruments. For example, when informally evaluating level 3 and 4 outcomes during application, you may discover questions leading to valuable data. You can use these questions in your post-evaluation surveys.

Post-instructional evaluation follows a seven-step process. The first four steps belong to planning: 1) decide how to measure desired changes, 2) decide how to collect data, 3) do a cost-benefit analysis of the evaluation, and 4) obtain necessary commitments. (Steps 5 through 7 are covered under Implementation below.)

1) Decide how to measure desired changes: during front-end and task analyses, decisions were made about desired individual and organizational changes. The FEA described organizational problems and solutions. The task analysis spelled out instructional objectives, detailing what individuals need to do differently on their jobs. The decision now is: how will you measure the desired changes? Some changes, like successfully changing a tire or using WordPerfect, may be fairly easy to measure. You will be able to see the learner change the tire or use WordPerfect. Others, like coaching or participative management skills, will be far more challenging. Linking the use of instruction to organizational changes will be even more difficult. Measuring change can be accomplished using either objective or subjective tools. Objective measurements come from sources such as systematic observations, records, or reports of factual information. In contrast, subjective data come from sources such as questionnaires, interviews, or focus groups. Because subjective data consist of individual opinions and judgments, they are more open to bias. For this reason, it is preferable to rely on objective, verifiable data whenever possible.


TABLE 5.4 Post-instructional data collection tools

OBSERVER CHECKLIST: This tool requires an impartial observer to observe the learner on the job for use of the instruction or job aid. The checklist requires decisions on who, where, and what behaviors need observation and what observation methods to use.

SELF REPORT: This tool requires learners to self-evaluate their on-the-job use of instruction or job aids. Self-reports have the potential disadvantage of being biased, yet this method costs less than an observer checklist.

SUPERVISORY REPORTS: This tool requires that learners evaluate their supervisors' practice, feedback, and support methods. This tool helps pinpoint other reasons for lack of application to the job.

RECORD OF QUESTIONS: This tool requires that learners list any questions they have about on-the-job use of the instruction or job aids. Questions will show areas that may need further instruction before transfer can occur.

COMMENT FORMS: This tool requires that learners comment on the relevance of instruction to the job or skills needed; conflict between instruction and the organization's culture, policies, etc.; accuracy of material; system changes; and so on. This can help ascertain why transfer is not occurring.

CRITICAL INCIDENT REPORTS: This tool requires that learners and supervisors report on use of instruction or job aids. "Incidents" include events when instruction or job aids should have been used but were not. Such reports should decrease as transfer increases.

SUBORDINATE REPORTS: This tool requires that learners' subordinates report on behavior changes they see. These reports may contain biases or misperceptions that are not as apparent as when using more objective measurement tools.

PEER REPORTS: This tool requires peers to comment on learner behavior changes they see occurring on the job after instruction. Such subjective reports may contain biases or misperceptions.

CUSTOMER REPORTS: This tool requests customers to evaluate learner behaviors. This information is gathered to determine if, through the eyes of the customer, on-the-job behaviors are consistent with instruction or job aid objectives.

CONCRETE RESULTS: Assessing organizational changes requires looking at concrete results related to the instruction or job aids. "Concrete results" refers to evidence of reduced costs, improved quality, increased profits and production, etc.

Table 5.4 shows various types of data-collection tools. These tools can be used to measure on-the-job behavior changes or organizational changes. Table 5.5 shows an example of a self-evaluation measurement tool to measure on-the-job behavior change.


TABLE 5.5 Self-evaluation measurement tool

POST-INSTRUCTION EVALUATION
For each instructional objective (1, 2, 3, ...), the form provides three columns:
• Instructional objectives
• I demonstrated this objective by doing:
• Initial (may be initialed by the learner and the supervisor)
The form closes with space for: Areas for continuous improvement.

TABLE 5.6 Research design action plan

Post-instructional evaluation research design action plan:
• Desired change to be measured
• Measurement tools
• Sample source
• Sample size
• Sample location
• Data collection time plan
• Data collection personnel
• Statistical analysis procedures
• Statistical analysis time plan
• Statistical analysis personnel

When deciding what tool to use, consider the costs involved. For example, using existing written records requires less time and money than developing surveys or performing observations. Even using existing records, however, requires clerical and analysis resources, permission to gain access to the data, etc.

2) Decide how to collect data: once you have chosen the measurement tools, develop a research design outlining which tool will be used, when, and in what way (i.e., through a survey, interview, or direct observation, and with a group or on an individual basis). Decide who will receive your tool, how they will receive it, and how they will get the information back to you. Table 5.6 is an example of a format for summarizing the details of your research design. Professionals can assist you with important research design considerations such as the use of control groups, comparison of pre- and post-groups, statistical methods, etc.

3) Do a cost-benefit analysis of the evaluation: once you have decided how to measure the desired changes and collect the data, estimate how much your evaluation will cost. The extent of resources needed may determine whether the evaluation is feasible or worthwhile. Assign dollar figures to the time, staff, expertise, materials, and overhead required to implement each part of the evaluation. You may need help from your accounting department in this area. Consider the estimated costs in relation to expected benefits; a minimal sketch of this arithmetic appears after step 4. Based on the results of this analysis, is the evaluation justified? Important factors to consider include the scope and urgency of the program. Instructional programs that address relatively minor business problems may not be worth the investment of such an evaluation. On the other hand, ongoing programs that require huge resource commitments will merit the effort required for a formal evaluation study.

4) Obtain necessary commitments: after performing your cost-benefit analysis and deciding to proceed with the evaluation, obtain the necessary commitments. Make a formal request to management to begin the evaluation. Outline all budget and time requirements and why the evaluation is necessary. Request permission for funds, access to necessary information, and a commitment to act on study results. (Do not waste organizational resources on training or evaluation without this important commitment.)
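To make the cost-benefit arithmetic of step 3 concrete, here is a minimal sketch in Python; all dollar figures are hypothetical placeholders, and the cost buckets simply mirror the time, staff, expertise, materials, and overhead elements named above.

```python
# Hypothetical cost estimates for one post-instructional evaluation (dollars).
costs = {
    "staff_time": 6_000.0,   # hours spent designing and administering tools
    "expertise": 3_500.0,    # statistical or research consulting
    "materials": 500.0,      # printing, survey platform, etc.
    "overhead": 1_000.0,     # facilities and clerical support
}
expected_benefit = 40_000.0  # e.g., projected savings from reduced rework

total_cost = sum(costs.values())
ratio = expected_benefit / total_cost

print(f"Evaluation cost: ${total_cost:,.0f}")  # $11,000
print(f"Benefit-to-cost ratio: {ratio:.1f}")   # 3.6
print("Justified" if ratio > 1.0 else "Reconsider the scope of the evaluation")
```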

IMPLEMENTATION

The following steps make up the post-instructional evaluation implementation process: 5) develop measurement tools, 6) collect and analyze data, and 7) report results and make necessary improvements.

5) Develop measurement tools: after obtaining management commitment, you are ready to put your plan into action. Develop the measurement tools outlined in your evaluation plan. This includes questionnaires, structured interview questions, or recording forms for the collection of existing records. You can begin developing measurement instruments even before instruction has been delivered. In fact, doing so may help clarify the aims of instruction and the most effective methods to achieve them. You may require professional assistance during this step to assure that measurement tools lend themselves to statistical analysis. While bar graphs and percentages are frequently used to report evaluation results, they do not provide the level of precision necessary to make important decisions about whether to continue or modify the program.

6) Collect and analyze data: in this step, you will collect the evaluation data according to your research design plan (see Table 5.6). With professional assistance in interpreting the data you gather, you will have the information necessary to know whether your program has achieved its objectives. If desired changes did not occur, you will probably want to know why. Below are some steps to follow in making this determination.

Re-assess on-the-job application procedures: pitfalls are common with on-the-job application. Often, the organization does not support post-instructional change. When this occurs, the learner cannot be expected to use what was learned, regardless of how well the product was designed, developed, or delivered. Phase 9 discusses on-the-job application problems in detail. If you find learners are supported on the job, yet change is not occurring, proceed to the next step.

Re-assess extent of learning (level 2): re-administer post-tests assessing retention of instructional material. If the audience does not remember the material, there is no possibility of applying it to the job. If assessments show loss of learning, provide remediation in the areas indicated and reassess. If learning is still not demonstrated, reassess the product's design, development, and delivery. If assessment of level 2 indicates the audience has retained the material and learners are supported back on the job, proceed to the next step.

Re-assess front-end and task analyses: your instruction or job aids may be geared toward the wrong problem or audience! You may need to perform a new FEA or TA. What exactly is the problem? What is the appropriate solution? What are the specific tasks needing instruction? Who is your target audience?

7) Report results and make necessary improvements: regardless of what results you have obtained, your post-instructional evaluation should be summarized in a report. Work with a professional to translate technical and statistical results into a business report format. Include the following information:
• What program was evaluated, and why
• How the evaluation was conducted, and by whom
• Findings and implications for continuation or modification
• Recommendations based on findings, budget, and time constraints
• Plan of action based on recommendations

As you proceed to implement any necessary revisions, continue to monitor the outcomes. The ISD process requires continuous improvement. Even if you reach a point with your instructional products where desired changes are occurring, you need to continue evaluation. New requirements, audiences, and specifications can change the focus of the current program. You will probably not conduct repeated formal evaluations; however, informal or formative evaluation should continue for the duration of the instructional program or job aid. After you have completed the post-instructional evaluation, evaluate the quality of your efforts by using the formative evaluation checklist (see Table 5.7).


TABLE 5.7 Evaluation: post-instruction — formative evaluation checklist

ASK YOURSELF THESE QUESTIONS (answer Y or N). Each of the following questions is addressed under major headings in this phase. Any "NO" answer should serve as an alarm that your post-instructional evaluation needs improvement!

1. Have you allowed sufficient time for the on-the-job application phase to occur before conducting a post-instructional evaluation?
2. Have you reviewed your FEA and task analysis for decisions about what individual and organizational changes are desired?
3. If using an existing product, have you reviewed available information on previous post-instructional evaluation studies?
4. Have you reviewed informal evaluation results from the on-the-job application phase for possible use in designing your measurement tools?
5. Have you chosen the appropriate measurement tools based on objectivity, accessibility, and cost?
6. Have you developed a research design for collecting data?
7. Have you estimated the cost of the evaluation?
8. Have you weighed the costs and benefits of conducting a post-instructional evaluation?
9. Have you obtained management commitment before beginning the evaluation?
10. Have you developed your measurement tools?
11. Have you collected your data, measuring levels 3 and 4?
12. Have you analyzed your results?
13. If desired changes have not occurred, have you determined why?
14. Have you reported your post-instructional evaluation results?
15. Have you made necessary improvements and continued to monitor the process?

REFERENCES

Kirkpatrick, D. L. (1967). Evaluation of training. In Craig, R. (Ed.), Training and Development Handbook. American Society of Training and Development.





6 Contract Training

Quite often, organizations do not have the ability to develop their own instructional material. When that happens, the development of the specific training is "hired out." In fact, our experience tells us that most companies have hired out not only the development of the material but also the delivery of all Six Sigma requirements. On these occasions, the organization must be vigilant so that its wishes, concerns, and objectives are met. That is the focus of this chapter. We will try to answer the question of what should be done when the training, including the delivery, is bought from a third-party vendor.

We call this process "specification" because it links all the phases of instructional design in the process of buying. This approach, we hope, provides guidance on how to professionalize every aspect of education and training, from front-end analysis of the problem through evaluation of program results. Specifications are an outline of the deliverables that should be expected from any organization that buys training material — especially the Six Sigma methodology. If additional detail is required to implement these specifications, it should be obtained from the project manager. Each phase in these specifications may be elaborated on in more detailed descriptions that can define any aspect of your requirements. Here, we give some of the most essential requirements for success.

FRONT-END ANALYSIS

Problem-solving front-end analysis: the supplier shall perform a front-end analysis that includes the following outcomes:
• Description of procedures and sources used to collect information for the front-end analysis.
• Description of desired and actual performance levels or problem/gap defined in business-unit terms (e.g., direct labor, inventory, absenteeism, etc.).
• Description of possible causes of the problem based on technical factors (e.g., materials, equipment, environment), organizational factors (e.g., structure, rewards, feedback, compensation), and human factors (e.g., skill and knowledge levels).
• Determination of potential solutions, with supporting data.
• Determination of the best solution based on a business case, including description of problem cost; cost of solution; how well the solution will solve the problem; potential customer or stakeholder support; fit to the organization's culture, business objectives, and continuous improvement policy; potential implementation barriers and related costs; and potential return on investment (a minimal sketch of the ROI arithmetic follows this section).
• Reporting of information to the organization, including a description of how and why the FEA process began; identification of gaps, causes, operating consequences, personnel, jobs, and costs involved; possible solutions; recommended solutions; how solutions will support continuous improvement; project scope and schedule; and appendices including back-up correspondence, budgets, data-gathering tools, etc.

Planning front-end analysis: the supplier shall perform a planning front-end analysis that includes the following outcome:
• Compliance with the organization's process improvement procedures.
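As a minimal sketch of the return-on-investment element of the business case (hypothetical numbers only, not figures from the text):

```python
# Hypothetical business-case arithmetic for a proposed training solution.
problem_cost = 120_000.0    # annual cost of the performance gap
solution_cost = 30_000.0    # cost to develop and deliver the solution
expected_recovery = 0.70    # fraction of the gap the solution should close

annual_benefit = problem_cost * expected_recovery
roi = (annual_benefit - solution_cost) / solution_cost
print(f"First-year ROI = {roi:.0%}")  # 180%
```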

TASK ANALYSIS

The supplier shall perform a task analysis that includes the following outcomes:
• Analysis of the audiences receiving the instruction or job aid, including primary and secondary audiences. Primary audiences include those going through the instruction or using the job aid. Secondary audiences include anyone whose support is necessary for successful performance by the primary audience, such as supervisors. Audience data, gathered from personnel records, surveys, etc., shall include information on demographics (e.g., age, gender, culture), capacity levels (e.g., intellect, cognitive style, physical development), competence levels (e.g., prior skills and training, experiential background, reading ability, languages spoken), and current skill and knowledge levels (relative to the current instructional program).
• Determination of the terminal objectives.
• Description of task steps (including all the sequenced major task steps, subtasks, sub-subtasks, etc.) required to expertly reach the terminal objective. For each major task step, include the necessary information outlined in Table 2.5. Gather task step information from accomplished performers, administrative checklists, flowcharts, interviews, observation, surveys, etc.
• Determination of a task hierarchy, based on the description of task steps.
• Description of instructional objectives for each major task step, subtask step (if needed), etc. Each instructional objective shall include a description of 1) the desired, observable task to be performed, 2) the standards by which the task accomplishment will be measured or evaluated for successful achievement, and 3) the conditions or circumstances under which the task will be performed.


• Classification of each instructional objective by storage medium (i.e., either by instruction or job aid). For example, some instructional objectives will not require a learner to attend an instructional program; sometimes, all the learner needs is a job aid.
• Development of assessment instruments, to be used for pre- and post-assessments. Items on the assessment instrument must match the instructional objectives.

Product survey: the supplier shall perform a product survey that includes the following outcomes:
• Survey of existing instructional products, locating those with the potential of meeting requirements outlined in the corresponding front-end and task analyses.
• Elimination of unsuitable products, including those products with a poor reputation, unreasonable prices, and inappropriate delivery media.
• Evaluation of remaining suitable products according to Table 6.1.
• A cost-benefit analysis for modifying an existing product or products, including a description of all necessary changes and a comparison of costs to modify (by the company or by a supplier) with those to develop a customized program (by the company or by a supplier).

DESIGN OF INSTRUCTION

The supplier shall design instruction that includes the following outcomes:
• Development of a logically sequenced, technically accurate content outline based on the task hierarchy from the corresponding task analysis.
• Development of a course strategy, including course title, description of lessons, modules, etc.
• Description of instructional methods (e.g., lecture, group discussion, one-on-one, self-paced, simulation, role play, case study, on-the-job training, fieldwork, etc.) for each instructional objective. Instructional method decisions should be based on design and development principles (for example, see Table 6.2), instructional objectives, audience characteristics, and available resources.
• Description of instructional media (e.g., print, visual aids, audio, audiovisual, computerized, physical objects, audience response systems, etc.) based on design and development principles, instructional objectives, audience characteristics, and available resources.
• Determination of "A Plan for Development" of the instructional product, considering the method and media chosen and design and development principles. Development specifications shall include when, where, and how the following instructional elements shall be added to the instructional product: examples, drill and practice sessions, activities, illustrations, charts and diagrams, exhibits, simulations, reviews, summaries, and remediation.


TABLE 6.1 Criteria for evaluating products
(Rate each criterion: A = Acceptable; M = Modification Needed; NA = Not Acceptable)

AUDIENCE: The product should meet all requirements for primary and secondary audiences. Examine readability, maturity level, cultural fit, etc.

OBJECTIVES: The product's objectives should be similar to yours. If objectives are not explicitly stated, reconstruct each using the product's content, test questions, exercises, etc.

TASK INFORMATION: If task steps are not explicitly identified, examine the product's content for task information. Make sure the product's task steps match your task analysis (TA) steps.

STORAGE MEDIUM: Your requirements may call for job aids, yet most products are geared to instruction only. Determine the adaptability of the product to your storage medium requirements.

GENERAL ANALYSIS: Review Chapter 4, Development of Materials, for a summary of learning principles. Determine the extent to which the product uses these principles.

ASSESSMENT INSTRUMENTS: Many products will not provide you with assessment instruments. Make sure tests are included and that test items match your objectives.

VALIDATION INFORMATION: Has the product resulted in real, documented performance gains? Does the research sample fit your audience?

USER REFERENCES: What do other companies or individuals within your organization think about the product?

DESIGN OF JOB AIDS

The supplier shall design job aids that include the following outcomes:
• Development of a logically sequenced, technically accurate content outline based on the task hierarchy from the corresponding task analysis.
• Description of instructional media (e.g., print, visual aids, audio, audiovisual, computerized, physical objects, audience response systems, etc.) based on the design and development principles in Table 6.2, instructional objectives, audience characteristics, and available resources.


TABLE 6.2 Design and development principles

SEQUENCE MATERIAL: Gain learner attention. Inform the learner of the objectives. Present the desired outcome. Demonstrate the desired outcome. Ask for performance. Give feedback on performance.

MAKE IT INTERACTIVE: Incorporate questions into materials. Devote 70% of time to discussion formats, active practice sessions, and immediate feedback.

KEEP IT SIMPLE: Present only one idea at a time, at the appropriate level of difficulty. Use logical sequencing.

APPEAL TO THE SENSES: Use color to draw attention, mnemonics to help retention, and multiple delivery systems to add variety and interest. Create hands-on learning experiences.

PROMOTE UNDERSTANDING AND REINFORCEMENT: Use examples/nonexamples/analogies. Elaborate on the content. Restate in greater detail or in different ways. Use overviews, summaries, and reviews. Use imagery, contrasts, and comparisons. Introduce new concepts at the beginning and go over them in detail later. Develop outlines or job aids to reinforce the principles and concepts learned.

PROMOTE ACCEPTANCE: Connect instruction to learners' personal or professional goals, interests, experiences, or present job. Combine new material with learners' current knowledge base. Stress learners' ability to be successful. Use lots of visuals. Give learners choices regarding pace, activities, etc., if possible.

PROMOTE PRACTICE: Provide numerous opportunities for learners to practice what they learned. Provide remediation opportunities.

• Plan for development of the job aid based on design and development principles in Table 6.2, including determination of format (e.g., cookbook, flowchart, decision table, etc.), and inclusion of various instructional elements (e.g., illustrations, examples, charts, and diagrams). Determination shall be made as to when, where, and how each instructional element should be added to the job aid content.

DEVELOPMENT OF MATERIALS

The supplier shall develop the instruction or job aid that includes the following outcomes:


TABLE 6.3 Development principles

PRINT: Place illustrations close to the referenced text. Label and caption all illustrations, etc. Keep "cues" (boldface, etc.) to 10% or less of the text. Place critical information either first or last in sentences or lists. Use color coding for easy access. Write the procedure name at the top of each page. Indent, bullet, and number steps and substeps. Use three to four sentences per paragraph. Use the same vertical and horizontal spacing throughout. Use lots of white space.

VISUAL AIDS: Keep visuals short and simple and text large and legible, giving details on a separate handout. Use no more than eight lines per visual and eight words per line. Use short titles, borders, and white space. Use the same fonts throughout, except for titles. Integrate graphics and color.

AUDIO: Use short pauses and change volume, pitch, and pace to make key words and phrases stand out or to maintain attention. Use short phrases and limit unwanted sounds. Make sure music does not compete or distract. Make sure narration is clear and can be heard.

AUDIO/VISUAL: Refer to the audio and visual sections. "Chunk" information instead of presenting a full day's lesson plan in one session. Use neutral fashion and decor. Use bold video graphics for visibility.

COMPUTER-RELATED: Program "easy access" into each lesson. Use boxes, color, and highlights to direct attention. Allow learner control of pacing. Allow adequate learner response time. Limit the amount of text on screen. Present one idea per screen, one or two sentences long.

• Receipt of format approval from appropriate stakeholders after development of a brief sample of the finished product, per the design or job aid plans. Use the development principles in Table 6.3 when constructing the brief sample.
• Expansion of brief formats into complete rough drafts, after receiving format approval. Use prototypes to show what is intended if some features are too costly to produce in rough draft form. Refer to Table 6.4 when creating rough drafts.
• Determination of the technical and editorial accuracy of each rough draft, using a sample of stakeholders, sponsors, subject matter experts, and potential customers, including answers to the questions listed after Table 6.4.


TABLE 6.4 Forms of rough drafts

PRINT
Examples: textbooks, workbooks, manuals, programmed texts, placards, and any materials for participation/practice activities.
Form of rough draft: written draft of any text materials.

VISUAL AIDS
Examples: charts, diagrams, graphs, illustrations, drawings, photographs, exhibits, projected images, overheads, slides, etc.
Form of rough draft: sketches with accompanying text, if any.

AUDIO
Examples: radio, cassettes, reel-to-reel, disc, records.
Form of rough draft: scripts and/or musical score.

AUDIOVISUAL
Examples: filmstrips, television, motion pictures, video, lecture, lab, or other demonstrations.
Form of rough draft: storyboards, scores, and scripts.

COMPUTERIZED MEDIA
Examples: computer-based instruction, computer management of instruction, computer-supported learning aids, interactive video, audience response systems.
Form of rough draft: frames, storyboards, computer programs, simulations.

PHYSICAL OBJECTS
Examples: simulated environments, job aids, etc.
Form of rough draft: simulations or rough models.

PARTICIPATION
Examples: group discussion, role plays, case studies, on-the-job training, field trips, internships, structured environments, one-on-one instruction.
Form of rough draft: outline of procedures, written draft of text.

Questions for determining the technical and editorial accuracy of each rough draft:
• Does content match the content outline?
• Is content sequenced logically?
• Is content technically accurate?
• Is content clear, concise, and understandable?
• Does content teach to the test?
• Does content match objectives and audience characteristics?
• Can FEA goals be reached given the content?
• Are instructional elements (examples, illustrations, etc.) properly placed and technically accurate; do they match, support, and clarify content?
• Is remediation provided and acceptable?
• Are instructions clear?
• Is audio/video clear, appropriate, and understandable?
• Are punctuation, grammar, etc., accurate?


• Determination of revisions according to one-on-one and small-group reactions.
• Development of the final product following revision suggestions and the design and development plans.

EVALUATION: PILOT TESTING

The supplier shall pilot test the instruction or job aid with the following outcomes:
• Assessment of the instructional products according to Kirkpatrick's (1994, 1996) levels of evaluation, levels 1 and 2, including information on audience reactions (How did learners feel about the instruction/job aid?) and learning (What facts, techniques, skills, or attitudes did learners understand and retain?).
• Selection of a pilot test sample group based on a randomly drawn, representative sample of the total audience population. If a random sample is not feasible, choose an audience that represents an equal mix of the audience's characteristics.
• Assessment of baseline levels of audience knowledge and skill using the assessment instruments developed in task analysis.
• Delivery of the instruction or job aid according to the delivery plan (for example, if delivery is planned as individualized, deliver the instruction in an individual manner). The pilot audience shall be made aware they are participating in a pilot and that their feedback is necessary. The supplier shall provide training for those delivering the instructional product, if necessary.
• Assessment of audience reactions to instruction using an audience response questionnaire (see typical content of a questionnaire in Table 6.5).
• Assessment of audience reactions to job aids using questions such as:
  • Did the aid help in performing the job?
  • Did the aid make your job easier?
  • Did the aid help to solve any on-the-job problems?
  • What improvements could be made to the job aid?
  • Did the aid include enough information?
• Assessment of learning gains using pre- and post-instruction knowledge and skill-assessment instruments developed in task analysis. Use a "90/90" measure for acceptable pilot testing results, meaning that 90% of the audience has learned 90% of the material upon completion of the instruction, or that 90% of the people successfully perform a newly learned skill 90% of the time (a minimal sketch of this check appears below).
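As a rough illustration of the 90/90 criterion, here is a minimal Python sketch; the function name and the pilot scores are invented for the example.

```python
def meets_90_90(post_test_scores, passing_fraction=0.90, mastery_fraction=0.90):
    """Return True if at least 90% of the pilot audience mastered
    at least 90% of the material (scores are fractions from 0 to 1)."""
    mastered = sum(1 for s in post_test_scores if s >= mastery_fraction)
    return mastered / len(post_test_scores) >= passing_fraction

# Ten hypothetical pilot post-test scores; nine of ten are at or above 0.90.
scores = [0.95, 0.92, 0.88, 0.97, 1.00, 0.91, 0.93, 0.96, 0.94, 0.90]
print(meets_90_90(scores))  # True (9/10 = 90% mastered at least 90% of the material)
```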

DELIVERY OF MATERIALS

The supplier shall deliver the instruction or job aid that includes the following outcomes:


TABLE 6.5 Typical audience response questionnaire

Course name: identify the course you are attending.
Instructor name: identify the instructor's name.
Date: write the date of the training.
Directions for the questionnaire: how to answer the questions.
Program content: several questions are asked about the program in general, including but not limited to objectives, length, material, new information, on-the-job application, and general understanding of the course.
Instructional material: several questions are asked about the clarity and usefulness of materials.
Instructional presentation: several questions are asked about the instructor, including but not limited to pace, time given for exercises, knowledge, response to the participants' needs, presentation style and clarity, enthusiasm, preparation and organization, and whether or not the instructor encourages participation.
General evaluation: several questions are asked about the transfer of knowledge, expectations, future recommendations, appropriate prerequisites, appropriateness of facilities, and the participant's opinion as to whether or not the course met the organizational objectives.
Demographics: generally, this section is reserved for personal information from the participant. Typical requests are age, education, experience in current position, experience with the company, department of employment, and how the participant heard about the training.
General comments: this section is reserved for the participant who wants to elaborate on his or her comments. Space is provided for things you liked, disliked, would like more information about, other comments, and whether or not you need to be contacted by anyone to further discuss your concern(s); if yes, provide an e-mail address or phone number.

(Note: a) Questionnaires do not ask for a participant's name. b) They are usually phrased on a Likert-type scale [1–4 or 1–6]; try to use an even number of points, so that the participant is forced to choose the direction of his or her decision. If an odd number of choices is given, the majority of people will choose the middle number, which is undesirable from an evaluation perspective because it does not communicate the participant's true preference. c) Sometimes the responses are adjectives reflecting, for example, agreement or disagreement with the given statement, such as: strongly agree, agree, mildly agree, mildly disagree, disagree, strongly disagree.)
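To illustrate the note's point about even-numbered scales, here is a small, hypothetical tabulation of responses on a 1–6 Likert-type scale; because there is no middle category, every response can be classified as leaning favorable or unfavorable.

```python
from collections import Counter

# Twelve hypothetical responses on a 1-6 scale (6 = strongly agree).
responses = [5, 6, 4, 3, 5, 2, 6, 5, 4, 4, 3, 5]

counts = Counter(responses)
n = len(responses)
favorable = sum(1 for r in responses if r >= 4) / n  # top half of the scale

print(dict(sorted(counts.items())))            # frequency of each rating
print(f"Mean rating: {sum(responses)/n:.2f}")  # 4.33
print(f"Favorable (4-6): {favorable:.0%}")     # 75%
```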


• Description of the environment in which the instruction or job aid will be delivered (e.g., classroom, laboratory, etc.).
• Description of expected patterns of use (e.g., hours, climate, season, etc.).
• Determination of specifics about where, when, and how the instruction or job aid will be delivered.
• Development of "A Plan for Management Support."
• Development of "A Plan for Audience Acceptance" of instruction or job aids, considering audience, instructor, and environmental characteristics.
• Development of "A Finalized Delivery Plan," including description of facility and sites, equipment, supplies, schedule, instructors, and miscellaneous items such as security clearances, special transportation, parking, etc.
• Development of "A Plan for Ongoing Evaluation" for delivery of the instruction or job aid.

ON-THE-JOB APPLICATION

Perhaps one of the most important elements of any training is the issue of transferring knowledge from training to the workplace. At this stage, it is imperative to know in advance how the training will be implemented. This will also help determine the cost-benefit of the training in level 4 evaluations. Therefore, the supplier shall deliver the instruction or job aid with the following outcomes:
• Development of "A Plan for Application to the Job," including analysis of the workplace environment.
• Determination of the relevancy of the instructional product to the job.
• Description of opportunities in instruction for practice, feedback, and remediation.
• Development of a learner/supervisor agreement (see Table 6.6). (Note: the learner/supervisor agreement, we admit, is hardly ever used, and that is perhaps why so many excellent training programs fail. Its intent is to sensitize both learner and supervisor to the idea that this training is not a free day away from daily tasks but rather an investment in doing the tasks better or in innovative ways. This agreement puts both on notice as to what is expected and what the specific deliverables are.)

EVALUATION: POST-INSTRUCTION

The supplier shall evaluate the instruction or job aid that includes the following outcomes:
• Development of "A Plan for Evaluation," including specific learner or organizational changes to be measured, specific data collection instruments, research design (see Table 6.7), and estimated cost of evaluation.


TABLE 6.6 Learner/supervisor post-instructional agreement

• Expected post-instructional behavior
• Resources needed to attain expected behavior
• Supervisory support needed
• Opportunities for performance
• Expected obstacles
• Deadline for expected performance
• Employee comments
• Supervisor comments
• Signatures

TABLE 6.7 Research design action plan

• Desired change to be measured
• Measurement tool(s)
• Sample source
• Sample size
• Sample location
• Data collection time plan
• Data collection personnel
• Statistical analysis procedures
• Statistical analysis time plan
• Statistical analysis personnel

• Cost-benefit analysis of performing the post-instructional evaluation.
• Development of applicable measurement tools.
• A final report, including what program was evaluated and why; how the evaluation was conducted and by whom; findings and implications (including statistical analyses); recommendations based on findings, implications, and budget and time constraints; and a plan of action.
• Development of "A Plan for Continuous Improvement."


REFERENCES

Kirkpatrick, D. L. (1994). Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.
Kirkpatrick, D. L. (1996, January). Revisiting Kirkpatrick's four-level model. Training and Development Journal, pp. 54–59.



Part II Training for the DMAIC Model


7 Six Sigma for Executives

The intent of the executive training is to give executives an overview of the Six Sigma methodology. It is geared toward the leadership of the organization, who will either approve the program in the organization or review and manage it. As a consequence, the focus is on a very high-level explanation of the methodology and the expectations, with hardly any specificity about tools. Some attention is given to the significance of the project; however, even this includes no detail. It is often suggested that simple exercises be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may be used to define a process, to explain the cost of quality in the organization, to identify the customer, etc. A central issue for this training is the notion of customer satisfaction and organizational profitability.

Because organizations and their goals are quite different, we provide the reader with a suggested outline of the training material for this executive session. It should last one day (sometimes two), and the level of difficulty depends on the participants. The detailed information may be drawn from the first six volumes of this series. A typical executive program may want to focus on the following instructional objectives. The reader will notice that some categories contain no objectives; this is because, for this stage of training, the material may be overwhelming and quite often unnecessary.

INSTRUCTIONAL OBJECTIVES — EXECUTIVES

RECOGNIZE CUSTOMER FOCUS
• Provide a definition of the term Customer Satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Provide examples of the y and x terms in the expression y = f(x).
• Interpret the expression y = f(x) (an illustrative sketch follows this list).
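Purely as an illustration of y = f(x) (the model and the numbers below are invented, not drawn from the text), a customer-facing output y can be treated as a function of controllable process inputs x:

```python
# Illustrative transfer function: the output y depends on the inputs x.
def delivery_time(order_queue, pick_rate, rework_fraction):
    """y = f(x1, x2, x3): hours to fill an order (hypothetical model)."""
    return (order_queue / pick_rate) * (1.0 + rework_fraction)

# Improving an input (x), not inspecting the output (y), moves the metric:
print(delivery_time(order_queue=120, pick_rate=30, rework_fraction=0.25))  # 5.0
print(delivery_time(order_queue=120, pick_rate=30, rework_fraction=0.05))  # 4.2
```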

BUSINESS METRICS
• Define the nature of a performance metric.
• Identify the driving need for performance metrics.
• Provide a listing of several key performance metrics.


• Identify the fundamental contents of a performance metrics manual.
• Recognize the benefits of a metrics manual.
• Understand the purpose and benefits of improvement curves.
• Explain how a performance metric improvement curve is used.
• Explain what is meant by the phrase Six Sigma Rate of Improvement.
• Explain why a Six Sigma improvement curve can create a level playing field across an organization.
• State some problems (or severe limitations) inherent in the current cost-of-quality theory.
• Identify and define the principal categories associated with quality costs.
• Compute the cost-of-quality (COQ) given the necessary background data (a worked illustration follows this list).
• Provide a detailed explanation of how a defect can impact the classical COQ categories.
• Explain the benefit of plotting performance metrics on a log scale.
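A worked illustration of the COQ computation named above, assuming the four classical quality-cost categories (prevention, appraisal, internal failure, and external failure) and entirely hypothetical figures:

```python
# Hypothetical annual quality costs, in dollars.
prevention = 40_000         # training, quality planning
appraisal = 60_000          # inspection and test
internal_failure = 150_000  # scrap and rework found in-house
external_failure = 250_000  # warranty, returns, complaints

coq = prevention + appraisal + internal_failure + external_failure
sales = 5_000_000
print(f"COQ = ${coq:,} ({coq / sales:.1%} of sales)")  # COQ = $500,000 (10.0% of sales)
```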

SIX SIGMA FUNDAMENTALS
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Understand the role of questions in the context of management leadership.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Identify the parts-per-million defect goal of Six Sigma.
• Define the magnitude of difference between three, four, five, and six sigma.
• Recognize that defects arise from variation.
• Define the three primary sources of variation in a product.
• Describe the general methodologies that are required to progress through the hierarchy of quality improvement.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four-sigma business.
• Understand the key success factors related to the attainment of Six Sigma.
• Understand why inspection and test are non-value-added to a business and serve as a roadblock to achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Understand the basic elements of a sigma benchmarking chart.
• Interpret a data point plotted on a sigma benchmarking chart.
• Understand the difference between the ideas of benchmark, baseline, and entitlement cycle time.


• Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify.
• Understand that work-in-process (WIP) is highly correlated to the rate of defects.
• Rationalize the statement: the highest-quality producer is the lowest-cost producer.
• Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure.
• Understand that global benchmarking has consistently revealed four sigma as average, while best-in-class is near the Six Sigma region.
• Draw first-order conclusions when given a global benchmarking chart.
• Provide a brief description of the five-sigma wall: what it is, why it exists, and how to get over it.
• State the general findings that tend to characterize or profile a four-sigma organization.
• Explain how the sigma scale of measure could be employed for purposes of strategic planning.
• Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart.
• Understand how a Six Sigma product without a market will fail, while a Six Sigma product in a viable market is virtually certain to succeed.
• Provide a qualitative definition and graphical interpretation of the standard deviation.
• Understand the driving need for breakthrough improvement vs. continual improvement.
• Understand the difference between the ideas of benchmark, baseline, and entitlement cycle time.
• Provide a brief description of the outcome 1 – Y.rt.
• Recognize that the quantity 1 + (1 – Y.rt) represents the number of units that must be produced to extract one good unit from a process (see the numerical sketch at the end of this section).
• Describe what is meant by the term mean time before failure (MTBF).
• Interpret the temporal failure pattern of a product using the classical bathtub reliability curve.
• Interpret an array of sigma benchmarking charts.
• Define the three primary sources of variation in a product.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Explain how process capability impacts the pattern of failure inherent in the infant mortality rate.
• Provide a rational definition of the term latent defect and how such defects can impact product reliability.
• Explain how defects produced during manufacture influence product reliability, which, in turn, influences customer satisfaction.
• Recognize that the sigma scale of measure is at the opportunity level, not at the system level.
• Define the two primary components of process breakthrough.


• Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control).
• Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough.
• Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from a quality, cost, and cycle-time point of view.
• Understand that the term sigma is a performance metric that applies only at the opportunity level.
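The rolled-throughput-yield ideas above reduce to simple arithmetic. A minimal sketch with hypothetical step yields:

```python
import math

# Hypothetical throughput yields for four sequential process steps.
step_yields = [0.98, 0.95, 0.97, 0.99]

y_rt = math.prod(step_yields)    # rolled-throughput yield: P(zero defects)
units_per_good = 1 + (1 - y_rt)  # first-order units started per good unit

print(f"Y.rt = {y_rt:.3f}")                           # 0.894
print(f"Units per good unit = {units_per_good:.3f}")  # 1.106
```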

DEFINE

NATURE OF VARIABLES

• Explain the nature of a leverage variable and its implications for customer satisfaction and business success.
• Explain what a dependent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.
• Explain what an independent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.

OPPORTUNITIES FOR DEFECTS

• Provide a rational definition of a defect.
• Provide a definition of the term opportunity for defect, recognizing the difference between active and passive opportunities.
• Recognize the difference between uniform and random defects.
• Compute the defect-per-unit metric given a specific number of defects and units produced (a worked sketch follows this list).
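
To make the defect-per-unit objective concrete, here is a minimal sketch in Python; the function name and the counts are ours, invented purely for illustration.

    # Hypothetical illustration of the defects-per-unit (DPU) metric.
    def dpu(defects: int, units: int) -> float:
        """Total defects observed divided by total units produced."""
        return defects / units

    # Example: 34 defects found across 200 units gives 0.17 DPU.
    print(dpu(34, 200))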

CTX TREE
• Define the term critical to satisfaction characteristic (CTS) and its importance to business success.
• Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction.
• Define the term critical to process characteristic (CTP) and its importance to product quality.

PROCESS MAPPING
• Construct a process map using standard mapping tools and symbols.
• Explain how process maps can be linked to the CT Tree to identify problem areas.
• Explain how process maps can be used to identify constraints and determine resource needs.
• Define the key elements of a process map.


PROCESS BASELINES
Nothing specific

SIX SIGMA PROJECTS
• Explain why the five key planning questions are so important to project success.
• Create a set of criteria for selecting and scoping Six Sigma black belt projects.

SIX SIGMA DEPLOYMENT
• Provide a brief description of the nature of a Six Sigma black belt (SSBB).
• Provide a brief description of the nature of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of a Six Sigma champion.
• Provide a brief description of the key implementation principles and identify the principal deployment success factors.
• List all of the planning criteria for constructing a Six Sigma implementation and deployment plan.
• Construct a generic milestone chart that identifies all of the activities necessary for successfully managing the implementation of Six Sigma.
• Develop a business model that incorporates and exploits the benefits of Six Sigma.
• Describe the role and responsibilities of a Six Sigma black belt.
• Recognize the importance of, and provide a description for, the plan-train-apply-review (PTAR) learning process.
• Provide a brief description of the nature of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of a Six Sigma master black belt.
• Understand the Six Sigma black belt instructional curriculum.

MEASURE

Scales of Measure
• Identify the four primary scales of measure and provide a brief description of their unique characteristics.

DATA COLLECTION
• Provide a specific explanation of what is meant by the term replicate in the context of a statistically designed experiment.

Measurement Error
Nothing specific


Statistical Distributions
• Construct and interpret a histogram for a given set of data.
• Understand what a normal distribution and a typical normal histogram are and how they are used to estimate defect probability.
• Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot.

Static Statistics
• Provide a qualitative definition and graphical interpretation of the variance.
• Compute the sample standard deviation, given a set of data.
• Explain why a sample size of n = 30 is often considered ideal (in the instance of continuous data).
• Compute the mean, standard deviation, and variance for a set of normally distributed data.
• Provide a graphical understanding of the standard deviation and explain why it is so important to Six Sigma work.

Dynamic Statistics
• Explain what phenomenon could account for a differential between the short-term and long-term standard deviations.

ANALYZE

Six Sigma Statistics
• Identify the key limitations of the performance metric Final Yield (i.e., output/input).
• Identify the key limitations of the performance metric First-Time Yield (Y.ft).
• Compute the throughput yield (Y.tp) given an average first-time yield and the number of related defect opportunities.
• Provide a rational explanation of the differences between product yield and process yield.
• Explain why the performance metric Rolled-Throughput Yield (Y.rt) represents the probability of zero defects.
• Compute the probability of zero defects (Y.rt) given a specific number of defects and units produced (a worked sketch follows this list).
• Understand the impact of process capability and complexity on the probability of zero defects.
• Provide a brief description of how one would implement and deploy the performance metric Rolled-Throughput Yield (Y.rt).
• List at least five separate sources that could offer the data necessary to estimate a sigma capability.
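
As a minimal sketch of the Y.rt objectives above, assuming the Poisson defect model commonly used in the Six Sigma literature (the counts here are illustrative only):

    import math

    # Illustrative counts only: 25 defects observed across 100 units.
    defects, units = 25, 100
    dpu = defects / units              # defects per unit
    y_rt = math.exp(-dpu)              # rolled-throughput yield = P(zero defects) under Poisson
    units_per_good = 1 + (1 - y_rt)    # units that must be produced per good unit extracted

    print(f"DPU = {dpu:.2f}, Y.rt = {y_rt:.3f}, units per good unit = {units_per_good:.2f}")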


Process Metrics
Nothing specific

Diagnostic Tools
Nothing specific

Simulation Tools
Nothing specific

Statistical Hypotheses
Nothing specific

CONTINUOUS DECISION TOOLS
• Provide a general description of the term experimental error and explain how it relates to the term replication.
• Provide a general description of one-way analysis of variance and discuss the role of sample size (a worked sketch follows this list).
• List the principal assumptions underlying the use of ANOVA and provide a general understanding of their practical impact should they be violated.
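
A minimal one-way ANOVA sketch in Python using SciPy's f_oneway; the three operator samples are invented purely for illustration:

    from scipy import stats

    # Illustrative data: the same CTQ measured under three operators.
    operator_a = [8.2, 8.5, 8.4, 8.6, 8.3]
    operator_b = [8.9, 9.1, 8.8, 9.0, 9.2]
    operator_c = [8.4, 8.6, 8.5, 8.7, 8.4]

    f_stat, p_value = stats.f_oneway(operator_a, operator_b, operator_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p-value signals a real difference in means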

DISCRETE DECISION TOOLS
• List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.
• Provide a brief explanation of the chi-square statistic and the conditions under which it can be applied (a worked sketch follows this list).
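
A hedged sketch of the chi-square statistic applied to discrete defect data; the shift-by-outcome counts are hypothetical:

    from scipy import stats

    # Hypothetical counts: rows are shifts, columns are (defective, good) units.
    observed = [[12, 188],
                [25, 175],
                [9, 191]]

    chi2, p, dof, expected = stats.chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")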

IMPROVE

EXPERIMENT DESIGN TOOLS
• Provide a general description of what a statistically designed experiment is and what such an experiment can be used for.
• Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers.
• Describe the two primary components of an experimental system and their related subelements.
• Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis.
• State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative.


ROBUST DESIGN TOOLS
• Explain what is meant by the term robustness and explain how this understanding translates to experimental design and process tolerancing.

EMPIRICAL MODELING TOOLS
Nothing specific

TOLERANCE TOOLS
Nothing specific

RISK ANALYSIS TOOLS
• Demonstrate how the Six Sigma Risk Assessment methodology can be applied to engineering, manufacturing, transactional, and commercial problems.
• List the disadvantages associated with worst-case analysis and compute the probability of worst case given the process capability data.

DFSS PRINCIPLES
• Understand the fundamental ideas underlying the notion of manufacturability.

CONTROL

PRECONTROL TOOLS
• Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented.

CONTINUOUS SPC TOOLS
• Explain what is meant by the term statistical process control and discuss how it differs from statistical process monitoring.

DISCRETE SPC TOOLS
Nothing specific

OUTLINE OF ACTUAL EXECUTIVE TRAINING CONTENT — 1 DAY

Based on the above general objectives, it is recommended that the training follow the content format described below. By no means is this the only format possible. In fact, we provide two options. The first is the traditional 1-day orientation, and the second is a 2-day overview with more details. However, we believe that the content for both options follows a hierarchical sequence, and in this way we have attempted to accommodate the learning process. (The reader will notice that for the executive training, we make no distinction between transactional, technical, or manufacturing in the training because the people responsible (the executives) are one and the same for all three categories. Therefore, the material of the training is the same.)

Introductions
Agenda
Ground rules
Exploring Our Values

MAXIMIZE CUSTOMER VALUE
The value of delivering outstanding quality consistently.

MINIMIZE PROCESS COSTS
Dramatically reduce waste and inefficiency. In other words, Six Sigma properly applied helps your company achieve operational excellence. Improperly applied, it becomes the "program-of-the-month" that fails to fully engage the commitment of valuable resources.

SIX SIGMA LEADERSHIP
Six Sigma success starts at the top, with managers and leaders who understand that Six Sigma is more than statistical tools and black belts; it is a philosophy of organizational profitability and customer satisfaction.
How Six Sigma can and should be applied in your business environment.
The resources needed to build your Six Sigma infrastructure.
Actions required to achieve short- and long-term Six Sigma success.

THE SIX SIGMA DMAIC MODEL
Define
Measure
Analyze
Improve
Control

HOW SIX SIGMA FITS
What Six Sigma is and is not.
How do I know Six Sigma is right for my organization?


LEADERSHIP PREREQUISITES
Communicating the vision, strategies, and plans.
Developing an operational excellence strategy.
Establishing metrics to drive and gauge continuous improvement.

DEPLOYMENT INFRASTRUCTURE
Project selection
Candidate selection
Roles and responsibilities
Champions, black belts, green belts, HR, finance
Training and project support logistical considerations

SUSTAINING THE GAINS
Creating a learning organization.
Establishing a knowledge-sharing discipline.

PROJECT REVIEW GUIDELINES

If time permits, it is strongly suggested to review the project guidelines. They are:

Define/Measure
• Identify CTQs
• Ys (KPOVs) and Xs (KPIVs)
• C&E matrix
• C&E diagram
• Data collection plan
• Measurement system analysis
• Pareto
• Histogram/box plot
• Process baseline (performance)
• Process entitlement
• Capability
• FMEA
• COPQ

Analyze
• Benchmarking
• Multivariate study
• Hypothesis testing
• Regression analysis
• One-sample t test
• Two-sample t test
• Analysis of variance (both means and variances)
• Analysis of means
• Proportion test
• Chi square
• ID key factors

Improve
• ID KPIV levels
• Choose experimental design
• Fractional factorial
• Full factorial
• Replication
• Main effects plot
• Interactions plot

Control
• Control charts
• FMEA
• Cost review

ALTERNATIVE SIX SIGMA EXECUTIVE TRAINING — 2 DAYS

Introductions
Agenda
Ground rules
Exploring our values
• Comparing value systems
• Behavior and values
• Improving business performance by improving quality and consistently meeting customer expectations

MEASUREMENT
• Measuring inputs, not just outputs
• Reducing defects, by improving process and product, to help achieve business objectives
• Measurements get attention
• Performance metrics reporting
• What do we measure now?
• What numbers get the most attention in your area?
• What quality measurements do we have?
• How do we use these measures?
Critical to satisfaction


MAXIMIZING THE CUSTOMER–SUPPLIER RELATIONSHIP
• Deriving value from the need–do interaction
• Maximizing the interaction
• Supplier strives for performance on cycle time, cost, and defects to meet customers' increasing expectations on delivery, price, and quality.
• Linking customer needs and what we do
• The overall perspective...

THE CLASSICAL VS. THE SIX SIGMA PERSPECTIVE OF YIELD
• Measuring first-pass yield
• Final yield (Yfinal)
• First-time yield (YFT)
• Rolled-throughput yield
• Product A is produced in three consecutive (independent) steps
• Calculating normalized yield (a worked sketch follows this list)
• Normalized yield is the average yield-per-step of a sequential process...
• Six Sigma breakthrough challenge
• The hidden factory and rolled yield
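
To illustrate the rolled-throughput and normalized yield items above, a minimal sketch with invented step yields for a three-step process:

    import math

    # Illustrative first-time yields for three consecutive, independent steps.
    step_yields = [0.95, 0.90, 0.98]

    y_rt = math.prod(step_yields)            # rolled-throughput yield of the sequence
    y_norm = y_rt ** (1 / len(step_yields))  # normalized yield: average yield per step

    print(f"Y.rt = {y_rt:.3f}, normalized yield = {y_norm:.3f}")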

TRADITIONAL YIELD VIEW

THE TWO TYPES OF DEFECT MODELS
• Uniform defect: the same type of defect appears within a unit of product; e.g., wrong type of steel.
• Random defect: the defects are intermittent and unrelated; e.g., flaw in surface finish.

PROCESS CHARACTERIZATION
• Mean: arithmetic average of a set of values
• Variance: the average of the squared differences between each measurement and the mean
• Standard deviation: the square root of the variance. As the standard deviation increases, DPM increases. (A worked sketch of these statistics follows this list.)
• Normal distribution: behavior of a process in the long term
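
A minimal sketch of these process-characterization statistics using Python's standard library; the measurements are invented for illustration:

    import statistics

    # Illustrative measurements of a single CTQ.
    data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]

    mean = statistics.mean(data)
    var = statistics.variance(data)  # sample variance (n - 1 denominator)
    sd = statistics.stdev(data)      # square root of the variance

    print(f"mean = {mean:.3f}, variance = {var:.4f}, standard deviation = {sd:.4f}")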

THE FOCUS OF SIX SIGMA — CUSTOMER SATISFACTION AND ORGANIZATIONAL PROFITABILITY
• Y = f(x)
• The leverage principle
• Three variation reduction strategies
• Six Sigma breakthrough strategy
• DMAIC
  • Define
  • Measure
  • Analyze
  • Improve
  • Control

DEFINITION OF A PROBLEM

ROLES AND RESPONSIBILITIES
• Roles of an executive
  • Establish the vision — why are we doing Six Sigma?
  • Articulate the business strategy — how does Six Sigma support the business strategy?
  • Provide resources
  • Remove roadblocks/buffer conflicts

ROLES OF A CHAMPION
• Develop a vision for the organization
• Create and maintain passion
• Develop a model for a perfect organization
• Facilitate the identification and prioritization of projects
• Develop the strategic decisions in the deployment of Six Sigma around timing and sequencing of manufacturing, transactional, and new-product focus
• Extend project benefits to additional areas
• Communicate and market the breakthrough strategy process and results

ROLES OF THE MASTER BLACK BELT
• Be the expert in the tools and concepts
• Develop and deliver training to various levels of the organization
• Certify the black belts (BBs)
• Assist in the identification of projects
• Coach and support BBs in project work
• Participate in project reviews to offer technical expertise
• Partner with the champions
• Demonstrate passion around Six Sigma
• Share best practices
• Take on leadership of major programs
• Develop new tools or modify old tools for application
• Understand the linkage between Six Sigma and the business strategy


ROLES OF THE BLACK BELT
• Knowledgeable of the breakthrough strategy application
• Prepare initial project assessment to validate benefits
• Lead and direct the team to execute projects
• Determine most effective tools to apply
• Show the data
• Identify barriers
• Identify project resources
• Get input from knowledgeable functional experts/team leaders/coaches
• Report progress to appropriate leadership levels
• Present the final report
• Deliver results on time
• Solicit help from champions when needed
• Influence without direct authority
• Be a breakthrough strategy enthusiast
• Stimulate champion thinking
• Teach and coach breakthrough strategy methods and tools
• Manage project risk
• Ensure the results are sustained
• Document learning
• Prerequisites
  • Process/product knowledge
  • Willing and able to learn mathematical concepts
  • Knows the organization
  • Communication skill
  • Self-starter/motivated
  • Open-minded
  • Eager to learn new ideas
  • Desire to drive change
  • Project leadership skills
  • Team player
  • Respected by others
  • Track record on results
  • Knowledgeable in breakthrough strategies
  • Results oriented

(Emphasis must be given to the notion of investment vs. return, since all black belts drive large cost and capacity improvements — an average of $200,000+ per project. Therefore, for a successful black belt project, involvement/ownership by the plant/support functions is critical!)

How can executives accelerate the change process? The following points should be considered and discussed thoroughly:

• Six Sigma breakthrough lessons learned: Six Sigma is a methodology to provide breakthrough results. However, for the breakthroughs and results to continue, there are constant barriers and challenges to break down or to overcome. For example:
  • After 9 months, 20–25% of all black belts typically are not working on projects
    • Reasons: high promotion rate
    • Enticed with $ from suppliers
    • Did one or two projects and went back to their original jobs
  • All black belt projects were successful, but only 70% of the dollars could be tracked to the bottom line
    • Reasons: many "cost avoidance" projects
    • Finance was not involved in project selection/tracking
    • Projected savings were used to mask other operating issues
    • Projects were too future based (product line 6–9 months out)
    • Management did not act on breakthrough — people, inventory, bill of materials
  • The majority of suppliers are not at a five sigma capability, nor will they be in the near future
    • Reasons: lack the financial resources for Six Sigma black belt training
    • No incentive to dedicate resources
    • Lack the talent to dedicate as BBs
    • Cannot afford to ship all BBs to suppliers
  • Many sites complained that there were too many initiatives: TQ, Six Sigma, materials, customer excellence, technical excellence, etc.
    • Reasons: the site management teams did not have a clear understanding of the individual "tools" to use — Six Sigma, DFM, supplier partnership, customer satisfaction, etc.
  • Six Sigma progressive

THERE ARE FIVE ACTIONS THAT HAVE PROVEN CRITICAL TO CONTINUED SIX SIGMA BREAKTHROUGH
• The need for renewal every 9–10 months
• Senior management commitment and involvement
• Site leadership training/alignment
• Black belt dedication to projects for 2 years
• Supplier improvement

SIX SIGMA BREAKTHROUGH
• Continuing the momentum
• Moving from three to four sigma is based on improving fundamentals
• Moving from four to five to Six Sigma is based on Six Sigma breakthrough strategy
• The DMAIC model
  • Define
  • Measure
  • Analyze
  • Improve
  • Control

DEFINE

PURPOSE
• To identify the customers and their CTQs — critical to quality
• To define the project scope and team charter
• To map the process to be improved

QUESTIONS TO BE ANSWERED
• Who are my customers and what is important to them (CTQ)?
• What is the scope of the project? What is the problem being addressed? What defect am I trying to reduce?
• What data has been collected to understand the customer requirements?
• What are the boundaries of this project? To what extent are the team roles and goals clearly understood and accepted? Are the key milestones and timelines established?
• Where do we currently take measurements?
• When, where, and to what extent does the problem occur?
• What is my process? How does it function? How was the process map validated? Are multiple versions necessary to account for different types of inputs?
• Why are you focusing on this project? What is the current cost of defects (poor quality)?
• What are the business reasons for completing this project? Are they compelling to the team? Are they compelling to the key stakeholders?
• How will you know if the team is successful? What is the goal of this project? Is the goal achievable?

A TYPICAL CHECKLIST FOR THE DEFINE PHASE
• Have the customers been identified?
• Have the data to verify customers' needs been collected?
• Has voice of the customer (VOC) been accounted for?
• Has the team charter been formulated?
• Have all the operational definitions been identified and agreed upon?
• Has the problem statement been understood and agreed upon?
• Has the goal statement been defined and agreed upon?
• Is the project scope appropriate and applicable? Has it been approved?
• Is the time line for the project appropriate and applicable? Has it been approved?
• Are the financial benefits real and agreed upon?
• Has the high-level process map "as is" been defined and agreed upon?

TOOLS
• Process mapping — SIPOC
• CT matrix
• Project scope contract
• Gantt chart
• Change management

MEASURE

PURPOSE
• To develop process measures (dependent variables or Ys) that will enable you to evaluate the performance of the process
• To determine the current process performance and entitlement and assess it against the required performance
• To identify the input variables that cause variation in process performance — Y

QUESTIONS TO BE ANSWERED
• Who are the suppliers to the process?
• What are the process and output measures that are critical to understanding the performance of this process?
• What are the performance standards for Y?
• What is the link to the CTQ?
• What are the defects for this project?
• What are the primary sources of variability for this process? Are they control or noise variables?
• What are the SOPs associated with each control variable?
• Where will you collect data? What is your data-collection plan? How much data did you collect?
• Is your ability to measure/detect "good enough"?
• When did you sample?
• How did you ensure you eliminated the influences of assignable causes within your rational subgroups?
• How did you ensure that you included all the sources of variation between your rational subgroups?
• Why is the project being addressed?
• Have you created a shared need?
• How is the process performing?
• What is the current process sigma level for this project?
• What is the best that the process was designed to do?
• What are the defect reduction goals for this project?
• Have you found any "quick hit" improvements?

TYPICAL CHECKLIST FOR THE MEASURE PHASE
• Have the key measurements been identified?
• Has the rolled-throughput yield been calculated?
• Have the defects been identified?
• Has the data-collection plan been identified?
• Has the measurement capability study (GR&R) been completed?
• Have the baseline measures of process capability been addressed?
• Have the defect reduction goals been established and agreed upon?

TOOLS
• Process mapping
• Cause-and-effect diagram
• Cause-and-effect matrix
• FMEA
• GR&R
• Graphical techniques (run chart, control chart, histogram, etc.)
• Change management

ANALYZE

PURPOSE
• To prioritize the input variables that cause variation in process performance — Y
• To analyze the data to determine root causes and opportunities for improvement
• To validate the key process input variables with data

QUESTIONS TO BE ANSWERED
• Who is the process owner?
• What are all the key process input variables?
• Have you found any "quick-hit" improvements?
• What resistance have you experienced or do you anticipate?
• Where were data collected on the inputs?
• When you realize the opportunities represented by addressing the problem, what are the quantifiable benefits over your current process performance (COPQ)?
• Why does the output of your process vary?
• What are the inputs that matter most?
• How have you analyzed the data to identify the vital few factors that account for variation in the process?
• How were the KPIVs from your C&E diagram verified?
• What are the root causes of the problem?

TYPICAL CHECKLIST FOR THE ANALYZE PHASE
• Has the detailed "as is" process map been completed?
• Have all sources of variation been identified and the prioritization initiated?
• Have the SOPs been reviewed and revised as appropriate?
• Is the usage and display of data appropriate and applicable to identify and verify the "vital few" (KPIVs)?
• Has the problem statement been refined through an iteration process to reflect the increased understanding of the problem?
• Have there been estimates of the quantifiable opportunity represented by the problem?

TOOLS
• Process map
• Graphical techniques (run chart, control chart, histogram, Pareto, scatter diagram, etc.)
• Multivariate studies
• Hypothesis testing
• Correlation and regression
• Change management

IMPROVE

PURPOSE
• To generate and validate improvements by setting the input variables to achieve the optimum output
• To determine Y = f(x…)

QUESTIONS TO BE ANSWERED
• Who is impacted by the change? How are they impacted?
• What day-to-day behaviors will need to change?
• What criteria did you use to evaluate potential solutions?
• What things have been considered to manage the cultural aspects of the change?
• What has been done or will be done to mobilize support and deal with resistance?
• What changes need to be made to rewards, training, structure, measurements, etc. to sustain the change?
• Where was the solution validated?
• When will the solution be implemented?
• What is the implementation/communication plan?
• Why was this solution chosen?
• What are the potential problems with the plan?
• How was an experiment or simulation conducted to ensure the optimum solution was found? How does the solution address the root cause?

TYPICAL CHECKLIST FOR THE IMPROVE PHASE
• Have there been solution alternatives to the problem? Is the one that best addresses the root cause the one that has been selected?
• Has the "should be" process map been developed?
• Have the key behaviors required by the new process been identified?
• Has the cost-benefit analysis of the proposed solution been completed?
• Has the solution been validated?
• Has an implementation plan been developed?
• Has a communication plan been established?

TOOLS
• Process map
• Design of experiments
• Simulation
• Optimization
• Change management

CONTROL

PURPOSE
• To institutionalize the improvement and implement ongoing control
• To sustain the gains

QUESTIONS TO BE ANSWERED
• Who maintains the control plan?
• How will responsibility for continued monitoring and improvement be transferred from the team to the owner?
• What controls are in place to ensure that the problem does not recur?
• Where is the data being collected? What control charts are being used? What evidence is there that the process is in control?
• When will the data be reviewed?
• When will the final report be completed?
• Why is the control plan effective?
• How has job training been affected? What are the biggest threats to making this change last?
• What next?
• Who is looking for translation opportunities (direct, customization, adaptation)?
• What is the next problem that should be addressed in the context of this overall process?
• What are some other areas of the business that could benefit from your learning?
• When will the learning be shared with the other business areas?
• Why is it likely to succeed?
• How will the translation opportunities be communicated?
• What did you as a team learn about the process of making Six Sigma improvements?

TYPICAL CHECKLIST FOR THE CONTROL PHASE
• Has the control plan been completed?
• Is there evidence that the process is in control?
• Is there appropriate and applicable documentation of the project?
• Have translation opportunities been identified?
• Have the systems and structure changes been significant enough to institutionalize the improvement?
• Have the audit plans been completed?
• Has there been a poka yoke (mistake proofing) in the process?
• Is there a preventive maintenance program in place?

TOOLS
• Control plans
• Statistical process control
• Gage control plan
• Appropriate and applicable techniques
• Change management

SIX SIGMA — THE INITIATIVE
• Process — systematic approach to reducing defects that affect what is important to the customer
• Tools — qualitative, statistical, and instructional devices for "observing" process variables and their relationships as well as "managing" their character


Six Sigma... the Practical Sense
• The classical view of performance
• The magnitude of difference: a different approach for the business — the goals of Six Sigma
  • Defect reduction
  • Yield improvement
  • Improved customer satisfaction
  • Higher net income

FOUNDATION OF THE TOOLS
• Qualitative
• Quantitative

GETTING TO SIX SIGMA
• How far can inspection get us?
• The impact of added inspection
• Using statistics to get us there
• How do we measure variation and quality?

THE STANDARD DEVIATION
• Normal distribution data
• Variable
• Attribute
• Black belt certification program


8 Six Sigma for Champions

The intent of champion training is to give selected executives a general understanding of and familiarity with the Six Sigma methodology. It is geared toward the leadership of the organization, who facilitate the logistics (approve, review, and/or manage the project; after all, the champion makes sure that the appropriate help and resources are available to the master black belts and black belts in pursuing process improvement) and mediate conflict as Six Sigma diffuses through the organization. As a consequence, the focus is on a high-level explanation of the methodology and expectations, while there is little discussion about tools. Great focus is given to the significance of the project, with significant detail about how to select and define it and about what questions to ask as the project progresses.

To be sure, the material for this training is at a high level, aimed more at understanding the process and the requirements of the Six Sigma methodology. A project champion is not expected to do a project; however, he is expected to understand the process and provide support as well as eliminate bottlenecks, especially when multiple departments are involved. In our estimation, the emphasis of this training should be on why as opposed to how. A project champion must be familiar with the process but also must understand the foundations of the approach in such a way that he or she may ask the right questions. His understanding should be on such a level that if he needs to explain the project to a green belt, his explanation would pass muster for the executive level as well. Of course, the opposite should also hold true.

It is often suggested that simulated exercises may be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may include defining a process and coming up with ways to improve that process; defining five to ten operational definitions in that process; working with some variable and attribute data; calculating the DPO; working with histograms, box plots, scatter plots, Pareto charts, and DOE set-ups; running an experiment with software; and others. However, a central issue for this training is the notion of customer satisfaction and organizational profitability.

Because organizations and their goals are quite different, we will provide the reader with a suggested outline of the training material for this champion session. It should last 5 days and be taught by a master black belt or an outside consultant. The level of difficulty depends on the participants. Detailed information may be drawn from the first six volumes of this series. In a typical champion program, we may want to focus on the following instructional objectives. The reader will notice that in some categories there are no objectives. This is because for that stage of training the material may be overwhelming and quite often unnecessary:


CURRICULUM OBJECTIVES FOR CHAMPION TRAINING

RECOGNIZE

Customer Focus
• Provide a definition of the term customer satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Provide examples of the y and x terms in the expression y = f(x); y = f(x,n).
• Interpret the expression y = f(x); y = f(x,n).

Business Metrics
• Define the nature of a performance metric.
• Identify the driving need for performance metrics.
• List at least six key performance metrics.
• Identify the fundamental contents of a performance metrics manual.
• Recognize the benefits of a metrics manual.
• Understand the purpose and benefits of improvement curves.
• Explain how a performance metric improvement curve is used.
• Explain what is meant by the phrase Six Sigma rate of improvement.
• Explain why a Six Sigma improvement curve can create a level playing field across an organization.
• State at least three problems (or severe limitations) inherent in the current cost-of-quality (COQ) theory.
• Identify and define the principal categories associated with quality costs.
• Compute the COQ given the necessary background data.
• Provide a detailed explanation of how a defect can impact the classical cost-of-quality categories.
• Explain the benefit of plotting performance metrics on a log scale.

Six Sigma Fundamentals
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Understand the role of questions in the context of management leadership.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Identify the parts-per-million defect goal of Six Sigma.
• Define the magnitude of difference between three, four, five, and Six Sigma.
• Recognize that defects arise from variation.


• Describe the general methodologies that are required to progress through the hierarchy of quality improvement.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four sigma business.
• Understand the key success factors related to the attainment of Six Sigma.
• Understand why inspection and test are nonvalue-added to a business and serve as a roadblock for achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Understand the basic elements of a sigma benchmarking chart.
• Interpret a data point plotted on a sigma benchmarking chart.
• Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify.
• Understand that work in process (WIP) is highly correlated to the rate of defects.
• Rationalize the statement: the highest-quality producer is the lowest-cost producer.
• Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure.
• Understand that global benchmarking has consistently revealed four sigma as average, while best-in-class is near the Six Sigma region.
• Draw first-order conclusions when given a global benchmarking chart.
• Provide a brief description of the five sigma wall, what it is, why it exists, and how to get over it.
• State the general characteristics or profile of a four sigma organization.
• Explain how the sigma scale of measure could be employed for purposes of strategic planning.
• Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart.
• Understand how a Six Sigma product without a market will fail, while a Six Sigma product in a viable market is virtually certain to succeed.
• Provide a qualitative definition and graphical interpretation of standard deviation.
• Understand the driving need for breakthrough improvement vs. continual improvement.
• Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control).
• Define the three primary sources of variation in a product.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Understand the difference between the idea of benchmark, baseline, and entitlement cycle time.
• Provide a brief description for the outcome 1 – Y.rt.
• Recognize that the quantity 1 + (1 – Y.rt) represents the number of units that must be produced to extract one good unit from a process.


• Describe what is meant by the term mean time before failure (MTBF).
• Interpret the temporal failure pattern of a product using the classical bathtub reliability curve.
• Recognize that the sigma scale of measure is at the opportunity level, not at the system level.
• Interpret an array of sigma benchmarking charts.
• Define the two primary components of process breakthrough.
• Provide a synopsis of what a statistically designed experiment is and what role it plays during the improvement phase of breakthrough.
• Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough.
• Understand that the term sigma is a performance metric that applies only at the opportunity level.
• Explain how process capability impacts the pattern of failure inherent in the infant mortality rate.
• Provide a rational definition of the term latent defect and how such a defect can impact product reliability.
• Explain how defects produced during manufacture influence product reliability, which, in turn, influences customer satisfaction.
• Explain the interrelationship between the terms process capability, process precision, and process accuracy.
• Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from the points of view of quality, cost, and cycle-time.

DEFINE

Nature of Variables
• Explain the nature of a leverage variable and its implications for customer satisfaction and business success.
• Explain what a dependent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.
• Explain what an independent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.

Opportunities for Defects
• Provide a rational definition of a defect.
• Recognize the difference between uniform and random defects.
• Compute the defect-per-unit metric given a specific number of defects and units produced.
• Provide a definition of the term opportunity for defect, recognizing the difference between active and passive opportunities.


CTX Tree
• Define the term critical to satisfaction characteristic (CTS) and its importance to business success.
• Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction.
• Define the term critical to process characteristic (CTP) and its importance to product quality.

Process Mapping
• Construct a process map using standard mapping tools and symbols.
• Explain how process maps can be linked to the CT Tree to identify problem areas.
• Explain how process maps can be used to identify constraints and determine resource needs.
• Define the key elements of a process map.

Process Baselines
Nothing specific.

Six Sigma Projects
• Explain why the five key planning questions are so important to project success.
• Explain how the generic planning guide can be used to create a project execution cookbook.
• Create a set of criteria for selecting and scoping Six Sigma black belt projects.
• Define a Six Sigma black belt project reporting and review process.
• Interpret each of the action steps associated with the four phases of process breakthrough.

Six Sigma Deployment
• Provide a brief description of a Six Sigma black belt (SSBB).
• Describe the role and responsibilities of a SSBB.
• Understand the SSBB instructional curriculum.
• Recognize the importance of, and provide a description for, the plan-train-apply-review (PTAR) learning process.
• Provide a brief description of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of a SSC.
• Provide a brief description of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of a SSMBB.


• Provide a brief description of the key implementation principles and identify the principal deployment success factors.
• List all of the planning criteria for constructing a Six Sigma implementation and deployment plan.
• Construct a generic milestone chart that identifies all of the activities necessary for successfully managing the implementation of Six Sigma.
• Develop a business model that incorporates and exploits the benefits of Six Sigma.
• Recognize that the SSBB curriculum sequence is correlated to the Six Sigma breakthrough strategy.

MEASURE

Scales of Measure
• Identify the four primary scales of measure and provide a brief description of their unique characteristics.

Data Collection
• Provide a specific explanation of the term replicate in the context of a statistically designed experiment.
• Explain why there is a need to randomize the sequence of order in which an experiment takes place and what can happen when this is not done.

Measurement Error
• Describe the role of measurement error studies during the measurement phase of breakthrough.

Statistical Distributions
• Construct and interpret a histogram for a given set of data.
• Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot.
• Understand what a normal distribution and a typical normal histogram are and how they are used to estimate defect probability.
• Understand what the t distribution is and how it changes as degrees of freedom change.
• Understand what the F distribution is and how it can be used to test the hypothesis that two variances are equal.

Static Statistics
• Provide a qualitative definition and graphical interpretation of variance.
• Compute the sample standard deviation given a set of data.


• Compute the mean, standard deviation, and variance for a set of normally distributed data.
• Explain why a sample size of n = 30 is often considered ideal (in the instance of continuous data).
• Provide a qualitative definition and graphical interpretation of the standard Z transform.
• Provide a graphical understanding of the standard deviation and explain why it is so important to Six Sigma work.
• Compute Z.usl and Z.lsl for a set of normally distributed data and then determine the probability of defect (a worked sketch follows below).

Dynamic Statistics
• Explain what phenomenon could account for a differential between the short-term and long-term standard deviations.
• Describe the role and logic of rational subgrouping as it relates to the short-term and long-term standard deviations.
• Compute and interpret the total, inter-, and intragroup sums of squares for a given set of data.
• Explain the difference between dynamic mean variation and static mean offset.
• Explain the difference between inherent capability and sustained capability in terms of the standard deviation.
• Explain why the term instantaneous reproducibility (i.e., process precision) is associated with the short-term standard deviation.
• Explain why the term sustained reproducibility is associated with the long-term standard deviation.
• Recognize the four principal types of process centering conditions and explain how each impacts process capability.
• Compute and interpret the intra-, inter-, and total sums of squares for a set of normally distributed data organized into rational subgroups.
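
A minimal sketch of the Z.usl/Z.lsl computation referenced above, using Python's standard library; the process values and specification limits are invented for illustration:

    from statistics import NormalDist

    # Illustrative process parameters and specification limits.
    mu, sigma = 10.0, 0.5
    lsl, usl = 8.5, 11.0

    z_usl = (usl - mu) / sigma   # distance to the upper spec, in sigmas
    z_lsl = (mu - lsl) / sigma   # distance to the lower spec, in sigmas

    # Probability of a defect: area beyond either specification limit.
    p_defect = (1 - NormalDist().cdf(z_usl)) + (1 - NormalDist().cdf(z_lsl))
    print(f"Z.usl = {z_usl}, Z.lsl = {z_lsl}, P(defect) = {p_defect:.5f}")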

ANALYZE

Six Sigma Statistics
• Identify the key limitations of the performance metric final yield (i.e., output/input).
• Identify the key limitations of the performance metric first-time yield (Y.ft).
• Understand the impact of process capability and complexity on the probability of zero defects.
• Provide a brief description of how one would implement and deploy the performance metric rolled-throughput yield (Y.rt).
• Compute the throughput yield (Y.tp) given an average first-time yield and the number of related defect opportunities.


• Provide a rational explanation of the differences between product yield and process yield.
• Explain why the performance metric rolled-throughput yield (Y.rt) represents the probability of zero defects.
• Compute the probability of zero defects (Y.rt) given a specific number of defects and units produced.
• List at least five separate sources that could offer the data necessary to estimate a sigma capability.
• Explain how throughput yield (Y.tp) and opportunity counts can be employed to establish sigma capability of a product or process.
• Illustrate how a system-level DPU goal can be flowed down through a product or process hierarchy to assess the required CTQ capability.
• Illustrate how a series of CTQ capability values can be flowed up through a product or process hierarchy to establish the system DPU.

Process Metrics
• Explain why a Z can be used to measure process capability and explain its relationship to indices such as Cp, Cpk, Pp, and Ppk.
• Explain the difference between static mean offset and dynamic mean variation and how they impact process capability.
• Compute and interpret the Cp index of capability.
• Compute and interpret the Cpk index of capability (a worked sketch follows below).
• Explain the theoretical and practical differences between Cp, Cpk, Pp, and Ppk.
• Compute and interpret Z.st and Z.lt for a set of normally distributed data organized into rational subgroups.
• Compute and interpret Z.shift (static and dynamic) for a set of normally distributed data organized into rational subgroups.
• Compute and interpret Cp, Cpk, Pp, and Ppk.
• Explain how Cp, Cpk, Pp, and Ppk correlate to the four principal types of process centering conditions.
• Show how Z.st, Z.lt, Z.shift (dynamic), and Z.shift (static) relate to Cp, Cpk, Pp, and Ppk.
• Create and interpret the standardized computer output report.

Diagnostic Tools
• Understand, construct, and interpret a multivariate chart, then identify areas of application.

Simulation Tools
• Describe what is meant by the term Monte Carlo simulation and demonstrate how it can be used as a design tool.
• Create a series of random normal numbers with a given mean and variance.
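
A minimal sketch of the capability indices named above, using their standard textbook formulas; the mean, sigmas, and specification limits are invented for illustration:

    # Illustrative process summary: short-term and long-term standard deviations.
    mu, sigma_st, sigma_lt = 10.1, 0.40, 0.55
    lsl, usl = 8.5, 11.5

    cp = (usl - lsl) / (6 * sigma_st)               # potential capability (short term)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma_st)  # capability penalized for off-centering
    pp = (usl - lsl) / (6 * sigma_lt)               # long-term (performance) analogue of Cp
    ppk = min(usl - mu, mu - lsl) / (3 * sigma_lt)  # long-term analogue of Cpk

    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, Pp = {pp:.2f}, Ppk = {ppk:.2f}")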


Statistical Hypotheses
• Explain how a practical problem can be translated into a statistical problem and describe the benefits of doing so.
• Explain what a statistical hypothesis is, why it is created, and show the forms it may take in terms of the mean and variance.
• Define the concept of alpha risk and provide several examples that illustrate its practical consequence.
• Define the concept of statistical confidence and explain how it relates to alpha risk.
• Define the concept of beta risk and provide several examples that illustrate its practical consequences.
• Provide a detailed understanding of the contrast distribution and how it relates to the alternate hypothesis.
• Explain what is meant by the phrase statistically significant difference and recognize that such differences do not imply practical difference.
• Construct a truth table that illustrates how the null and alternate hypotheses interrelate with the concepts of alpha risk and beta risk.
• Recognize that the extent of difference required to produce practical benefit is referred to as delta.
• Explain what is meant by the term power of the test and describe how it relates to the concept of beta risk.
• Understand how sample size can impact the extent of decision risk associated with the null and alternate hypotheses.
• Establish the appropriate sample size for a given situation when presented with a sample size table.
• Describe the dynamic interrelationships between alpha, beta, delta, and sample size from a statistical as well as practical perspective.
• List the essential steps for successfully conducting a statistically based investigation of a practical real-world problem.
• Provide a detailed explanation of the null distribution and how it relates to the null hypothesis.

Continuous Decision Tools
• Provide a conceptual explanation of statistical confidence interval and how it relates to the notion of random sampling error.
• Understand what the distribution of sample averages is and how it relates to the central limit theorem.
• Explain what the standard error of the mean is and demonstrate how it is computed.
• Compute the tail area probability for a given Z value that is associated with the distribution of sample averages.
• Compute the 95% confidence interval for the mean of a small data set and explain how it may be applied in practical situations.


• Rationalize the difference between a one-sided test of the mean and a two-sided test of the mean.
• Understand what the distribution of sample differences is and how it can be employed for testing statistical hypotheses.
• Compute the 95% confidence interval for the mean of sample differences given two samples of normally distributed data (a worked sketch of a t-based interval follows below).
• Understand the nature of a one- and two-sample t test and apply this test to an appropriate set of data.
• Compute and interpret the 95% confidence interval from a sample variance using the chi-square distribution.
• Explain how the 95% confidence interval from a sample variance can be used to test the hypothesis that two variances are equal.
• Provide a general description of the term experimental error and explain how it relates to the term replication.
• Provide a general description of one-way analysis of variance and discuss the role of sample size.
• List the principal assumptions underlying the use of ANOVA and provide a general understanding of their practical impact if they are violated.

Discrete Decision Tools
• Construct a 95% confidence interval for a Poisson mean and discuss how this can be used to test hypotheses about Poisson means.
• Understand how to calculate the standard deviation for a set of data selected from a binomial distribution.
• Compute the 95% confidence interval for a proportion and explain how it can be used to test hypotheses about proportions.
• List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.
• Provide a brief explanation of the chi-square statistic and the conditions under which it can be applied.
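
A minimal sketch of a 95% confidence interval for a small-sample mean, using the t distribution; the data are invented, and SciPy supplies the critical value:

    import math
    import statistics
    from scipy import stats

    # Illustrative small sample of a CTQ measurement.
    data = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1]
    n = len(data)

    mean = statistics.mean(data)
    s = statistics.stdev(data)               # sample standard deviation
    t_crit = stats.t.ppf(0.975, df=n - 1)    # two-sided 95% critical value
    half_width = t_crit * s / math.sqrt(n)

    print(f"95% CI for the mean: {mean - half_width:.3f} to {mean + half_width:.3f}")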

IMPROVE

Experiment Design Tools
• Provide a general description of a statistically designed experiment and what such an experiment can be used for.
• Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers.
• Describe the two primary components of an experimental system and their related subelements.
• Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis.
• State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative.


• Recognize that the most powerful application of modern statistics cannot rescue a poorly designed experiment.
• Explain what is meant by the term full factorial experiment and how it differs from a fractional factorial experiment.

Robust Design Tools
• Explain briefly the term robust design and why and when process capability data must be factored into the design process.
• Explain what is meant by the term robustness and how this understanding translates to experimental design and process tolerancing.
• Provide a statistical explanation of the term heteroscedasticity and discuss its practical implications.

Empirical Modeling Tools
Nothing specific.

Tolerance Tools
• Demonstrate why worst-case tolerance analysis is an overly conservative and costly design tool.
• Create a graphical explanation of how performance tolerances can be defined using the results of a two-level factorial experiment.

Risk Analysis Tools
• Compute the standard deviation for a linear sum of variances and explain why the variances must be independent (a worked sketch follows below).
• Compute the system-level defect probability given the subsystem means, variances (of a linear model), and relevant performance specifications.
• Describe how root sum of squares (RSS) can be used as a design-to-cost tool and how it can be employed to analyze and optimize process cycle time.
• Demonstrate how the Six Sigma risk assessment methodology can be applied to engineering, manufacturing, transactional, and commercial problems.
• List the disadvantages associated with worst-case analysis and compute the probability of worst case given the process capability data.

DFSS Principles
• Understand the fundamental ideas underlying the notion of manufacturability.
• Understand how statistically designed experiments can be used to identify leverage variables, establish sensitivities, and define tolerances.
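
A minimal root-sum-of-squares sketch for the risk analysis items above: three independent parts stack linearly, and the assembly is judged against an upper specification (all numbers invented):

    import math
    from statistics import NormalDist

    # Illustrative subsystem means and standard deviations for a linear stack.
    means = [5.00, 3.00, 2.00]
    sds = [0.03, 0.04, 0.02]
    usl = 10.15                                   # upper spec on the assembled dimension

    mu_sys = sum(means)
    sd_sys = math.sqrt(sum(s ** 2 for s in sds))  # RSS: independent variances add

    z = (usl - mu_sys) / sd_sys
    p_defect = 1 - NormalDist().cdf(z)
    print(f"system mean = {mu_sys:.2f}, system sd = {sd_sys:.4f}, P(exceed USL) = {p_defect:.2e}")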


• Understand how product and process complexity impacts design performance.
• Explain the concept of error propagation (both linear and nonlinear) and what role product and process complexity plays.
• Describe how reverse error propagation can be employed during system design.
• Explain why process shift and drift must be considered in the analysis of a design and how it can be factored into design optimization.
• Describe how Six Sigma tools and methods can be applied to the design process.
• Discuss the pros and cons of the classical approach to product and process design relative to that of the Six Sigma approach.

CONTROL

Precontrol Tools
• Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented.
• Describe the unique characteristics of the precontrol method and compare precontrol to statistical process control charts.

Continuous SPC Tools
• Explain what is meant by the term statistical process control and discuss how it differs from statistical process monitoring.
• List the basic components of a control chart and provide a general description of the role of each component.

Discrete SPC Tools
Nothing specific.

SIX SIGMA PROJECT CHAMPION TRANSACTIONAL (GENERAL BUSINESS AND SERVICE — NONMANUFACTURING) TRAINING Based on the above general objectives, it is recommended that the training follow the content format given below. By no means is this the only format. In fact, we provide three options. The first is transactional training, the second technical training, and the third manufacturing training. All three options follow a hierarchical sequence, and we have attempted to accommodate the learning process. The distinction of the three categories is to emphasize the need for Six Sigma in nonmanufacturing (service organizations), research and development groups (the Six Sigma methodology should be applied in the development stage, never in the research stage), and, of course, in the manufacturing areas. It is also very important to note © 2003 by CRC Press LLC


that the objectives for all categories are the same. However, the different training formats emphasize different elements of the methodology. Introductions Agenda Ground rules Exploring our values Objectives Definition: the transactional approach is based on the customer, the opportunity, and the successes. (It must be stressed that we will be applying Six Sigma methodology rather than following the “pack.” Make sure that emphasis is placed on the process and the tools. In the case of the process, participants must recognize that Six Sigma methodology should be followed systematically, as should attempts to reduce nonconformances that are important to the customer. With respect to the tools, participants must recognize that qualitative as well as quantitative techniques may be employed to resolve issues.) In other words: • Know what is important to the customer. • Reduce nonconformances. • Center around the target. • Reduce variation.

SIX SIGMA BREAKTHROUGH GOAL • A solution for improving company value • A business strategy for net income improvement • A means to enhance customer perception

SIX SIGMA GOAL Defect reduction – why is it important to focus on cost of poor quality (COPQ) Yield improvement Improved customer satisfaction and higher return on investment — learning faster than our competitors is the only sustainable advantage. This is the reason why Six Sigma methodology emphasizes breakthrough improvements rather than incremental ones.

COMPARISON BETWEEN THREE SIGMA AND SIX SIGMA QUALITY

SHORT HISTORICAL BACKGROUND The business case for implementing Six Sigma: after the definition, this item is very important. It must be understood by all before moving on to a new topic. It is the reason why Six Sigma is going to be implemented in your organization. Therefore, not only


must it be understood, but in addition it must make sense and be believable. Sharing the executive committee members list with everyone is one of the ways to make individuals understand the importance of the implementation process. Another way is to provide some background about the black belts as individuals and their commitment to Six Sigma and to identify specific projects that plague the organization, either genuine financial problems or issues perceived as problems by customers. Yet another way may be to present some specific examples of your company in relationship to your competitors.

OVERVIEW OF THE BIG PICTURE

Deployment structure: the Six Sigma implementation process must be a top-down flow; otherwise, it will not work. Executive leadership (part-time basis): executives should be the drivers of the Six Sigma process in directions that meet key business goals and address key customer satisfaction concerns. Key roles are: • Establish the vision • Articulate the business strategy • Provide resources • Remove roadblocks • Support the culture change • Monitor the results • Define the criteria for success and make others accountable for the results • Align the systems and structures with the changes taking place • Participate with the black belts through project reviews and recognize results Master black belt (full-time basis): they are the trainers, coaches, and facilitators. They are the experts of Six Sigma tools and methodologies and are responsible for training and coaching black belts. Master black belts, or shoguns as we call them, may also be responsible for leading large projects on their own. Key roles are: • Be the expert in tools and concepts • Facilitate and implement Six Sigma in the organization • Certify the black belts • Assist in identifying projects • Coach and support black belts • Participate in project reviews • Develop new tools or modify old tools for applications • Lead major programs • Share best practices • Drive passion • Partner with champion Project champions (part-time basis): they drive Six Sigma through the process and are accountable for the performance of black belts and the


results of Six Sigma projects in their area. They are the conduit between the executive leadership and the black belt, and they are supposed to eliminate bottlenecks and conflicts that arise during projects, especially projects with cross-functional responsibilities. Key roles are: • Execute the vision through the organization • Create and maintain passion • Identify and prioritize projects • Identify and select the black belts • Develop the reward and recognition program • Share best practices in the organization • Remove barriers for black belts • Drive and communicate results • Develop a comprehensive training plan • Communicate the linkage between Six Sigma and the business strategy Black belts (full-time basis): they are accountable for driving projects and are responsible for leading and teaching Six Sigma processes within the company. Black belts are also responsible for applying Six Sigma tools to complete a predetermined number of projects worth at least $250,000 each (projects are commonly worth between $400,000 and $600,000). It is expected that the improvement will be a breakthrough improvement with a magnitude of 100×. Key roles are: • Full time • Identify barriers • Lead project teams • Identify project resources • Be expert of the breakthrough strategy • Teach and coach as needed • Manage project risk • Deliver results on time • Report project status • Complete final report and control plan • Ensure results are sustained Green belts (part-time basis): they are expected to help black belts with expediting and completing Six Sigma projects and may take the lead in small projects of their own. They should also look for ways to apply Six Sigma problem-solving methods within their work area. Key roles are: • Apply the methodology in functional areas • Support the black belts in completing projects • Be project team member • Help ensure improvements are sustained • Concurrent with existing responsibilities Process Driven, NOT Event Driven Rollout strategy (emphasize the importance of projects and measurement) Management’s responsibility


Training requirements Black belts Green belts Project definition: • Who is my customer? • What matters? What are the CTQs? • What is the scope? • What nonconformance am I trying to reduce? By how much? • Is the goal of reduction realistic? • What is the current cost of poor quality? • What benefits will we get if we improve to the point of reaching our goal? Project selection: define the project charter. This will provide the appropriate documentation for communicating progress and direction to the rest of the team as well as to management. To use the CT Matrix follow the seven steps: • Identify the customers • Meet with customers and identify CTSs • Perform CTY then CTX breakdown and construct CT Matrix • Identify critical or leverage processes • Set improvement objectives and develop action plans • Assign agents • Identify CTPs for critical or leverage processes through Six Sigma projects

IDENTIFY CUSTOMER Y = f(X). Y is the output and the Xs are the inputs. Identify Y and determine the Xs. It is imperative to understand that most often a single Y may be influenced by more than one X. Therefore, we may have Y = f(X1, X2, …, Xn). However, that is not all. We may even have a single X cascading into a further level, such that for every X1 we may have Y = f(x1, x2, …, xn) This is called cascading. Apply project selection checklist. To ensure the selected issue, concern, or problem will make a good Six Sigma project, a checklist can be applied to verify the project’s potential. Simple criteria for selection are the following six questions: • Does the project have recurring events? • Is the scope of the project narrow enough? • Do metrics exist? Can measurements be established in an appropriate amount of time?


• Do you have control of the process? • Does the project improve customer satisfaction? • Does the project improve the financial position of the company? If the answer to all of these questions is yes, then the project is an excellent candidate. Another way to look at the project selection may be to focus on impact, time, tools, metrics, financials, research, and team effort. Typical questions are: • What corporate objective is supported by this project? • What business group objective is addressed by the project? • What customer will benefit from this project? How? • Can the project be completed within 3 to 4 months? • Could the process improvements be handled adequately via basic methods and techniques? • Is the more structured Six Sigma approach and methodology desirable for this project? • Will this project require application of all phases of Six Sigma? • Have you defined the nonconformance opportunities? • Do the baseline nonconformance data exist to support project selection? • Is the nonconformance reduction offered greater than 70%? • What improvements are expected in your area from the project? • Are projected savings greater than or equal to $XXXK per year? • Will this project lead to improvements with little or no capital? • Is there a similar project already under way or proposed at another location? • Can this project be led by a black belt? • Can you identify the team members to start this project? • Is capital investment required?

Develop high-level problem statement. This is a high-level description of the issue to be addressed by the green belt or black belt. The problem statement will be the starting point for the application of the Six Sigma methodology. This is the point where the champion really needs to understand the process because he or she has to “sell it” to management. In other words, he or she has to make the business case for the project.

THE DMAIC PROCESS The model: it is a structured methodology for executing Six Sigma project activities. Make sure to point out here that the model is not linear in nature. Quite often, teams may find themselves in multiple phases so that thoroughness is established. Define: the purpose is to refine the project team’s understanding of the problem to be addressed. It is the foundation for the success of both the project and Six Sigma.


Measure: the purpose is to establish techniques for collecting data about current performance that highlight project opportunities and provide a structure for monitoring subsequent improvements. Typical questions are: • What is my process? How does it function? • Which outputs affect CTQs most? • Which inputs seem to affect outputs (CTQs) most? • Is my ability to measure and/or detect “good enough”? • How is my process doing today? • How good could my (current) process be when everything is running smoothly? • What is the best that my process was designed to do? Analyze: the purpose is to allow the team to further target improvement opportunities by taking a closer look at the data. Typical questions are: • Which inputs actually affect my CTQs most? By how much? • Do combinations of variables affect outputs? • If I change an input, do I really change the output? • If I observe results from the same process and different locations and results appear to be different, are they really? • How many observations do I need to draw conclusions? • What level of confidence do I have regarding my conclusions? • Can I describe the relationship between inputs and outputs in a statistical format? • Do I know the inputs with the biggest impact on a given output? Improve: the purpose is to generate ideas about ways to improve the process; design, pilot, and implement improvements; and validate improvements. Typical questions are: • Once I know for sure which inputs most impact my outputs, how do I set them? • How many trials do I need to run to find and confirm the optimal setting and procedure of these key inputs? • Do I use systematic experimentation to find the input combination that delivers the optimal output? Control: the purpose is to institutionalize process and product improvements and monitor ongoing performance. Typical questions are: • Once I have reduced the nonconformances, how do the functional team and I keep them there? • How does the functional team keep it going? • What do I set up to keep it going even when things like people, technology, and customers change? Select product or process key characteristics, e.g., customer Y using the improvement strategy — the DMAIC model. Please notice that every output is data-based; therefore, the decision is data-based. Define/measure: • Define performance standards for Y. The focus is Y.


• Validate measurement system for Y. The focus is Y. • Establish process capability of creating Y. The focus is Y. • Define improvement objectives for Y. The focus is Y. Analyze: • Identify variation sources in Y. The focus is Y. • Screen potential causes for change in Y and identify the vital few Xs. The focus is on X1, X2, …, Xn. • Discover variable relationships among the vital few Xs. The focus is on X1, X2, …, Xn. Improve: • Establish operating tolerances on the vital few Xs. The focus is on the vital few Xs. • Validate measurement system for the Xs. The focus is on the vital few Xs. • Determine ability to control the vital few Xs. The focus is on the vital few Xs. Control: • Implement process control system on the vital few Xs. The focus is on the vital few Xs.

DETAILED MODEL EXPLANATION Define the organization’s values. Key questions are: • What do we really value? • Who are our customers and what do they need? • Who are we and what do we do? • What does customer satisfaction mean? • Do our values correlate with those of our customers? • How do we verify that we meet internal and external needs?

PERFORMANCE METRICS REPORTING • The classical view vs. the Six Sigma approach • Understand the difference • Understand the magnitude of this difference

ESTABLISH CUSTOMER FOCUS • What is important to the customer? • How do we know? • Critical to satisfaction • Importance of identifying the CTQs. • Understand the difference between functional wants and requirements. • Understand the interaction between what customers need and what suppliers do.


DEFINE VARIABLES: KEY QUESTIONS ARE • What is meant by variables? • What is a dependent variable? • What is an independent variable? • What other labels are synonymous with dependent and independent variables? • What is meant by leverage variable? • What strategies can be used to isolate leverage variables?

THE FOCUS OF SIX SIGMA

• Y = f(x) • The critical to (CT) concept. Key questions are: • What does critical to satisfaction (CTS) mean in terms of customers? • What does critical to quality (CTQ) mean in terms of a product, service, or transaction? • What does critical to delivery (CTD) mean in terms of a product, service, or transaction? • What does critical to cost (CTC) mean in terms of a product, service, or transaction? • What does critical to process (CTP) mean in terms of a product, service, or transaction? • What is the relationship between defect opportunities and CTs? • The critical to quality (CTQ) and Critical to Process (CTP) • Customer satisfaction

PROCESS OPTIMIZATION PROCESS BASELINE: KEY QUESTIONS ARE • What is a process baseline and how is it different from a product benchmark? • What is the relationship between a process baseline and a process mapping? • What is the relationship between a process baseline, CTs, and nonconformance opportunities? • What are the key performance metrics associated with a process baseline? • How should a process baseline be established? • How can a process baseline be improved? Supplier improvement. Supplier capability is a critical piece of breakthrough strategy.


Measure Process Characterization Understanding the concept of rolled-throughput yield Traditional Y = S/U where Y is yield; S = number of units that pass; and U = number of units tested Definition of nonconformance (defect) Six Sigma definition of yield (yield at every step in the process) Yield without rework Hidden factory (rework, nonvalue activities) First pass yield (Yrt) — no rework Normalizing yield (Ynorm) is the average yield per step of a sequential process, e.g., a three-step process with per-step yields of 75%, 80%, and 95%. The normalized average is: Yrt = 0.75 × 0.80 × 0.95 = 57%; Ynorm = (0.57)^(1/3) = 82.91%.

Rolled throughput yield = P(operation 1) × P(operation 2) × … × P(operation n) = e^(–dpu) True yield = Y = [(d/u)^r × e^(–d/u)] / r! = [(d/u)^0 × e^(–d/u)] / 0! = [1 × e^(–d/u)] / 1 = e^(–d/u)

where Y = yield; d/u = the defects per unit; e = 2.718. Therefore, when r = 0, we obtain the probability of zero defects or rolled throughput yield. This is very different from the classical determination of yield. Poisson approximation Useful formulas DPU = defects per unit TOP (total opportunities) = Units × opportunities DPO (defects per opportunity) = defects per TOP Probability the opportunity is defective = DPO Probability the opportunity is not defective = Pr(ND) = 1 – DPO Rolled yield is the likelihood that any given unit of product will contain 0 defects (recommended when you know the yield of each process element or opportunity) Yrt = Pr(ND)^(# of opportunities) Yrt = Pr1(ND) × Pr2(ND) × … × Prn(ND) Integration of rework loops is to understand the ramifications of processes that are causing the defects.
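As a numerical companion to the yield formulas above, the short Python sketch below reproduces the three-step example (75%, 80%, 95%) and backs DPU out of the rolled-throughput yield through the Poisson relationship Yrt ≈ e^(–DPU). The function names are ours, not the text’s.

import math

def rolled_throughput_yield(step_yields):
    # Yrt: probability that a unit passes every step with zero defects.
    yrt = 1.0
    for y in step_yields:
        yrt *= y
    return yrt

def normalized_yield(step_yields):
    # Ynorm: average yield per step, i.e., the k-th root of Yrt.
    k = len(step_yields)
    return rolled_throughput_yield(step_yields) ** (1.0 / k)

steps = [0.75, 0.80, 0.95]            # per-step yields from the example
yrt = rolled_throughput_yield(steps)  # 0.57
ynorm = normalized_yield(steps)       # about 0.8291

dpu = -math.log(yrt)                  # implied DPU, since Yrt ~ e^(-DPU)

print(f"Yrt = {yrt:.2%}, Ynorm = {ynorm:.2%}, implied DPU = {dpu:.3f}")

The printed values match the worked example: Yrt = 57% and Ynorm = 82.91%.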

PROCESS MAPPING Understand the visual display of the process


Understand the “what you think it is…” Understand the “what it actually is…” Understand the “what you would like it to be…” Differentiate between business process — strategic, business processes — internal, SIPOC model, and detailed subprocesses map.

CAUSE AND EFFECT

Understand the function of the cause and effect in a nonmanufacturing environment. Differentiate between manufacturing and nonmanufacturing causes, e.g., manufacturing: manpower — people, machine, method, material, measurement, mother nature — environment. Transactional/commercial/service: manpower — people, policies, procedures, place, measurement, mother nature — environment. Cause and effect matrix — the idea is to identify and evaluate control plans for key process input variables (KPIVs). Step 1. Identify key customer requirements. Step 2. Rank order and assign priority factor to each output. Step 3. Identify all process steps and materials (inputs) from process map. Step 4. Evaluate correlation (a low score can have small effect on output variables; a high score can greatly affect the output variable). Step 5. Cross-multiply correlation values with priority factors and add for each input.
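The five steps lend themselves to a small worked sketch in Python. Every number below — outputs, priority factors, inputs, and correlation scores — is invented purely to show the arithmetic of Steps 2, 4, and 5.

# Hypothetical C&E matrix: rows are process inputs, columns are outputs.
outputs = ["On-time delivery", "Accuracy", "Cost"]
priority = [10, 9, 6]  # Step 2: priority factor assigned to each output

# Step 4: correlation of each input to each output (a 0/1/3/9 scale is
# a common convention, assumed here).
inputs = {
    "Order entry method": [9, 3, 1],
    "Staff training":     [3, 9, 3],
    "Supplier lead time": [9, 1, 9],
}

# Step 5: cross-multiply correlations by priorities and sum per input.
scores = {name: sum(c * p for c, p in zip(corr, priority))
          for name, corr in inputs.items()}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {score}")

Sorting the weighted sums produces the Pareto of key process input variables that Approach 2 below builds on.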

THE APPROACH TO C&E MATRIX

Approach 1. a) place the outputs across the top of the matrix, b) rank outputs, c) place inputs down the left side of the matrix starting with first process step and move to last process step. Approach 2. a) place the outputs across the top of the matrix, b) place the process steps down the left side of the matrix, c) correlate process steps to outputs, d) Pareto the process steps, e) start a new C&E matrix with inputs from the most critical three or four process steps.

LINKS OF C&E MATRIX TO OTHER TOOLS

Capability summary – key outputs are evaluated FMEA – potential problems are identified Control plan – key inputs are evaluated

BASIC STATISTICS The ten basic statistical concepts • Types of data • Central tendency • Confidence intervals • Variation • Spread • Central limit theorem • Distributions • Degrees of freedom • Probability • Accuracy and precision

Process capability – customer requirements; process characterization; process stability Cp, Cpk, Pp, and Ppk — in Six Sigma the focus is first on centering and then controlling the spread Rational subgroupings Sampling Sample Short- vs. long-term capability (performance)
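Because Cp and Cpk are only named at this point, a minimal computational sketch may help. The specification limits and measurements are assumed, and sigma is estimated here simply as the overall sample standard deviation rather than from rational subgroups, so treat the numbers as illustrative.

import statistics

def cp_cpk(data, lsl, usl):
    # Cp compares the tolerance width to the process spread;
    # Cpk additionally penalizes an off-center mean.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical measurements against an assumed spec of 10.0 +/- 0.5.
sample = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.95, 10.05]
cp, cpk = cp_cpk(sample, lsl=9.5, usl=10.5)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # equal here: the mean is on target

This makes the Six Sigma ordering visible: centering the process raises Cpk toward Cp, and only then does reducing the spread raise both.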

CONVERTING DPM TO A Z EQUIVALENT

Understand the Z values Know how to use the Z table Standard transformations Understand the difference between pooled and total standard deviation Pooled — taken over a relatively short time. It takes into account only the variation within a subset and common causes of variation. Total standard deviation — taken from many samples that represent the shift and drift that occur in the population due to all causes of variation. Graphical tools — the most important analysis tool is to ALWAYS plot the data.
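A minimal sketch of the DPM-to-Z conversion, using only the standard normal quantile. The 1.5 sigma shift added below is the customary Six Sigma convention for relating long-term DPM to a short-term sigma level; treat it as an assumed convention, not a derivation.

from statistics import NormalDist

def dpm_to_z(dpm, shift=1.5):
    # The long-term Z leaves dpm/1e6 in the upper tail of the standard
    # normal; the conventional 1.5 sigma shift converts to short term.
    p_defect = dpm / 1_000_000.0
    z_long = NormalDist().inv_cdf(1.0 - p_defect)
    return z_long, z_long + shift

for dpm in (66_807, 6_210, 233, 3.4):
    z_long, z_short = dpm_to_z(dpm)
    print(f"{dpm:>10} DPM -> Z(long term) = {z_long:.2f}, "
          f"Z(short term) = {z_short:.2f}")

The familiar table values fall out: 66,807 DPM is about 3 sigma, 6,210 DPM about 4 sigma, 233 DPM about 5 sigma, and 3.4 DPM about 6 sigma (short term).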

BASIC GRAPHS Pareto Time series Standard plot Boxplot Histograms Marginal plots Scatter plots Control charts Other charts Check sheets Cause and effect


ANALYZE Process optimization — hypothesis testing Roles of statistics Population vs. samples Why do we need hypothesis testing? Significance vs. importance

IMPROVE Design of experiments (DOE) What is it? Objectives Strategy One, two, multiple factors at a time Interactions Model building

CONTROL Process optimization — process control What is control? Sources of variation Types of variation Statistical process control (SPC)/control charts Attribute Variable Project report outs

SIX SIGMA PROJECT CHAMPION — TECHNICAL TRAINING This training in the implementation process of Six Sigma is intended to familiarize the individuals who are about to facilitate the logistics as well as mediate conflict in the Six Sigma diffusion process with the technical areas of the organization. The technical manager ensures that the appropriate help and resources are available to the master black belts and black belts in pursuing process improvement. To be sure, the material for this training is much more technical (more detailed) than the transactional material, even though it is designed for high-level users. The purpose of this is to ensure that the process and requirements of Six Sigma methodology are understood. A technical champion is not expected to do a project, but he is expected to understand the process and provide support as well as eliminate bottlenecks, especially when multiple departments are involved. In our estimation, the emphasis of this training should be on why as opposed to how. A technical champion must be familiar with the process but also must understand the foundations of the approach in such a way that he can ask the right questions. His understanding © 2003 by CRC Press LLC


should be on such a level that if he needs to explain the project to a green belt, he feels comfortable enough that his explanation would pass muster for the executive level as well. The opposite should hold true as well. It is often suggested that simulated exercises may be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may include defining a process and coming up with ways to improve that process; defining five to ten operational definitions in that process; working with some variable and attribute data; calculating the DPO; working with histograms, box plots, scatter plots, Pareto charts, and DOE set-ups; running an experiment with software; and others. Because organizations and their goals are quite different, we will provide the reader with a suggested outline of the training material for this champion session. It should last 5 days and be taught by a master black belt or an outside consultant. The level of difficulty depends on the participants. Detailed information may be drawn from the first six volumes of this series. (Note: some of the material is the same as that of the transactional training.) Introductions Agenda Ground rules Exploring our values Objectives Definition: just as in the transactional training, the technical approach is based on the customer, the opportunity, and the successes. (It must be stressed that we will be applying Six Sigma methodology rather than following the “pack.” Make sure that emphasis is placed on the process and the tools. In the case of the process, participants must recognize that Six Sigma methodology should be followed systematically, as should attempts to reduce nonconformances that are important to the customer. With respect to the tools, participants must recognize that qualitative as well as quantitative techniques may be employed to resolve issues.) In other words: • Know what is important to the customer • Reduce nonconformances • Center around the target • Reduce variation

SIX SIGMA BREAKTHROUGH GOAL • A solution for improving company value • A business strategy for net income improvement • A means to enhance customer perception

SIX SIGMA GOAL Defect reduction – why is it important to focus on cost of poor quality (COPQ)


Yield improvement Improved customer satisfaction and higher return on investment — learning faster than our competitors is the only sustainable advantage. This is the reason why Six Sigma methodology emphasizes breakthrough improvements rather than incremental ones.

COMPARISON BETWEEN THREE SIGMA AND SIX SIGMA QUALITY

SHORT HISTORICAL BACKGROUND The business case for implementing Six Sigma: After the definition, this item is very important. It must be understood by all before moving on to a new topic. It is the reason why Six Sigma is going to be implemented in your organization. Therefore, not only must it be understood, but in addition it must make sense and be believable. Sharing the executive committee members list with everyone is one of the ways to make individuals understand the importance of the implementation process. Another way is to provide some background about the black belts as individuals and their commitment to Six Sigma and to identify specific projects that plague the organization, either genuine financial problems or issues perceived as problems by customers. Yet another way may be to present some specific examples of your company in relationship to your competitors.

OVERVIEW OF THE BIG PICTURE

Deployment structure: the Six Sigma implementation process must be a top-down flow; otherwise, it will not work. Executive leadership (part-time basis): executives should be the drivers of the Six Sigma process in directions that meet key business goals and address key customer satisfaction concerns. Key roles are: • Establish the vision • Articulate the business strategy • Provide resources • Remove roadblocks • Support the culture change • Monitor the results • Define the criteria for success and make others accountable for the results • Align the systems and structures with the changes taking place • Participate with the black belts through project reviews and recognition of results Master black belt (full-time basis): they are the trainers, coaches, and facilitators. They are the experts of Six Sigma tools and methodologies and are responsible for training and coaching black belts. Master black belts, or


shoguns as we call them, may also be responsible for leading large projects on their own. Key roles are: • Be the expert in tools and concepts • Facilitate and implement Six Sigma in the organization • Certify the black belts • Assist in identifying projects • Coach and support black belts • Participate in project reviews • Develop new tools or modify old tools for applications • Lead major programs • Share best practices • Drive passion • Partner with champion

Project champions (part-time basis): they drive Six Sigma through the process and are accountable for the performance of black belts and the results of Six Sigma projects in their area. They are the conduit between the executive leadership and the black belt, and they are supposed to eliminate bottlenecks and conflicts that arise during projects, especially projects with cross-functional responsibilities. Key roles are: • Execute the vision through the organization • Create and maintain passion • Identify and prioritize projects • Identify and select the black belts • Develop the reward and recognition program • Share best practices in the organization • Remove barriers for black belts • Drive and communicate results • Develop a comprehensive training plan • Communicate the linkage between Six Sigma and the business strategy

Black belts (full-time basis): they are accountable for driving projects and are responsible for leading and teaching Six Sigma processes within the company. Black belts are also responsible for applying Six Sigma tools to complete a predetermined number of projects worth at least $250,000 each (projects are commonly worth between $400,000 and $600,000). It is expected that the improvement will be a breakthrough improvement with a magnitude of 100×. Key roles are: • Full time • Identify barriers • Lead project teams • Identify project resources • Be expert of the breakthrough strategy


• Teach and coach as needed • Manage project risk • Deliver results on time • Report project status • Complete final report and control plan • Ensure results are sustained

Green belts (part-time basis): they are expected to help black belts with expediting and completing Six Sigma projects and may take the lead in small projects of their own. They should also look for ways to apply Six Sigma problem-solving methods within their work area. Key roles are: • Apply the methodology in functional areas • Support the black belts in completing projects • Be project team member • Help ensure improvements are sustained • Concurrent with existing responsibilities

Process Driven, NOT Event Driven Rollout strategy (emphasize the importance of projects and measurement) Management’s responsibility Training requirements Black belts Green belts Project definition: • Who is my customer? • What matters? What are the CTQs? • What is the scope? • What nonconformance am I trying to reduce? By how much? • Is the goal of reduction realistic? • What is the current cost of poor quality? • What benefits will we get if we improve to the point of reaching our goal?

Project selection: define the project charter. This will provide the appropriate documentation for communicating progress and direction to the rest of the team as well as to management. To use the CT Matrix follow the seven steps: • Identify the customers • Meet with customers and identify CTSs • Perform CTY then CTX breakdown and construct CT Matrix • Identify critical or leverage processes • Set improvement objectives and develop action plans • Assign agents


• Identify CTPs for critical or leverage processes through Six Sigma projects

IDENTIFY CUSTOMER Y = f(X). Y is the output and the Xs are the inputs. Identify the Y and determine the Xs. It is imperative to understand that most often a single Y may be influenced by more than one X. Therefore, we may have Y = f(X1, X2, …, Xn). However, that is not all. We may even have a single X cascading into a further level, such that for every X1 we may have Y = f(x1, x2, …, xn) This is called cascading. Apply project selection checklist. To ensure the selected issue, concern, or problem will make a good Six Sigma project, a checklist can be applied to verify the project’s potential. Simple criteria for selection are the following six questions: • Does the project have recurring events? • Is the scope of the project narrow enough? • Do metrics exist? Can measurements be established in an appropriate amount of time? • Do you have control of the process? • Does the project improve customer satisfaction? • Does the project improve the financial position of the company? If the answer to all of these questions is yes, then the project is an excellent candidate. Another way to look at the project selection may be to focus on impact, time, tools, metrics, financials, research, and team effort. Typical questions are: • What corporate objective is supported by this project? • What business group objective is addressed by the project? • What customer will benefit from this project? How? • Can the project be completed within 3 to 4 months? • Could the process improvements be handled adequately via basic methods and techniques? • Is the more structured Six Sigma approach and methodology desirable for this project? • Will this project require application of all phases of Six Sigma? • Have you defined the nonconformance opportunities?


• Do the baseline nonconformance data exist to support project selection? • Is the nonconformance reduction offered greater than 70%? • What improvements are expected in your area from the project? • Are projected savings greater than or equal to $XXXK per year? • Will this project lead to improvements with little or no capital? • Is there a similar project already under way or proposed at another location? • Can this project be led by a black belt? • Can you identify the team members to start this project? • Is capital investment required? Develop high-level problem statement. This is a high-level description of the issue to be addressed by the green belt or black belt. The problem statement will be the starting point for the application of the Six Sigma methodology. This is the point where the champion really needs to understand the process because he or she has to “sell it” to management. In other words, he or she has to make the business case for the project.

THE DMAIC PROCESS The model: it is a structured methodology for executing Six Sigma project activities. Make sure to point out here that the model is not linear in nature. Quite often, teams may find themselves in multiple phases so that thoroughness is established. Define: the purpose is to refine the project team’s understanding of the problem to be addressed. It is the foundation for the success of both the project and Six Sigma. Measure: the purpose is to establish techniques for collecting data about current performance that highlight project opportunities and provide a structure for monitoring subsequent improvements. Typical questions are: • What is my process? How does it function? • Which outputs affect CTQs most? • Which inputs seem to affect outputs (CTQs) most? • Is my ability to measure and/or detect “good enough”? • How is my process doing today? • How good could my (current) process be when everything is running smoothly? • What is the best that my process was designed to do?

Analyze: the purpose is to allow the team to further target improvement opportunities by taking a closer look at the data. Typical questions are: • Which inputs actually affect my CTQs most? By how much? • Do combinations of variables affect outputs?


• If I change an input, do I really change the output? • If I observe results from the same process and different locations and results appear to be different, are they really? • How many observations do I need to draw conclusions? • What level of confidence do I have regarding my conclusions? • Can I describe the relationship between inputs and outputs in a statistical format? • Do I know the inputs with the biggest impact on a given output? Improve: the purpose is to generate ideas about ways to improve the process; design, pilot, and implement improvements; and validate improvements. Typical questions are: • Once I know for sure which inputs most impact my outputs, how do I set them? • How many trials do I need to run to find and confirm the optimal setting and procedure of these key inputs? • Do I use systematic experimentation to find the input combination that delivers the optimal output? Control: the purpose is to institutionalize process and product improvements and monitor ongoing performance. Typical questions are: • Once I have reduced the nonconformances, how do the functional team and I keep them there? • How does the functional team keep it going? • What do I set up to keep it going even when things like people, technology, and customers change? Select product or process key characteristics, e.g., customer Y using the improvement strategy — the DMAIC model. Please notice that every output is data-based; therefore, the decision is data-based. Define/measure: • Define performance standards for Y. The focus is Y. • Validate measurement system for Y. The focus is Y. • Establish process capability of creating Y. The focus is Y. • Define improvement objectives for Y. The focus is Y.

Analyze: • Identify variation sources in Y. The focus is Y. • Screen potential causes for change in Y and identify the vital few Xs. The focus is on X1, X2, …, Xn. • Discover variable relationships among the vital few Xs. The focus is on X1, X2, …, Xn.


Improve: • Establish operating tolerances on the vital few Xs. The focus is on the vital few Xs. • Validate measurement system for the Xs. The focus is on the vital few Xs. • Determine ability to control the vital few Xs. The focus is on the vital few Xs. Control: • Implement process control system on the vital few Xs. The focus is on the vital few Xs.

DETAILED MODEL EXPLANATION Define the organization’s values. Key questions are: • What do we really value? • Who are our customers and what do they need? • Who are we and what do we do? • What does customer satisfaction mean? • Do our values correlate with those of our customers? • How do we verify that we meet internal and external needs?

PERFORMANCE METRICS REPORTING • The classical view vs. the Six Sigma approach • Understand the difference • Understand the magnitude of this difference

ESTABLISH CUSTOMER FOCUS • What is important to the customer? • How do we know? • Critical to satisfaction. • Importance of identifying the CTQs. • Understand the difference between functional wants and requirements. • Understand the interaction between what customers need and what suppliers do.

DEFINE VARIABLES: KEY QUESTIONS ARE • What is meant by variables? • What is a dependent variable? • What is an independent variable?


• What other labels are synonymous with dependent and independent variables? • What is meant by leverage variable? • What strategies can be used to isolate leverage variables?

THE FOCUS OF SIX SIGMA

• Y = f(X) • The Critical To (CT) concept. Key questions are: • What does critical to satisfaction (CTS) mean in terms of customers? • What does critical to quality (CTQ) mean in terms of a product, service, or transaction? • What does critical to delivery (CTD) mean in terms of a product, service, or transaction? • What does critical to cost (CTC) mean in terms of a product, service, or transaction? • What does critical to process (CTP) mean in terms of a product, service, or transaction? • What is the relationship between defect opportunities and CTs? • The Critical to Quality (CTQ) and Critical to Process (CTP) • Customer satisfaction

PROCESS OPTIMIZATION PROCESS BASELINE Key questions are: • What is a process baseline and how is it different from a product benchmark? • What is the relationship between a process baseline and a process mapping? • What is the relationship between a process baseline, CTs, and nonconformance opportunities? • What are the key performance metrics associated with a process baseline? • How should a process baseline be established? • How can a process baseline be improved? Supplier improvement. Supplier capability is a critical piece of breakthrough strategy. Measure Process Characterization Understanding the concept of rolled-throughput yield Traditional Y = S/U where Y is yield; S = number of units that pass; and U = number of units tested Definition of nonconformance (defect)


Six Sigma definition of yield (yield at every step in the process) Yield without rework Hidden factory (rework, nonvalue activities) First pass yield (Yrt) — no rework Normalizing yield (Ynorm) is the average yield per step of a sequential process, e.g., a three-step process with per-step yields of 75%, 80%, and 95%. The normalized average is: Yrt = 0.75 × 0.80 × 0.95 = 57%; Ynorm = (0.57)^(1/3) = 82.91%.

Rolled throughput yield = P(operation 1) × P(operation 2) × … × P(operation n) = e^(–dpu) True yield = Y = [(d/u)^r × e^(–d/u)] / r! = [(d/u)^0 × e^(–d/u)] / 0! = [1 × e^(–d/u)] / 1 = e^(–d/u)

where Y = yield; d/u = the defects per unit; e = 2.718. Therefore, when r = 0, we obtain the probability of zero defects or rolled throughput yield. This is very different from the classical determination of yield. Poisson approximation Useful formulas DPU = defects per unit TOP (total opportunities) = Units × opportunities DPO (defects per opportunity) = defects per TOP Probability the opportunity is defective = DPO Probability the opportunity is not defective = Pr(ND) = 1 – DPO Rolled yield is the likelihood that any given unit of product will contain 0 defects (recommended when you know the yield of each process element or opportunity) Yrt = Pr(ND)^(# of opportunities) Yrt = Pr1(ND) × Pr2(ND) × … × Prn(ND) Integration of rework loops is to understand the ramifications of processes that are causing the defects.
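The Poisson approximation noted above can be checked numerically. For a unit with m independent opportunities, each defective with probability p (the DPO), the exact zero-defect probability is (1 – p)^m, while the Poisson view gives e^(–DPU) with DPU = m × p. The probabilities below are invented to show where the approximation holds and where it strains.

import math

def exact_yield(p, m):
    # Exact binomial model: each of m opportunities is independently
    # defective with probability p; yield is P(zero defects).
    return (1.0 - p) ** m

def poisson_yield(p, m):
    # Poisson approximation: Yrt ~ e^(-DPU) with DPU = m * p.
    return math.exp(-m * p)

for p, m in [(0.001, 50), (0.01, 200), (0.05, 20)]:
    dpu = p * m
    print(f"DPU = {dpu:.2f}: exact = {exact_yield(p, m):.4f}, "
          f"Poisson = {poisson_yield(p, m):.4f}")

The two columns agree closely while p is small and drift apart as p grows, which is why the approximation is recommended only when the per-opportunity defect probability is low.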

PROCESS MAPPING Understand the visual display of the process Understand the “what you think it is…” Understand the “what it actually is…” Understand the “what you would like it to be…” Differentiate between business process — strategic, business processes — internal, SIPOC model, and detailed subprocesses map


CAUSE AND EFFECT

Understand the function of the cause and effect in a nonmanufacturing environment. Differentiate between manufacturing and nonmanufacturing causes, e.g., manufacturing: manpower — people, machine, method, material, measurement, mother nature — environment. Transactional/commercial/service: manpower — people, policies, procedures, place, measurement, mother nature — environment. Cause and effect matrix — the idea is to identify and evaluate control plans for key process input variables (KPIVs). Step 1. Identify key customer requirements. Step 2. Rank order and assign priority factor to each output. Step 3. Identify all process steps and materials (inputs) from process map. Step 4. Evaluate correlation (a low score can have small effect on output variables; a high score can greatly affect the output variable). Step 5. Cross-multiply correlation values with priority factors and add for each input.

THE APPROACH TO C&E MATRIX

Approach 1. a) place the outputs across the top of the matrix, b) rank outputs, c) place inputs down the left side of the matrix starting with first process step and move to last process step. Approach 2. a) place the outputs across the top of the matrix, b) place the process steps down the left side of the matrix, c) correlate process steps to outputs, d) Pareto the process steps, e) start a new C&E matrix with inputs from the most critical three or four process steps.

LINKS OF C&E MATRIX TO OTHER TOOLS

Capability summary – key outputs are evaluated FMEA – potential problems are identified Control plan – key inputs are evaluated

BASIC STATISTICS The ten basic statistical concepts • Types of data • Central tendency • Confidence intervals • Variation


• Spread • Central limit theorem • Distributions • Degrees of freedom • Probability • Accuracy and precision

Process capability – customer requirements; process characterization; process stability Cp, Cpk, Pp, and Ppk — in Six Sigma the focus is first on centering and then controlling the spread Rational subgroupings Sampling Sample Short- vs. long-term capability (performance)

CONVERTING DPM TO A Z EQUIVALENT

Understand the Z values Know how to use the Z table Standard transformation(s) Understand the difference between pooled and total standard deviation Pooled — taken over a relatively short time. It takes into account only the variation within a subset and common causes of variation. Total standard deviation — taken from many samples that represent the shift and drift that occur in the population due to all causes of variation. Graphical tools — the most important analysis tool is to ALWAYS plot the data
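The pooled vs. total distinction can be shown in a few lines of Python. The subgroups below are invented so that the subgroup mean drifts upward over time: the pooled estimate sees only within-subgroup (short-term, common-cause) variation, while the total standard deviation absorbs the shift and drift as well.

import math
import statistics

# Hypothetical subgroups sampled at different times; the mean drifts.
subgroups = [
    [10.0, 10.1, 9.9, 10.0],
    [10.3, 10.4, 10.2, 10.3],  # process has shifted
    [10.6, 10.5, 10.7, 10.6],  # ...and drifted further
]

# Pooled (short-term): weight each subgroup variance by its degrees
# of freedom, then take the square root.
num = sum((len(g) - 1) * statistics.variance(g) for g in subgroups)
den = sum(len(g) - 1 for g in subgroups)
pooled_sd = math.sqrt(num / den)

# Total (long-term): treat all observations as one sample.
total_sd = statistics.stdev([x for g in subgroups for x in g])

print(f"pooled sigma = {pooled_sd:.3f}")  # within-subgroup noise only
print(f"total sigma  = {total_sd:.3f}")   # includes shift and drift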

BASIC GRAPHS Pareto Time series Standard plot Boxplot Histograms Marginal plots Scatter plots Control charts Other charts Check sheets Cause and effect

ANALYZE Process optimization — hypothesis testing Roles of statistics


Population vs. samples Why do we need hypothesis testing? Significance vs. importance

IMPROVE Design of experiments (DOE) What is it? Objectives Strategy One, two, multiple factors at a time Interactions Model building

CONTROL Process optimization — process control What is control? Sources of variation Types of variation Statistical process control (SPC)/control charts Attribute Variable Project report outs

SIX SIGMA PROJECT CHAMPION TRAINING — MANUFACTURING The following outline serves as an opening discussion to give manufacturing champions a “feel” for Six Sigma methodology. It is given here as a guideline, but one may forgo this discussion. If used, it should last no more than 4 hours. It is considered an introduction because it sets the tone for the training that follows. Most organizations try to implement Six Sigma in manufacturing without having examined some of their own internal situations. This short excursion provides for venting, explanation, motivation, and the need for Six Sigma without really delving into detail in any particular area. Introductions Agenda Ground rules

EXPLORING OUR VALUES SHORT OVERVIEW Problem – potential project


What is Six Sigma? The goals of Six Sigma Why focus on COPQ? Knowledge is the foundation Directions of Knowledge What makes Six Sigma different? Leaders asking the right questions Foundation of the tools Collecting data Questions to be answered Drive data collection Statistics – intuition vs. data The changing quality philosophy The cost of poor quality (COPQ) A statistical look Variation is the enemy Consequences of variation Primary sources of variation How do we measure variation and quality? The standard deviation What makes Six Sigma different? Where does industry stand? Getting to Six Sigma The impact of added inspection Impact of complexity on inspection The breakthrough methodology The focus of Six Sigma Improvement strategy (DMAIC) The Six Sigma roadmap DMAIC problem solving and fixing method A picture of a process – what is a process The roadmap Project definition Measure Define Measure Analyze Improve Control Breakthrough strategy Six Sigma breakthrough Six Sigma terms and definitions


SIX SIGMA MANUFACTURING CHAMPION TRAINING — GETTING STARTED Open the formal training with discussion of key items such as: Ranking our values Comparing value systems Relating behavior and values Measurements get attention Performance metrics reporting The classical view of performance Understanding the differences The magnitude of difference What do we measure today? Establishing customer focus Critical to satisfaction: Identifying CTQs Contrasting views — customers and suppliers Customers speak a different language Maximizing customer value Maximizing interactions Linking customer needs and what we do Defining variables — should be a good discussion of: What is meant by the term variables? What is a dependent variable? What is an independent variable? What other labels are synonymous with dependent and independent variables? What is meant by the phrase leverage variable? What strategies can be used to isolate leverage variables? CT concept CTQ and CTP characteristics Customer satisfaction: Quality Delivery Price The focus of Six Sigma The model of Six Sigma: • Define • Measure • Analyze • Improve • Control The leverage principle


Process optimization Six Sigma — key questions: What does the phrase critical to satisfaction mean in terms of a customer? What does the phrase critical to quality mean in terms of a product, service, or transaction? What does the phrase critical to delivery mean in terms of a product, service, or transaction? What does the phrase critical to cost mean in terms of a product, service, or transaction? What does the phrase critical to process mean in terms of a product, service, or transaction? What is the relationship between defect opportunities and CTs? CT matrix components: The CT matrix structure “Critical to” characteristics CTS characteristics The product tree (CTY) Process tree (CTX tree) The nature of opportunities: Nature of an opportunity Opportunity and density Frequently used data types and distributions The opportunity hierarchy Opportunity and defect counting strategy Independence and opportunities Complexity and capability Six Sigma process baselines — begin discussion with the following questions: What is a process baseline and how is it different from a product benchmark? What is the connection between a process baseline and a process map? What is the connection between a process baseline, CTs, and defect opportunities? What are the key performance metrics associated with a process baseline? How should a process baseline be established? How can a process baseline be improved? Where are we on the Six Sigma journey? Macro-level product benchmarking Benchmarking engineering drawings What is process baselining? Identifying key processes Baselining manufacturing processes Baselining transactional processes Rolled-throughput yield: The classical perspective of yield What does our intuition tell us? Several competing notions of yield


Classical/traditional: looks at quality at the end of the process. First-time yield: yield exclusive of rework Application: used to determine the quality level of individual processes or process steps Rolled-throughput yield: probability of zero defects (100% yield) Application: used to estimate the cumulative quality level of a multistep sequential process with statistically independent process steps Normalized yield: average yield of consecutive processes Application: used to estimate the average quality level of an entire process Measuring first pass yield Rolled-throughput yield Calculating normalized yield The hidden factory and rolled yield Yield calculation example Notes on the Poisson approximation – when and how to use The effect of independence Understanding the hidden factory — a simulation exercise may be appropriate here Basic statistics probability distributions Statistics: The most important analysis tool Dot diagram Histograms Measures of location Mean: arithmetic average of a set of values Reflects the influence of all values Strongly influenced by extreme values Would you prefer your income to be the mean or the median? Median: reflects the 50% rank — the center number after a set of numbers has been sorted from low to high Does not include all values in calculation Is “robust” to extreme outlier scores Why would we use the mean instead of the median? In process improvement? Sample mean for a distribution Sample median Relationship of the mean and median Set of data Measures of spread Measures of variation Standard deviation Deviation is the distance from the mean Deviation score = observation – true mean


Variance = mean or average of squared deviation scores σ² is the symbol for variance Standard deviation = square root of variance σ is the symbol for the standard deviation Population vs. sample Degrees of freedom Sample statistics Additive property of variances — the variance for a sum or difference of two independent variables is found by adding both variances. (Special note: If y1 and y2 are not independent the covariance term must be included.) V(y1 + y2) = V(y1) + V(y2) V(y1 – y2) = V(y1) + V(y2) Accuracy and precision: Accuracy describes centering Precision describes spread Standard deviation as it relates to specifications DPM Real-world defects per million data Probability: Probability density function Normal distribution Normal probability plots Standardized Z transformation The empirical rule of the standard deviation as it relates to normal and other distributions: Rule 1 Roughly 60–75% of the data are within a distance of one standard deviation on either side of the mean. Rule 2 Usually 90–98% of the data are within a distance of two standard deviations on either side of the mean. Rule 3 Approximately 99% of the data are within a distance of three standard deviations on either side of the mean. Central limit theorem — definition The sampling distribution of the mean Types of data — attribute or variable? Attribute data (qualitative) • Categories • Yes, no • Go, no go • Machine 1, machine 2, machine 3


• Pass/fail • Good/defective • Maintenance equipment failures, fiber breakouts, number of seeds, number of defects Variable data (quantitative) • Continuous data — decimal places show absolute distance between numbers, e.g., time, pressure, alignment, diameter • Discrete data — data are not capable of being meaningfully subdivided into more precise increments Distribution: Binomial distribution: the binomial distribution is used where there are only two possible outcomes for each trial — repeated trials, e.g., good/bad, defective/not defective, success/failure Parameters: n = number of trials, p = probability of success (0 < p < 1) Central limit theorem — the distribution of sample means approaches the normal distribution as the sample size increases (a common rule of thumb is n > 30 for unknown distributions). The central limit theorem also allows us to assume that the distributions of sample averages of a normal population are themselves normal, regardless of sample size. The SE mean shows that as sample size increases, the standard deviation of the sample means decreases. The standard error will help us calculate confidence intervals. • Significance of confidence intervals — statistics such as mean and standard deviation are only estimates of the population parameters μ and σ and are based on only one sample. Because there is variability in these estimates from sample to sample, we can quantify our uncertainty using statistically based confidence intervals (CIs). Most of the time, we calculate 95% CIs. This means that approximately 95 out of 100 CIs will contain the population parameter, or we are 95% certain the population parameter is inside the interval. Population vs. sample • Comparison of histograms • Parametric CIs — the parametric CIs assume a t-distribution of sample means and use this to calculate CIs. What is the t-distribution? The t-distribution is a family of bell-shaped distributions that are dependent on sample size. The smaller the sample size, the wider and flatter the distribution. • CI for the mean • CIs for proportions
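A minimal sketch of the parametric CI for the mean described above, using the t-distribution. The cycle-time data are invented, and scipy is assumed to be available for the t quantile.

import math
import statistics
from scipy import stats

def t_ci_mean(data, confidence=0.95):
    # Two-sided t-based confidence interval for the population mean.
    n = len(data)
    mean = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(n)  # standard error of mean
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return mean - t_crit * se, mean + t_crit * se

cycle_times = [4.2, 3.9, 4.5, 4.1, 4.8, 4.0, 4.3, 4.4]  # hypothetical
lo, hi = t_ci_mean(cycle_times)
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")

Rerunning with a larger sample narrows the interval, which is the SE-mean point made above.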

HYPOTHESIS TESTING INTRODUCTION Hypothesis testing is a stepping stone to ANOVA and DOE. Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. (Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys.) To help identify the vital few Xs, historical or current data may be sampled. (Passive: you have either directly sampled your process or have obtained historic sample data. Active: you have made a modification to your process and then sampled.)

PARAMETERS VS. STATISTICS

Hypothesis testing description — statistics communicate information from data; however, they are not a substitute for professional judgment. Quite often, statistical testing provides objective answers to questions that are traditionally answered subjectively through the practical question of whether there is a real difference between _____ and _____. A practical process problem is translated into a statistical hypothesis in order to answer this question. In hypothesis testing, we use relatively small samples to answer questions about population parameters. Therefore, there is always a chance that we selected a sample that is not representative of the population and, as a consequence, there is always a chance that the conclusion obtained is wrong. However, with some assumptions, inferential statistics allow us to estimate the probability of getting an "odd" sample. This lets us quantify the probability (P-value) of a wrong conclusion.
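
A minimal sketch of this translation, using invented data for two hypothetical production lines and a standard two-sample t-test:

    # Hypothetical yield measurements from two production lines.
    from scipy import stats

    line1 = [92.1, 93.4, 91.8, 94.0, 92.7, 93.1]
    line2 = [90.5, 91.2, 92.0, 90.8, 91.5, 90.9]

    # H0: mu1 = mu2 (no difference); Ha: mu1 != mu2.
    t_stat, p_value = stats.ttest_ind(line1, line2)
    print(f"t = {t_stat:.3f}, P-value = {p_value:.4f}")

    # A small P-value (e.g., below alpha = 0.05) means an "odd" sample
    # would be unlikely under H0, so we reject H0.
    if p_value < 0.05:
        print("Reject H0: evidence of a real difference between the lines")
    else:
        print("Fail to reject H0")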

FORMULATING HYPOTHESES
Tests of significance — significance levels (α, alpha, and β, beta). Customary confidence levels are 90, 95, or 99%, corresponding to α values of 0.10, 0.05, and 0.01. Testing at a chosen alpha level requires two things: a) an assumption of no difference (H0), and b) a reference distribution of some sort.
Hypothesis testing roadmap

WEEK 3
Review week 1
Review week 2
General questions
ANOVA review (one way, F-test. The F-test is a signal-to-noise ratio: the higher the F statistic, the lower the probability that the observed differences occurred by chance. When there are only two levels, the results of the one-way ANOVA are identical to the t-test. The relationship is F = t². A short verification sketch appears at the end of this outline.)
Questions about the project
The mathematical model for a one-way ANOVA
• Comments about single-factor designs — the output is generally measured on an interval or ratio scale (yield, temperature, volts, etc.). The output variable can be discrete or interval/ratio. The input variable is known as a factor. If the factor is continuous by nature, it must be classified into subgroups. For example, we could have a measure of line pressure from low to high values. We could do a median split and classify the factor into two levels, low and high.
• Diagnostic testing — residual analysis — ANOVA assumes the errors are normally distributed with a mean = 0 and a constant sigma. We can test this by reviewing the residuals, each of which is a score minus its sample mean. (In a computer statistical package, this is calculated by asking for it.) The practical significance of the model is designated as epsilon-squared.
• Test of equal variance
• ANOVA table
• F-distribution
• Main effects and interval plots
• Pooled standard deviation
• Homogeneity of variance
• Barriers to effective designed experiments
• Execution strategy
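
The F = t² relationship noted above is easy to verify directly; a minimal sketch with made-up data for two levels (equal-variance form):

    from scipy import stats

    a = [5.1, 5.4, 4.9, 5.2, 5.0]
    b = [5.6, 5.8, 5.5, 5.9, 5.7]

    t_stat, p_t = stats.ttest_ind(a, b)   # two-sample t-test
    f_stat, p_f = stats.f_oneway(a, b)    # one-way ANOVA, two levels

    # With only two levels the two procedures agree: F = t^2,
    # and the P-values are identical.
    print(f"t^2 = {t_stat**2:.4f}, F = {f_stat:.4f}")
    print(f"P-values: t-test = {p_t:.4f}, ANOVA = {p_f:.4f}")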

DOE defined: a systematic set of experiments that permits one to evaluate the effect of one or more factors without concern about extraneous variables or subjective judgments. It begins with the statement of the experimental objective and ends with the reporting of the results. It may often lead to further experimentation. It is the vehicle of the scientific method, giving unambiguous results that can be used for inferring cause and effect.
Strategy of Experimentation
• Define the problem
• Establish the objective
• Select the output — responses (Ys)
• Select the input factors (Xs)
• Choose the factor levels
• Select the experimental design and sample size
• Collect and analyze the data
• Draw conclusions
• Achieve the objective; the objective of all experimental studies is to determine:
• The effects of material variation on product reliability.
• The sources of variation in a critical process.
• The effects of less expensive materials on product performance.
• The impact of operator variation on the product.
• The cause-effect relationships between process inputs and product characteristics.
• The equation that models your process.
Barriers to Effective Experimentation
• Define the output variables — is output qualitative or quantitative? (Objective: centering, variation improvement, or both?)
• What is the baseline (mean and sigma)?
• Is output under statistical control?
• Does output vary over time?
• How much change in the output do you want to detect?
• Is the output normally distributed?
• Is the measurement system adequate?
• Do you need multiple outputs?
Problem statements include:
• A complete and detailed description of the problem
• Identification and understanding of all operational definitions (a problem statement relates to the problem and contains no solutions or conclusions)
• As many specifics as possible
• No causes
Purpose and function include:
• Clearly defined and quantified problem
• Definition of the measurement source to be used
• Identification of the negative effects of the current performance and their relationship to the customer CTQs
Questions to establish the objective of the experiment
• What do you want to discover by conducting the experiment?
• Are you trying to establish the relationship between the input factors (Xs) and the output (response — Y)?
• Are you trying to establish the vital few Xs from the trivial many (possible factors)?
• Are you interested in knowing if several input factors act together to influence the output (Y)?
• Are you trying to determine the optimal settings of the input factors?
Selecting inputs (Xs) and the output (responses — Y). Factor selection — typical tools for narrowing down the list are
• FMEA/control plans or DCP
• Cause-and-effect matrix
• Multi-vari and hypothesis testing
• Process mapping
• Brainstorming
• Literature review
• Engineering knowledge
• Operator experience
• Scientific theory
• Customer/supplier input
• Global problem solving
• Parameter design
Choosing the levels for each factor
• The levels of an input factor are the values of the input factor (X) being examined in the experiment (not to be confused with the output [Y]). To select the appropriate levels, two items are of concern and serve as the basis for the definition of the levels: a) engineering knowledge and b) theoretical knowledge.
• For a quantitative (variables data) factor like temperature: if an experiment is to be conducted at two different temperatures, then the factor temperature has two levels.
• For a qualitative (attributes data) factor like cleanliness: if an experiment is to be conducted using clean and not clean, then the factor cleanliness has two levels.
Guidelines for setting input variable levels
• To determine the vital few inputs from a large number of variables, use screening experimentation.
• Set "bold" levels at the extremes of current capabilities. If we vary the input to extremes, we will be assured of seeing an effect on the output, if there is one. Remember that this may exaggerate the variation or it may overlook nonlinearity, if present. Once critical inputs are identified, reduced spacing of the levels is used to identify interactions among inputs. This approach usually leads to a series of sequential experiments.
Response surface methods
Full factorials with replication
Full factorials with repetition
Full factorials without replication or repetition
Screening or fractional designs
Ensuring internal and external validity
• Internal validity. Randomization of experimental runs "spreads" the noise across the experiment. Blocking ensures noise is part of the experiment and can be directly studied.
• Holding noise variables constant eliminates the effect of that variable but limits broad inferences.
• External validity. Include representative samples from possible noise variables.
• Threats to statistical validity
• Low statistical power: sample size inappropriate.
• Loose measurement systems inflate variability of measurements.
• Random factors in the experimental setting inflate variability of measurement.
• Randomization and sample size prevent threats.
Planning questions
• What is the measurable objective?
• What will it cost?
• How will we determine sample sizes?
• What is our plan for randomization?
• Have we talked to internal customers about this?
• How long will it take?
• How are we going to analyze the data?
• Have we planned a pilot run?
• Where is the proposal?
• DOE worksheet
Performing the experiment
• Document initial information.
• Verify measurement systems.
• Ensure baseline conditions are included in the experiment.
• Make sure clear responsibilities are assigned for proper data collection.
• Always perform a pilot run to verify and improve data-collection procedures!
• Watch for and record any extraneous sources of variation.
• Analyze data promptly and thoroughly:
• Graphical
• Descriptive
• Inferential
(Always run one or more verification runs to confirm your results; go from narrow to broad inference.)
General advice
• The planning sheet can be more important than running the experiment.
• Make sure you have tied potential business results to your project.
• Focus on one experiment at a time.
• Do not try to answer all the questions in one study; rely on a sequence of studies.
• Use two-level designs early.
• Spend less than 25% of budget on the first experiment.
• Always verify results in a follow-up study.
• It is acceptable to abandon an experiment.
• A final report is a must!!
• Finally, push the envelope with robust levels, but think of the safety of the people and equipment.
Factorial experiments
• Purpose
• To understand the advantages of factorial experiments vs. one factor at a time.
• To determine how to analyze general factorial experiments.
• To understand the concept of statistical interaction.
• To analyze two- and three-level full factorial experiments.
• To use diagnostic techniques to evaluate the "goodness of fit" (residuals) of the statistical model.
• To identify the most important or critical factors in the experiments.
• Full factorial — one-factor-at-a-time (OFAT) and interactions.
• Factorial experiments — advantages
• Are more efficient than OFAT experiments.
• Allow the investigation of the combined effects of factors (interactions).
• Cover a wider experimental region than OFAT studies.
• Identify critical factor inputs.
• Are more efficient in estimating effects of both input and noise variables on the output.
• 2^k factorials
• 3^k factorials
• General linear model (GLM) — what do we do when our full factorial design is unbalanced due to lost data or the inability to complete all of the experimental runs? This is not an issue, as we can use the GLM to analyze the results.
• Analyzing interaction plots
• Mixed models (fixed and random factors) permitted
• ANOVA plus unbalanced or nested designs
• Full factorial experiments — typically used to optimize a process
• Steps to conduct a full factorial experiment (a sketch of steps 4–7 appears after this section)
• Step 1: state the practical problem and objective using the DOE worksheet.
• Step 2: state the factors and levels of interest.
• Step 3: select the appropriate sample size.
• Step 4: create a computer experimental data sheet with the factors in their respective columns. Randomize the experimental runs in the data sheet. The software will create the factorial design.
• Step 5: conduct the experiment.
• Step 6: construct the ANOVA table for the full model (balanced or unbalanced).
• Step 7: review the ANOVA table and eliminate effects with p-values above 0.05. Run the reduced model to include those p-values that are deemed significant.
• Step 8: analyze the residuals of the reduced model to ensure you have a model that fits. Calculate the fits and residuals for significance.
• Generate model
• Run verification experiment
• Fractional factorials — used primarily for screening factors
• Steps for conducting a fractional factorial
• Steps 1–6: same as for full factorial
• Step 7: analyze the residual plots to ensure we have a model that fits (this step was run in step 5)
• Step 8: investigate significant interactions (p-value < 0.05). Assess the significance of the highest-order interactions first. For three-way interactions, unstack the data and analyze.
• Once the highest-order interactions are interpreted, analyze the next set of lower-order interactions.
• Step 9: investigate significant main effects (p-value < 0.05). Evaluate main-effects plots and cube plots.
• Step 10: state the mathematical model obtained. If possible, calculate the epsilon-squared and determine the practical significance.
• Step 11: translate the mathematical model into process terms and formulate conclusions and recommendations.
• Step 12: replicate optimum conditions. Plan the next experiment or institutionalize the change.
Taguchi experimentation
• Loss function
• Ideal function
• P-diagram
• Orthogonal arrays (OAs)
• Parameters (factors)
• Noise (factors)
• Interaction
• Main effects
• Signal to noise
• Daniel plots
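
A minimal sketch of steps 4–7 of the full factorial procedure under simplifying assumptions: a 2^3 design in coded units, with hypothetical factor names and responses. In practice a statistical package builds the design and the ANOVA table; this only shows the design matrix and main-effect estimates.

    import itertools

    # 2^3 full factorial in coded units (-1, +1); factor names are hypothetical.
    factors = ["temp", "pressure", "time"]
    design = list(itertools.product([-1, 1], repeat=3))   # 8 runs

    # Hypothetical responses, one per run (in practice, randomize run order).
    y = [60.2, 64.1, 61.0, 65.3, 70.5, 74.8, 71.2, 75.9]

    # Main effect = (mean response at +1) - (mean response at -1).
    for j, name in enumerate(factors):
        hi = [y[i] for i, run in enumerate(design) if run[j] == 1]
        lo = [y[i] for i, run in enumerate(design) if run[j] == -1]
        effect = sum(hi) / len(hi) - sum(lo) / len(lo)
        print(f"{name:8s} main effect = {effect:+.2f}")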

WEEK 4
Review week 1
Review week 2
Review week 3
General questions
Questions, concerns about project
Week 4 potential project deliverables
• Project definition
• Project metrics
• Process optimization
• PLEX, EVOP, RSM, multiple regression
• Process controls
• Statistical product monitors
• Statistical process controls
• Document and sustain the gains
• Update FMEA
• Update control plan
• 5S the immediate project area
• Quality manual and related documentation
• Write the final report
• Review of designed experiments

FRACTIONAL FACTORIALS
Why do fractional factorial experiments? As the number of factors increases, so does the number of runs:
• 2 × 2 factorial = 4 runs
• 2 × 2 × 2 factorial = 8 runs
• 2 × 2 × 2 × 2 factorial = 16 runs, and so on.
If the experimenter can assume higher-order interactions are negligible, it is possible to run a fraction of the full factorial and still get good estimates of the low-order interactions. The major use of fractional factorials is screening: a relatively large number of factors in a relatively small number of runs. Screening experiments are usually done in the early stages of a process-improvement project.
Factorial experiments. Successful fractional factorials rest on the sparsity-of-effects principle: systems are usually driven by main effects and low-order interactions.
Sequential experimentation
Designing a fractional factorial
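
A minimal sketch of how a half fraction is constructed: a 2^(3-1) design generated from C = AB (defining relation I = ABC), so the main effect of C is aliased with the AB interaction. Factor names are generic.

    import itertools

    # Start from a full 2^2 design in A and B, then generate C = AB.
    runs = []
    for a, b in itertools.product([-1, 1], repeat=2):
        c = a * b          # generator C = AB (defining relation I = ABC)
        runs.append((a, b, c))

    print(" A  B  C=AB")
    for a, b, c in runs:
        print(f"{a:+d} {b:+d} {c:+d}")
    # 4 runs instead of 8; each main effect is estimable only if the
    # higher-order interaction it is aliased with is negligible.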


What is PLEX? PLEX = PLant EXperimentation: a process-improvement tool for online use in full-scale production; uses simple factorial two-level designs in two or three factors; usually requires several iterations of experimental design, analysis, and interim improvements. The goal is to minimize disruption to production but make big enough changes to quickly see effects on output variables.
• Prerequisites for PLEX:
• Good measurement system in place.
• With little or no replicate runs, we want to minimize the effect of measurement error.
• May require repeat measurements.
• Adequate technical supervision to keep the process controlled and monitored.
• Extra attention to safety requirements and to avoiding upsets.
• Stay within the operating region.
• Maintain environmental controls.
• Cooperation of several functions required.
• Why and when do we use PLEX?
• Strong need to increase and/or improve production.
• May have a sold-out product line.
• Product line may have poor process capability.
• Offline studies (lab or pilot scale) are not practical or meaningful.
• Key process input variables (Xs) are not well determined, but we have the resources only to investigate a few at a time. A series of factorial experiments is required.
• Beware: interactions may be obscured.
• Would like to "optimize" (or reoptimize) the process while in production mode.
PLEX process-improvement roadmap
• Form the process-improvement team.
• Assess the measurement system, e.g., gauge R&R.
• Identify Xs and Ys, e.g., multi-vari, cause and effect, FMEA.
• Choose two to four factors for the first DOE.
• Choose safe operating ranges for each factor. Ranges should be wide enough to reasonably see an active effect with no replication.
• Set up a 2^k factorial design with optional, but recommended, center points.
• Consider repeating one or more conditions. One approach is to run the center point at the beginning, middle, and end of the design as a check for process drift or capability.
• Prior to running the design, look at each treatment combination to see if there is a potential failure mode or unsafe condition.
• Set up a sampling plan.
• Plan for technical supervision to minimize upset potential.
• Randomize the order of running, if practical. Otherwise, choose a run sequence that reduces the number of changes.
• Run each process condition long enough to achieve steady state.
• Return to standard conditions until DOE results are analyzed.
• Based upon results, suggest interim process changes or subsequent DOEs or small confirmatory studies.
• Continue until all Xs are investigated and the process is optimized.
EVOP — EVolutionary OPerations
• What is EVOP? A process-improvement tool used while a process is running in production mode for the optimization of plant performance; a method that uses 2² or 2³ factorials with replicates and center points; empowers operators to conduct the experiment with minimal engineering support during normal operations. Each experimental run is called a CYCLE. One cycle is the sequence (0,0) → (1,1) → (1,–1) → (–1,–1) → (–1,1); randomization is eliminated to minimize disruption, and effect estimates are documented at the end of each cycle. Cycles continue in the hope of collecting sufficient evidence of significant change in the Y for the various levels of X. Each set of cycles is called a phase. When enough data have been collected through the cycles to identify a state of improved operations, the phase is considered complete. The results of each phase determine the new settings for subsequent phases of EVOP. Continue phases until the X settings are optimized. Data from the phases estimate a "response surface."
• Why use EVOP? The goal is to establish the settings of x1, x2, x3, … in the mathematical relationship Y = f(x1, x2, x3, …) so as to optimize the process. EVOP provides information on process optimization with minor interruption to production, empowers operators and manufacturing personnel, and is a cost-effective method to employ continual improvement.
• How to apply EVOP:
• Step 1: what is the problem to be solved?
• Step 2: establish the experimental strategy.
• Define the Ys/Xs to be studied.
• Select variable settings for phase I.
• Determine the maximum number of cycles for phase I.
• Step 3: collect and analyze data during phase I; display on an information board to determine steps for phase II.
• Step 4: repeat steps 2 and 3 for successive phases.
• Step 5: implement optimal settings for the Xs as S.O.P.
• Step 6: rerun EVOP every 6 months to ensure optimal settings are maintained.
Response surface methodology (RSM)
• What is RSM? Once significant factors are determined, RSM leads the experimenter rapidly and efficiently to the general area of the optimum settings (usually using a linear model). The ultimate RSM objective is to determine the optimum operating conditions for the system or to determine a region of the factor space in which the operating specifications are satisfied (usually using a second-order model). Furthermore, response surfaces are used to optimize the results of a full factorial DOE and to create a second-order model if necessary. Therefore, RSM is good for a) determining average output parameters as functions of input parameters and b) process and product design optimization.
• Response surface: the surface represented by the expected value of an output modeled as a function of the significant inputs (variable inputs only): expected (Y) = f(x1, x2, x3, …, xn)
• Method of steepest ascent or descent: a procedure for moving sequentially along the direction of the maximum increase (steepest ascent) or maximum decrease (steepest descent) of the response variable using the first-order model:
Y(predicted) = b0 + Σ bi Xi
• Region of curvature: the region where one or more of the significant inputs no longer conform to the first-order model. Once in this region of operation, most responses can be modeled using the fitted second-order model:
Y(predicted) = b0 + Σ bi Xi + Σ bii Xi² + Σ bij Xi Xj (i < j)
• Central composite design: a common DOE matrix used to establish a valid second-order model.
• Coded variables: variables that are assigned arbitrary levels in a DOE study (–1, 1, A, B).
• Uncoded variables: variables that are assigned process-specific levels in an RSM study (10V, 20V).
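
A minimal sketch of the method of steepest ascent described above, using assumed, hypothetical first-order coefficients: the path moves through the factor space in proportion to the fitted coefficients.

    # First-order model fitted from a DOE, in coded units (hypothetical):
    # Y = 50 + 4.0*x1 - 2.5*x2
    b0 = 50.0
    b = {"x1": 4.0, "x2": -2.5}

    # The direction of steepest ascent is proportional to the coefficients.
    # Step along the path in increments of 0.5 coded units of x1.
    step_x1 = 0.5
    for k in range(1, 5):
        x1 = k * step_x1
        x2 = x1 * b["x2"] / b["x1"]   # keep movement proportional to b2/b1
        y_pred = b0 + b["x1"] * x1 + b["x2"] * x2
        print(f"step {k}: x1 = {x1:+.2f}, x2 = {x2:+.2f}, "
              f"predicted Y = {y_pred:.1f}")

Experiments are then run along this path until the response stops improving, which signals the region of curvature and the need for a second-order model.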


Regression
• Regression and correlation
• Use correlation to measure the strength of linear association between two variables, especially when one variable does not depend on the other.
• Use correlation to benchmark equipment against a standard or another similar piece of equipment.
• Use regression to predict one variable from another (it may be easier and more cost-efficient).
• Use regression to provide evidence that key input variables explain the variation in the response variable or to determine whether different input variables are related to one another.
Correlation limitations
• Correlation explores linear association. It does not imply a cause-and-effect relationship.
• Two variables may be perfectly related in a manner other than linear, and the correlation coefficient will be close to zero. For example, the relationship could be curvilinear. This emphasizes the importance of plots.
• The linear association between two variables may be due to a third variable not under consideration. Sound judgment and scientific knowledge are necessary to interpret the results and validity of correlation analysis.
• Some statisticians argue that correlation analysis should only be used when no dependency exists, i.e., when it is not clear which variable depends on the other.
• In correlation analysis, it is assumed that both the X and Y variables are random, i.e., X is not fixed to study the dependency of Y.
Linear regression uses — regression quantifies the relationship between a response variable and one or more predictor variables. Four general uses are:
• Prediction: the model is used to predict the response variable of interest, especially when this response is difficult or expensive to measure. Emphasis is not given to capturing the role of each input variable with strict preciseness.
• Variable screening: the model is used to detect the importance of each input variable in explaining the variation in the response. Important variables are kept for further study.
• System explanation: the model is used to explain how a system works. Finding the specific role of each input variable is essential in this case. Various models that define different roles for the inputs are typically in competition.
• Parameter estimation: the model is used primarily to find specific ranges, sizes, and magnitudes of the regression coefficients.
Linear regression assumptions
Simple regression — fitted-line plot; interpreting the output
Regression — residual plots
Simple polynomial regression
Interpreting the results
Assessing the predictive power of the model
Matrix plots — scatter plots with many Xs
Correlation with many Xs
The output — R²
Coefficient of determination (r²)
Multiple regression — beware of multicollinearity
When to use multiple regression — when process or noise input variables are continuous and the output is continuous, multiple regression can be used to investigate the relationship between the Xs (process and/or noise) and the Ys.
Three types of multiple regression
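
A minimal sketch of multiple regression and the coefficient of determination, with invented data; in practice a statistical package reports these along with residual diagnostics.

    import numpy as np

    # Hypothetical data: response y and two continuous inputs x1, x2.
    x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    x2 = np.array([2.1, 1.8, 2.5, 2.9, 3.2, 3.8])
    y  = np.array([5.3, 6.8, 8.9, 10.4, 12.1, 14.2])

    # Design matrix with an intercept column: y = b0 + b1*x1 + b2*x2
    X = np.column_stack([np.ones_like(x1), x1, x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    y_hat = X @ coef
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot          # coefficient of determination

    print("b0, b1, b2 =", np.round(coef, 3))
    print(f"R^2 = {r2:.3f}")
    # Caution: when the Xs are correlated (multicollinearity), individual
    # coefficients become unstable even when R^2 is high.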


What is a quality system? A quality system is an organization's agreed-upon method of doing business. It is not to be confused with a set of documents that are meant to satisfy an outside auditing organization (i.e., ISO 900x). This means a quality system represents the actions, not the written words, of an organization. Typical elements of a quality system are:
• Quality policy
• Organization for quality (does not mean quality department!)
• Management review of quality
• Quality planning (how to launch and control products and processes)
• Design control
• Data control
• Purchasing
• Approval of materials for ongoing production
• Evaluation of suppliers
• Verification of purchased product (does not mean incoming inspection!)
• Product identification and traceability
• Process control
• Government safety and environmental regulations
• Designation of special characteristics
• Preventative maintenance
• Process monitoring and operator instructions
• Preliminary capability studies (how to turn on a process)
• Ongoing process performance requirements (how to run a process)
• Verification of setups
• Inspection and testing
• Control of inspection, measuring, and test equipment
• Calibration
• Measurement system analysis
• Control of nonconforming product
• Corrective and preventative action
• Handling, storage, packaging, preservation, and delivery
• Control of quality audits (do we do what we say we do?)
• Training
• Service
• Use of statistical techniques
Aspects of control
Quality systems = how we manage
Evolution of management style
• First generation: management by doing — this is the first, simplest, most primitive approach: just do it yourself. We still use it. "I'll take care of it." It is an effective way to get something done, but its capability is limited.
• Second generation: management by directing — people found that they could expand their capacity by telling others exactly what to do and how to do it: a master craftsman giving detailed directions to apprentices. This approach allows an expert to leverage his or her time by getting others to do some of the work, and it maintains strict compliance with the expert's standards.
• Third generation: management by results — people get tired of you telling them every detail of how to do their jobs and say, "Just tell me what you want by when, and leave it up to me to figure out how to do it." So you say, "OK, reduce inventories by 20% this year. I'll reward or punish you based on how well you do. Good luck." All three approaches have appropriate applications in today's organizations.


Are they being used appropriately?
• Third generation sounds logical. Its approach is widely taught and used and is appropriate where departmental objectives have little impact on other parts of the organization.
• Third generation has serious, largely unrecognized flaws we can no longer afford. For example, we all want better figures: higher sales, lower costs, faster cycle times, lower absenteeism, lower inventory. How do we get better figures?
• Improve the system. Make fundamental changes that improve quality, prevent errors, and reduce waste, for example, reducing in-process inventory by increasing the reliability of operations.
• Distort the system. Get the demanded results at the expense of other results. "You want lower inventories? No problem!" Inventories miraculously disappear — but schedule, delivery, and quality suffer. Expediting and premium freight go up. Purchasing says, "You want lower costs? No problem!" Purchase cost goes down, saving the company millions, but it never shows up on the bottom line. Manufacturing struggles with the new parts, increasing rework and overtime. Quality suffers…
• Distort the figures. Use creative accounting. "Oh, we don't count those as inventory anymore… that material is now on consignment from our supplier." The basic system did not change.
Control methods agenda
Integrating with lean manufacturing
Ranking control methods (the strategy)
Types of control methods
Product vs. process
Automatic vs. manual
Control plan
Control methods are a form of Kaizen
Control methods
• SPC
• S.O.P.
• Type III corrective action = inspection: implementation of a short-term containment action that is likely to detect the defect caused by the error condition. Containments are typically audits or 100% inspection.
• Type II corrective action = flag: an improvement made to the process that will detect when the error condition has occurred. This flag will shut down the equipment so that the defect will not move forward.
• Type I corrective action = countermeasure: an improvement made to the process that will eliminate the error condition from occurring. The defect will never be created. This is also referred to as a long-term corrective action in the form of mistake-proofing or design changes.
• Product monitoring SPC techniques (on Ys)
• Precontrol (manual or automatic)
• X-bar & R or X & MR charts (manual or automatic)
• p and np charts (manual or automatic)
• c and u charts (manual or automatic)
• Process control SPC techniques (on Xs)
• Mistake-proofing (automatic)
• X-bar & R or X & MR charts (manual or automatic)
• EWMA (automatic)
• Cusum (automatic)
• Realistic tolerancing (manual or automatic)

The control plan is a living document that is used to document all your process control methods. It is a written description of the systems for controlling parts and processes (or services). Because it is a living document, the control plan should be updated to reflect the addition or deletion of controls based on experience gained by producing parts (or providing services).
The immediate GOAL of the quality system (QS) during the control phase of the QS methodology:
• The team should 5S the project area.
• The team should develop standardized work instructions.
• The team should understand and assist with the implementation of process and product control systems.
• The team should document all of the above and live by what they have documented.
The long-term vision of the quality system — the company and all of its suppliers have a quality system that governs the ways in which products and services are bought, sold, and produced.
• The company should be 5S in all areas.
• The company should develop standardized work instructions and procedures.
• The company should understand and assist with the implementation of process and product control systems.
• The company should document all of the above and live by what they have documented.
Introduction to statistical process control (SPC)
What is SPC?
SPC as a control method
The goal and methodology
Advantages and disadvantages
Components of an SPC control chart
Where to use SPC charts
How to implement SPC charts
Types of control charts and examples
SPC flowchart
• SPC is the basic tool for studying variation and using statistical signals to monitor and improve process performance. This tool can be applied to ANY area: manufacturing, finance, sales, etc. Most companies perform SPC on finished goods (Ys) rather than on process characteristics (Xs).
The first step is to use statistical techniques to control our company's outputs. It is not until we focus our efforts on controlling the inputs (Xs) that drive our outputs (Ys) that we realize the full gain of our efforts to increase quality and productivity and lower costs.
• What is SPC? All processes have natural variability (due to common causes) and unnatural variability (due to special causes). We use SPC to monitor and/or improve our processes. Use of SPC allows us to detect special-cause variation through out-of-control signals. These out-of-control signals cannot tell us why the process is out of control, only that it is. Control charts are the means through which process and product parameters are tracked statistically over time. Control charts incorporate upper and lower control limits that reflect the natural limits of random variability in the process. These limits should not be compared to customer specification limits. Based on statistical principles, control charts allow for the identification of unnatural (nonrandom) patterns in process variables. When the control chart signals a nonrandom pattern, we know special-cause variation has changed the process. The actions we take to correct nonrandom patterns in control charts are the key to successful SPC usage. Control limits are based on establishing ±3 sigma limits for the Y or X being measured.
Process improvement and control charts
Benefits of control chart systems
• Proven technique for improving productivity
• Effective in defect prevention
• Prevents unnecessary process adjustments
• Provides diagnostic information
• Provides information about process capability
Control chart roadmap (the sketch after this list shows the trial-limit calculation for an X-bar & R chart)
• Select the appropriate variable to control.
• Select the data-collection point. (Note: if the variable cannot be measured directly, a surrogate variable can be identified.)
• Select the type of control chart.
• Establish the basis for rational subgrouping.
• Determine the appropriate sample size and frequency.
• Determine the measurement method/criteria.
• Determine gauge capability.
• Perform an initial capability study to establish trial control limits.
• Set up forms for collecting and charting data.
• Develop procedures for collection, charting, analyzing, and acting on information.
• Train personnel.
• Institutionalize the charting process.
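
A minimal sketch of the trial-limit calculation for an X-bar & R chart; the subgroup data are invented, and A2, D3, and D4 are the standard control chart constants for subgroups of five.

    import statistics

    # Hypothetical subgroups of size n = 5 from a process.
    subgroups = [
        [10.1, 10.3,  9.9, 10.2, 10.0],
        [10.2, 10.4, 10.1, 10.0, 10.3],
        [ 9.8, 10.0, 10.1,  9.9, 10.2],
        [10.3, 10.1, 10.2, 10.4, 10.0],
    ]

    # Standard control chart constants for n = 5.
    A2, D3, D4 = 0.577, 0.0, 2.114

    xbars = [statistics.mean(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbarbar = statistics.mean(xbars)   # grand average (center line)
    rbar = statistics.mean(ranges)     # average range

    print(f"X-bar chart: CL = {xbarbar:.3f}, "
          f"UCL = {xbarbar + A2 * rbar:.3f}, LCL = {xbarbar - A2 * rbar:.3f}")
    print(f"R chart:     CL = {rbar:.3f}, "
          f"UCL = {D4 * rbar:.3f}, LCL = {D3 * rbar:.3f}")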


Control chart types — there are many types of control charts; however, the underlying principles of each are the same. The proper type is chosen using knowledge of both SPC and your process objectives. The chart type selection depends on:
• Data type: attribute vs. variable
• Ease of sampling; homogeneity of samples
• Distribution of data: normal or non-normal?
• Subgroup size: constant or variable?
• Other considerations
Control charts for variables data
Control charts for attribute data
Analysis of patterns on control charts (a sketch automating one of these tests appears at the end of this section):
• One point outside the three-sigma limit
• Two of three outside the two-sigma limit
• Four of five outside the one-sigma limit
• Cycles
• Trend
• Stratification
• Seven consecutive points on one side of the center line
Advantages of control chart systems:
• Proven technique for improving productivity.
• Effective in defect prevention.
• Prevent unnecessary process adjustments.
• Provide diagnostic information.
• Provide information about process capability.
• Can be used for both attribute and variable data types.
Disadvantages of control chart systems:
• Everyone must be well trained and periodically retrained.
• Data must be gathered correctly.
• Mean and range/standard deviation must be calculated correctly.
• Data must be charted correctly.
• Charts must be analyzed correctly.
• Reactions to patterns in charts must be appropriate — every time!
Precontrol charts — traditionally, precontrol has been perceived as an ineffective tool, and most quality practitioners remain skeptical of its benefits. This view originated from the fact that the limits of the three precontrol regions are commonly calculated from the process specifications, resulting in overreactions and inducing more variability into a process instead of reducing it. In the Six Sigma breakthrough strategy, precontrol is implemented after the improve phase. The zones are calculated based on the process after improvements are made, so its distribution is narrow and tight compared to the specification band. Specification limits are not used in calculating these zones, so we encounter units in the yellow or red zones before actual defects are produced.
Where to use SPC charts:
• When a mistake-proofing device is not feasible.
• Identify processes with high RPNs from the FMEA.
• Evaluate the "current controls" column of the FMEA to determine the gaps in the control plan. Does SPC make sense?
• Identify processes that are critical based on DOEs.
• Place charts only where necessary based on project scope. If a chart has been implemented, do not hesitate to remove it if it is not value-added.
• Initially, the process outputs may need to be monitored.
• The goal: monitor and control process inputs and, over time, eliminate the need for SPC charts.
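
One of these pattern tests, the seven-in-a-row rule, is simple to automate; a minimal sketch with invented data:

    def seven_in_a_row(points, center):
        """Flag the 'seven consecutive points on one side of the center
        line' pattern; return the index where a run of seven completes."""
        run = 0
        last_side = 0
        for i, p in enumerate(points):
            side = 1 if p > center else -1 if p < center else 0
            run = run + 1 if side == last_side and side != 0 else 1
            last_side = side
            if run >= 7:
                return i
        return None

    data = [10.2, 10.3, 10.1, 10.4, 10.2, 10.5, 10.3, 10.1]
    print(seven_in_a_row(data, center=10.0))  # -> 6 (all points above center)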


Pareto
Histogram
Cause-and-effect diagram
Interpreting the results
Definition of lean manufacturing — a systematic approach to manufacturing based on the premise that anywhere work is being done, waste is being generated; a vehicle through which organizations can identify and reduce waste; a manufacturing methodology that will facilitate and foster a living quality system. The goal of lean manufacturing is the total elimination of waste.
Poka-yoke (mistake-proofing)
Planning for waste elimination
• Establish "permanent" controls to prevent waste's reoccurrence.
• The vision: continuous elimination of waste — moving from:
• Infrequent setups and long runs
• Functional focus
• If it ain't broke, don't fix it
• Specialized workers, engineers, and leaders
• Good enough
• Run it, repair it
• Layoff
• Management directs
• Penalize mistakes
• Make the schedule
to:
• Quick setups and short runs
• Product focus
• Fix it so it does not break
• Multifunctionally skilled people
• Never good enough, continual improvement
• Do it right the first time
• New opportunities
• Leaders teach
• Retrain
• Make quality a priority
There are seven elements of waste; they are waste of:
• Correction
• Overproduction
• Processing
• Conveyance
• Inventory
• Motion
• Waiting

The first step toward waste elimination is identifying it. Black belt projects should focus efforts on one or more of these areas.
5S workplace organization — to ensure your gains are sustainable, you must start with a firm foundation. The 5S standards are the foundation that supports all the phases of lean manufacturing. The system can only be as strong as the foundation it is built on. The foundation of a production system is a clean and safe work environment. Its strength is contingent upon the employee and company commitment to maintaining it. (As a black belt, you set the goals high and accept nothing less. Each operator must understand that maintaining these standards is a condition of employment.)
Foundation of lean manufacturing — 5S overview
1. Sorting (decide what is needed). Sort out necessary and unnecessary items. Store often-used items in the work area, store infrequently used items away from the work area, and dispose of items that are not needed.
2. Storage (arrangement of needed items, straightened up in the workplace). Arrange all necessary items. Have a designated place for everything: a place for everything and everything in its place.
3. Shining (sweep and cleanliness). Keep your area clean on a continuing basis.
4. Standardize. Maintain the workplace at a level that uncovers and makes problems obvious. Continuously improve the plant through continuous assessment and action.
5. Sustaining (training and disciplined culture). To maintain our discipline, we need to practice and repeat until it becomes a way of life.
Benefits of 5S implementation
• A cleaner workplace is a safer workplace.
• Contributes to how we feel about our product, process, our company, and ourselves.
• Provides a customer showcase to promote our business.
• Product quality will improve, especially with respect to contaminants.
• Efficiency will increase.
Some 5S focusing tools
• "Red tag" technique (visual clearing up). This is a vital clearing-up technique. As soon as a potentially unnecessary item is identified, it is marked with a red tag so that anybody can see clearly what may be eliminated or moved. The use of red tags can be one secret to a company's survival, because it is a visible way to identify what is not needed in the workplace. Red tags ask why an item is in a given location and support the first "S" — sort. Tips for tagging:
• We all tend to look at items as personal possessions. They are company possessions; we are the caretakers of the items.
• An outsider can take the lead in red tagging. Plant people take advantage of these "fresh eyes" by creating an atmosphere where they will feel comfortable in questioning what is needed.
• Tag anything not needed. One exception: do not red tag people unless you want to be red tagged yourself!
• If in doubt, tag it!
• Before and after photographs
• Improve area by area, each one completely
• Clear responsibilities
• Daily cross-department tours
• Schedule ALL critical customers to visit
• Regular assessments and "radar" metrics
• Red tag technique. The red tag technique involves the following steps:
1. Establish the rules for distinguishing between what is needed and what is not.
2. Identify needed and unneeded items and attach red tags to all potentially unneeded items. Write the specific reason for red tagging and sign and date each tag.
3. Remove red-tagged items and temporarily store them in an identified holding area.
4. Sort through the red-tagged items; dispose of those that are truly superfluous. Other items can be eliminated at an agreed interval when it is clear that they have no use. Ensure that all stakeholders agree.
5. Determine ways to improve the workplace so that unnecessary items do not accumulate.
6. Continue to red tag regularly.

Standardized work — the one best way to perform each operation, identified and agreed upon through general consensus (not majority rule). This becomes the standard work procedure. The affected employees should understand that once they have defined the standard, they will be expected to perform the job according to that standard. It is imperative that we all understand the notion: variation = defects. Standardized work leads to reduced variation.
Prerequisites for standardized work
Standardized workflow
Kaizen — continual improvement: the philosophy of incremental continual improvement, in which every process can and should be continually evaluated and improved in terms of the time required, resources used, resultant quality, and other aspects relevant to the process. The BB's job, simply stated, is focused Kaizen. Our methodology for Kaizen is the Six Sigma breakthrough strategy — DMAIC. Control is only sustained long term when the 5Ss and standardized work are in place.
• Kaizen rules
• Keep an open mind to change
• Maintain a positive attitude
• Never leave in silent disagreement
• Create a blameless environment
• Practice mutual respect every day
• Treat others as you want to be treated
• One person, one vote — no position, no rank
• No such thing as a dumb question
• Understand the thought process and then the Kaizen elements
• Takt time
• Cycle time
• Work sequence
• Standard WIP
• Takt time determination
• Kaizen process steps
• Step 1: create a flowchart with parts and subassemblies.
• Step 2: calculate Takt time = net available work time / customer demand (a minimal sketch appears after these steps).
• Step 3: measure each operation — each assembly and subassembly as they are. To the extent an operator has to go to an assembly for something, measure walk time. Establish a baseline using time observation forms; note any setup time.
• Step 4: do a baseline standard workflow chart (it should look like a spaghetti chart).
• Step 5: do a baseline percent loading chart. Review, for each operator, where the waste and walk time are. Look at this in close relationship to the process.
• Step 6: review the 5Ss.
• Step 7: consolidate and accumulate jobs to get them as close to Takt time as possible. Work with the operators.
• Step 8: observe, measure, and modify the new flow process. This should be a one-piece flow process if we are producing to Takt time.
• Step 9: complete the one-piece flow process and redo all baseline charts (you may consider overlaying these new results on top of the older data to display the improvement). Make a list of things to complete.
• Step 10: prepare the presentation; share results.
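
A minimal sketch of the Takt time calculation in step 2; the available-time and demand figures are invented.

    # Takt time = net available work time / customer demand.
    net_available_min = 7.5 * 60 * 2     # two 7.5-hour shifts, in minutes
    daily_demand = 450                   # units required per day

    takt = net_available_min / daily_demand
    print(f"Takt time = {takt:.1f} minutes per unit")   # -> 2.0

    # Any operation whose cycle time exceeds takt cannot keep pace;
    # consolidate or rebalance work (steps 7-8) until each operator's
    # loading is at or just under takt.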


Kaizen presentation guidelines
• Prepare overheads or a slide show for a 20-minute presentation
• Ensure your presentation includes all of the Kaizen steps
• Use whatever props or other devices best explain your achievement
• Include 10 minutes for Q&A
• Each team member should participate in the presentation
• Management needs to see and hear about the results of the team's success
JIT concepts (just in time)
Kanban — a pull inventory system
Poka-yoke — a methodology that helps build quality into the product and allows only good product to go to the next operator or customer. It focuses on the elimination of human errors. Key elements of mistake-proofing:
• Distinction between error and defect
• Source inspection
• 100% inspection
• Immediate action
• "Red flag" conditions
• Control/feedback logic
• Guidelines for mistake-proofing
Mistake-proofing strategies
• Do not make surplus products (high inventory makes poor quality difficult to see)
• Eliminate, simplify, or combine operations
• Use a transfer rather than a process-batch strategy
• Involve everyone in error and defect prevention (standard practices, daily improvements, and mistake-proofing)
• Create an environment that emphasizes quality work, promotes involvement and creativity, and strives for continual improvement
Advantages of mistake-proofing
• No formal training programs required
• Eliminates many inspection operations
• Relieves operators from repetitive tasks
• Promotes creativity and value-adding operations
• Contributes to defect-free work
• Effectively provides 100% internal inspection without the associated problems of human fatigue and error

CONTROL PLANS

A control plan is a logical, systematic approach for finding and correcting the root causes of out-of-control conditions and will be a valuable tool for process improvement. A key advantage of the reaction-plan form is its use as a troubleshooting guide for operators. A systematic guide to what to look for during upset conditions is valuable on its own. Key items of concern are:
• What elements make up a control plan?
• Why should we bother with them?
• Who contributes to their preparation?


• How do we develop one?
• When do we update them?
• Where should the plan reside?
Control plan strategy
• Operate our processes consistently on target with minimum variation.
• Minimize process tampering (overadjustment).
• Assure that the process improvements that have been identified and implemented become institutionalized. ISO 9000 can assist here.
• Provide for adequate training in all procedures.
• Include required maintenance schedules.
• Factors impacting a good control plan.
Control plan components
• Process map steps
• Key process output variables, targets, and specs
• Key and critical process input variables with appropriate working tolerances and control limits
• Important noise variables (uncontrollable inputs)
• Short- and long-term capability analysis results
• Designated control methods, tools, and systems
• SPC
• Automated process control
• Checklists
• Mistake-proofing systems
• Standard operating procedures
• Workmanship standards
Documenting the control plan
• FMEA
• Cause-and-effect matrix
• Process map
• Multi-vari studies
• DOE
Reaction plan and procedures
• Control methods identify the person responsible for control of each critical variable and give details about how to react to out-of-control conditions.
• Control methods include a training plan and process auditing system, e.g., ISO 9000.
• Complicated methods can be referenced by document number and location; changes in the process require changes to the control method.
• Actions should be the responsibility of the people closest to the process.
• The reaction plan can simply refer to an SOP and identify the person responsible for the reaction procedure.
• In all cases, suspect or nonconforming product must be clearly identified and quarantined.
Questions for control plan evaluation. Key process input variables (Xs):
• How are they monitored?
• How often are they verified?
• Are optimum target values and specifications known?
• How much variation is there around the target value?
• What causes the variation in the X?
• How often is the X out of control?
• Which Xs should have control charts?
• Uncontrollable (noise) inputs: what are they? Are they impossible or impractical to control? Do we know how to compensate for changes in them? How robust is the system to noise?
• Standard operating procedures — do they exist? Are they simple and understood? Are they being followed? Are they current?
• Is operator training performed and documented?
• Is there a process audit schedule?
Maintenance procedures
• Have critical components been identified?
• Does the schedule specify who, what, and when?
• Where are the manufacturer's instructions?
• Do we have a troubleshooting guide?
• What are the training requirements for maintenance?
• What special equipment is needed for measurement? What is the measurement capability?
• Who does the measurement? How often is a measurement taken? How are routine data recorded?
• Who plots the control chart (if one is used) and interprets the information?
• What key procedures are required to maintain control?
• What is done with product that is off spec?
• How is the process routinely audited?
• Who makes the audit? How often? How is it recorded?
Control plan checklist
• Documentation package
• Sustaining the gains
Issues in transitioning a project
• Assure your project is complete enough to transition.
• No loose ends — have at least a plan (project action plan) for everything not yet finalized.
• Start early in your project to plan for transitioning.
• Identify team members at the start of the project.
• Remind them they are representatives of a larger group.
• Communicate regularly with people within the impacted area and with those outside it whom the changes may affect.
• Display, update, and communicate your project results in the impacted area during all phases. Remember: no surprises; buy-in during all phases.
• Hold regular updates with the impacted area, assuring that their concerns are considered by your team.
• When possible, get others involved to help; you are not a one-person show and do not have all the answers.


• Use data collection.
• Idea generation (brainstorming events).
• Create buy-in with the entire workcell/targeted area.
• Project action plan.
Project action plan (suggested format)
• Sustaining the gain.
• Changes must be permanent.
• Changes must be built into the daily routine.
• A sampling plan and measurement system must be established and used for monitoring.
• Responsibilities must be clear, accepted, and, if necessary, built into roles and responsibilities.
• Develop and update procedures.
• Train all involved.
• Action plan solidified and agreed upon.
Sustaining the gain — product changes
• Revise drawings by submitting EARs
• Work with process, test, and product engineers
Process changes
• Physically change the process flow (5S the project area).
• Develop visual indicators.
• Establish or buy new equipment to aid assembly or test.
• Poka-yoke wherever possible, including forms.
• Procedures (standardized work instructions).
• Develop new procedures or revise existing ones.
• Notify quality assurance of new procedures to incorporate in internal audits.
• Provide QA a copy of standardized work instructions.
• Measurements (visual indicators).
• Build into the process the posting of key metric updates.
• Make it part of someone's regular job to do timely updates.
• Make it someone's job to review the metric and take action when needed.
• Training.
• Train everyone in the new process (do not leave until there is full understanding).
Aspects of control
• Benchmarks for world-class performance:
• Quality improvement rate of 68% per year.
• Productivity improvement rate of 2% per month.
• Lead time is less than ten times the value-added time.
• Continuous improvement culture.
• Total employee involvement.
• Reward and recognition.
• Celebration.


MANUFACTURING TRAINING – 4 WEEKS
WEEK 1
Introductions
Agenda
Training ground rules:
• If you have any questions, please ask!
• Share your experiences.
• We take frequent short breaks; please be prompt in returning so we can stay on schedule.
• There will be a number of team activities; please take an active role.
• Please use name tents.
• Listen as an ally.
• The goal is to complete your projects!
Exploring our values
Manufacturing training Six Sigma focus
• Delighting the customer through flawless execution
• Rapid breakthrough improvement
• Advanced breakthrough tools that work
• Positive and deep culture change
• Real financial results that impact the bottom line
What is Six Sigma?
• Vision
• Philosophy
• Aggressive goal
• Metric (standard of measurement)
• Benchmark
• Method
• Vehicle for:
• Customer focus
• Breakthrough improvement
• Continual improvement
• People involvement
• Defines the goals of the business
• Defines performance metrics that tie to the business goals
• Identifies projects, using performance metrics, that will yield clear business results
• Applies advanced quality and statistical tools to achieve breakthrough financial performance
• Goal
• Performance target
• Problem-solving methodology
The strategy
• Which business function needs it?


• Leadership participation. Six Sigma only works when leadership is passionate about excellence and willing to change.
• Is your leadership on board?
• Fundamentals of leadership
• Challenge the process
• Inspire a shared vision
• Enable others to act
• Model the way
• Encourage the heart
• Six Sigma is a catalyst for leaders
The breakthrough phases
• Define
• Measure
• Analyze
• Improve
• Control
The foundation of the Six Sigma tools
• Cost of poor quality
• What is cost of poor quality?
COPQ data
• Getting there through inspection
• Six Sigma overview
• Overall perspective
• Manufacturing process picture
• Defects and variation
• Variation and process capability
Process capability and improvement
• The defect elimination system
• Overall perspective
• Defects and the hidden factory
• Rolled-throughput yield vs. first-time yield
• What causes defects?
• Excess variation due to:
• Manufacturing processes
• Supplier (incoming) material variation
• Unreasonably tight specifications (tighter than the customer requires)
• Dissecting process capability
Premise of Six Sigma — sources of variation can be:
• Identified
• Quantified
• Eliminated or controlled
How do we improve capability?
• Six Sigma, metrics, and continual improvement
Six Sigma is characterized by:
• Defining critical business metrics
• Tracking them
• Improving them using proactive process improvement
• Continual improvement: defects per unit (DPU) drives plant-wide improvement. Defects per million opportunities (DPMO) allows for comparison of dissimilar products.
• Calculating the product sigma level — the sigma level allows for benchmarking within and across companies.
• Metrics: Six Sigma’s primary metric is defects per unit, which is directly related to rolled-throughput yield (Yrt = e^(–dpu)). Cost of poor quality and cycle time (throughput) are two others.
Process steps, FTY and RTY
Harvesting the fruit of Six Sigma
PPM conversion chart
Translating needs into requirements
• Implementation
Six Sigma success factors
• Directly affects quality, cost, cycle time, and financial results
• Focuses on the customer and critical metrics
• Directly attacks variation, defects, and the hidden factory
• Ensures a predictable factory
Black belt execution strategy. Its purpose is to introduce roles and responsibilities and to describe the execution strategy.
• To overview the steps
• To overview the tools
• To overview the deliverables
• To discuss the role of the black belt in relationship to:
• Delivering successful projects using the breakthrough strategy
• Training and mentoring the local organization on Six Sigma
Roles of a black belt:
• Mentoring:
• Cultivate a network of experts in the factory or on site.
• Work with the operators.
• Work with the process owners.
• Work with all levels of management.
• Teaching and coaching:
• Provide formal training to local personnel regarding new tools and strategies
• Become the conduit for information
• Provide one-on-one support
• Develop effective teams
• Identifying and discovery:
• Find new applications
• Identify new projects
• Surface new business opportunities
• Connect the business through the customer and supplier
• Seek best practices
• Being involved:
• Sharing best practices throughout the organization
• Being a spokesperson to the customer
• Driving supplier performance
• Getting involved with executive management
• Becoming a future leader
Prerequisites for black belts:
• Breakthrough strategy training
• Black belt instruction
• Roles of the master black belt
Role of executives:
• Will set meaningful goals and objectives for the corporation
• Will drive the implementation of Six Sigma
Roles of the master black belt:
• Be the expert in the tools and concepts.
• Develop and deliver training to various levels of the organization.
• Certify the black belts.
• Assist in the identification of projects.
• Coach and support BBs in project work.
• Participate in project reviews to offer technical expertise.
• Partner with the champions.
• Demonstrate passion around Six Sigma.
• Share best practices.
• Take on leadership of major programs.
• Develop new tools or modify old tools for application.
• Understand the linkage between Six Sigma and the business strategy.
Role of champion:
• Will select black belt projects consistent with corporate goals.
• Will drive the implementation of Six Sigma through public support and removal of barriers.
Green belt:
• Will deliver successful localized projects using the breakthrough strategy.
Six Sigma instructor:
• Will make sure each and every black belt candidate is certified in the understanding, usage, and application of the Six Sigma tools.
BB execution strategy. Its purpose is to ensure that sources of variation in manufacturing and transactional processes are appropriately and objectively identified, quantified, and controlled or eliminated. Using the breakthrough strategy, process performance is sustained through well-developed, documented, and executed process control plans. The goal, of course, is to achieve improvements in rolled-throughput yield, cost of poor quality, and capacity–productivity. To reach this goal, BBs use the DMAIC model, the Kano model, QFD, and other tools and methodologies. The phases of process improvement are:
• The define phase
• Refine the project
• Establish the “as is” process
• Identify customers and CTQs
• Identify goals and scope of project
• A simple QFD (quality function deployment) tool is used to emphasize the importance of understanding customer requirements, the CTs (critical-tos) — CTCost, CTDelivery, CTQuality. The tool relates the Xs and Ys (customer requirements) using elements documented in the process map and existing process expertise. The expected result is a Pareto of Xs that are used as inputs into the FMEA and control plans. These are the CTPs — critical to the process — or anything that we can control or modify about our process that will help us achieve our objectives.
• The measurement phase. Establish the performance baseline. A well-defined project results in a successful project. Therefore, the problem statement, objective, and improvement metric need to be aligned. If the problem statement identifies defects as the issue, then the objective is to reduce defects, and the metric to track the objective is defects. This holds true for any problem statement, objective, and metric (percent defects, overtime, RTY, etc.).
• Primary metric. A black belt needs to be focused. If other metrics are identified that impact the results, identify these as secondary metrics; e.g., reducing defects is the primary improvement metric, but we do not want to reduce line speed (line speed is the secondary metric).
• Project benefits. Do not confuse projected project benefits with your objective. Make sure you separate these two items.
• There are times when you may achieve your objective yet not see the projected benefits. This is because we cannot control all issues; they need to be tackled in a methodical order.
• Purpose of the measurement phase
• Define the project scope, problem statement, objective, and metric.
• Document the existing process (using a process map, C&E matrix, and an FMEA).
• Identify key output variables (Ys) and key input variables (Xs).
• Establish a data collection system for your Xs and Ys if one does not exist.
• Evaluate the measurement system for each key output variable.
• Establish baseline capability for key output variables (potential and overall).
• Document the existing process.
• Critical-to matrix (cause-and-effect matrix).
• Establish a data-collection system. Determine if you have a method by which you can effectively and accurately collect data on your Xs and Ys in a timely manner. If this is not in place, you will need to implement a system. Without a system in place you will not be able to determine whether you are making any improvements in your project. Establish this system such that you can historically
record the data you are collecting. This information should be recorded in a database that can be readily accessed. The data should be aligned in the database in such a manner that for each output (Y) recorded, the operating conditions (X) are identified. This becomes important for future reference. This data-collection system is absolutely necessary for the control phase of your project. Make sure all those who are collecting data realize its importance.
• Measurement systems analysis. To determine whether the measurement system (defined as the gauge and operators) can be used to precisely measure the characteristic in question. It is very important to make the point that we are not evaluating part variability but gauge and operator capability. Some guidelines are:
• Determine the measurement capabilities for Ys
• Need to be completed before assessing capability of Ys
• These studies are called:
• Gauge repeatability and reproducibility (GR&R) studies
• Measurement systems analysis (MSA)
• Measurement systems evaluation (MSE)
• Indices: precision to tolerance (P/T) ratio = proportion of the specification taken up by measurement error (P/T ≤ 10% is desirable; P/T = 30% is marginal); precision to total variation (P/TV) ratio (%R&R) = proportion of the total variability taken up by measurement error.
• Capability studies. Used to establish the proportion of the operating window taken up by the natural variation of the process. Short-term (potential) and long-term (overall) estimates of capability indices are taught. Indices used assuming the process is centered: Cp, Pp, and Zst; indices used to evaluate a shifted process: Cpk, Ppk, and Zlt.
• The analysis phase. Identify the vital few Xs: identify high-risk input variables (Xs) from the failure modes and effects analysis (FMEA); reduce the number of process input variables (Xs) to a manageable number via hypothesis testing and ANOVA techniques; determine the presence of, and potentially eliminate, noise variables via multi-vari studies; plan and document initial improvement activities.
• Failure modes and effects analysis:
• Documents effects of failed key inputs (Xs) on key outputs (Ys)
• Documents potential causes of failed key input variables (Xs)
• Documents existing control methods for preventing or detecting causes
• Provides prioritization for actions and documents actions taken
• Can be used as the document to track project progress
• Multi-vari studies. Study process inputs and outputs in a passive mode (natural day-to-day variation). Their purpose is to identify and eliminate major noise variables (machine to machine, shift to shift, ambient temperature, humidity, etc.) before moving to the improvement phase; to take a first look at major input variables.
Ultimately, multi-vari studies help select or eliminate variables for study in designed experiments.
• The improvement phase. Determine the governing transformation equation through understanding the ideal function. The backbone of the process improvement is DOE (design of experiments). From the subset of vital few Xs, experiments are designed to actively manipulate the inputs to determine their effect on the outputs (Ys). This phase is characterized by a sequence of experiments, each based on the results of the previous study. Critical variables are identified during this process. Usually three to six Xs account for most of the variation in the outputs. Ultimately, the purpose of this phase is to control and focus on the continual improvement process.
• The control phase: optimize, eliminate, automate, and/or control the vital few inputs; document and implement the control plan; sustain the gains identified; reestablish and monitor long-term delivered capability; implement continual improvement efforts (green belts at the functional area); execution strategy support systems; safety requirements; maintenance plans defined; system to track special causes; required and critical spare parts list; troubleshooting guides; control plans for both short and long term; SPC charts for process monitoring; inspection points and metrology control; workmanship standards; and others.
Potential project deliverables
• Define:
• Identification of customers
• Identification of customers’ needs
• Identify the “as is” process
• Formulate the goal and scope of the project
• Update the project charter
• Measure:
• Project definition:
• Problem description
• Project metrics
• Process exploration:
• Process flow diagram
• C&E matrix, PFMEA, fishbones
• Data-collection system
• Measurement systems analysis (MSA):
• Attribute/variable gauge studies
• Capability assessment (on each Y)
• Capability (Cpk, Ppk, sigma level, DPU, RTY)
• Graphical and statistical tools:
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
• Analyze
• Project definition:
• Problem description
• Project metrics
• Passive process analysis:
• Graphical analysis
• Multi-vari studies
• Hypothesis testing
• DOE planning sheet
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
• Improve
• Project definition:
• Problem description
• Project metrics
• Design of experiments:
• DOE planning sheet
• DOE factorial experiments
• Y = F(x1, x2, x3, …)
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
• Control
• Project definition:
• Problem description
• Project metrics
• Optimization of Ys (RSM/EVOP)
• Monitoring Ys
• Eliminating or controlling Xs
• Sustaining the gains:
• Updated PFMEA
• Process control plan
• Action plan
• Project summary:
• Conclusions
• Issues and barriers
• Final report
• Completed local project review
Rolled-throughput yield
The classical perspective of yield
Simple first-time yield = traditional yield
Measuring first-pass yield
Rolled-throughput yield
Normalized yield
Complexity is a measure of how complicated a particular good or service is. Theoretically, complexity will likely never be quantified in an exacting manner. If we assume that all characteristics are independent and mutually exclusive, we may say that complexity can be reasonably estimated by a simple count. This count is referred to as an “opportunity count.” In terms of quality, each product or process characteristic represents a unique “opportunity” to either add or subtract value. (Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily similar.)
Hidden factory
DPMO
• Non-value-add rule: no opportunity count should be applied to any operation which does not add value. Transportation and storage of materials provide no opportunities. Deburring operations do not count either. Testing, inspection, gauging, etc. do not count; the product in most cases remains unchanged. An exception: an electrical tester where the tester is also used to program an EPROM. The product was altered and value was added.
• Supplied components rule: each supplied part provides one opportunity. Supplied materials such as machine oil, coolants, etc. do not count as supplied components.
• Connections rule: each “attachment” or “connection” counts as one. If a device requires four bolts, there would be an opportunity count of four, one for each bolt connected. A sixty-pin integrated circuit, SMD, soldered to a PCB counts as sixty connections.
• Sanity check rule: will applying counts in these operations take my business in the direction it is intended to go? If counting each dimension checked on a CMM inflates the denominator of the equation, adds no value, and increases cycle time when the company objective is to take cost out of the product, then this type of count would be counter to the company objective. Hence it would not provide an opportunity. Once you define an “opportunity,” however, you must institutionalize that definition to maintain consistency.
Introduction to the software package used. The instructor should provide information about the software at least in the following areas.
• Purpose of using the software
• Capabilities of the software:
• Cut and paste
• Formatting data
• Numeric vs. alpha columns
• Date columns
• Entering data
• Graphing
• Basic statistics
• Help menu
• Normality testing
• ANOVA
• Z scores
• Creating random data
Basic statistics:
• Mean
• Median
• Normal distribution
• T test
• Z test
Fundamentals of improvement:
• Variability — is the process on target with minimum variability? We use the mean to determine if the process is on target. We use the standard deviation (σ) to determine spread.
• Stability — how does the process perform over time? Stability is represented by a constant mean and predictable variability over time. If the process is not stable, identify and remove causes (Xs) of instability (obvious nonrandom variation). Determine the location of the process mean. Is it on target? If not, identify the variables (Xs) that affect the mean and determine optimal settings to achieve the target value. Estimate the magnitude of the total variability. Is it acceptable with respect to the customer requirements (spec limits)? If not, identify the sources of the variability and eliminate or reduce their influence on the process.
• Can we tolerate variability? Even though there will always be variability present in any process, we can tolerate variability if: a) the process is on target, b) the total variability is relatively small compared to the process specifications, and c) the process is stable over time.
Types of outputs (data):
• Attribute data (qualitative)
• Variable data (quantitative)
• Discrete (count) data
• Continuous data
Selecting statistical techniques. There are statistical techniques available to analyze all combinations of input/output data.
Statistical distributions. We can describe the behavior of any process or system by plotting multiple data points for the same variable over time, across products, on different machines, etc. The accumulation of these data can be viewed as a distribution of values.
Represented by:
• Dot plots
• Histograms
• Normal curve or other “smoothed” distribution
Population parameters vs. sample statistics:
• Population: an entire group of objects that have been made or will be made containing a characteristic of interest. Very likely we will never know the true population parameters.
• Sample: the group of objects actually measured in a statistical study. A sample is usually a subset of the population of interest.
• Measures of central tendency: median, mean, and mode.
• Measures of variability:
• Range — the numerical distance between the highest and the lowest values in a data set.
• Variance (σ², s²) — the average squared deviation of each individual data point from the mean. (Emphasize that variances add. In fact, variances of the inputs add to calculate the total variance in the output.)
• Standard deviation (σ, s) — the square root of the variance, the most commonly used measurement to quantify variability. (Emphasize that standard deviations do not add.)
The normal distribution is a distribution of data which has certain consistent properties. These properties are very useful in our understanding of the characteristics of the underlying process from which the data were obtained. Most natural phenomena and man-made processes are distributed normally, or can be represented as normally distributed.
• Property 1: a normal distribution can be described completely by knowing only the mean and standard deviation.
• Property 2: the area under sections of the curve can be used to estimate the cumulative probability of a certain “event” occurring.
• Property 3: the previous rules of cumulative probability closely apply even when a set of data is not perfectly normally distributed.
Testing normality:
• Normal probability plots
• Chi-square
• F test
Data set:
• Mining the data
• Test data for normality
• Conduct appropriate testing and analyses
Capability analysis:
• The need for capability
• Types of capability analysis
• Variable output
• Attribute output
• The method
• Long vs. short
• Indices of capability
• Z shift
• Conversion from short to long term and vice versa
• Additional capability topics
• Box-Cox transformation
• Nonnormal data (transformable)
• Nonnormal data (not transformable)
Attribute measurement system: a measurement system that compares each part to a standard and accepts the part if this standard is met.
• Screen: 100% evaluation of product using inspection techniques (an attribute measurement system).
• Screen effectiveness: the ability of the attribute measurement system to properly discriminate good from bad.
• Customer bias: the operator has a tendency to hold back good product.
• Producer bias: the operator has a tendency to pass defective product.
Purpose of attribute R&R
• To assess your inspection or workmanship standards against your customers’ requirements.
• To determine if inspectors across all shifts, all machines, etc. use the same criteria to distinguish “good” from “bad.”
• To quantify the ability of inspectors to accurately repeat their inspection decisions.
• To identify how well these inspectors are conforming to a “known master,” which includes:
• How often operators decide to ship truly defective product
• How often operators do not ship truly acceptable product
• Discover areas where:
• Training is needed
• Procedures are lacking
• Standards are not defined
Attribute R&R — the method
Variable gauge R&R
• The ideal measurement system will produce “true” measurements every time it is used (zero bias, zero variance).
• The study of measurement systems will provide information as to the percent variation in your process data which comes from error in the measurement.
• It is also a great tool for comparing two or more measurement devices or two or more operators against one another.
• MSE should be used as part of the criteria required to accept and release a new piece of measurement equipment to manufacturing.
• It should be the basis for evaluating a measurement system that is suspected of being deficient.
Possible sources of process variation
Precision vs. accuracy
Basic model
Sources of measurement variation:
• Knowledge to be obtained
• How big is the measurement error?
• What are the sources of measurement error?
• Is the tool stable over time?
• Is the tool capable for this study?
• How do we improve the measurement system?
Accuracy-related terms
• Accuracy — the extent to which the average of the measurements deviates from the true value; the difference between the observed average value of measurements and the master value. The master value is an accepted, traceable reference standard (e.g., NIST).
• True value — the theoretically correct value (NIST standards).
• Bias — the averages of measurements differ from the true value by a fixed amount; effects include:
• Operator bias — different operators get detectably different averages for the same measurements on the same part.
• Machine bias — different machines get detectably different averages for the same measurements on the same parts.
Precision-related terms
• Precision — the total variation in the measurement system; a measure of the natural variation of repeated measurements. Typical terms associated with precision are: random error, spread, test/retest error.
• Repeatability — the inherent variability of the measurement device. Variation that occurs when repeated measurements are made of the same variable under similar conditions: same part, same operator, same set-up, same units, same environmental conditions, in the short term. It is estimated by the pooled (average) standard deviation of the distribution of repeated measurements. Repeatability is usually less than the total variation of the measurement system. Another way of looking at it is to think of it as the variation between successive measurements of the same part, same characteristic, by the same person using the same instrument. Also known as test–retest error; used as an estimate of short-term measurement variation.
• Reproducibility — the variation that results when different conditions are used to make the same measurements: different operators, different set-ups, different test units, different environmental conditions, long-term measurement variation. It is estimated by the standard deviation of the averages of measurements from different measurement conditions. Another way of saying this: the difference in the average of the measurements made by different persons using the same or different instrument when measuring the identical characteristic on the same part.
• Linearity — a measure of the difference in accuracy or precision over the range of instrument capability.
• Discrimination — the number of decimal places that can be measured by the system. Increments of measure should be at least one-tenth of the width of the product specification or process variation.
• Stability (over time) — the distribution of measurements remains constant and predictable over time for both mean and standard deviation: no drifts, sudden shifts, cycles, etc. To ensure stability, make sure you monitor and analyze control charts.
• Correlation — a measure of linear association between two variables, e.g., two different measurement methods or two different laboratories.
P/T and P/TV
• “P to TV” (P/TV) is used to qualify a measurement system as capable of measuring to the total observed process variation.
• “P to T” (P/T) is used to qualify a measurement system as capable of measuring to a given product specification.
Uses of P/T and P/TV (percent R&R)
• The P/T ratio is the most common estimate of measurement system precision. This estimate may be appropriate for evaluating how well the measurement system can perform with respect to the specification. Specifications, however, may be too tight or too loose.
• Generally, the P/T ratio is a good estimate when the measurement system is used only to classify production samples. Even then, if process capability (Cpk) is not adequate, the P/T ratio may give you a false sense of security.
• The P/TV (percent R&R) is the best measure for the black belt. This estimates how well the measurement system performs with respect to the overall process variation. Percent R&R is the best estimate when performing process improvement studies. Care must be taken to use samples representing the full process range.
• The method — calculating percent R&R
• By operator — shows if any operator had higher or lower readings (on average) than the others.
• By part — shows the ability of all operators to obtain the same readings for each part. Also shows the ability of a measurement system to distinguish between parts (amount of overlap).
Gauge R&R, Xbar, and R chart
Percent R&R vs. capability
• Handling poor gauge capability. If a dominant source of variation is repeatability (equipment), you need to replace, repair, or otherwise adjust the equipment. If, in consultation with the equipment vendor or upon searches of industry literature, you find that the gauge technology that you are using is “state-of-the-art” and is performing to its specifications, you should still fix the gauge. One temporary solution to this problem is to use signal averaging. If a dominant source of variation is the operator (reproducibility), you must address this via training and
definition of the standard operating procedure. You should look for differences between operators to give you some indication as to whether it is a training, skill, and/or procedure problem. If the gauge capability is marginal (as high as 30% of study variation) and the process is operating at a high capability (Ppk greater than two), then the gauge is probably not hindering you and you can continue to use it.
Controlling repeatability. Note: if you want to decrease your gauge error, take advantage of the standard error: the error of an average of repeated measurements shrinks with the square root of the sample size.
Measurement system evaluation questions:
• Written inspection measurement procedure?
• Detailed process map developed?
• Specific measuring system and setup defined?
• Trained or certified operators?
• Instrument calibration performed in a timely manner?
• Tracking accuracy?
• Tracking percent R&R?
• Tracking bias?
• Tracking linearity?
• Tracking discrimination?
• Correlation with supplier or customer where appropriate?
Measurement system analysis questions:
• Have you picked the right measurement system? Is this measurement system associated with either critical inputs or outputs?
• What do the precision, accuracy, tolerance, P/T ratio, percent R&R, and trend chart look like?
• What are the sources of variation and what is the measurement error?
• What needs to be done to improve this system?
• Have we informed the right people of our results?
• Who owns this measurement system?
• Who owns troubleshooting?
• Does this system have a control plan in place?
• What’s the calibration frequency? Is that frequent enough?
• Do identical systems match?
Deliverables for week 2:
• Project report
• Title page
• Problem statement summary page
• Problem statement
• CTQ
• What is the defect?
• Initial DPMO
• Target DPMO (e.g., 90% reduction; 99% reduction as a stretch)
• Team
• Benefits of the project (why are we doing this?)
• Picture/drawing to allow the audience to set a reference
• Process flow diagram
• Definition of the measurement system. What was defined as the measurement of study? How is this linked to the CTQ?
• Measurement system validation
• Show failures and what was learned
• Show analysis and results
• Initial capability study
• Begin screening factors (C&E, FMEA, multi-vari)
• Brief recap/summary
• Next steps
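Since the deliverables above call for an initial and a target DPMO, a minimal sketch of the baseline metrics (DPU, rolled-throughput yield via Yrt = e^(–dpu), DPMO, and sigma level) may help. The counts are hypothetical example data, and the 1.5-sigma shift is the conventional assumption.

```python
# A minimal sketch of the week-1 baseline metrics: DPU, RTY, DPMO, sigma level.
# All counts below are hypothetical example data.
import math
from scipy.stats import norm

defects = 120
units = 1000
opportunities_per_unit = 8                 # hypothetical opportunity count

dpu = defects / units                      # defects per unit
rty = math.exp(-dpu)                       # rolled-throughput yield, Yrt = e^(-dpu)
dpmo = defects / (units * opportunities_per_unit) * 1_000_000

# Sigma level from DPMO, applying the conventional 1.5-sigma shift.
sigma_level = norm.ppf(1 - dpmo / 1_000_000) + 1.5

print(f"DPU = {dpu:.3f}  RTY = {rty:.1%}  DPMO = {dpmo:,.0f}  sigma = {sigma_level:.2f}")
```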

WEEK 2
Review of key questions
Review project questions, concerns
Process performance metrics
Cp and Pp
Cpk and Ppk
When to add/subtract Zshift
Metric conversion
Multi-vari charts. Their purpose is to narrow the scope of input variables — leverage KPIVs (identify inputs and outputs). The following tools can be used to identify the inputs and outputs:
• C&E matrix
• FMEA
• Fishbone
• Short-term capability
• Scatter plots
• Correlation
• Regression
• Boxplots
• Main effects
• Interaction plots
• ANOVAs, T-tests
• Multi-vari defined — a graphical tool which, through logical subgrouping, analyzes the effects of categorical Xs on continuous Ys. A method to characterize the baseline capability of a process while either in production mode or via historical data. If in the production mode, the data used in a multi-vari study are collected for a relatively short period of time (2 weeks to 2 months), though the multi-vari study can continue until the full range of the output variable is observed (from low to high). Categorical Xs are typically used in multi-vari analysis.
• Inputs that can be classified as attribute in nature. These types of inputs have levels assigned which are arbitrary in nature (operator A–operator B–operator C, or low–high, or machine 1–machine 2–machine 3).
• We use the results to determine capability, stability, and potential relationships between Xs and Ys.
• Performing a multi-vari
• Step 1: plan the multi-vari
• Identify the major areas of variation.
• Show them on the project organization chart.
• Decide how to take data in order to distinguish these major sources of variation.
• Decide ahead of time how to graph the data so that possible variation will be visible.
• Step 2: take data in order of production (not randomly)
• Data should include the entire range of variation.
• Step 3: take a representative sample (minimum of three) per group
• Step 4: analyze the results
• Is there an area that shows the greatest source of variation?
• Are there cyclic or unexpected nonrandom patterns of variation?
• Are the nonrandom patterns restricted to a single sample or more?
• Are there areas of variation that can be eliminated (e.g., shift-to-shift variation)?
Sampling methods
• Simple random sampling: if all possible samples of n experimental units are equally likely, the procedure to use is a simple random sample.
• Characteristics of simple random sampling:
• Unbiased — every experimental unit has the same chance of being chosen
• Independence — the selection of one experimental unit is not dependent on the selection of another
• Stratified sample: divide the population into homogeneous groups and randomly sample from within each group.
• Cluster sample: divide the population into smaller groups (clusters); then the clusters are randomly sampled.
• Systematic sample: start with a randomly chosen unit and sample every kth unit thereafter.
Sampling plan
A good sampling plan will capture all relevant sources of noise variability:
• Lot-to-lot
• Batch-to-batch
• Different shifts, different operators, different machines
• Sample size rule of thumb: 30
Correlation and simple linear regression
• Overview
• Correlation coefficients
• Correlation and causality
• Scatter plots
• Fitted line plots
• Simple regression
• Correlation is a measure of the strength of association between two quantitative variables (e.g., pressure and yield). Correlation measures the degree of linearity between two variables assumed to be completely independent of each other.
• The correlation coefficient, r, always lies between –1 and +1.
• Comparison of covariances
• Simple regression
• Regression equation
• Fitted line plot
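A minimal sketch of the correlation coefficient and fitted line described above, using hypothetical pressure/yield data; scipy's linregress returns both r and the regression equation.

```python
# A minimal sketch: correlation and a fitted line for hypothetical data.
from scipy.stats import linregress

pressure = [50, 55, 60, 65, 70, 75, 80]        # X values (hypothetical)
yield_pct = [71, 74, 78, 79, 83, 86, 88]       # Y values (hypothetical)

fit = linregress(pressure, yield_pct)
print(f"r = {fit.rvalue:.3f} (always between -1 and +1)")
print(f"fitted line: yield = {fit.intercept:.2f} + {fit.slope:.3f} * pressure")
print(f"r^2 = {fit.rvalue ** 2:.3f} of the variation in yield is explained")
```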

The central limit theorem allows us to assume that the distribution of sample averages will approximate the normal distribution if n is sufficiently large (> 30 for unknown distributions). The central limit theorem also allows us to assume that the distributions of sample averages of a normal population are themselves normal, regardless of sample size. The SE mean shows that as sample size increases, the standard deviation of the sample mean decreases. The standard error will help us calculate confidence intervals (CIs).
Significance of confidence intervals — statistics such as the mean and standard deviation are only estimates of the population parameters (the μs and σs) and are based on only one sample. Because there is variability in these estimates from sample to sample, we can quantify our uncertainty using statistically based CIs. Most of the time, we calculate 95% CIs; however, there is nothing sacred about this particular confidence level. It may be anything. We interpret a 95% CI as follows: approximately 95 out of 100 such CIs will contain the population parameter, or we are 95% certain the population parameter is inside the interval.
Population vs. sample
Comparison of histograms
Parametric CIs
Confidence interval for the mean
What is the t-distribution? The t-distribution is a family of bell-shaped distributions that are dependent on sample size. The smaller the sample size, the wider and flatter the distribution.
CIs for proportions — CIs can also be constructed for the fraction defective (p), where
x = number of defect occurrences
n = sample size
p = x/n = proportion defective in the sample
For cases in which the number defective (x) is at least five and the total number of samples (n) is at least 30, the normal distribution approximation can be used as a shortcut. For other cases, the binomial tables are needed to construct this confidence interval.
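A minimal sketch of both intervals discussed above: a t-based CI for the mean and the normal-approximation CI for a proportion (valid here because x ≥ 5 and n ≥ 30). All sample data are hypothetical.

```python
# A minimal sketch of 95% confidence intervals; sample data are hypothetical.
import math
from scipy.stats import t, norm

# CI for the mean: the t-distribution widens the interval for small samples.
data = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0]
n = len(data)
mean = sum(data) / n
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
half_width = t.ppf(0.975, df=n - 1) * s / math.sqrt(n)   # SE mean = s / sqrt(n)
print(f"mean: {mean:.3f} +/- {half_width:.3f}")

# CI for a proportion (normal approximation; requires x >= 5 and n >= 30).
x, n2 = 12, 200                      # defect occurrences, sample size
p = x / n2
hw = norm.ppf(0.975) * math.sqrt(p * (1 - p) / n2)
print(f"fraction defective: {p:.3f} +/- {hw:.3f}")
```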

HYPOTHESIS TESTING INTRODUCTION
Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys. To help identify the vital few Xs, historical or current data may be sampled.
• Passive: you have either directly sampled your process or have obtained historic sample data.
• Active: you have made a modification to your process and then sampled.
• Statistical testing provides objective solutions to questions which are traditionally answered subjectively.
Hypothesis testing is a stepping stone to ANOVA and DOE.
The null and alternate hypotheses.
The method and the roadmap.
Hypothesis testing answers the practical question of whether there is a real difference between _____ and _____.
Tests of significance.
Significance level (alpha and beta).
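A minimal sketch of such a test, a two-sample t-test on hypothetical shift data, showing how the p-value answers the "real difference" question objectively.

```python
# A minimal sketch of a hypothesis test: is there a real difference between
# shift A and shift B? Data are hypothetical.
from scipy.stats import ttest_ind

shift_a = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1]
shift_b = [4.5, 4.4, 4.6, 4.3, 4.7, 4.4, 4.5]

# H0: the shift means are equal; Ha: they differ. Alpha is the significance level.
alpha = 0.05
stat, p_value = ttest_ind(shift_a, shift_b)
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0 -- a real difference")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")
```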

WEEK 3
Week 1 review
Week 2 review
General questions
Questions, concerns about project
Week 3 potential project deliverables
• Project definition
• Problem description
• Project metrics
• DOE planning
• Inputs list
• DOE planning sheet
• Designed experiments
• Analysis of experiments
• Y = F(X1, X2, X3, …)
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
ANOVA review
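As a companion to the ANOVA review, a minimal one-way ANOVA sketch comparing three machines (hypothetical data); the null hypothesis is that all machine means are equal.

```python
# A minimal one-way ANOVA sketch: do three machines produce the same mean?
# Measurements are hypothetical.
from scipy.stats import f_oneway

machine_1 = [10.2, 10.4, 10.1, 10.3, 10.2]
machine_2 = [10.6, 10.8, 10.7, 10.5, 10.9]
machine_3 = [10.1, 10.0, 10.3, 10.2, 10.1]

f_stat, p_value = f_oneway(machine_1, machine_2, machine_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 indicates at least one machine mean differs.
```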

DOE INTRODUCTION
A systematic set of experiments that permits one to evaluate the effect of one or more factors without concern about extraneous variables or subjective judgments. It begins with the statement of the experimental objective and ends with the reporting of the results. It may often lead to further experimentation. It is the vehicle of the scientific method, giving unambiguous results that can be used for inferring cause and effect.
Full factorial experiments
Analyzing single-factor experiments
One-way analysis continued
Comparing more than two groups
Test of equal variances
Pooled standard deviation
Multiple comparisons
Experimental design selection
Inference space considerations
Strategy of experimentation
• Define the problem
• Establish the objective
• Select the outputs — responses (Ys)
• Select the input factors (Xs)
• Choose the factor levels
• Select the experimental design and sample size
• Collect the data
• Analyze the data
• Draw conclusions
Barriers to effective experimentation
Factor selection — narrowing down the list
• Which factors do we include? The following sources provide insight.
• FMEA/control plans or DCP
• Cause-and-effect matrix
• Multi-vari and hypothesis testing
• Process mapping
• Brainstorming
• Literature review
• Engineering knowledge
• Operator experience
• Scientific theory
• Customer/supplier input
• Global problem solving
Choosing the levels for each factor
• The levels of an input factor are the values of the input factor (X) being examined in the experiment (not to be confused with the output, Y).
• For a quantitative (variables data) factor like temperature: if an experiment is to be conducted at two different temperatures, then the factor temperature has two levels.
• For a qualitative (attributes data) factor like cleanliness: if an experiment is to be conducted using clean and not clean, then the factor cleanliness has two levels.
Selecting the type of experimental design
• Response surface methods
• Full factorials with replication
• Full factorials with repetition
• Full factorials without replication or repetition
• Screening or fractional designs
• One factor at a time (OFAT)
Ensuring internal and external validity
• Internal validity. Randomization of experimental runs “spreads” the noise across the experiment. Blocking ensures noise is part of the experiment and can be directly studied.
• Holding noise variables constant eliminates the effect of that variable but limits broad inferences.
• External validity. Include representative samples from possible noise variables.
• Threats to statistical validity
• Low statistical power: sample size inappropriate.
• Loose measurement systems inflate variability of measurements.
• Random factors in the experimental setting inflate variability of measurement.
• Randomization and sample size prevent threats.
Planning questions
• What is the measurable objective?
• What will it cost?
• How will we determine sample sizes?
• What is our plan for randomization?
• Have we talked to internal customers about this?
• How long will it take?
• How are we going to analyze the data?
• Have we planned a pilot run?
• Where is the proposal?
Performing the experiment
• Document initial information
• Verify measurement systems
• Ensure baseline conditions are included in the experiment
• Make sure clear responsibilities are assigned for proper data collection
• Always perform a pilot run to verify and improve data collection procedures!
• Watch for and record any extraneous sources of variation
• Analyze data promptly and thoroughly
• Graphical
• Descriptive
• Inferential
• Always run one or more verification runs to confirm your results (go from narrow to broad inference)
Final report/general advice
• The planning sheet can be more important than running the experiment.
• Make sure you have tied potential business results to your project.
• Focus on one experiment at a time.
• Do not try to answer all the questions in one study; rely on a sequence of studies.
• Use two-level designs early.
• Spend less than 25% of the budget on the first experiment.
• Always verify results in a follow-up study.
• It is acceptable to abandon an experiment.
• A final report is a must!
• Finally, push the envelope with robust levels, but think of the safety of the people and equipment.
Steps to conduct a full factorial experiment
• Step 1: state the practical problem and objective using a DOE worksheet.
• Step 2: state the factors and levels of interest.
• Step 3: select the appropriate sample size.
• Step 4: create a computer software experimental data sheet with the factors in their respective columns. Randomize the experimental runs in the data sheet.
• Step 5: conduct the experiment.
• Step 6: construct the ANOVA table for the full model, using either a) balanced ANOVA or b) DOE > analyze factorial design.
• Step 7: review the ANOVA table and eliminate effects with p-values above 0.05. Run the reduced model, keeping those effects that are deemed significant.
• Step 8: analyze the residuals of the reduced model to ensure we have a model that fits. Calculate the fits and residuals.
Factorial experiments
• GLM procedure for unbalanced designs
• Residual analysis
• Analyzing the two- and three-way interactions
• Analysis of main effects
• Epsilon-squared
• Orthogonality
• Describe the overall concepts of 2^k factorials
• Create standard order designs
• Design and analyze 2^k factorials using:
• ANOVA
• Effects plots
• Graphs and residual plots
• Advantages of 2^k factorials
• Require relatively few runs per factor studied
• Can be the basis for more complex designs
• Good for early investigations — can look at a large number of factors with relatively few runs
• Lend themselves well to sequential studies
• Analysis is fairly easy
• Standard order of 2^k designs
Calculating the interaction effects — the interaction column is formed by multiplying the columns of the factors involved.
Mixed models (fixed and random factors) permitted; handles ANOVA plus unbalanced or nested designs; used for 2^k, 2^k with center points, and 2^k with blocking; notation is different than in ANOVA procedures.
Steps for conducting a 2^k factorial experiment (the reader will notice that steps 1–6 are the same as those for a full factorial; in fact, we pick up where we left off):
• Step 7: analyze the residual plots to ensure we have a model that fits.
• Step 8: investigate significant interactions (p-value < 0.05). Assess the significance of the highest-order interactions first. For 3-way interactions, unstack the data and analyze.
• Stat > DOE > factorial plots > interaction plot. Once the highest-order interactions are interpreted, analyze the next set of lower-order interactions.
• Step 9: investigate significant main effects (p-value < 0.05).
• Step 10: state the mathematical model obtained. If possible, calculate the epsilon-squared and determine the practical significance.
• Step 11: translate the mathematical model into process terms and formulate conclusions and recommendations.
• Step 12: replicate optimum conditions. Plan the next experiment or institutionalize the change.
How to add center points in your designs
Blocking with 2^k factorials
Confounding and blocking
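A minimal sketch of the 2^3 design and effect calculation described above: the interaction column is the product of the factor columns, and an effect is the average response at +1 minus the average at –1. The eight responses are hypothetical.

```python
# A minimal sketch of a 2^3 full factorial and its effect estimates.
# Responses are hypothetical.
from itertools import product

runs = [dict(zip("ABC", levels)) for levels in product([-1, 1], repeat=3)]
y = [45, 71, 48, 65, 68, 60, 80, 65]          # one response per run

def effect(term):
    # The contrast column is the product of the coded columns named in `term`
    # (e.g., "AB" multiplies columns A and B).
    contrast = []
    for run in runs:
        c = 1
        for factor in term:
            c *= run[factor]
        contrast.append(c)
    plus = [yi for ci, yi in zip(contrast, y) if ci == 1]
    minus = [yi for ci, yi in zip(contrast, y) if ci == -1]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

for term in ["A", "B", "C", "AB", "AC", "BC", "ABC"]:
    print(f"effect({term}) = {effect(term):+.2f}")
```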

WEEK 4
Review week 1
Review week 2
Review week 3
General questions
Questions, concerns about project
Week 4 potential project deliverables
• Project definition
• Project metrics
• Process optimization
• PLEX, EVOP, RSM, multiple regression
• Process controls
• Statistical product monitors
• Statistical process controls
• Document and sustain the gains
• Update FMEA
• Update control plan
• 5S the immediate project area
• Quality manual and related documentation
• Write the final report
• Review of designed experiments
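The statistical process control deliverables above rest on control charts; a minimal X-bar and R sketch follows. The subgroup data are hypothetical, and A2, D3, D4 are the standard chart constants for subgroups of size five.

```python
# A minimal sketch of X-bar and R control limits for subgroups of size 5.
# Subgroup data are hypothetical; A2, D3, D4 are standard SPC constants.
subgroups = [
    [10.1, 10.3, 10.2, 10.0, 10.4],
    [10.2, 10.1, 10.3, 10.2, 10.1],
    [10.0, 10.2, 10.1, 10.3, 10.2],
    [10.3, 10.4, 10.2, 10.1, 10.2],
]
A2, D3, D4 = 0.577, 0.0, 2.114    # constants for n = 5

xbars = [sum(g) / len(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
xbar_bar = sum(xbars) / len(xbars)     # grand average (X-bar chart centerline)
r_bar = sum(ranges) / len(ranges)      # average range (R chart centerline)

print(f"X-bar chart: CL={xbar_bar:.3f}  UCL={xbar_bar + A2 * r_bar:.3f}  "
      f"LCL={xbar_bar - A2 * r_bar:.3f}")
print(f"R chart:     CL={r_bar:.3f}  UCL={D4 * r_bar:.3f}  LCL={D3 * r_bar:.3f}")
```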

FRACTIONAL FACTORIALS
Why do fractional factorial experiments? As the number of factors increases, so does the number of runs: a 2 × 2 factorial = 4 runs; a 2 × 2 × 2 factorial = 8 runs; a 2 × 2 × 2 × 2 factorial = 16 runs; and so on. If the experimenter can assume higher-order interactions are negligible, it is possible to run a fraction of the full factorial and still get good estimates of low-order interactions. The major use of fractional factorials is screening: a relatively large number of factors in a relatively small number of runs. Screening experiments are usually done in the early stages of a process improvement project.
Factorial experiments. Successful factorials are based on: a) the sparsity-of-effects principle and b) the observation that systems are usually driven by main effects and low-order interactions.
Sequential experimentation
Designing a fractional factorial (a sketch of a half-fraction design appears after the PLEX prerequisites below)
What is PLEX? PLEX = PLant EXperimentation; a process-improvement tool for online use in full-scale production; uses simple factorial two-level designs in two or three factors; usually requires several iterations of experimental design, analysis, and interim improvements. The goal is to minimize disruption to production but make big enough changes to quickly see effects on output variables.
• Prerequisites for PLEX
• Good measurement system in place.
• With little or no replicate runs, we want to minimize the effect of measurement error.
• May require repeat measurements.
• Adequate technical supervision to keep the process controlled and monitored.
• Extra attention to safety requirements and to avoiding upsets.
• Stay within the operating region.
• Maintain environmental controls.
• Cooperation of several functions required.
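As promised above, a minimal sketch of the half-fraction idea: a 2^(3–1) screening design built from the defining relation I = ABC, trading eight runs for four at the cost of aliasing.

```python
# A minimal sketch of a half fraction of a 2^3 design (a 2^(3-1) screening
# design) using the defining relation I = ABC, i.e., the generator C = AB.
from itertools import product

half_fraction = []
for a, b in product([-1, 1], repeat=2):
    c = a * b                      # generator: column C = A * B
    half_fraction.append((a, b, c))

for run in half_fraction:
    print(run)
# Aliasing under I = ABC: A = BC, B = AC, C = AB. Usable only when
# higher-order interactions can be assumed negligible, as noted above.
```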

• Why and when do we use PLEX?
• Strong need to increase and/or improve production.
• May have a sold-out product line.
• Product line may have poor process capability.
• Offline studies (lab or pilot scale) are not practical or meaningful.
• Key process input variables (Xs) are not well determined, but we have the resources only to investigate a few at a time. A series of factorial experiments is required.
• Beware: interactions may be obscured.
• Would like to “optimize” (or reoptimize) the process while in production mode.
PLEX process improvement roadmap
• Form the process improvement team.
• Assess the measurement system, e.g., gauge R&R.
• Identify Xs and Ys, e.g., multi-vari, cause and effect, FMEA.
• Choose two to four factors for the first DOE.
• Choose safe operating ranges for each factor. Ranges should be wide enough to reasonably see an active effect with no replication.
• Set up a 2^k factorial design with optional, but recommended, center points.
• Consider repeating one or more conditions. One approach is to run the center point at the beginning, middle, and end of the design as a check for process drift or capability.
• Prior to running the design, look at each treatment combination to see if there is a potential failure mode or unsafe condition.
• Set up the sampling plan.
• Plan for technical supervision to minimize upset potential.
• Randomize the order of running, if practical. Otherwise, choose a run sequence that reduces the number of changes.
• Run each process condition long enough to achieve steady state.
• Return to standard conditions until DOE results are analyzed.
• Based upon results, suggest interim process changes, subsequent DOEs, or small confirmatory studies.
• Continue until all Xs are investigated and the process is optimized.
EVOP — EVolutionary OPerations
• What is EVOP? A process-improvement tool used while a process is running in the production mode for the optimization of plant performance; a method that uses 2² or 2³ factorials with replicates and center points; empowers operators to conduct the experiment with minimal engineering support during normal operations. Each experimental run is called a cycle; one cycle is one pass through the settings (0,0) => (1,1) => (1,–1) => (–1,–1) => (–1,1). Eliminate randomization to minimize disruption, and document effect estimates at the end of each cycle. Cycling continues in the hopes of collecting “sufficient evidence” of
significant change in the Y for the various levels of X. Each set of cycles is called a phase. When enough data are collected through cycles in which a state of improved operations is identified, a phase is set to be completed. The results of each phase determine the new settings for subsequent phases of EVOP. Continue phases until the X settings are optimized. Data from the phases estimate a “response surface.”
• Why use EVOP? The goal is to establish the settings of x1, x2, x3, … in the mathematical relationship Y = f(x1, x2, x3, …) so as to optimize the process; it provides information on process optimization with minor interruption to production; it empowers operators and manufacturing personnel and is a cost-effective way to employ continual improvement.
• How to apply EVOP:
• Step 1: what is the problem to be solved?
• Step 2: establish the experimental strategy.
• Define the Ys/Xs to be studied.
• Select variable settings for phase I.
• Determine the maximum number of cycles for phase I.
• Step 3: collect and analyze data during phase I; display on an information board to determine steps for phase II.
• Step 4: repeat steps 2 and 3 for successive phases.
• Step 5: implement optimal settings for the Xs as S.O.P.
• Step 6: rerun EVOP every 6 months to ensure optimal settings are maintained.
Response surface methodology (RSM)
• What is RSM? Once significant factors are determined, RSM leads the experimenter rapidly and efficiently to the general area of the optimum settings (usually using a linear model). The ultimate RSM objective is to determine the optimum operating conditions for the system or to determine a region of the factor space in which the operating specifications are satisfied (usually using a second-order model). Furthermore, response surfaces are used to optimize the results of a full factorial DOE and create a second-order model if necessary. Therefore, RSM is good for a) determining average output parameters as functions of input parameters and b) process and product design optimization.
• Response surface: the surface represented by the expected value of an output modeled as a function of the significant inputs (variable inputs only): expected(Y) = f(x1, x2, x3, …, xn)
• Method of steepest ascent or descent: a procedure for moving sequentially along the direction of the maximum increase (steepest ascent) or maximum decrease (steepest descent) of the response variable using the following first-order model:
• Y (predicted) = b0 + Σ bi xi
• Region of curvature: the region where one or more of the significant inputs will no longer conform to the first-order model. Once in this region of operation, most responses can be modeled using the following fitted second-order model:
• Y (predicted) = b0 + Σ bi xi + Σ bii xi² + Σ bij xi xj
• Central composite design: a common DOE matrix used to establish a valid second-order model
• Coded variables: variables that are assigned arbitrary levels in a DOE study (–1, 1, A, B)
• Uncoded variables: variables that are assigned process-specific levels in an RSM study (10V, 20V)
Regression
• Regression and correlation
• Use correlation to measure the strength of linear association between two variables, especially when one variable does not depend on the other.
• Use correlation to benchmark equipment against a standard or another similar piece of equipment.
• Use regression to predict one variable from another (it may be easier and more cost-efficient).
• Use regression to provide evidence that key input variables explain the variation in the response variable or to determine whether different input variables are related to one another.
Correlation limitations
• Correlation explores linear association. It does not imply a cause-and-effect relationship.
• Two variables may be perfectly related in a manner other than linear, and the correlation coefficient will be close to zero. For example, the relationship could be curvilinear. This emphasizes the importance of plots.
• The linear association between two variables may be due to a third variable not under consideration. Sound judgment and scientific knowledge are necessary to interpret the results and validity of correlation analysis.
• Some statisticians argue that correlation analysis should only be used when no dependency exists, i.e., when it is not clear which variable depends on the other.
• In correlation analysis, it is assumed that both the X and Y variables are random, i.e., X is not fixed to study the dependency of Y.
Linear regression uses — regression quantifies the relationship between a response variable and one or more predictor variables. Four general uses are:
• Prediction: the model is used to predict the response variable of interest, especially when this response is difficult or expensive to measure. Emphasis is not given to capturing the role of each input variable with strict preciseness.
• Variable screening: the model is used to detect the importance of each input variable in explaining the variation in the response. Important variables are kept for further study.
• System explanation: the model is used to explain how a system works. Finding the specific role of each input variable is essential in this case.
Various models that define different roles for the inputs are typically in competition.
• Parameter estimation: the model is used primarily to find specific ranges, sizes, and magnitudes of the regression coefficients.
Linear regression assumptions
Simple regression — fitted line plot
Interpreting the output
Regression — residual plots
Simple polynomial regression
Interpreting the results
Assessing the predictive power of the model
Matrix plots — scatter plots with many Xs
Correlation with many Xs
The output — R²
Coefficient of determination (r²)
Multiple regression — beware of multicollinearity
When to use multiple regression — when process or noise input variables are continuous and the output is continuous, multiple regression can be used to investigate the relationship between the Xs (process and/or noise) and the Ys.
Three types of multiple regression
What is a quality system? A quality system is an organization’s agreed-upon method of doing business. It is not to be confused with a set of documents that are meant to satisfy an outside auditing organization (e.g., ISO 900x). This means a quality system represents the actions, not the written words, of an organization. Typical elements of a quality system are:
• Quality policy
• Organization for quality (does not mean the quality department!)
• Management review of quality
• Quality planning (how to launch and control products and processes)
• Design control
• Data control
• Purchasing
• Approval of materials for ongoing production
• Evaluation of suppliers
• Verification of purchased product (does not mean incoming inspection!)
• Product identification and traceability
• Process control
• Government safety and environmental regulations
• Designation of special characteristics
• Preventative maintenance
• Process monitoring and operator instructions
• Preliminary capability studies (how to turn on a process)
• Ongoing process performance requirements (how to run a process)
• Verification of setups
• Inspection and testing

• Control of inspection, measuring, and test equipment
• Calibration
• Measurement system analysis
• Control of nonconforming product
• Corrective and preventative action
• Handling, storage, packaging, preservation, and delivery
• Control of quality audits (do we do what we say we do?)
• Training
• Service
• Use of statistical techniques
Aspects of control
Quality systems = how we manage
Evolution of management style
• First generation: management by doing — this is the first, simplest, most primitive approach: just do it yourself. We still use it. “I’ll take care of it.” It is an effective way to get something done, but its capability is limited.
• Second generation: management by directing — people found that they could expand their capacity by telling others exactly what to do and how to do it: a master craftsman giving detailed directions to apprentices. This approach allows an expert to leverage his or her time by getting others to do some of the work, and it maintains strict compliance with the expert’s standards.
• Third generation: management by results — people get tired of you telling them every detail of how to do their jobs and say “Just tell me what you want by when, and leave it up to me to figure out how to do it.” So you say, “OK, reduce inventories by 20% this year. I’ll reward or punish you based on how well you do. Good luck.”
All three approaches have appropriate applications in today’s organizations. Are they being used appropriately?
• Third generation sounds logical. Its approach is widely taught and used and is appropriate where departmental objectives have little impact on other parts of the organization.
• Third generation has serious, largely unrecognized flaws we can no longer afford. For example, we all want better figures: higher sales, lower costs, faster cycle times, lower absenteeism, lower inventory. How do we get better figures?
• Improve the system. Make fundamental changes that improve quality, prevent errors, and reduce waste. For example, reducing in-process inventory by increasing the reliability of operations.
• Distort the system. Get the demanded results at the expense of other results. “You want lower inventories? No problem!” Inventories miraculously disappear — but schedule, delivery, and quality suffer. Expediting and premium freight go up. Purchasing says, “You want lower costs? No problem!” Purchase price goes down, saving the company millions, but it never shows up on the bottom line. Manufacturing


struggles with the new parts, increasing rework and overtime. Quality suffers…
• Distort the figures. Use creative accounting. “Oh, we don’t count those as inventory anymore… that material is now on consignment from our supplier.” The basic system did not change.
Control methods agenda
Integrating with lean manufacturing
Ranking control methods (the strategy)
Types of control methods
Product vs. process
Automatic vs. manual
Control plan
Control methods are a form of Kaizen
Control methods
• SPC
• SOP
• Type III corrective action = inspection: implementation of a short-term containment action that is likely to detect the defect caused by the error condition. Containments are typically audits or 100% inspection.
• Type II corrective action = flag: improvement made to the process that will detect when the error condition has occurred. This flag will shut down the equipment so that the defect will not move forward.
• Type I corrective action = countermeasure: improvement made to the process that will eliminate the error condition from occurring. The defect will never be created. This is also referred to as a long-term corrective action in the form of mistake-proofing or design changes.
• Product monitoring SPC techniques (on Ys)
• Precontrol (manual or automatic)
• X-bar and R or X and MR charts (manual or automatic)
• P and np charts (manual or automatic)
• c and u charts (manual or automatic)
• Process control SPC techniques (on Xs)
• Mistake-proofing (automatic)
• X-bar and R or X and MR (manual or automatic)
• EWMA (automatic)
• Cusum (automatic)
• Realistic tolerancing (manual or automatic)
The control plan is a living document that is used to document all your process control methods. It is a written description of the systems for controlling parts and processes (or services). The control plan, because it is a living document, should be updated to reflect the addition or deletion of controls based on experience gained by producing parts (or providing services).
The immediate goal of the quality system (QS): During the control phase of the QS methodology:


• The team should 5S the project area.
• The team should develop standardized work instructions.
• The team should understand and assist with the implementation of process and product control systems.
• The team should document all of the above and live by what they have documented.
The long-term vision of the quality system — the company and all of its suppliers have a quality system that governs the ways in which products and services are bought, sold, and produced.
• The company should be 5S in all areas.
• The company should develop standardized work instructions and procedures.
• The company should understand and assist with the implementation of process and product control systems.
• The company should document all of the above and live by what they have documented.
Introduction to statistical process control
What is statistical process control (SPC)?
SPC as a control method
The goal and methodology
Advantages and disadvantages
Components of an SPC control chart
Where to use SPC charts
How to implement SPC charts
Types of control charts and examples

SPC FLOWCHART
Class exercise
Introduction to SPC
• SPC is the basic tool for studying variation and using statistical signals to monitor and improve process performance. This tool can be applied to any area: manufacturing, finance, sales, etc. Most companies perform SPC on finished goods (Ys), rather than process characteristics (Xs). The first step is to use statistical techniques to control our company’s outputs. It is not until we focus our efforts on controlling those inputs (Xs) that control our outputs (Ys) that we realize the full gain of our efforts to increase quality and productivity and lower costs.
• What is SPC? All processes have natural variability (due to common causes) and unnatural variability (due to special causes). We use SPC to monitor and/or improve our processes. Use of SPC allows us to detect special cause variation through out-of-control signals. These out-of-control signals cannot tell us why the process is out of control, only that it is. Control charts are the means through which process and product parameters are tracked statistically over time. Control charts incorporate upper and lower control limits that reflect the natural limits


of random variability in the process. These limits should not be compared to customer specification limits. Based on statistical principles, control charts allow for the identification of unnatural (nonrandom) patterns in process variables. When the control chart signals a nonrandom pattern, we know special cause variation has changed the process. The actions we take to correct nonrandom patterns in control charts are the keys to successful SPC usage. Control limits are based on establishing ± 3 sigma limits for the Y or X being measured.
Process improvement and control charts
Benefits of control chart systems
• Proven technique for improving productivity
• Effective in defect prevention
• Prevents unnecessary process adjustments
• Provides diagnostic information
• Provides information about process capability
Control chart roadmap
• Select the appropriate variable to control.
• Select the data-collection point. (Note: if the variable cannot be measured directly, a surrogate variable can be identified.)
• Select type of control chart.
• Establish basis for rational subgrouping.
• Determine appropriate sample size and frequency.
• Determine measurement method/criteria.
• Determine gauge capability.
• Perform initial capability study to establish trial control limits.
• Set up forms for collecting and charting data.
• Develop procedures for collection, charting, analyzing, and acting on information.
• Train personnel.
• Institutionalize the charting process.
Control chart types
There are many types of control charts; however, the underlying principles of each are the same. The proper type is chosen utilizing knowledge of both SPC and your process objectives. The chart type selection depends on:
• Data type: attribute vs. variable
• Ease of sampling; homogeneity of samples
• Distribution of data: normal or non-normal?
• Subgroup size: constant or variable?
• Other considerations
Control charts for variables data
Control charts for attribute data
Analysis of patterns on control charts
• One point outside the three-sigma limit
• Two of three outside the two-sigma limit
• Four of five outside the one-sigma limit


• Cycles
• Trend
• Stratification
• Seven consecutive points on one side of the center line
Advantages of control chart systems:
• Proven technique for improving productivity.
• Effective in defect prevention.
• Prevent unnecessary process adjustments.
• Provide diagnostic information.
• Provide information about process capability.
• Can be used for both attribute and variable data types.
Disadvantages of control chart systems:
• Everyone must be well trained and periodically retrained.
• Data must be gathered correctly.
• Mean and range/standard deviation must be calculated correctly.
• Data must be charted correctly.
• Charts must be analyzed correctly.
• Reactions to patterns in charts must be appropriate — every time!
Precontrol charts — traditionally, precontrol has been perceived as an ineffective tool, and most quality practitioners still remain skeptical of its benefits. This view originated due to the fact that the limits of the three precontrol regions are commonly calculated based on the process specifications, thus resulting in overreactions and inducing more variability into a process instead of reducing it. In the Six Sigma breakthrough strategy, precontrol is implemented after the improve phase. The zones are calculated based on the process after improvements are made, so its distribution is narrow and tight compared to the specification band. Specification limits are not used in calculating these zones, so we encounter units in the yellow or red zones before actual defects are produced.
Where to use SPC charts:
• When a mistake-proofing device is not feasible
• Identify processes with high RPNs from the FMEA
• Evaluate the “current controls” column of the FMEA to determine the gaps in the control plan. Does SPC make sense?
• Identify processes that are critical based on DOEs
• Place charts only where necessary based on project scope. If a chart has been implemented, do not hesitate to remove it if it is not value-added.
• Initially, the process outputs may need to be monitored. The goal: monitor and control process inputs and, over time, eliminate the need for SPC charts.
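To make the control-limit mechanics above concrete, here is a minimal sketch of the X-bar and R calculations in Python; the subgroup data are hypothetical, and A2, D3, and D4 are the standard Shewhart factors for subgroups of size five.

import statistics

subgroups = [
    [10.2, 9.8, 10.1, 10.0, 9.9],   # hypothetical measurements, n = 5
    [10.4, 10.1, 9.7, 10.2, 10.0],
    [9.9, 10.0, 10.3, 9.8, 10.1],
]
A2, D3, D4 = 0.577, 0.0, 2.114      # Shewhart factors for n = 5

xbars = [statistics.mean(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
xbarbar, rbar = statistics.mean(xbars), statistics.mean(ranges)

# Limits reflect natural process variation, never the spec limits.
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

for i, x in enumerate(xbars, 1):
    if not (lcl_x <= x <= ucl_x):
        print(f"Subgroup {i}: out-of-control signal on the X-bar chart")

In practice the trial limits would be computed from 25 or more subgroups; three are shown only to keep the sketch short.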

Pareto
Histogram
Cause-and-effect diagram
Interpreting the results
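As a quick, hypothetical illustration of the Pareto analysis named above, this sketch ranks defect categories and accumulates their percentage contribution, which is the arithmetic behind a Pareto chart.

defects = {"scratch": 48, "misalignment": 21, "porosity": 9,
           "wrong label": 7, "other": 5}          # hypothetical counts

total = sum(defects.values())
cumulative = 0.0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:<14} {count:>3}  cumulative {cumulative:5.1f}%")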


Definition of lean manufacturing — a systematic approach to manufacturing which is based on the premise that anywhere work is being done, waste is being generated. A vehicle through which organizations can identify and reduce waste. A manufacturing methodology that will facilitate and foster a living quality system. The goal of lean manufacturing is total elimination of waste.
Poka-yoke (mistake-proofing)
Planning for waste elimination
• Establish “permanent” control to prevent its recurrence
• The vision: continuous elimination of waste
Moving from the traditional approach to the lean vision:
• Infrequent setups and long runs → quick setups and short runs
• Functional focus → product focus
• If it ain’t broke, don’t fix it → fix it so it does not break
• Specialized workers, engineers, and leaders → multifunctionally skilled people
• Good enough → never good enough, continual improvement
• Run it, repair it → do it right the first time
• Layoff → new opportunities
• Management directs → leaders teach
• Penalize mistakes → retrain
• Make the schedule → make quality a priority
There are seven elements of waste; they are waste of:
• Correction
• Overproduction
• Processing
• Conveyance
• Inventory
• Motion
• Waiting
The first step toward waste elimination is identifying it. Black belt projects should focus efforts on one or more of these areas.
5S workplace organization — to ensure your gains are sustainable, you must start with a firm foundation. 5S standards are the foundation that supports all the phases of lean manufacturing. The system can only be as strong as the foundation it is built on. The foundation of a production system is a clean and safe work environment. Its strength is contingent upon the employee and company commitment


to maintaining it. (As a black belt you set the goals high and accept nothing less. Each operator must understand that maintaining these standards is a condition of their employment.)
Foundation of lean manufacturing — 5S overview
1. Sorting (decide what is needed). To sort out necessary and unnecessary items. To store oft-used items in the work area, store infrequently used items away from the work area, and dispose of items that are not needed.
2. Storage (arrange needed items; straighten up the workplace). To arrange all necessary items. To have a designated place for everything: a place for everything and everything in its place.
3. Shining (sweep and cleanliness). To keep your area clean on a continuing basis.
4. Standardize. To maintain the workplace at a level that uncovers and makes problems obvious. To continuously improve the plant by continuous assessment and actions.
5. Sustaining (training and disciplined culture). To maintain our discipline, we need to practice and repeat until it becomes a way of life.
Benefits of 5S implementation
• A cleaner workplace is a safer workplace.
• Contributes to how we feel about our product, process, our company, and ourselves.
• Provides a customer showcase to promote our business.
• Product quality and especially contaminants will improve.
• Efficiency will increase.
Some 5S focusing tools
• “Red tag” technique (visual clearing up). This is a vital clearing-up technique. As soon as a potentially unnecessary item is identified, it is marked with a red tag so that anybody can see clearly what may be eliminated or moved. The use of red tags can be one secret to a company’s survival, because it is a visible way to identify what is not needed in the workplace. Red tags ask why an item is in a given location and support the first “S” — sort. Tips for tagging:
• We all tend to look at items as personal possessions. They are company possessions. We are the caretakers of the items.
• An outsider can take the lead in red tagging. Plant people take advantage of these “fresh eyes” by creating an atmosphere where they will feel comfortable in questioning what is needed.
• Tag anything not needed. One exception: do not red tag people unless you want to be red tagged yourself!
• If in doubt, tag it!
• Before and after photographs
• Improve area by area, each one completely
• Clear responsibilities
• Daily cross-department tours


• Schedule all critical customers to visit
• Regular assessments and “radar” metrics
• Red tag technique. The red tag technique involves the following steps:
1. Establish the rules for distinguishing between what is needed and what is not.
2. Identify needed and unneeded items and attach red tags to all potentially unneeded items. Write the specific reason for red tagging and sign and date each tag.
3. Remove red tag items and temporarily store them in an identified holding area.
4. Sort through the red tag items; dispose of those that are truly superfluous. Other items can be eliminated at an agreed interval when it is clear that they have no use. Ensure that all stakeholders agree.
5. Determine ways to improve the workplace so that unnecessary items do not accumulate.
6. Continue to red tag regularly.
Standardized work — the one best way to perform each operation, identified and agreed upon through general consensus (not majority rule). This becomes the standard work procedure. The affected employees should understand that once they have defined the standard, they will be expected to perform the job according to that standard. It is imperative that we all understand the notion: variation = defects. Standardized work leads to reduced variation.
Prerequisites for standardized work
Standardized workflow
Kaizen — continual improvement. The philosophy of incremental continual improvement, that every process can and should be continually evaluated and improved in terms of time required, resources used, resultant quality, and other aspects relevant to the process. The BB’s job, simply stated, is focused Kaizen. Our methodology for Kaizen is the Six Sigma breakthrough strategy — DMAIC. Control is only sustained long term when the 5Ss and standardized work are in place.
• Kaizen rules
• Keep an open mind to change
• Maintain a positive attitude
• Never leave in silent disagreement
• Create a blameless environment
• Practice mutual respect every day
• Treat others as you want to be treated
• One person, one vote — no position, no rank
• No such thing as a dumb question
• Understand the thought process and then the Kaizen elements
• Takt time
• Cycle time
• Work sequence


• Standard WIP
• Takt time determination
• Kaizen process steps
• Step 1. Create flowchart with parts and subassemblies.
• Step 2. Calculate takt time = net available time ÷ customer demand (a worked sketch follows below).
• Step 3. Measure each operation — each assembly and subassembly as they are. To the extent an operator has to go to an assembly for something, measure walk time. Establish a baseline using time observation forms; note any setup time.
• Step 4. Do a baseline standard work flow chart (should look like a spaghetti chart).
• Step 5. Do a baseline percent loading chart. Review for each operator where the waste and walk time is. Look at this in close relationship to the process.
• Step 6. Review the 5Ss.
• Step 7. Consolidate, accumulate jobs to get them as close to takt time as possible. Work with the operators.
• Step 8. Observe, measure, and modify the new flow process. This should be a one-piece flow process if we are producing to takt time.
• Step 9. Complete the one-piece flow process and redo all baseline charts (you may consider overlaying these new results on top of the older data to display the improvement). Make a list of things to complete.
• Step 10. Prepare presentation, share results.
Kaizen presentation guidelines
• Prepare overheads or a slide show for a 20-minute presentation
• Ensure your presentation includes all of the Kaizen steps
• Use whatever props or other devices to best explain your achievement
• Include 10 minutes for Q and A
• Each team member should/must participate in the presentation
• Management needs to see and hear about the results of the team’s success
JIT concepts (just in time)
Kanban — a pull inventory system
Poka-yoke — a methodology that helps build quality into the product and allows only good product to go to the next operator or customer. It focuses on the elimination of human errors. Key elements of mistake-proofing:
• Distinction between error and defect
• Source inspection
• 100% inspection
• Immediate action
• “Red flag” conditions
• Control/feedback logic
• Guidelines for mistake-proofing
Mistake-proofing strategies
• Do not make surplus products (high inventory makes poor quality difficult to see)


• Eliminate, simplify, or combine operations
• Use a transfer rather than process batch strategy
• Involve everyone in error and defect prevention (standard practices, daily improvements, and mistake-proofing)
• Create an environment that emphasizes quality work, promotes involvement and creativity, and strives for continual improvement
Advantages of mistake-proofing
• No formal training programs required
• Eliminates many inspection operations
• Relieves operators from repetitive tasks
• Promotes creativity and value-adding operations
• Contributes to defect-free work
• Effectively provides 100% internal inspection without the associated problems of human fatigue and error
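Because several of the Kaizen steps above hinge on takt time (net available time divided by customer demand; see Step 2), here is a minimal sketch with hypothetical numbers.

shift_minutes = 480        # one 8-hour shift
planned_downtime = 30      # breaks, meetings
demand_units = 225         # customer requirement per shift

net_available = shift_minutes - planned_downtime     # 450 minutes
takt = net_available / demand_units                  # 2.0 minutes per unit
print(f"Takt time: {takt:.1f} minutes per unit")

# Any operation whose cycle time exceeds takt cannot produce to demand.
cycle_times = {"assemble": 1.8, "test": 2.3, "pack": 1.1}   # hypothetical
for op, ct in cycle_times.items():
    if ct > takt:
        print(f"{op}: cycle time {ct} min exceeds takt; rebalance or improve")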

CONTROL PLANS

A control plan is a logical, systematic approach for finding and correcting root causes of out-of-control conditions and will be a valuable tool for process improvement. A key advantage of the reaction plan form is its use as a troubleshooting guide for operators. A systematic guide of what to look for during upset conditions is valuable on its own. Key items of concern are:
• What elements make up a control plan?
• Why should we bother with them?
• Who contributes to their preparation?
• How do we develop one?
• When do we update them?
• Where should the plan reside?
Control plan strategy
• Operate our processes consistently on target with minimum variation.
• Minimize process tampering (overadjustment).
• Assure that the process improvements that have been identified and implemented become institutionalized. ISO 9000 can assist here.
• Provide for adequate training in all procedures.
• Include required maintenance schedules.
• Factors impacting a good control plan.
Control plan components
• Process map steps
• Key process output variables, targets, and specs
• Key and critical process input variables with appropriate working tolerances and control limits
• Important noise variables (uncontrollable inputs)
• Short- and long-term capability analysis results
• Designated control methods, tools, and systems
• SPC


• Automated process control
• Checklists
• Mistake-proofing systems
• Standard operating procedures
• Workmanship standards
Documenting the control plan
• FMEA
• Cause-and-effect matrix
• Process map
• Multi-vari studies
• DOE
Reaction plan and procedures
• Control methods identify the person responsible for control of each critical variable and details about how to react to out-of-control conditions.
• Control methods include a training plan and process auditing system, e.g., ISO 9000.
• Complicated methods can be referenced by document number and location; changes in the process require changes to the control method.
• Actions should be the responsibility of people closest to the process.
• The reaction plan can simply refer to an SOP and identify the person responsible for the reaction procedure.
• In all cases, suspect or nonconforming product must be clearly identified and quarantined.
Questions for control plan evaluation. Key process input variables (Xs):
• How are they monitored?
• How often are they verified?
• Are optimum target values and specifications known?
• How much variation is there around the target value?
• What causes the variation in the X?
• How often is the X out of control?
• Which Xs should have control charts?
• Uncontrollable (noise) inputs. What are they? Are they impossible or impractical to control? Do we know how to compensate for changes in them? How robust is the system to noise?
• Standard operating procedures — do they exist? Are they simple and understood? Are they being followed? Are they current?
• Is operator training performed and documented?
• Is there a process audit schedule?
Maintenance procedures
• Have critical components been identified?
• Does the schedule specify who, what, and when?
• Where are the manufacturer’s instructions?
• Do we have a troubleshooting guide?
• What are the training requirements for maintenance?
• What special equipment is needed for measurement? What is the measurement capability?


• Who does the measurement? How often is a measurement taken? How are routine data recorded?
• Who plots the control chart (if one is used) and interprets the information?
• What key procedures are required to maintain control?
• What is done with product that is off spec?
• How is the process routinely audited?
• Who makes the audit? How often? How is it recorded?
Control plan checklist
• Documentation package
• Sustaining the gains
Issues in transitioning a project
• Assure your project is complete enough to transition.
• No loose ends — have at least a plan (project action plan) for everything not finalized.
• Start early in your project to plan for transitioning.
• Identify team members at the start of the project.
• Remind them they are representatives of a larger group.
• Communicate regularly with people within the impacted area and those outside it whom the changes may affect.
• Display, update, and communicate your project results in the impacted area during all phases. Remember: no surprises; seek buy-in during all phases.
• Hold regular updates with the impacted area, assuring their concerns are considered by your team.
• When possible, get others involved to help; you are not a one-person show and do not have all the answers.
• Use data collection.
• Idea generation (brainstorming events).
• Create buy-in with the entire workcell/targeted area.
• Project action plan.
Project action plan (suggested format)
• Sustaining the gain.
• Changes must be permanent.
• Changes must be built into the daily routine.
• A sampling plan and measurement system must be established for monitoring.
• Responsibilities must be clear, accepted, and, if necessary, built into roles and responsibilities.
• Develop and update procedures.
• Train all involved.
• Action plan solidified and agreed upon.
Sustaining the gain — product changes
• Revise drawings by submitting EARs
• Work with process, test, and product engineers


Process changes
• Physically change the process flow (5S the project area).
• Develop visual indicators.
• Establish or buy new equipment to aid assembly or test.
• Poka-yoke wherever possible, including forms.
• Procedures (standardized work instructions).
• Develop new procedures or revise existing ones.
• Notify quality assurance of the new procedure to incorporate in internal audits.
• Provide QA a copy of standardized work instructions.
• Measurements (visual indicators).
• Build the posting of key metric updates into the process.
• Make it part of someone’s regular job to do timely updates.
• Make it someone’s job to review the metric and take action when needed.
• Training.
• Train everyone in the new process (do not leave until there is full understanding).
Aspects of control
• Benchmarks for world-class performance.
• Quality improvement rate of 68% per year.
• Productivity improvement rate of 2% per month.
• Lead time is less than ten times the value-added time.
• Continuous improvement culture.
• Total employee involvement.
• Reward and recognition.
• Celebration.
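One way to read the first benchmark above, as an interpretation since the rate can be defined several ways, is that a 68% per-year defect reduction compounds to roughly a tenfold improvement every two years: (1 – 0.68)² = 0.32² ≈ 0.10.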


11 Six Sigma for Green Belts

The intent of this training in the implementation process of Six Sigma is to familiarize the individuals who are about to assist the black belts in resolving projects that will improve customer satisfaction and the financial position of the organization. To be sure, this is a more intensive training than that of the orientation, as the material begins to be more technical in nature and more specific as to the tools and their applications. After all, the green belt is expected to actually do the work under the direct supervision of the black belt. The green belt needs to know not only why something is being done (elementary level) and how to do it, but also how it applies to his specific job. It is often suggested that simple simulated exercises may be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may involve defining a process and improving that process; providing five to ten operational definitions in that process; working with variable and attribute data; calculating the DPO; working with histograms, box plots, scatter plots, Pareto charts, and DOE setups; running the experiment with the aid of software; and several others. Because organizations and their goals are quite different, we will provide the reader with a suggested outline of the training material for this green belt session. It should last 5 days and be taught by a black belt. The level of difficulty depends on the participants. Detailed information may be drawn from the first six volumes of this series.
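For the DPO exercise mentioned above, the sketch below works the standard defect metrics; the unit, defect, and opportunity counts are hypothetical.

units = 500
defects = 38
opportunities_per_unit = 12     # hypothetical count of defect opportunities

dpu = defects / units                                # defects per unit
dpo = defects / (units * opportunities_per_unit)     # defects per opportunity
dpmo = dpo * 1_000_000                               # defects per million opportunities
print(f"DPU = {dpu:.3f}, DPO = {dpo:.5f}, DPMO = {dpmo:.0f}")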

INSTRUCTIONAL OBJECTIVES — GREEN BELT
RECOGNIZE
Customer Focus
• Provide a definition of the term customer satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Provide examples of the y and x terms in the expression y = f(x).
• Interpret the expression y = f(x).
Business Metrics
• State at least three problems (or severe limitations) inherent in the current cost-of-quality (COQ) theory.
• Define the nature of a performance metric.


• Identify the driving need for performance metrics.
• Explain the benefit of plotting performance metrics on a log scale.
• Provide a listing of at least six key performance metrics.
• Identify and define the principal categories associated with quality costs.
• Compute the COQ given the necessary background data (see the sketch after this list).
• Provide a detailed explanation of how a defect can impact the classical COQ categories.
• Identify the fundamental contents of a performance metrics manual.
• Recognize the benefits of a metrics manual.
• Understand the purpose and benefits of improvement curves.
• Explain how a performance metric improvement curve is used.
• Explain what is meant by the phrase Six Sigma rate of improvement.
• Explain why a Six Sigma improvement curve can create a level playing field across an organization.
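As a minimal sketch of the COQ computation flagged above, the four principal quality-cost categories are summed and expressed as a percentage of sales; every dollar figure is hypothetical.

coq_categories = {
    "prevention": 120_000,         # training, planning, mistake-proofing
    "appraisal": 240_000,          # inspection, test, audits
    "internal_failure": 410_000,   # scrap, rework, retest
    "external_failure": 530_000,   # warranty, returns, complaints
}
sales = 25_000_000                 # hypothetical annual sales

coq = sum(coq_categories.values())
print(f"COQ = ${coq:,} ({100 * coq / sales:.1f}% of sales)")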

Six Sigma Fundamentals
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Identify the parts-per-million defect goal of Six Sigma.
• Recognize that defects arise from variation.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four sigma business.
• Understand why inspection and test is nonvalue-added to a business and serves as a roadblock for achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify.
• Understand that work in process (WIP) is highly correlated to the rate of defects.
• Rationalize the statement “The highest-quality producer is the lowest-cost producer.”
• Understand that global benchmarking has consistently revealed four sigma as average while best-in-class is near the Six Sigma region.
• Draw first-order conclusions when given a global benchmarking chart.
• State the general findings that tend to characterize or profile a four sigma organization.
• Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart.
• Provide a qualitative definition and graphical interpretation of standard deviation.
• Understand the driving need for breakthrough improvement vs. continual improvement.


• Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control).
• Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough.
• Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from the point of view of quality, cost, and cycle-time.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Define the magnitude of difference between three, four, five, and Six Sigma.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Understand the difference between the idea of benchmark, baseline, and entitlement cycle time.
• Provide a brief description for the outcome 1 – Y.rt.
• Recognize that the quantity 1 + (1 – Y.rt) represents the number of units that must be produced to extract one good unit from a process.
• Describe what is meant by the term mean time before failure (MTBF).
• Interpret the temporal failure pattern of a product using the classical bathtub reliability curve.
• Explain how process capability impacts the pattern of failure inherent in the infant mortality rate.
• Provide a rational definition of the term latent defect and explain how such a defect can impact product reliability.
• Explain how defects produced during manufacture influence product reliability, which, in turn, influences customer satisfaction.
• Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure.
• Recognize that the sigma scale of measure is at the opportunity level, not at the system level.
• Interpret an array of sigma benchmarking charts.
• Provide a brief description of the five sigma wall, what it is, why it exists, and how to get over it.
• Define the two primary components of process breakthrough.
• Provide a synopsis of what a statistically designed experiment is and what role it plays during the improvement phase of breakthrough.
• Understand that the term sigma is a performance metric that only applies at the opportunity level.
• Understand the role of questions in the context of management leadership.
• Define the three primary sources of variation in a product.
• Describe the general methodologies that are required to progress through the hierarchy of quality improvement.
• Understand the key success factors related to the attainment of Six Sigma.


• Understand the basic elements of a sigma benchmarking chart.
• Interpret a data point plotted on a sigma benchmarking chart (a conversion sketch follows this list).
• Explain how the sigma scale of measure could be employed for purposes of strategic planning.
• Understand how a Six Sigma product without a market will fail, while a Six Sigma product in a viable market is virtually certain to succeed.
• Explain the interrelationship between the terms process capability, process precision, and process accuracy.
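One way to reproduce the sigma-scale benchmarking described in the objectives above is to convert a long-term defect rate into a short-term sigma level using the conventional 1.5 sigma shift. This minimal sketch assumes SciPy is available and uses a hypothetical DPMO figure.

from scipy.stats import norm

dpmo = 6210                        # hypothetical long-term defect rate
z_lt = norm.ppf(1 - dpmo / 1e6)    # long-term Z from the defect probability
sigma_level = z_lt + 1.5           # add the 1.5 sigma shift convention
print(f"{dpmo} DPMO is roughly {sigma_level:.1f} sigma")   # about 4.0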

DEFINE
Nature of Variables
• Explain the term leverage variable and its implications for customer satisfaction and business success.
• Explain what a dependent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.
• Explain what an independent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.
• Provide a specific explanation of the term blocking variable and explain when such variables should be used in an experiment.
Opportunities for Defects
• Provide a rational definition of a defect.
• Compute the defect-per-unit metric given a specific number of defects and units produced.
• Provide a definition of the term opportunity for defect, recognizing the difference between active and passive opportunities.
• Recognize the difference between uniform and random defects.
CTX Tree
• Define the term critical to satisfaction characteristic (CTS) and its importance to business success.
• Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction.
• Define the term critical to process characteristic (CTP) and its importance to product quality.
Process Mapping
• Construct a process map using standard mapping tools and symbols.
• Explain how process maps can be linked to the CT tree to identify problem areas.


• Explain how process maps can be used to identify constraints and determine resource needs.
• Define the key elements of a process map.
Process Baselines
• Conduct a complete baseline capability analysis (using a software package), interpret the results, and make valid recommendations (a minimal sketch follows after this list).
Six Sigma Projects
• Define a Six Sigma black belt project reporting and review process.
• Interpret each of the action steps associated with the four phases of process breakthrough.
• Explain why the planning questions are so important to project success.
• Explain how the generic planning guide can be used to create a project execution cookbook.
• Create a set of criteria for selecting and scoping Six Sigma black belt projects.
Six Sigma Deployment
• Provide a brief description of a Six Sigma black belt (SSBB).
• Describe the role and responsibilities of a SSBB.
• Provide a brief description of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of a SSC.
• Provide a brief description of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of a SSMBB.
• Understand the SSBB instructional curriculum.
• Recognize that the SSBB curriculum sequence is correlated to the Six Sigma breakthrough strategy.
• Recognize the importance and provide a description of the plan-train-apply-review (PTAR) learning process.
• Provide a brief description of the key implementation principles and identify principal deployment success factors.
• List all of the planning criteria for constructing a Six Sigma implementation and deployment plan.
• Construct a generic milestone chart that identifies all of the activities necessary for successfully managing the implementation of Six Sigma.
• Develop a business model that incorporates and exploits the benefits of Six Sigma.
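Here is a minimal baseline-capability sketch for the objective flagged above, assuming hypothetical, normally distributed data and two-sided specifications (the series itself works such studies in a statistical package such as Minitab).

from statistics import mean, stdev

data = [10.1, 9.9, 10.3, 10.0, 9.8, 10.2, 10.1, 9.9, 10.0, 10.2]
lsl, usl = 9.4, 10.6               # hypothetical spec limits

xbar, s = mean(data), stdev(data)
cp = (usl - lsl) / (6 * s)                       # potential capability
cpk = min(usl - xbar, xbar - lsl) / (3 * s)      # penalizes off-center processes
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")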

MEASURE
Scales of Measure
• Explain why survey questions that utilize the five-point Likert scale must often be reduced to two categories during analysis.


• Identify the four primary scales of measure and provide a brief description of their unique characteristics.
Data Collection
• Provide a specific explanation of the term replicate in the context of a statistically designed experiment.
• Explain why the order in which an experiment takes place must be randomized and what can happen when this is not done.
Measurement Error
• Explain how a statistically designed single-factor experiment can be used to study and control for the influence of measurement error.
• Explain how full factorial experiments can be employed to study and control for the influence of measurement error.
• Explain how fractional factorial experiments can be used to study and control for the influence of measurement error.
• Describe the role of measurement error studies during the measurement phase of breakthrough.
Statistical Distributions
• Construct and interpret a histogram for a given set of data.
• Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot.
• Understand what a normal distribution, or typical normal histogram, is and how it is used to estimate defect probability.
• Identify the circumstances under which the Poisson distribution could be applied to the analysis of product or transactional defects.
• Understand the applied differences between the Poisson and binomial distributions.
• Construct a histogram for a set of nonnormal data and isolate a transformation that will force the data to a normal condition.
• Understand what the t distribution is and how it changes as degrees of freedom change.
• Understand what the F distribution is and how it can be used to test the hypothesis that two variances are equal.
Static Statistics
• Provide a qualitative definition and graphical interpretation of variance.
• Compute the sample standard deviation given a set of data.
• Compute the mean, standard deviation, and variance for a set of normally distributed data.


• Explain why a sample size of n = 30 is often considered ideal (in the instance of continuous data).
• Provide a qualitative definition and graphical interpretation of the standard Z transform.
• Compute the corresponding Z value of a specification limit given an appropriate set of data.
• Convert a Z value into a defect probability given a table of areas under the normal curve.
• Provide a graphical understanding of standard deviation and explain why it is so important to Six Sigma work.
• Compute Z.usl and Z.lsl for a set of normally distributed data and then determine the probability of defect (see the sketch after this list).
• Compute Z.usl and Z.lsl for a set of nonnormal data with upper and lower specifications and then determine the probability of defect.
Dynamic Statistics
• Compute and interpret the total, inter-, and intragroup sums of squares for a given set of data.
• Explain what phenomenon could account for a differential between the short-term and long-term standard deviations.
• Provide a practical explanation of what could account for a differential between a short-term Z value and a long-term Z value.
• Explain the difference between inherent capability and sustained capability in terms of the standard deviation.
• Describe the role and logic of rational subgrouping as it relates to the short-term and long-term standard deviations.
• Explain the difference between dynamic mean variation and static mean offset.
• Explain why the term instantaneous reproducibility (i.e., process precision) is associated with the short-term standard deviation.
• Explain why the term sustained reproducibility is associated with the long-term standard deviation.
• Recognize the four principal types of process centering conditions and explain how each impacts process capability.
• Compute and interpret the within, between, and total sums of squares for a set of normally distributed data organized into rational subgroups.
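To ground the Z-transform objectives above, this minimal sketch computes Z.usl and Z.lsl for a hypothetical normally distributed sample and converts them into a probability of defect; it assumes SciPy is available.

from statistics import mean, stdev
from scipy.stats import norm

data = [10.1, 9.9, 10.3, 10.0, 9.8, 10.2, 10.1, 9.9, 10.0, 10.2]
lsl, usl = 9.5, 10.5               # hypothetical spec limits

xbar, s = mean(data), stdev(data)
z_usl = (usl - xbar) / s
z_lsl = (xbar - lsl) / s

p_defect = norm.sf(z_usl) + norm.sf(z_lsl)   # tail areas beyond each spec
print(f"Z.usl = {z_usl:.2f}, Z.lsl = {z_lsl:.2f}, P(defect) = {p_defect:.4f}")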

ANALYZE
Six Sigma Statistics
• Identify the key limitations of the performance metric final yield (i.e., output/input).
• Identify the key limitations of the performance metric first-time yield (Y.ft).


• Compute the throughput yield (Y.tp) given an average first-time yield and the number of related defect opportunities.
• Provide a rational explanation of the differences between product yield and process yield.
• Explain why the performance metric rolled-throughput yield (Y.rt) represents the probability of zero defects.
• Compute the probability of zero defects (Y.rt) given a specific number of defects and units produced.
• Understand the impact of process capability and complexity on the probability of zero defects.
• Construct a benchmarking chart using the product report option in the Minitab software program.
• List some sources that could offer the data necessary to estimate a sigma capability.
• Explain how throughput yield (Y.tp) and opportunity counts can be employed to establish sigma capability of a product/process.
• Compute the normalized yield (Y.norm) given a rolled-throughput yield (Y.rt) value and a specific number of defect opportunities.
• Compute the total defects-per-unit (TDPU) value given a rolled-throughput yield (Y.rt) value.
• Provide a brief description of how one would implement and deploy the performance metric rolled-throughput yield (Y.rt).
• Illustrate how a system-level DPU goal can be flowed down through a product/process hierarchy to assess the required CTQ capability.
• Illustrate how a series of CTQ capability values can be flowed up through a product/process hierarchy to establish the system DPU.
Process Metrics
• Compute and interpret the Cp index of capability.
• Compute and interpret the Cpk index of capability.
• Explain the theoretical and practical differences between Cp, Cpk, Pp, and Ppk.
• Explain why a Z can be used to measure process capability and explain its relationship to indices such as Cp, Cpk, Pp, and Ppk.
• Recognize that a 1.5 sigma shift between sampling periods is typical and therefore can be used when quantification is not possible.
• Understand the general guidelines for adjusting a Z value for the influence of shift and drift (when to add or subtract the shift value).
• Compute the Cp and Cpk indices for a set of normally distributed data with upper and lower performance limits.
• Explain why Cpk values will often not correlate to first-time yield information.
• Compute and interpret Z.st and Z.lt for a set of normally distributed data organized into rational subgroups.


• Compute and interpret Z.shift (static and dynamic) for a set of normally distributed data organized into rational subgroups.
• Compute and interpret Cp, Cpk, Pp, and Ppk.
• Explain how Cp, Cpk, Pp, and Ppk correlate to the four principal types of process centering conditions.
• Show how Z.st, Z.lt, Z.shift (dynamic), and Z.shift (static) relate to Cp, Cpk, Pp, and Ppk.
• Create and interpret a standardized computer process characterization report.
• Explain the difference between static mean offset and dynamic mean variation and how they impact process capability.
Diagnostic Tools
• Understand, construct, and interpret a multi-vari chart, then identify areas of application.
Simulation Tools
• Create a series of random normal numbers with a given mean and variance.
• Create k sets of subgroups where each subgroup consists of n samples from a normal distribution with a given mean and variance.
• Create a series of random lognormal numbers and then transform the data to fit a normal density function.
Statistical Hypotheses
• Explain how a practical problem can be translated into a statistical problem and the benefits of doing so.
• Explain what a statistical hypothesis is and why it is created and show the forms it may take in terms of the mean and variance.
• Define the concept of alpha risk and provide several examples that illustrate its practical consequences.
• Define the concept of statistical confidence and explain how it relates to alpha risk.
• Define the concept of beta risk and provide several examples that illustrate its practical consequences.
• Provide a detailed understanding of the contrast distribution and how it relates to the alternate hypothesis.
• Explain what is meant by the phrase statistically significant difference and recognize that such differences do not imply practical difference.
• Construct a truth table that illustrates how the null and alternate hypotheses interrelate with the concepts of alpha risk and beta risk.
• Recognize that the extent of difference required to produce practical benefit is referred to as delta.


• Explain what is meant by the term power of the test and describe how it relates to the concept of beta risk.
• Understand how sample size can impact the extent of decision risk associated with the null and alternate hypotheses.
• Establish the appropriate sample size for a given situation when presented with a sample size table.
• Describe the dynamic interrelationships between alpha, beta, delta, and sample size from a statistical as well as practical perspective.
• List the essential steps for successfully conducting a statistically based investigation of a practical real-world problem.
• Provide a detailed understanding of the null distribution and how it relates to the null hypothesis.
Continuous Decision Tools
• Provide a general description of the term experimental error and explain how it relates to the term replication.
• Provide a general description of one-way analysis of variance and discuss the role of sample size in it.
• List the principal assumptions underlying the use of ANOVA and provide a general understanding of their practical impact if they are violated.
• Recognize that when the intratreatment replicates are correlated, there is an adverse impact on experimental error.
• Demonstrate how the total variation in single-factor experiments can be characterized analytically and graphically.
• Demonstrate how the experimental error in an experiment can be partitioned from the total error for independent consideration.
• Demonstrate how the intergroup variation in an experiment can be partitioned from the total error for independent consideration.
• Compute the total sums of squares, as well as the intragroup and intergroup sums of squares for a single-factor experiment.
• Define how degrees of freedom are established for each source of variation in a single-factor experiment.
• Organize the sums of squares and degrees of freedom into an ANOVA table and compute the mean square ratios.
• Determine the random sampling error probability related to any given mean square ratio and illustrate the effect of sample size.
• Compute all post-hoc comparisons (i.e., pairwise t tests) in the instance that an F value proves to be statistically significant.
• Compute and interpret the relative effect (i.e., sensitivity) of an experimental factor, create a main effects plot, and set tolerances.
• Provide a conceptual understanding of statistical confidence interval and how it relates to the notion of random sampling error.
• Understand what the distribution of sample averages is and how it relates to the central limit theorem.


• Explain what the standard error of the mean is and demonstrate how it is computed.
• Compute the tail area probability for a given Z value that is associated with the distribution of sample averages.
• Compute the 95% confidence interval for the mean of a small data set and explain how it may be applied in practical situations.
• Rationalize the difference between a one-sided test of the mean and a two-sided test of the mean.
• Understand what the distribution of sample differences is and how it can be employed for testing statistical hypotheses.
• Compute the 95% confidence interval for the mean of sample differences given two samples of normally distributed data.
• Understand the nature of one- and two-sample t tests and apply these tests to an appropriate set of data.
• Compute and interpret the 95% confidence interval from a sample variance using the chi-square distribution.
• Explain how the 95% confidence interval from a sample variance can be used to test the hypothesis that two variances are equal.
Discrete Decision Tools
• Provide a brief explanation of the chi-square statistic and the conditions under which it can be applied.
• Understand how the probability of a given chi-square value can be determined.
• Recognize that the chi-square statistic can be employed as a goodness-of-fit test as well as a test of independence.
• Compute the expected cell frequencies for any given contingency table.
• Compute the chi-square statistic for a 2 × 2 contingency table and determine the probability of chance sampling error.
• Determine the extent of association for a 2 × 2 contingency table using the contingency coefficient.
• Compute the chi-square statistic for an n-way contingency table and determine the probability of chance sampling error.
• Illustrate how the chi-square statistic and cross-tabulation can be utilized in the analysis of surveys.
• List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.
• Recognize that the cross-tabulation of two classification variables, each with two categories, is referred to as a 2 × 2 contingency table.
• Explain how to establish the degrees of freedom associated with any contingency table.
• Construct a 95% confidence interval for a Poisson mean and discuss how this can be used to test hypotheses about Poisson means.
• Understand how to calculate the standard deviation for a set of data selected from a binomial distribution.


• Compute the 95% confidence interval for a proportion and explain how it can be used to test hypotheses about proportions.
• Understand the nature of discontinuity and how to apply Yates correction to compensate for this effect (see the sketch after this list).
• Recognize that the square root of a chi-square is equal to Z for the special case where df = 1.
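A minimal sketch of the 2 × 2 contingency analysis above, using SciPy's chi-square test with the Yates continuity correction (the correction appropriate when df = 1); the pass/fail counts are hypothetical.

from scipy.stats import chi2_contingency

table = [[42, 18],    # hypothetical line A: pass, fail
         [30, 35]]    # hypothetical line B: pass, fail

chi2, p, df, expected = chi2_contingency(table, correction=True)   # Yates applied
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.4f}")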

IMPROVE
Experiment Design Tools
• Provide a general description of a statistically designed experiment and what such an experiment can be used for.
• Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers.
• Describe the two primary components of an experimental system and their related subelements.
• Explain the primary differences between a random-effects model and a fixed-effects model.
• Identify the four principal families of experimental designs and what each family of designs is used for.
• Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis.
• Provide a specific explanation of the term confounding and identify several ways to control for this situation.
• State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative.
• Explain how the settings (i.e., levels) of an experimental factor can significantly influence the outcome of an experiment.
• Recognize that the most powerful application of modern statistics cannot rescue a poorly designed experiment.
• Explain the term full factorial experiment and how it differs from a fractional factorial experiment.
• Describe the overriding limitations of the classical test plan when two factors are involved and state several advantages of a full factorial design.
• Show at least four ways that a two-factor, two-level full factorial design matrix can be displayed and communicated.
• Understand the added value of a balanced and orthogonal design and the practical implications when these properties are not present.
• Construct the vectored columns for a two-factor, two-level full factorial design, given Yates standard order.
• Compute the relative effect for each experimental effect and display the results on a Pareto chart.
• Design and conduct a two-factor, multilevel full factorial experiment and interpret the outcome from a statistical and practical perspective.


• Provide a general description of a fractional factorial experiment and the inherent advantages that fractional arrays offer.
• Understand why third-order and higher effects are most often statistically and practically insignificant.
• Create a half fraction of a full factorial experiment by sorting on the highest-order interaction and then discern the pattern of confounding.
• Recognize how an unreplicated fractional factorial design can be folded into a full factorial design with replication.
• List the unique attributes associated with fractional factorial designs of resolution III, IV, and V.
• Explain what happens to the experimental error term when a factor is collapsed out of the matrix by folding.
• Explain how Plackett–Burman experimental designs are used and discuss their unique strengths and weaknesses.
• Construct and interpret a main-effects plot for a fractional factorial experiment using the response means as a basis for the plot.
• Construct and interpret a main-effects plot for a fractional factorial experiment using the response variances as a basis for the plot.
• Compute the sums of squares associated with each experimental effect in a fractional factorial experiment.
• Create an ANOVA table and compute the mean square ratio for each experimental effect in a fractional factorial experiment.
• Determine the random sampling error probability for any given MSR in a fractional factorial experiment.
• Compute the relative effect for each experimental effect in a fractional factorial experiment and display the results in a Pareto chart.
• Explain the phrase hidden replication and understand that this phenomenon does not preclude the a priori consideration of sample size.
• Explain the phrase column contrast and show how it can be used to establish the factor effect and the related sums of squares (a sketch follows below).
• Construct and interpret a main effects plot for a two-factor, two-level experiment and display the 95% confidence intervals on the plot.
• Construct and interpret an interaction plot for a two-factor, two-level experiment and display the 95% confidence intervals on the plot.
• Compute the sums of squares associated with each experimental effect in a two-factor, two-level full factorial experiment.
• Create an ANOVA table and compute the mean squares ratios for each experimental effect in a two-factor, two-level full factorial experiment.
• Determine the random sampling error probability for any given mean square ratio in a two-factor, two-level full factorial experiment.
• Implement center points within a two-factor, two-level full factorial experiment and estimate whether there is any statistically significant curvature.
Robust Design Tools
Nothing special.
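Here is a minimal sketch of the column-contrast effect calculation referenced above for a two-factor, two-level full factorial in Yates standard order; the responses are hypothetical, and NumPy is assumed to be available.

import numpy as np

# Runs in Yates standard order: (1), a, b, ab; columns are coded -1/+1.
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
y = np.array([45.0, 52.0, 47.0, 62.0])     # hypothetical responses

# An effect is the mean response at the high level minus the mean at the low level.
effect_A = y[A == 1].mean() - y[A == -1].mean()
effect_B = y[B == 1].mean() - y[B == -1].mean()
effect_AB = y[(A * B) == 1].mean() - y[(A * B) == -1].mean()
print(f"A: {effect_A:.1f}, B: {effect_B:.1f}, AB: {effect_AB:.1f}")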


Empirical Modeling Tools
Nothing special.
Tolerance Tools
Nothing special.
Risk Analysis Tools
Nothing special.
DFSS Principles
Nothing special.
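To make the effect arithmetic in the objectives above concrete, here is a minimal Python sketch for a two-factor, two-level full factorial in Yates standard order. The factor labels and response values are hypothetical:

# Minimal sketch: main effects and interaction for a 2x2 full factorial
# in Yates standard order. Response values are hypothetical.
runs = [(-1, -1), (1, -1), (-1, 1), (1, 1)]   # Yates order for A and B
y = [45.0, 71.0, 48.0, 65.0]                  # one response per run

a = [r[0] for r in runs]              # vectored column for A
b = [r[1] for r in runs]              # vectored column for B
ab = [r[0] * r[1] for r in runs]      # interaction column AB = A x B

def effect(column):
    # Average response at the +1 setting minus average at the -1 setting.
    hi = [yi for c, yi in zip(column, y) if c > 0]
    lo = [yi for c, yi in zip(column, y) if c < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for name, col in (("A", a), ("B", b), ("AB", ab)):
    print(f"effect of {name}: {effect(col):+.1f}")

Ranking the absolute effects from such a table is exactly what the Pareto-of-effects objective asks for: the tallest bar points at the dominant factor.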

CONTROL
Precontrol Tools
• Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented.
• Describe the unique characteristics of the precontrol method and compare precontrol to statistical process control charts.
Continuous SPC Tools
• Explain what the term statistical process control means and discuss how it differs from statistical process monitoring.
• List the basic components of a control chart and provide a general description of the role of each component.
• Provide a conceptual understanding of each step associated with the general cookbook for control charts.
• Explain how the use of rational subgroups forces nonrandom variations due to assignable causes to appear between sampling periods.
• Explain how the control limits of an SPC chart are directly linked to the concepts associated with hypothesis testing.
• Construct and interpret an X-bar and R chart for a set of normally distributed data organized into rational subgroups.
• Illustrate how an X-bar and R chart can be used to study and control for measurement error and contrast this with the DOE/ANOVA method.
• Construct and interpret an X-bar and R chart for a set of data (organized into rational subgroups) that is not normally distributed within groups.
• Construct and interpret an individuals chart for a set of normally distributed data collected over time.
• Construct and interpret an individuals chart for a set of nonnormally distributed data collected over time.


• Construct and interpret an exponentially weighted moving average (EWMA) chart and highlight its advantages and disadvantages.
• Provide a detailed understanding of how to adjust a process parameter using the method of bracketing and contrast this technique to other methods.
Discrete SPC Tools
• Construct and interpret a P chart and explain how the control limits for this chart are related to the confidence intervals of the binomial distribution.
• Construct and interpret a U chart and explain how the control limits for this chart are related to confidence intervals for the Poisson distribution.
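To tie the P chart objective above to the binomial arithmetic it names, here is a minimal Python sketch. The constant subgroup size and defective counts are hypothetical; the limits are three binomial standard errors around the average fraction defective:

# Minimal sketch: P chart control limits from the binomial distribution.
import math

n = 200                                              # constant subgroup size
defectives = [9, 12, 7, 11, 14, 8, 10, 13, 9, 11]    # hypothetical counts

p_bar = sum(defectives) / (n * len(defectives))      # average fraction defective
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)         # binomial standard error

ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)                  # LCL cannot fall below zero

print(f"p-bar = {p_bar:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
for i, d in enumerate(defectives, start=1):
    p = d / n
    print(f"subgroup {i}: p = {p:.4f}", "OUT" if not (lcl <= p <= ucl) else "in")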

SIX SIGMA TRANSACTIONAL GREEN BELT TRAINING
Introductions:
Name
Title or position
Organization
Background in quality improvement programs, statistics, etc.
Hobbies/personal information
Agenda
Ground rules
Exploring our values
Six Sigma overview
• Six Sigma focus
• Delighting the customer through flawless execution
• Rapid breakthrough improvement
• Advanced breakthrough tools that work
• Positive and deep culture change
• Real financial results that impact the bottom line
What is Six Sigma?
• Performance target
• Practical meaning
• Value
• A problem-solving methodology
• Vision
• Philosophy
Aggressive goal — metric (standard of measurement)
• Benchmark
• Method — how are we going to get there?
• Customer focus
• Breakthrough improvement
• Continual improvement
• People involvement


Bottom line: Six Sigma defines the goals of the business
• Defines performance metrics that tie to the business goals by identifying projects and using performance metrics that will yield clear business results.
• Applies advanced quality and statistical tools to achieve breakthrough financial performance.
The Six Sigma strategy
• Which business function needs it?
• Is your leadership on board?
• Fundamentals of leadership
• Challenge the process
• Inspire a shared vision
• Enable others to act
• Model the way
• Encourage the heart
• Six Sigma as a catalyst for leaders
The principles of Six Sigma
• We only act on what is known; therefore, we must look for appropriate and applicable data.
• We know more when we search; therefore, we must have appropriate and applicable methodologies.
• We search for what we question; therefore, we must be certain that what we question is related to customer satisfaction.
• We question what we measure; therefore, we must be certain of our measuring capability.
• If we question and measure, then decisions can be made based on data rather than "gut feelings."
Roles and responsibilities
Executive management:
• Will set meaningful goals and objectives for the corporation
• Will drive the implementation of Six Sigma publicly
Champion:
• Will select black belt projects consistent with corporate goals
• Will drive the implementation of Six Sigma through public support and removal of barriers
• Will be accountable for the performance of the BBs
Master black belt:
• MBBs, or shoguns as we call them, are the experts of Six Sigma tools and methodologies. They are responsible for training and coaching BBs and may also be responsible for leading large projects on their own.
Black belt:
• BBs are the main force of the Six Sigma philosophy. They are responsible for leading and teaching the Six Sigma methodology within the organization. They are also responsible for training the green belts and for ensuring that sources of variation in manufacturing and transactional processes are objectively identified, quantified, and controlled or eliminated. How? By using the breakthrough strategy, process performance is sustained through well-developed, documented, and executed process control plans, which include defining the goal and identifying the model to use.
• Goal: to achieve improvements in rolled-throughput yield, cost of poor quality, and capacity-productivity.
• To deliver successful projects using the breakthrough strategy
• To train and mentor the local organization on Six Sigma
• The model
• Kano model
• QFD — House of Quality
• D-M-A-I-C
Green belt:
• Will deliver successful localized projects using the breakthrough strategy.
• Will participate in larger BB DMAIC or DFSS projects.
• Will lead other GB Six Sigma projects.
• Will apply Six Sigma knowledge in daily work.
Six Sigma instructor:
• Will make sure every black belt candidate is certified in the understanding, usage, and application of the Six Sigma tools.
Project selection — the most important component of the successful transactional project.
• The Y = f(x) relationship. Ys are the functional items that the customer needs, wants, or expects, and they are always thought of as "outputs." Xs, on the other hand, are the specific requirements that will satisfy the Ys, and they are always thought of as "inputs." It is imperative that the reader understand that one Y may have multiple Xs and those Xs may have sub-Xs (noted as xs), etc.
• Identify the Y and determine the Xs — the actual cascading process from Y to X to x to x1, x2, etc. The idea here is to start very broad and flow down to the level of a specific, measurable problem.
• Apply criteria to projects — obviously, each organization may have its own criteria; however, the following five seem to be generic enough to get you going in the right direction. a) Does the problem relate in a positive way to customer satisfaction? b) Does the problem repeat? c) Do you have control over the problem? d) Is the scope of the project narrow enough to be worked on? e) Do metrics exist? Can measurements be established in an appropriate amount of time?
• Develop a high-level problem statement that includes a) the specificity of the problem; b) descriptive statements about the problem (e.g., location, occurrence, etc.); and c) scope and a list of data needed. The problem statement is a living description of the issue to be resolved and may be modified as the project evolves.


The DMAIC model — high-level overview. This model drives breakthrough improvement.
• Define: the selection of performance characteristics critical in meeting the customer's expectations.
• Measure: the creation and validation of a measurement system.
• Analyze: the identification of sources of variation from the performance objectives.
• Improve: the discovery of process relationships and the establishment of new procedures.
• Control: the monitoring of implemented improvements to maintain gains and ensure corrective actions are taken when necessary.
The foundation of the Six Sigma tools
• Cost of poor quality
• What is cost of poor quality? In addition to the direct costs associated with finding and fixing defects, cost of poor quality also includes:
• The hidden cost of failing to meet customer expectations the first time.
• The hidden opportunity for increased efficiency.
• The hidden potential for higher profits.
• The hidden loss in market share.
• The hidden increase in production cycle time.
• The hidden labor associated with ordering replacement material.
• The hidden costs associated with disposing of defects.
Getting there through inspection
• Defects and the hidden factory
• Rolled-throughput yield vs. first-time yield
What causes defects? Excess variation due to a) manufacturing processes, b) supplier (incoming) material variation, and c) unreasonably tight specifications (tighter than the customer requires).
Dissecting process capability — premise of Six Sigma: sources of variation can be a) identified and b) quantified. Therefore, they can be controlled or eliminated. How do we improve capability?
Six Sigma, metrics, and continual improvement
• Six Sigma is characterized by a) defining critical business metrics, b) tracking them, and c) improving them using proactive process improvement. Six Sigma's primary metric is defects per unit, which is directly related to rolled-throughput yield (Yrt)
• Yrt = e^(-dpu) (a short calculation sketch follows at the end of this overview)
• Cost of poor quality and cycle time (throughput) are two other metrics
• Continual improvement
• Calculating the product sigma level
Metrics
• Defects per unit (DPU) drives plant-wide improvement.


• Defects per million opportunities (DPMO) allows for comparison of dissimilar products.
• Sigma level allows for benchmarking within and across companies.
• Tracking trends in metrics.
• Harvesting the fruit of Six Sigma.
• PPM conversion chart.
Translating needs into requirements
Deployment success: if and only if Six Sigma
• Directly affects quality, cost, cycle time, and financial results
• Focuses on the customer and critical metrics
• Directly attacks variation, defects, and the hidden factory
• Ensures a predictable factory
• Black belt execution strategy with the support of management
• Describe the BB execution strategy
• Overview the steps
• Overview the tools
• Overview the deliverables
• Discuss the role of the black belt
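As promised in the metrics overview above, the primary metrics chain together arithmetically. This minimal Python sketch uses hypothetical counts; the 1.5-sigma shift added to obtain the short-term sigma level is the customary Six Sigma convention:

# Minimal sketch: DPU, rolled-throughput yield, DPMO, and sigma level.
import math
from statistics import NormalDist

defects, units, opportunities_per_unit = 120, 1000, 8   # hypothetical counts

dpu = defects / units
yrt = math.exp(-dpu)                    # Yrt = e^(-dpu), defect-free probability
dpmo = defects / (units * opportunities_per_unit) * 1_000_000

z_lt = NormalDist().inv_cdf(1 - dpmo / 1_000_000)   # long-term Z
sigma_level = z_lt + 1.5                # customary 1.5-sigma shift convention

print(f"dpu = {dpu:.3f}  Yrt = {yrt:.1%}  DPMO = {dpmo:,.0f}  sigma level = {sigma_level:.2f}")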

THE DMAIC MODEL IN DETAIL

THE DEFINE PHASE The individual components of this phase are: a) define problem, b) identify customer, c) identify CTQs, d) map process, e) refine process scope, and f) update project charter.

WHO IS THE CUSTOMER?

• What does the customer want?
• How can the organization benefit from fixing a problem?
• A simple QFD (quality function deployment) tool is used to emphasize the importance of understanding customer requirements, the CTs (critical tos): CTCost, CTDelivery, CTQuality.
• The tool relates the Xs and Ys (customer requirements) using elements documented in the process map and existing process expertise.
• Result: a Pareto of Xs that are used as input into the FMEA and control plans. These are the CTPs, critical to the process. This includes anything that we can control or modify about our process that will help us achieve our objectives.
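The X and Y relationship described above is often scored with a cause-and-effect (C&E) matrix. This minimal Python sketch uses hypothetical output weights, input names, and the conventional 0/1/3/9 scores; the sorted totals are the Pareto of Xs fed into the FMEA and control plans:

# Minimal sketch: a cause-and-effect (C&E) matrix ranking process inputs (Xs)
# against customer outputs (Ys). All names, weights, and scores are hypothetical.
ys = {"CTQuality": 10, "CTDelivery": 7, "CTCost": 5}   # output: importance weight

scores = {                    # rows are Xs; columns follow the order of ys
    "oven temperature": [9, 1, 3],
    "conveyor speed":   [3, 9, 1],
    "operator shift":   [1, 3, 1],
    "invoice template": [0, 3, 9],
}

weights = list(ys.values())
totals = {x: sum(w * s for w, s in zip(weights, row)) for x, row in scores.items()}

# Pareto of Xs: highest weighted total first
for x, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{x}: {total}")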

MEASUREMENT PHASE
The individual components of this phase are: a) identify measurement and variation, b) determine data type, c) develop data collection plan, d) perform MSA, e) perform data collection, and f) perform capability analysis. The idea here is to establish the performance baseline.
The measure phase — IMPORTANT!!! A well-defined project results in a successful project. Therefore, the problem statement, objective, and improvement metric need to be aligned. If the problem statement identifies defects as the issue, then the objective is to reduce defects, and the metric to track the objective is defects. This holds true for any problem statement, objective, and metric (% defects, overtime, RTY, etc.).
• Primary metric — a green belt needs to be focused; if other metrics are identified that impact the results, identify these as secondary metrics, i.e., reducing defects is the primary improvement metric, but we do not want to reduce line speed (line speed is the secondary metric).
• Project benefits — do not confuse projected project benefits with your objective. Make sure you separate these two items. There are times when you may achieve your objective yet not see the projected benefits. This is because we cannot control all issues. We need to tackle them in a methodical order.
Purpose of measurement phase
• Define the project scope, problem statement, objective, and metric.
• Document the existing process (using a process map, C&E matrix, and a FMEA).
• Identify key output variables (Ys) and key input variables (Xs).
• Establish a data-collection system for your Xs and Ys if one does not exist.
• Evaluate the measurement system for each key output variable.
• Establish baseline capability for key output variables (potential and overall).
Establish data-collection system
• Determine if you have a method by which you can effectively and accurately collect data on your Xs and Ys in a timely manner. If this is not in place, you will need to implement a system. Without a system in place, you will not be able to determine whether you are making any improvements in your project.
• Establish this system such that you can historically record the data you are collecting.
• This information should be recorded in a database that can be readily accessed.
• The data should be aligned in the database in such a manner that for each output (Y) recorded, the operating conditions (X) are identified. This becomes important for future reference.
• This data-collection system is absolutely necessary for the control phase of your project. Make sure all those who are collecting data realize its importance.


MEASUREMENT SYSTEMS ANALYSIS
Purpose: to determine whether the measurement system, defined as the gauge and operators, can be used to precisely measure the characteristic in question. We are not evaluating part variability, but gauge and operator capability.
• Guidelines
• Determines the measurement capabilities for Ys
• Needs to be completed before assessing capability of Ys
• These studies are called gauge repeatability and reproducibility (GR&R) studies, measurement systems analysis (MSA), or measurement systems evaluation (MSE)
• Indices:
• Precision-to-tolerance (P/T) ratio = proportion of the specification taken up by measurement error. Ten percent or less is desirable.
• Precision-to-total-variation (P/TV) ratio (%R&R) = proportion of the total variability taken up by measurement error. Thirty percent is marginal.
Capability studies: used to establish the proportion of the operating window taken up by the natural variation of the process. Short-term (potential) and long-term (overall) estimates of capability indices are taught. (The reader may want to review Volume 1 or Volume 4 for the discussion on long- and short-term capability.)
• Indices used assuming the process is centered: Cp, Pp, Zst
• Indices used to evaluate a shifted process: Cpk, Ppk, Zlt
Measure: potential project deliverables
• Project definition
• Problem description
• Project metrics
• Process exploration:
• Process flow diagram
• C&E matrix, PFMEA, fishbones
• Data-collection system
• Measurement system(s) analysis (MSA):
• Attribute/variable gauge studies
• Capability assessment (on each Y)
• Capability (Cpk, Ppk, σ level, DPU, RTY)
• Graphical and statistical tools
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
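A minimal Python sketch of the capability indices just listed, using hypothetical specification limits and data. It computes the overall (long-term) indices Pp and Ppk; Cp and Cpk would be computed the same way but with a short-term (within-subgroup) estimate of sigma:

# Minimal sketch: Pp (assumes centering) vs. Ppk (uses the mean).
import statistics

lsl, usl = 4.0, 10.0                                  # hypothetical spec limits
data = [6.8, 7.1, 6.5, 7.4, 6.9, 7.2, 6.6, 7.0, 7.3, 6.7]

mean = statistics.mean(data)
s = statistics.stdev(data)            # overall (long-term) sigma estimate

pp = (usl - lsl) / (6 * s)            # spread only; assumes a centered process
ppk = min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))   # penalizes off-center

print(f"mean = {mean:.3f}, s = {s:.3f}, Pp = {pp:.2f}, Ppk = {ppk:.2f}")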


THE ANALYSIS PHASE
The individual components of this phase are: a) review analysis tools, b) apply graphical analysis tools for both attribute and variable data (e.g., Pareto, histogram, run charts, box plot, scatter plot, and so on to determine patterns of variation), and c) identify sources of variation.
Purpose of the analysis phase
• To identify high-risk input variables (Xs) from the failure modes and effects analysis (FMEA).
• To reduce the number of process input variables (Xs) to a manageable number via hypothesis testing and ANOVA techniques.
• To determine the presence of and potential elimination of noise variables via multi-vari studies.
• To plan and document initial improvement activities.
• Failure modes and effects analysis
• Documents effects of failed key inputs (Xs) on key outputs (Ys)
• Documents potential causes of failed key input variables (Xs)
• Documents existing control methods for preventing or detecting causes
• Provides prioritization for actions and documents actions taken
• Can be used as the document to track project progress
• Multi-vari studies: study process inputs and outputs in a passive mode (natural day-to-day variation). Their purpose is:
• To identify and eliminate major noise variables (machine to machine, shift to shift, ambient temperature, humidity, etc.) before moving to the improvement phase.
• To take a first look at major input variables.
• To help select or eliminate variables for study in designed experiments.
• Identify the vital few Xs.
• Determine the governing transformation equation.
Analyze: potential project deliverables
• Project definition
• Problem description
• Project metrics
• Passive process analysis
• Graphical analysis
• Multi-vari studies
• Hypothesis testing
• Updated PFMEA
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review


THE IMPROVEMENT PHASE
The individual components of this phase are: a) generate improvement alternatives, b) conduct a pilot study, c) validate improvement, d) create the "should be" process map, e) update FMEA, and f) perform a cost-benefit analysis.
• DOE (design of experiments) is the backbone of process improvement.
• From the subset of vital few Xs, experiments are designed to actively manipulate the inputs to determine their effect on the outputs (Ys).
• This phase is characterized by a sequence of experiments, each based on the results of the previous study. The intent is to generate improvement alternatives.
• Critical variables are identified during this process.
• Usually three to six Xs account for most of the variation in the outputs.
• Control and continuous improvement.
• Perform a pilot.
• Validate the improvement.
• Create the "should be" process map.
• Update the FMEA.
• Perform a preliminary cost-benefit analysis.
Improve: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Design of experiments:
• DOE planning sheet
• DOE factorial experiments
• Y = f(x1, x2, x3, …)
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review

THE CONTROL PHASE
The individual components of this phase are: a) develop control strategy, b) develop control plan, and c) update SOP and training plan. The idea here is to implement a long-term control strategy and methods.
Develop an execution plan
• Optimize, eliminate, automate, and control the vital few inputs.
• Document and implement the control plan.
• Sustain the gains identified.
• Reestablish and monitor long-term delivered capability.


• Implement continuous improvement efforts (this is perhaps the key responsibility of all the green belts at the functional area).
• Provide execution strategy support systems.
• Establish safety requirements.
• Define maintenance plans.
• Establish a system to track special causes.
• Draw up a required and critical spare parts list.
• Write troubleshooting guides.
• Develop control plans.
• Make SPC charts.
• Buy process monitors.
• Oversee inspection points.
• Provide metrology control.
• Set workmanship standards.
• Others?
Control: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Optimization of Ys:
• Monitoring Ys
• Eliminating or controlling Xs
• Sustaining the gains:
• Updated PFMEA
• Process control plan
• Action plan
• Project summary:
• Conclusions
• Issues and barriers
• Final report
• Completed local project review
Additional items of discussion. The following items should be discussed at the appropriate and applicable complexity level of the participants. In some cases, some of the following items may be just mentioned but not discussed.
Rolled-throughput yield
• The classical perspective of yield
• Simple first-time yield = traditional yield
• Measuring first-pass yield
Normalized yield
• Complexity is a measure of how complicated a particular good or service is. Theoretically, complexity will likely never be quantified in an exacting manner. If we assume that all characteristics are independent and mutually exclusive, we may say that complexity can be reasonably estimated by a simple count. This count is referred to as an opportunity count. In terms of quality, each product or process characteristic represents a unique opportunity to either add or subtract value. Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily similar.
• Formulas to know
• Hidden factory
• Take away — rolled-throughput yield
• Integrates rework loops
• Highlights "high-loss" steps…
• Put project emphasis here!
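A minimal Python sketch of the yield formulas referred to above. The step first-time yields are hypothetical, and the normalized yield shown uses one common definition, the kth root of the rolled-throughput yield:

# Minimal sketch: rolled-throughput yield integrates the rework loops of the
# hidden factory; low-yield steps dominate it.
first_time_yields = [0.98, 0.95, 0.99, 0.90, 0.97]   # hypothetical, one per step

yrt = 1.0
for step_yield in first_time_yields:
    yrt *= step_yield                                # product of step yields

k = len(first_time_yields)
normalized_yield = yrt ** (1 / k)                    # average yield per step

print(f"rolled-throughput yield = {yrt:.1%}")
print(f"normalized yield over {k} steps = {normalized_yield:.1%}")
# The 90% step dominates Yrt here; put project emphasis there.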

DPMO, counting opportunities
Nonvalue-add rules: an opportunity count should never be applied to any operation that does not add value. Transportation and storage of materials provide no opportunities. Deburring operations do not count either. Testing, inspection, gauging, etc. do not count; the product in most cases remains unchanged. An exception: an electrical tester where the tester is also used to program an EPROM. The product was altered and value was added.
Supplied components rules: each supplied part provides one opportunity. Supplied materials, such as machine oil, coolants, etc., do not count as supplied components.
Connections rules: each "attachment" or "connection" counts as one. If a device requires four bolts, there would be an opportunity of four, one for each bolt connected. A sixty-pin integrated circuit, SMD, soldered to a PCB counts as sixty connections. (Sanity check rule: "Will applying counts in these operations take my business in the direction it is intended to go?" If counting each dimension checked on a CMM inflates the denominator of the equation, adds no value, and increases cycle time when the company objective is to take cost out of the product, then this type of count would be opposed to the company objective. Hence, it would not provide an opportunity. Once you define an opportunity, however, you must institutionalize that definition to maintain consistency. This opportunity, if it is good enough for the original evaluation, must also be good enough to be evaluated at the end of the project. In other words, the opportunity count must have the same base; otherwise it is meaningless.)
Introduction to data
• Description and definitions
• What do you want to know?
• Discrete vs. continuous data
• Categories of scale
• Nominal scale — nominal scales of measure are used to classify elements into categories without considering any specific property. Examples of nominal scales include "causes" on fishbone diagrams, yes/no, pass/fail, etc.
• Ordinal scale — ordinal scales of measure are used to order or rank nominal (pass/fail) data based on a specific property. Examples of ordinal scales include relative height, Pareto charts, customer satisfaction surveys, etc.


• Likert scale (ordinal) — example rating scale ranges: five-point school grading system (A B C D E); seven-point numerical rating (1 2 3 4 5 6 7); verbal scale (excellent, good, average, fair, poor).
• Interval and ratio scale — interval scales of measure are used to express numerical information on a scale with equal distance between categories, but no absolute zero. Examples are: temperature (°F and °C), a dial gauge sitting on top of a gauge block, comparison of differences, etc. Ratio scales of measure are used to express numerical information on a scale with equal distance between categories, but with an absolute zero in the range of measurement. Examples are: a tape measure, ruler, position vs. time at constant speed, and so on.
Selecting statistical techniques
At this point of the discussion the instructor may want to introduce a computer software package to facilitate the discussion of statistical tools. Key items of discussion should be:
• Entering data into the program
• Cutting and pasting
• Generating random numbers
• Importing and exporting data from databases, Excel, ASCII, etc.
• Pull-down menus of the software (for general statistics, graphs, etc.)
• Manipulating and changing data
• Basic statistics and probability distributions
• Calculating z scores and probability
• Calculating capability
• Control charts
Discussion and practice of key statistical techniques and specific tools
Basic statistics
• Mean, median, mode, variance, and standard deviation
• Distributions
• Normal, Z-transformation, normal and nonnormal probability plots, nonnormal, Poisson, binomial, hypergeometric, t-distribution
• Central limit theorem — a very important concept. Emphasis must be placed on this theorem because it is the fundamental concept (backbone) of inferential statistics and the foundation for tools to be learned later this session. The central limit theorem allows us to assume that the distribution of sample averages will approximate the normal distribution if n is sufficiently high (n > 30 for unknown distributions). The central limit theorem also allows us to assume that the distributions of sample averages of a normal population are themselves normal, regardless of sample size. The SE mean shows that as sample size increases, the standard deviation of the sample means decreases. The standard error will help us calculate confidence intervals. Confidence intervals (CIs) are derived from the central limit theorem and are used by black belts to quantify a level of certainty or uncertainty regarding a population parameter based on a sample. (A short numeric sketch of the standard error and a confidence interval appears just before the hypothesis testing section below.)
• Degrees of freedom
• Standard error
• Confidence
Parametric confidence intervals — the parametric confidence interval assumes a t-distribution of sample means and uses this to calculate confidence intervals.
Confidence intervals for proportions — confidence intervals can also be constructed for fraction defective (p), where x = number of defect occurrences, n = sample size, and p = x/n = proportion defective in the sample. For cases in which the number defective (x) is at least 5 and the total number of samples n is at least 30, the normal distribution approximation can be used as a shortcut. For other cases, the binomial tables are needed to construct this confidence interval.
• Accuracy and precision
• Defects per million
• Population vs. sample
• Sampling distribution of the mean
• Concept of variation
• Additive property of variances
• Attribute or variable
Types of data — variable and attribute
• Rational subgroups
• Data-collection plan — your data-collection plan and execution will make or break your entire project!
Data-collection plan — ask yourself the following questions:
• What do you want to know about the process?
• What are the likely causes of variation in the process (Xs)?
• Are there cycles in the process?
• How long do you need to collect data to capture a true picture?
• Who will be collecting the data?
• How will you test your measurement system?
• Are the operational definitions detailed enough?
• How will you display the data?
• Is data available? If not, how will you prepare data-collection sheets?
• Where could data collection occur? What are your correction plans?
Process capability and performance
• Process capability
• Capability
• Process characterization
• Converting DPM to a Z value
• Short-term vs. long-term
• Indicating the spread


• Indicates the spread and center
• Indicates spread and centering
• Process shift — how much should we expect? Is 1.5σ enough? Where does it come from?
• The map to the indicators and what they mean
Stability
• Process control
• Pooled vs. total variation
• Short-term vs. long-term
• Which standard deviation?
• Area of improvement
• What is good?
Measurement system analysis
• Why MSA? How does variation relate to MSA?
• Measurement systems
• Resolution
• Bias
• Accuracy vs. precision
• Linearity
Measurement tools
• A simple gauge
• Calibration
• Consistency
• Gauge R&R
• GR&R with ANOVA
• Indices (Cp, Cpk, Pp, Ppk)
• Cp is the "potential" capability of your process, assuming you are able to eliminate all nonrandom causes. In addition, Cp assumes the process is centered. This metric is also called "process entitlement," or the best your process could ever hope to perform in the short term. In order to calculate this metric you need a close approximation of the short-term standard deviation (which is not always available).
• Cpk and Ppk use the mean, not only the tolerance band, to estimate capability. Cpk = min(Cpk lower, Cpk upper), i.e., it is based on the shortest numerical distance between the mean and the nearest spec limit.
How do you know if your gauge is good enough?
Introduce definition of quality (ISO 8402)
Control charts
• Variable and attribute (X-bar and s, X-bar and R, IndX and MR, p, c, etc.)
• Multi-vari charts: the purpose of these charts is to narrow the scope of input variables and, therefore, to identify the inputs and outputs (KPIVs and KPOVs)
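As promised in the basic statistics discussion above, here is a short numeric sketch of the standard error and a t-based confidence interval. The sample is hypothetical and the t critical value is read from a table:

# Minimal sketch: standard error and a 95% t-based CI for a sample mean.
import math
import statistics

sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.5, 12.0]
n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)

se = s / math.sqrt(n)          # standard error: shrinks as n grows
t_crit = 2.262                 # t(0.975, df = 9), from a t table

lo, hi = xbar - t_crit * se, xbar + t_crit * se
print(f"mean = {xbar:.3f}, SE = {se:.4f}, 95% CI = ({lo:.3f}, {hi:.3f})")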


HYPOTHESIS TESTING INTRODUCTION
Why learn hypothesis testing? Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys. To help identify the vital few Xs, historical or current data may be sampled. (Passive: you have either directly sampled your process or have obtained historic sample data. Active: you have made a modification to your process and then sampled. Statistical testing provides objective solutions to questions that are traditionally answered subjectively. Hypothesis testing is a stepping stone to ANOVA and DOE.)
• Hypothesis testing terms that you need to remember
• Steps in hypothesis testing
• Hypothesis testing roadmap
• Hypothesis testing description
• The null and alternate hypotheses
• The hypothesis testing form
• Test for significance
• Significance level
• Alpha risk — this alpha level requires two things: a) an assumption of no difference (Ho) and b) a reference distribution of some sort — producer's risk
• Beta risk — consumer's risk

PARAMETERS VS. STATISTICS

Parameters deal with populations and are generally denoted with Greek letters. Statistics deal with samples and are generally denoted with English letters. There is no substitute for professional judgment. It is true that in hypothesis testing we answer the practical question: "Is there a real difference between _____ and _____?" However, we use relatively small samples to answer questions about population parameters. There is always a chance that we selected a sample that is not representative of the population. Therefore, there is always a chance that the conclusion obtained is wrong. With some assumptions, inferential statistics allows us to estimate the probability of getting an "odd" sample. This lets us quantify the probability (P value) of a wrong conclusion.
What is the signal-to-noise ratio?
Managing change
Measures and rewards
An introduction to graphical methods
• Pareto
• Histogram
• Run chart
• Scatter plot


• Correlation vs. causality
• Boxplot
• Hypothesis tests for means
• Comparison of means
t distribution
Hypothesis testing for attribute data
Useful definitions
Hypothesis tests: proportions
Chi-square test for independence
Chi-square test
Chi-square test for a relationship
ANOVA
Why ANOVA?
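A minimal sketch of one such test, the two-sample t test for means, written in Python. It assumes the SciPy library is available; the two samples are hypothetical:

# Minimal sketch: two-sample t test ("is there a real difference between
# _____ and _____?"). H0: equal means; Ha: the means differ.
from scipy import stats

line_a = [4.1, 3.9, 4.3, 4.0, 4.2, 4.4, 3.8, 4.1]
line_b = [4.6, 4.4, 4.7, 4.5, 4.3, 4.8, 4.6, 4.5]

t_stat, p_value = stats.ttest_ind(line_a, line_b, equal_var=False)

alpha = 0.05                   # the alpha (producer's) risk we accept
print(f"t = {t_stat:.3f}, P value = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")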

INTRODUCTION TO DESIGN OF EXPERIMENTS

What is experimental design? Organizing the way in which one changes one or more input variables (Xs) to see if any of them, or any combination of them, affects the output (Y) in a significant way. A well-designed experiment eliminates the effect of all possible Xs except the ones that you changed. Typically, if the output variable changes significantly, it can be tied directly to the input X variable that was changed and not to some other X variable that was not changed. The real power of experimentation is that sometimes we get lucky and find a combination of two or more Xs that makes the Y variable perform even better!
• Benefits of DOE
• Why not one factor at a time?
• Types of experiments
• Classes of DOE
• Terms used in DOE
• Main effects and interactions
• Contrast
• Yates standard order
• Run order for a DOE
• Strategy of experimentation
• Barriers to effective experimentation
Focus on the X-Y relationship
Trial and error
One factor at a time
Full factorial experiment
Things to watch for in experiments
Randomization
• Repetition and replication
• 2-K factorials
• Advantages of 2-K factorials
• Standard order of 2-K designs


• Interactions
• Interaction effects
• Interactions for the three-way design
• Main effects
• Cube plots
• Types of 2k factorials
Center points and blocking
• Adding center points
• In two-level designs, there is a risk of missing a curvilinear relationship. Inclusion of center points is an efficient way to test for curvature without adding a large number of extra runs.
• Confounding and blocking
• Residuals analysis
• Residuals
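A minimal Python sketch pulling together three of the ideas above: Yates standard order, added center points, and a randomized run order. The factor names are hypothetical:

# Minimal sketch: a 2^3 full factorial in Yates standard order, plus center
# points, shuffled into a randomized run order.
import itertools
import random

factors = ["temp", "pressure", "time"]               # hypothetical names

# itertools.product varies the last element fastest; reversing each tuple
# makes the first factor alternate fastest, i.e., Yates standard order.
design = [dict(zip(factors, levels[::-1]))
          for levels in itertools.product([-1, 1], repeat=len(factors))]

runs = design + [{f: 0 for f in factors} for _ in range(3)]   # 3 center points

random.seed(1)                 # fixed seed so the run sheet is reproducible
random.shuffle(runs)           # randomization guards against lurking variables

for i, setting in enumerate(runs, start=1):
    print(f"run {i}: {setting}")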

SCREENING DESIGNS
These designs are a powerful tool for analyzing multiple factors and interactions. The designs combine the flexibility of reduced run size without compromising information. One word of caution: do not reduce the experiment too far. By doing fewer runs, you may not obtain the desired level of information.
Factorial experiments — the success of fractional factorials is based on the assumption that main effects and lower-order interactions are generally the key factors. Full factorials can usually be derived from a fractional factorial experiment once nonsignificant factors are eliminated.
• Fractional factorials
• Design resolution
• Choosing a design
• Notation
• Alias structure
Planning experiments
• Team involvement
• Maximize prior knowledge
• Identify measurable objectives
• FMEA on all steps of the execution
• Replication and repetition consideration
• Verify and validate data collection and analysis procedures
Steps to experimentation
• Define the problem. What is the objective of the experiment?
• Establish the objective.
• Select the response variables.
• Select the independent variables.
• Choose the variable levels.
• Select the experimental design.
• Sequential experimentation


• Select experimental design
• Screening/fractional factorial
• Full factorial/partial
• Consider the sample plan: how many runs can we afford? (The more runs or samples, the better the understanding of and confidence in the result.) How are we controlling noise and the controllable variables that we know about?
• What is our plan for randomization?
• Walk through the experiment
• Collect data
• Analyze data
• Draw statistical conclusions
• Replicate results
• Draw practical solutions
Implement solutions
• Understand the current process
• Is the output qualitative or quantitative?
• (A vs. B) or (50 vs. 100)?
• What is the baseline capability?
• Is your process under statistical control?
• Is the measurement system adequate?
• Factor selection
• Which factors (KPIVs) do we include?
• Where should they come from?
• Process map
• Cause-and-effects matrix
• FMEA
• Multi-vari study results
• Brainstorming (fishbone)
• Process knowledge
• Operator experience
• Customer/supplier input
• Level selection. After the test factors are identified, we must set the levels of those factors we want to test. What is the right level differentiation to obtain the information needed? If the levels are too wide or too narrow, nothing will be gained. Level guideline: 20% above and below the specs; if no specs, +/– 3 sigma from the mean.
• What will the experiment cost?
• Are all of the necessary players involved (informed)?
• How long will it take?
• How are we going to analyze the data?
• Have we planned a pilot run and walked through the process?
• Has the necessary paperwork been completed?
• Make sure the MSA has been validated.


• Budget and timelines. (The goal in DOE: to find a design that will produce a specific desired amount of information at a minimum cost to the company.)
Four phases of designed experiments:
• Planning: careful planning involves clearly defining the problem of interest, the objective of the experiment, and the environment in which the experiment will be carried out.
• Screening: initial experiments aim to reduce the number of potentially influential variables to a vital few. Screening allows us to focus process improvement efforts on the most important variables. Screening designs include two-level full and fractional factorials, general full factorials, and Plackett–Burman designs.
• Optimization: after we have identified the vital few variables by screening, we need to determine the best values in order to optimize a process; for example, we may want to maximize a yield or reduce product variability. Optimization designs include full factorial designs (two-level and general) and response surface designs (central composite and Box–Behnken).
• Verification: we can perform a follow-up experiment at the predicted best process conditions to confirm optimization results.
Fractional factorial designs
Purpose: to determine which main effects (factors) are important.
Key features:
1. Know which resolution you are running: always two-level factorials.
2. Useful to estimate mostly main effects (not interactions).
3. They can be built up to a higher-order blocked factorial design.
4. Limited to 15 runs.
5. Don't expect more than what the design will provide.
Recommendation: use these designs when you need to narrow down the list of important factors. They are easy to interpret and cost effective.
Screening designs (full or fractional)
Purpose: to investigate how seven factors or fewer interact to drive a process.
Key features:
1. Two-level factorials, resolution IV, V, or higher.
2. General full factorials.
3. These allow estimation of at least two-way interactions.
4. They can model weak curvature through center points and can be built up into a response surface (blocked central composite) design to model more pronounced curvature.
5. They provide direction for further experimentation in search of an optimal solution.
Recommendation: this is the design most often used in industry. They are good, low-cost, all-purpose designs.
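The half-fraction construction used by these designs can be sketched in a few lines of Python. This is an illustration only, with the defining relation I = ABC chosen for the example; a real project would use a statistics package to generate and resolve designs:

# Minimal sketch: a 2^(3-1) half fraction built by keeping the runs where the
# highest-order interaction ABC equals +1 (defining relation I = ABC).
import itertools

full = list(itertools.product([-1, 1], repeat=3))    # all 8 runs of A, B, C
half = [run for run in full if run[0] * run[1] * run[2] == 1]

print("2^(3-1) fraction (resolution III):")
for run in half:
    print(run)
# With I = ABC, each main effect is aliased with a two-way interaction:
# A with BC, B with AC, and C with AB.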


Response surface designs
Purpose: to model responses that exhibit quadratic (curvilinear) relationships with the factors.
Key features:
1. Recommended for nonsequential experiments. (Only one shot!)
2. Use when extreme combinations cannot be run.
3. Excellent for optimizing, since curvature is typically seen around the optimum.
4. Designs are costlier (more runs). Factors of interest should be low in number.
5. These can be used to minimize variation.
6. These can be used to put the process on target, maximize, or minimize a measure of interest.
How do I sustain the improvement? Tools to assure the process remains in control
Keys to success
• Early involvement of all work cell/department members
• Update all affected parties (including supervisors/managers) regularly
• Get buy-in — no surprises!
• Poka yoke the process
• Establish frequent measurement
• Establish procedures for the new/updated process
• Train everyone — assign responsibilities
• Monitor the results
How do I transition my project?
• Assure your project is complete enough to transition.
• No loose ends — a plan (project action plan) for everything not finalized
• Start early in your project to plan for transitioning.
• Identify team members at the start of the project.
• Remind them they are representatives of a larger group.
• Communicate regularly with people in the impacted area.
• Display your project in the impacted area during all phases. Remember, no surprises.
• Hold regular updates with the impacted area, assuring their concerns are considered by your team.
• When possible, get others involved to help.
• Data collection.
• Idea generation (brainstorming events).
• What is a project action plan? It is a documented communication tool (contract) which allows you to identify:
• What is left to do to complete your project.
• Who is responsible to carry out each task.
• When they should have it complete.
• How it should be accomplished.


Do I have to have one? Only if there are unfinished tasks in your improvement process that you expect others to carry out after the transition. (The tasks must be negotiated and agreed to.)
Who will monitor the plan for implementation/completion? Both you and the responsible supervisor/manager who assumes ownership.
Who has ultimate responsibility? The owner of each task and the responsible supervisor/manager.
Product changes
• Revise drawings by submitting EARs
• Work with process, test, and product engineers
Process changes
• Physically change the process flow (5S the project area). To ensure your gains are sustainable you must start with a firm foundation. 5S standards are the foundation that supports all the phases of Six Sigma manufacturing. The foundation of a production system is a CLEAN and SAFE work environment. Its strength is dependent upon employee commitment to maintaining it.
• Develop visual indicators. Create a visual factory.
• Establish/buy new equipment to aid assembly/test.
• Poka yoke wherever possible, including forms.
• Procedures (standardized work instructions).
• Develop new procedures or revise existing ones.
• Notify quality assurance of the new procedure to incorporate in internal audits.
• Provide QA a copy of standardized work instructions.
• Measurements (visual indicators).
• Build into the process the posting of key metric updates.
• Make it part of someone's regular job to do timely evaluations.
• Make it someone's job to review the metric and take action when needed.
• Training — train everyone in the new process. (Don't leave until there is full understanding.)

CONTROL PLANS
The control plan provides a written summary description of the system for controlling parts and processes; it is used to minimize process and product variation and describes the actions that are required at each phase of the process, including receiving, in-process, final assembly, and shipping, to ensure that all process outputs will be in a state of control. A control plan for operational actions such as ordering, order taking, invoicing, billing, etc. can also be utilized for transactional operations. The control plan does not replace the information contained in detailed operator instructions. Since processes are expected to be continually updated and improved, the control plan is a living document, reflecting the current methods of control and measurement systems used.


• Development and implementation
Developing a control plan
• A basic understanding of the process must be obtained. Establish a multifunction team to gather and utilize appropriate available information, such as:
• Process flow diagram
• Failure mode and effects analysis (process and design)
• Special characteristics (critical and significant characteristics)
• Control plans/lessons learned from similar parts or processes
• Team's knowledge of the process
• Technical documentation (design/process notices, MPIs, PM)
• Validation plan results (DVP, EVP, PVP)
• Optimization methods (QFD/DOE)
• Develop the process flow diagram — map the process.
• Develop the process FMEA.
• Examine each process operation for potential problems and failures.
• Focus on characteristics that are important to the customer and to product safety.
• A PFMEA is required by most organizations for all new product processes. PFMEAs must eventually be developed for all existing product lines. If a PFMEA does not exist, then customer concerns/complaints must be considered when developing the control plan.
• Develop a preliminary manufacturing control process (MCP), utilizing a standardized format. This format satisfies ISO 9000, ISO/TS 16949, and QS-9000 requirements (and is the REQUIRED FORMAT!).
• Conduct a multifunctional team review for revision/consensus of the MCP.
• Install the MCP with change control approval. This will assign and display a document number, version number, issue date, and owner.
• Implement the MCP. Update/revise manufacturing process instructions, control charts, gauge systems, etc. as required from the new control plan.
• Benefits of developing and implementing CPs — improves overall quality by reducing the chances of quality excursions. Reduces shrinkage or defects in MFG/transaction processes by keeping processes centered. The data also aids in timely troubleshooting of MFG/transaction processes and serves as a communication vehicle for changes to CTQ characteristics, control methods, etc.
Quality system overview
Control tools
Continuous SPC tools
The foundation of SPC
Statistical process control


Types of control charts — variable and attribute
• Basic components of a control chart
• Control limits
• What are control limits?
• What is meant by "in control" and "out of control"?
• Link between control limits, hypothesis testing, and specifications
Variable control charts
• Individual X vs. EWMA chart
• X-bar and R charts
• X-bar and s charts
• Individual and moving range
• EWMA chart
• Control chart — interpretation
• Control chart — nonnormal distribution
Attribute control charts
• p charts
• np chart
• c chart
• u chart
• Attribute chart interpretation
Alternative methods of control
• Precontrol
• Zone control charting
Process capability estimate
Poka yoke — understand the use of poka yoke strategies in completing a black belt project. Know how to design and implement a poka yoke strategy.
• What is poka yoke/error or mistake-proofing?
• Mistake-proofing manufacturing processes
• Mistake-proofing transactional processes
• Types of mistake-proofing
• Errors vs. defects
• Types of human errors
• "Red flag" conditions
• Control/feedback logic
• Guidelines for mistake-proofing
• Mistake-proofing strategies
• Advantages of mistake-proofing
Maintenance — a reliability function
• Maintenance via Six Sigma is all-encompassing — transactional, information systems, production equipment, etc. The maintenance function should be linked to customer CTQs. It should address all six Ms: machines, manpower, methods, materials, mother nature, and measurements. (Make sure you differentiate these from the classical nonmanufacturing items of policies, procedures, place, environment, measurement, and people.) Maintenance can and should be a reliability function, not just a repair function.


• Maintenance maximizes output, minimizes cost, and assures continued operation: customer satisfaction.
Maintenance — integrated strategy
• World-class key performance indicators
• Predictive maintenance
• Benefits of developing and implementing PMs
• Major elements of preventive maintenance
Realistic tolerancing — a simple graphical method for establishing optimum levels and appropriate tolerances for inputs. Once it is determined that a continuous output depends linearly on a continuous input, the output specification is used to create an input specification. Scatter plots and fitted line plots demonstrate association of inputs and outputs, not necessarily cause and effect. A realistic tolerancing method (a minimal line-fitting sketch appears at the end of this section):
Step 1: Identify the KPOV of interest and note its specification.
Step 2: Select the KPIV of interest. Define a range of values for the KPIV that will likely optimize the KPOV.
Step 3: Run 30 samples over the range of the KPIV and record the output values.
Step 4: Plot the results with the KPIV on the x-axis and the output on the y-axis. If the plot has a tilt with little vertical scatter, a relation exists; proceed to Step 5. If there is no tilt, the KPIV has no relation to the response variable.
Step 5: Determine the target value and tolerance of the KPIV.
• Draw a best-fit line through the data.
• Eliminate the data point furthest from the best-fit line.
• Draw a parallel line through the next furthest point from the best-fit line. Draw a second parallel line equidistant from the best-fit line on the opposite side. The vertical distance between these two parallel lines represents 95% of the total effect of all other factors on the output other than the KPIV studied here. If specifications exist for the response variable, draw lines from those values on the y-axis to intersect the upper and lower confidence lines.
• Drop two lines from these intersection points to the x-axis. The distance between where these intersect the x-axis represents the maximum tolerance permitted for the input variable.
Step 6: Compare these values against the existing operating levels and implement necessary changes to the SOP. Document changes via the FMEA and control plan.
Gauge and measurement systems
• Management plan
• Long-term gauge control


• Long-term gauge control is the management of the basis of our understanding of our process. Remember, the quality of our process cannot be understood and controlled without understanding the quality of our measurements.
• Why do we need a long-term gauge plan? Long-term project control is dependent on measurement and analysis. The measurement system needs to be under control.
• Who is responsible for the long-term gauge plan? Those responsible for the process variables of interest. Gauge management incorporates the plan into the local quality system and ensures that future owners are trained to implement it.
• What is in a long-term gauge plan?
1. Initial baseline analysis
2. Ownership details
3. Calibration control (chart?) with instructions
4. Handling and storage requirements
5. Maintenance requirements — procedures and log
6. Spare parts requirements
7. ID/tracking system
8. Ongoing MSA requirements (product/product changes, gauge changes, operator changes, etc.)
9. Thorough documentation
• What do you need to do to develop your long-term gauge plan? Consider your gauge:
• What was your initial baseline (GR&R) data? Is this gauge still appropriate?
• What is the amount of bias in your gauge? Linearity? How will you control this bias?
• Who "owns" and maintains the gauge?
• Who calibrates your gauge? How frequently?
• Which gauge would you use?
• What are the handling and storage requirements for the gauge?
• Who needs to maintain the gauge? What does this mean?
• How do you maintain the gauge? What are the spare parts requirements?
• How frequently and when should MSA be performed? By whom?
• Which one should you use and when?
• What documentation is required for the long-term gauge plan?
• How will we manage this documentation?
• What issues/roadblocks do I see in developing the long-term gauge plan?
• Implementing gauge plans
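As referenced in the realistic tolerancing steps above, the back-solving idea can be sketched numerically. This simplified Python sketch inverts only the best-fit line and omits the graphical parallel-band step; all data and specification limits are hypothetical:

# Minimal sketch: fit a line to KPIV/KPOV data, then map the output (KPOV)
# specification back to an input (KPIV) operating window.
x = [10, 12, 14, 16, 18, 20, 22, 24, 26, 28]             # KPIV settings
y = [5.1, 5.6, 6.2, 6.5, 7.1, 7.4, 8.0, 8.3, 8.9, 9.2]   # observed KPOV

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))               # least-squares slope
b0 = ybar - b1 * xbar                                    # intercept

lsl_y, usl_y = 6.0, 9.0                                  # output specification
x_low, x_high = (lsl_y - b0) / b1, (usl_y - b0) / b1     # back-solved window
# (Assumes a positive slope; swap the bounds if b1 is negative.)

print(f"fit: y = {b0:.3f} + {b1:.3f}x")
print(f"implied input tolerance: {x_low:.2f} to {x_high:.2f}")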


SIX SIGMA GREEN BELT TRAINING — TECHNICAL
Introduction
Agenda
Ground rules
Exploring our values
Objectives
Definition (it must be emphasized that we will be applying the Six Sigma methodology rather than following the "pack")
Six Sigma goal
Defect reduction
Yield improvement
Improved customer satisfaction and higher return on investment
Comparison between three sigma and Six Sigma quality

SHORT HISTORICAL BACKGROUND
The business case for implementing Six Sigma (After the definition, this item is very important. It must be understood by all before moving on to a new topic. It is the reason why Six Sigma is going to be implemented in your organization. Therefore, not only must it be understood, but in addition it must make sense and be believable. Sharing the executive committee members list with everyone is one of the ways to make individuals understand the importance of the implementation process. Another way is to provide some background about the black belts as individuals and their commitment to Six Sigma and to identify specific projects that plague the organization, either genuine financial problems or issues perceived as problems by customers.)
Overview of the big picture
Deployment structure
Executive leadership (part-time basis): Executives are supposed to be the drivers of the Six Sigma process in directions that meet key business goals and address key customer satisfaction concerns.
Master black belt (full-time basis): Master black belts are the experts of Six Sigma tools and methodologies. They are responsible for training and coaching black belts. Master black belts, or shoguns as we call them, may also be responsible for leading large projects on their own.
Project champions (part-time basis): Project champions are accountable for the performance of black belts and the results of Six Sigma projects in their area. They are the conduit between the executive leadership and the black belt. They are responsible for eliminating bottlenecks and conflicts as they pertain to projects, especially in projects with cross-functional responsibilities.


Black belts (full-time basis): Black belts are responsible for leading and teaching Six Sigma processes within the company. They are also responsible for applying Six Sigma tools to complete a predetermined number of projects worth at least $250,000 each. (Projects are commonly worth $400,000–$600,000.) It is expected that the result will be a breakthrough improvement with a magnitude of 100X.
Green belts (part-time basis): Green belts are expected to help black belts with expediting and completing Six Sigma projects. They may take the lead in small projects of their own. They should also look for ways to apply Six Sigma problem-solving methods within their work area.
Rollout strategy (emphasize the importance of projects and measurement)
Training requirements
Black belts
Green belts
Project selection defines the project charter. This will provide the appropriate documentation for communicating progress and direction not only to the rest of the team but also to the champion.
Identify the customer
The Y = f(X) relationship. The Y is the output and the Xs are the inputs. Identify the Y and determine the Xs. It is imperative to understand that most often a single Y may be influenced by more than one X. Therefore, we may have Y = f(X1, X2, …, Xn). However, that is not the end. A single X may itself cascade into a further level, such that for X1 we may have X1 = f(x1, x2, …, xn). This is called cascading.
Apply the project selection checklist. To ensure the selected issue will make a good Six Sigma project, a checklist can be applied to verify the project's potential. Simple criteria for selection are the following six questions:
• Does the project have recurring events?
• Is the scope of the project narrow enough?
• Do metrics exist? Can measurements be established in an appropriate amount of time?
• Do you have control of the process?
• Does the project improve customer satisfaction?
• Does the project improve the financial position of the company?
If the answer to all of these questions is yes, then the project is an excellent candidate.
Develop a high-level problem statement. This is a high-level description of the issue to be addressed by the green belt or black belt. The problem statement will be the starting point for the application of Six Sigma methodology.


THE DMAIC PROCESS

The model: a structured methodology for executing Six Sigma project activities. Point out here that the model is not linear in nature. Quite often, teams may find themselves working in multiple phases at once to ensure thoroughness.
Define: the purpose is to refine the project team's understanding of the problem to be addressed. It is the foundation for the success of both the project and Six Sigma.
Measure: the purpose is to establish techniques for collecting data about current performance that highlight project opportunities and provide a structure for monitoring subsequent improvements.
Analyze: the purpose is to allow the team to further target improvement opportunities by taking a closer look at the data.
Improve: the purpose is to generate ideas about ways to improve the process; design, pilot, and implement improvements; and validate them.
Control: the purpose is to institutionalize process/product improvements and monitor ongoing performance.

THE DMAIC MODEL IN DETAIL

Define

The define phase begins with a definition of the problem and ends with a completed project charter.
Define the problem: green belts may be required to formulate a high-level problem statement. Key points are:
• Identify the problem. The definition must be such that the improvement is identifiable.
• Understand the operational definitions.
• Understand the potential metrics of the situation.
• Identify, through a rough trade-off analysis, the positives and negatives of current performance and their relationship to the customer.
• Recognize that the identification process may itself be iterative, which means that there may be revisions in the future.
Identify the customer. Determine who the "real" customer is. It may be an internal, external, or ultimate customer. The focal point here is to make sure that the customer — however defined — will benefit from solving this particular problem.
Identify the critical to quality characteristics (CTQs). The focus here is to ensure that there is a link between the characteristics identified and customer satisfaction. CTQs help focus the team on issues important to the customer. Here, the operational definitions will make a difference if they do not reflect accurate descriptions of what the customer really needs or wants. Also at this stage, a preliminary measurement thought process begins, to identify whether or not a consistent interpretation and measurement can be ensured. Specificity is the key point.


• Make sure that the team identifies what matters to the customer.
• Ask whether or not the voice of the customer (VOC) has been accounted for, through Kano modeling, quality function deployment (QFD), benchmarking, market analysis, internal intelligence, etc.
• Prioritize the CTQs. Not all CTQs are equally important. They must be prioritized based on the following:
• Critical priority — items that will cause customer dissatisfaction unless these characteristics are functional.
• Performance — items that improve performance.
• Delighter — items that may not be crucial to the process improvement but will delight the customer.
Define (map) the process. A preliminary flowchart of the process is highly recommended here.
• Map the supplier, input, process, output, customer (SIPOC) model.
• Understand the meaning and contribution of each SIPOC element.
• Differentiate between the "is" and the "could be" processes.
• Identify the essential elements and the merely desirable elements.
• Identify the "hidden factory" — the hidden factory is the work that has been done but not counted.
Refine the project scope. Further specify project concerns; develop a micro problem statement based on the new SIPOC model of the process. Think of suspected sources of variation by using:
• Brainstorming
• 5 whys
• Cause-and-effect diagram and matrix
Update the project charter. This is the deliverable of the define stage. Additions and modifications are appropriate at this stage based on ALL information gained from the define stage. A project charter should include:
• The statement of the problem — concise, clear, and measurable
• The project scope
• The business case
• The project plan and milestones
• The goal and the expected results
• Roles and responsibilities

Measure

This stage establishes techniques for collecting data about the current performance of the process identified in the define stage. It begins with identifying the measurement and variation in the process and ends with a capability analysis. Measurement is very important and has been recognized as such for a very long time. It is interesting to review the comments of Fourier (Adler, 1982, p. 537). In developing the theory of heat, he enumerates five quantities that, in order to be numerically expressed, require five different kinds of units, "namely, the unit of length, the unit of time, that of temperature, that of weight, and finally the unit which serves to measure quantities of heat." To which he adds the remark that "every undetermined magnitude or constant


has one dimension proper to itself, and that the terms of one and the same equation could not be compared, if they had not the same exponent of dimension."
Identify measurement and variation. A measure describes the quantity, capacity, or performance of a product, process, or service based on observable data. However, that measure may differ depending on where we measure, who is doing the measurement, and what kind of measuring instrument is used.
• Identify the sources of variation and their impact on performance.
• Identify different measures and the criteria for establishing good process measures.
• Identify the different kinds of data that are available and their important contributions.
• Explain why one should measure.
• Concept of variation
• Common causes
• Special causes
• Shift and drift
• Sources of variation
• Machines
• Material
• Method
• Measurement
• Mother nature
• People
• Measurement usage
• Measurement of inputs
• Measurement of process
• Measurement of outputs
Complete the appropriate items of the FMEA.
Determine the data type. The type is determined by what is measured, that is, attribute data or variable data.
Develop a data collection plan. Consider the following:
• Purpose of data
• Collection plan
• Stratification plan
• Checksheets
• Sampling
• Sample size
Perform measurement system analysis. Measurement system analysis (MSA) is a quantitative evaluation of the tools and process used in making data observations.
• Types of MSA
• Operational definitions — used generally for nonmanufacturing applications
• Walking the process — used for nonmanufacturing applications


• Gauge R&R — used for variable or attribute data • Repeatability • Reproducibility • Using the control chart method • Using the ANOVA method • Understand the concept of component variation in relationship to the gauge R&R MSA fails. If the MSA fails at this stage, DO NOT collect more data. YOU MUST fix the problem before proceeding. Perform data collection. This depends on the collection plan that you have defined. The better the collection plan, the better the data collected. Data collection is a process by which we accumulate enough information to identify the potential cause of the problem. Perform capability analysis. Capability analysis is the study of how well a process is performing in meeting the expectations of customers (CTQs). • Understand the difference between short- and long-term capability. • Calculate capability for attribute and variable data. • Calculate capability for normally and nonnormally distributed data. • Calculate yield in a process. • Calculate capability using a software program. Analyze In this stage, the focus is to target improvement opportunities by taking a closer look at the data. Review analysis tools and apply the knowledge gained. The idea here is to help you identify “gaps” in the data-collection plan that require additional information. Also, at this point we may find that a solution requires further analysis before implementation. Tools usually reviewed for this purpose are: Pareto chart, run chart, box plot, histogram, scatter plot, and run charts. Identify sources of variation. The purpose here is to target root causes that project teams validate and verify with observation, data, or experiment. Improve The idea here is to allow the project team to develop, implement, and validate improvement alternatives that will achieve desired performance levels as defined by CTQs. Generate improvement alternatives. The idea here is to come up with alternatives to test as improvements to the problem’s root cause. Two techniques are usually used here: a) brainstorming and b) DOE. • Criteria for improvement — quality, time, cost • Refining improvement criteria • Evaluating improvements © 2003 by CRC Press LLC


Pilot. A pilot is a trial implementation of a proposed improvement conducted on a small scale under close observation.
Validate the improvement. To validate improvements, use data collected from the pilot and calculate the sigma value. Compare this value to the value that you calculated in the analyze stage for capability (see the sketch below).
Create the "should be" process map. This process map will provide a tool to explain the improvement to others and to guide the implementation efforts. Since this is the new and revised process, it should be different from the original process map of the define stage.
Update the FMEA. Using information from the "should be" process, complete the corresponding portion of the FMEA.
Perform a cost–benefit analysis. A cost–benefit analysis is a structured process for determining the trade-off between implementation costs and anticipated benefits of potential improvements.

Control

The final link of the DMAIC model is the control stage. Its purpose is to institutionalize process/product improvements and monitor ongoing performance in order to sustain the gains achieved in the improve stage.
• Develop a control strategy
• Prevention vs. detection
• Mistake-proofing
• Control charts
• Understand the difference between control limits and specifications.
• Understand the concept of long-term MSA.
Develop a control plan. A control plan provides a written summary description of the system for controlling parts and processes. In essence, the control plan is a reflection of the decisions made in the development of the control strategy. Furthermore, the control plan is used by the process owner as a reaction plan in case something goes wrong with the process.
Update standard operating procedures (SOPs) and the training plan. The final step of the control stage is to update all the relevant documentation, including the standard operating procedures. This update should include all revised process steps and control measures.
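The validate step above calls for calculating a sigma value from pilot data and comparing it to the baseline. As a minimal sketch (the defect counts are invented, and the conventional 1.5-sigma shift is assumed), the conversion from defect data to a sigma level might look like this:

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Convert long-term DPMO to a short-term sigma level using the
    conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Invented counts: the baseline from the analyze stage vs. the pilot run.
baseline = dpmo(defects=215, units=5_000, opportunities_per_unit=4)
pilot = dpmo(defects=40, units=5_000, opportunities_per_unit=4)
print(round(sigma_level(baseline), 2), round(sigma_level(pilot), 2))
```

A higher sigma level for the pilot than for the baseline is the numerical evidence that the improvement is real.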

SIX SIGMA GREEN BELT TRAINING — MANUFACTURING

Introductions:
Name
Title or position
Organization
Background in quality improvement programs, statistics, etc.
Hobbies/personal information


Agenda
Ground rules
Exploring our values
Six Sigma overview
• Six Sigma focus
• Delighting the customer through flawless execution
• Rapid breakthrough improvement
• Advanced breakthrough tools that work
• Positive and deep culture change
• Real financial results that impact the bottom line
What is Six Sigma?
• Performance target
• Practical meaning
• Value
• A problem-solving methodology
Vision
Philosophy
Aggressive goal — metric (standard of measurement)
• Benchmark
• Method — how are we going to get there?
• Customer focus
• Breakthrough improvement
• Continual improvement
• People involvement
Bottom line: Six Sigma defines the goals of the business
• Defines performance metrics that tie to the business goals by identifying projects and using performance metrics that will yield clear business results. Applies advanced quality and statistical tools to achieve breakthrough financial performance.
The Six Sigma strategy
• Which business function needs it?
• Is your leadership on board?
• Fundamentals of leadership
• Challenge the process
• Inspire a shared vision
• Enable others to act
• Model the way
• Encourage the heart
• Six Sigma is a catalyst for leaders
The breakthrough phases
The DMAIC model — high-level overview
This drives breakthrough improvement
The foundation of the Six Sigma tools
• Cost of poor quality
• What is cost of poor quality? In addition to the direct costs associated with finding and fixing defects, cost of poor quality also includes:




• The hidden cost of failing to meet customer expectations the first time
• The hidden opportunity for increased efficiency
• The hidden potential for higher profits
• The hidden loss in market share
• The hidden increase in production cycle time
• The hidden labor associated with ordering replacement material
• The hidden costs associated with disposing of defects
Getting there through inspection
• Defects and the hidden factory
• Rolled-throughput yield vs. first-time yield
What causes defects? Excess variation due to a) manufacturing processes, b) supplier (incoming) material variation, and c) unreasonably tight specifications (tighter than the customer requires).
Dissecting process capability — the premise of Six Sigma: sources of variation can be a) identified and b) quantified. Therefore, they can be controlled or eliminated.
How do we improve capability? Six Sigma: metrics and continual improvement
• Six Sigma is characterized by a) defining critical business metrics, b) tracking them, and c) improving them using proactive process improvement. Six Sigma's primary metric is defects per unit, which is directly related to rolled-throughput yield (Yrt):
• Yrt = e^(−DPU)
• Cost of poor quality and cycle time (throughput) are two other metrics
• Continual improvement
• Calculating the product sigma level
Metrics
• Defects per unit (DPU) drives plant-wide improvement
• Defects per million opportunities (DPMO) allows for comparison of dissimilar products
• Sigma level allows for benchmarking within and across companies
• Tracking trends in metrics
• Harvesting the fruit of Six Sigma
• PPM conversion chart
Translating needs into requirements
Deployment success: if and ONLY if Six Sigma
• Directly affects quality, cost, cycle time, and financial results
• Focuses on the customer and critical metrics
• Directly attacks variation, defects, and the hidden factory
• Ensures a predictable factory
• Establishes the black belt execution strategy with the support of management
Roles and responsibilities


Executive management:
• Will set meaningful goals and objectives for the corporation
• Will drive the implementation of Six Sigma publicly
Champion:
• Will select black belt projects consistent with corporate goals
• Will drive the implementation of Six Sigma through public support and removal of barriers
Master black belt:
• The expert in Six Sigma tools and methodologies
• Responsible for training and coaching black belts. Master black belts, or shoguns as we call them, may also be responsible for leading large projects on their own.
Black belt:
• Ensures that sources of variation in manufacturing and transactional processes are objectively identified, quantified, and controlled or eliminated. How? By using the breakthrough strategy; process performance is sustained through well-developed, documented, and executed process control plans, beginning with defining the goal and identifying the model to use.
• Goal: to achieve improvements in rolled-throughput yield, cost of poor quality, and capacity/productivity
• To deliver successful projects using the breakthrough strategy
• To train and mentor the local organization on Six Sigma
• The model
• Kano model
• QFD — house of quality
• D-M-A-I-C
Green belt:
• Will deliver successful localized projects using the breakthrough strategy.
Six Sigma instructor:
• Will make sure every black belt candidate is certified in the understanding, usage, and application of Six Sigma tools.
Describe the BB execution strategy
• To overview the steps
• To overview the tools
• To overview the deliverables
• To discuss the role of the black belt

PHASES OF PROCESS IMPROVEMENT

The Define Phase

Who is the customer?
• What does the customer want?
• How can the organization benefit from fixing a problem?


A simple QFD (quality function deployment) tool is used to emphasize the importance of understanding customer requirements, the CTs (critical tos) — CTCost, CTDelivery, CTQuality. The tool relates the Xs and Ys (customer requirements) using elements documented in the process map and existing process expertise. Result: a Pareto of Xs that are used as input into the FMEA and control plans. These are the CTPs, critical to the process — anything that we can control or modify about our process that will help us achieve our objectives.

The Measurement Phase

Establish the performance baseline. The measure phase — IMPORTANT!!! A well-defined project results in a successful project. Therefore, the problem statement, objective, and improvement metric need to be aligned. If the problem statement identifies defects as the issue, then the objective is to reduce defects, and the metric to track the objective is "defects." This holds true for any problem statement, objective, and metric (% defects, overtime, RTY, etc.).
• Primary metric — a green belt needs to be focused; if other metrics are identified that impact the results, identify these as secondary metrics, i.e., reducing defects is the primary improvement metric, but we do not want to reduce line speed (line speed is the secondary metric).
• Project benefits — do not confuse projected project benefits with your objective. Make sure you separate these two items. There are times when you may achieve your objective yet not see the projected benefits. This is because we cannot control all issues. We need to tackle them in a methodical order.
Purpose of the measurement phase
• Define the project scope, problem statement, objective, and metric.
• Document the existing process (using a process map, C&E matrix, and an FMEA).
• Identify key output variables (Ys) and key input variables (Xs).
• Establish a data-collection system for your Xs and Ys if one does not exist.
• Evaluate the measurement system for each key output variable.
• Establish baseline capability for key output variables (potential and overall).
Establish a data-collection system
• Determine whether you have a method by which you can effectively and accurately collect data on your Xs and Ys in a timely manner. If this is not in place, you will need to implement a system. Without a system in place, you will not be able to determine whether you are making any improvements in your project.


• Establish this system such that you can historically record the data you are collecting.
• This information should be recorded in a database that can be readily accessed.
• The data should be aligned in the database in such a manner that for each output (Y) recorded, the operating conditions (Xs) are identified. This becomes important for future reference.
• This data-collection system is absolutely necessary for the control phase of your project. Make sure all those who are collecting data realize its importance.

Measurement Systems Analysis

Purpose: to determine whether the measurement system, defined as the gauge and operators, can be used to precisely measure the characteristic in question. We are not evaluating part variability, but gauge and operator capability.
• Guidelines
• Determines the measurement capabilities for Ys
• Needs to be completed before assessing capability of Ys
• These studies are called gauge repeatability and reproducibility (GR&R) studies, measurement systems analysis (MSA), or measurement systems evaluation (MSE)
• Indices:
• Precision-to-tolerance (P/T) ratio = proportion of the specification taken up by measurement error. Ten percent is desirable.
• Precision-to-total-variation (P/TV) ratio (%R&R) = proportion of the total variability taken up by measurement error. Thirty percent is marginal.
Capability studies: used to establish the proportion of the operating window taken up by the natural variation of the process. Short-term (potential) and long-term (overall) estimates of capability indices are taught. (The reader may want to review Volume I or Volume IV for the discussion on long- and short-term capability.)
• Indices used assuming the process is centered: Cp, Pp, Zst
• Indices used to evaluate a shifted process: Cpk, Ppk, Zlt
Measure: potential project deliverables
• Project definition
• Problem description
• Project metrics
• Process exploration:
• Process flow diagram
• C&E matrix, PFMEA, fishbones
• Data-collection system


• Measurement system(s) analysis (MSA):
• Attribute/variable gauge studies
• Capability assessment (on each Y)
• Capability (Cpk, Ppk, σ level, DPU, RTY)
• Graphical and statistical tools
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
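Since the capability indices named above (Cp, Cpk, Ppk) recur throughout this training, a small sketch of the calculation may be useful. The measurements and specification limits below are invented, and the sample standard deviation stands in for the short-term estimate that a real study would derive from rational subgroups.

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Potential capability (Cp, assumes a centered process) and
    demonstrated capability (Cpk, distance to the nearest spec limit)."""
    mean = statistics.mean(data)
    s = statistics.stdev(data)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mean, mean - lsl) / (3 * s)
    return cp, cpk

# Hypothetical readings against hypothetical spec limits of 9.0 to 11.0.
sample = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7, 10.0, 10.1]
cp, cpk = cp_cpk(sample, lsl=9.0, usl=11.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```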

THE ANALYSIS PHASE

• Purpose of the analysis phase
• To identify high-risk input variables (Xs) from the failure modes and effects analysis (FMEA).
• To reduce the number of process input variables (Xs) to a manageable number via hypothesis testing and ANOVA techniques.
• To determine the presence of, and potential for eliminating, noise variables via multi-vari studies.
• To plan and document initial improvement activities.
• Failure modes and effects analysis:
• Documents effects of failed key inputs (Xs) on key outputs (Ys).
• Documents potential causes of failed key input variables (Xs).
• Documents existing control methods for preventing or detecting causes.
• Provides prioritization for actions and documents actions taken.
• Can be used as the document to track project progress.
Multi-vari studies: study process inputs and outputs in a passive mode (natural day-to-day variation). Their purpose is:
• To identify and eliminate major noise variables (machine to machine, shift to shift, ambient temperature, humidity, etc.) before moving to the improvement phase.
• To take a first look at major input variables.
• To help select or eliminate variables for study in designed experiments.
• To identify the vital few Xs.
• To determine the governing transformation equation.
Analyze: potential project deliverables
• Project definition
• Problem description
• Project metrics
• Passive process analysis:
• Graphical analysis
• Multi-vari studies


• Hypothesis testing
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review

The Improvement Phase

• DOE (design of experiments) is the backbone of process improvement.
• From the subset of vital few Xs, experiments are designed to actively manipulate the inputs to determine their effect on the outputs (Ys).
• This phase is characterized by a sequence of experiments, each based on the results of the previous study. The intent is to generate improvement alternatives.
• Identify "critical" variables during this process.
• Usually three to six Xs account for most of the variation in the outputs.
• Control and continuous improvement.
• Perform a pilot.
• Validate the improvement.
• Create the "should be" process map.
• Update the FMEA.
• Perform a preliminary cost–benefit analysis.
Improve: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Design of experiments:
• DOE planning sheet
• DOE factorial experiments
• Y = f(x1, x2, x3, …)
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review

The Control Phase

• Implement a long-term control strategy and methods.
• Develop an execution plan.
• Optimize, eliminate, automate, and control the vital few inputs.
• Document and implement the control plan.


• Sustain the gains identified.
• Reestablish and monitor long-term delivered capability.
• Implement continuous improvement efforts. (This is perhaps the key responsibility of all the green belts in the functional area.)
• Establish execution strategy support systems.
• Learn safety requirements.
• Define maintenance plans.
• Establish a system to track special causes.
• Draw up a required and critical spare parts list.
• Create troubleshooting guides.
• Draw up control plans.
• Create SPC charts.
• Buy process monitors.
• Set up inspection points.
• Set up metrology control.
• Set workmanship standards.
• Others?
Control: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Optimization of Ys:
• Monitoring Ys
• Eliminating or controlling Xs
• Sustaining the gains:
• Updated PFMEA
• Process control plan
• Action plan
• Project summary:
• Conclusions
• Issues and barriers
• Final report
• Completed local project review
Additional items of discussion. The following items should be discussed at the appropriate and applicable complexity level of the participants. In some cases, some of the following items may be merely mentioned but not discussed.
Rolled-throughput yield
• The classical perspective of yield
• Simple first-time yield = traditional yield
• Measuring first-pass yield
Normalized yield
• Complexity is a measure of how complicated a particular good or service is. Theoretically, complexity will likely never be quantified in an exacting manner. If we assume that all characteristics are independent


377

and mutually exclusive, we may say that complexity can be reasonably estimated by a simple count. This count is referred to as an opportunity count. In terms of quality, each product or process characteristic represents a unique opportunity to either add or subtract value. Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily similar. Formulas to know Hidden factory Take away — rolled-throughput yield Integrates rework loops Highlights “high-loss steps… Put project emphasis here!
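To make the rolled-throughput yield bullets above concrete, here is a minimal sketch with invented step yields; it shows both the product-of-step-yields view and the Yrt = e^(−DPU) relationship noted earlier.

```python
import math

# First-time yields of four hypothetical process steps.
step_yields = [0.98, 0.95, 0.99, 0.97]

# Rolled-throughput yield: the chance a unit passes every step defect-free.
# This exposes the hidden factory that a final-inspection yield conceals.
rty = math.prod(step_yields)

# Equivalently, via defects per unit: Yrt = e^(-dpu), so dpu = -ln(Yrt).
dpu = -math.log(rty)
print(f"RTY = {rty:.4f}, DPU = {dpu:.4f}, e^-DPU = {math.exp(-dpu):.4f}")
```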

DPMO, counting opportunities:
Non-value-add rules: an opportunity count should never be applied to any operation that does not add value. Transportation and storage of materials provide no opportunities. Deburring operations do not count either. Testing, inspection, gauging, etc. do not count; the product in most cases remains unchanged. An exception: an electrical tester where the tester is also used to program an EPROM. The product was altered and value was added.
Supplied components rules: each supplied part provides one opportunity. Supplied materials, such as machine oil, coolants, etc., do not count as supplied components.
Connections rules: each "attachment" or "connection" counts as one. If a device requires four bolts, there would be an opportunity count of four, one for each bolt connected. A sixty-pin integrated circuit, SMD, soldered to a PCB counts as sixty connections. (Sanity check rule: "Will applying counts in these operations take my business in the direction it is intended to go?" If counting each dimension checked on a CMM inflates the denominator of the equation, adds no value, and increases cycle time when the company objective is to take cost out of the product, then this type of count would be opposed to the company objective. Hence, it would not provide an opportunity. Once you define an opportunity, however, you must institutionalize that definition to maintain consistency. An opportunity definition that is good enough for the original evaluation must also be good enough to be used at the end of the project. In other words, the opportunity count must have the same base; otherwise it is meaningless.)
Introduction to data
• Description and definitions
• What do you want to know?
• Discrete vs. continuous data
• Categories of scale
• Nominal scale — nominal scales of measure are used to classify elements into categories without considering any specific property. Examples of nominal scales include "causes" on fishbone diagrams, yes/no, pass/fail, etc.
• Ordinal scale — ordinal scales of measure are used to order or rank nominal (pass/fail) data based on a specific property. Examples of


ordinal scales include relative height, Pareto charts, customer satisfaction surveys, etc.
• Likert scale (ordinal) — example rating scale ranges: five-point school grading system (A B C D E); seven-point numerical rating (1 2 3 4 5 6 7); verbal scale (excellent, good, average, fair, poor).
• Interval and ratio scales — interval scales of measure are used to express numerical information on a scale with equal distance between categories, but no absolute zero. Examples are: temperature (°F and °C), a dial gauge sitting on top of a gauge block, comparison of differences, etc. Ratio scales of measure are used to express numerical information on a scale with equal distance between categories, but with an absolute zero in the range of measurement. Examples are: a tape measure, a ruler, position vs. time at constant speed, and so on.

Selecting Statistical Techniques

At this point in the discussion the instructor may want to introduce a computer software package to facilitate the discussion of statistical tools. Key items of discussion should be:
• Entering data into the program
• Cutting and pasting
• Generating random numbers
• Importing and exporting data from databases, Excel, ASCII, etc.
• Pull-down menus of the software (for general statistics, graphs, etc.)
• Manipulating and changing data
• Basic statistics and probability distributions
• Calculating z scores and probability
• Calculating capability
• Control charts
Discussion and practice of key statistical techniques and specific tools
Basic statistics
• Mean, median, mode, variance, and standard deviation
• Distributions
• Normal, Z-transformation, normal and nonnormal probability plots, nonnormal, Poisson, binomial, hypergeometric, t-distribution
• Central limit theorem — a very important concept. Emphasis must be placed on this theorem because it is the fundamental concept (backbone) of inferential statistics and the foundation for tools to be learned later in this session. The central limit theorem allows us to assume that the distribution of sample averages will approximate the normal distribution if n is sufficiently high (n > 30 for unknown distributions). The central limit theorem also allows us to assume that the distributions of sample averages of a normal population are themselves normal, regardless of sample size. The SE mean shows that as sample size increases, the standard deviation of the sample means decreases. The standard error will help us calculate confidence intervals. Confidence


intervals (CIs) are derived from the central limit theorem and are used by black belts to quantify a level of certainty or uncertainty regarding a population parameter based on a sample.
• Degrees of freedom
• Standard error
• Confidence
Parametric confidence intervals — parametric confidence intervals assume a t-distribution of sample means and use this to calculate confidence intervals.
Confidence intervals for proportions — confidence intervals can also be constructed for the fraction defective (p), where x = number of defect occurrences, n = sample size, and p = x/n = proportion defective in the sample. For cases in which the number defective (x) is at least 5 and the total number of samples (n) is at least 30, the normal distribution approximation can be used as a shortcut. For other cases, the binomial tables are needed to construct this confidence interval. (A minimal sketch of the normal-approximation interval appears later in this section.)
• Accuracy and precision
• Defects per million
• Population vs. sample
• Sampling distribution of the mean
• Concept of variation
• Additive property of variances
• Attribute or variable
Types of data — variable and attribute
• Rational subgroups
• Data-collection plan — your data-collection plan and its execution will make or break your entire project!
Data-collection plan — ask yourself the following questions:
• What do you want to know about the process?
• What are the likely causes of variation in the process (Xs)?
• Are there cycles in the process?
• How long do you need to collect data to capture a true picture?
• Who will be collecting the data?
• How will you test your measurement system?
• Are the operational definitions detailed enough?
• How will you display the data?
• Is data available? If not, how will you prepare data-collection sheets?
• Where could data collection occur? What are your correction plans?
Process capability and performance
• Process capability
• Capability
• Process characterization
• Converting DPM to a Z value
• Short-term vs. long-term
• Indicating the spread


• Indicates the spread and center
• Indicates spread and centering
• Process shift — how much should we expect? Is 1.5σ enough? Where does it come from?
• The map to the indicators and what they mean
Stability
• Process control
• Pooled vs. total variation
• Short-term vs. long-term
• Which standard deviation?
• Area of improvement
• What is good?
Measurement system analysis
• Why MSA? How does variation relate to MSA?
• Measurement systems
• Resolution
• Bias
• Accuracy vs. precision
• Linearity
Measurement tools
• A simple gauge
• Calibration
• Consistency
• Gauge R&R
• GR&R with ANOVA
• Indices (Cp, Cpk, Pp, Ppk)
• Cp is the "potential" capability of your process, assuming you are able to eliminate all nonrandom causes. In addition, Cp assumes the process is centered. This metric is also called "process entitlement," or the best your process could ever hope to perform in the short term. In order to calculate this metric you need a close approximation of the short-term standard deviation (which is not always available).
• Cpk and Ppk use the mean, not only the tolerance band, to estimate capability. The term Cpk = min(Cpk lower, Cpk upper) is stated as the shortest numerical distance between the mean and the nearest spec limit.
How do you know if your gauge is good enough?
Introduce the definition of quality (ISO 8402)
Control charts
• Variable and attribute (X-bar and s, X-bar and R, individual X and MR, p, c, etc.)
• Multi-vari charts: the purpose of these charts is to narrow the scope of input variables and, therefore, to identify the key inputs and outputs (KPIVs and KPOVs)
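Tying together the confidence-interval material above, here is a minimal sketch of the normal-approximation interval for a fraction defective (reasonable, as noted earlier, when x is at least 5 and n is at least 30). The sample counts are invented.

```python
from statistics import NormalDist
import math

def proportion_ci(x: int, n: int, confidence: float = 0.95):
    """Normal-approximation confidence interval for a fraction defective."""
    p = x / n
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g., 1.96 for 95%
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Hypothetical sample: 12 defectives found in 200 units inspected.
low, high = proportion_ci(x=12, n=200)
print(f"p = 0.060, 95% CI = ({low:.3f}, {high:.3f})")
```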


HYPOTHESIS TESTING INTRODUCTION

Why learn hypothesis testing? Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys. To help identify the vital few Xs, historical or current data may be sampled. (Passive: you have either directly sampled your process or have obtained historic sample data. Active: you have made a modification to your process and then sampled. Statistical testing provides objective solutions to questions that are traditionally answered subjectively. Hypothesis testing is a stepping stone to ANOVA and DOE.)
• Hypothesis testing terms that you need to remember
• Steps in hypothesis testing:
• Hypothesis testing roadmap
• Hypothesis testing description
• The null and alternate hypotheses
• The hypothesis testing form
• Test for significance
• Significance level
• Alpha risk — this alpha level requires two things: a) an assumption of no difference (H0) and b) a reference distribution of some sort — the producer's risk
• Beta risk — the consumer's risk
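The alpha (producer's) risk above can be demonstrated with a small simulation: sample repeatedly from a process where the null hypothesis is true, and count how often a test at alpha = 0.05 falsely signals a difference. All values below are invented for illustration.

```python
import random
import statistics
from statistics import NormalDist

random.seed(1)
target, sigma, n, alpha = 100.0, 2.0, 25, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value

false_alarms, trials = 0, 10_000
for _ in range(trials):
    # H0 is true by construction: the process mean really is the target.
    sample = [random.gauss(target, sigma) for _ in range(n)]
    z = (statistics.mean(sample) - target) / (sigma / n ** 0.5)
    if abs(z) > z_crit:
        false_alarms += 1  # a wrong "significant difference" conclusion

print(false_alarms / trials)  # lands near 0.05, the chosen alpha risk
```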

PARAMETERS VS. STATISTICS

Parameters deal with populations and are generally denoted with Greek letters. Statistics deal with samples and are generally denoted with English letters. There is no substitute for professional judgment. It is true that in hypothesis testing we answer the practical question: "Is there a real difference between _____ and _____?" However, we use relatively small samples to answer questions about population parameters. There is always a chance that we selected a sample that is not representative of the population. Therefore, there is always a chance that the conclusion obtained is wrong. With some assumptions, inferential statistics allows us to estimate the probability of getting an "odd" sample. This lets us quantify the probability (P value) of a wrong conclusion.
What is the signal-to-noise ratio?
Managing change
Measures and rewards
An introduction to graphical methods
• Pareto
• Histogram
• Run chart
• Scatter plot
• Correlation vs. causality


• Boxplot
• Hypothesis tests for means
• Comparison of means
• t-distribution
Hypothesis testing for attribute data
Useful definitions
Hypothesis tests: proportions
Chi-square test for independence
Chi-square test
Chi-square test for a relationship
ANOVA
Why ANOVA?
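As a bridge into the test list above, here is a minimal sketch of a two-sample t-test and a one-way ANOVA, assuming SciPy is available; the measurements are invented.

```python
from scipy import stats  # assumes SciPy is installed

# Two-sample t-test: did a process change shift the mean?
before = [10.2, 10.5, 9.9, 10.3, 10.1, 10.4, 10.0, 10.2]
after = [9.8, 9.9, 10.0, 9.7, 9.9, 10.1, 9.8, 9.6]
t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real shift

# One-way ANOVA extends the comparison to three or more groups
# (machines, shifts, suppliers, and so on).
machine_a = [10.1, 10.2, 10.0, 10.3]
machine_b = [10.4, 10.6, 10.5, 10.5]
machine_c = [10.1, 10.0, 10.2, 10.1]
f_stat, p_value = stats.f_oneway(machine_a, machine_b, machine_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```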

INTRODUCTION TO DESIGN OF EXPERIMENTS

What is experimental design? Organizing the way in which one changes one or more input variables (Xs) to see if any of them, or any combination of them, affects the output (Y) in a significant way. A well-designed experiment eliminates the effect of all possible Xs except the ones that you changed. Typically, if the output variable changes significantly, it can be tied directly to the input X variable that was changed and not to some other X variable that was not changed. The real power of experimentation is that sometimes we get lucky and find a combination of two or more Xs that makes the Y variable perform even better!
• Benefits of DOE
• Why not one factor at a time?
• Types of experiments
• Classes of DOE
• Terms used in DOE
• Main effects and interactions
• Contrast
• Yates standard order
• Run order for a DOE
• Strategy of experimentation
• Barriers to effective experimentation
Focus on the X-Y relationship
Trial and error
One factor at a time
Full factorial experiment
Things to watch for in experiments
Randomization
• Repetition and replication
• 2^k factorials
• Advantages of 2^k factorials
• Standard order of 2^k designs
• Interactions


• Interaction effects
• Interactions for the three-way design
• Main effects
• Cube plots
• Types of 2^k factorials
Center points and blocking
• Adding center points
• In two-level designs, there is a risk of missing a curvilinear relationship. Inclusion of center points is an efficient way to test for curvature without adding a large number of extra runs.
• Confounding and blocking
• Residuals analysis:
• Residuals
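A minimal worked example of main effects and a two-factor interaction from a 2^2 full factorial may help; the four responses are invented, and the runs are listed in Yates standard order: (1), a, b, ab.

```python
# Hypothetical 2^2 full factorial: columns are A level, B level, response Y.
runs = [
    (-1, -1, 45.0),  # (1)
    (+1, -1, 52.0),  # a
    (-1, +1, 47.0),  # b
    (+1, +1, 60.0),  # ab
]

def effect(contrast, responses):
    """Average response at the high level minus average at the low level."""
    high = [y for c, y in zip(contrast, responses) if c > 0]
    low = [y for c, y in zip(contrast, responses) if c < 0]
    return sum(high) / len(high) - sum(low) / len(low)

ys = [r[2] for r in runs]
a = [r[0] for r in runs]
b = [r[1] for r in runs]
ab = [x * y for x, y in zip(a, b)]  # the interaction contrast is the product

print("A effect:", effect(a, ys))    # 10.0
print("B effect:", effect(b, ys))    # 5.0
print("AB effect:", effect(ab, ys))  # 3.0
```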

SCREENING DESIGNS

These designs are a powerful tool for analyzing multiple factors and interactions. The designs combine the flexibility of reduced run size without compromising information. One word of caution: do not reduce the experiment too far. By doing fewer runs, you may not obtain the desired level of information.
Factorial experiments — the success of fractional factorials is based on the assumption that main effects and lower-order interactions are generally the key factors. Full factorials can usually be derived from a fractional factorial experiment once nonsignificant factors are eliminated.
• Fractional factorials
• Design resolution
• Choosing a design
• Notation
• Alias structure
Planning experiments
• Team involvement
• Maximize prior knowledge
• Identify measurable objectives
• FMEA on all steps of the execution
• Replication and repetition considerations
• Verify and validate data collection and analysis procedures
Steps to experimentation
• Define the problem. What is the objective of the experiment?
• Establish the objective.
• Select the response variables.
• Select the independent variables.
• Choose the variable levels.
• Select the experimental design.
• Sequential experimentation
• Select the experimental design


• Screening/fractional factorial
• Full factorial/partial
• Consider the sample plan: how many runs can we afford? (The more runs or samples, the better the understanding of and confidence in the result.) How are we controlling the noise and controllable variables that we know about?
• What is our plan for randomization? (A minimal sketch of a randomized run order follows this list.)
• Walk through the experiment
• Collect data
• Analyze data
• Draw statistical conclusions
• Replicate results
• Draw practical solutions
Implement solutions
• Understand the current process.
• Is the output qualitative or quantitative?
• (A vs. B) or (50 vs. 100)?
• What is the baseline capability?
• Is your process under statistical control?
• Is the measurement system adequate?
• Factor selection
• Which factors (KPIVs) do we include?
• Where should they come from?
• Process map
• Cause-and-effects matrix
• FMEA
• Multi-vari study results
• Brainstorming (fishbone)
• Process knowledge
• Operator experience
• Customer/supplier input
• Level selection. After the test factors are identified, we must set the levels of those factors we want to test. What is the right level differentiation to obtain the information needed? If the levels are too wide or too narrow, nothing will be gained. Level guideline: 20% above and below the specs; if no specs, +/− 3 sigma from the mean.
• What will the experiment cost?
• Are all of the necessary players involved (informed)?
• How long will it take?
• How are we going to analyze the data?
• Have we planned a pilot run and walked through the process?
• Has the necessary paperwork been completed?
• Make sure the MSA has been validated.
• Budget and timelines. (The goal in DOE: to find a design that will produce a specific desired amount of information at a minimum cost to the company.)
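For the randomization question in the list above, here is a minimal sketch of a randomization plan for a hypothetical 2^3 full factorial: enumerate all eight factor-level combinations, then randomize the order of execution so that unknown noise (warm-up, drift, shift changes) is spread across the settings. The factor names are placeholders.

```python
import itertools
import random

factors = ["A", "B", "C"]  # hypothetical factor names
combinations = list(itertools.product([-1, +1], repeat=len(factors)))

random.seed(42)  # fixed seed so the run order is reproducible and auditable
run_order = random.sample(combinations, k=len(combinations))

for run_number, levels in enumerate(run_order, start=1):
    print(run_number, dict(zip(factors, levels)))
```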


Four phases of designed experiments:
• Planning: careful planning involves clearly defining the problem of interest, the objective of the experiment, and the environment in which the experiment will be carried out.
• Screening: initial experiments aim to reduce the number of potentially influential variables to a vital few. Screening allows us to focus process improvement efforts on the most important variables. Screening designs include two-level full and fractional factorials, general full factorials, and Plackett-Burman designs.
• Optimization: after we have identified the vital few variables by screening, we need to determine the best values in order to optimize a process; for example, we may want to maximize a yield or reduce product variability. Optimization designs include full factorial designs (two-level and general) and response surface designs (central composite and Box-Behnken).
• Verification: we can perform a follow-up experiment at the predicted best process conditions to confirm optimization results.
Fractional factorial designs
Purpose: to determine which main effects (factors) are important. Key features:
1. Know which resolution you are running; these are always two-level factorials.
2. Useful for estimating mostly main effects (not interactions).
3. They can be built up to a higher-order blocked factorial design.
4. Limited to 15 runs.
5. Don't expect more than what the design will provide.
Recommendation: use these designs when you need to narrow down the list of important factors. They are easy to interpret and cost effective.
Screening designs (full or fractional)
Purpose: to investigate how seven factors or fewer interact to drive a process. Key features:
1. Two-level factorials, resolution IV, V, or higher.
2. General full factorials.
3. These allow estimation of at least two-way interactions.
4. They can model weak curvature through center points and can be built up into a response surface (blocked central composite) design to model more pronounced curvature.
5. They provide direction for further experimentation in search of an optimal solution.
Recommendation: this is the design most often used in industry. They are good, low-cost, all-purpose designs.
Response surface designs
Purpose: to model responses that exhibit quadratic (curvilinear) relationships with the factors. Key features:
1. Recommended for nonsequential experiments. (Only one shot!)
2. Use when extreme combinations cannot be run.
3. Excellent for optimizing, since curvature is typically seen around the optimum.


4. Designs are costlier (more runs). Factors of interest should be low in number.
5. These can be used to minimize variation.
6. These can be used to put the process on target, or to maximize or minimize a measure of interest.
How do I sustain the improvement? Tools to assure the process remains in control.
Keys to success
• Early involvement of all work cell/department members.
• Update all affected parties (including supervisors/managers) regularly.
• Get buy-in — no surprises!
• Poka yoke the process.
• Establish frequent measurement.
• Establish procedures for the new/updated process.
• Train everyone — assign responsibilities.
• Monitor the results.
How do I transition my project?
• Assure your project is complete enough to transition.
• No loose ends — a plan (project action plan) for everything not finalized.
• Start early in your project to plan for transitioning.
• Identify team members at the start of the project.
• Remind them they are representatives of a larger group.
• Communicate regularly with people in the impacted area.
• Display your project in the impacted area during all phases. Remember, no surprises.
• Hold regular updates with the impacted area, assuring their concerns are considered by your team.
• When possible, get others involved to help.
• Data collection.
• Idea generation (brainstorming events).
What is a project action plan? It is a documented communication tool (contract) that allows you to identify:
• What is left to do to complete your project?
• Who is responsible for carrying out each task?
• When should they have it complete?
• How should it be accomplished?
Do I have to have one? Only if there are unfinished tasks in your improvement process that you expect others to carry out after the transition. (The tasks must be negotiated and agreed to.)
Who will monitor the plan for implementation/completion? Both you and the responsible supervisor/manager who assumes ownership.
Who has ultimate responsibility? The owner of each task and the responsible supervisor/manager.


Product changes
• Revise drawings by submitting EARs.
• Work with process, test, and product engineers.
Process changes
• Physically change the process flow (5S the project area). To ensure your gains are sustainable, you must start with a firm foundation. 5S standards are the foundation that supports all the phases of Six Sigma manufacturing. The foundation of a production system is a CLEAN and SAFE work environment. Its strength is dependent upon employee commitment to maintaining it.
• Develop visual indicators. Create a visual factory.
• Establish/buy new equipment to aid assembly/test.
• Poka yoke wherever possible, including forms.
• Procedures (standardized work instructions).
• Develop new procedures or revise existing ones.
• Notify quality assurance of any new procedure to incorporate in internal audits.
• Provide QA a copy of standardized work instructions.
• Measurements (visual indicators).
• Build the posting of key metric updates into the process.
• Make it part of someone's regular job to do timely evaluations.
• Make it someone's job to review the metric and take action when needed.
• Training — train everyone in the new process. (Don't leave until there is full understanding.)

CONTROL PLANS

The control plan provides a written summary description of the system for controlling parts and processes; it is used to minimize process and product variation and describes the actions that are required at each phase of the process, including receiving, in-process, final assembly, and shipping, to ensure that all process outputs will be in a state of control. A control plan for operational actions such as ordering, order taking, invoicing, billing, etc. can also be utilized for transactional operations. The control plan does not replace the information contained in detailed operator instructions. Since processes are expected to be continually updated and improved, the control plan is a living document, reflecting the current methods of control and measurement systems used.
• Development and implementation
Developing a control plan
• A basic understanding of the process must be obtained. Establish a multifunctional team to gather and utilize appropriate available information, such as:
• Process flow diagram
• Failure mode and effects analysis (process and design)


• Special characteristics (critical and significant characteristics)
• Control plans/lessons learned from similar parts or processes
• Team's knowledge of the process
• Technical documentation (design/process notices, MPIs, PM)
• Validation plan results (DVP, EVP, PVP)
• Optimization methods (QFD/DOE)
• Develop the process flow diagram — map the process.
• Develop the process FMEA. Examine each process operation for potential problems and failures. Focus on characteristics that are important to the customer and to product safety.
• A PFMEA is required for most organizations for all new product processes. PFMEAs must eventually be developed for all existing product lines. If a PFMEA does not exist, then customer concerns/complaints must be considered when developing the control plan.
• Develop a preliminary manufacturing control process (MCP), utilizing a standardized format. This format satisfies ISO 9000, ISO/TS 16949, and QS-9000 requirements (and is the REQUIRED FORMAT!).
• Conduct a multifunctional team review for revision/consensus of the MCP.
• Install the MCP with change control approval. This will assign and display a document number, version number, issue date, and owner.
• Implement the MCP. Update/revise manufacturing process instructions, control charts, gauge systems, etc. as required by the new control plan.
• Benefits of developing and implementing CPs — a control plan improves overall quality by reducing the chances of quality excursions. It reduces shrinkage or defects in manufacturing/transactional processes by keeping processes centered. The data also aid in timely troubleshooting of manufacturing/transactional processes, and the plan serves as a communication vehicle for changes to CTQ characteristics, control methods, etc.
Quality system overview
Control tools
Continuous SPC tools
The foundation of SPC
Statistical process control
Types of control charts — variable and attribute
• Basic components of a control chart
• Control limits
• What are control limits?
• What is meant by "in control" and "out of control"?
• Link between control limits, hypothesis testing, and specifications


Variable control charts
• Individual X vs. EWMA chart
• X-bar and R charts
• X-bar and s charts
• Individuals and moving range
• EWMA chart
• Control chart — interpretation
• Control chart — nonnormal distributions
Attribute control charts
• p charts
• np chart
• c chart
• u chart
• Attribute chart interpretation
Alternative methods of control
• Precontrol
• Zone control charting
Process capability estimate
Poka yoke — understand the use of poka yoke strategies in completing a black belt project. Know how to design and implement a poka yoke strategy.
• What is poka yoke (error- or mistake-proofing)?
• Mistake-proofing manufacturing processes
• Mistake-proofing transactional processes
• Types of mistake-proofing
• Errors vs. defects
• Types of human errors
• "Red flag" conditions
• Control/feedback logic
• Guidelines for mistake-proofing
• Mistake-proofing strategies
• Advantages of mistake-proofing
Maintenance — a reliability function
• Maintenance via Six Sigma is all-encompassing — transactional, information systems, production equipment, etc. The maintenance function should be linked to customer CTQs. It should address all six Ms: machines, manpower, methods, materials, mother nature, and measurements. (Make sure you differentiate these from the classical nonmanufacturing items of policies, procedures, place, environment, measurement, and people.) Maintenance can and should be a reliability function, not just a repair function.
• Maintenance maximizes output, minimizes cost, and assures continued operation — customer satisfaction.
Maintenance — an integrated strategy
• World-class key performance indicators
• Predictive maintenance


• Benefits of developing and implementing PMs
• Major elements of preventive maintenance
Realistic tolerancing — a simple graphical method for establishing optimum levels and appropriate tolerances for inputs. Once it is determined that a continuous output depends linearly on a continuous input, the output specification is used to create an input specification. Scatter plots and fitted line plots demonstrate association of inputs and outputs, not necessarily cause and effect. A realistic tolerancing method:
Step 1: Identify the KPOV of interest and note its specification.
Step 2: Select the KPIV of interest. Define a range of values for the KPIV that will likely optimize the KPOV.
Step 3: Run 30 samples over the range of the KPIV and record the output values.
Step 4: Plot the results with the KPIV on the x-axis and the output on the y-axis. If the plot has a tilt with little vertical scatter, a relation exists; proceed to Step 5. If there is no tilt, the KPIV has no relation to the response variable.
Step 5: Determine the target value and tolerance of the KPIV.
• Draw a best-fit line through the data.
• Eliminate the data point furthest from the best-fit line.
• Draw a parallel line through the next furthest point from the best-fit line. Draw a second parallel line equidistant from the best-fit line on the opposite side. The vertical distance between these two parallel lines represents 95% of the total effect on the output of all factors other than the KPIV studied here. If specifications exist for the response variable, draw lines from those values on the y-axis to intersect the upper and lower confidence lines.
• Drop two lines from these intersection points to the x-axis. The distance between where these intersect the x-axis represents the maximum tolerance permitted for the input variable.
Step 6: Compare these values against the existing operating levels and implement any necessary changes to the SOP. Document changes via the FMEA and control plan. (A minimal sketch of Steps 4 and 5 appears at the end of this section.)
Gauge and measurement systems
• Management plan
• Long-term gauge control
• Long-term gauge control is the management of the basis of our understanding of our process. Remember, the quality of our process cannot be understood and controlled without understanding the quality of our measurements.
• Why do we need a long-term gauge plan? Long-term project control is dependent on measurement and analysis. The measurement system needs to be under control.

Gauge and measurement systems
• Management plan
• Long-term gauge control — the management of the basis of our understanding of our process. Remember, the quality of our process cannot be understood and controlled without understanding the quality of our measurements.
• Why do we need a long-term gauge plan? Long-term project control is dependent on measurement and analysis. The measurement system needs to be under control.

• Who is responsible for the long-term gauge plan? Those responsible for the process variables of interest. Incorporate gauge management into the local quality system and ensure that future owners are trained to implement it.
• What is in a long-term gauge plan?
  1. Initial baseline analysis
  2. Ownership details
  3. Calibration control (chart?) with instructions
  4. Handling and storage requirements
  5. Maintenance requirements — procedures and log
  6. Spare parts requirements
  7. ID/tracking system
  8. Ongoing MSA requirements (product/product changes, gauge changes, operator changes, etc.)
  9. Thorough documentation
• What do you need to do to develop your long-term gauge plan? Consider your gauge:
  • What was your initial baseline (GR&R) data? Is this gauge still appropriate?
  • What is the amount of bias in your gauge? Linearity? How will you control this bias?
  • Who “owns” and maintains the gauge?
  • Who calibrates your gauge? How frequently?
  • Which gauge would you use?
  • What are the handling and storage requirements for the gauge?
  • Who needs to maintain the gauge? What does this mean?
  • How do you maintain the gauge? What are the spare parts requirements?
  • How frequently and when should MSA be performed? By whom? Which MSA method should you use, and when?
  • What documentation is required for the long-term gauge plan?
  • How will we manage this documentation?
  • What issues/roadblocks do I see in developing the long-term gauge plan?
• Implementing gauge plans


12

Six Sigma for General Orientation

The intent of the orientation overview is to take away the “mystical” aura of the Six Sigma methodology. It is geared toward individuals who are about to take further training in the Six Sigma methodology and, as such, serves as a preview of what to expect. No prerequisites are needed; however, a willingness to learn and an open mind toward “new” approaches and methodologies are expected. Simple exercises may be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may be to define a process and improve on that process, to write five to ten operational definitions for that process, to work with some variable and attribute data, to calculate the DPO, and several others. It must be stressed that this 2-day training is an introduction; it attempts to explain the Six Sigma process at a very high level of understanding. As a consequence, the exercises given during the training are intended to motivate the participants and convince them that there is room in their organization for improvement and application of the Six Sigma methodology. Because organizations and their goals are quite different, we provide the reader with a suggested outline of the training material for this orientation session. It should last 2 days, and the level of difficulty will depend on the participants. Detailed information may be drawn from the first six volumes of this series. A typical orientation program may want to focus on the following instructional objectives. The reader will notice that some categories have no objectives; this is because at this stage of training the material may be overwhelming. Furthermore, the transactional, technical, and manufacturing categories are absent. The reason is that, as an overview, the scope of the training is to give a sense of what Six Sigma is all about and to introduce the methodology through limited simulation. The simulations are designed to convince the participants that appropriate and applicable operational definitions and data will spur improvement in the decision-making process.

INSTRUCTIONAL OBJECTIVES — GENERAL

RECOGNIZE

Customer Focus
• Provide a definition of the term customer satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.


• Interpret the expression y = f(x).
• Provide examples of the y and x terms in the expression y = f(x).

Business Metrics
• State at least three problems (or severe limitations) inherent in the current cost-of-quality (COQ) theory.
• Define the nature of a performance metric.
• Identify the driving need for performance metrics.
• Explain the benefit of plotting performance metrics on a log scale.
• Identify and define the principal categories associated with quality costs.
• Compute the COQ given the necessary background data.

Six Sigma Fundamentals
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Identify the parts-per-million defect goal of Six Sigma.
• Recognize that defects arise from variation.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four sigma business.
• Understand why inspection and test are nonvalue-added to a business and serve as a roadblock to achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify.
• Understand that work-in-process (WIP) is highly correlated to the rate of defects.
• Rationalize the statement: the highest-quality producer is the lowest-cost producer.
• Understand that global benchmarking has consistently revealed four sigma as average, while best-in-class is near the Six Sigma region.
• State the general findings that tend to characterize or profile a four sigma organization.
• Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart.
• Provide a qualitative definition and graphical interpretation of the standard deviation.
• Draw first-order conclusions when given a global benchmarking chart.
• Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough.
• Provide a brief history of Six Sigma and its evolution.


• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Understand the difference between the ideas of benchmark, baseline, and entitlement cycle time.
• Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure.
• Recognize that the sigma scale of measure is at the opportunity level, not at the system level.
• Interpret an array of sigma benchmarking charts.
• Understand the driving need for breakthrough improvement vs. continual improvement.
• Define the two primary components of process breakthrough.
• Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control).
• Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from quality, cost, and cycle-time points of view.
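Several of these objectives turn on reading benchmarking charts on the sigma scale at the opportunity level. A minimal sketch of the conventional conversion from a defect rate to a benchmarking sigma follows; it assumes the customary 1.5-sigma shift between short- and long-term performance, and the example counts are illustrative.

```python
from statistics import NormalDist

def sigma_level(defects, units, opportunities_per_unit):
    """Opportunity-level benchmarking: DPMO and the short-term
    sigma value, assuming the customary 1.5-sigma shift."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    z_long_term = NormalDist().inv_cdf(1 - dpmo / 1_000_000)
    return dpmo, z_long_term + 1.5

# A roughly four sigma process (about 6,210 DPMO):
print(sigma_level(defects=621, units=10_000, opportunities_per_unit=10))
```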

DEFINE

Nature of Variables
• Explain the nature of a leverage variable and its implications for customer satisfaction and business success.

Opportunities for Defects
• Provide a rational definition of a defect.

CTX Tree
• Define the term critical-to-satisfaction characteristic (CTS) and its importance to business success.
• Define the term critical-to-quality characteristic (CTQ) and its importance to customer satisfaction.
• Define the term critical-to-process characteristic (CTP) and its importance to product quality.

Process Mapping
• Construct a process map using standard mapping tools and symbols.


• Explain how process maps can be linked to the CT tree to identify problem areas.
• Explain how process maps can be used to identify constraints and determine resource needs.

Process Baselines
Nothing specific.

Six Sigma Projects
• Interpret each of the action steps associated with the four phases of process breakthrough.

Six Sigma Deployment
• Provide a brief description of the nature of a Six Sigma black belt (SSBB).
• Describe the roles and responsibilities of an SSBB.
• Provide a brief description of the nature of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of an SSC.
• Provide a brief description of the nature of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of an SSMBB.

MEASURE

Scales of Measure
Nothing specific.

Data Collection
Nothing specific.

Measurement Error
• Describe the role of measurement error studies during the measurement phase of breakthrough.

Statistical Distributions
• Construct and interpret a histogram for a given set of data.
• Understand what a normal distribution and a typical normal histogram are and how they are used to estimate defect probability.
• Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot.


Static Statistics
• Provide a qualitative definition and graphical interpretation of the variance.
• Compute the sample standard deviation given a set of data.
• Provide a qualitative definition and graphical interpretation of the standard Z transform.
• Compute the mean, standard deviation, and variance for a set of normally distributed data.

Dynamic Statistics
• Explain what phenomenon could account for a differential between the short-term and long-term standard deviations.
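A small worked sketch of the static statistics named above may help; the data are illustrative and the spec limit is an assumed value, not from the text.

```python
from statistics import NormalDist, mean, stdev, variance

# Illustrative sample of a normally distributed characteristic
data = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0, 10.1, 9.9]

xbar = mean(data)       # sample mean
s = stdev(data)         # sample standard deviation (n - 1 divisor)
s2 = variance(data)     # sample variance

# Standard Z transform of an assumed spec limit, and the
# implied probability of exceeding it under a normal model
usl = 10.5
z = (usl - xbar) / s
p_beyond = 1 - NormalDist().cdf(z)

print(f"mean={xbar:.3f}  s={s:.3f}  s^2={s2:.4f}  Z={z:.2f}  P={p_beyond:.4%}")
```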

ANALYZE

Six Sigma Statistics
• Identify the key limitations of the performance metric final yield (i.e., output/input).
• Identify the key limitations of the performance metric first-time yield (Y.ft).

Process Metrics
• Explain why a Z can be used to measure process capability and explain its relationship to indices such as Cp, Cpk, Pp, and Ppk.
• Explain the difference between static mean offset and dynamic mean variation and how they impact process capability.

Diagnostic Tools
Nothing specific.

Simulation Tools
Nothing specific.

Statistical Hypotheses
Nothing specific.

Continuous Decision Tools
Nothing specific.


Discrete Decision Tools
• List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.

IMPROVE

Experiment Design Tools
• Provide a general description of a statistically designed experiment and what such an experiment can be used for.
• Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers.
• Describe the two primary components of an experimental system and their related subelements.
• Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis.
• State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative.

Robust Design Tools
Nothing specific.

Empirical Modeling Tools
Nothing specific.

Tolerance Tools
Nothing specific.

Risk Analysis Tools
Nothing specific.

DFSS Principles
Nothing specific.

CONTROL

Precontrol Tools
• Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented. (A sketch of the conventional zone classification follows.)
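As a sketch of how such a plan classifies readings, the conventional precontrol zoning is shown below, assuming a two-sided specification: the middle half of the tolerance is the green zone, the outer quarters are yellow, and anything outside the specification is red. The spec values are illustrative.

```python
def precontrol_zone(x, lsl, usl):
    """Classify a measurement into the conventional precontrol zones:
    green = middle half of the tolerance, yellow = outer quarters,
    red = outside the specification."""
    width = usl - lsl
    green_lo, green_hi = lsl + width / 4, usl - width / 4
    if green_lo <= x <= green_hi:
        return "green"
    if lsl <= x <= usl:
        return "yellow"
    return "red"

# Example with a hypothetical 10 +/- 0.5 specification
for reading in (10.05, 10.40, 10.60):
    print(reading, precontrol_zone(reading, lsl=9.5, usl=10.5))
```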


Continuous SPC Tools
• Explain what is meant by the term statistical process control and discuss how it differs from statistical process monitoring.
• List the basic components of a control chart and provide a general description of the role of each component.

Discrete SPC Tools
Nothing specific.

OUTLINE OF CONTENT

Based on the above general objectives, we recommend that the training follow the content format below. By no means is this the only format; however, we believe the content follows a hierarchical sequence that accommodates the learning process. For detailed information, the reader is encouraged to see volumes one through six of this series.

Introduction
Agenda
Ground rules
Exploring our values
Objectives
Reason for adopting the Six Sigma methodology
Background of Six Sigma
How other companies have used it to their benefit
The business case for your organization. (This is a very important section. It is where you make your case of whether or not the Six Sigma program is worth your time and money. Make sure there is a convincing argument that this is not a fad but rather a way of doing business.)

PROCESS IMPROVEMENT

Process design or redesign
Process management
Comparison of traditional quality with statistical quality and DFSS
Three sigma vs. four sigma vs. Six Sigma
Overview of the DMAIC model
What makes a good Six Sigma project?
Project selection
Understanding the goal of Six Sigma
How Six Sigma should be approached from the corporate level
Structure of Six Sigma


Executive
Champion
Orientation (make sure you tell the participants that they are participating in this phase, which is the big picture of the Six Sigma methodology)
Master black belts
Black belts
Green belts
The model in some detail

DEFINE

Team charter
Customer focus
  Understanding needs, wants, and expectations
  Kano model
  Translating needs and wants into requirements
  Need for prioritizing critical-to-quality characteristics (CTQs)
Process mapping
  Understanding the elements of the process
    Supplier
    Input
    Process itself (this is where the boundary helps in defining the focus of the project)
    Output
    Customer
  Understanding the difference between the “what is,” “what should be,” and “what could be” views of a process. The focus for the improvement is always on the “what is.”
Define the problem
Selection criteria for project
Deliverables

MEASURE

Measurement
  Input measures
  Process measures
  Output measures
Understand measuring the business process in terms of Y = f(x)
Understand the difference between effectiveness and efficiency
Understand the internal quality indicators and their quantification

VARIATION

Process variation
  Common
  Special
Data collection
  Clarify data collection goals
  Develop operational definitions and procedures
  Plan for data consistency and stability
  Begin data collection
  Continue improving measurement consistency
Types of data
  Qualitative
  Quantitative (variable; attribute)
Sampling
  Why sample?
  Sample determination
Check sheets
  Frequency plot check sheets
Process capability
  What is Six Sigma capability?
  Variation and process capability
  Sigma and the normal curve (sigma = standard deviation = the point of inflection on the curve)
Simple Calculations and Conversions
  Calculate the DPU
  Calculate the DPMO
  First-pass performance
  Guidelines for determining how many opportunities per unit
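The calculations named above reduce to a few ratios. A minimal sketch follows; the defect and opportunity counts are illustrative, and the first-pass figure uses the customary Poisson approximation Y = e^(-DPU).

```python
import math

def dpu(defects, units):
    """Defects per unit."""
    return defects / units

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def first_pass_yield(defects, units):
    """Poisson approximation of first-pass yield: the probability
    that a unit goes through with zero defects."""
    return math.exp(-dpu(defects, units))

# Example: 150 defects found on 1,000 units, 20 opportunities each
print(dpu(150, 1000))                 # 0.15 defects per unit
print(dpmo(150, 1000, 20))            # 7,500 DPMO
print(first_pass_yield(150, 1000))    # ~0.861
```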


ANALYZE

Data analysis
  Visual displays of data
    Histogram
    Box plots
    Run charts
    Stratification
    Pareto
Process analysis
  Moments of truth
  Value-added analysis
  Cycle-time analysis
Root cause analysis
  Cause-and-effect analysis
    Cause-and-effect diagram [the problem (effect) is the Y; the causes are the Xs]
  Root cause verification
    Scatter diagram
Quantify the opportunity
  Determine the opportunity
  Understand the equation profit = revenue – cost
  Understand the difference between hard and soft money

IMPROVE

Generate solutions
  Solution criteria
  List possible solutions
Select solutions
  Evaluate solutions and make the best choice
  Validate solutions
  Cost–benefit analysis
Implementation planning
  “Should be” process map (this is the result of your dissatisfaction with the “what is”)
  Piloting
  Project planning
  Change management strategy

CONTROL


Document and institutionalize
  Develop procedures
  Institutionalize systems and structures
Continual improvement
Monitor process
  Standards
  Control charts
  Measurement plan


Part III

Training for the DCOV Model


13

DFSS Training

As we said in Volume 1 of this series, the Six Sigma methodology is primarily a problem-solving methodology. It solves problems through the Six Sigma breakthrough strategy, which is the DMAIC model. If we accept this, we must also accept the fact that, just like any other problem-solving methodology, Six Sigma is by definition an appraisal approach, for it tries to eliminate the nonconformance after it has occurred. Consequently, Six Sigma in the form of the DMAIC model is indeed an appraisal method utilizing a systematic approach to what has been a piecemeal intervention of many tools and methodologies over the last 75 years or so (probability concepts, DOE, FMEA, and so on). It is effective, to be sure. Certainly it is powerful. However, we are not convinced that it is the most effective way to resolve problems in any organization. Other tools can be just as effective if management is committed, resources are allocated, and a vision of true customer satisfaction that costs much less is forged. What we are convinced of regarding Six Sigma is its strength when the methodology is applied to the design of a service, process, or product before it reaches the customer. This is where the money is. This is where it will pay off. This is where there is a true opportunity for improvement. Of course, at this point the Six Sigma methodology becomes a planning tool, for it tries to prevent the nonconformance from happening. DFSS is the tool, the methodology, the vision, the metric, and the approach for truly improving the process, service, product, organizational productivity, and, most of all, customer satisfaction. Nothing else will do. Period! The general objectives of DFSS were given in Chapters 7 through 10, under the categories of training (executives, champions, master black belts, and black belts). We believe that by including them at that location we emphasize the knowledge base as it relates to the rest of the curriculum. The reader, of course, may want to extract them and combine each of the particular objectives into one; that is an acceptable approach. The actual training for DFSS follows the generic model of define, characterize, optimize, and verify (DCOV). In the define phase, we focus on understanding the customer. In the characterize phase, we focus on understanding the system. In the optimize phase, we focus on designing robust performance into a product, service, or process. And in the verify phase, we focus on testing and verification of the product, service, or process.

THE ACTUAL TRAINING FOR DFSS

In DFSS, there are generally three levels of training:


1. Executive Training: a 1-day training session that provides the project leader and managers a high-level overview of DFSS objectives and methods. Included is a discussion of the link between DMAIC and DCOV and the benefits of instituting Six Sigma in the design phase.
2. Champion Training: a 2- or 4-day training course with greater depth of understanding of the DFSS process and tools.
3. Complete DFSS Training (Project Members and Black Belt)
   • Week 1: DFSS process, scorecard generation (organization dependent), define (voice of the customer), and characterize (system design and functional mapping).
   • Week 2: review of week 1; questions and answers specific to DFSS content and the project. Discussion of optimize (design for robustness, design for producibility) and verify.
   • Week 3 (optional or as needed): advanced parameter and tolerance design; structured inventive thinking.
   • Week 4 (optional or as needed): statistical tolerancing, FMEA, and multivariate analysis.

EXECUTIVE DFSS TRAINING

Introductions
Agenda
Training ground rules
• If you have any questions — please ask!
• Share your experiences.
• When we take frequent short breaks, please be prompt in returning so we can stay on schedule.
• There will be a number of team activities — please take an active role.
• Please use name tents.
• Listen as an ally.
• The goal is to complete your projects!
Exploring our values
Review the DMAIC model
• Background and history of Six Sigma
• Six Sigma scale
• Philosophy
• Significance of z scores
• The move from three to four to five to Six Sigma
DFSS business case — a very important issue to be discussed at length. Obviously each organization has its own situation; however, the following items are a good starting point for discussion:
• Current customer perception of performance
• Future customer perception of performance


• Current warranty cost
• Future warranty cost
• Current customer satisfaction performance
• Future customer satisfaction performance
• Current competitive advantage
• Future competitive advantage
Link between DMAIC and DCOV
• DMAIC improves customer satisfaction by eliminating nonconformances after they have occurred. It does this in three primary ways:
  1. Statistical problem solving
  2. Process variability reduction
  3. Process capability assessment
• DCOV improves customer satisfaction by preventing or avoiding nonconformances, improving the actual design relative to cost and sensitivity to noise over time.
DFSS strategy
• Target current and future products
  • Beta projects
  • Breakthrough systems
  • New product or service lines
  • High leverage of current customer issues
• Depend on executive leadership
  • Educate management first
  • Engineering executive leads
  • Specific process participation
• Be compatible with Six Sigma
  • Y = f(x); Y = f(x,n)
  • Six Sigma infrastructure (DMAIC foundation or something similar, e.g., TQM, QOS, etc.)
  • Build on processes that work
• Organizational timing requirements
  • APQP
  • Robustness
DFSS deployment strategy
• Train executives (0 to 3 months)
• Conduct beta projects to establish training logistics and communicate the DFSS methods to interested teams (2 to 4 months)
• Expand DFSS to other teams (4 to 6 months)
• Apply DFSS to all new products (6 to 12 months)
• Apply DFSS to the entire organization (12 to 18 months)
• DFSS starts with educating the management team in a 4- to 8-hour review covering the following:
  • Understand the process
  • Select projects for DFSS use
  • Identify the participants to receive the comprehensive training
  • Identify an executive to champion each project


Roles
• The project team receives:
  • An overview of the process
  • Training in scorecard preparation
  • Tool training appropriate for the team’s current phase in the product development process and its responsibilities
• The champion role
  • Manages the process through the team.
  • Ensures that the team has all the resources necessary in time to follow the process.
  • Ensures the review process of the scorecards is incorporated within the overall executive review process for the team.
  • Requests training as the team approaches a new phase within the process requiring further training. (Typical additional training is in the areas of statistical tolerancing, FEA, FMEA, parameter and tolerance design, etc.)
• Executive role
  • Determine business goals, in terms of cost of poor quality, within particular areas of concern.
  • Work with the deployment director to select the few significant issues on which to use DFSS for both current issues and future programs.
  • Declare that you are the champion for these issues and will use DFSS to resolve them.
  • Schedule regular status reviews with the affected teams.
  • Assure sufficient resources are available to the team to create success.
  • Meetings should become integrated with routine business reviews.
  • Request additional training as needed.
• Deployment director
  • Train executive teams in the 1-day DFSS overview.
  • Coordinate selection of DFSS projects for assigned project teams.
  • Work with DFSS teams within assigned projects to deliver training to affected executives, managers, black belts, and DFSS engineers.
  • Attend regular status reviews with the affected teams.
  • Coordinate the delivery of additional training as required.
  • Be a resource for resolving Six Sigma and DFSS issues.
• Technical manager
  • Oversee and coordinate the work of the DFSS process management teams (PMTs) to deliver the outcomes required at each milestone of the project.
  • Work with the DFSS black belt to define the work required by each PMT member to generate appropriate scorecard and milestone deliverables.


  • Provide the appropriate resources and facilities needed to meet the DFSS and milestone deliverables.
  • Act as the project champion to the black belt to assure that the black belt’s assignments are appropriate to his skills and the use of DMAIC.
• Black belt
  • Work with the technical manager to define the work required by each PMT member to generate the appropriate scorecard and milestone deliverables.
  • Resolve issues associated with the DFSS project that are best solved using DMAIC.
  • Act as a DMAIC resource to the team, including teaching concepts to the team as needed.
• Project member
  • Generate the outcomes normally associated with a project member.
  • Use DFSS methods to understand the underlying transfer function associated with the targeted system.
  • Generate the scorecard to predict the quality level of the targeted system at the appropriate milestone. This will entail gathering data regarding the product design geometry as well as manufacturing and assembly process capability.
DFSS is a methodology that identifies explicitly the relationship between design and the service, product, or process. Its intent is to satisfy the customer by either enhancing current designs or completely redesigning the current design.
What is new with DFSS
• Scorecard — perhaps the most important item in the entire DFSS methodology.
• Key QOS deliverable quantification — prioritization and use of appropriate and applicable data.
• Transfer function — a function that characterizes critical-to-satisfaction metrics in terms of design parameters. The focus is on robustness, i.e., y = f(x,n), where y is the customer’s functionality, x is the requirement for that y, and n is the noise under which x has to operate so that y is delivered.
Key characteristics of DFSS
• Data driven.
• Provides leverage to existing tools within an organization, e.g., QOS, warranty, APQP, organizational verification system, organizational reliability program(s), and so on.
• Provides a template for applying statistical engineering, including simulation studies.
• Delivers quality to the product by focusing on the subsystem and moving into systems.


• Provides a vehicle for understanding of y = f(x,n) from components to subsystems to systems.
• Forces the use of systems engineering in the design process in conjunction with APQP timing requirements.
The scorecard — there are many ways to track the progress of the project. The team should develop its own so that the following information may be captured:
• CTCs
• The transfer function that delivers the CTS attribute
• The transfer function quantified in such a way that it predicts the quality of delivery of the attribute
• Appropriate and applicable information about the project so that related business decisions may be made
A transfer function is the mathematical equation that allows you to design a quantitative relationship between dependent and independent variables. Transfer functions can be derived from two sources:
1. First principles
  • Known equations that describe functions (they may be identified from physics, including function structure flows)
  • Analytic models, simulation models (finite element analysis, Monte Carlo, etc.)
  • Drawings of systems and subsystems (evaluation of tolerancing, geometry of design, and mass considerations)
  • Design of experiments (classical design, Taguchi, response surface methodology, and multivariate analysis)
2. Empirical data
  • Correlation
  • Regression
  • Mathematical modeling
Uses of the transfer function — recognizing that not all variables should be included in a transfer function, we identify and focus on only the critical few xs that matter most for achieving y. We then use the transfer function primarily to:
• Estimate the mean and variance of Ys and ys
• Cascade the customer requirement
• Optimize and make trade-offs
• Forecast customer experience
• Set tolerances for significant xs
When a transfer function is not exactly known, there are two options:
1. Use surrogate data from similar designs.
2. Build a bundle of transfer functions. The rationale for such an idea is that we will never have 100% of all customer metrics in terms of transfer functions, just as we will never be able to design something with 100% reliability [remember: R(t) = 1 – F(t)]. To be sure, we may already know some of the transfer functions; however, if we are in doubt, we may combine subsystems so that the outcome of a system may be represented with the best available transfer function. (A small simulation sketch follows.)
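Once a transfer function y = f(x, n) is in hand, even approximately, the mean and variance of y can be estimated by simulation. Below is a minimal Monte Carlo sketch; the transfer function, input distributions, and spec limit are all assumptions made for illustration, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# Hypothetical transfer function y = f(x1, x2, n), for illustration only
def f(x1, x2, n):
    return 3.0 * x1 + 0.5 * x2**2 + n

# Assumed input distributions: design parameters x and noise n
x1 = rng.normal(2.00, 0.05, N)   # means and sigmas are illustrative
x2 = rng.normal(1.50, 0.10, N)
n = rng.normal(0.00, 0.20, N)    # noise the design must tolerate

y = f(x1, x2, n)
print(f"E[y] = {y.mean():.3f}, sigma_y = {y.std(ddof=1):.3f}")

# Given a spec on y, the simulated sigma_y yields a z-score estimate
usl = 8.5
print(f"Z_usl = {(usl - y.mean()) / y.std(ddof=1):.2f}")
```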


Overview of DFSS
• Define CTSs
  • In this phase we:
    • Identify CTS drivers and Y
    • Establish the operating window for each chosen Y for new and aged conditions
  • Inputs
    • Consumer insights
    • Quality and customer satisfaction history
    • Data mining analysis
    • Functional, serviceability, corporate, and regulatory requirements
    • Integration targets
    • Brand profiler
    • Quality function deployment (QFD)
    • Conjoint analysis
    • TRIZ results
    • Design specifications
    • Business strategy
    • Competitive environment
    • Technology assessment
    • Market segmentation
    • Benchmarking
  • Required technical activity
    • Select Ys
    • Define customer and product needs/requirements
    • Relate needs/requirements to customer satisfaction; benchmark
    • Prioritize needs/requirements to determine CTS Ys
    • Peer review
  • Outputs
    • Kano analysis
    • CTS scorecards
    • Y relationship to customer satisfaction
    • Benchmarked CTSs
    • Targets and ranges for CTS Ys
• Characterize the system
  • In this phase we:
    • Flow CTS Ys down to lower-level ys, e.g., Y to y to y1, y2, … yn
    • Relate ys to CTQ parameters (xs and ns), y = f(x1, … xk, n1, … nj)
    • Characterize robustness opportunities


  • Inputs
    • Kano diagram
    • CTS Ys, with targets and ranges
    • Customer satisfaction scorecard
    • Functional boundaries
    • Interfaces from VDS/SDS
    • Existing hardware FMEA data, and so on
  • Required technical activity
    • Identify functions associated with CTSs
    • Decompose Y into contributing elements and identify xs and ns
    • Create a function structure or other model for identified functions
    • Select ys that measure the intended function
    • Identify control and noise factors
    • Create a general or explicit transfer function
    • Peer review
  • Outputs
    • Function diagrams
    • Mapping of Y to functions to critical functions to ys
    • P diagram, including critical
    • Technical metrics, ys
    • Control factors, xs
    • Noise factors, ns
    • Transfer function
    • Scorecard with target and range for ys and xs
    • Plan for
      • Optimization
      • Verification (robustness and reliability checklist)
• Optimize product/process
  • In this step we:
    • Understand capability and stability of present processes
    • Understand the high time-in-service robustness of the present product
    • Minimize product sensitivity to noise, as required
    • Minimize process sensitivity to product and manufacturing variations, as required
  • Inputs
    • Present process capability (µ target and σ)
    • P diagram, with critical ys, xs, ns
    • Transfer function (as developed and understood to date)
    • Manufacturing and assembly process flow diagrams, maps
    • Gauge R&R capability studies
    • PFMEA and DFMEA data
    • Optimization plans, including noise management strategy
    • Verification plans: robustness and reliability checklist
  • Required technical activity
    • Optimize product and process
      • Minimize variability in y by selecting optimal nominals for xs
      • Optimize process to achieve appropriate σx
      • Ensure ease of assembly and manufacturability (in both steps above)
    • Eliminate specific failure modes
    • Update control plan
    • Peer review
  • Outputs
    • Transfer function
    • Scorecard with estimate of σy
    • Target nominal values identified for xs
    • Variability metric for CTS Y or related function, e.g., range, standard deviation, S/N ratio improvement
    • Tolerances specified for important characteristics
    • Short-term capability, “z” score
    • Long-term capability
    • Updated verification plans (robustness and reliability checklist)
    • Updated control plan
• Verify results
  • In this step we:
    • Assess actual performance, reliability, and manufacturing capability
    • Demonstrate customer-correlated performance over product life
  • Inputs
    • Updated verification plans (robustness and reliability checklist)
    • Scorecard with predicted values of y, σy based upon µx and σx
    • Historical design verification and reliability results
    • Control plan
  • Required technical activity
    • Enhance tests with key noise factors
    • Conduct physical and analytical performance and key life tests
    • Improve the ability of tests to discriminate good/bad commodities
    • Apply test strategy to maximize resource efficiency
    • Peer review
  • Outputs
    • Test results (product performance over time, i.e., Weibull, hazard plot, etc.)
    • Long-term process capability estimates
    • Scorecard with values of y, σy based on test data
    • Completed robustness and reliability checklist with demonstration matrix
    • Lessons learned captured in the system or component design specification, and so on


DFSS CHAMPION TRAINING

Every champion should be trained in the DFSS model of the Six Sigma methodology. Again, the intent is not to make them experts but to familiarize them with the process enough to ask questions and make sound decisions. The following outlines are designed for 2- and 4-day training programs. The reader will notice that we make no distinction between categories here because this outline is generic enough to accommodate all three. Obviously, it can and should be modified for specific situations.

DFSS – 2-DAY PROGRAM

Generally, this is used for the transactional champion.

Introduction
Review the DMAIC model
Gain familiarity with the DFSS methodology
Understand how Six Sigma integrates with current design practices
Understand how to select CTQs
Understand the importance of using data and the Six Sigma methodology vs. alternate approaches
Understand tolerancing and its importance to Six Sigma
Gain exposure to the tools and resources available to assist Six Sigma design efforts; remember to emphasize that project champions are responsible for familiarity and understanding, not expertise
Design for Six Sigma — a systematic methodology, with tools, training, and measurements, that enables us to design products/processes that meet customer expectations and can be produced at the Six Sigma level
Developing a robust design
Design for Six Sigma process
Six Sigma design process
Identify (measure) phase
  CTQ identification
  The QFD process
  CTQ identification — the QFD 2-step process
  FMEA process — relationship to QFD
  FMEA is used in all design phases
  Using FMEA: assessing the current situation
Characterize
  • Review ideal function
  • Identify and define the CTQs
Optimize (improve)
  Understanding process data
  Tolerance analysis
  • Six Sigma mechanical tolerancing


Verify design (control)
  Test and controls
Benefits

DFSS CHAMPION TRAINING OUTLINE — 4 DAYS

Generally, this is used for the technical or manufacturing champion. The champion needs to be familiar with, not an expert in, DFSS.

Overview of customer focus and business strategy
Understand the need for accurate CTQs
The source of most quality problems is design
The five sigma wall and the jump to the Six Sigma opportunity
DFSS definition: a systematic methodology, with tools, training, and measurements, that enables the organization to design products and processes that meet or exceed customer expectations, contribute to the profitability of the organization, and can be produced at the Six Sigma level
Differentiate between traditional and robust design — explain the difference between y = f(x) and y = f(x,n)
CTQ identification
Process capability
Tolerance analysis
Design for Six Sigma process — we want to:
  Understand the process standard deviation.
  Control the variation of the process — make it stable.
  Determine the Six Sigma tolerances.
  Correlate and confirm (for agreement) that customer expectations are met under real-world usage. (Quite often in a DFSS study we find that the present capability is not meeting our goals and does not always conform to customer needs. As a consequence, DFSS becomes an iterative process among designers, manufacturing, and customer needs.)
DFSS model
  Identify
    Customer requirements
    Technical requirements (both variables and limits)
  Characterize (design)
    Formulate concept design
    Identify potential risks
    For each CTQ, identify design parameters and noise
    Find critical design parameters and noise factors and their influence on the CTQ
    Develop a preliminary transfer function


  Optimize
    Do a trade-off analysis among your parameters, noise factors, and customer requirements to make sure that all CTQs are met.
    Assess parameter capability to achieve critical design parameters and noise factors and meet CTQ limits.
    Optimize the design to minimize sensitivity of CTQs to process parameters.
    Determine tolerances.
    Estimate capability of the new design (via simulation) and costs.
  Verify (validate)
    Test and validation.
    Assess performance, failure modes, reliability, and risks.
    If the design verification is okay, then proceed to process.
DFSS tools
  Identify
    Kano model
    QFD
    FMEA
    Benchmarking
    Competitive analysis
    Target costing
    Organizational requirements
    Customer input (marketing surveys, focus groups, warranty, etc.)
  Characterize (design)
    Risk assessment
    Gauge R&R
    Simulation (finite element analysis, Monte Carlo, solver, etc.)
    Process mapping
    Ideal function
    DOE — parameter design
    Reliability tools as appropriate
  Optimize
    Process capability models
    Robust design
    Simulation
    Tolerancing
    Traditional DMAIC tools
  Verify (validate)
    Accelerated testing
    Reliability analysis as appropriate
    FMEA
    Cost–benefit analysis
Overview of selected tools: the idea here is to emphasize maximization of customer satisfaction and organizational profitability by minimizing variation. The tools used in DFSS are intended to do that.


That is, in the identify phase, the focus is to tie the design as much as possible to the voice of the customer; therefore, the tools are used to identify what is important to the customer and also to prioritize those items.

Identify (measure) phase
CTQ identification: the select few, measurable key characteristics of a specific part, process, or specification that must be in statistical control to guarantee customer satisfaction. Therefore, they must be measured and analyzed on an ongoing basis. The factors to consider are a) the relationship to design and customer need and b) the technical risk related to meeting the specifications.
Kano model
  Identifies the basic, performance, and excitement characteristics
  Forces the issue of understanding which requirements are essential
QFD process
• Identifies CTQs that are the source of customer satisfaction
• Focuses on satisfaction
• Translates customer requirements to part CTQ characteristics by:
  Step 1. Collect and organize customer requirements. Work with marketing and other organizations (internal and external to the organization). Simulate the needs, wants, and expectations of the customer as though you were the customer. Storyboard customer requirements and organize them hierarchically. Rate the importance of each specific customer requirement on a Likert scale (1–5).
  Step 2. Collect and organize technical requirements. Research existing specifications, engineering procedures, and validation plans. Develop a list of specific “measurable” design requirements.
  Step 3. Map relationships between customer and technical requirements. Focus on the vital few rather than the trivial many. Use a scale of strong, moderate, weak, or none (9, 3, 1, or 0, respectively). A strong relationship is in direct correlation with customer satisfaction — a definite service call or nonpurchase issue. A moderate relationship may result in a service call or nonpurchase issue. A weak relationship has a very small chance of a service call or nonpurchase issue. Multiply the relationship weighting by the customer importance and sum the columns (a small numeric sketch follows this section). At this point the translation of technical requirements into CTQs takes place. (Warning: CTQs are great! However, we must be very careful in their identification because quite often not all of them are driven by customer satisfaction.)
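The Step 3 arithmetic is a weighted matrix product. Below is a minimal sketch with a hypothetical relationship matrix and importance ratings; the numbers are illustrative only.

```python
import numpy as np

# Hypothetical QFD relationship matrix: rows = customer requirements,
# columns = technical requirements; entries use the 9/3/1/0 scale.
relationships = np.array([
    [9, 3, 0],
    [3, 9, 1],
    [0, 1, 9],
])

# Customer importance ratings on a 1-5 Likert scale (illustrative)
importance = np.array([5, 3, 4])

# Multiply each row by its importance and sum the columns:
# the largest totals point at the technical requirements
# most likely to become CTQs.
scores = importance @ relationships
print(scores)  # [54 46 39]
```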

FMEA is used in all product steps. It assesses the current situation and may identify potential problems in both design and process. For a detailed explanation, see Volume 6 of this series. In a cursory review, the FMEA focuses on:
Product planning — design FMEA (product FMEA)
  Product goal setting

    Performance
    Reliability
    Cost
    Life
Product design — design FMEA (product FMEA)
  Optimization
  Analyze the preliminary transfer function
Process design — process FMEA
  Process sequencing
  Function flow
Production quality planning — process FMEA
  Quality plans
  Manufacturing
  Suppliers
Services — service FMEA
  Field service goal setting
  Maintainability
  Serviceability
  Spare part availability
Design (analyze) phase: the focus here is a) to ensure that the appropriate and applicable CTQs are identified and emphasized, b) to ensure that appropriate transfer functions of CTQs have been translated into the technical requirements, c) to verify the use of design simplification tools to reduce complexity, and d) to organize testing using design of experiments.
  Formulate concept design — use simple design methodology.
  Review ideal function — P diagram.
  Introduce the preliminary transfer function — y = f(x,n).
  Identify potential risks — use risk assessment and FMEA.
  For each technical requirement, identify design parameters and noise (CTQs) — use engineering analysis and simulation.
  Determine the CTQs and their influence on the technical requirements (transfer functions) — use systems engineering, DOE, and appropriate and applicable analysis tools.
  Demonstrate how the flow of transfer functions relates CTQs to the technical requirements. (Special note: it is very important for champions to understand this; otherwise, the project will likely fail. If the wrong CTQs are identified to create the transfer functions, then the transfer function is not accurately identified and there is no true understanding of the customer need, want, etc.)
Optimize (improve) phase: the focus here is to understand and use process capability and statistical tolerancing and to recognize processes that do not meet the Six Sigma requirements.
  Assess process capability to achieve critical design parameters and noise and to meet CTQ limits — use data analysis.


  Optimize the design to minimize sensitivity of CTQs to process parameters under a given noise — use appropriate and applicable databases, process capability models, and process flow charts.
  Conduct mistake-proofing — use warning labels, devices, and feedback controls.
  Determine tolerances — use statistical tolerancing, robust designs, and simulation.
  Perform trade-off analysis to ensure all CTQs are met.
  Estimate DFSS success and cost — use appropriate and applicable Six Sigma tools.
Understanding process data — three categories of understanding:
  Doing it well implies that a capability study is done and is acceptable.
  Doing it better implies understanding of current data.
  Doing it right implies the Six Sigma methodology.
  Understanding variation, short-term and long-term.
  Understanding rational subgrouping.
  Understanding that the short-term design goal is a Z value of 6.0.
  Understand long- vs. short-term capability, 4.5 vs. six sigma. Provide examples of long- and short-term capability. (Remember, for Six Sigma the capability is 6 × the short-term sigma.)
Tolerance analysis
  Make it work — parameter design
    Engineering calculations
    Finite element analysis
    DOE
    Test and evaluation
  Make it fit — tolerance analysis
    Process capability studies
    Minimum–maximum stack-up
    Root sum of squares
    Producibility studies
    (A sketch comparing the worst-case and root-sum-of-squares stack-ups follows.)
Control (validate) phase: the focus here is to make sure the design meets the customer’s requirements through increased testing, using formal tests and feedback of requirements to manufacturing and sourcing.
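For the “make it fit” items, the two classic stack-up models can be compared directly. The sketch below uses hypothetical part tolerances; worst case (min–max) adds tolerances linearly, while root sum of squares (RSS) adds them in quadrature.

```python
import math

def stackup(tolerances):
    """Compare the two classic tolerance stack-up models for an
    assembly gap: worst case adds tolerances linearly, while
    root sum of squares (RSS) adds them in quadrature."""
    worst_case = sum(tolerances)
    rss = math.sqrt(sum(t**2 for t in tolerances))
    return worst_case, rss

# Hypothetical four-part stack, each part toleranced at +/- the value shown
tols = [0.10, 0.05, 0.08, 0.12]
wc, rss = stackup(tols)
print(f"worst case = +/-{wc:.3f}, RSS = +/-{rss:.3f}")  # 0.350 vs ~0.183
```

The gap between the two results is why statistical tolerancing often permits wider, more producible part tolerances than a pure min–max analysis.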

PROJECT MEMBER AND BB DFSS TRAINING

The most intensive DFSS training is reserved for the project member and black belt. Within this training, however, there are typically two alternatives for the actual delivery of the DFSS methodology. The first is to do it on an as-needed basis, which implies that the training may last as long as the project does; the second is to offer the entire training as a block. In the first case, the material is presented and a facilitation of the project follows. The actual process is divided into the components of the DCOV model and generally lasts about 1 week per element. The second option is to do it in two 1-week blocks with about a 1-month break in between to facilitate implementation of the concepts learned. In week 1, the define and characterize phases are presented, and after the 1-month lapse the second week is spent


discussing the optimize and verify phases. The training material for both options is exactly the same; the only difference is that with the first option the specificity of the design project dictates the pace and application of the tools.

WEEK 1

Introductions
Agenda
Training ground rules:
• If you have any questions, please ask!
• Share your experiences.
• When we take frequent short breaks, please be prompt in returning so we can stay on schedule.
• There will be a number of team activities; please take an active role.
• Please use name tents.
• Listen as an ally.
• The goal is to complete your projects!
Exploring our values
Review the DMAIC model
• Background and history of Six Sigma
• Six Sigma scale
• Philosophy
• Significance of z scores
• The move from three to four to five to Six Sigma
The DFSS business case is a very important issue and should be discussed at length. Obviously, each organization has its own situation; however, the following items are a good starting point for discussion:
• Current customer perception of performance
• Future customer perception of performance
• Current warranty cost
• Future warranty cost
• Current customer satisfaction performance
• Future customer satisfaction performance
• Current competitive advantage
• Future competitive advantage
Link between DMAIC and DCOV
• DMAIC improves customer satisfaction by eliminating nonconformances after they have occurred. It does this in one of three primary ways:
  1. Statistical problem solving
  2. Process variability reduction
  3. Process capability assessment
• DCOV improves customer satisfaction by preventing or avoiding nonconformances, improving the actual design relative to cost and sensitivity to noise over time.


DFSS strategy
• Target current and future products
  • Beta projects
  • Breakthrough systems
  • New product or service lines
  • High leverage of current customer issues
• Depend on executive leadership
  • Educate management first
  • Engineering executive leads
  • Specific process participation
• Be compatible with Six Sigma
  • Y = f(x); Y = f(x,n)
  • Six Sigma infrastructure (DMAIC foundation or something similar, e.g., TQM, QOS, etc.)
  • Build on processes that work
• Organizational timing requirements
  • APQP
  • Robustness
DFSS deployment strategy
• Train executives (0 to 3 months)
• Conduct beta projects to test training logistics and communicate the DFSS methods to interested teams (2 to 4 months)
• Expand DFSS to other teams (4 to 6 months)
• Apply DFSS to all new products (6 to 12 months)
• Apply DFSS to the entire organization (12 to 18 months)
• DFSS starts with educating the management team in a 4- to 8-hour review covering the following:
  • Understand the process
  • Select projects for DFSS use
  • Identify the participants to receive comprehensive training
  • Identify an executive to champion each project
• The project team receives:
  • An overview of the process
  • Training in scorecard preparation
  • Tool training appropriate for the team’s current phase in the product-development process
Roles and responsibilities
• The champion role
  • Manages the process through the team.
  • Ensures that the team has all the resources necessary in time to follow the process.
  • Ensures the review process of the scorecards is incorporated within the overall executive review process for the team.
  • As the team approaches a new phase within the process requiring further training, the executive requests that training.


    (Typical additional training is in the areas of statistical tolerancing, FEA, FMEA, parameter and tolerance design, etc.)
• Executive role
  • Determines business goals, in terms of cost of poor quality, within particular areas of concern.
  • Works with the deployment director to select the few significant issues on which to use DFSS for both current issues and future programs.
  • Declares that he or she is the champion for these issues and will use DFSS to resolve them.
  • Schedules regular status reviews with the affected teams.
  • Ensures sufficient resources are available to the team to create success.
  • Integrates meetings with routine business reviews.
  • Requests additional training as needed.
• Deployment director
  • Trains executive teams in the 1-day DFSS overview.
  • Coordinates selection of DFSS projects assigned to project teams.
  • Works with DFSS teams within assigned projects to deliver training to affected executives, managers, black belts, and DFSS engineers.
  • Attends regular status reviews with the affected teams.
  • Coordinates the delivery of additional training as required.
  • Is a resource for resolving Six Sigma and DFSS issues.
• Technical manager
  • Oversees and coordinates the work of the DFSS process management teams (PMTs) to deliver the outcomes required at each milestone of the project.
  • Works with the DFSS black belt to define the work required by each PMT member to generate appropriate scorecard and milestone deliverables.
  • Provides the appropriate resources and facilities needed to meet the DFSS and milestone deliverables.
  • Acts as the project champion to the black belt to assure that the black belt’s assignments are appropriate to his skills and the use of DMAIC.
• Black belt
  • Works with the technical manager to define the work required by each PMT member to generate the appropriate scorecard and milestone deliverables.
  • Resolves issues associated with the DFSS project that are best solved using DMAIC.
  • Acts as a DMAIC resource to the team, including teaching concepts to the team as needed.
• Project member
  • Generates the outcomes normally associated with a project member.
  • Uses the DFSS methods to understand the underlying transfer function associated with the targeted system.


  • Generates the scorecard to predict the quality level of the targeted system at the appropriate milestone. This will entail gathering data regarding the product design geometry as well as manufacturing and assembly process capability.
DFSS is a methodology that identifies explicitly the relationship between design and the service, product, or process. Its intent is to satisfy the customer by either enhancing current designs or completely redesigning the current design.
What is new with DFSS
• Scorecard — perhaps the most important item in the entire DFSS methodology.
• Key QOS deliverable quantification — prioritization and usage of appropriate and applicable data.
• Transfer function — a function that characterizes critical-to-satisfaction metrics in terms of design parameters. The focus is on robustness, i.e., y = f(x,n), where y is the customer’s functionality, x is the requirement for that y, and n is the noise under which that x has to operate so that y is delivered.
Key characteristics of DFSS
• Data-driven.
• Provides leverage to existing tools within an organization, e.g., QOS, warranty, APQP, organizational verification system, organizational reliability programs, etc.
• Provides a template for applying statistical engineering, including simulation studies.
• Delivers quality to the product by focusing on the subsystem and moving into systems.
• Provides a vehicle for understanding of y = f(x,n) from components to subsystems to systems.
• Forces the use of systems engineering in the design process in conjunction with APQP timing requirements.
The scorecard: there are many ways to track the progress of the project. The team should develop its own, so that the following information may be captured:
  CTCs
  The transfer function that delivers the CTS attribute
  The transfer function quantified in such a way that it predicts the quality of delivery of the attribute
  Appropriate and applicable information about the project on the basis of which related business decisions may be made
A transfer function is the mathematical equation that allows you to design a quantitative relationship between dependent and independent variables. Transfer functions can be derived from two sources:


1. First principles
  • Known equations that describe functions (they may be identified from physics, including function structure flows)
  • Analytic models, simulation models (finite element analysis, Monte Carlo, etc.)
  • Drawings of systems and subsystems (evaluation of tolerancing, geometry of design, and mass considerations)
  • Design of experiments (classical design, Taguchi, response surface methodology, and multivariate analysis)
2. Empirical data
  • Correlation
  • Regression
  • Mathematical modeling
Uses of the transfer function — recognizing that not all variables should be included in a transfer function, we identify and focus on only the critical few xs that matter most for achieving y. We then use the transfer function primarily to:
• Estimate the mean and variance of Ys and ys
• Cascade the customer requirement
• Optimize and make trade-offs
• Forecast customer experience
• Set tolerances for significant xs
When a transfer function is not exactly known, there are two options:
1. Use surrogate data from similar designs.
2. Build a bundle of transfer functions; the rationale for such an idea is that we will never have 100% of all customer metrics in terms of transfer functions, just as we will never be able to design something with 100% reliability [recall: R(t) = 1 – F(t)]. To be sure, we may already know some of the transfer functions; however, if we are in doubt, we may combine subsystems so that the outcome of a system may be represented with the best available transfer function. (A sketch of an empirical fit follows.)
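For the empirical route, a least-squares fit is the usual starting point. The sketch below fits a hypothetical linear transfer function to simulated observations; the coefficients and data are illustrative, not from the text.

```python
import numpy as np

# When first principles are unavailable, an empirical transfer
# function y = b0 + b1*x1 + b2*x2 can be fit from observed data.
rng = np.random.default_rng(3)
x1 = rng.uniform(0, 1, 40)
x2 = rng.uniform(0, 1, 40)
y = 1.0 + 2.0 * x1 - 0.8 * x2 + rng.normal(0, 0.05, 40)

# Least-squares estimation of the coefficients
X = np.column_stack([np.ones_like(x1), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("b0, b1, b2 =", np.round(coef, 3))  # close to [1.0, 2.0, -0.8]
```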


  • Quality function deployment (QFD)
  • Conjoint analysis
  • TRIZ results
  • Design specifications
  • Business strategy
  • Competitive environment
  • Technology assessment
  • Market segmentation
  • Benchmarking
• Required technical activity
  • Select Ys
  • Define customer and product needs/requirements
  • Relate needs/requirements to customer satisfaction; benchmark
  • Prioritize needs/requirements to determine CTS Ys
  • Peer review
• Outputs
  • Kano analysis
  • CTS scorecards
  • Y relationship to customer satisfaction
  • Benchmarked CTSs
  • Targets and ranges for CTS Ys

Characterize the system
In this phase we:
• Flow CTS Ys down to lower-level ys, e.g., Y to y to y1, y2, … yn
• Relate ys to CTQ parameters (xs and ns), y = f(x1, … xk, n1, … nj)
• Characterize robustness opportunities
• Inputs
  • Kano diagram
  • CTS Ys, with targets and ranges
  • Customer satisfaction scorecard
  • Functional boundaries
  • Interfaces from VDS/SDS
  • Existing hardware FMEA data, etc.
• Required technical activity
  • Identify functions associated with CTSs
  • Deconstruct Y into contributing elements and identify xs and ns
  • Create function structure or other model for identified functions
  • Select ys that measure the intended function
  • Identify control and noise factors
  • Create general or explicit transfer function
  • Peer review
• Outputs
  • Function diagram
  • Mapping of Y to critical functions, ys
  • P diagram, including critical:
    • Technical metrics, ys


    • Control factors, xs
    • Noise factors, ns
  • Transfer function
  • Scorecard with target and range for ys and xs
  • Plan for
    • Optimization
    • Verification (robustness and reliability checklist)

Optimize product/process
In this step we:
• Understand capability and stability of present processes
• Understand the high time-in-service robustness of the present product
• Minimize product sensitivity to noise, as required
• Minimize process sensitivity to product and manufacturing variations, as required
• Inputs
  • Present process capability (µ target and σ)
  • P diagram, with critical ys, xs, ns
  • Transfer function (as developed and understood to date)
  • Manufacturing and assembly process flow diagrams, maps
  • Gauge R&R capability studies
  • PFMEA and DFMEA data
  • Optimization plans, including noise management strategy
  • Verification plans: robustness and reliability checklist
• Required technical activity
  • Optimize product and process
  • Minimize variability in y by selecting optimal nominals for xs
  • Optimize process to achieve appropriate σx
  • Ensure ease of assembly and manufacturability (in both steps above)
  • Eliminate specific failure modes
  • Update control plan
  • Peer review
• Outputs
  • Transfer function
  • Scorecard with estimate of σy
  • Target nominal values identified for xs
  • Variability metric for CTS Y or related function, e.g., range, standard deviation, S/N ratio improvement
  • Tolerances specified for important characteristics
  • Short-term capability, “z” score
  • Long-term capability
  • Updated verification plans (robustness and reliability checklist)
  • Updated control plan

Verify results
In this step we:




• Assess actual performance, reliability, and manufacturing capability
• Demonstrate customer-correlated performance over product life
• Inputs
  • Updated verification plans (robustness and reliability checklist)
  • Scorecard with predicted values of y, σy based upon µx and σx
  • Historical design verification and reliability results
  • Control plan
• Required technical activity
  • Enhance tests with key noise factors
  • Conduct physical and analytical performance and key life tests
  • Improve ability of tests to discriminate good/bad commodities
  • Apply test strategy to maximize resource efficiency
  • Peer review
• Outputs
  • Test results (product performance over time, i.e., Weibull, hazard plot, and so on)
  • Long-term process capability estimates
  • Scorecard with values of y, σy based on test data
  • Completed robustness and reliability checklist with demonstration matrix
  • Lessons learned captured in system or component design specification, etc.

DCOV MODEL IN DETAIL

The Define Phase
To begin the process of DFSS, the engineer or designer must understand the customer. In fact, the engineer must also understand the customer’s “drive” or “insight,” as some call it. This is because a customer focuses on the functionality of the product or service, and her judgment is based on emotional or rational responses. It turns out that the judgment is communicated as an intent, and the actual purchase is the result. The engineer or designer, on the other hand, focuses on requirements, which are the translation of functionality into engineering specifications. This is the function of the define phase. In fact, the translation turns out to be an iterative process using several tools and methodologies to come up with the now famous coding of the requirements as Y→y→x→x1 and so on.
Cascading is an important yet time-consuming process. But however systematic and thorough it may be, the fact remains that no step is done with 100% accuracy; the cascading is not always a one-to-one relationship, and indeed the customer’s information may not be available. By the same token, given limited and less-than-perfect information, we as engineers must optimize the information we do have. We do that by demanding correctness on critical factors, focusing only on critical factor transference, and designing our products, services, or processes to Six Sigma requirements based only on those factors. How do we do that? By collecting the appropriate and


applicable customer information, i.e., demographics, lifestyles, usage habits, and product or service preference, and by understanding the transfer function.
Tools to consider
• Customer understanding
  • Interviews — asking provocative questions
  • Observation — watching and recording what the customer does in daily life
  • Immersion — stepping into another person’s life
  • Introspection — imagining yourself in the role of the consumer
• Market research
  • Conjoint analysis
  • Discriminant analysis
  • Multivariate analysis
  • Warranty data
  • Library, web, professional sources
  • Focus groups
• Mind map — a way of capturing the customer environment or creating an image of the product’s or service’s use
  • Start with several ideas connected to one central function
  • Each of these ideas can be connected to other ideas of their own
• Activity diagrams are based on the user’s environment. They are usually constructed as a process flow diagram, except the arrows represent order of activities, and the boxes represent user activity. (Activities may be parallel or sequential.) Activity diagrams help in understanding the Ys.
  • Show life cycle
  • Represent activities that occur as the customer interacts with the product or service
• Kano model
  • Basic quality
  • Performance quality
  • Excitement quality
  • Quality over time
• Organizational knowledge
  • Warranty data
  • Surrogate data
  • Data mining
• QFD is a planning tool that incorporates the voice of the customer into features that satisfy the customer. QFD is an excellent framework to organize Ys, ys, and xs; however, it does not generate them. The idea of QFD is to
  • Capture relationships between customer wants and design variables
  • Deploy the design in such a way that the customer is satisfied
  • Make sure that the key relationships are made more rigorous through the establishment of transfer functions


Formulating CTSs: Use measurable CTSs by focusing on the customer’s environment, emotions, and activities to draft the first list of items that are critical to satisfaction. Make sure that you as an engineer or designer understand that criticality is a relative term. Criticality is an issue of measurement, and to be effective one must understand the theory of mathematical comparisons (measurement theory). Effective critical items are those items that are measured on a ratio scale.
• Ordinal scales
• Interval scales
• Numerical scales — binary scale (0 and 1)
• Ratio scales

Deliverables/checklist of define phase — have you considered the following?
• Form a cross-functional team
• Determine project scope
• Understand customer needs
• Identify corporate and regulatory requirements
• Consider product strategy and priorities
• Analyze quality history
• Develop a Kano model
• Identify CTS Ys or surrogates as appropriate
• Document relationships between customer satisfaction and CTS identified items
• Complete a CTS scorecard
• Conduct peer review
• Obtain project champion approval
• Identify the transfer function

The Characterize Phase
This phase requires the output of the define phase — particularly the Kano diagram — since we are trying to establish four particular items: 1) modeling function (functions vs. constraints), 2) function structures (activity diagrams, flow chains, Y-function matrix, function-function matrix), 3) ideal function, and 4) metrics for ys (function measurement, Y-y matrix). How do we do this? Tools to consider:
• Concept selection
• Pugh selection
• Value analysis
• System diagram
• Structure matrix
• Functional flow
• Interface
• QFD
• TRIZ
• Conjoint analysis


• Robustness
• Reliability checklist
• Signal process flow diagrams
• Axiomatic designs
• P diagram
• Validation
• Verification
• Specifications

Function modeling — a form-independent statement of what a product or service does, usually expressed as a verb-noun pair or an active verb-noun phrase pair. (For more information on this, see Volume 6.) There are two options for using function modeling: 1) function structures — an input/output model of functional nodes interconnected by interaction flows — and 2) function trees — a hierarchical breakdown of the overall function. All functions may have subfunctions, and several options of tools exist for defining them. Some are: FAST diagrams, bottom-up trees, top-bottom trees, function structures, design structure matrices, finite-state machines, Hartley-Pirbhai diagrams, entity relation diagrams, and others. Why do function modeling? Because it:
• Provides the structure to enable mapping of Ys to ys to xs
• Enhances variability analysis through decomposition
• Ensures complete and accurate identification of all factors by concentrating on what and not how
• Provides a direct relationship to customer needs
• Provides a physical model of the system
• Links the functions with the left portion of the design FMEA

Functions vs. constraints
• Whereas a function is what the product or service does, a constraint is a property of the system, not something the system does. For example, cost, reliability, weight, appearance, etc. are all constraints, since none of them provides a function to the item. Typically, all elements in the system contribute to a constraint, not just one element. We cannot, therefore, add on a subsystem to improve the constraint. However, what we can do is model the constraints with metrics.
• Function structures may be developed for both existing and new designs. In the case of existing systems, the majority of the work may be transferred from the FMEA. In the case of a new design, the following seven steps are recommended:
1. Create an overall function model for the product. (Remember, the flows are: energy, material, information.)
   • Define the overall function model — top-level definition
   • Define the inputs and outputs from the Ys
2. Develop an activity diagram
   • Define the beginning and termination points of the life cycle
   • Establish user activities
   • Clearly distinguish parallel and linear activities


   • Define the system boundary of the product or service
   • For each user activity, compare the Ys and ask what device functions are needed. (Very important: if the activity is not important, or you do not know how to measure it, do not include it in the activity diagram.)
3. Map Ys (customer needs) to input flows. For each Y, relate the system’s input flows to the Y. These input flows must be acted upon by the product to achieve the Y. If there are subfunctions, list those too. List the importance level for each Y.
4. For each flow, create a function chain from input to output. (Hint: think of yourself as the flow going through the system.) Start with flows for the most important customer needs. Play act the flow. List the answers as subfunctions chained together by the flows. Carry the chain forward until the flow leaves the system or until the flow needs to interact with another flow.
5. Combine the chains into an overall function structure. Combine the chains by connecting flows between each sequence, adding subfunctions that interact or provide control states, or removing subfunctions that are redundant. Combine and refine; end based on:
   • Are the subfunctions atomic, i.e., can they be fulfilled by a single, basic solution principle that satisfies the function?
   • Is the level of detail (granularity) sufficient to address the customer needs?
6. Validate the functional decomposition. This is a preliminary validation. However, it is important, and for each validated item there must be either a check, guideline, or action associated with it.
   • Are the user activities covered by the functional model?
   • Are physical laws maintained?
   • Are all functions independent?
   • Are all subfunctions atomic?
7. Verify the model against the Ys (customer needs). A substep of validation is verifying that the critical Ys are represented in the functional model. Identify the subfunctions or chain of subfunctions that satisfy each Y. (Does a main function exist in this chain that addresses the Y?)

IDEAL FUNCTION AND P-DIAGRAM

When formulating the transfer function, the challenge is to select metrics for ys that represent intended function whenever possible. Then, optimizing for y will maximize the intended function and automatically minimize energy flowing to unintended functions; the metrics may come from energy, material, or information flows. To complete the transfer function y = f(x,n) we need to identify control factors (xs) and


noise factors (ns). The energy, material, and information flows in function structures will help identify potential xs and ns. The final list of signal-control and noise factors is typically captured in the P diagram.
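Since the transfer function recurs throughout the DCOV phases, a small illustration may help. The sketch below is a hypothetical example, not part of the formal methodology: it fits an empirical transfer function y = f(x, n) to invented test data by least squares (Python/NumPy) and reads off the sensitivity ∂y/∂x that the optimize phase later works on. The factor names, data, and model form are all assumptions made for illustration.

    import numpy as np

    # Hypothetical bench data: x = control factor setting, n = noise level, y = response.
    x = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
    n = np.array([0.2, -0.1, 0.3, 0.0, -0.2, 0.1, -0.3, 0.2])
    y = np.array([2.1, 2.9, 4.2, 5.4, 6.8, 8.9, 10.8, 13.5])

    # Least-squares fit of y = b0 + b1*x + b2*x**2 + b3*n (quadratic in x, linear in n).
    X = np.column_stack([np.ones_like(x), x, x**2, n])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1, b2, b3 = coef

    def transfer(x, n):
        """Fitted empirical transfer function y = f(x, n)."""
        return b0 + b1 * x + b2 * x**2 + b3 * n

    # Sensitivity dy/dx at a nominal setting: the slope robustness work tries to flatten.
    x_nom = 2.5
    print("predicted y:", transfer(x_nom, 0.0))
    print("dy/dx at nominal:", b1 + 2 * b2 * x_nom)

In practice the model form would be chosen from the function structure and P diagram rather than assumed, but the mechanics of deriving a transfer function from empirical data are exactly these.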

IDENTIFYING TECHNICAL METRICS

The major objective of the characterize phase is to convert customer Ys into the engineering world as critical metrics — ys. The process for this conversion is to
• Relate customer needs (Ys) to functions, then determine the criticality of each function. (Current information: Ys → functions from function structures. To help us in this, we may use the “map function of needs,” which shows the flow from Ys to the flow to function chain to primary function.)
• Create the matrix representation — the creation of a Y matrix to determine the criticality of each function. (Rows are Ys; columns are functions. A cell entry shows that there is a relationship between Y and function. The degree of this relationship is identified with a rating of importance.)
• Create a Y-function matrix and QFD. Very important to note: the Y-function matrix is not the QFD matrix! These are two distinct matrices. The first one is the Y-function matrix; the second is the function-y matrix. Creating both of these will form the QFD matrix. The flow is represented as: a) Ys to functions, b) functions to ys, and c) Ys to ys, resulting in the QFD Y-to-y matrix.
• Create customer need importance matrix — rows are Ys; columns are functions.
• Distribute importance — use importance scale criteria. A typical one is 1–5.
• Calculate function weights — a standard QFD practice. However, this is not yet an allocation to ensure criticality. Make sure you check the extreme values. (We are looking for a one-to-one relationship — one function, one Y.) To check, ask the following questions:
  1. Does each important Y have at least one important function to cover it?
  2. Is there only one Y associated with each important function?
• The meaning of the relationship is to tell us how much proportional increase in Y is gained with an increase in functional performance, as opposed to how much the Y is related to the function.
• Identify critical functions and interfaces. The steps are:
  a) Measure functions — for each critical function, examine the input/output subfunctions; examine the input and transformed (output) flows. The result could be a y.
  b) Measure interfaces — for each high interface in the matrix ask: 1) what is the flow? 2) why is there a problem or benefit? 3) what can be measured to quantify this flow? These are possible ys.
  c) Organize all possible ys into an effective set of ys — take the possible ys from all functions and interactions and create a new


relationship matrix with rows Y (customer needs) and columns y (x,n), and rank them accordingly.
  d) Check for basic quality — for each function and interface ask: if this function or interface fails, how does the customer become dissatisfied? The answer is a new possible Y, the failure mode. In fact, it measures a latent, basic quality. For each new Y determine its importance.

Critical to quality — we identify the CTQs for xs and ns by: a) identifying y importance, b) flow tracing, and c) understanding the system boundary. Typical tools used in this phase are: P diagrams, DOE, correlation, regression, flow analysis, known equations, simulation tools, sensitivity analysis.
• Sensitivity analysis works with numerical models, not hardware. It determines how a solution varies locally about a point in the input space (noise or control). The mathematical question is whether the derivative is large or small; more precisely, is the derivative times the variation range large or small? The reader will notice that this sensitivity analysis is quite different from the DOE sensitivity analysis, which runs the inputs throughout their entire ranges, not through small and local variations. Three approaches are typical in conducting sensitivity analysis: 1) take mathematical derivatives and analyze the equation, 2) do limit analysis by hand, and 3) use simulation software — Monte Carlo or something else.

Generating concepts — when existing designs are inherently nonrobust, new concepts are required. After all, the goal of Six Sigma is to design systems and products that provide precision performance without precision components. This matters because precision components are expensive. So, rather than jumping on the bandwagon of changing components — which, by the way, we can change at any time — we should be looking at changes in a) parameter optimization (Taguchi is very useful here), b) improvement in the manufacturing process, c) increase in component precision, and d) tolerance tightening. The basic process for generating concepts is: 1) understand the primary customer need and engineering specifications, 2) decompose the product functions, 3) search for solutions for product functions and architecture, and 4) combine solutions into concept variants. Typical intuitive concept generation methods are:
• Brainstorming
• Mind mapping
• Method 6–3–5 — the process is conducted based on the following seven steps:
  1. Arrange team members around a table.
  2. Each member writes three ideas for the primary function — usually five or fewer. The ideas are expressed clearly and positioned on a large piece of paper in thirds or fourths, depending on the number of ideas.
  3. After x minutes of work on the concepts, members pass their ideas to the person on their right.


  4. For the next x minutes, team members modify, without erasing, the ideas on the sheet, with the option of adding an entirely new concept.
  5. Passing of the idea sheets continues until the original sheets return, and the round ends. With sufficient time intervals, the process is repeated five times.
  6. After generating ideas for each of the primary product functions, the entire process is repeated to develop alternative layouts and combined concept variants that utilize a summary of the solution principles generated for each function.
  7. The ideas are accumulated and processed accordingly.
• Morphological analysis — the process for this analysis is very simple, and it involves a) listing important functions, b) listing each important subfunction, c) identifying the current solution, d) generating new ideas for each subfunction, and e) configuring and laying out permutations.
• Directed and logical methods
  • Design catalogs
  • Functional tolerancing
  • TRIZ
• Evaluating concepts
  • Pugh analysis — a process of evaluating design concepts against identified criteria, using the analysis to identify additional alternatives and selecting one or more concepts for further refinement or development. It is used as the basis for trade analysis.
  • Trade study — a technical analysis comparing the technical, cost, risk, and performance attributes of two or more competing alternative solutions against a predefined set of evaluation criteria, in order to define the optimum solution. The process is:
    • Define decision
      • At correct level.
      • Consistent with prior decisions based on user needs.
    • Define evaluation criteria
      • Measurable and understandable (results oriented).
      • As mutually independent as possible (avoid redundancies).
      • Consistent with policies and regulations and organizational (internal and external) constraints.
    • Categorize evaluation into shalls and targets
      • Shall — mandatory for success.
      • Target — desirable, but not mandatory.
    • Determine weighting factors and obtain team buy-in
      • Determine relative importance for each must and want.
      • Obtain multiple opinions from team on each weighting factor.
      • Assign weighting values.
      • Assess factors and weights.
      • Force rank.



    • List set of alternative solutions
      • List Pugh analysis.
      • Review prior similar designs or products.
    • Define raw score for each function
      • Base scoring on data when possible.
      • Use expert engineering judgment.
      • Obtain several opinions and combine to obtain a value.
    • Screen alternatives through “shalls”
      • Be realistic in your requirements.
      • You may not have any “shalls,” which is OK.
      • Eliminate any alternative that does not meet all “shall” criteria.
      • Keep “close calls” available for further study.
    • Compute weighted score for each alternative (a worked sketch appears after the characterize-phase checklist below)
      • Multiply raw value by weighting factor.
      • Record the weighted value in a table.
      • Sum factors for each alternative.
    • Assess risk of implementing highest-value solutions based on
      • Customer preference
      • Investment
      • Product/service performance
      • Project
      • Introduction
      • Dollar overrun
    • Understand the sensitivity of the score to the raw values and weighting factors
    • Make recommendation or decision

Verification and optimization planning
• Create program-specific reliability and robustness checklist (a very good practice to follow)
• Develop a P diagram — identify all parameters (factors and noises)
• Generate experimental (Latin hypercube) samples (DOE)
• Run a CAE model to calculate response
• Create response surface model (RSM)
• Perform reliability assessment (probability analysis)
• Perform robust optimization (minimize variability (σ) and adjust to target (µ))
• List CTS Ys as functional requirements
• Identify potential error states (failure modes) from P diagram or FMEA
• List xs and ys as design parameters that deliver the CTS Ys
• Identify potential noises from the five generic categories
• Generate failure mode vs. noise interaction matrix
• Initiate noise factor management strategy for key characteristics controlled by manufacturing (piece-to-piece)
• Initiate potential test strategy, capturing information from design verification


• Indicate relationship between failure modes and test strategy (test strategy must address failures)
• Indicate relationship between noises and test strategy (test strategy must address important noises)
• Run tests and show results

Deliverables/checklist of the characterize phase
• Add necessary suppliers to cross-functional team
• Model system function
• Identify critical functions and interfaces (interfaces are: energy transfer, physical proximity, information transfer, material)
• Select metrics related to intended function
• Establish transfer function relationships both in terms of Y = f(x) and Y = f(x,n)
• Formulate noise management strategy
• Initiate verification planning using reliability and robustness checklist (remember: this is an organization-dependent scorecard)
• Enter data in design and manufacturing scorecard (this is also organization dependent)
• Conduct peer review
• Obtain project champion approval
• Document information related to transfer functions
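Returning to the trade study described earlier in this phase, the weighted-score arithmetic is simple enough to automate. The following is a minimal sketch in Python; the criteria, weights, raw scores, and concept names are all hypothetical placeholders, and a real study would draw them from the team's force-ranked buy-in.

    # Hypothetical trade study: three concepts scored against weighted criteria.
    criteria = {"performance": 5, "cost": 3, "serviceability": 2}   # weighting factors

    concepts = {
        "concept_A": {"shalls_met": True,  "performance": 8, "cost": 5, "serviceability": 7},
        "concept_B": {"shalls_met": True,  "performance": 6, "cost": 9, "serviceability": 6},
        "concept_C": {"shalls_met": False, "performance": 9, "cost": 4, "serviceability": 8},
    }

    # Screen first: an alternative that fails any "shall" is eliminated before scoring.
    survivors = {name: c for name, c in concepts.items() if c["shalls_met"]}

    # Weighted score = sum over criteria of (raw value x weighting factor).
    for name, c in survivors.items():
        total = sum(c[crit] * w for crit, w in criteria.items())
        print(name, "weighted score =", total)

Note that concept_C is screened out before scoring even though it has the highest raw performance score, which is exactly the behavior the shall/target split is meant to enforce.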

WEEK 2

The Optimize Phase
Review week 1
• General content questions
• Specific project questions

Definition of optimize — the classic dictionary definition is: to make as good or effective as possible, or to make the most effective use of. In the DFSS methodology this means that we have to answer three questions: 1) How can we satisfy multiple constraints? 2) What operating region is least sensitive to noise? and 3) What settings of the factors give us the target response? To answer these questions, the typical tools and methodologies we use are:
• Parameter and tolerance design
• Simulation
• Taguchi
• Excel’s solver
• Statistical tolerancing
• QLF
• Design and process FMEA
• Robustness
• Reliability checklist


• Process capability
• Gauge R&R
• Control plan

The purpose, then, of the optimize phase is to
• Satisfy and delight the customer
• Optimize to achieve desired target and variability levels in metrics critical to satisfaction, i.e., Ys.

The greatest opportunity for optimization is early in the design. In this stage, the range over which nominal values of xs may vary is at its widest, and so, therefore, is the opportunity for optimization. We optimize by searching for control factor (x) settings that satisfy the constraints, make responses (Y or y) insensitive to noise, and ultimately achieve the target response. The requirements for optimization then are:
• For hardware experimentation
  • Range
  • Shifts or patterns over time
  • Physical understanding of failure modes induced by noise
• For analytic experimentation
  • Mean
  • Standard deviation or range
  • Shift or patterns over time

Optimization approaches
• Mathematical programming. What is mathematical programming? In a mathematical programming or optimization problem, we seek to minimize or maximize a real function of real or integer variables, subject to constraints on the variables. The term mathematical programming refers to the study of these problems: their mathematical properties, the development and implementation of algorithms to solve them, and the application of these algorithms to real-world problems. Note in particular that the word “programming” does not specifically refer to computer programming. In fact, the term mathematical programming was coined before the word programming became closely associated with computer software. This confusion is sometimes avoided by using the term optimization as a synonym for mathematical programming. In DFSS we use this approach with an explicit transfer function or model that can be incorporated into an automated optimization algorithm. This is where Excel’s solver will work very well. For example: given y = f(x1, x2, … xk, n1, n2, … nm), minimize σy such that y = T (target) or T – b < y < T + b; lower range limitj < xj < upper range limitj; lower range limitj < nj < upper range limitj; by changing x1, x2, … xk. Excel’s solver will do the rest.
• Experimentation (statistical methods)
  • Orthogonal arrays
  • Response surface methods
  • Sequential experimentation
  • Design and analysis of computer experiments


  • Heuristics
  • Genetic algorithms (for a plethora of information see: www.aic.nrl.navy.mil/galist)

Variability: transmission from x to y — In Volume 6 we talked about the mathematics of DFSS sigma. Let us recall that

σy = [(∂y/∂x1)² σ²x1 + (∂y/∂x2)² σ²x2 + …]^(1/2)

While the focus of the DMAIC model is to reduce σ²x1 and σ²x2 (variability), the focus of the DCOV is to reduce the (∂y/∂x) terms (sensitivity). This is very important, and it is why we use the partial derivatives with respect to the xs in defining the Ys. Of course, if the transfer function is linear, then the only thing we can do is control variability. Needless to say, in most cases we deal with polynomials, and that is why DOE — and especially parameter design — is very important in any DFSS endeavor. We want to exploit nonlinearities in the transfer function.
• Variability, noise, reliability, and robustness
  • Variability — the performance of products varies around the intended target due to variability (noise) in manufacturing, operating conditions, etc.
  • Noise — manufacturing, deterioration, neighboring systems, customer usage, and environment.
  • Reliability — the probability of a product performing its intended function for a specified life under the operating conditions encountered.
  • Robustness — the capability of a product to perform its intended function consistently in the presence of noise during its intended life. Robust designs produce tight distributions around a target, which minimizes quality loss. (Note: prototype analysis does not verify robustness. It verifies functionality on a single, usually hand-selected, sample.)
• Taguchi’s optimization rules
  • Step 1. Reduce variability
  • Step 2. Adjust to target (mean or slope)
• Traditional DOE
  • Full factorial experimental designs
  • Fractional experimental designs
• Robust DOE — Taguchi
  • Ideal function
  • Noise strategy
  • Signal-to-noise ratio
  • Two-step optimization
  • Confirmation
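To make the variance-transmission equation and the "minimize σy" search concrete, here is a minimal sketch (Python with SciPy). The transfer function, design window, and σx value are hypothetical assumptions for illustration; the point is the mechanics of estimating σy from the local derivative and then hunting for the least-sensitive nominal, which is what a spreadsheet solver does as well.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical nonlinear transfer function of one control factor.
    def f(x):
        return (x - 3.0) ** 2 + 2.0 * x

    def dfdx(x, h=1e-6):
        # Numerical partial derivative (sensitivity) of y with respect to x.
        return (f(x + h) - f(x - h)) / (2 * h)

    sigma_x = 0.1  # manufacturing variation of x, assumed known

    def sigma_y(x):
        # First-order variance transmission with one x: sigma_y = |dy/dx| * sigma_x.
        return abs(dfdx(x)) * sigma_x

    # Search the allowed design window for the nominal that minimizes sigma_y.
    res = minimize_scalar(sigma_y, bounds=(0.0, 6.0), method="bounded")
    print(f"least-sensitive nominal x = {res.x:.3f}, sigma_y = {res.fun:.4f}")

Because the assumed f(x) is quadratic, the search settles where the slope vanishes; the same variation in x then transmits almost no variation into y, which is the essence of parameter design.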


Differences between analytical and physical experimentation
• There is no uncertainty (random error) in the experimentation. Factorial experimentation and its statistical analysis are designed to cope with this issue. Replication is useless.
• Usage of the same-level variable more than once generates “pseudo-replicates” and may be a waste of time in analytical experiments.
• Experiment logistics are not an issue (parameter values can be adjusted more easily than in hardware), but the minimum number of computer runs is still very important.
• Nonlinearities can be fully exploited, as opposed to linear models (two- or three-level factorial experiments).
• In addition to the response, sensitivities (derivatives) are easily attainable in some cases.
• Iterative optimization is practical and often preferred.
• Statistical information on noise factors is assumed known (mean, standard deviation, etc.).
• When noise-factor representations are limited in the model, surrogate noises must be used.
• Parameter and tolerance designs can easily be performed simultaneously.

Deterministic analysis
• Inputs
  • Nominal or worst-case values of dimensions, materials, loads, etc.
• Process
  • Finite element analysis
  • Numerical model
  • Multivariate techniques
  • Regression equation
• Outputs
  • Point estimates (performance, life)
  • Safety factor or design margin
• Limitations
  • Limited incorporation of real-world variability
  • Lack of up-front robustness
  • Poor correlation to hardware test performance
  • No opportunity to do robust design

Analytical robustness
• Inputs
  • Nominal or worst-case values of dimensions, materials, loads, etc.
• Process
  • Finite element analysis
  • Numerical model
  • Multivariate techniques
  • Regression equation


• Outputs
  • Point estimates (performance, life)
  • Safety factor or design margin
• Challenges in analytical robust design
  • Many CAE models have limited capability to represent real-world noise; therefore, surrogate data must be used.
  • Stochastic information on noise factors is assumed known.
  • Many CAE models are computationally expensive.
  • A large number of design parameters and a large design space are often considered; therefore, a nonlinear relationship between input and output is common.
  • Many CAE models focus on error states; therefore, a large-scale multiobjective optimization is often needed.
  • In early product development, where analytical robustness is applied, design objectives and constraints are still fluid and will more likely change.

Understanding process data
• Rational subgroup
• Center of means
• Short- vs. long-term capability
• Process stability — a process is considered stable when it consists of only common cause variation. (Note: one cannot design for Six Sigma if the process is unstable.)
• In control vs. out of control
  • General discussion
• Discussion of quality loss function (QLF)
• Experimental strategies
  • Parameter design
  • Tolerance design
  • ANOVA
  • Discriminant analysis
• Statistical tolerancing — a bundle of tools and methodologies used in DFSS to optimize the design. Typical tools used are
  • DOE
  • Latin hypercube sampling — a sampling method for uniform spread of design points in the region without replicates; it can be considered an extension of Latin squares to multiple dimensions.
  • Multivariate adaptive regression splines (MARS) — one of several modern regression tools that can help analysts quickly develop superior predictive models. Suited for linear and logistic regression, MARS automates the model specification search, including variable selection, variable transformation, interaction detection, missing-value handling, and model validation. MARS is a nonparametric modeling tool that is equally adept at developing


simple or highly nonlinear models. MARS rapidly separates effects that are applicable to an entire data set from those that apply only to specific subsets, automatically tracking nonlinear effects with spline basis functions. Models enhanced with MARS-created variables are typically far more accurate than handcrafted models. In essence, MARS is a flexible nonlinear regression method that automates the building of predictive models. It automatically builds a model-free (nonparametric) nonlinear model based on a set of data. This automatic and model-free feature is preferable when there is a lack of knowledge of a possible parametric model, for which the construction of parametric nonlinear regressions or differential equations is very difficult or time-consuming. Its general structure is:

f(x) = b0 + Σi bi Bi(xi) + Σm bm Bm(xi, xj) + Σn bn Bn(xi, xj, xk) + …

where
b0 = constant term
bi, bm, bn = coefficients
Bi(xi) = a single-term function
Bm(xi, xj) = a two-term interaction function
Bn(xi, xj, xk) = a three-term interaction function

  • Gaussian stochastic kriging (GSK) — the modeling approach that treats bias (systematic departure of the response surface from a linear model) as the realization of a stationary random function. In DFSS, the GSK model is used to improve lack of fit [ε(x)] and can be represented as follows: ε(x) = β + z(x), where β is an average error and z(x) is a realization of a random Gaussian process, Z(x), with zero mean and covariance between two sets of inputs xi and xj given by

Cov(z(xi), z(xj)) = σ² R(xi, xj);  R(xi, xj) = exp{–Σ(k=1 to d) θk (xik – xjk)^p}

  • Adaptive sequential experiments — used to build a surrogate model sequentially with a desired accuracy. This helps avoid oversampling and is very beneficial in DFSS studies with expensive computer experiments. However, sometimes when validating a surrogate MARS model with a new sample, there are cases in which models generated sequentially were worse than models generated with completely new DOE


matrices. If this happens, it may be necessary to perform a trade-off study between the cost of additional CAE runs and the accuracy of the surrogate model.
  • Model validation — used to ensure the required accuracy of a surrogate model compared with the corresponding CAE model.

Reliability and robustness assessment
• Concepts
  • Limit state — a demarcation in design variable space to separate acceptable and unacceptable design. Mathematically, the limit state of a system function, f(x), may be expressed as: L = f(x) = f(x1, x2, …, xn), where x1, x2, …, xn are design variables.
  • Most probable point (MPP) — defined in standard normal variable space. It is the point on the limit state with the maximum joint probability density. It also has the minimum distance to the origin from the limit state. Probability assessment using the MPP: let u be a vector of n random variables in standard normal space. The MPP is the solution to the following optimization problem: minimize u • u subject to g(u) = L. The distance β from the origin to the MPP may be used to assess the probability of g(u) > L:

Pr[g(u) > L] = Φ(–β), where β² = u1*² + u2*² + … + un*²

and Φ(.) is the cumulative standard normal distribution function.
  • When a variable does not follow the standard normal distribution, it is transformed into a standard normal variable by quantile–quantile relationships. In mathematical terms: F(xi) = Φ(ui), where F(xi) is the cumulative probability distribution function and Φ(ui) is the cumulative standard normal distribution function. Once the work is done in u space, xi can be found for the corresponding ui by inverse transformation using the quantile–quantile relationship: xi = Fi⁻¹[Φ(ui)].
  • % contribution — the overall impact of the variation of a variable on the variation of the functional performance. Mathematically, it is defined as

% contribution (xi) = [(∂y/∂xi)² σ²xi] / [(∂y/∂x1)² σ²x1 + (∂y/∂x2)² σ²x2 + … + (∂y/∂xn)² σ²xn] × 100%

and it consists of two key elements:
1. (Deterministic) Sensitivity of the variable to the performance function, ∂y/∂xi. It is the slope of the function at the point of interest with respect to variable xi.
2. Variability of the variable, σ²xi.
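Computing % contribution is a one-liner once the sensitivities and standard deviations are tabulated. A minimal sketch follows (Python/NumPy); the sensitivity and sigma values are hypothetical numbers chosen for illustration.

    import numpy as np

    # Hypothetical sensitivities dy/dxi and standard deviations for three xs.
    sens = np.array([2.0, -0.5, 1.2])     # dy/dx1, dy/dx2, dy/dx3 at the design point
    sigma = np.array([0.10, 0.40, 0.05])  # sigma_x1, sigma_x2, sigma_x3

    terms = sens**2 * sigma**2                   # (dy/dxi)^2 * sigma_xi^2
    contribution = 100.0 * terms / terms.sum()   # percent of total transmitted variance

    for i, c in enumerate(contribution, start=1):
        print(f"x{i}: {c:.1f}% of the variation in y")

Note how a modest sensitivity paired with a large σx can dominate the result, which is exactly why both elements must be examined together.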


Methods to quantify probability of failure
• Root sum square (RSS) — the first-order approximation method, also called the first-order reliability method (FORM). The statistical distribution of the response is approximated by a normal distribution. The mean value of the response is the function of the mean values of the variables, and the standard deviation of the response is the root sum square of the products of the partial derivatives and the standard deviations of the variables:

µy ≅ f(µx1, µx2, …, µxn);  σy = [(∂f/∂x1)² σ²x1 + (∂f/∂x2)² σ²x2 + … + (∂f/∂xn)² σ²xn]^(1/2)

P(y ≤ L) = Φ[(L – µy)/σy]

It is important to note that, from the properties of the normal distribution, RSS provides an exact solution when all the design variables are independent and normally distributed and the performance function is linear. For all other cases, RSS provides an approximation.
• Successive linear approximation method (SLAM) — a general-purpose algorithm for finding the MPP. Once the MPP is found, the probability of failure can be assessed.
• Monte Carlo and quasi Monte Carlo
  • Monte Carlo is a statistical method to quantify the statistical characteristics of a functional performance through sampling of the statistical characteristics of its variables.
  • Quasi Monte Carlo is a statistical method that uses quasi-random sequences, which have better uniformity and help convergence compared with random sequences. (It is much faster than the traditional Monte Carlo approach in integrating high-dimensional problems.)

Functions of the robustness and reliability checklist. We have already mentioned this checklist several times and have noted that this is an organization-dependent item. Because of its importance, let us identify here some of the applications of this checklist:
• Identify functional requirements and the associated error states that are a condensed quality history of prioritized issues.
• Identify important noise factors associated with error states and assess their weak-to-strong linkage to each error state.
• Map noises to the test plan to ensure the system is tested against critical noises and that unresolved error states are indicated by the tests. (Important noise factors include the manufacturing tolerance around some critical xs.)


• Select a noise-factor strategy to ensure the design is robust to critical noise factors. (This is an engineering step, which could include design change and development work.)
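The RSS and Monte Carlo methods above can be cross-checked against each other on any explicit transfer function. A minimal sketch follows (Python with SciPy); the performance function, means, standard deviations, and limit are hypothetical, and the linear case is chosen deliberately so that RSS is exact and the two answers should agree.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Hypothetical linear performance function of two independent normal variables.
    def g(x1, x2):
        return 3.0 * x1 + 2.0 * x2

    mu, sigma = np.array([10.0, 5.0]), np.array([0.5, 0.3])
    L = 38.0  # limit

    # RSS / FORM: partial derivatives are the constants 3 and 2, so this is exact here.
    mu_y = g(*mu)
    sigma_y = np.sqrt((3.0 * sigma[0]) ** 2 + (2.0 * sigma[1]) ** 2)
    print("RSS:         P(y <= L) =", norm.cdf((L - mu_y) / sigma_y))

    # Monte Carlo: sample the variables and count how often y falls at or below L.
    x1 = rng.normal(mu[0], sigma[0], 200_000)
    x2 = rng.normal(mu[1], sigma[1], 200_000)
    print("Monte Carlo: P(y <= L) =", np.mean(g(x1, x2) <= L))

For a nonlinear g or non-normal variables, the two estimates would diverge, and the Monte Carlo (or quasi Monte Carlo) answer would be the one to trust.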

DESIGN FOR PRODUCIBILITY

In Volume 6, we discussed the issue of producibility. The main goal was to make a product or service insensitive to noise. We recommended that some appropriate tools may be DOE, parameter design, statistical tolerancing, FMEA, GDT, DFM/DFA, TRIZ, mistake proofing, control plans, etc. We also suggested that when reviewing program assumptions, inconsistencies between product and process should be identified so that the “gap” may be identified. (Both DFMEA and PFMEA are excellent tools to identify opportunities for closing the “gap.” For DFSS purposes, we are interested in comparing the required manufacturing capability for critical xs with the process capability.) In selecting the strategy from a DFSS perspective, the objective of DFSS in producibility is to come up with a strategy that leads to improving customer satisfaction at the lowest cost and that supports design and process verification timing. The following are some recommendations for achieving this goal:
• Consider what is possible and most cost-effective at the particular time of application.
• Adopt what has been done (tools and/or results) from “things learned” that has increased satisfaction and productivity, or use a robustness mentality to make product and process insensitive to noise.
• Use finite element analysis or other analytic tools instead of hardware experimentation (it is too late and too expensive if you wait for hardware).
• Learn to limit manufacturing-induced product noises when they result in more rapid product deterioration.
• Learn to always describe and communicate changes and actions adopted to appropriate personnel.
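One quick statistical-tolerancing check in the producibility spirit is the worst-case versus RSS stack-up comparison. A minimal sketch follows (Python/NumPy); the dimensions and tolerances are invented for illustration, and the usual independence assumption behind the RSS stack is stated in the comments.

    import numpy as np

    # Hypothetical stack of four component dimensions (mm) with +/- tolerances.
    nominals = np.array([12.0, 8.5, 30.0, 4.5])
    tols = np.array([0.10, 0.08, 0.20, 0.05])

    gap_nominal = nominals.sum()
    worst_case = tols.sum()           # arithmetic (worst-case) stack
    rss = np.sqrt(np.sum(tols ** 2))  # statistical (RSS) stack, assumes independent variation

    print(f"nominal assembly dimension = {gap_nominal:.2f} mm")
    print(f"worst-case stack = +/-{worst_case:.3f} mm")
    print(f"RSS stack        = +/-{rss:.3f} mm")

The RSS stack is always tighter than the worst case, which is where the cost savings of statistical tolerancing come from: tolerances on the individual xs can be loosened without violating the assembly requirement.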

DELIVERABLES/CHECKLIST FOR THE OPTIMIZE PHASE

• Add necessary engineers to cross-functional team.
• Generate new concept, if needed.
• Complete P diagram.
• Quantify process capability (µ and σ) for CTQ xs.
• Complete derivation of the transfer function that includes CTQ xs.
• Identify target nominal values for xs.
• Design for ease of assembly and manufacture; resolve related concerns.
• Change process capability to achieve appropriate σx (business decision).
• Update control plan.
• Update validation planning in the reliability and robustness checklist.
• Review known design impacts on customer satisfaction.
• Conduct peer review.


• Obtain project champion approval.
• Document information learned.
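The checklist item "quantify process capability (µ and σ) for CTQ xs" reduces to a few lines of arithmetic. A minimal sketch follows (Python/NumPy); the specification limits and measurements are hypothetical.

    import numpy as np

    # Hypothetical measurements of a CTQ x, with its specification limits.
    data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9])
    lsl, usl = 9.4, 10.6

    mu, sigma = data.mean(), data.std(ddof=1)

    # Short-term capability: z = distance from the mean to the nearer limit, in sigmas.
    z = min(usl - mu, mu - lsl) / sigma
    cpk = z / 3.0  # Cpk expresses the same distance in 3-sigma units

    print(f"mu = {mu:.3f}, sigma = {sigma:.3f}, z = {z:.2f}, Cpk = {cpk:.2f}")

In a real study the data would come from rational subgroups over time, and long-term capability would be assessed separately, as the earlier discussion of process data indicates.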

THE VERIFY PHASE

In the verify phase, there are three areas of concern: 1) design verification — a series of well-planned tests that verify whether or not a product as designed functions in the manner in which it is required to function; 2) production validation — a series of well-planned tests that verify whether or not a product as produced performs in the manner the designer intended; and 3) process validation — a series of well-planned tests that verify whether or not a manufacturing process is capable of producing product to its engineering specification. In each of these areas, however, the objective is to validate the design by demonstrating that it meets the functional and reliability requirements set in the D, C, and O phases. Typical tools used in the verify phase are:
• Assessment (validation and verification scorecards)
• Design verification plan and report
• Robustness reliability
• Process capability
• Gauge R&R
• Control plan
• FMEA
• QFD
• P diagram

Steps in the verification process:
1. Update or develop test plan details.
2. Conduct test.
3. Analyze/assess results.
4. Make sure design passes requirements.
5. Develop failure resolution plan.
6. Record action on design verification.
7. Complete the design verification.

STEP 1: UPDATE/DEVELOP TEST PLAN DETAILS

Objective: to develop a program-specific design verification plan to demonstrate that all customer functional and reliability requirements have been met. The risk of not doing either the updating or the development is settling for uncertain reliability, unverified functions, timing issues, and unaccountability in resources and content, as well as the possibility of a program-specific test plan that does not capture customer requirements.


• Inputs
  • Functional/reliability targets
  • FMEA
  • Design verification from the define phase
  • Any remaining inputs from the define phase
  • Test matrix correlating failure modes/noises/requirements
  • Customer usage information
  • Gaps from the define phase (if any)
• Responsibility for providing input
  • Cross-functional team
  • Design and release engineer
  • Supplier
  • Technical specialist
  • Product integrator/systems engineer
• Update/develop test plan details: how?
  • Update design verification plan (DVP) details while developing the design, as needed for lower-level (e.g., components, subsystems) verification.
  • Update key life testing (KLT) and customer-correlated tests based on functional target information from the define phase.
  • Complete standard DVP form.
  • Define sample sizes and test duration required to demonstrate reliability targets and test metrics (i.e., MMBF, MTTF, etc.).
  • Identify type of test (e.g., test-to-failure, bogey, degradation). The preferred method is test to failure.
  • Implement reliability growth plan.
  • Identify the needed test facilities/resources/timing.
  • Review/update CAE models and associated noise-simulation strategies.
  • Identify plan for use of analytical/CAE testing (wherever possible, plan for computer simulation/CAE).
• Who is responsible for doing the update and development of the test plan?
  • Cross-functional team
  • Design and release engineer
  • Supplier
  • Technical specialist
  • Product integrator/systems engineer
• When?
  • Within the timing milestone system for the organization
• Outputs
  • Signed-off program-specific DVP
  • Identification of facilities and resources
  • Total program product test plan
  • Program-specific KLT and customer-correlated tests. Key life testing is an accelerated testing method that focuses on the major stresses and principal noise factors that drive loss of function. Specifically, KLT is


used to a) verify design, b) compare designs, c) benchmark the competition and/or best practice, and d) confirm and predict reliability. The test should also uncover failure mechanisms associated with real-world usage over the design life.
  • Updated or adapted CAE models
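Step 1's instruction to "define sample sizes and test duration required to demonstrate reliability targets" has a classical zero-failure (success-run) answer based on the binomial distribution. A minimal sketch follows (Python); the reliability and confidence targets are hypothetical.

    import math

    def success_run_sample_size(reliability, confidence):
        """Units to test with zero failures to demonstrate `reliability`
        at `confidence`: n = ln(1 - C) / ln(R), rounded up."""
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

    # Hypothetical target: demonstrate R = 0.90 at 90% confidence.
    print(success_run_sample_size(0.90, 0.90))  # -> 22 units, zero failures allowed

This is only the bogey-test case; test-to-failure programs, which the outline prefers, extract more information per unit and generally need smaller samples.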

STEP 2: CONDUCT TEST

Objective: to conduct the tests specified in the DVP. The consequence of not doing it is the risk that reliability may not be verified appropriately.
• Inputs
  • DVP
  • Total program project plan
  • Components/subsystems/systems/complete project to be tested
  • Test procedures as appropriate and applicable to the specific project
• Responsibility for providing input
  • Project teams
  • Suppliers
• Conduct test: how?
  • Request facility time by initiating the test.
  • Provide a written description of the test purpose/focus/failure modes to the test technician, technical specialist, supplier, and engineering management.
  • Obtain hardware for testing (components/subsystems/systems/vehicle).
  • Ensure rigorous adherence to the test procedure.
  • Use software to manage scheduling and timing; review the progress of KLT and customer-correlated tests being conducted at test facilities.
  • Execute analytical/CAE testing plan.
  • Capture all relevant test observations and information (ensure that the technician has total project awareness of customer failure perceptions). Component/subsystem testing most likely occurs simultaneously with design development; wherever possible, use computer simulation/CAE instead of physical experiments.
• Who?
  • Project teams
  • Suppliers
• When?
  • Within the timing milestone system of the organization
• Outputs
  • Tests completed as specified in the DVP
  • All failures reported
  • Incident report/test results (should include hard and soft failures)
  • Parts for analysis
  • Test data loaded into reliability database


STEP 3: ANALYZE/ASSESS RESULTS

Objective: to determine whether test results demonstrate that requirements and targets are met at a specified reliability level. The risk of not doing this is that, from a quantitative perspective, both reliability and performance will remain unknown.
• Inputs
  • Test results
  • Suspect parts
  • Incident reports
• Responsibility for providing input
  • Program team
  • Suppliers
  • Technical specialists
• Analyze/assess results: how?
  • Perform statistical/graphical analysis of function vs. requirement.
  • Perform failure analysis on all failed or suspect parts.
  • Assess failure risk.
• Who?
  • Program team
  • Suppliers
  • Subject matter experts
• When?
  • Within the timing milestone system for the organization
• Outputs
  • Failure resolution sufficient to meet requirements and targets
  • Reliability growth chart
  • Determination of whether the system or part meets the functional and reliability requirements. The focus here is to improve robustness. Five options are generally available: 1) change the technology to be robust; 2) make basic current design assumptions insensitive to the noises through parameter design, by beefing up the design (upgrading design specifications), and through redundancy; 3) reduce or remove the noise factors (this may need additional DOE); 4) insert a compensation device; and 5) send the error state somewhere else where it will create less harm (disguise the effect).
  • Reliability demonstrated quantitatively
  • Weibull plot
  • Degradation curves
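Since test-to-failure is the preferred method and a Weibull plot is a listed output, a probability-plotting sketch may be useful here. This example (Python/NumPy) fits the two-parameter Weibull by median-rank regression, the classic plotting technique; the failure times are hypothetical.

    import numpy as np

    # Hypothetical times-to-failure (hours) from a test-to-failure program.
    t = np.sort(np.array([142.0, 210.0, 288.0, 341.0, 405.0, 503.0, 611.0]))
    n = len(t)

    # Median ranks (Benard's approximation) estimate F(t) at each failure.
    i = np.arange(1, n + 1)
    F = (i - 0.3) / (n + 0.4)

    # Weibull plot: ln(-ln(1 - F)) vs ln(t) is linear with slope beta (shape).
    x_plot = np.log(t)
    y_plot = np.log(-np.log(1.0 - F))
    beta, intercept = np.polyfit(x_plot, y_plot, 1)
    eta = np.exp(-intercept / beta)  # scale (characteristic life)

    print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} hours")
    print("R(300 h) =", np.exp(-(300.0 / eta) ** beta))  # recall R(t) = 1 - F(t)

A shape parameter below 1 would suggest infant mortality, near 1 random failures, and above 1 wear-out, which is the kind of diagnosis the failure analysis step needs.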

STEP 4: DOES THE DESIGN PASS REQUIREMENTS?

Objective: to identify which systems need to go through failure resolution and which move on to sign-off. The risk of not doing this step is that failing to segregate targets “met” vs. “not met” causes ambiguity about the remaining verification tasks.


• Inputs
  • Results from previous step
  • Functional targets from define stage
  • Project-specific specifications
  • Reliability growth chart
  • Weibull plot
  • Degradation curves
• Responsibility for providing input
  • Program team
  • Suppliers
  • Technical specialist
• Does the design pass requirements? How?
  • Review test results from the previous step to verify the design and determine whether the system or part meets the functional and reliability requirements.
  • Ensure that failure resolution meets requirements and targets.
• Who?
  • Program team
  • Suppliers
  • Subject matter experts
• When?
  • Within the timing milestone system for the organization
• Outputs
  • Decision whether or not the design meets the requirements and targets
  • Updated lessons learned

STEP 5: DEVELOP FAILURE RESOLUTION PLAN

Objective: to develop a failure resolution plan, including corrective actions and a modified verification plan. The risk of not doing this step is that appropriate corrective actions will not be identified or verified.
• Inputs
  • Test failures
  • Failed parts
  • Incident reports
  • Diagnostic information
  • Repair information
• Responsibility for providing input
  • Project teams
  • Suppliers
• Develop failure resolution plan: how? (Make sure to address the root cause, not the symptom. Also, do not get into the habit of retesting the same part to try to get acceptance.)


  • Failure investigation – fault isolation – failure analysis – root cause determination – define corrective action – define corrective action verification requirements (retest requirements)
  • Initial assessment of corrective action
  • Update FRACAS or other appropriate database
• Who?
  • Project teams
  • Suppliers
• When?
  • Within the timing milestone system for the organization
  • During test program (DVP) as required
• Outputs
  • Corrective action plan approved and verification based on root cause analysis
  • FRACAS incident and concern databases updated with corrective action status

STEP 6: RECORD ACTIONS ON DESIGN VERIFICATION PLAN AND REPORT (DVP&R)

Objective: to document failures and corrective actions in the DVP&R. The risk of not doing this step is that the current program risk status may not be clear and mistakes may be repeated if lessons learned are not captured.
• Inputs
  • Failure summary report
  • Corrective action plan
• Responsibility for providing input
  • Project teams
  • Suppliers
• Record actions on DVP&R: how?
  • Capture failure summary and reverification information in the DVP&R.
  • List corrective actions in the DVP&R.
  • Identify ongoing product test status, i.e., pass/fail/failure mode.
  • Update program-specific FMEA.
• Who?
  • Project teams
  • Suppliers
• When?
  • Within the timing milestone system for the organization
• Outputs
  • Program-specific FMEA updated
  • Test results and corrective actions shown on standard DVP&R form
  • Return to Steps 2 and 3 to conduct the revised test and analyze the data


STEP 7: COMPLETE DVP&R

Objective: to complete and sign off on the DVP&R. Not doing this step may jeopardize the project, since the risk status may be unclear. Individuals who may sign off are: project integrator, system engineer, product engineer, or functional manager.
• Inputs
  • Test information
  • Results
  • Timing
  • Sample size
  • Remarks
  • Statistical test confidence
• Responsibility for providing input
  • Supplier
• Complete DVP&R: how?
  • Assess how well functional and reliability targets are being met.
  • Make risk assessment.
  • Sign off on design verification plan and report.
• Who?
  • Supplier
  • Project team
• When?
  • Within the timing milestone system for the organization
• Outputs
  • Completed DVP&R signed off by all appropriate and applicable personnel
  • Documented test results that meet targets and requirements, and risk assessment with corrective action recommendation
  • Lessons learned feedback

Deliverables/checklist for the verify phase
• Add necessary engineers to cross-functional team.
• Complete reliability and robustness checklist.
• Compare verification results to phase D, C, and O results and resolve.
• Enter actual test data in the scorecard.
• Capture lessons learned in system design specification and component design specification.
• Conduct peer review.
• Obtain project champion approval.
• Update scorecard over time with performance results from the field and actual process capability data.
• Document information learned.




14

Six Sigma Certification

Teachers have it. Accountants have it. Doctors and attorneys have it. What is it? Certification. Some call it certification; others call it passing the Bar or Board exams, etc. Whatever it is called, the essence is the same — somebody somewhere has decided that homogeneity in the profession of choice would be "guaranteed" by certification. The certification process follows a general course: a) very prescriptive educational coursework, b) a standardized test of knowledge of the given profession, and, in some cases, c) a test with an application portion attached as part of the certification. (The application form of the test is generally given as a paper test simulation of a case problem.) The result, of course, is competency.

But competency may be defined in many ways depending on who is doing the measuring. For example, Skinner (1954, p. 94) defined the path to competency as follows: "The whole process of becoming competent in any field must be divided into a very large number of very small steps, and reinforcement must be contingent upon the accomplishment of each step. The solution to the problem of creating a complex repertoire of behavior also solves the problem of maintaining the behavior in strength … By making each successive step as small as possible, the frequency of reinforcement can be raised to a maximum, while the possibly aversive consequences of being wrong are reduced to a minimum." This is a very interesting point of view. The reader will notice that it promotes a theory of motivation as well as one of cognitive development. Motivation is external and based on positive reinforcement of many small steps. Cognitive development, on the other hand, is based on what is learned, how it is learned, and whether or not the evaluation of learning is consistent and uniformly administered (Madaus, West, Harmon, Lomax, and Viator, 1992; Whitford and Jones, 2000).

In the field of quality, a recent movement has been to create a Six Sigma certification, or what we call "discipline envy." Some of you may be familiar with this concept. It involves an individual or group (formal or informal) wishing to model itself on, borrow from, or appropriate the terms, vocabulary, and authority figures of another discipline. To be sure, anthropomorphism has its uses. And for me, "discipline envy," which is very much a part of life in my own academic disciplines of statistics and instructional technology, the world of quality, etc., is a kind of fantasizing about an "ego ideal" elsewhere.

Discipline envy is pervasive in human history. One hundred years ago, music was without question the discipline of disciplines, the ego ideal for the arts. Before that it was architecture. However, both Schelling and Goethe compared architecture to "frozen music," and the power of this comparison became the impetus for Walter Pater's proclamation that "all art constantly aspires toward the condition of music." But that is not the end. We humans


have an unusual idiosyncrasy, which the Germans call Anders-streben, or a desire to substitute one purity for another, and so on, in the name of a superior result. The problem with that notion, however, is that sometimes we substitute purity for impurity in the name of cleaning up the specific contamination.

What does all this have to do with Six Sigma certification? Plenty! We have come to believe that as quality professionals we must be certified to be professionals because others do it. We have come to believe that certification will give us a confirmation of respect from others. We have come to believe that through certification we will demonstrate excellence in our profession — or perhaps a superior discipline altogether. We have come to believe that, without certification, we suffer some kind of "loss" and do not measure up to a more perfect and more whole discipline. You see, we have come to recognize envy as an aspect of specific idealization. The problem with that idealization, however, is that it has gone wrong. For example, the subject envies the object for some possession or quality, and the more ideal the object, the more intense the envy. At this point, envy becomes a signal of thwarted identification. When that happens, we are in deep trouble because the only solution at this point is some kind of interdisciplinarity.

And so the question arises: Is it possible to have a hierarchical discipline? And if so, is it possible to identify that discipline through a rigorous certification? We believe not. Why? I am reminded of the now classic question that the Annals of Improbable Research — the only humor magazine with eight Nobel Prize laureates on its board — posed some time ago. The question was: Which field of science has the smartest people? An astronomer provided the following answer: "Speaking of ranking the various disciplines — politicians think they are economists. Economists think they are social scientists. Social scientists think they are psychologists. Psychologists think they are biologists. Biologists think they are organic chemists. Organic chemists think they are physical chemists. Physical chemists think they are physicists. Physicists think they are mathematicians. Mathematicians think they are God. God … ummm … so happens that God is an astronomer."

Let us consider the word interdisciplinary a little more closely. Interdisciplinary is a word as much misunderstood these days as multiculturalism, and for similar reasons. Both words seem to their detractors to break down boundaries and hierarchies, to level differences rather than discriminate among them, to invite an absence of rigor, and to threaten somehow to erase or destroy the root term (culture and discipline). As Roland Barthes wrote in 1972, "Interdisciplinary studies do not merely confront already constituted disciplines (none of which, as a matter of fact, consents to leave off). In order to do interdisciplinary work, it is not enough to take a subject (a theme) and to arrange two or three sciences around it. Interdisciplinary study consists in creating a new object, which belongs to no one."

And now let us look at the Six Sigma methodology. What is it comprised of? A combination of many sciences, business disciplines, engineering, and several others. To expect an individual to be an expert through certification is an absurdity for several reasons.


The issue of expert. The general consensus regarding a shogun (master black belt) and a black belt is that they should be experts. No one as yet has been able to define expert. Knowing the methodology's steps does not qualify somebody as an expert. An expert, at least to our thinking, is an individual who knows not only the methodology but all the tools and their applications to the trade as well. We believe that it is impossible for any one individual to know everything well. Therefore, by definition, one cannot really be an expert. This is precisely the reason why Six Sigma depends on cross-functional teams to do the actual work.

The issue of statistics. We expect shoguns and black belts to know statistics to solve their problems. We teach them several out of hundreds of statistical tests in the hope that they will use the right one. We keep forgetting that several empirical studies have shown that many nonstatisticians do not fully understand the statistical tests that they employ (Nelson, Rosenthal, and Rosnow, 1986; Oakes, 1986; Rosenthal and Gaito, 1963; Zuckerman, Hodgins, Zuckerman, and Rosenthal, 1993). It is absurd to think that 3 days of instruction in ANOVA or reliability robustness or any other subject can make somebody an expert in that area.

The issue of interdisciplinary topics. Every black belt training course has several hours on leadership, project management, financial concepts, general quality, and many other topics. To think that at the end of the 4-week training period you have produced an expert at solving all sorts of problems in any organization would be ludicrous. These are disciplines in their own right, and it takes years to understand and implement them appropriately.

The issue of the project. A black belt must select a project that is worth over $250,000 to the organization. That is fine. However, the problem is the definition of the problem, the arrival at that figure, and the subjectivity of the analytical process. We must be honest here. In several instances, the amount, the process, and the problems solved are not really problems in customer satisfaction but rather a political agenda for management. As such, the shogun and the champion confer black belt certifications as they see fit. There is no standardization.

THE NEED FOR CERTIFICATION

A great deal of attention has recently been devoted to having some kind of certification, especially in Six Sigma methodology. Many articles have appeared in various professional publications (Quality Progress, Quality Digest, etc.) as well as in general publications such as the New York Times Magazine, USA Today, and others. The typical line taken by writers advocating the adoption of certification is that it would give coherence and direction to instruction and lead to higher levels of professional achievement. However, surrogate data suggest that certification does not improve competence. Advocates seem to assume that the adoption of such certification is needed if the profession, and especially Six Sigma methodology, is to remain "pure" and


continue providing a basic competence for the individual and, above all, consistency for all Six Sigma training providers. Various arguments are presented to support the case for certification. What is conspicuously missing from such articles is evidence to support such arguments. The only evidence presented in support of certification is the high level of performance and professionalism in other areas such as law, medicine, accounting, etc. Here, we dare to present evidence bearing on the issue of certification and then offer general comments about Six Sigma. Fortunately, such evidence has recently become available through a surrogate database. It is hoped that the evidence presented here will help people in thinking through this very important issue.

The first line of evidence comes from the recently completed Third International Mathematics and Science Study (TIMSS) conducted by the International Association for the Evaluation of Educational Achievement (IEA). This was a 41-nation study of student performance in mathematics and science, along with collateral information obtained from students, teachers, and school principals via questionnaires. It is unquestionably the largest educational research study ever undertaken, with over half a million students tested in the participating countries. Initial results from the study have been reported in two volumes (Beaton et al., 1996a, 1996b). Using information from the study, it was possible to establish a relationship between having a nationally centralized curriculum (certified curriculum or syllabus) and student performance. Furthermore, many of the countries tested have a national test to determine whether students have learned the material in the national curriculum or syllabus.

The amazing results are in. And what a surprise! If having national standards (certifications) were a truly potent force in influencing student achievement, one would expect that students in countries having a national certification or syllabus would perform significantly higher than students from countries that do not have such national standards. This is hardly the case. While most of the participating countries do have a national certification of sorts or a syllabus, there is virtually no correlation between student performance and a national certification or syllabus. In fact, some countries without a national certification or syllabus had higher scores than some countries with certification. Therefore, the absence of a relationship between a national certification or syllabus and performance in specific subjects (in this study, mathematics and science were tested) raises serious questions as to whether a national certification or syllabus would lead to higher student achievement.

Additional information from the TIMSS study is also highly informative about the distribution of achievement in the 41 participating countries. For example, if one excludes the five highest- and five lowest-scoring countries, the achievement of the middle 50% of the students in each country is almost wholly overlapping. Another way of viewing these results is to examine the standard deviations of the national distributions of achievement. In science, for example, the median standard deviation of countries with a national certification is 88.5, whereas the median standard deviation for countries without a national curriculum is 88. Because these results are virtually identical, one is led to question how effective national curricula are in bringing students to a particular level of achievement.


The most striking finding of the study is that, despite what countries might say about their certifications (curricula), there is a high level of consistency of student performance regardless of which country's ratings are used to obtain a percentage correct score. In fact, few countries change their position in the standings. This can be considered further evidence of the appropriateness of the tests for all countries. This is not surprising, because the international tests were developed on the basis of a careful curriculum analysis of all participating countries.

While it is clear that there is no relationship between a national certification (curriculum) and student performance, explaining this lack of relationship is another matter. Grant, Peterson, and Shojgreen-Downer (1996) showed that even though teachers may draw on different prior knowledge, understand the content in different ways, and teach in completely different ways, student scores still increased; these differences are, to say the least, striking. They suggest that even highly prescribed and detailed standards, accompanied by centrally developed tests, are no guarantee that teachers will teach the same content.

A different line of evidence bearing on the issue of national standards comes from studies conducted by the National Assessment of Educational Progress (NAEP). These studies periodically test student performance in various school subjects. The 1994 NAEP report, Trends in Academic Progress (Mullis et al., 1994), presents information on student achievement in mathematics and science from 1970 to 1992. Again, increases have been noticed. However, the increase in performance has occurred despite the absence of national standards. This increase in performance is not easy to explain. (Many educators attribute the rise to increased attention and commitment to the improvement of mathematics and science curricula and instruction. The impetus for these has come from state departments of education and local school district efforts as well as national nongovernmental efforts at system reform such as Success for All, Equity 2000, Accelerated Schools, and the Coalition for Essential Schools, among others [Slavin, 1997].)

GENERAL COMMENTS REGARDING CERTIFICATION AS IT RELATES TO SIX SIGMA

Competency is the goal, and no one would deny that competency is an admirable goal in any discipline, most of all in Six Sigma methodology. Certification, if done correctly, can perhaps provide standardization of knowledge, but that is about all it can do. Competency is very difficult to measure.

False security about knowledge. In the last 3 to 4 years, we have pretended that black belts and shoguns (master black belts) are the new supermen of organizations. We expect them to deliver "fixes" of problems that are causing discomfort at many levels, both internal and external to the organization. We emphasize statistical thinking and statistical analysis with a sprinkling of interdisciplinary themes and hope that these items will resolve the concerns of current organizations. We have forgotten the lesson that scientists have taught us over the years — that it is a mistake to bury one's


head in the statistical sand. In general, oil tankers arrive safely at their destinations, but the Exxon Valdez did not; in general, the world gets enough rainfall, but for a whole decade, the African Sahel did not. And in general, problem solvers do solve problems, but sometimes they do not, or, even worse, they provide the wrong solution.

Wrong emphasis on the learning process. We have become creatures that believe that with a specific affirmation, we can indeed reach perfection or specific competence. We have fallen victim to the Wizard of Oz story. We want somebody to give us a diploma to affirm that we have brains. We need an outsider to give us a clock so that we can believe that we indeed have a heart. We want somebody to give us a medal so that we can believe that we do indeed have courage. The problem with the Wizard of Oz, however, is that all three of the characters seeking something from the wizard — the scarecrow, the tin man, and the lion — already had the qualities they were seeking. But because of external affirmation, they suddenly became "real." In our case, we have come to believe that certification provides a specific piece of paper that gives us boasting rights about our knowledge in a particular area. Certification is a form of affirmation, a false hope. Why? Because it does not address the real issues of knowledge and competence — and in that order. We continue to generate notions that are patently absurd, and many of those silly ideas produce not disbelief or rejection but repeated attempts to show that they might be worthy of attention. Rather than focusing on the basic causes of competency, we look at effects. The irony is that the entire methodology of Six Sigma is based on the "root cause," while certification is based on the "effect." Rather than emphasizing the appropriate education and training in the school environment, we "cram" knowledge in a very limited time frame. We graduate from universities people with statistics or engineering degrees and then expect them to pass a certification exam. Something is wrong here. Either the university did not properly educate the students, or the students accepted the diplomas under false pretenses. If the university did its job, further certification is unnecessary. On the other hand, if it did not educate them properly, then it should not grant students diplomas.

Political ploy. We have already discussed reputation and prestige, but it seems to us that certification as it stands today is nothing more than an issue of prestige. The issue of reputation is not even addressed. That makes it a political issue, and in the long term, it will affect Six Sigma in a negative way. Lack of absolute scales will be the demise of the current certification process.

Subjectivity of the certification process. How can certification be discussed without first addressing the body of knowledge (BOK)? We know of at least four sources that define the BOK quite differently: 1) the Six Sigma Academy, 2) the American Society for Quality, 3) the International Quality Federation, and 4) the one we have proposed both in Volume I and in this volume. All of them have common points; however, not all of them agree


on all issues. So the question becomes: in which BOK are you certified? Is one better than the others? Who certifies the certifiers? How can we believe that the certification means anything at all, since the certifiers themselves are self-proclaimed? The certifiers have forgotten that only other specialists can properly evaluate specialists. In the case of Six Sigma, some organizations arbitrarily banded together when they saw a financial bonanza and went ahead with tests that are not even based on a common knowledge base. What do they measure? Do they imply that different organizations have different criteria and different base knowledge for certification? Is one worth more than the other? If so, by how much? [It is amazing that "discipline envy" has clouded our thinking to the point where some organizations maintain different certifications among their own divisions and do not even recognize each other's certifications.]

Accountability. By way of comparison, allow me to be provocative. McMurtrie (2001) reported that, from 1997 to 2000, out of 2,896 accredited colleges, only five lost their accreditation, 43 were given probation, and 11 were required to show cause (yet none of them closed its doors). In the field of quality, how many companies do you know of that have been issued a revocation of their ISO 9000 or QS-9000 certification? How many certified Lead Auditors, Auditors, Quality Professionals, or Professional Engineers have had their certification revoked? My point is, what are the ramifications of foul play within certification? What if there is no certification? The answer, unfortunately, is nothing. Absolutely nothing! There is no accountability because, as we have already mentioned, two very important issues remain unresolved. There can be no accountability as long as there is 1) no uniform BOK and 2) no standardized training. Accountability implies standardization of process, knowledge, delivery, and maintenance. In the current state of Six Sigma certification, none of these exists.

CONCLUSION

Certification, we believe, is at this time totally inappropriate, even though we recognize that many organizations continue issuing certificates for black belts and at least two not-for-profit organizations provide certifications for Six Sigma. It is unfortunate that quality organizations are trying to make money and become politically correct through a certification scheme rather than focusing on making the methodology better and more robust and at least agreeing on a BOK. Six Sigma, whether one wants to admit it or not, is a combination of many disciplines that together can work to improve an organization and its processes, products, and services. Certification would mean certification in statistics, reliability, project management, organizational development, etc. The list is endless. (Remember Skinner's definition.) When all those certifications are completed, then and only then, perhaps, can we begin to think of Six Sigma certification. Obviously, that is an impossible and unrealistic goal. What we can hope for is that quality societies and individual organizations will push for more appropriate and applicable education and training as well as a consistent base of knowledge.


On a personal level, I have become aware of a key intellectual trick or error in much, though not all, current theory that works to get participants to renounce their faith in their personal capacities, especially their own intuitions and creativity. The trick or mistake could be called the summoning of the near enemy, and it works as follows. People often become confused on moral issues because of a particular problem inherent in human dealings, i.e., that any virtue has a bad cousin, a failing that closely resembles the virtue and that can be mistaken for it — what in Tibetan Buddhism is called the near enemy. For example, the near enemy of equanimity is apathy; the near enemy of quality is functionality; and the near enemy of Six Sigma is definitional justification through defects of opportunity. What I would like to see is a profession that did a better job of teaching everyone how to distinguish for him- or herself between scholarship that moves things forward (truly improves the process and customer satisfaction) and scholarship that just shakes things up (a revolutionary program that changes the direction of our misunderstandings about customer satisfaction and organizational profitability — a true 100-fold improvement program).

On a more subjective level, I would like to see greater emphasis placed on the ascesis, or self-transformation, that produces integrity, honesty, flexibility, and moral independence so that we are indeed free to tell the emperor that he is not wearing any clothes. Currently, we are in a state of limbo as a profession; because we are afraid to speak, our self-transformation has become a loss of self. A shift in this direction may happen in the next few years, if for no other reason than that integrity, honesty, flexibility, and moral independence are qualities whose value comes into high relief during a time of "high stakes" and "great need." I believe that the pressures of the current certification frenzy will converge with the pressures of an already latent dissent within the profession to produce some change, though whether the transformation will be more than superficial I cannot predict. I hope that part of the change will involve a revived conversation about what it is to be Six Sigma certified.

These comments point to the craft of mindful listening that has been practiced all along in our profession (in training sessions, conferences, publications, and so on) and in our intimate encounters with books, alongside the more highly rewarded craft of argumentation that for the moment has gotten us into a trough. A first step in rethinking what we are about as a profession may be to stop focusing on outsmarting one another and to find ways of fostering the more intuitive and receptive dimensions of our communal and intellectual lives. Where this might lead methodologically I don't know. But as a best-case scenario, our profession may in time develop a culture that, without dispensing with traditional scholarship or critical theory, somehow uses interdisciplinary methodology as the basis for a complex exploration of the art and science of listening that is one of the creative forces in the world, a force, moreover, that our species would do well to cultivate if we want to have a chance of surviving. That may sound idealistic; part of me says that as long as this profession is invested in hierarchy, which it always will be, there will always exist a built-in spiritual dullness that is the opposite of listening.
But most of us who decide to become quality professionals specializing in Six Sigma methodology, however invested we are in institutional security and prestige, also do this kind of work because we had an experience early in our lives of being taught how to let go of whatever we thought was the whole of reality and to take the measure of a larger moral and human universe. Maybe Six Sigma certification is at one of those rare junctures when the costs of closing ourselves off within a world defined by the disciplinary norms of the moment will come to seem unacceptably high.

The debate over certification is likely to continue for some time. Arguments will be advanced for and against the adoption of certification. This is undoubtedly healthy because it involves a major policy issue in the quality profession. One hopes that this debate will be based, at least in part, on evidence rather than argumentation. If it is, one can be reasonably confident that it will lead to the adoption of sound policies. For right now, however, certification does not make sense.

REFERENCES

Beaton, A., Martin, M., Mullis, I., Gonzalez, E., Smith, T., and Kelley, D. (1996a). Science achievement in the middle school years: IEA's Third International Mathematics and Science Study (TIMSS). TIMSS International Study Center, Boston College, Chestnut Hill, MA.
Beaton, A., Martin, M., Mullis, I., Gonzalez, E., Smith, T., and Kelley, D. (1996b). Mathematics achievement in the middle school years: IEA's Third International Mathematics and Science Study (TIMSS). TIMSS International Study Center, Boston College, Chestnut Hill, MA.
Grant, S., Peterson, P., and Shojgreen-Downer, A. (1996). Learning to teach mathematics in the context of system reform. American Educational Research Journal, 33(2), 500–541.
Madaus, G. F., West, M. M., Harmon, M. C., Lomax, R. G., and Viator, K. A. (1992). The influence of testing on teaching math and science in grades 4–12. Center for the Study of Testing, Evaluation, and Educational Policy, Boston College, Chestnut Hill, MA.
McMurtrie, B. (January 12, 2001). Regional accreditors punish colleges rarely and inconsistently. The Chronicle of Higher Education, pp. A27–A30.
Mullis, I., Dossey, J., Campbell, J., Gentile, C., O'Sullivan, C., and Latham, A. (1994). Trends in academic progress. National Assessment of Educational Progress, Educational Testing Service, Princeton, NJ.
Nelson, N., Rosenthal, R., and Rosnow, R. L. (1986). Interpretation of significance levels and effect sizes by psychological researchers. American Psychologist, 41, 1299–1301.
Oakes, M. (1986). Statistical inference: a commentary for the social and behavioral sciences. John Wiley & Sons, New York.
Rosenthal, R. and Gaito, J. (1963). The interpretation of level of significance by psychological researchers. Journal of Psychology, 55, 33–38.
Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard Educational Review, 24, 86–97.
Slavin, R. (1997). Design competition: a proposal for a new federal role in educational research and development. Educational Researcher, 26(1), 22–27.


Whitford, B. L. and Jones, K. (2000). How high-stakes school accountability undermines a performance-based curriculum vision. In Accountability, Assessment and Teacher Commitment: Lessons from Kentucky's Reform Efforts. Whitford, B. L. and Jones, K. (Eds.). State University of New York Press, Albany, NY.
Zuckerman, M., Hodgins, H. S., Zuckerman, A., and Rosenthal, R. (1993). Contemporary issues in the analysis of data: a survey of 551 psychologists. Psychological Science, 4, 49–53.


Epilog

Let me close this volume and the Six Sigma and Beyond series with some analogies from the movie world. Movies remind us of strength, mighty power, indestructibility, intuition, and so many other attributes that allegedly were built into them. In the end, though, they all turned out to be a little less than perfect. So it is with the Six Sigma methodology (both the DMAIC and DFSS models). The intent is great. The expected results are phenomenal. Yet, if we do not plan appropriately, design for customer satisfaction, and test for validation, we are going to be just as shortsighted as the movies. Overconfident for nothing!

Titanic: The boat that would never sink. We see the director who says he's king of the world. Market translation: There are never enough lifeboats/jackets when the inevitable iceberg is struck by a crazed captain. Lots of companies will drown or swamp lifeboats that could save a few. Supplier strategy: Get in a lifeboat by hook or by crook and be prepared to keep others out. User strategy: Make sure your picks are in a lifeboat, or be prepared to have belly-up technology. In the case of the Six Sigma methodology, make sure that it is for you. If not, move on. Not everything has to be Six Sigma!

The Producers: Two con artists with the intent to produce a flop attempt to swindle little old ladies but wind up producing a winner with no way to pay off all the investors. Everyone goes to jail. Market translation: Actually, it is the opposite of this: the intent is to produce a winner, but a loser is actually produced, with nobody going to jail and all monies squandered. Not nearly as funny as the show. Supplier strategy: Most of the little old ladies have wised up, so you will have to show them an actual business plan. User/venture capitalist strategy: Ask for references; do your due diligence. In the case of the Six Sigma methodology, make sure the ROI is present for your organization. Just because Six Sigma is the buzz phrase, that does not mean it is also the silver bullet that will cure ALL your organization's ailments. Be careful!

Field of Dreams: A voice tells a corn farmer, "If you build it, he will come." The farmer builds a baseball diamond, gets a really cool supernatural game of baseball going, and almost loses the farm. Market translation: If you build it, they probably will not come. There needs to be a real market or need (like reconciling with your father). Only a few visionaries ever make it. Supplier strategy: Do your homework, and don't believe all the consultants and market analysts you pay to tell you what you want to hear. User strategy: Ditto. In the case of the Six Sigma methodology, make sure your xs are correlated with the real Ys. Otherwise, you are investing in wishful thinking. Remember: focus on functionality and not specifications. Satisfaction is a derivative of this functionality, not of specifications.

The Nightmare Before Christmas: The Pumpkin King thinks that Christmas is better than Halloween and attempts a hostile takeover. He learns that a space you know is better than one you don't. Market translation: The real winners in any space


have deep domain expertise and really understand/know their customers. Supplier strategy: Know what you know, and focus on a market/space where you can execute better than anyone else. User strategy: Don't accept gifts/products from scary people. In the case of the Six Sigma methodology, do not accept the methodology just because other people and organizations "swear" by it. In your organization, traditional tools and methodologies may work "miracles" just as well as the Six Sigma approach. Investigate thoroughly before you decide.

Good Will Hunting: A brilliant but angry young man addresses deep-seated insecurities while learning to love and becoming domesticated. A kind-hearted but strong-willed mentor brings him along. Market translation: Lots of smart kids spent a lot of dumb money, as no one was there to show them how to do it right. Shame on the top executives and boards of directors. Supplier strategy: Promise you'll never do it again, and say that you really do learn a lot after flushing your first $100 million. User strategy: What did you REALLY expect? In the case of the Six Sigma methodology, do not forget that even the GREAT companies that allegedly started and perfected this particular methodology have had their ups and downs. Do not imitate anyone just because they are bigger and talk good PR. Imitate the doers, the ones that have a track record for success, and not the ones that create "fads" and "buzz words" to confuse and twist the real truth of improvement. How can anyone believe that Six Sigma pays off when companies still have record recalls, record dissatisfaction in customer surveys, and market share loss? Thus, make sure your hunting is an intelligent one, and do not be swayed by the sirens and trumpets of the consultants and executives who want your money and offer excuses for failures.

And, one of my all-time favorite movies/lessons is Modern Times: A classic film illustrating the power of technology and the potential for abuse and likely dehumanization. No words, but the pictures and Chaplin say it all. Market translation: Technology is powerful and potentially destructive. Really understand what you are getting yourselves into when you sign up for it. Supplier strategy: Understand the scope of your efforts and sell appropriately. User strategy: Be careful; it's dangerous out there. In the case of the Six Sigma methodology, be very mindful of the technology associated with this methodology. There is an aura of magic: if one uses a particular piece of software and a particular analysis with a three-dimensional graph attached to it, it must be the correct answer; if the result was generated with advanced mathematical techniques/modeling using the Greek alphabet at least twice over, it must be the correct answer. That may not be so. Remember, however, that sometimes the more we change, the more we stay the same. We may change names, but the essence is the same. Do not be fooled by jargon, a slick sales force, and/or consultants who try to convince you that you are missing something. You may not be missing anything at all. In fact, you may be the best!


Glossary

In the pursuit of implementing Six Sigma and identifying, selecting, and working with a project within an organization, the following summary of words should be familiar in the vocabulary of the Six Sigma professional. We have tried to include words that follow pretty much the nine areas of project management (PM), which are:

1. PM framework
2. Scope
3. Quality
4. Time
5. Cost
6. Risk
7. Human resources
8. Communications
9. Contract–procurement management

The reader will notice that some words have special meaning when used in the context of PM — for example, change, commitment, variance, work plan, and many others.

α (alpha) risk — the maximum probability of saying a process or lot is unacceptable when, in fact, it is acceptable.
Acceptance sampling — sampling inspection in which decisions concerning acceptance or rejection of materials or services are made; also includes procedures used in determining the acceptability of the items in question.
Accountability/responsibility matrix — a structure that relates the project organization structure to the work breakdown structure; assures that each element of the project scope of work is assigned to a responsible individual.
Accuracy (of measurement) — the difference between the average result of a measurement with a particular instrument and the true value of the quantity being measured.
Acquisition control — a system for acquiring project equipment, material, and services in a uniform and orderly fashion.
Acquisition evaluations — review and analysis of responses to determine a supplier's ability to perform work as requested. This activity may include an evaluation of the supplier's financial resources, ability to comply with technical criteria and delivery schedules, satisfactory record of performance, and eligibility for award.


Acquisition methods — the various ways by which goods and services are acquired from suppliers.
Acquisition negotiations — contracting without formal advertising. This method offers flexible procedures, permits bargaining, and provides an opportunity to prospective suppliers to revise their offers before the award.
Acquisition process — the process of acquiring personnel, goods, or services for new or existing work within the general definitions of contracts requiring an offer and acceptance, consideration, lawful subject matter, and competent parties.
Active listening — standard techniques of active listening are to pay close attention to what is said, to ask the other party to spell out carefully and clearly what they mean, and to request that ideas be repeated if there is any ambiguity or uncertainty.
Activity — a task or series of tasks performed over a period of time.
Activity description — any combination of characters that easily identifies an activity to any recipient of the schedule.
Activity duration — the best estimate of the time (hours, days, weeks, months, etc.) necessary for the accomplishment of the work involved in an activity, considering the nature of the work and the resources needed for it.
Actual cost of work performed (ACWP) — the direct costs actually incurred and the indirect costs applied in accomplishing the work performed within a given time period. These costs should reconcile with a contractor's incurred cost ledgers that are regularly audited by the client. (The earned-value relations sketched after this block of entries show how ACWP combines with BCWP and BCWS.)
Actual finish date — the calendar date work actually ended on an activity. It must be prior to or equal to the data date. The remaining duration of this activity is zero.
Actual start date — the calendar date work actually began on an activity. It must be prior to or equal to the data date.
Addendum — see procurement addendum.
ADM — see arrow diagramming method.
Agency — a legal relationship by which one party is empowered to act on behalf of another party.
Agreement, legal — a legal document that sets out the terms of a contract between two parties.
Alpha risk — see producer's risk.
Alternative analysis — breaking down a complex scope situation for the purpose of generating and evaluating different solutions and approaches.
Alternatives — review of the means available and the impact of trade-offs to attain the objectives.
Amount at stake — the extent of adverse consequences that could occur on the project.
Analysis — the study and examination of something complex and the separation of it into its more simple components. Analysis typically includes discovering not only the parts of the thing being studied but also how they fit together and why they are arranged in this particular way. Also, a study of schedule variances for cause, impact, corrective action, and results.
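ACWP is one of the three standard earned-value quantities; the other two, BCWP and BCWS, are defined later in this glossary. For reference, the conventional variance and index relations (standard earned-value arithmetic, not anything specific to this volume) are:

$$\mathrm{CV} = \mathrm{BCWP} - \mathrm{ACWP}, \qquad \mathrm{SV} = \mathrm{BCWP} - \mathrm{BCWS}$$

$$\mathrm{CPI} = \frac{\mathrm{BCWP}}{\mathrm{ACWP}}, \qquad \mathrm{SPI} = \frac{\mathrm{BCWP}}{\mathrm{BCWS}}$$

A negative cost variance (CV) means the work performed cost more than budgeted; a cost or schedule performance index (CPI or SPI) below 1 signals trouble on that dimension.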


Analysis of means (ANOM) — a statistical procedure for troubleshooting industrial processes and analyzing the results of experimental designs with factors at fixed levels. It provides a graphical display of data. Ellis R. Ott developed the procedure in 1967 because he observed that nonstatisticians had difficulty understanding analysis of variance. Analysis of means is easier for quality practitioners to use because it is an extension of the control chart. In 1973, Edward G. Schilling further extended the concept, enabling analysis of means to be used with normal distributions and attributes data where the normal approximation to the binomial distribution does not apply. This is referred to as "analysis of means for treatment effects."
Analysis of variance (ANOVA) — a basic statistical technique for analyzing experimental data. It subdivides the total variation of a data set into meaningful component parts associated with specific sources of variation in order to test a hypothesis on the parameters of the model or to estimate variance components. There are three models: fixed, random, and mixed. (A short worked example follows this block of entries.)
Analytical thinking — breaking down a problem or situation into discrete parts to understand how each part contributes to the whole.
Apparent low bidder — the contractor who has submitted the lowest compliant bid for all or part of a project as described in a set of bid documents.
Application — an act of putting to use (new techniques); an act of applying techniques.
Appraisal costs — costs incurred to determine the degree of conformance to quality requirements.
Approve — to accept as satisfactory. Approval implies that the thing approved has the endorsement of the approving agency; however, the approval may still require confirmation by somebody else. In management use, the important distinction is between approve and authorize. Persons who approve something are willing to accept it as satisfactory for their purposes, but this decision is not final. Approval may be by several persons. The person who authorizes has final organization authority. This authorization is final approval.
Approved bidders list — a list of contractors that have been pre-qualified for purposes of submitting competitive bids.
Approved changes — changes that have been approved by higher authority.
Arbitration — a formalized system for dealing with grievances and administering corrective justice as part of collective bargaining agreements.
Archive tape — a computer tape that contains historical project information.
Area of project application — the environment in which a project takes place, with its own particular nomenclature and accepted practices, e.g., facilities, products, or systems development projects, to name a few.
Arrow diagramming method — the graphic presentation of an activity. The tail of the arrow represents the start of the activity. The head of the arrow represents the finish. Unless a time scale is used, the length of the arrow stem has no relation to the duration of the activity. Length and direction of the activity are usually a matter of convenience and clarity.
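To make the ANOVA entry concrete, here is a minimal one-way (fixed-effects) ANOVA sketch in Python; the data and the use of scipy are illustrative assumptions, not material from this volume:

```python
# One-way ANOVA: do three machine settings share a common mean output?
# scipy's f_oneway performs the standard fixed-effects F test.
from scipy import stats

setting_a = [12.1, 11.8, 12.4, 12.0]
setting_b = [12.9, 13.1, 12.7, 13.0]
setting_c = [12.2, 12.0, 12.3, 11.9]

f_stat, p_value = stats.f_oneway(setting_a, setting_b, setting_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., < 0.05) says at least one setting's mean differs.
```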


As-built schedule — the final project schedule, which depicts actual completion (finish) dates, actual durations, and start dates.
As-performed schedule — the final project schedule, which depicts actual completion (finish) dates, actual durations, and start dates.
Assurance — examination with the intent to verify.
Attribute data — go/no-go information. The control charts based on attribute data include the fraction defective chart, number of affected units chart, count chart, count-per-unit chart, quality score chart, and demerit chart.
Audits — a planned and documented activity performed by qualified personnel to determine, by investigation, examination, or evaluation of objective evidence, the adequacy of and compliance with established procedures or the applicable documents, and the effectiveness of implementation.
Authorize — to give final approval. A person who can authorize something is vested with authority to give final endorsement, which requires no further approval.
Authorized work — an effort that has been approved by higher authority and may or may not be defined or finalized.
Availability — the ability of a product to be in a state to perform its designated function under stated conditions at a given time. Availability can be expressed by the ratio: uptime/(uptime + downtime).
Average — see mean.
Average outgoing quality limit (AOQL) — the maximum average outgoing quality over all possible levels of incoming quality for a given acceptance sampling plan and disposal specification.
Backward pass — calculation of late finish times (dates) for all uncompleted network activities. Determined by working backward through each activity.
Balanced scorecard — translates an organization's mission and strategy into a comprehensive set of performance measures to provide a basis for strategic measurement and management, utilizing four balanced views: financial, customers, internal business processes, and learning and growth.
Baseline — management plan and/or scope document fixed at a specific point in time in the project life cycle.
Baseline concept — management's project management plan for a project fixed prior to commencement.
Benefits administration — the formal system by which an organization manages its nonfinancial commitment to its employees; includes such benefits as vacation, leave time, and retirement.
Best and final contract offer — final offer by the supplier to perform the work after incorporating negotiated and agreed changes in the procurement documents.
β (beta) risk — the maximum probability of saying a process or lot is acceptable when, in fact, it should be rejected. See also consumer's risk. (A small numerical sketch of α and β for a sampling plan follows this block of entries.)
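The α and β risks defined above become tangible in a single-sampling acceptance plan. The plan parameters and quality levels below are made-up illustrations; the binomial model used here is the usual one for such plans:

```python
# Producer's (alpha) and consumer's (beta) risk for a single sampling plan:
# sample n items and accept the lot if at most c are defective.
from scipy.stats import binom

n, c = 50, 2                      # sample size and acceptance number
p_acceptable, p_rejectable = 0.01, 0.08   # "good" vs. "bad" lot quality

alpha = 1 - binom.cdf(c, n, p_acceptable)  # rejecting an acceptable lot
beta = binom.cdf(c, n, p_rejectable)       # accepting a lot that should be rejected
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

Tightening the plan (larger n, smaller c) drives β down at the cost of a higher α, which is exactly the trade-off the two glossary definitions describe.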


Bayes theorem — a theorem of statistics relating conditional probabilities.
Bell-shaped curve — a curve or distribution showing a central peak and tapering off smoothly and symmetrically to "tails" on either side. A normal (Gaussian) curve is an example.
Bias (in measurement) — a characteristic of measurement that refers to a systematic difference. That systematic difference is an error that leads to a difference between the average result of a population of measurements and the true, accepted value of the quantity being measured.
Bid — an offer to perform the work described in a set of bid documents at a specified cost.
Bid cost considerations — consideration of suppliers' approach and reasonableness of cost, cost realism, and forecast of economic factors affecting cost and cost risks used in the cost proposal.
Bid documents — a set of documents issued for purposes of soliciting bids in the course of the acquisition process.
Bid evaluation — review and analysis of a response to determine a supplier's ability to perform the work as requested. This activity may include an evaluation of the supplier's financial resources, ability to comply with technical criteria and delivery schedules, satisfactory record of performance, and eligibility for award.
Bid list — a list of suppliers invited to submit bids for goods and services as specified.
Bid protests — the process by which an unsuccessful supplier may seek remedy for unjust contract awards.
Bid response — communications, positive or negative, from prospective suppliers in response to the invitation to bid.
Bid time consideration — evaluation of a supplier's offer with regard to dates identified for completion of phases of the work or the total work.
Bid technical consideration — a supplier's technical competency, understanding of technical requirements, and capability to produce technically acceptable goods or services. Generally, this evaluation ranks highest among all other evaluations.
Bimodal distribution — a frequency distribution that has two peaks. Usually an indication of samples from two processes incorrectly analyzed as a single process.
Binomial distribution (probability distribution) — given that a trial can have only two possible outcomes (yes/no, pass/fail, heads/tails), of which one outcome has probability p and the other probability q (p + q = 1), the probability that the outcome represented by p occurs r times in n trials is given by the binomial distribution (written out after this block of entries).
Breakdown — identification of the smallest activities or tasks in a job according to a defined procedure.
Break-even chart — a graphic representation of the relation between total value earned and total costs for various levels of productivity.
Break-even point — the productivity point at which value earned equals total cost.
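The binomial probability referred to in the entry above has the standard closed form (textbook notation, not something defined elsewhere in this volume):

$$P(X = r) = \binom{n}{r} p^{r} q^{\,n-r}, \qquad q = 1 - p, \quad r = 0, 1, \ldots, n$$

For example, the chance of finding exactly 2 defectives in a sample of 10 when p = 0.1 is $\binom{10}{2}(0.1)^2(0.9)^8 \approx 0.194$.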


Budget — when unqualified, usually refers to an estimate of funds planned to cover a fiscal period. (See project budget.) Also, a planned allocation of resources.
Budget costs — the translation of the estimate into man-hour rates, quantity units of production, etc., so that budget costs can be compared to actual costs, variances can be developed to highlight performance, and those responsible can be alerted to implement corrective action, if necessary.
Budget estimate (–10, +25%) — a budget estimate is prepared from flow sheets, layouts, and equipment details. This is often required for the owner's budget system. These estimates are established based on quantitative information and are a mixture of firm and unit prices for labor, material, and equipment. In addition, they establish the funds required and are used for obtaining approval for the project. Other terms used to identify a budget estimate include appropriation, control, design, etc.
Budgeted cost of work performed (BCWP) — the sum of the budgets for completed portions of in-process work, plus the appropriate portion of the budgets for level of effort and apportioned effort for the relevant time period. BCWP is commonly referred to as "earned value."
Budgeted cost of work scheduled (BCWS) — the sum of the budgets for work scheduled to be accomplished (including in-process work), plus the appropriate portion of the budgets for level of effort and apportioned effort for the relevant time period.
Bulk material — material bought in lots; generally, no specific item is distinguishable from any other in the lot. These items can be purchased from a standard catalog description and are bought in quantity for distribution as required.
Calendar — the calendar used in developing a project plan. This calendar identifies project workdays and can be altered so weekends, holidays, weather days, etc., are not included.
Calendar range — the span of the calendar from the calendar start date through the last calendar unit performed. The calendar start date is unit number one.
Calendar start date — the first calendar unit of the working calendar.
Calendar unit — the smallest unit of the calendar produced. This unit is generally in hours, days, or weeks; it can also be grouped in shifts.
Camp-Meidell conditions — for frequency distributions and histograms: a distribution is said to meet Camp-Meidell conditions if its mean and mode are equal and the frequency declines continuously on either side of the mode.
Career path planning — the process of integrating an individual's career planning and development into an organization's personnel plans with the objective of satisfying both the organization's requirements and the individual's career goals.
Cash flow analysis — the activity of establishing cash flow (dollars in and out of the project) by month and the accumulated total cash flow for the


project for the measurement of actual vs. budget costs. This is necessary to allow for funding of the project at the lowest carrying charges and is a method of measuring project progress.
Cell — a layout of workstations and/or various machines for different operations (usually in a U-shape) in which multitasking operators proceed, with a part, from machine to machine, to perform a series of sequential steps to produce a whole product or major subassembly.
Central limit theorem — if samples of a population with size n are drawn, and the values of X-bar are calculated for each sample group, and the distribution of X-bar is found, the distribution's shape is found to approach a normal distribution for sufficiently large n. This theorem allows one to use the assumption of a normal distribution when dealing with X-bar. "Sufficiently large" depends on the population's distribution and what range of X-bar is being considered; for practical purposes, the easiest approach may be to take a number of samples of a desired size and see if their means are normally distributed. If not, the sample size should be increased.
Central tendency — the propensity of data collected on a process to concentrate around a value situated midway between the lowest and highest values; a measure of the point about which a group of values is clustered. Some measures of central tendency are the mean, mode, and median.
Chaku-chaku — (Japanese) meaning "load-load"; in a cell layout, a part is taken from one machine and loaded into the next.
Characteristic — a dimension or parameter of a part that can be measured and monitored for control and capability.
Change — an increase or decrease in any of the project characteristics.
Change in scope — a change in objectives, work plan, or schedule that results in a material difference from the terms of an approval to proceed previously granted by higher authority. Under certain conditions (normally so stated in the approval instrument), a change in resource application may constitute a change in scope.
Change order/purchase order amendment — a written order directing the contractor to make changes according to the provisions of the contract documents.
Changed conditions (contract) — a change in the contract environment, physical or otherwise, compared to that contemplated at the time of bid.
Checklist — a tool used to ensure that all important steps or actions in an operation have been taken. Checklists contain items that are relevant to an issue or situation. Checklists are often confused with check sheets and data sheets (see also check sheet).
Check sheet — a sheet for the recording of data on a process or its product. The check sheet is custom designed for a particular use to remind the user to record each piece of information required for a particular study and to


Check sheet — a sheet for the recording of data on a process or its product. The check sheet is custom designed for a particular use to remind the user to record each piece of information required for a particular study and to reduce the likelihood of errors in recording data. Furthermore, a good check sheet will aid the researcher in interpreting the results. The data from the check sheet can be typed into a computer for analysis when the data collection is complete. The check sheet is one of the seven tools of quality. Check sheets are often confused with data sheets and checklists (see also checklist).
Chi-square (χ²) — as used for goodness of fit: a measure of how well a set of data fits a proposed distribution, such as the normal distribution. The data are placed into classes, and the observed frequency (O) is compared to the expected frequency (E) for each class of the proposed distribution. The result for each class is added to obtain a chi-square value. This is compared to a critical chi-square value from a standard table for a given α (alpha) risk and degrees of freedom. If the calculated value is smaller than the critical value, we can conclude that the data follow the proposed distribution at the chosen level of significance. (A worked sketch follows this group of entries.)
Chronic condition — a long-standing adverse condition that requires resolution by changing the status quo. For example, actions such as revising an unrealistic manufacturing process or addressing customer defections can change the status quo and remedy the situation.
Client quality services — the process of creating a two-way feedback system to define expectations, opportunities, and anticipated needs.
Close-out (phase) — see project close-out.
Cluster — for control charts: a group of points with similar properties; usually an indication of short-duration, assignable causes.
Code of accounts — once the project has been divided into the WBS work packages, a code or numbering system is assigned to the cost data for cost monitoring, control, reports, tax class separations, and forecasting purposes.
Commissioning — activities performed for the purpose of substantiating the capability of the project to function as designed.
Commitment — an agreement to consign or reserve the necessary resources to fulfill a requirement until expenditure occurs. A commitment is an event.
Common causes — those sources of variability in a process that are truly random, i.e., inherent in the process itself.
Common causes of variation — causes that are inherent in any process all the time. A process that has only common causes of variation is said to be stable, predictable, or in control. Also called “chance causes.”
Communicating with groups — the means by which the project manager conducts meetings, presentations, negotiations, and other activities necessary to convey the project’s needs and concerns to the project team and other groups.
Communicating with individuals — involves all activities by which the project manager transfers information or ideas to individuals working on the project.
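A minimal Python sketch of the chi-square goodness-of-fit procedure above; the observed and expected class counts are hypothetical, and the degrees of freedom assume no distribution parameters were estimated from the data.

from scipy.stats import chi2

observed = [18, 25, 30, 27]     # hypothetical class counts
expected = [20, 25, 30, 25]     # counts predicted by the proposed distribution

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
alpha = 0.05
df = len(observed) - 1          # subtract one more per estimated parameter
critical = chi2.ppf(1 - alpha, df)
print(f"chi-square = {chi_sq:.2f}, critical value = {critical:.2f}")
print("data fit the distribution" if chi_sq < critical else "reject the fit")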


Communications management (framework) — the proper organization and control of information transmitted by whatever means to satisfy the needs of the project. It includes the processes of transmitting, filtering, receiving, and interpreting or understanding information using appropriate skills according to the application in the project environment. It is at once the master and the servant of a project in that it provides the means for interaction between the many disciplines, functions, and activities, both internal and external to the project, that together result in the successful completion of that project; conducting or supervising the exchange of information.
Compensation and evaluation — the measurement of an individual’s performance and the financial payment provided to employees as a reward for their performance and as a motivator for future performance.
Competence — a person’s ability to learn and perform a particular activity. Competence generally consists of skill, knowledge, experience, and attitude components.
Competitive analysis — the gathering of intelligence relative to competitors in order to identify opportunities or potential threats to current and future strategy.
Completed activity — an activity with an actual finish date and no remaining duration.
Computer cost applications — the computer-assisted techniques used to handle analysis and store the volume of data accumulated during the project life that are essential to the cost management function. The areas associated with cost management are: cost estimating database, computerized estimating, management reports, economic analysis, analysis of risk and contingency, progress measurements, productivity analysis and control, risk management, commitment accounting, and integrated project management information systems.
Concept — an imaginative arrangement of a set of ideas.
Concept (phase) — the first of four sequential phases in the generic project life cycle. Also known as the idea, economics, feasibility, or prefeasibility phase.
Conceptual development — a process of choosing or documenting the best approach to achieve project objectives.
Conceptual project planning — the process of developing broad-scope project documentation from which the technical requirements, estimates, schedules, control procedures, and effective project management will all flow.
Concerns — the number of defects (nonconformities) found on a group of samples in question.
Confidence interval — the range within which a parameter of a population (e.g., mean, standard deviation, etc.) may be expected to fall, on the basis of measurement, with some specified confidence level. (A worked sketch follows this group of entries.)
Confidence level — the probability, set at the beginning of a hypothesis test, that the variable will fall within the confidence interval. A confidence level of 0.95 is commonly used.
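A minimal Python sketch of a 95% confidence interval for a mean, matching the confidence interval and confidence level entries above; the measurements are hypothetical, and a t distribution is assumed because the population standard deviation is unknown.

import numpy as np
from scipy.stats import t

data = np.array([4.9, 5.1, 5.0, 4.8, 5.2, 5.0])   # hypothetical measurements
n, mean, s = len(data), data.mean(), data.std(ddof=1)

level = 0.95
margin = t.ppf((1 + level) / 2, df=n - 1) * s / np.sqrt(n)
print(f"{level:.0%} confidence interval: {mean - margin:.3f} to {mean + margin:.3f}")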


Confidence limits — the upper and lower boundaries of a confidence interval.
Configuration (baseline) control — a system of procedures that monitors emerging project scope against the scope baseline. It requires documentation and management approval of any change to the baseline.
Conflict management — the process by which the project manager uses appropriate managerial techniques to deal with the inevitable disagreements, both technical and personal in nature, that develop among those working toward project accomplishment.
Conflict resolution — seeking a solution to a problem; five methods in particular have proven useful: confrontation, where the two parties work together toward a solution of the problem; compromise, where both sides agree such that each wins or loses a few points; smoothing, where differences between the two groups are played down and the strong points of agreement are given the most attention; forcing, where the project manager uses his power to direct the solution (a type of win-lose agreement where one side gets its way and the other does not); and withdrawal, where one or both sides withdraw from the conflict.
Conformance (of product) — adherence to some standard of the product’s properties. The term is often used in attribute studies of product quality, i.e., a given unit of the product is either in conformance to the standard or it is not.
Constant-cause system — a system or process in which the variations are random and are constant in time.
Constraints — applicable restrictions that will affect the scope; any factor that affects when an activity can be scheduled. (See restraint.)
Consumer’s risk — the maximum probability of saying a process or lot is acceptable when, in fact, it should be rejected.
Contingencies — specific provisions for unforeseeable elements of cost within the defined project scope; particularly important where previous experience relating estimates and actual costs has shown that unforeseeable events that will increase costs are likely to occur. If an allowance for escalation is included in the contingency, it should be a separate item, determined to fit expected escalation conditions for the project.
Contingency allowances — specific provisions for unforeseen elements of cost within the defined project scope; particularly important where previous experience relating estimates and actual costs has shown that unforeseen events that will increase costs are likely to occur. If an allowance for escalation is included in the contingency, it should be a separate item, determined to fit expected escalation conditions of the project.


Contingency plan — a plan that identifies key assumptions beyond the project manager’s control and their probability of occurrence. The plan identifies alternative strategies for achieving project success.
Contingency planning — the establishment of management plans to be invoked in the event of specified risk events. Examples include the provision and prudent management of a contingency allowance in the budget, the preparation of alternative schedule activity sequences or “work-arounds,” emergency responses to reduce delays, and the evaluation of liabilities in the event of complete project shutdown.
Continuous data — data for a continuous variable. The resolution of the value is dependent only on the measurement system used.
Continuous variable — a variable that can assume any of a range of values; an example would be the measured size of a part.
Continuous probability distribution — a graph or formula representing the probability of a particular numeric value of continuous (variable) data, based on a particular type of process that produces the data.
Contract — a binding agreement to acquire goods or services in support of a project.
Contract administration — monitoring and control of performance, reviewing progress, making payments, recommending modifications, and approving a contractor’s actions to ensure compliance with contractual terms during contract execution.
Contract award — the final outcome of the acquisition process, in which the contract is awarded to one prospective supplier through acceptance of a final offer, generally either by issuing a purchase order or by signing a legally binding contract formalizing the terms under which the goods or services are to be supplied.
Contract award ranking — qualitative and/or quantitative determinations of prospective suppliers’ bids, tenders, proposals, or quotations relative to each other, measured against a common base.
Contract closeout — the activities that assure that the contractor has fulfilled all contractual obligations and has released all claims and liens in connection with the work performed.
Contract dates — the dates specified in the contract that impact the project plan.
Contract dispute — disagreement between the parties. This may occur during contract execution or at completion and may include misinterpretation of technical requirements and any terms and conditions, or be due to changes not anticipated at the time of contract award.
Contract documents — the set of documents that form the contract.
Contract financial control — the exercise of control over contract costs.
Contract guarantee — a legally enforceable assurance of performance of a contract by a contractor.
Contract negotiation — a method of procurement where a contract results from a bid that may be changed through bargaining.


Contractor — a person or organization that undertakes responsibility for the performance of a contract.
Contractor claims release — a certificate to release and hold harmless from future claims by the contractor.
Contract order modifications — changes in a contract during its execution to incorporate new requirements or to handle contingencies that develop after contract placement. Changes may include price adjustments or changes in scope.
Contractor’s performance evaluation — a comprehensive review of a contractor’s technical and cost performance and work delivery schedules.
Contract performance control — control of work during contract execution.
Contract preaward meetings — meetings with prospective suppliers before final award determination to aid ranking and finalize terms of agreement between the parties.
Contract-procurement management — the function through which resources (including people, plant, equipment, and materials) are acquired for the project (usually through some form of formal contract) in order to produce the end product. It includes the processes of establishing strategy, instituting information systems, identifying sources, selection, conducting proposal or tender invitation and award, and administering the resulting contract.
Contract risk — the potential for and consideration of risk in procurement actions. Generally, the forces of supply and demand determine who should carry the maximum risk of contract performance, but the objective is to place on the supplier the maximum performance risk while maintaining an incentive for efficient performance. In a fixed price contract, the supplier accepts a higher risk than in a cost type contract, in which the supplier’s risk is lowest.
Contract risk analysis — analysis of the consequences and probabilities that certain undesirable events will occur and their impact on attaining contract and procurement objectives.
Contract types — the various forms of contracts by which goods or services can be acquired. See cost plus fixed fee, cost plus incentive fee, cost plus percentage of cost, firm fixed price, fixed price plus incentive fee, and unit price contracts.
Control — the exercise of corrective action as necessary to yield a required outcome, consequent upon monitoring performance.
Control chart — a basic tool that consists of a chart with upper and lower control limits on which values of some statistical measure for a series of samples or subgroups are plotted. It frequently shows a central line to help detect a trend of plotted values toward either control limit. It is used to monitor and analyze variation from a process to see whether the process is in statistical control.
Control group — an experimental group that is not given the treatment under study. The experimental group that is given the treatment is compared to the control group to ensure any changes are due to the treatment applied.


Control limits — the limits within which the product of a process is expected (or required) to remain. If the process leaves the limits, it is said to be out of control. This is a signal that action should be taken to identify the cause and eliminate it, if possible. Note: control limits are not the same as tolerance limits. Control limits always indicate the “voice of the process,” and they are always calculated. (A worked sketch follows this group of entries.)
Control system — a mechanism that reacts to the current project status in order to ensure accomplishment of project objectives.
Corporate business life cycle — a life cycle that encompasses phases of policy planning and identification of needs before a project life cycle, as well as product-in-service and disposal after the project life cycle.
Corporate project strategy — the overall direction set by a corporation of which the project is a part, and the relationship of specific procurement actions to these corporate directions.
Corrective action — (cost management) the development of changes in plan and approach to improve the performance of the project; (communications management) measures taken to rectify conditions adverse to specified quality and, where necessary, to preclude repetition.
Cost — the cash value of project activity.
Cost applications — the processes of applying cost data to other techniques that are not described in the other processes.
Cost budgeting — the process of establishing budgets, standards, and a monitoring system by which the investment costs of a project can be measured and managed, that is, the establishment of the control estimate. It is vital to be aware of problems before the fact so that timely corrective action can be taken.
Cost controls — the processes of gathering, accumulating, analyzing, reporting, and managing the costs on an ongoing basis. These include project procedures, project cost changes, monitoring actual versus budget, variance analysis, integrated cost/schedule reporting, progress analysis, and corrective action.
Cost effective — better value for money, or the best performance for the least cost.
Cost estimating — the process of assembling and predicting the costs of a project. It encompasses economic evaluation, project investment cost, and predicting or forecasting of future trends and costs.
Cost forecasting — the activity of predicting future trends and costs within the project duration. These activities are normally marketing-oriented; however, such items as sales volume, price, and operating cost can affect the project profitability analysis. Items that affect the cost-management functions include predicted time/cost, salvage value, etc.
Cost management — the function required to maintain effective financial control of a project through the processes of evaluating, estimating, budgeting, monitoring, analyzing, forecasting, and reporting the cost information.
Cost of poor quality (COPQ) — the costs associated with providing poor-quality products or services.
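One common way to calculate the control limits described above is the Shewhart X-bar chart; a minimal Python sketch with hypothetical subgroups of size 5 follows, using the standard tabulated constant A2 = 0.577 for that subgroup size.

import numpy as np

# Hypothetical data: three subgroups of five parts each.
subgroups = np.array([
    [5.02, 4.98, 5.01, 5.00, 4.99],
    [5.03, 5.00, 4.97, 5.01, 5.02],
    [4.98, 4.99, 5.00, 5.02, 5.01],
])

xbars = subgroups.mean(axis=1)                        # subgroup averages
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbarbar, rbar = xbars.mean(), ranges.mean()

A2 = 0.577                                            # X-bar chart constant, n = 5
ucl, lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar
print(f"center line = {xbarbar:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")

In practice, limits are normally computed from on the order of 20 to 25 subgroups before being used to judge the process; three are shown here only to keep the listing short.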


Cost of quality (COQ) — the costs incurred in assuring the quality of a product or service. There are four categories of quality costs: internal failure costs (costs associated with defects found before delivery of the product or service), external failure costs (costs associated with defects found during or after product or service delivery), appraisal costs (costs incurred to determine the degree of conformance to quality requirements), and prevention costs (costs incurred to keep failure and appraisal costs to a minimum).
Cost performance measurement baseline — the formulation of budget costs and measurable goals (particularly time and quantities) for the purposes of comparison, analysis, and forecasting of future costs.
Cost plus fixed fee (CPFF) contract — provides reimbursement of allowable costs plus a fixed fee, which is paid proportionately as the contract progresses.
Cost plus incentive fee (CPIF) contract — provides reimbursement to the supplier for the cost of delivered performance, plus a predetermined fee as a bonus for superior performance.
Cost plus percentage of cost (CPPC) contract — provides reimbursement of the allowable cost of services performed plus an agreed-upon percentage of the estimated cost as profit.
Cost status — see scope reporting.
Counseling/coaching — the process of advising or assisting an individual concerning career plans, work requirements, or the quality of work performed.
Covariance — a measure of whether two variables (x and y) are related (correlated). It is given by the formula σxy = [Σ(x – X-bar)(y – Y-bar)]/(n – 1), where n is the number of elements in the sample. (A worked sketch follows this group of entries.)
Crashing — action to decrease the duration of an activity or project by increasing the expenditure of resources.
Criteria — a statement that provides the objectives, guidelines, procedures, and standards to be used to execute the development, design, and implementation portions of a project.
Critical activity — any activity on a critical path.
Critical path — the series of interdependent activities of a project, connected end to end, that determines the shortest total length of the project. The critical path of a project may change from time to time as activities are completed ahead of or behind schedule.
Critical path method (CPM) — a scheduling technique using precedence diagrams for graphic display of a work plan; the method used to determine the length of a project and to identify the activities that are critical to the completion of the project.
Critical path network (CPN) — a plan for the execution of a project that consists of activities and their logical relationships to one another.
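A minimal Python sketch of the covariance formula above, on hypothetical paired data; numpy’s cov function is shown only as a cross-check.

import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])      # hypothetical paired observations
y = np.array([1.5, 3.9, 6.2, 7.8])

n = len(x)
cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / (n - 1)
print(cov_xy)                            # formula as given in the entry
print(np.cov(x, y, ddof=1)[0, 1])        # numpy cross-check; the values agree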


Critical to quality characteristic (CTQ) — a characteristic of a product, service, or information that is important to the customer. CTQs must be measurable in either a quantitative (e.g., 3.00 mg) or qualitative (correct/incorrect) manner.
Culture — the integrated pattern of human knowledge, belief, and behavior that depends upon the human capacity for learning and transmitting knowledge to succeeding generations.
Current finish date — the current estimate of the calendar date when an activity will be completed.
Current start date — the current estimate of the calendar date when an activity will begin.
Customer/client personnel — those individuals working for an organization who will assume responsibility for the product produced by a project when the project is complete.
Customer — anyone who receives a product, service, or information from an operation or process. The term is frequently used to describe “external” customers, those who purchase the manufactured products or services that are the basis for the existence of the business. Also important, however, are “internal” customers, who receive intermediate or internal products or services from internal “suppliers.” (See external customer and internal customer.)
Customer delight — the result achieved when customer requirements are exceeded in ways the customer finds valuable.
Customer loyalty/retention — the result of an organization’s plans, processes, practices, and efforts designed to deliver its services or products in ways that create retained and committed customers.
Customer satisfaction — the result of delivering a product or service that meets customer requirements, needs, and expectations.
Customer segmentation — the process of differentiating customers based on one or more dimensions for the purpose of developing a marketing strategy to address specific segments.
Customer service — the activities of dealing with customer questions; also sometimes the department that takes customer orders or provides post-delivery services.
Customer-supplier partnership — a long-term relationship between a buyer and supplier characterized by teamwork and mutual confidence. The supplier is considered an extension of the buyer’s organization. The partnership is based on several commitments: the buyer provides long-term contracts and uses fewer suppliers; the supplier implements quality assurance processes so that incoming inspection can be minimized; and the supplier also helps the buyer reduce costs and improve product and process designs.
Customer value — the market-perceived quality adjusted for the relative price of a product.
Cycle — a recurring pattern.


Data — facts presented in descriptive, numeric, or graphic form.
Data application — the development of a database of risk factors, both for the current project and as a matter of historic record.
Data collection — the gathering and recording of facts, changes, and forecasts for reporting and future planning.
Data date (DD) — the calendar date that separates actual (historical) data from scheduled data.
Data refinements — reworking or redefinition of logic or data previously developed in the planning subfunction, as required for proper input of milestones, restraints, priorities, and resources.
Date of acceptance — the date on which a client agrees to the final acceptance of a project. Commitments against the capital authorization cease at this time. This is an event.
DCOV — the Design for Six Sigma model: define, characterize, optimize, and verify.
Decision matrix — a matrix used by teams to evaluate problems or possible solutions. For example, after a matrix is drawn to evaluate possible solutions, the team lists them in the far left vertical column. Next, the team selects criteria to rate the possible solutions, writing them across the top row. Then, each possible solution is rated on a predetermined scale (such as 1 to 5) for each criterion, and the rating is recorded in the corresponding grid. Finally, the ratings of all the criteria for each possible solution are added to determine its total score. The total score is then used to help decide which solution deserves the most attention. (A worked sketch follows this group of entries.)
Defect — any output of an opportunity that does not meet a defined specification; a failure to meet an imposed requirement on a single quality characteristic or a single instance of nonconformance to the specification.
Definitive estimate (–5, +10%) — a definitive estimate is prepared from well-defined data, specifications, drawings, etc. This category covers all estimate ranges from a minimum to a maximum definitive type. These estimates are used for bid proposals, bid evaluations, contract changes, extra work, legal claims, permits, and government approvals. Other terms associated with a definitive estimate include check, lump sum, tender, post contract changes, etc.
Deflection — the act of transferring all or part of a risk to another party, usually by some form of contract.
Deformation — the bending or distorting of an object due to forces applied to it. Deformation can contribute to errors in measurement if the measuring instrument applies enough force.
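A minimal Python sketch of the decision-matrix scoring described above; the solutions, criteria, and 1-to-5 ratings are hypothetical.

# Hypothetical solutions rated 1 to 5 against each criterion.
criteria = ["cost", "speed", "risk"]
ratings = {
    "solution A": [4, 3, 5],
    "solution B": [5, 2, 3],
    "solution C": [3, 4, 4],
}

for name, scores in ratings.items():
    print(name, "total =", sum(scores))
# The highest total suggests which solution deserves the most attention.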


Degrees of freedom (df) — the number of unconstrained parameters in a statistical determination. For example, in determining X-bar (the mean value of a sample of n measurements), the number of degrees of freedom is n. In determining the standard deviation (STD) of the same population, df = n – 1, because one parameter entering the determination is eliminated: the STD is obtained from a sum of terms based on the individual measurements, which are unconstrained, but the nth measurement must now be considered “constrained” by the requirement that the values add up to make X-bar. Equivalently, one degree of freedom is “factored out” because the STD is mathematically indifferent to the value of X-bar. (A short numeric sketch follows this group of entries.)
Delegating — the process by which authority is distributed from the project manager to an individual working on the project.
Deliverable — a report or product of one or more tasks that satisfies one or more objectives and must be delivered to satisfy contractual requirements.
Deming cycle — see plan-do-check-act cycle.
Demographics — variables among buyers in the consumer market, which include geographic location, age, sex, marital status, family size, social class, education, nationality, occupation, and income.
Dependability — the degree to which a product is operable and capable of performing its required function at any randomly chosen time during its specified operating time, provided that the product is available at the start of that period. (Nonoperation-related influences are not included.) Dependability can be expressed by the ratio: time available/[time available + time required].
Deployment — to spread around; used in strategic planning to describe the process of cascading plans throughout an organization.
Design — the creation of the final approach for executing a project’s work.
Design contract — a contract for design.
Design control — a system for monitoring a project’s scope, schedule, and cost during the project’s design stage.
Design of experiment (DOE) — a branch of applied statistics dealing with planning, conducting, analyzing, and interpreting controlled tests to evaluate the factors and noises that control the value of a parameter or group of parameters. There are two approaches to DOE: classical and the Taguchi approach. In both cases, however, the planning of an experiment to minimize the cost of data obtained and maximize the validity range of the results is the primary concern. Requirements for a good experiment include clear treatment comparisons, controlled fixed and experimental variables, and maximum freedom from systematic error. The experiments should adhere to the scientific principles of statistical design and analysis. Each experiment should include three parts: the experimental statement, the design, and the analysis. Examples of experimental designs include single-/multifactor block, factorial, Latin square, and nested arrangements.
Desired quality — the additional features and benefits a customer discovers when using a product or service that lead to increased customer satisfaction. If missing, a customer may become dissatisfied.
Detail schedule — a schedule used to communicate the day-to-day activities to working levels on the project.
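The n versus n – 1 distinction in the degrees-of-freedom entry can be seen directly in numpy, where the ddof argument sets how many degrees of freedom are subtracted; the sample values here are hypothetical.

import numpy as np

sample = np.array([9.8, 10.1, 10.0, 10.3, 9.9])   # hypothetical measurements
print(sample.std(ddof=1))   # sample STD: divides by n - 1 (mean was estimated)
print(sample.std(ddof=0))   # population form: divides by n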


Development (phase) — the second of four sequential phases in the generic project life cycle. Also known as the planning phase.
Deviation — a nonconformance or departure of a characteristic from specified product, process, or system requirements.
Diagnostic journey and remedial journey — a two-phase investigation used by teams to solve chronic quality problems. In the first phase, the diagnostic journey, the team moves from the symptom of a problem to its cause. In the second phase, the remedial journey, the team moves from the cause to a remedy.
Dimensions of quality — the different ways in which quality may be viewed; for example, meaning of quality, characteristics of quality, drivers of quality, etc.
Direct project costs — the costs directly attributable to a project, including all personnel, goods, and/or services together with all their associated costs, but not including indirect project costs, such as any overhead and office costs incurred in support of the project.
Discrimination — the requirements imposed on the organization and the procedures implemented by the organization to assure fairness in hiring and promotion practices.
Discrete variable — a variable that assumes only integer values; for example, the number of people in a room is a discrete variable.
Discrete probability distribution — a term used to signify that the measured process variable takes on a finite or limited number of values; no other possible values exist.
Discussion — dialogue explaining implications and impacts on objectives; the elaboration and description of facts, findings, and alternatives.
Dispersion (of a statistical sample) — the tendency of the values of elements in a sample to differ from each other. Dispersion is commonly expressed in terms of the range of the sample (difference between the lowest and highest values) or by the standard deviation.
Dispersion analysis diagram — a cause and effect diagram for analysis of the various contributions to variability of a process or product. The main factors contributing to the process are first listed; then the specific causes of variability from each factor are enumerated. A systematic study of each cause can then be performed.
Display — a pictorial, verbal, written, tabulated, or graphical means of transmitting findings, results, and conclusions.
Disposition of nonconformity — the action taken to deal with an existing nonconformity; the action may include correcting (repairing), reworking, regrading, scrapping, obtaining a concession, or amending a requirement.
Dispute — disagreements not settled by mutual consent that could be decided by litigation or arbitration.
Dissatisfiers — those features or functions that the customer or employee has come to expect and whose absence would result in dissatisfaction.


Distribution — the amount of potential variation in the outputs of a process; it is usually described in terms of its shape, average, and standard deviation.
Distribution (of communications) — the dissemination of information for the purpose of communication, approval, or decision-making.
DMAIC — the methodology used in the classical Six Sigma approach: define, measure, analyze, improve, control.
Document control — a system for controlling and executing project documentation in a uniform and orderly fashion.
Documentation — the collection of reports, user information, and references for distribution and retrieval; displays, back-up information, and records pertaining to the project.
Dodge–Romig sampling plans — plans for acceptance sampling developed by Harold F. Dodge and Harry G. Romig. Four sets of tables were published in 1940: single-sampling lot tolerance tables, double-sampling lot tolerance tables, single-sampling average outgoing quality limit tables, and double-sampling average outgoing quality limit tables.
Defects per million opportunities (DPMO) — the number of defects counted, divided by the actual number of opportunities to generate that defect, multiplied by one million. (A worked sketch follows this group of entries.)
Defects per unit (DPU) — the number of defects counted, divided by the number of “products” or “characteristics” (units) produced.
Drivers of quality — the forces that shape an organization’s quality efforts; they include customers, products or services, employee satisfaction, and total organizational focus.
Dummy activity — an activity, always of zero duration, that is used to show logical dependency when an activity cannot start before another is complete, but that does not lie on the same path through the network. Normally, these dummy activities are graphically represented as a dashed line headed by an arrow.
Early finish date (EF) — the earliest time an activity may be completed: the early start of the activity plus its remaining duration.
Early start date (ES) — the earliest time any activity may begin, as logically constrained by the network, for a given data date.
Earned value — a method of reporting project status in terms of both cost and time. It is the budgeted value of work performed regardless of the actual cost incurred.
Economic evaluation — the process of establishing the value of a project in relation to other corporate standards/benchmarks, project profitability, financing, interest rates, and acceptance.
Effective interest — the true value of interest rate computed by equations for compound interest rate for a one-year period.
Effort — the application of human energy to accomplish an objective.
Eighty-twenty (80–20) rule — a term referring to the Pareto principle, which suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the possible causes.
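A minimal Python sketch of the DPU and DPMO arithmetic defined above; the defect, unit, and opportunity counts are hypothetical.

defects = 17           # defects counted
units = 500            # units produced
opportunities = 20     # opportunities for a defect on each unit

dpu = defects / units
dpmo = defects / (units * opportunities) * 1_000_000
print(f"DPU = {dpu:.3f}, DPMO = {dpmo:.0f}")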


Employee relations — those formal activities and procedures used by an organization to administer and develop its workforce.
Empowerment — a condition whereby employees have the authority to make decisions and take action in their work areas, within stated bounds and without prior approval.
Endorsement — written approval. Endorsement signifies personal understanding and acceptance of the thing endorsed and recommends further endorsement by higher levels of authority, if necessary. Endorsement of commitment by a person invested with appropriate authority signifies authorization. (See approve, authorize.)
End users — external customers who purchase products or services for their own use.
English system — the system of measurement units based on the foot, the pound, and the second.
Entity (or item) — that which can be individually described and considered: a process, product, organization, system, person, or any combination thereof; the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs.
Environment (framework) — the combined internal and external forces, both individual and collective, that assist or restrict the attainment of project objectives. These could be business- or project-related or may be due to political, economic, technological, or regulatory conditions; (communications management) the circumstances, objects, or conditions by which one is surrounded.
Environmentally concerned — those individuals who align themselves with the views of various groups concerned with issues of protecting the environment.
Equipment procurement — the acquisition of equipment or material to be incorporated into a project.
Estimate — an evaluation of all the costs of the elements of a project or effort as defined by an agreed-upon scope. See order of magnitude estimate, budget estimate, and definitive estimate.
Estimated cost to complete (ECC) — the remaining costs to be incurred to satisfy the complete scope of a project at a specific data date; the difference between the cost to date and the forecast final cost.
Estimated final cost — see forecast final cost.
Ethics — a code of conduct that is based on moral principles and that tries to balance what is fair for individuals with what is right for society.
Event — an identifiable single point in time on a project, task, or group of tasks.
EVOP (evolutionary operation) — the process of adjusting variables in a process in small increments in search of a more nearly optimal point on the response surface.
Exception reporting — the process of documenting those situations where there are significant deviations from the quality specifications of a project. The assumption is made that the project will be developed within established boundaries of quality; when the process falls outside of those boundaries, a report is made on why the deviation occurred.


Exception reports — documentation that focuses attention on variations of key control parameters that are critical, rather than on those that are progressing as planned.
Execution (phase) — see implementation (phase).
Expectancy theory — a motivational theory that says that what people do is based on what they expect to gain from the activity.
Expected quality — also known as basic quality; the minimum benefit a customer expects to receive from a product or service.
Expenditure — the conversion of resources. An expenditure is an event. Conversion of resources may take more than one form: (1) exchange: conversion of title or ownership (e.g., dollars for materials), or (2) consumption: conversion of a liquid resource to a less recoverable state (e.g., expenditure of time, human resources, or dollars to produce something of value, or the incorporation of inventoried materials into fixed assets).
Experimental design — a formal plan that details the specifics for conducting an experiment, such as which responses, factors, levels, blocks, treatments, and tools are to be used.
Explicit knowledge — the captured and recorded tools of the day, for example, procedures, processes, standards, and other such documents.
Exponential distribution — a probability distribution mathematically described by an exponential function; used to describe the probability that a product survives a length of time t in service, under the assumption that the probability of a product failing in any small time interval is independent of time. It is a continuous distribution in which data are more likely to occur below the average than above it, and it is typically used to describe the constant-failure-rate portion of the “bathtub” curve.
External customer — a person or organization who receives a product, service, or information but is not part of the organization supplying it. (See also internal customer.)
External procurement sources — extra-firm sources, including industry contacts, market data, competitive intelligence, and regulatory information, that could aid procurement decision-making.
F distribution — the distribution of F, the ratio of variances for pairs of samples; used to determine whether or not the populations from which two samples were taken have the same standard deviation. The F distribution is usually expressed as a table of the upper limit below which F can be expected to lie with some confidence level for samples of a specified number of degrees of freedom. (A worked sketch follows this group of entries.)
F test — a test of whether two samples are drawn from populations with the same standard deviation, with some specified confidence level. The test is performed by determining whether F, as defined above, falls below the upper limit given by the F distribution table.
Facilitator — an individual who is responsible for creating favorable conditions that will enable a team to reach its purpose or achieve its goals by bringing together the necessary tools, information, and resources to get the job done.
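A minimal Python sketch of the F test described above, on hypothetical samples; placing the larger variance in the numerator and halving alpha is one common convention for a two-sided test at the 0.95 confidence level.

import numpy as np
from scipy.stats import f

a = np.array([5.1, 4.9, 5.3, 5.0, 5.2])   # hypothetical sample A
b = np.array([5.4, 4.6, 5.6, 4.5, 5.3])   # hypothetical sample B

s2a, s2b = a.var(ddof=1), b.var(ddof=1)
if s2a >= s2b:
    F, dfn, dfd = s2a / s2b, len(a) - 1, len(b) - 1
else:
    F, dfn, dfd = s2b / s2a, len(b) - 1, len(a) - 1

upper = f.ppf(0.975, dfn, dfd)             # upper limit from the F distribution
print(f"F = {F:.2f}, upper limit = {upper:.2f}")
# F below the limit: no evidence the standard deviations differ.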


Facilities/product life cycle — a life cycle that encompasses the phases of operation and disposal, in addition to and following the project life cycle.
Factor analysis — a statistical technique that examines the relationships between a single dependent variable and multiple independent variables. For example, it is used to determine which questions on a questionnaire are related to a specific question, such as “Would you buy this product again?”
Failure mode analysis (FMA) — a procedure used to determine which malfunction symptoms appear immediately before or after a failure of a critical parameter in a system. After all the possible causes are listed for each symptom, the product is designed to eliminate the problems.
Failure mode effects analysis (FMEA) — a procedure in which each potential failure mode in every subitem of an item is analyzed to determine its effect on other subitems and on the required function of the item.
Failure mode effects and criticality analysis (FMECA) — a procedure that is performed after a failure mode effects analysis to classify each potential failure effect according to its severity and probability of occurrence.
Fast track — the starting or implementation of a project by overlapping activities, commonly entailing the overlapping of design and construction (manufacturing) activities.
Failure rate — the average number of failures per unit time; used for assessing the reliability of a product in service. (A worked sketch follows this group of entries.)
Fault tree analysis — a technique for evaluating the possible causes that may lead to the failure of a product. For each possible failure, the possible causes of the failure are determined; then, the situations leading to those causes are determined; and so forth, until all paths leading to possible failures have been traced. The result is a flow chart for the failure process. Plans to deal with each path can then be made.
Feasibility — the assessment of capability of being completed; the possibility, probability, and suitability of accomplishment.
Feasibility studies — the methods and techniques used to examine technical and cost data to determine the economic potential and the practicality of project applications. They involve the use of techniques such as the time value of money, so that projects may be evaluated and compared on an equivalent basis. Interest rates, present worth factors, capitalization costs, operating costs, depreciation, etc., are all considered.
Feasible project alternatives — reviews of available alternate procurement actions that could attain the objectives.
Feedback (general) — information (data) extracted from a process or situation and used in controlling (directly) or in planning or modifying immediate or future inputs (actions or decisions) into a process or situation.
Feedback (process) — using the results of a process to control it. The feedback principle has wide application; an example would be using control charts to keep production personnel informed on the results of a process, allowing them to make suitable adjustments to the process. Some form of feedback on the results of a process is essential in order to keep the process under control.
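A minimal Python sketch tying the failure rate entry above to the exponential distribution entry earlier: with a constant failure rate lambda, reliability over a time t is exp(-lambda * t). The failure count and hours are hypothetical.

import math

failures = 4
unit_hours = 20_000.0               # accumulated operating time, all units
lam = failures / unit_hours         # failure rate (failures per hour)

t = 1_000.0
reliability = math.exp(-lam * t)    # constant-failure-rate assumption
print(f"lambda = {lam:.2e} per hour, R({t:.0f} h) = {reliability:.3f}")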


Feedback (teams) — the return of information in interpersonal communication; it may be based on fact or feeling and helps the party who is receiving the information judge how well she is being understood by the other party. More generally, information about the interaction process that is used to make decisions about its performance and to adjust the process when necessary.
Feedback loops — pertain to open-loop and closed-loop feedback.
Field cost — costs associated with a project site rather than with the home office.
Figure of merit — a generic term for any of several measures of product reliability, such as MTBF, mean life, etc.
Filters — relative to human-to-human communication, those perceptions (based on culture, language, demographics, experience, etc.) that affect how a message is transmitted by the sender and how it is interpreted by the receiver.
Final completion — when the entire work has been performed to the requirements of the contract, except for those items arising from the provisions of warranty, and is so certified.
Final payment — the final settlement, paid at contract completion, of the contractually obligated amount, including retention.
Financial closeout — the accounting analysis of how funds were spent on a project; signifies a point in time when no further charges should be made against the project.
Financial control — the exercise of control on payments of suppliers’ invoices.
Financing — the techniques and methods related to providing the sources of monies and the methods used to raise funds (stock, mortgages, bonds, innovative financing agreements, leases, etc.) required for a project.
Finding — a conclusion of importance based on observation.
Fitness for use — a term used to indicate that a product or service fits a given customer’s defined purpose for that product or service.
Firm fixed price (FFP) contract — a lump sum contract whereby the supplier agrees to furnish goods or services at a fixed price.
Five whys — a persistent questioning technique used to probe deeper and surface the root cause of a problem.
Fixed price plus incentive fee (FPPIF) contract — provides the supplier with a fixed price for delivered performance plus a predetermined fee for superior performance.
Float — see free float.
Floating task — a task that can be performed earlier or later in the schedule without affecting the project duration.
Flow chart — for programs, decision-making, or process development, a pictorial representation of a process indicating the main steps, branches, and eventual outcomes of the process. Flowcharts are drawn to better understand processes. Also called a “process map.” The flowchart is one of the seven tools of quality.


Focus group — a qualitative discussion group, consisting of eight to ten participants invited from a segment of the customer base to discuss an existing or planned product or service, led by a facilitator working from predetermined questions. (Focus groups may also be used to gather information in a context other than in the presence of customers.)
Force-field analysis — a technique for analyzing the forces that aid or hinder an organization in reaching an objective.
Forecast — an estimate and prediction of future conditions and events based on the information and knowledge available at the time of the forecast.
Forecast final cost — the anticipated cost of a project or component when it is complete; the sum of the committed cost to date and the estimated cost to complete.
Forecasting — the work performed to estimate and predict future conditions and events. Forecasting is an activity of the management function of planning. Forecasting is often confused with budgeting, which is a definitive allocation of resources rather than a prediction or estimate.
Formal bid — a bid, quotation, letter, or proposal submitted by prospective suppliers in response to a request for proposal or request for quotation.
Formal communication — the officially sanctioned data within an organization, which includes publications, memoranda, training materials and/or events, public relations information, and company meetings.
Formative quality evaluation — the process of reviewing project data at key junctures during the project life cycle for a comparative analysis against preestablished quality specifications. This evaluation process is ongoing during the life of a project to ensure that timely changes can be made as needed to protect the success of the project.
Forward pass — network calculations that determine the earliest start/finish time (date) of each activity. The calculations proceed from the data date forward through the logical flow of the network. (A worked sketch follows this group of entries.)
Free float (FF) — the amount of time (in work units) an activity may be delayed without affecting the early start of the activity immediately following.
Frequency distribution — for a sample drawn from a statistical population, the number of times each outcome was observed.
Function (PM function) — the series of processes by which project objectives in that particular area of project management (e.g., scope, cost, time, etc.) are achieved.
Function–quality integration — the process of actively ensuring that quality plans and programs are integrated, mutually consistent, necessary, and sufficient to permit the project team to achieve the defined product quality.
Functional organization — an organization organized by discrete functions, for example, marketing and sales, engineering, production, finance, and human resources.
Funding — (cost management) the status of internal fund allocation, or allocation by an external agency, if applicable, to enable payment for the performance of the project scope; (contract/procurement management) the status of internal or external monies available for performing the work.
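A minimal Python sketch of the forward pass on a small hypothetical activity network (durations in days, predecessors listed per activity); a matching backward pass would yield the late dates and floats.

# Hypothetical network: activity -> (duration, predecessors), in logical order.
network = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

es, ef = {}, {}
for activity, (duration, predecessors) in network.items():
    es[activity] = max((ef[p] for p in predecessors), default=0)
    ef[activity] = es[activity] + duration

print("early start:", es)
print("early finish:", ef)                     # project duration = max early finish = 8
print("free float of B:", es["D"] - ef["B"])   # per the free float entry: 2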


Future reality tree — a technique used in the application of Goldratt’s Theory of Constraints.
Gantt chart — a type of bar chart used in process/project planning and control to display planned work and finished work in relation to time; also called a “milestone chart.” (See bar charts.)
Gap analysis — a technique that compares a company’s existing state to its desired state (as expressed by its long-term plans) to help determine what needs to be done to remove or minimize the gap.
Gatekeeping — the role of an individual (often a facilitator) in a group meeting in helping ensure effective interpersonal interactions (for example, ensuring that someone’s ideas are not ignored because the team moved on to the next topic too quickly).
General conditions — the general definition of the legal relationships and responsibilities of the parties to the contract and how the contract is to be administered. They are usually standard for a corporation and/or project.
General requirements — nontechnical specifications defining the scope of work, payments, procedures, implementation constraints, etc., pertaining to the contract.
General sequencing — an overview of the order in which activities will be performed.
Goal — a statement of general intent, aim, or desire; it is the point toward which management directs its efforts and resources; goals are often nonquantitative.
Goodness of fit — any measure of how well a set of data matches a proposed distribution. Chi-square is the most common measure for frequency distributions. Simple visual inspection of a histogram is a less quantitative, but equally valid, way to determine goodness of fit.
Government regulations and requirements — those laws, regulations, rules, policies, and administrative requirements imposed upon organizations by government agencies.
Grand average — the overall average of the data represented on an X-bar chart at the time the control limits were calculated.
Grapevine — the informal communication channels over which information flows within an organization, usually without a known origin of the information and without any confirmation of its accuracy or completeness (sometimes referred to as the “rumor mill”).
Graph — (quality management) a visual comparison of variables that yield numerical data; examples include trend graphs, histograms, control charts, frequency distributions, and scatter diagrams; (time management) the display or drawing that shows the relationship between activities; a pictorial representation of relative variables.
Group communication — the means by which a project manager conducts meetings, presentations, negotiations, and other activities necessary to convey a project’s needs and concerns to the project team and other groups.


Guideline — a document that recommends methods to be used to accomplish an objective.
Hammock — an aggregate or summary activity. All related activities are tied as one summary activity and reported at the summary level.
Hanger — a break in a network path.
Hierarchy structure — describes an organization that is organized around functional departments/product lines or around customers/customer segments and is characterized by top-down management (also referred to as a bureaucratic model or pyramid structure).
Histogram — a graphic summary of variation in a set of data (a frequency distribution). The range of the variable is divided into a number of intervals of equal size (called cells), and an accumulation is made of the number of observations falling into each cell. The histogram is essentially a bar graph of the results of this accumulation, i.e., the frequency distribution. The pictorial nature of the histogram lets people see patterns that are difficult to see in a simple table of numbers. The histogram is one of the seven tools of quality. (A worked sketch follows this group of entries.)
Historic records — project documentation that can be used to predict trends, analyze feasibility, and highlight problem areas/pitfalls on future similar projects.
Historical data banks — the data stored for future reference and referred to on a periodic basis to indicate trends, total costs, unit costs, technical relationships, etc. Different applications require different database information. This data can be used to assist in the development of future estimates.
Hold point — a point, defined in an appropriate document, beyond which an activity must not proceed without the approval of a designated organization or authority.
Horizontal structure — describes an organization that is organized along a process or value-added chain, eliminating hierarchy and functional boundaries (also referred to as a systems structure).
HR compensation and evaluation — the measurement of an individual’s performance and the financial payment provided to employees as a reward for their performance and as a motivator for future performance.
HR organization development — the use of behavioral science technology, research, and theory to change an organization’s culture to meet predetermined objectives involving participation, joint decision-making, and team building.
HR performance evaluation — the formal system by which managers evaluate and rate the quality of subordinates’ performance over a given period of time.
HR records management — the procedures established by the organization to manage all documentation required for the effective development and application of its workforce.
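A minimal Python sketch of the cell-and-count procedure in the histogram entry; the measurements, the cell count of 7, and the text-bar display are hypothetical choices.

import numpy as np

data = np.random.default_rng(1).normal(10.0, 0.5, 200)   # hypothetical measurements
counts, edges = np.histogram(data, bins=7)                # 7 equal-width cells

for count, left, right in zip(counts, edges[:-1], edges[1:]):
    print(f"{left:5.2f} to {right:5.2f} | {'#' * count}")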


Human resources management (HRM) — the function of directing and coordinating human resources throughout the life of a project by applying the art and science of behavioral and administrative knowledge to achieve predetermined project objectives of scope, cost, time, quality, and participant satisfaction.
Hypergeometric distribution — a discrete (probability) distribution defining the probability of r occurrences in n trials of an event, when there are a total of d occurrences in a population of N. (A worked sketch follows this group of entries.)
Impact analysis — the mathematical examination of the nature of individual risks on a project, as well as potential structures of interdependent risks. It includes the quantification of their respective impact severity, probability, and sensitivity to changes in related project variables, including the project life cycle. To be complete, the analysis should also include an examination of the external “status quo” prior to project implementation, as well as the project’s internal intrinsic worth as a reference baseline. A determination should also be made as to whether all risks identified are within the scope of the project’s risk-response planning process.
Impact interpretation — clarification of the significance of a variance with respect to overall objectives.
Imperfection — a quality characteristic’s departure from its intended level or state, without any association to conformance to specification requirements or to the usability of a product or service (see also defect and nonconformity).
Implementation (phase) — the third of four sequential phases in the project life cycle. Also known as the execution or operation phase.
Implementation, completion of — also known as the closeout phase. Completion of implementation means that the project team has (1) provided completed project activities in accordance with the project requirements and (2) completed project closeout.
Imposed date (external) — a predetermined calendar date set without regard to logical considerations of the network.
Indirect project costs — all costs that do not form a part of the final project but that are nonetheless required for the orderly completion of the project. They may include, but are not necessarily limited to, field administration, direct supervision, incidental tools and equipment, startup costs, contractors’ fees, insurance, taxes, etc.
Individuals outside the project — all those individuals who impact the project work but who are not considered members of the project team.
Infant mortality rate — the high failure rate that shows up early in product usage; normally caused by poor design, manufacture, or another identifiable cause.
Inflation/escalation — a factor in cost evaluation and cost comparison that must be predicted as an allowance to account for the price changes that can occur with time and over which the project manager has no control (for example, cost of living index, interest rates, other cost indices, etc.).
Informal communication — the unofficial communication that takes place in an organization as people talk freely and easily; examples include impromptu meetings and personal conversations (verbal or e-mail).
Information — data transferred into an ordered format that makes it usable and allows one to draw conclusions.
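A minimal Python sketch of the hypergeometric probability in the entry above, using scipy; scipy’s argument letters differ from the glossary’s, so the mapping is spelled out in the comments. The numbers are hypothetical.

from scipy.stats import hypergeom

N = 100   # population size (the glossary's N)
d = 10    # total occurrences in the population (the glossary's d)
n = 20    # trials, i.e., sample size (the glossary's n)
r = 3     # occurrences whose probability is sought (the glossary's r)

# scipy's convention: hypergeom(M=population, n=successes, N=draws)
print(hypergeom(N, d, n).pmf(r))   # P(exactly r occurrences in n trials)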
Information system — technology-based systems used to support operations, aid day-to-day decision-making, and support strategic analysis (other names often used include: management information system, decision system, information technology [IT], data processing).
Information flow (distribution list) — a list of individuals who would receive information on a given subject or project.
Information gathering — researching, organizing, recording, and comprehending pertinent information/data.
Information systems — a structured, interacting complex of persons, machines, and procedures designed to produce information that is collected from both internal and external sources for use as a basis for decision-making in specific contract/procurement activities.
Initial operation — see Operation.
In-progress activity — an activity that has been started but is not completed on a given date.
Input limits — imposition of limitations on the resources through which the plan will be executed.
Input milestones — imposed target dates or target events that are to be accomplished and that control the plan with respect to time.
Input priorities — imposed priorities or sequence desired with respect to the scheduling of activities within previously imposed constraints.
Input restraints — imposed external restraints, such as dates reflecting input from others and target dates reflecting output required by others, and such items as float allocation and constraints.
In-service date — that point in time when the project is placed in a state of readiness or availability so that it can be used for its specifically assigned function.
Inspection — examination or measurement of work to verify whether an item or activity conforms to a specific requirement.
Integrated cost/schedule reporting — the development of reports that measure actual vs. budget, "S" curves, BCWS, BCWP, and ACWP.
Integrated project progress reports — documentation that measures actual (cost/schedule) vs. budget by utilizing BCWP, BCWS, and ACWP.
Intelligence — the ability to learn or understand or to deal with new or trying situations.
Intention for bid (IFB) — communications, written or oral, by prospective organizations or individuals indicating their willingness to perform the specified work. This could be a letter, statement of qualifications, or response to a request for proposal/quotation.
Interaction — mutual action or reciprocal action or influence.
Interdependence — shared dependence between two or more items.
Interest rate of return — see Profitability.
Interface activity — an activity connecting a node in one subnet with a node in another subnet, representing logical interdependence. The activity identifies points of interaction or commonality between the project activities and outside influences.
Interface management — the management of communication, coordination, and responsibility across a common boundary between two organizations, phases, or physical entities that are interdependent.
Interface program — a computer program that relates status system line items to their parent activities in the project plan.
Intermediate customers — distributors, dealers, or brokers who make products and services available to the end user by repairing, repackaging, reselling, or creating finished goods from components or subassemblies.
Internal project sources — intra-firm sources and records including historical data on similar procurements, cost and performance data on various suppliers, and other data that could assist in proposed procurements.
Interpret — present in understandable terms.
Interpretation — reduction of information to appropriate and understandable terms and explanations.
Interrelationship digraph — a management and planning tool that displays the relationship between factors in a complex situation. It identifies meaningful categories from a mass of ideas and is useful when relationships are difficult to determine.
Intervention — an action taken by a leader or a facilitator to support the effective functioning of a team or work group.
Intervention intensity — the strength of an intervention by the intervening person; intensity is affected by words, voice inflection, and nonverbal behaviors.
Inventory closeout — settlement and credit of inventory if purchased from project funds.
Invitation to bid — the invitation issued to prospective suppliers to submit a bid, quotation, or proposal for the supply of goods or services.
Involuntary — contrary to or without choice; not subject to control of the will (reflex).
Internal rate of return (IRR) — a discount rate that causes net present value to equal zero.
ISO — "equal" (Greek). A prefix for a series of standards published by the International Organization for Standardization.
Inspection — the measuring, examining, testing, and gauging of one or more characteristics of a product or service and comparing the results with specified requirements to determine whether conformity is achieved for each characteristic.
ISO 9000 series standards — a set of individual but related international standards and guidelines on quality management and quality assurance developed to help companies effectively document the quality system elements to be implemented to maintain an efficient quality system. The standards, initially published in 1987 and revised in 1994 and 2000, are not specific to any particular industry, product, or service. The standards were developed by the International Organization for Standardization, a specialized international agency for standardization composed of the national standards bodies of over 100 countries.
ISO 14000 series — a set of standards and guidelines relevant to developing and sustaining an environmental management system.
Jidoka — the Japanese method of autonomous control involving the adding of intelligent features to machines to start or stop operations as control parameters are reached and to signal operators when necessary.
Job aid — any device, document, or other medium that can be provided to a worker to aid in correctly performing his tasks (e.g., a laminated setup instruction card hanging on a machine, photos of the product at different stages of assembly, a metric conversion table, etc.).
Job descriptions (scope management) — documentation of a project participant's job title, supervisor, job summary, responsibilities, authority, and any additional job factors (human resources management); written outlines of the skills, responsibilities, knowledge, authority, environment, and interrelationships involved in an individual's job.
Joint planning meeting — a meeting involving representatives of a key customer and the sales and service team for that account to determine how better to meet the customer's requirements and expectations.
Just-in-time training — job training coincidental with or immediately prior to its need for the job.
Kaikaku — a Japanese word meaning a breakthrough improvement in eliminating waste.
Kaizen — a Japanese term that means gradual unending improvement by doing little things better and setting and achieving increasingly higher standards. The term was made famous by Masaaki Imai in his book Kaizen: The Key to Japan's Competitive Success.
Kaizen blitz/event — an intense, short-term team approach to employing the concepts and techniques of continuous improvement (for example, to reduce cycle time or increase throughput).
Kano model — a representation of the three levels of customer satisfaction defined as dissatisfaction, neutrality, and delight.
Key event schedule — a schedule comprised of key events or milestones. These events are generally critical accomplishments planned at time intervals throughout a project and used as a basis to monitor overall project performance. The format may be either network or bar chart and may contain minimal detail at a highly summarized level. This is often referred to as a milestone schedule.
Key process input variable (KPIV) — an independent material or element, with descriptive characteristics, that is either an object (going into) or a parameter of a process (step) and that has a significant (key) effect on the output of the process.
Key process output variable (KPOV) — a dependent material or element, with descriptive characteristics, that is the result of a process (step) and that either is or significantly affects a customer's CTQ.
Knowledge management — involves the transformation of data into information, the acquisition or creation of knowledge, as well as the processes and technology employed in identifying, categorizing, storing, retrieving, disseminating, and using information and knowledge for the purposes of improving decisions and plans.
Kurtosis — a measure of the shape of a distribution. If the distribution has longer tails than a normal distribution of the same standard deviation, it is said to have positive kurtosis (leptokurtosis); if it has shorter tails, it has negative kurtosis (platykurtosis).
Labor relations — those formal activities developed by an organization to negotiate and bargain with its workforce, whether or not that workforce is unionized.
Lag — the logical relationship between the start or finish of one activity and the start or finish of another activity.
Lag relationship — the four basic types of lag relationships between the start and/or finish of a work item and the start and/or finish of another work item (see the scheduling sketch below) are:
1. Finish to Start
2. Start to Finish
3. Finish to Finish
4. Start to Start
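As a rough illustration of lag relationships and of the late start/late finish dates defined just below, here is a minimal scheduling sketch for a simple chain of activities linked finish-to-start; the activity names, durations, and lag values are invented for illustration:

```python
# Minimal forward/backward pass for a chain of activities linked by
# finish-to-start lags; computes early/late start and finish dates.
activities = [  # (name, duration, lag after predecessor finishes)
    ("A", 3, 0),
    ("B", 2, 1),   # starts 1 day after A finishes (FS relationship, lag 1)
    ("C", 4, 0),
]

# Forward pass: early start (ES) and early finish (EF).
es, schedule = 0, {}
for name, dur, lag in activities:
    es += lag
    schedule[name] = {"ES": es, "EF": es + dur}
    es += dur

# Backward pass: late finish (LF) and late start (LS = LF - duration).
lf = es  # project finish date
for name, dur, lag in reversed(activities):
    schedule[name]["LF"] = lf
    schedule[name]["LS"] = lf - dur
    lf -= dur + lag

print(schedule)
```

A real precedence network would, of course, handle branching paths and all four lag types; this sketch only shows the arithmetic for a single chain.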

Language — a systematic means of communicating ideas or feelings by the use of conventionalized signs, sounds, or gestures.
Late finish (LF) — the latest time an activity may be completed without delaying the project finish date.
Late start (LS) — the latest time an activity may begin without delaying the project finish date of the network. This date is calculated as the late finish minus the duration of the activity.
Leader — an individual, recognized by others, as the person to lead an effort. One cannot be a "leader" without one or more "followers." The term is often used interchangeably with manager. A leader may or may not hold an officially designated management-type position.
Leadership (general) — an essential part of a quality improvement effort. Organization leaders must establish a vision, communicate that vision to those in the organization, and provide the tools, knowledge, and motivation necessary to accomplish the vision.
Leadership (PM) — the process by which a project manager influences the project team to behave in a manner that will facilitate project goal achievement.
Lean (agile) approach/lean (agile) thinking — ("lean" and "agile" may be used interchangeably) a focus on reducing cycle time and waste using a number of different techniques and tools, for example, value stream mapping and identifying and eliminating "monuments" and nonvalue-added steps.
Lean manufacturing — applying the lean approach to improving manufacturing operations.
Learner-controlled instruction — (also called "self-directed learning") a learning situation in which the learner works without an instructor, at her own pace, building mastery of a task (computer-based training is a form of LCI).
Learning curve — the time it takes to achieve mastery of a task or body of knowledge. Put another way, it is a concept that recognizes the fact that productivity of workers improves as they become familiar with the sequence of activities involved in the production process.
Learning organization — an organization that has as a policy to continue to learn and improve its products, services, processes, and outcomes; "an organization that is continually expanding its capacity to create its future" (Senge).
Legal tape — a computer tape that contains the contract base project plan as the first entry and the (resource-leveled) target project plan as the second entry. All approved major changes to logic, time, or resources will also be added, as separate entities, to the legal tape. No other entries will be made to this tape.
Legally concerned — those individuals who are concerned with assuring that the project complies with all aspects of the law.
Leptokurtosis — for frequency distributions: a distribution that shows a higher peak and longer "tails" than a normal distribution with the same standard deviation.
Level finish/schedule (SF) — the date when an activity is scheduled to be completed using the resource allocation process.
Level float — the difference between the level finish and the late finish date.
Level of detail — a policy expression of content of plans, schedules, and reports in accordance with the scale of the breakdown of information.
Level of effort (LOE) — support-type effort (e.g., vendor liaison) that does not readily lend itself to measurement of discrete accomplishment. It is generally characterized by a uniform rate of activity over a specific period of time.
Level start schedule (SS) — the date an activity is scheduled to begin using the resource allocation process. This date is equal to or later in time than the early start.
Life cycle — a product life cycle is the total time frame from product concept to the end of its intended use; a project life cycle is typically divided into stages such as concept, planning, design, implementation, evaluation, and close-out.
Life cycle costing — the concept of including all costs within the total life of a project, from concept through implementation and startup to dismantling. It is used for making decisions between alternatives and is a term used principally by the government to express the total cost of an article or system. It is also used in the private sector by the real estate industry.
Limitation of funds — the value of funds available for expenses beyond which no work could be authorized for performance during the specified period.
Line/functional manager — those responsible for activities in one of the primary functions of the organization, such as production or marketing, with whom a project manager must relate in achieving a project's goals.
Line item — the smallest unit of product whose status is tracked in a status system.
Linearity — the extent to which a measuring instrument's response varies in direct proportion to the measured quantity.
Linear regression — the mathematical technique of fitting a straight line to the points of a scatter diagram, used where the correlation is believed to reflect a cause-and-effect relationship (see the sketch below).
Linear responsibility matrix — a matrix providing a three-dimensional view of project tasks, responsible person, and level of relationship.
Lists, project — the tabulations of information organized in meaningful fashion.
Listening post data — customer data and information gathered from designated "listening posts."
Logic — the interdependency of activities in a network.
Long-term goals — goals that an organization hopes to achieve in the future, usually in 3 to 5 years. They are commonly referred to as strategic goals.
Loop — a path in a network closed on itself, passing through any node more than once on any given path. Such a network cannot be analyzed, as it is not a logical network.
Lot — a defined quantity of product accumulated under conditions that are considered uniform for sampling purposes.
Lot formation — the process of collecting units into lots for the purpose of acceptance sampling. The lots are chosen to ensure, as much as possible, that the units have identical properties, i.e., that they were produced by the same process operating under the same conditions.
Lot tolerance percent defective (LTPD) — for acceptance sampling: expressed in percent defective units; the poorest quality in an individual lot that should be accepted; commonly associated with a small consumer's risk (see also Consumer's risk).
Macro environment — consideration, interrelationship, and action of outside changes such as legal, social, economic, political, or technological that may directly or indirectly influence specific project actions.
Macro processes — broad, far-ranging processes that often cross functional boundaries.
Macro procurement environment — consideration, interrelationship, and action of outside changes such as legal, social, economic, political, or technological that may directly or indirectly influence specific procurement actions.
Maintainability — the probability that a given maintenance action for an item under given usage conditions can be performed within a stated time interval when the maintenance is performed under stated conditions using stated procedures and resources. Maintainability has two categories: serviceability, the ease of conducting scheduled inspections and servicing, and repairability, the ease of restoring service after a failure.
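The linear regression entry above can be made concrete with a small least-squares sketch; the variable names and data points are ours, invented for illustration:

```python
# Least-squares fit of a straight line y = a + b*x to paired data,
# e.g., a process input (x) vs. an output (y) from a scatter diagram.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx            # slope
    a = mean_y - b * mean_x  # intercept
    return a, b

a, b = fit_line([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
print(f"y = {a:.2f} + {b:.2f}x")  # roughly y = 0.05 + 1.99x
```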
Manager — an individual who manages and is responsible for resources (people, material, money, time); a person officially designated with a management-type position title. A manager is granted authority from above, whereas a leader's role is derived by virtue of having followers. However, the terms manager and leader are often used interchangeably.
Management — the process of planning, organizing, executing, coordinating, monitoring, forecasting, and exercising control.
Management plan — a document that describes the overall guidelines within which a project is organized, administered, and managed to assure the timely accomplishment of project objectives.
Management styles — a project manager may adopt several different management styles, according to circumstances, in the process of leadership and team motivation. These include the following.
Authoritarian — a style in which individuals know what is expected of them; the project manager gives specific guidance as to what should be done, makes his role in the group understood, schedules work to be done, and asks group members to follow standard rules and regulations.
Combative — a style that is marked by an eagerness to fight or be disagreeable over any given situation.
Conciliatory — a friendly and agreeable style; one that attempts to assemble and unite all project parties involved to provide a compatible working team.
Disruptive — a style in which a project manager tends to break apart the unity of a group; the style of an agitator and one who causes disorder on a project.
Ethical — the style of an honest and sincere project manager who is able to motivate and press for the best and fairest solution; one who generally goes "by the books."
Facilitating — a style in which a project manager is available to answer questions and give guidance when needed; he does not interfere with day-to-day tasks but rather maintains the status quo.
Intimidating — a project manager with this style frequently reprimands employees for the sake of an image as a "tough guy," at the risk of lowering department morale.
Judicial — a style in which a project manager exercises the use of sound judgment or is characterized by applying sound judgment to most areas of the project.
Promotional — a style that encourages subordinates to realize their full potential, cultivates a team spirit, and lets subordinates know that good work will be rewarded.
Secretive — a style used by a project manager who is not open or outgoing in speech, activity, or purpose, much to the detriment of the overall project.
Management time — manhours related to the project management team.
Managerial grid — a management theory developed by Robert Blake and Jane Mouton that maintains that a manager's management style is based on his or her mind-set toward people; it focuses on attitudes rather than behavior. The theory uses a grid to measure concern with production and concern with people.
Managerial quality administration — the managerial process of defining and monitoring policies, responsibilities, and systems necessary to retain quality standards throughout a project.
Managerial reserves — the reserve accounts for allocating and maintaining funds for contingency purposes on over- or underspending on project activities. These accounts will normally accrue from the contingency and other allowances in the project budget estimate.
Manpower planning — the process of projecting an organization's manpower needs over time, in terms of both numbers and skills, and obtaining the human resources required to match the organization's needs. See Human resources management.
Market-perceived quality — the customer's opinion of an organization's products or services as compared to those of the competitors.
Master schedule — an executive summary-level schedule that identifies the major components of a project and usually also identifies the major milestones.
Matrix — a two-dimensional structure in which the horizontal and vertical intersections form cells or boxes. In each cell may be identified a block of knowledge whose interface with other blocks is determined by its position in the structure.
Matrix (statistics) — an array of data arranged in rows and columns.
Matrix organization — a two-dimensional organizational structure in which the horizontal and vertical intersections represent different staffing positions, with responsibility divided between the horizontal and vertical authorities.
Mean time between failures (MTBF) — the mean (average) time interval between successive failures of a repairable product for a defined unit of measure (e.g., operating hours, cycles, or miles); a measure of product reliability.
Mean (of a statistical sample) (x-bar) — the arithmetic average value of some variable. The mean is given by x̄ = Σx/n, where x is the value of each measurement in the sample and n is the number of elements in the sample.
Mean (of a population) (µ) — a measure of central tendency. The true arithmetic average of all elements in a population. X-bar approximates the true value of the population mean. Also known as the average.
Means (in the Hoshin planning usage) — the step of identifying the ways by which multiyear objectives will be met, leading to the development of action plans.
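A one-line illustration of the sample mean formula x̄ = Σx/n (the function name and measurement values below are ours):

```python
# Sample mean (x-bar): add all measurements and divide by sample size n.
def sample_mean(xs):
    return sum(xs) / len(xs)

measurements = [9.8, 10.1, 10.0, 9.9, 10.2]
print(sample_mean(measurements))  # 10.0
```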
Measurement — the reference standard or sample used for the comparison of properties.
Measurement accuracy — the extent to which the average result of a repeated measurement tends toward the true value of the measured quantity. The difference between the true value and the average measured value is called the instrument bias and may be due to such things as improper zero-adjustment, nonlinear instrument response, or even improper use of an instrument.
Measurement error — the difference between the actual and measured value of a measured quantity.
Measurement precision — the extent to which a repeated measurement gives the same result. Variations may arise from the inherent capabilities of an instrument, from variations of the operator's use of the instrument, from changes in operating conditions, etc.
Median (of a statistical sample) — the middle number or center value of a set of data when all the data are arranged in an increasing sequence. For a sample of a specific variable, the median is the point x̃ such that half the sample elements are below and the other half are above it.
Method — the manner or way in which work is done. When formalized into a prescribed manner of performing specified work, a method becomes a procedure.
Metric — a standard of measurement.
Metrology — the science of measurement.
Micro environment — consideration of company-, project-, or client-imposed policies and procedures applicable to project actions.
Micro procurement environment — consideration of company-, project-, or client-imposed policies and procedures applicable in the procurement actions.
Milestone — a significant event in the project (key item or key event).
Milestone schedule — see Summary schedule.
Milestones for control — interim objectives, points of arrival in terms of time for purposes of progress management.
Mind mapping — a technique for creating a visual representation of a multitude of issues or concerns by forming a map of the interrelated ideas.
M.I.S. — management information systems.
M.I.S. quality requirements — the process of organizing a project's objectives, strategies, and resources for the M.I.S. data systems.
Mitigation — the act of revising a project's scope, budget, schedule, or quality, preferably without material impact on the project's objectives, in order to reduce uncertainty on the project.
Mixture — a combination of two distinct populations. On control charts, a mixture is indicated by an absence of points near the centerline.
Mode — the value that occurs most frequently in a data set.
Moment of truth (MOT) — described by Jan Carlzon, former CEO of Scandinavian Airlines System (SAS), in the 1980s as: "Any episode where a customer comes into contact with any aspect of your company, no matter how distant, and by this contact, has an opportunity to form an opinion about your company."
Monitoring — the capture, analysis, and reporting of actual performance compared to planned performance.
Monitoring actuals vs. budget — one of the main responsibilities of cost management is to continually measure and monitor the actual cost vs. the budget in order to identify problems, establish variance, analyze the reasons for variance, and take the necessary corrective action. Changes in the forecast final cost are constantly monitored, managed, and controlled.
Monte Carlo simulation — a computer modeling technique to predict the behavior of a system from the known random behaviors and interactions of the system's component parts. A mathematical model of the system is constructed in the computer program, and the response of the model to various operating parameters, conditions, etc. can then be investigated. The technique is useful for handling systems whose complexity prevents analytical calculation.
Monument — the point in a process at which a product must wait in a queue before being processed further; a barrier to continuous flow.
Motivating — the process of inducing an individual to work toward achieving an organization's objectives while also working to achieve personal objectives.
Muda — a Japanese term that refers to an activity that consumes resources but creates no value. Seven categories are part of the Muda concept: correction, processing, inventory, waiting, overproduction, internal transport, and motion.
N — population size (the number of units in a population).
n — sample size (the number of units in a sample).
Natural team — a work group having responsibility for a particular process.
NDE — nondestructive evaluation (see Nondestructive testing and evaluation).
Near-critical activity — an activity that has low total float.
Near-term activities — activities that are planned to begin, be in process, or be completed during a relatively short period of time, such as 30, 60, or 90 days.
Negotiating — the process of bargaining with individuals concerning the transfer of resources, generation of information, and the accomplishment of activities.
Network diagram — a schematic display of the sequential and logical relationship of the activities that comprise a project. Two popular drawing conventions or notations for scheduling are arrow and precedence diagramming.
Networking — the exchange of information or services among individuals, groups, or institutions.
Node — one of the defining points of a network; a junction point joined to some or all of the others by dependency lines.
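The Monte Carlo simulation entry can be illustrated with a deliberately simple model; the two-task project, the triangular duration distributions, and all numbers below are our assumptions, not a prescribed method:

```python
# Monte Carlo sketch: estimate total duration of two sequential tasks
# whose individual durations vary randomly (illustrative model only).
import random

def simulate_project(trials=100_000):
    totals = []
    for _ in range(trials):
        task_a = random.triangular(2, 6, 4)  # optimistic, pessimistic, most likely
        task_b = random.triangular(3, 9, 5)
        totals.append(task_a + task_b)
    totals.sort()
    return totals[int(0.5 * trials)], totals[int(0.9 * trials)]

median, p90 = simulate_project()
print(f"median ~ {median:.1f} days, 90th percentile ~ {p90:.1f} days")
```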
Nominal — for a product whose size is of concern: the desired mean value for the particular dimension, the target value.
Nonconformance — a deficiency in characteristics, documentation, or procedure that renders the quality of material/service unacceptable or indeterminate.
Nonconformity — the nonfulfillment of a specified requirement (see also Defect and Imperfection).
Nondestructive testing and evaluation (NDT) — testing and evaluation methods that do not damage or destroy a product being tested.
Nonlinearity (of a measuring instrument) — the deviation of an instrument's response from linearity.
Nonvalue-added — tasks or activities that can be eliminated with no deterioration in product or service functionality, performance, or quality in the eyes of the customer.
Nonverbal communication — communication involving minimal use of the spoken language: gestures, facial expressions, and verbal fragments that communicate emotions without the use of words; sometimes known as body language.
Nonwork unit — a calendar unit during which work may not be performed on an activity, such as weekends and holidays.
Normal distribution — a probability distribution in the shape of a bell. This bell-shaped distribution is for continuous data, with most of the data concentrated around the average; it is equally likely that an observation will occur above or below the average. It is significant to know that in this kind of distribution the mean, the median, and the mode are the same. The normal distribution is a good approximation for a large class of situations. One example is the distribution resulting from the random additions of a large number of small variations. The central limit theorem expresses this for the distribution of means of samples; the distribution of means results from the random additions of a large number of individual measurements, each of which contributes a small variation of its own.
Norms — behavioral expectations, mutually agreed-upon rules of conduct, protocols to be followed, social practice.
Notice to proceed — formal notification to a supplier requesting the start of the work.
Net present value (NPV) — a discounted cash-flow technique for finding the present value of each future year's cash flow.
Objective (time management) — a predetermined result; the end toward which effort is directed (contract/procurement management); used to define the method to follow and the service to be contracted or resource to be procured for the performance of work.
Objective (general) — a quantitative statement of future expectations and an indication of when the expectations should be achieved; it flows from goals and clarifies what people must accomplish.
Objective evidence — verifiable qualitative or quantitative observations, information, records, or statements of fact pertaining to the quality of an item or service or to the existence and implementation of a quality system element.
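Since IRR was defined earlier as the discount rate that makes NPV equal zero, a small sketch can tie the two entries together. The cash-flow figures are invented, and the bisection search is simply one convenient way to find the root:

```python
# Net present value of a cash-flow series; IRR is the rate where NPV = 0.
def npv(rate, cashflows):
    """cashflows[0] is the initial (usually negative) investment at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

flows = [-1000, 400, 400, 400]  # invest 1000, receive 400/year for 3 years
print(round(npv(0.10, flows), 2))  # about -5.26 at a 10% discount rate

# Crude IRR search by bisection (NPV changes sign between 0% and 100%).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
print(f"IRR ~ {lo:.4f}")  # about 0.097 (9.7%)
```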
Observation — an item of objective evidence found during an audit.
Operating characteristic (OC) curve — for a sampling plan: a curve showing the probability of accepting a lot vs. the fraction (or percentage) of defective units in the lot, based on the sample size to be taken.
One-to-one marketing — the concept of knowing customers' unique requirements and expectations and marketing to these (also see Customer relationship management).
Open-book management — an approach to managing that exposes employees to an organization's financial information, provides instruction in business literacy, and enables employees to better understand their role, contribution, and impact on the organization.
Operation — the operation of a new facility is described by a variety of terms, each depicting an event in its early operating life. These are defined below, in chronological order:
1. Initial operation — the project milestone date on which material is first introduced into the system for the purpose of producing products.
2. Normal operation — the project milestone date on which the facility has demonstrated the capability of sustained operations at design conditions and the facility is accepted by the client.
Opportunity — any event that generates an output (product, service, or information).
Optimization — the achievement of planned process results that meet the needs of the customer and supplier alike and minimize their combined costs.
Order of magnitude (–25, +75%) — an approximate estimate, made without detailed data, that is usually produced from cost capacity curves, scale-up or -down factors that are appropriately escalated, and approximate cost capacity ratios. This type of estimate is used during the formative stages of an expenditure program for initial evaluation of a project. Other terms commonly used to identify an order of magnitude estimate are preliminary, conceptual, factored, quickie, feasibility, and SWAG.
Organizational politics — the informal process by which personal friendships, loyalties, and enmities are used in an attempt to gain an advantage in influencing project decisions.
Organization development — the use of behavioral science technology, research, and theory to change an organization's culture to meet predetermined objectives involving participation, joint decision-making, and team building.
Organization structure — identification of participants and their hierarchical relationships.
Original duration — the first estimate of work time needed to execute an activity. The most common units of time are hours, days, and weeks.
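A point on the OC curve described above can be computed from the binomial distribution; the sampling plan below (sample size n = 50, acceptance number c = 2) is an invented example:

```python
# OC-curve sketch: probability of accepting a lot under a single sampling
# plan (sample size n, acceptance number c) at incoming fraction
# defective p, using the binomial approximation.
from math import comb

def prob_accept(n, c, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Plan: sample 50 units; accept the lot if 2 or fewer are defective.
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.0%}: P(accept) = {prob_accept(50, 2, p):.3f}")
```

Plotting P(accept) against p over a fine grid of p values traces the full OC curve for the plan.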
Other bid considerations — an evaluation of personnel and financial resources, facilities, performance record, responsiveness to contract terms and conditions, and the general willingness to perform the work.
Overall quality philosophy — the universal belief and performance throughout the company, based on established quality policies and procedures. Those policies and procedures become the basis for collecting facts about a project in an orderly way for study (statistics).
Panels — groups of customers recruited by an organization to provide ad hoc feedback on performance or product development ideas.
Parallel structure — an organizational module in which groups, such as quality circles or a quality council, exist in the organization in addition to and simultaneously with the line organization (also referred to as collateral structure).
Parameter design (Taguchi) — the use of design of experiments for identifying the major contributors to variation.
Pareto chart — a basic tool used to graphically rank causes from most significant to least significant. It utilizes a vertical bar graph in which the bar height reflects the frequency or impact of causes.
Parametric cost estimating — an estimating methodology using statistical relationships between historical costs and other project variables such as system physical or performance characteristics, contractor output measures, or manpower loading, etc.; also referred to as "top down" estimating.
Pareto analysis — an analysis of the frequency of occurrence of various possible concerns. This is a useful way to decide quality control priorities when more than one concern is present. The underlying "Pareto principle" states that a very small number of concerns is usually responsible for most quality problems.
Pareto diagrams (quality) — a graph, particularly popular in nontechnical projects, used to prioritize the few change areas (often 20% of the total) that cause most quality deviations (often 80% of the total).
Partnership/alliance — a strategy leading to a relationship with suppliers or customers aimed at reducing costs of ownership, maintenance of minimum stocks, just-in-time deliveries, joint participation in design, exchange of information on materials and technologies, new production methods, quality improvement strategies, and the exploitation of market synergy.
Path, network — the continuous, linear series of connected activities through a network.
Payback period — the number of years it will take the results of a project or capital investment to recover the investment from net cash flows.
Payment authorization — the process of allocated fund transfer to an account from which the supplier can be paid for delivered goods/services as per contractual terms.
PDM — see Precedence diagram method.
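A Pareto analysis of defect causes reduces to sorting and accumulating counts; the defect categories and counts below are invented for illustration:

```python
# Pareto analysis sketch: rank defect causes by frequency and show the
# cumulative share, to identify the "vital few" causes.
defects = {"scratches": 52, "misalignment": 27, "porosity": 11,
           "wrong label": 6, "other": 4}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{cause:<14} {count:>3}  cumulative {cumulative / total:.0%}")
```

Drawn as a bar graph with the cumulative line overlaid, this same table becomes the Pareto chart defined above.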
PDM finish-to-finish relationship — this relationship restricts the finish of one work activity until some specified duration following the finish of another work activity.
PDM finish-to-start relationship — a relationship in which one work activity may start just as soon as another work activity is finished.
PDM start-to-finish relationship — a relationship that restricts the finish of one work activity until some duration following the start of another work activity.
PDM start-to-start relationship — this relationship restricts the start of one work activity until some specified duration following the start of some preceding work activity.
PDSA cycle — plan-do-study-act cycle (a variation of PDCA).
Percent complete — a ratio comparison of the completion status to the current projection of total work.
Performance — the calculation of achievement used to measure and manage project quality.
Performance control — control of work during contract execution.
Performance evaluation — the formal system by which managers evaluate and rate the quality of subordinates' performance over a given period of time.
Performance management system — a system that supports and contributes to the creation of high-performance work and work systems by translating behavioral principles into procedures.
Performance plan — a performance management tool that describes desired performance and provides a way to assess the performance objectively.
Personal recognition — the public acknowledgement of an individual's performance on a project.
Personal rewards — providing an individual with psychological or monetary benefits in return for his or her performance.
Personnel training — the development of specific job skills and techniques required by an individual to become more productive.
Persuade — to advise, to move by argument, entreaty, or expostulation to a belief, position, or course of action.
Program evaluation and review technique (PERT) — an event- and probability-based network analysis system generally used in the research and development field where, at the planning stage, activities and their durations between events are difficult to define. Typically used on large programs where the projects involve numerous organizations at widely different locations.
Phase — see Project phase.
Plan — an intended future course of action.
Plan development — the stage of planning during which the plan is initially created.
Plan-do-check-act cycle (PDCA) — a four-step process for quality improvement. In the first step (plan), a plan to effect improvement is developed. In the second step (do), the plan is carried out, preferably on a small scale. In the third step (check), the effects of the plan are observed. In the last step (act), the results are studied to determine what was learned and what can be predicted. The plan-do-check-act cycle is sometimes referred to as the Shewhart cycle because Walter A. Shewhart discussed the concept in his book Statistical Method from the Viewpoint of Quality Control, and as the Deming cycle because W. Edwards Deming introduced the concept in Japan; the Japanese subsequently called it the Deming cycle.
Planned activity — an activity that has not started or finished prior to the data date.
Planner time — manhours related to the planning function.
Planning (phase) — see Development (phase).
Platykurtosis — for frequency distributions: a distribution that shows a flatter peak and shorter "tails" than a normal distribution with the same standard deviation.
Plug date — a date externally assigned to an activity that establishes the earliest or latest date on which an activity is allowed to start or finish.
PM function — see Function.
Point estimate — in statistics, a single-value estimate of a population parameter. Point estimates are commonly used as the points at which interval estimates are centered; it is the interval estimates that convey how much uncertainty is associated with the estimate.
Poisson distribution — a probability distribution for the number of occurrences r of an event, where n = number of trials and p = probability that the event occurs for a single trial; the mean number of occurrences is np. The Poisson distribution is a good approximation of the binomial distribution for a case where p is small. A simpler way to say this is: a distribution used for discrete data, applicable when there are many opportunities for occurrence of an event but a low probability (less than 0.10) on each trial.
Policy — directives issued by management for guidance and direction where uniformity of action is essential. Directives pertain to the approach, techniques, authorities, and responsibilities for carrying out a management function.
Policies/procedures — see Project policies.
Population (statistical) — the set of all possible outcomes of a statistical determination; a group of people, objects, observations, or measurements about which one wishes to draw conclusions. The population is usually considered as an essentially infinite set from which a subset called a sample is selected to determine the characteristics of the population; i.e., if a process were to run for an infinite length of time, it would produce an infinite number of units. The outcome of measuring the length of each unit would represent a statistical universe, or population. Any subset of the units produced (say, a hundred of them collected in sequence) would represent a sample of the population. Also known as universe.
Post contract evaluations — objective performance review and analysis of both parties' performance; realistic technical problems encountered and the corrective actions taken.
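The Poisson probability for r occurrences with mean m = np can be computed directly; this sketch (function name and example values ours) mirrors the binomial-approximation use described above:

```python
# Poisson sketch: probability of r occurrences when the mean number of
# occurrences is m = n*p (binomial approximation for small p).
from math import exp, factorial

def poisson_pmf(r, m):
    return m**r * exp(-m) / factorial(r)

# Example: n = 200 units, p = 0.01 chance of a defect per unit (m = 2).
for r in range(5):
    print(f"P({r} defects) = {poisson_pmf(r, 2.0):.3f}")
```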
Post processing — processing of data after they are collected, usually done by computer.
Post project analysis and report — a formal analysis and documentation of a project's results including cost, schedule, and technical performance vs. the original plan.
Post project evaluation — an appraisal of the costs and technical performance of a completed project and the development of new applications in project management methods to overcome problems that occurred during the project life to benefit future projects.
Preaward meetings — meetings to aid ranking of prospective suppliers before final award determination and to examine their facilities or capabilities.
Precedence diagram method (PDM) — a method of constructing a logic network using nodes to represent the activities and connecting them by lines that show dependencies.
Precedence diagram method arrow — a graphical symbol in PDM networks used to represent the lag describing the relationship between work activities.
Precision — a characteristic of measurement that addresses the consistency or repeatability of a measurement system when the identical item is measured a number of times.
Predecessor activity — any activity that exists on a common path with the activity in question and occurs before the activity in question.
Precision (of measurement) — the extent to which repeated measurement of a standard with a given instrument yields the same result.
Prescribe — to direct specified action. To prescribe implies that action must be carried out in a specified fashion.
Prevention vs. detection — a term used to contrast two types of quality activities. Prevention refers to those activities designed to prevent nonconformances in products and services. Detection refers to those activities designed to detect nonconformances already in products and services. Another term used to describe this distinction is "designing in quality vs. inspecting in quality."
Preventive action — action taken to eliminate the causes of a potential nonconformity, defect, or other undesirable situation in order to prevent occurrence.
Primary process — the basic steps or activities that will produce an output without the "nice-to-haves."
Priorities — the imposed sequences desired with respect to the scheduling of activities within previously imposed constraints.
Priorities matrix — a tool used to choose between several options that have many useful benefits but where not all of them are of equal value.
Probability distribution — a relationship giving the probability of observing each possible outcome of a random event. The relationship may be given by a mathematical expression, or it may be given empirically by drawing a frequency distribution for a large enough sample.
Probability (mathematical) — the likelihood that a particular occurrence (event) has a particular outcome. In mathematical terms, the probability that outcome x occurs is expressed by the formula:
P(x) = (number of trials giving outcome x) / (total number of trials)
Note that, because of this definition, summing up the probabilities for all values of x always gives a total of 1; this is another way of saying that each trial must have exactly one outcome.
Problem/need statement/goal — documentation to define a problem, to document the need to find a solution, and to document the overall aim of the sponsor.
Problem resolution — the interaction between the project manager and an individual team member with the goal of finding a solution to a technical or personal problem that affects project accomplishment.
Problem solving — a rational process for identifying, describing, analyzing, and resolving situations in which something has gone wrong without explanation.
Procedure — a prescribed method of performing specified work. A document that answers the questions: What has to be done? Where is it to be done? When is it to be done? Who is to do it? Why must it be done? (Contrasted with a work instruction, which answers: How is it to be done? With what materials and tools is it to be done?) In the absence of a work instruction, the instructions may be embedded in the procedure.
Process — an activity or group of activities that takes an input, adds value to it, and provides an output to an internal or external customer; a planned and repetitive sequence of steps by which a defined product or service is delivered. In manufacturing, the elements are: machine, method, material, measurement, mother nature, manpower. In nonmanufacturing, the elements are: manpower, place, policy, procedures, measurement, environment.
Process (framework) — the set of activities by means of which an output is achieved; a series of actions or operations that produce a result (especially a continuous operation).
Process improvement — the act of changing a process to reduce variability and cycle time and make the process more effective, efficient, and productive.
Process improvement team (PIT) — a natural work group or cross-functional team whose responsibility is to achieve needed improvements in existing processes. The life span of the team is based on the completion of the team purpose and specific goals.
Process management — the collection of practices used to implement and improve process effectiveness; it focuses on holding the gains achieved through process improvement and assuring process integrity.
Process mapping — the flowcharting of a work process in detail, including key measurements.
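The probability formula just given is easy to demonstrate empirically as a relative frequency; the die-rolling simulation below is an invented illustration:

```python
# Relative-frequency sketch of the probability formula above: estimate
# P(x) for a fair die by counting the trials that give outcome x.
import random

trials = 60_000
counts = {}
for _ in range(trials):
    x = random.randint(1, 6)
    counts[x] = counts.get(x, 0) + 1

for x in sorted(counts):
    print(f"P({x}) ~ {counts[x] / trials:.3f}")  # each about 1/6 = 0.167
```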
Process organization — a form of departmentalization where each department specializes in one phase of the process.
Process owner — the manager or leader who is responsible for ensuring that the total process is effective and efficient.
Procurement addendum — a supplement to bidding documents issued prior to the receipt of bids for the purpose of clarifying, correcting, or adding to the bid documents issued previously.
Procurement advertising — a method of procurement where a contract results from the solicitation of competitive bids through the media.
Procurement/contract negotiations — a process of communication, discussions, and agreement between parties for supply of goods/services in support of procurement objectives.
Procurement cost considerations — a reckoning of a supplier's approach, realism, and reasonableness of cost, forecast of economic factors affecting cost, and cost risks used in the cost proposal.
Procurement environment — the combined internal and external forces, both isolated and in concert, that assist or restrict the attainment of an objective. These could be business- or project-related or may be due to political, economic, technological, or regulatory conditions. See also Macro procurement environment and Micro procurement environment.
Procurement identification — the identification of the different categories of procurement of which one or more may be required during project execution.
Procurement invitation — a method of procurement where a contract results from the selected invitation of competitive bids.
Procurement: other considerations — includes an evaluation of staff and financial resources, facilities, performance record, responsiveness to contract terms and conditions, and a general willingness to perform the work.
Procurement performance evaluation — a comprehensive review of the original specification, statement of work, scope, and contract modifications for the purpose of avoiding pitfalls in future procurements.
Procurement prequalifications — the experience, past performance, capabilities, resources, and current workloads of potential sources.
Procurement qualifications — see Qualifications, contractor.
Procurement: sole source — the only source that could fulfill the requirements of procurement. See Contract/procurement management.
Procurement ranking — qualitative or quantitative determinations of prospective suppliers' capabilities and qualifications in order to select one or more sources to provide proposed material/service.
Procurement relationship with CWBS (contract work breakdown structure) — the relationship of services or items to be procured with the overall work and their interface with any other project activities.
Procurement response — communications, positive or negative, from prospective suppliers in response to the request to supply material/services.
Procurement: source evaluation — overall review of capabilities and ranking of prospective suppliers either to request proposals or to enter into negotiations for the award of a contract.
Procurement: sources selection — the process of selecting organizations or individuals whose resources, credibility, and performance are expected to meet the contract/procurement objectives.
Procurement strategy — the relationship of specific procurement actions to the operating environment of the project.
Procurement supplier valuation — assessment of suppliers' qualifications in order to identify those from whom proposals/bids are to be requested or those who are to be invited to enter negotiations for the award of a contract.
Procurement technical considerations — suppliers' technical competency, understanding of the technical requirements, and capability to produce technically acceptable material or services. Generally this evaluation ranks highest among all other evaluations.
Procurement/tender documents — the documents issued to prospective suppliers when inviting bids/quotations for supply of goods/services.
Producer's risk — the maximum probability of saying a process or lot is unacceptable when, in fact, it is acceptable.
Product/service liability — the obligation of a company to make restitution for loss related to personal injury, property damage, or other harm caused by its product or service.
Productivity — the measurement of labor efficiency when compared to an established base. It is also used to measure equipment effectiveness, drawing productivity, etc.
Profitability — a measure of the total income of a project compared to the total monies expended at any period of time. The techniques that are utilized are payout time, return on original investment (ROI), net present value (NPV), discounted cash flow (DCF), sensitivity, and risk analysis.
Profound knowledge, system of — as defined by W. Edwards Deming, states that learning cannot be based on experience only; it requires comparisons of results to a prediction, plan, or an expression of theory. Predicting why something happens is essential to understand results and to continually improve. The four components of the system of profound knowledge are:
1. Appreciation for a system
2. Knowledge of variation
3. Theory of knowledge
4. Understanding of psychology

Program — an endeavor of considerable scope encompassing a number of projects.
Program management — the management of a related series of projects executed over a broad period of time that are designed to accomplish broad goals and to which the individual projects contribute.
Progress — development to a more advanced state. Progress relates to a progression of development and therefore shows relationships between current conditions and past conditions.
Progress analysis — (time management) the evaluation of calculated progress against the approved schedule and the determination of its impact; (cost management) the development of performance indices (see the worked sketch below) such as:
1. Cost Performance Index (CPI) = BCWP/ACWP
2. Schedule Performance Index (SPI) = BCWP/BCWS
3. Productivity
Progress payments — interim payments for delivered work in accordance with contract terms, generally tied to meeting specified performance milestones.
Progress trend — an indication of whether the progress rate of an activity or project is increasing, decreasing, or remaining the same (steady) over a period of time.
Project — any undertaking with a defined starting point and defined objectives by which completion is identified. In practice, most projects depend on finite or limited resources by which the objectives are to be accomplished.
Project accounting — the process of identifying, measuring, recording, and communicating actual project cost data.
Project archive tape — a computer tape that contains the contract base project plan, the target project plan, and every subsequent update of the project plan.
Project brief — see Project plan.
Project budget — the amount and distribution of money allocated to a project.
Project change — an approved change to project work content caused by a scope of work change or a special circumstance on the project (weather, strikes, etc.). See also Project cost changes.
Project close-out — a process that provides for acceptance of a project by the project sponsor, completion of various project records, final revision, and issue of documentation to reflect the "as-built" condition and the retention of essential project documentation. See Project life cycle.
Project close-out and start-up costs — the estimated extra costs (both capital and operating) that are incurred during the period from the completion of project implementation to the beginning of normal revenue earnings on operations.
Project cost — the actual costs of an entire project.
Project cost changes — the changes to a project and the initiating of the preparation of detail estimates to determine the impact on project costs and schedule. These changes must then be communicated clearly (both written and verbally) to all participants so that they know that approval/rejection of the project changes has been obtained (especially those that change the original project intent).
Project cost systems — the establishment of a project cost accounting system of ledgers, asset records, liabilities, write-offs, taxes, depreciation expense, raw materials, prepaid expenses, salaries, etc.
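The CPI and SPI formulas listed under Progress analysis can be shown with invented period figures:

```python
# Earned-value sketch of the indices above: BCWS (planned value),
# BCWP (earned value), and ACWP (actual cost) for a reporting period.
bcws = 120_000  # budgeted cost of work scheduled
bcwp = 100_000  # budgeted cost of work performed
acwp = 125_000  # actual cost of work performed

cpi = bcwp / acwp  # < 1.0 means over budget
spi = bcwp / bcws  # < 1.0 means behind schedule
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")  # CPI = 0.80, SPI = 0.83
```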


Project data gaps — identification of data gaps in available information in reference to a particular procurement.
Project data review — review of qualification data to determine their adequacy.
Project data verification — verification of qualification data to check their accuracy.
Project duration — the elapsed time from project start date through project finish date.
Project environment — see environment.
Project finish date/schedule — the latest scheduled calendar finish date of all activities on the project, derived from network or resource allocation process calculations.
Project goods — equipment and materials needed to implement a project.
Project information sources — identification and listing of various available sources, internal as well as external, to provide relevant information on specific procurements.
Project integration — the bringing together of diverse organizations, groups, or parts to form a cohesive whole to successfully achieve project objectives.
Project investment cost — the activity of establishing and assembling all the cost elements (capital and operating) of a project as defined by an agreed scope of work. The estimate attempts to predict the final financial outcome of a future investment program even though all the parameters of the project are not yet fully defined.
Project life cycle — the four sequential phases in time through which any project passes, namely, concept, development, execution (implementation or operation), and finishing (termination or close-out). Note that these phases may be broken down into further stages, depending on the area of project application. Sometimes these phases are known as: concept, planning, design, implementation, and evaluation.
Project management (PM) — the art of directing and coordinating human and material resources throughout the life of a project by using modern management techniques to achieve predetermined objectives of scope, cost, time, quality, and participant satisfaction.
Project manager — the individual appointed with responsibility for project management of the project.
Project manual — see project policies/procedures.
Project objectives — project scope expressed in terms of outputs, required resources, and timing.
Project organization — the orderly structuring of project participants.
Project personnel — those members of a project team employed directly by the organization responsible for the project.
Project phase — the division of a project time frame (or project life cycle) into the largest logical collection of related activities.
Project plan (in PM) — a management summary document that gives the essentials of a project in terms of its objectives, justification, and how the objectives are to be achieved. It should describe how all the major activities under each project management function are to be accomplished, including that of overall project control. The project plan will evolve through successive stages of the project life cycle. Prior to project implementation, for example, it may be referred to as a project brief. See also baseline and baseline concept.
Project plan (general) — all the documents that comprise the details of why a project is to be initiated, what the project is to accomplish, when and where it is to be implemented, who will have responsibility, how the implementation will be carried out, how much it will cost, what resources are required, and how the project's progress and results will be measured.
Project planning — the identification of project objectives and the ordered activity necessary for project completion; the identification of resource types and quantities required to carry out each activity or task.
Project policies — general guidelines/formalized methodologies on how a project will be managed.
Project preselection meetings — meetings held to supplement and verify qualifications, data, and specifications.
Project procedures — the methods, practices, and policies (both written and verbal communications) that will be used during a project's life.
Project procurement strategy — the relationship of specific procurement actions to the operating environment of a project.
Project reporting — a planning activity involved with the development and issuance of (internal) time management analysis reports and (external) progress reports.
Project risk — the cumulative effect of the chances of uncertain occurrences that will adversely affect project objectives. It is the degree of exposure to negative events and their probable consequences. Project risk is characterized by three factors: risk event, risk probability, and the amount at stake.
Project risk analysis — analysis of the consequences and probabilities that certain undesirable events will occur and their impact on achieving contract/procurement objectives.
Project risk characterization — identifying the potential external or internal risks associated with procurement actions, using estimates of probability of occurrence.
Project segments — project subdivisions expressed as manageable components.
Project services — expertise and/or labor needed to implement a project that is not available directly from a project manager's organization.
Project stage — a subset of project phase.
Project start date/schedule — the earliest calendar start date among all activities in a network.
Project team (framework) — the central management group of the project; the group of people, considered as a group, that shares responsibility for the accomplishment of project goals and whose members report either part-time or full-time to the project manager.


Proposal project plan — usually the first plan issued on a project; it accompanies the proposal. It contains key analysis, procurement, and implementation milestones; historical data; and any client-supplied information. It is usually presented in bar chart form or as a summary-level network and is used for inquiry and contract negotiations.
Prospectus — the assembly of the evaluation profitability studies and all the pertinent technical data in an overall report for presentation to and acceptance by the owner and funders of a project.
Psychographic customer characteristics — variables among buyers in the consumer market that address lifestyle issues and include consumer interests, activities, and opinions.
Pull system — see Kanban.
Public, the (project external) — all those who are not directly involved in the project but have an interest in its outcome. This could include, for example, environmental protection groups, Equal Employment Opportunity groups, and others with a real or imagined interest in the project or the way it is managed.
Public, the (project internal) — all personnel working directly or indirectly on a project.
Public relations — an activity designed to improve the environment in which an organization operates in order to improve the performance of that organization.
Punch list — a list made near the completion of a project showing the items of work remaining in order to complete the project scope.
Purchase — outright acquisition of items, mostly off-the-shelf or catalog items, manufactured outside the purchaser's premises.
Qualifications: contractor — a review of the experience, past performance, capabilities, resources, and current workloads of potential service resources.
Quality — a subjective term for which each person has his or her own definition. In technical usage, quality can have two meanings: (1) the characteristics of a product or service that bear on its ability to satisfy stated or implied needs and (2) a product or service free of deficiencies.
Quality adviser — the person (facilitator) who helps team members work together in quality processes and is a consultant to the team. The adviser is concerned with the process and how decisions are made rather than with which decisions are made. In the Six Sigma initiative, this person is also called a champion.
Quality assessment — the process of identifying business practices, attitudes, and activities that are enhancing or inhibiting the achievement of quality improvement in an organization.
Quality assurance/quality control (QA/QC) — two terms that have many interpretations because of the multiple definitions for the words assurance and control. For example, assurance can mean the act of giving confidence, the state of being certain, or the act of making certain; control can mean an evaluation to indicate needed corrective responses, the act of guiding, or the state of a process in which the variability is attributable to a constant system of chance causes. (For a detailed discussion of the multiple definitions, see ANSI/ISO/ASQC A3534-2, Statistics — Vocabulary and Symbols — Statistical Quality Control.) One definition of quality assurance is: all the planned and systematic activities implemented within the quality system that can be demonstrated to provide confidence that a product or service will fulfill requirements for quality. One definition of quality control is: the operational techniques and activities used to fulfill requirements for quality. Often, however, quality assurance and quality control are used interchangeably, referring to the actions performed to ensure the quality of a product, service, or process. The focus of assurance is planning; that of control is appraising.
Quality assurance — (contract/procurement management) planned and systematic actions necessary to provide adequate confidence that the performed service or supplied goods will serve satisfactorily for the intended and specified purpose; (managerial) the development of a comprehensive program that includes the processes of identifying objectives and strategy, of client interfacing, and of organizing and coordinating planned and systematic controls for maintaining established standards. This in turn involves measuring and evaluating performance against these standards, reporting results, and taking appropriate action to deal with deviations.
Quality control (technical) — the planned process of identifying established system requirements and exercising influence through the collection of specific (usually highly technical and itself standardized) data. The basis for decisions on any necessary corrective action is provided by analyzing the data and reporting it comparatively against system standards.
Quality evaluation methods — the technical process of gathering measured variables or counted data for decision-making in quality process review. Normally these evaluation methods should operate in a holistic context involving proven statistical analysis, referred to previously as statistical process control. A few example methods are: graphs and charts, Pareto diagrams, and exception reporting.
Quality loop — a conceptual model of interacting activities that influence quality at the various stages, ranging from the identification of needs to the assessment of whether those needs are satisfied.
Quality loss function — a parabolic approximation (a Taylor series) of the quality loss that occurs when a quality characteristic deviates from its target value. The quality loss function is expressed in monetary units: the cost of deviating from the target increases as a quadratic function the further the quality characteristic moves from the target. The formula used to compute the quality loss function depends on the type of quality characteristic being used. The quality loss function was first introduced in this form by Genichi Taguchi (see the illustration following this group of entries).
Quality characteristics — the unique characteristics of products and of services by which customers evaluate their perception of quality.
Quality council — (sometimes called the "quality steering committee") the group driving the quality improvement effort and usually having oversight responsibility for the implementation and maintenance of the quality management system; operates in parallel with the normal operation of the business.
Quality engineering — the analysis of a manufacturing system at all stages to maximize the quality of the process itself and the products it produces.
Quality function — the entire collection of activities through which an organization achieves fitness for use, no matter where these activities are performed.
Quality improvement — actions taken throughout an organization to increase the effectiveness and efficiency of activities and processes in order to provide added benefits to both the organization and its customers.
Quality level agreement (QLA) — an internal service/product agreement with which providers assist their internal customers in clearly delineating the level of service/product required, in quantitative, measurable terms. A QLA may contain specifications for accuracy, timeliness, quality/usability, product life, service availability, responsiveness to needs, etc.
Quality management — quality itself is the composite of material attributes (including performance features and characteristics) of the product, process, or service that are required to satisfy the need for which the project is launched. Quality policies, plans, procedures, specifications, and requirements are attained through the subfunctions of quality assurance (managerial) and quality control (technical). Therefore, QM is viewed as the umbrella for all activities of the overall management function that determine the quality policy, objectives, and responsibilities and implement them by means such as quality planning, quality control, quality assurance, and quality improvement within the quality system.
Quality metrics — numerical measurements that give an organization the ability to set goals and evaluate actual performance vs. plan.
Quality plan — the document setting out the specific quality practices, resources, and sequence of activities relevant to a particular product, project, or contract; also known as a control plan.
Quality planning — the activity of establishing quality objectives and quality requirements.
Quality process review — the technical process of using data to decide how the actual project results compare with the quality specifications/requirements. If deviations occur, this analysis may cause changes in the project design, development, use, etc., depending on the decisions of the client, involved shareholders, and project team.
Quality system — the organizational structure, procedures, processes, and resources needed to implement quality management.
Quality trilogy — a three-pronged approach to managing for quality. The three legs are quality planning (developing the products and processes required to meet customer needs), quality control (meeting product and process goals), and quality improvement (achieving unprecedented levels of performance); attributed to Joseph M. Juran.
Questionnaires — see surveys.
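To make the quality loss function defined above concrete (this is the standard nominal-the-best form; the dollar figures below are invented):

L(y) = k(y - T)^2, with k = A/\Delta^2

where y is the measured value, T the target, and A the loss incurred at the tolerance limit T ± Δ. If a deviation of Δ = 0.5 mm costs A = $50, then k = 50/0.25 = $200 per mm², and a part 0.3 mm off target carries an expected loss of 200 × 0.09 = $18.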


Queue processing — processing in batches (contrast with continuous flow processing).
Queue time — the wait time of product awaiting the next step in a process.
Random — varying with no discernible pattern.
Random number generator — used to select a stated quantity of random numbers from a table of random numbers; the resulting selection is then used to pull specific items or records corresponding to the selected numbers to comprise a "random sample."
Random sample — a sample of size n selected so that each part in the lot or batch has an equal probability of being selected.
Random sampling — a sampling method in which every element in a population has an equal chance of being included.
Range — a measure of dispersion; the difference between the highest and lowest of a group of values.
Ratio analysis — the process of relating isolated business numbers, such as sales, margins, expenses, debt, and profits, to make them meaningful.
Rational subgroup — a subgroup that is expected to be as free as possible from assignable causes (usually consecutive items). In control charting: a subgroup of units selected to minimize the differences due to assignable causes. Usually samples taken consecutively from a process operating under the same conditions will meet this requirement.
Real time — the application of external time constraints that might affect the calendar time position of execution of each activity in a schedule.
Recommend — to offer or suggest for use. Recommendation describes the presentation of plans, ideas, or things to others for adoption. To recommend is to offer something with the option of refusal.
Record retention — the necessity to retain records for reference for a specified period after contract close-out, in case they are needed.
Records management — the procedures established by an organization to manage all documentation required for the effective development and application of its work force.
Recovery schedule — a special schedule showing special efforts to recover time lost (compare master schedule).
Recruitment, selection, and job placement — attracting a pool of potential employees, determining which of those employees is best suited for work on the project, and matching that employee to the most appropriate task based on his or her skills and abilities.
Refinement — the reworking, redefinition, or modification of the logic or data that may have been previously developed in the planning process as required to properly input milestones, restraints, and priorities.
Regression analysis — a study used to understand the relationship between two or more variables; in other words, a technique for determining the mathematical relation between a measured quantity and the variables it depends on. The relationship can be determined and expressed as a mathematical equation. For example, the method might be used to determine the mathematical form of the probability distribution from which a sample was drawn, by determining which form best "fits" the frequency distribution of the sample. The frequency distribution is the "measured quantity," and the probability distribution is a "mathematical relation." (A short computational sketch follows this group of entries.)
Regulatory personnel — those individuals working for government regulatory agencies whose task it is to assure compliance with their particular agency's requirements.
Reliability — in measurement system analysis, refers to the ability of an instrument to produce the same results over repeated administrations, that is, to measure consistently. In reliability engineering, it is the probability of a product performing its intended function under stated conditions for a given period of time (see also mean time between failures).
Remaining available resource — the difference between the resource availability pool and the level schedule resource requirements; computed from the resource allocation process.
Remaining duration — the estimated work units needed to complete an activity as of the data date.
Remaining float (RF) — the difference between the early finish and the late finish date.
Remedy — something that eliminates or counteracts a problem cause; a solution.
Repair — action taken on a nonconforming product so that it will fulfill the intended usage requirements, although it may not conform to the originally specified requirements.
Repeatability and reproducibility (R & R) — a measurement-validation process to determine how much variation exists in the measurement system (including the variation in product, the gauge used to measure, and the individuals using the gauge).
Repeatability (of a measurement) — the extent to which repeated measurements of a particular object with a particular instrument produce the same value.
Reporting — a planning activity involved with the development and issuance of (internal) time management analysis reports and (external) progress reports.
Reproducibility — the variation between individual people taking the same measurement and using the same gauging.
Request for proposal — a formal invitation containing a scope of work that seeks a formal response (proposal) describing both methodology and compensation to form the basis of a contract.
Request for quotation — a formal invitation to submit a price for goods or services as specified.
Reschedule — the process of changing the logic, duration, or dates of an existing schedule in response to externally imposed conditions.
Resistance to change — unwillingness to change beliefs, habits, and ways of doing things.
Resolution (of a measurement) — the smallest unit of measure that an instrument is capable of indicating.
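A minimal least-squares sketch of the regression idea in Python (the NumPy library supplies the fit; the data values are invented for illustration):

import numpy as np

# Invented example data: x = oven temperature, y = measured yield.
x = np.array([100.0, 110.0, 120.0, 130.0, 140.0])
y = np.array([20.1, 22.4, 24.2, 26.3, 28.0])

# Fit the straight-line relation y = b0 + b1*x by ordinary least squares.
b1, b0 = np.polyfit(x, y, deg=1)
print(f"fitted relation: y = {b0:.2f} + {b1:.3f} * x")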


Resource — any factor, except time, required or consumed to accomplish an activity; any substantive requirement of an activity that can be quantified and defined, e.g., manpower, equipment, material, etc.
Resource allocation process — the scheduling of activities in a network with the knowledge of certain resource constraints and requirements. This process adjusts activity-level start and finish dates to conform to resource availability and use.
Resource availability date — the calendar date when a resource pool becomes available for a given resource.
Resource availability pool — the amount of resource availability for any given allocation period.
Resource code — the code used to identify a given resource type.
Resource description — the actual name or identification associated with a resource code.
Resource identification — identification of potential sources that could provide the specified material or services. These sources could be identified either from the firm/project list of vendors or by advertising the need for procurement.
Resource-limited planning — the planning of activities so that predetermined resource availability pools are not exceeded. Activities are started as soon as resources are available (subject to logic constraints), as required by the activity.
Response planning — the process of formulating suitable risk management strategies for a project, including the allocation of responsibility to the project's various functional areas. It may involve mitigation, deflection, and contingency planning. It should also make some allowance, however tentative, for completely unforeseen occurrences.
Resource plots — a display of the amount of resources required as a function of time on a graph. Individual, summary, incremental, and cumulative resource curve levels can be shown.
Resource requirements matrix — a tool to relate the resources required to the project tasks requiring them (used to indicate types of individuals needed, material needed, subcontractors, etc.).
Response surface methodology (RSM) — a method of determining the optimum operating conditions and parameters of a process by varying the process parameters and observing the results on the product. This is the same methodology used in evolutionary operations (EVOP), but it is used in process development rather than actual production, so that strict adherence to product tolerances need not be maintained. An important aspect of RSM is to consider the relationships among parameters and the possibility of simultaneously varying two or more parameters to optimize the process.
Response system — the ongoing process put in place during the life of the project to monitor, review, and update project risk and make the necessary adjustments. Examination of the various risks will show that some risks are greater in some stages of the project life cycle than in others.


Responsibility — charged personally with the duties, assignments, and accountability for results associated with a designated position in the organization. Responsibility can be delegated, but it cannot be shared.
Responsibility charting — the activity of clearly identifying personnel and staff responsibilities for each task within a project.
Restraint — an externally imposed factor affecting when an activity can be scheduled. The external factor may be labor, cost, equipment, or other such resource.
Review — to examine critically to determine suitability or accuracy.
Risk assessment — review, examination, and judgment about whether or not the identified risks are acceptable in the proposed actions.
Risk data applications — the development of a database of risk factors, both for the current project and as a matter of historical record.
Risk deflection — the act of transferring all or part of a risk to another party, usually by some form of contract.
Risk event — the precise description of what might happen to the detriment of a project.
Risk factor — any one of risk event, risk probability, or amount at stake, as defined above.
Risk identification — the process of systematically identifying all possible risk events that may impact a project. The risk events may be conveniently classified according to their cause or source and ranked roughly according to the ability to manage effective responses. Not all risk events will impact all projects, but the cumulative effect of several risk events occurring in conjunction with each other may well be more severe than examination of the individual risk events would suggest.
Risk management — the art and science of identifying, analyzing, and responding to risk factors throughout the life of a project and in the best interests of its objectives.
Risk mitigation — the act of revising a project's scope, budget, schedule, or quality, preferably without material impact on the project's objectives, in order to reduce uncertainty on the project.
Risk probability — the degree to which the risk event is likely to occur.
Risk response planning — the process of formulating suitable risk management strategies for a project, including the allocation of responsibility to the project's various functional areas. It may involve risk mitigation, risk deflection, and contingency planning. It should also make some allowance, however tentative, for completely unforeseen occurrences.
Risk response system — the ongoing process put in place during the life of the project to monitor, review, and update project risk and make the necessary adjustments. Examination of the various risks will show that some risks are greater in some stages of the project life cycle than in others.
s — symbol used to represent the standard deviation of a sample.
σ hat (σ̂) — symbol used to represent the estimated standard deviation, given by the formula σ̂ = R-bar/d2. The estimated standard deviation may be used only if the data are normally distributed and the process is in control (see the worked example below).
Salary administration — the formal system by which an organization manages its financial commitments to its employees. It includes man-hour accounting and the development of a logical structure for compensation.
Sales leveling — a strategy of establishing a long-term relationship with customers to lead to contracts for fixed amounts and scheduled deliveries in order to smooth the flow and eliminate surges.
Sample — a finite number of items of a similar type taken from a population for the purpose of examination to determine whether all members of the population would conform to quality requirements or specifications.
Sample size — the number of units in a sample chosen from a population.
Sampling — the process of drawing conclusions about a population based on a part of the population.
Sample — (statistics) a representative group selected from a population. The sample is used to determine the properties of the population.
Sample size — the number of elements, or units, in a sample.
Sampling — the process of selecting a sample of a population and determining the properties of the sample. The sample is chosen in such a way that its properties are representative of the population.
Sampling variation — the variation of a sample's properties from the properties of the population from which it was drawn.
S curves — a graphical display of accumulated costs, labor hours, or quantities plotted against time for both budgeted and actual amounts.
Scatter plot — for a set of measurements of two variables on each unit of a group: a plot on which each unit is represented as a dot at the x,y position corresponding to the measured values for the unit. The scatter plot is a useful tool for investigating the relationship between the two variables.
Scatter diagram — a graphical technique to analyze the relationship between two variables. Two sets of data are plotted on a graph, with the y-axis used for the variable to be predicted and the x-axis used for the variable to make the prediction. The graph will show possible relationships (although two variables might appear to be related, they might not be: those who know most about the variables must make that evaluation). The scatter diagram is one of the seven tools of quality.
Scenario planning — a strategic planning process that generates multiple stories about possible future conditions, allowing an organization to look at the potential impact on it and the different ways it could respond.
Schedule — a display of project time allocation.
Schedule: pictorial display — a display in the form of a still picture, slide, or video that represents scheduling information.
Schedule refinement — the reworking, redefinition, or modification of the logic or data that may have previously been developed in the planning process as required to properly input milestones, restraints, and priorities.
Schedule revision — in the context of scheduling, a change in the network logic or in resources that requires redrawing part or all of the network.
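As a worked illustration of the σ̂ estimate (the range figure is invented): for subgroups of size five, the control chart constant d2 = 2.326, so an average range of R-bar = 4.65 gives

σ̂ = R-bar/d2 = 4.65/2.326 ≈ 2.0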


Schedule status — see scope reporting.
Schedule update — revision of a schedule to reflect the most current information on a project.
Schedule variance — any difference between the projected duration of an activity and the actual duration of the activity; also, the difference between projected start and finish dates and actual or revised start and finish dates.
Schedule work unit — a calendar time unit when work may be performed on an activity.
Scheduling — the recognition of realistic time and resource restraints that will, in some way, influence the execution of a plan.
Scientific management — aimed at finding the one best way to perform a task so as to increase productivity and efficiency.
Scope — the work content and products of a project or component of a project. Scope is fully described by naming all activities performed, the resources consumed, and the end products that result, including quality standards. A statement of scope should be introduced by a brief background to the project, or component, and the general objectives.
Scope baseline — a summary description of a project's or component's original content and end product, including basic budgetary and time-constraint data.
Scope baseline approval — approval of the scope baseline by the appropriate authority (project sponsors and senior project management staff).
Scope change — a deviation from the originally agreed project scope.
Scope constraints — applicable restrictions that will affect the scope.
Scope cost — basic budgetary constraints.
Scope criteria — standards or rules composed of parameters to be considered in defining the project.
Scope interfaces — points of interaction between the project or its components and its/their respective environments.
Scope management — the function of controlling a project in terms of its goals and objectives through the processes of conceptual development, full definition or scope statement, execution, and termination.
Scope of work — a narrative description of the work to be accomplished or resources to be supplied.
Scope performance/quality — the basic objective of a project; defines the characteristics of the project's end product as required by the sponsor.
Scope reporting — a process of periodically documenting the status of basic project parameters during the course of a project. The three areas of scope reporting are:
• Cost status — as affecting financial status.
• Schedule status — as affecting time constraint status.
• Technical performance status — as affecting quality.
Scope schedule — basic time constraints.
Scope statement — a documented description of the project as to its output, approach, and content.


Screening — techniques used for reviewing, analyzing, ranking, and selecting the best alternative for the proposed action.
Secondary float (SF) — the difference between the CPM-calculated early finish and the imposed finish date.
Semantics — the language used to achieve a desired effect on an audience.
Sensitivity — (of a measuring instrument) the smallest change in the measured quantity that an instrument is capable of detecting.
Service and support personnel — those individuals working in functions such as personnel, accounting, maintenance, and legislative relations that are needed to keep the "primary functions" operating effectively.
Shape — the pattern or outline formed by the relative position of a large number of individual values obtained from a process.
Short-term plan — a short-duration schedule, usually 4 to 8 weeks, used to show in detail the activities and responsibilities for a particular period; a management technique often used "as needed" or in a critical area of a project.
Short-term schedule — see short-term plan.
Sigma (σ) — the standard deviation of a statistical population.
Simulation (modeling) — using a mathematical model of a system or process to predict the performance of the real system. The model consists of a set of equations or logic rules that operate on numerical values representing the operating parameters of the system. The result of the equations is a prediction of the system's output (see the sketch following these entries).
SIPOC — a macro-level analysis of suppliers, inputs, processes, outputs, and customers.
Skewness — a measure of a distribution's symmetry. A skewed distribution shows a longer-than-normal tail on the right or left side of the distribution.
Skill — an ability and competence learned by practice.
Special causes — causes of variation that arise because of special circumstances. They are not an inherent part of a process. Special causes are also referred to as assignable causes (see also common causes).
Specification (of a product) — a listing of the required properties of a product. The specifications may include the desired mean and/or tolerances for certain dimensions or other measurements, the color or texture of surface finish, or any other properties that define the product.
Specification (time management) — an information vehicle that provides a precise description of a specific physical item, procedure, or result for the purpose of purchase and/or implementation of the item or service; (contract/procurement management) written, pictorial, or graphic information that describes, defines, or specifies services or items to be procured.
Specification control — a system for assuring that project specifications are prepared in a uniform fashion and changed only with proper authorization.
Sporadic problem — a sudden adverse change in the status quo that can be remedied by restoring the status quo. For example, actions such as changing a worn part or proper handling of an irate customer's complaint can restore the status quo.
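The simulation entry above can be made concrete with a short Monte Carlo sketch in Python (the model and its parameters are invented for illustration):

import random

# Invented model: total project duration is the sum of three activity
# durations, each assumed normally distributed (mean, std. dev.) in days.
ACTIVITIES = [(10.0, 1.0), (20.0, 2.0), (15.0, 1.5)]

def simulate_once():
    return sum(random.gauss(mean, sd) for mean, sd in ACTIVITIES)

trials = sorted(simulate_once() for _ in range(10000))
print("mean duration:", sum(trials) / len(trials))
print("90th percentile:", trials[int(0.9 * len(trials))])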


Stabilization — the period of time between continuous operation and normal operation. This period encompasses those activities necessary to establish reliable operation at design conditions of capacity, product quality, and efficiency.
Staff personnel — those individuals working in departments that are not directly involved in an organization's mainstream activity but rather perform advising, counseling, and assisting duties for the line/functional departments.
Stage — see project stage.
Stakeholders — people, departments, and organizations that have an investment or interest in the success or actions taken by an organization.
Standard (measurement) — a reference item providing a known value of a quantity to be measured. Standards may be primary — i.e., the standard essentially defines the unit of measure — or secondary (transfer) standards, which are compared to the primary standard (directly or by way of an intermediate transfer standard). Standards are used to calibrate instruments that are then employed to make routine measurements.
Standard procedure — prescribes that a certain kind of work be done in the same way wherever it is performed.
Standard proposal schedule — a preestablished network on file.
Start-up — that period after the date of initial operation during which the unit is brought up to acceptable production capacity and quality. Start-up is the activity that is often confused (used interchangeably) with date of initial operation.
Standard — a statement, specification, or quantity of material against which measured outputs from a process may be judged as acceptable or unacceptable; a basis for the uniformity of measuring performance. Also, a document that prescribes a specific consensus solution to a repetitive design, operating, or maintenance problem.
Standard deviation — a calculated measure of variability that shows how much the data are spread around the mean; a measure of the variation among the members of a statistical sample. It is denoted by the lowercase Greek letter sigma (σ) for the population and by s for samples (the formula is given below).
Statistic — an estimate of a population parameter using a value calculated from a random sample.
Statistical confidence — (also called "statistical significance") the level of accuracy expected of an analysis of data. Most frequently, it is expressed as either a "95% confidence level" or a "5% level of significance."
Statistical inference — the process of drawing conclusions on the basis of statistics.
Statistical thinking — a philosophy of learning and action based on three fundamental principles:
1. All work occurs in a system of interconnected processes.
2. Variation exists in all processes.
3. Understanding and reducing variation are vital to improvement.
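Stated as a formula (the sample form; x-bar is the sample mean and n the sample size):

s = \sqrt{ \frac{ \sum_{i=1}^{n} (x_i - \bar{x})^2 }{ n - 1 } }

The population form, σ, divides by the population size N instead of n - 1.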


Statistics — the mathematical methods used to determine the best range of probable values for a project and to assess the degree of accuracy or allowance for unpredictable future events, such as accidents, technological innovations, and strikes, that can occur during a project's life. The techniques that can be used are risk analysis with Monte Carlo simulation, confidence levels, range analysis, etc.
Status — the condition of a project at a specified point in time.
Statusing — indicating the most current project status.
Status system — a system for tracking status at the lowest level of detail.
Stop work order — a request for interim stoppage of work due to nonconformance or funding or technical limitations.
Strategic plan — the target plan prioritized by critical total float from the current schedule.
Strategy — a framework guiding those choices that determine the nature and direction to attain an objective.
Stratification (of a sample) — if a sample is formed by combining units from several lots having different properties, the sample distribution will show a concentration or clumping about the mean value for each lot; this is called stratification. In control charting, if there are changes between subgroups due to stratification, the R-chart points will all tend to be near the centerline.
Stratified random sampling — a technique used to segment (stratify) a population prior to drawing a random sample from each stratum, the purpose being to increase precision when members of different strata would, if not stratified, cause an unrealistic distortion (see the sketch following this group of entries).
Structural variation — variation caused by regular, systematic changes in output, such as seasonal patterns and long-term trends.
Study — the methodical examination or analysis of a question or problem.
Subnet — the subdivision of a network into fragments, usually representing some form of subproject.
Subgroup — for control charts: a sample of units from a given process, all taken at or near the same time.
Substantial completion — the point in time when the work is ready for use or is being used for the purpose intended and is so certified.
Suboptimization — the need for each business function to consider overall organizational objectives, resulting in higher efficiency and effectiveness of the entire system, although performance of a function may be suboptimal.
Successor activity — any activity that exists on a common path with the activity in question and occurs after the activity in question.
Summary schedule — a single-page, usually time-scaled project schedule, typically included in management-level progress reports; also known as a milestone schedule.
Summative quality evaluation — the process of determining what lessons have been learned after a project is completed. The objective is to document which behaviors helped determine, maintain, or increase quality standards and which did not (for use in future projects).
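A minimal sketch of stratified random sampling in Python (the strata and the sampling fraction are invented for illustration):

import random

# Invented strata: unit identifiers grouped by the shift that produced them.
strata = {
    "shift_A": list(range(0, 600)),     # 600 units
    "shift_B": list(range(600, 1000)),  # 400 units
}
FRACTION = 0.05  # sample 5% of each stratum rather than of the pooled lot

sample = []
for units in strata.values():
    sample.extend(random.sample(units, round(len(units) * FRACTION)))

print(len(sample), "units sampled across", len(strata), "strata")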


Supplementary agreement — a contract modification that is accomplished by the mutual action of the parties.
Supplementary conditions — modifications, deletions, and additions to standard general conditions developed for particular goods/services.
Supplementary information — identification and collection of additional information from supplementary sources and its review and analysis.
Supplier default — failure on the part of a supplier to meet technical or delivery requirements of a contract.
Supplier expediting — actions taken to ensure that the goods/services are supplied in accordance with the schedule documented in the contract.
Supplier ranking — qualitative or quantitative determinations of prospective suppliers' qualifications relative to the provision of the proposed goods/services.
Survey — an examination for some specific purpose; careful inspection or consideration; detailed review (survey implies the inclusion of matters not covered by agreed-upon criteria). Also, a structured series of questions designed to elicit a predetermined range of responses covering a preselected area of interest; it may be administered orally by a survey taker, by paper and pencil, or by computer. Responses are tabulated and analyzed to identify significant areas for change.
SWOT analysis — an assessment of an organization's key strengths, weaknesses, opportunities, and threats. It considers factors such as the organization's industry, its competitive position, functional areas, and management.
System — a methodical assembly of actions or things forming a logical and connected scheme or unit.
Systematic variation (of a process) — variations that exhibit a predictable pattern. The pattern may be cyclic (i.e., a recurring pattern) or may progress linearly (a trend).
t-distribution — the distribution followed by a sample of size n drawn from a normally distributed population, described by the sample mean X-bar and sample standard deviation s when the true population parameters are unknown. The t-distribution is expressed as a table for a given number of degrees of freedom and a risk. As the degrees of freedom get very large, it approaches the z-distribution.
t-test — a test of the statistical hypothesis that two population means are equal. The population standard deviations are unknown but thought to be the same. The hypothesis is rejected if the t value is outside the acceptable range listed in the t-table for a given risk and degrees of freedom (see the sketch below).
Take-off — a term used for identifying and recording from drawings the material and quantities required for estimating the time and cost for the completion of an activity.
Target date — the date an activity is desired to be started or completed; accepted as the date generated by the initial CPM schedule operation and resource allocation process.
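A minimal sketch of the two-sample t-test in Python (the SciPy library supplies the test; the cycle-time data are invented for illustration):

from scipy import stats

# Invented example: cycle times (minutes) from two machines.
machine_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
machine_b = [12.6, 12.9, 12.5, 12.8, 12.7, 12.4]

# equal_var=True matches the glossary's assumption that the two population
# standard deviations are unknown but thought to be the same.
t_stat, p_value = stats.ttest_ind(machine_a, machine_b, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Reject equality of means if p falls below the chosen risk (e.g., 0.05).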


Target project plan — the target plan prioritized by critical total float from the current schedule.
Target reporting — a method of reporting the current schedule against some established baseline schedule and computing the variances between them.
Task types — characterization of tasks by resource requirement, responsibility, discipline, jurisdiction, function, etc.
Team building — the process of influencing a group of diverse individuals, each with his or her own goals, needs, and perspectives, to work together effectively for the good of a project such that their team will accomplish more than the sum of their individual efforts could otherwise achieve.
Team decision-making — the process by which the project manager and his team determine feasible alternatives in the face of a technical, psychological, or political problem and make a conscious selection of a course of action from among these available alternatives.
Team members — the individuals reporting either part-time or full-time to the project manager who are responsible for some aspect of a project's activities.
Team motivation — the process by which the project manager influences his project team to initiate effort on project tasks, expend increasing amounts of effort on those tasks, and persist in expending effort on these tasks over the period of time necessary for project goal accomplishment.
Team reward system — the process by which the project team receives recognition for its accomplishments.
Technical quality administration — the technical process of establishing a plan for monitoring and controlling a project's satisfactory completion. This plan also includes policies and procedures to prevent or correct deviations from quality specifications/requirements.
Technical quality specifications — the process of establishing specific project requirements, including execution criteria and technologies, project design, measurement specifications, and material procurement and control, that satisfy the expectations of the client, shareholders, and project team.
Technical quality support — the process of providing technical training and expertise from one or more support groups to a project in a timely manner. The efforts of these groups could generate considerations for future client needs or warranty services.
Technical specifications — documentation that describes, defines, or specifies the goods and services to be supplied. See also specifications.
Termination (phase) — the fourth and final phase in the generic project life cycle; also known as the final or close-out phase.
Tied activity — an activity that must start within a specified time or immediately after its predecessor's completion.
Time delay claim — a request for an extension to the contract dates.


Time-limited scheduling — the scheduling of activities so that predetermined resource availability pools are not exceeded unless further delay would cause the project finish date to be delayed. Activities can be delayed only until their late start date. However, activities will begin when the late start date is reached, even if resource limits are exceeded. Networks with negative total float time should not be processed by time-limited scheduling.
Time management — the function required to maintain appropriate allocation of time to the overall conduct of a project through the successive stages of its natural life cycle (i.e., concept, development, execution, and termination) by means of time planning, time estimating, time scheduling, and schedule control.
Time periods — comparing calculated time vs. specified time in relation to constraints and time-span objectives.
Tolerance — the permissible range of variation in a particular dimension of a product. Tolerances are often set by engineering requirements to ensure that components will function together properly.
Top management — from the viewpoint of the project manager, top management includes the individual to whom he or she reports on project matters and other managers senior to that individual.
Total float (TF) — the amount of time (in work units) that an activity may be delayed from its early start without delaying the project finish date. Total float is equal to the late finish minus the early finish, or the late start minus the early start, of the activity (a worked example follows these entries).
Transmit — to send or convey from one person or place to another.
Trend — a gradual, systematic change with time or another variable.
Trend analyses — mathematical methods for establishing trends based on past project history, allowing for adjustment, refinement, or revision to predict cost. Regression analysis techniques can be used for predicting cost and schedule trends using data from historical projects.
Trend monitoring — a system for tracking the estimated cost, schedule, and resources of a project vs. those planned.
Trend reports — indicators of variations of project control parameters against planned objectives.
Trending — the review of proposed changes in resource allocation and the forecasting of their impact on budget. To be effective, trending should be performed regularly and the impacts on budget plotted graphically. Used in this manner, trending supports a decision to authorize a change.
Type I error — in control chart analysis: concluding that a process is unstable when, in fact, it is stable.
Type II error — in control chart analysis: concluding that a process is stable when, in fact, it is unstable.
Uncertainty — lack of knowledge of future events. See also project risk.
Uniform distribution — a type of distribution in which all outcomes are equally likely.
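Equivalently, TF = LS - ES = LF - EF. For instance (the dates are invented): an activity with early start day 5 and late start day 8 (early finish day 12, late finish day 15) has TF = 8 - 5 = 15 - 12 = 3 work units of float.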


Unit — a discrete item (lamp, invoice, etc.) that possesses one or more CTQs. (Note: units must be considered with regard for the specific CTQs of concern to a customer or to a specific process.)
Unit of measure — the smallest increment a measurement system can indicate. See also resolution.
Universe — see population.
Unit price (UP) contract — a fixed-price contract whereby the supplier agrees to furnish goods or services at unit rates, and the final price is dependent on the quantities needed to carry out the work.
UP — see unit price contract.
Update — to revise a schedule to reflect the most current information on a project.
Validation — confirmation by examination of objective evidence that specific requirements or a specified intended use is met.
Validity — the ability of a feedback instrument to measure what it was intended to measure.
Value-added — refers to tasks or activities that convert resources into products or services consistent with customer requirements. The customer can be internal or external to the organization.
Value analysis, value engineering, and value research (VA, VE, VR) — an activity devoted to optimizing cost performance; the systematic use of techniques that identify the required functions of an item, establish values for those functions, and provide the functions at the lowest overall cost without loss of performance (optimum overall cost). Value analysis assumes that a process, procedure, product, or service is of no value unless proven otherwise. It assigns a price to every step of a process and then computes the worth-to-cost ratio of that step. VE points the way to elimination and reengineering. Value research (related to value engineering) for given features of the service or product helps determine the customers' strongest "likes" and "dislikes" and those for which customers are neutral. It focuses attention on strong dislikes and enables identified "neutrals" to be considered for cost reductions.
Variables — quantities that are subject to change or variability.
Variable data — data resulting from the measurement of a parameter or variable, as opposed to attributes data. A dimensional value can be recorded and is only limited in value by the resolution of the measurement system. Control charts based on variables data include the average (X-bar) chart, individuals (X) chart, range (R) chart, sample standard deviation (s) chart, and CUSUM chart.
Variable sampling plan — a plan in which a sample is taken and a measurement of a specified quality characteristic is made on each unit. The measurements are summarized into a simple statistic, and the observed value is compared with an allowable value defined in the plan.
Variability — the property of exhibiting variation, i.e., changes or differences, in particular in the product of a process.


Variance — in statistics, the square of the standard deviation. In project control, any actual or potential deviation from an intended or budgeted figure or plan: the difference between intended and actual time, between the projected duration of an activity and its actual duration, or between projected start and finish dates and actual or revised start and finish dates.
Variance analysis — the analysis of the following (see the computational sketch below):
1. Cost Variance = BCWP - ACWP
2. % Over/Under = [(ACWP - BCWP)/BCWP] × 100
3. Unit variance analysis:
   a. Labor rate
   b. Labor hours/units of work accomplished
   c. Material rate
   d. Material usage
4. Schedule/Performance Variance = BCWP - BCWS
Variance reports — documentation of project performance about a planned or measured performance parameter.
Variation — a change in data, a characteristic, or a function that is caused by one of four factors: special causes, common causes, tampering, or structural variation.
Verification — the act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.
Verbal bid — an undocumented quotation by telephone or other verbal means of communication.
Vital few, useful many — a term used by J. M. Juran to describe his use of the Pareto principle, which he first defined in 1950. (The principle was used much earlier in economics and inventory-control methodologies.) The principle suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the possible causes. The 20% of the possible causes are referred to as the "vital few"; the remaining causes are referred to as the "useful many." When Juran first defined this principle, he referred to the remaining causes as the "trivial many," but, realizing that no problems are trivial in quality assurance, he changed it to "useful many."
Voice of the customer — an organization's efforts to understand the customers' needs and expectations ("voice") and to provide products and services that truly meet such needs and expectations.
Walk the talk — means not only talking about what one believes in but also being observed acting out those beliefs. Employee buy-in of the concept is more likely when management is seen as committed to and involved in the process every day.
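In code form, the first, second, and fourth indices reduce to simple arithmetic (a Python sketch with invented earned-value figures):

# Invented earned-value figures, in dollars.
BCWP = 400.0  # budgeted cost of work performed
ACWP = 500.0  # actual cost of work performed
BCWS = 450.0  # budgeted cost of work scheduled

cost_variance = BCWP - ACWP                    # negative: over cost
pct_over_under = (ACWP - BCWP) / BCWP * 100.0  # positive: percent over budget
schedule_variance = BCWP - BCWS                # negative: behind schedule

print(cost_variance, pct_over_under, schedule_variance)  # -100.0 25.0 -50.0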


Waste — activities that consume resources but add no value: visible waste (for example, scrap, rework, downtime) and invisible waste (for example, inefficient setups, wait times of people and machines, inventory). It is customary to view waste as any variation from target.
WBS — see work breakdown structure.
Weibull distribution — a distribution of continuous data that can take on many different shapes and is used to describe a variety of patterns; used to define when the infant mortality rate has ended and a steady state has been reached (decreasing failure rate); relates to the "bathtub" curve.
Wisdom — the culmination of the continuum from data to information to knowledge to wisdom.
Work acceptance — work is considered accepted when it is conducted, documented, and verified as per the acceptance criteria provided in the technical specifications and contract documents.
Work analysis — the analysis, classification, and study of the way work is done. Work may be categorized as value-added (necessary work) or non-value-added (rework, unnecessary work, idle). Collected data may be summarized on a Pareto chart, showing how people within the studied population work. The need for and value of all work are then questioned and opportunities for improvement identified. A time use analysis may also be included in the study.
Work authorization — the process of sanctioning all project work.
Work authorization/release — in cases where work is to be performed in segments due to technical or funding limitations, work authorization/release authorizes specified work to be performed during a specified period.
Work breakdown structure (WBS) — a task-oriented "family tree" of activities that organizes, defines, and graphically displays the total work to be accomplished in order to achieve the final objectives of a project. Each descending level represents an increasingly detailed definition of the project objective. It is a system for subdividing a project into manageable work packages, components, or elements to provide a common framework for scope/cost/schedule communications, allocation of responsibility, monitoring, and management.
Work group — a group composed of people from one functional area who work together on a daily basis and whose goal is to improve the processes of their function.
Work packages/control point — WBS elements of the project isolated for assignment to "work centers" for accomplishment. Production control is established at this element level.
Work plan — the "designer's" schedule plan, budget, and monitoring system utilized during the design stage.
Work unit — a calendar time unit when work may be performed on an activity.
Working calendar — the total calendar dates that cover all project activities, from start to finish.
World-class quality — a term used to indicate a standard of excellence: best of the best.


Workload — a review of planned work demand on resources over given time spans versus acceptable limits and resource availability.

Yield — the ratio of salable goods produced to the quantity of raw materials or components put in at the beginning of a process.

z-distribution — the distribution of the statistic z = (x-bar – µ)/(σ/√n) for a sample of size n drawn from a normal distribution with mean µ and standard deviation σ; used to determine the area under the normal curve.

z-test — a test of the statistical hypothesis that the sample mean x-bar is consistent with a hypothesized population mean µ, used when the population standard deviation is known. (A worked example follows these entries.)

Zmax/3 — the larger of the two values obtained when calculating Cpk; it measures the distance from the process mean to the farther specification limit in units of three standard deviations and thus reflects the side of the distribution with the greatest capability. (See the capability sketch following these entries.)

Zmin/3 — see Cpk.
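To make the z-distribution and z-test entries concrete, here is a minimal sketch with invented sample figures; it computes z = (x-bar – µ)/(σ/√n) and a two-sided p-value from the standard normal curve.

```python
# One-sample z-test: a sketch with invented figures. Tests whether a
# sample mean is consistent with a hypothesized population mean when
# the population standard deviation is known.
from math import sqrt
from scipy.stats import norm

mu0 = 50.0    # hypothesized population mean
sigma = 2.0   # known population standard deviation
n = 36        # sample size
xbar = 50.8   # observed sample mean

z = (xbar - mu0) / (sigma / sqrt(n))
p_two_sided = 2 * norm.sf(abs(z))  # area in both tails of the z-distribution

print(f"z = {z:.2f}, two-sided p = {p_two_sided:.4f}")
# z = 2.40, two-sided p = 0.0164 -> reject H0 at the 5% significance level
```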
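Likewise, Zmax/3 and Zmin/3 reduce to a few lines of arithmetic; the specification limits and process statistics in this sketch are hypothetical.

```python
# Zmax/3, Zmin/3, and Cpk from process data: a sketch with hypothetical
# specification limits and process statistics.
lsl, usl = 9.0, 15.0     # lower and upper specification limits
mean, sigma = 11.0, 0.5  # estimated process mean and standard deviation

z_upper = (usl - mean) / sigma  # sigmas from the mean to the upper spec
z_lower = (mean - lsl) / sigma  # sigmas from the mean to the lower spec

zmax_over_3 = max(z_upper, z_lower) / 3  # side of greatest capability
zmin_over_3 = min(z_upper, z_lower) / 3  # this is Cpk

print(f"Zmax/3 = {zmax_over_3:.2f}, Zmin/3 = Cpk = {zmin_over_3:.2f}")
# Zmax/3 = 2.67, Zmin/3 = Cpk = 1.33
```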
