
Marks' Standard Handbook for Mechanical Engineers
Revised by a staff of specialists

EUGENE A. AVALLONE

Editor

Consulting Engineer; Professor of Mechanical Engineering, Emeritus The City College of the City University of New York

THEODORE BAUMEISTER III

Editor

Retired Consultant, Information Systems Department E. I. du Pont de Nemours & Co.

ALI M. SADEGH

Editor

Consulting Engineer; Professor of Mechanical Engineering The City College of the City University of New York

Eleventh Edition

New York Chicago San Francisco Lisbon London Madrid Mexico City Milan New Delhi San Juan Seoul Singapore Sydney Toronto

Library of Congress cataloged the first issue of this title as follows: Standard handbook for mechanical engineers. 1st ed.; 1916– New York, McGraw-Hill. v. illus. 18–24 cm. Title varies: 1916–58; Mechanical engineers’ handbook. Editors: 1916–51, L. S. Marks.—1958– T. Baumeister. Includes bibliographies. 1. Mechanical engineering—Handbooks, manuals, etc. I. Marks, Lionel Simeon, 1871– ed. II. Baumeister, Theodore, 1897– ed. III. Title: Mechanical engineers’ handbook. TJ151.S82 502’.4’621 16–12915 Library of Congress Catalog Card Number: 87-641192

MARKS’ STANDARD HANDBOOK FOR MECHANICAL ENGINEERS Copyright © 2007, 1996, 1987, 1978 by The McGraw-Hill Companies, Inc. Copyright © 1967, renewed 1995, and 1958, renewed 1986, by Theodore Baumeister III. Copyright © 1951, renewed 1979, by Lionel P. Marks and Alison P. Marks. Copyright © 1941, renewed 1969, and 1930, renewed 1958, by Lionel Peabody Marks. Copyright © 1924, renewed 1952 by Lionel S. Marks. Copyright © 1916 by Lionel S. Marks. All rights reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without the prior written permission of the publisher. 1 2 3 4 5 6 7 8 9 0 DOW/DOW 0 1 0 9 8 ISBN-13: 978-0-07-142867-5 ISBN-10: 0-07-142867-4

The sponsoring editor for this book was Larry S. Hager, the editing supervisor was David E. Fogarty, and the production supervisor was Richard C. Ruzycka. It was set in Times Roman by International Typesetting and Composition. The art director for the cover was Anthony Landi. Printed and bound by RR Donnelley. This book is printed on acid-free paper. The editors and the publisher will be grateful to readers who notify them of any inaccuracy or important omission in this book.

Information contained in this work has been obtained by The McGraw-Hill Companies, Inc. (“McGraw-Hill”) from sources believed to be reliable. However, neither McGraw-Hill nor its authors guarantee the accuracy or completeness of any information published herein, and neither McGraw-Hill nor its authors shall be responsible for any errors, omissions, or damages arising out of use of this information. This work is published with the understanding that McGraw-Hill and its authors are supplying information but are not attempting to render engineering or other professional services. If such services are required, the assistance of an appropriate professional should be sought.

Contributors

Abraham Abramowitz* Consulting Engineer; Professor of Electrical Engineering, Emeritus, The City College of The City University of New York (ILLUMINATION) Vincent M. Altamuro President, VMA Inc., Toms River, NJ (MATERIAL HOLDING, FEEDING, AND METERING. CONVEYOR MOVING AND HANDLING. AUTOMATED GUIDED VEHICLES AND ROBOTS. MATERIAL STORAGE AND WAREHOUSING. METHODS ENGINEERING. AUTOMATIC MANUFACTURING. INDUSTRIAL PLANTS) Charles A. Amann Principal Engineer, KAB Engineering (AUTOMOTIVE ENGINEERING) Farid M. Amirouche Professor of Mechanical and Industrial Engineering, University of Illinois at Chicago (INTRODUCTION TO THE FINITE-ELEMENT METHOD. COMPUTER-AIDED DESIGN, COMPUTER-AIDED ENGINEERING, AND VARIATIONAL DESIGN) Yiannis Andreopoulos Professor of Mechanical Engineering, The City College of the City University of New York (EXPERIMENTAL FLUID MECHANICS) William Antis* Technical Director, Maynard Research Council, Inc., Pittsburgh, PA (METHODS ENGINEERING) Glenn E. Asauskas Lubrication Engineer, Chevron Corp. (LUBRICANTS AND LUBRICATION) Dennis N. Assanis Professor of Mechanical Engineering, University of Michigan (INTERNAL COMBUSTION ENGINES) Eugene A. Avallone Consulting Engineer; Professor of Mechanical Engineering, Emeritus, The City College of The City University of New York (MECHANICAL PROPERTIES OF MATERIALS. GENERAL PROPERTIES OF MATERIALS. PIPE, PIPE FITTINGS, AND VALVES. SOURCES OF ENERGY. STEAM ENGINES. MISCELLANY) Klemens C. Baczewski Consulting Engineer (CARBONIZATION OF COAL AND GAS MAKING) Glenn W. Baggley* Former Manager, Regenerative Systems, Bloom Engineering Co., Inc. (COMBUSTION FURNACES) Frederick G. Baily Consulting Engineer; Steam Turbines, General Electric Co. (STEAM TURBINES) Robert D. Bartholomew Associate, Sheppard T. Powell Associates, LLC (CORROSION) George F. Baumeister President, EMC Process Corp., Newport, DE (MATHEMATICAL TABLES) John T. Baumeister Manager, Product Compliance Test Center, Unisys Corp. (MEASURING UNITS) E. R. Behnke* Product Manager, CM Chain Division, Columbus, McKinnon Corp. (CHAINS) John T. Benedict* Retired Standards Engineer and Consultant, Society of Automotive Engineers (AUTOMOTIVE ENGINEERING) Bernadette M. Bennett, Esq. Associate; Carter, DeLuca, Farrell and Schmidt, LLP Melville, NY (PATENTS, TRADEMARKS, AND COPYRIGHTS) Louis Bialy Director, Codes & Product Safety, Otis Elevator Company (ELEVATORS, DUMBWAITERS, AND ESCALATORS) Malcolm Blair Technical and Research Director, Steel Founders Society of America (IRON AND STEEL CASTINGS) Omer W. Blodgett Senior Design Consultant, Lincoln Electric Co. (WELDING AND CUTTING) B. Douglas Bode Engineering Supervisor, Product Customization and Vehicle Enhancement, Construction and Forestry Div., John Deere (OFF-HIGHWAY VEHICLES AND EARTHMOVING EQUIPMENT) Donald E. Bolt* Engineering Manager, Heat Transfer Products Dept., Foster Wheeler Energy Corp. (POWER PLANT HEAT EXCHANGERS) G. David Bounds Senior Engineer, Duke Energy Corp. (PIPELINE TRANSMISSION) William J. Bow* Director, Retired, Heat Transfer Products Department, Foster Wheeler Energy Corp. (POWER PLANT HEAT EXCHANGERS)

*Contributions by authors whose names are marked with an asterisk were made for the previous edition and have been revised or rewritten by others for this edition. The stated professional position in these cases is that held by the author at the time of his or her contribution.

James L. Bowman* Senior Engineering Consultant, Rotary-Reciprocating Compressor Division, Ingersoll-Rand Co. (COMPRESSORS)

Walter H. Boyes, Jr. Editor-in-Chief/Publisher, Control Magazine (INSTRUMENTS) Richard L. Brazill Technology Specialist, ALCOA Technical Center, ALCOA (ALUMINUM AND ITS ALLOYS)

Frederic W. Buse* Chief Engineer, Standard Pump Division, Ingersoll-Rand Co. (DISPLACEMENT PUMPS)

Charles P. Butterfield Chief Engineer, National Wind Technology Center, National Renewable Energy Laboratory (WIND POWER) Late Fellow Engineer, Research Labs., Westinghouse Electric Corp. (NONFERROUS METALS AND ALLOYS. METALS AND ALLOYS FOR NUCLEAR ENERGY APPLICATIONS) Scott W. Case Professor of Engineering Science & Mechanics, Virginia Polytechnic Institute and State University (MECHANICS OF COMPOSITE MATERIALS) Vittorio (Rino) Castelli Senior Research Fellow, Retired, Xerox Corp.; Engineering Consultant (FRICTION. FLUID FILM BEARINGS) Paul V. Cavallaro Senior Mechanical Research Engineer, Naval Undersea Warfare Center (AIR-INFLATED FABRIC STRUCTURES) Eric L. Christiansen Johnson Space Center, NASA (METEOROID/ORBITAL DEBRIS SHIELDING) Robin O. Cleveland Associate Professor of Aerospace and Mechanical Engineering, Boston University (SOUND, NOISE, AND ULTRASONICS) Gary L. Cloud Professor, Department of Mechanical Engineering, Michigan State University (EXPERIMENTAL STRESS AND STRAIN ANALYSIS) Ashley C. Cockerill Vice President and Event Coordinator, nanoTech Business, Inc. (ENGINEERING STATISTICS AND QUALITY CONTROL) Timothy M. Cockerill Senior Project Manager, University of Illinois (ELECTRONICS) Thomas J. Cockerill Advisory Engineer, International Business Machines Corp. (COMPUTERS) Aaron Cohen Retired Center Director, Lyndon B. Johnson Space Center, NASA; Zachry Professor, Texas A&M University (ASTRONAUTICS) Arthur Cohen Former Manager, Standards and Safety Engineering, Copper Development Assn. (COPPER AND COPPER ALLOYS) D. E. Cole Director, Office for Study of Automotive Transportation, Transportation Research Institute, University of Michigan (INTERNAL COMBUSTION ENGINES) James M. Connolly Section Head, Projects Department, Jacksonville Electric Authority (COST OF ELECTRIC POWER) Alexander Couzis Professor of Chemical Engineering, The City College of the City University of New York (INTRODUCTION TO NANOTECHNOLOGY) Terry L. Creasy Assistant Professor of Mechanical Engineering, Texas A&M University (STRUCTURAL COMPOSITES) M. R. M. Crespo da Silva* University of Cincinnati (ATTITUDE DYNAMICS, STABILIZATION, AND CONTROL OF SPACECRAFT) Richard A. Dahlin Vice President, Engineering, Walker Magnetics (LIFTING MAGNETS) Benjamin C. Davenny Acoustical Consultant, Acentech Inc., Cambridge, MA (SOUND, NOISE, AND ULTRASONICS) William H. Day President, Longview Energy Associates, LLC; formerly Founder and Board Chairman, The Gas Turbine Association (GAS TURBINES) Benjamin B. Dayton Consulting Physicist, East Flat Rock, NC (HIGH-VACUUM PUMPS) Horacio M. de la Fuente Senior Engineer, NASA Johnson Space Center (TRANSHAB) Donald D. Dodge Supervisor, Retired, Product Quality and Inspection Technology, Manufacturing Development, Ford Motor Co. (NONDESTRUCTIVE TESTING) Andrew M. Donaldson Project Director, Parsons E&C, Reading, PA (COST OF ELECTRIC POWER) Joseph S. Dorson Senior Engineer, Columbus McKinnon Corp. (CHAIN) James Drago Manager, Engineering, Garlock Sealing Technologies (PACKING, GASKETS, AND SEALS) Michael B. Duke Chief, Solar Systems Exploration, Johnson Space Center, NASA (DYNAMIC ENVIRONMENTS) F. J. Edeskuty Retired Associate, Las Alamos National Laboratory (CRYOGENICS)

C. L. Carlson*


O. Elnan* University of Cincinnati (SPACE-VEHICLE TRAJECTORIES, FLIGHT MECHANICS, AND PERFORMANCE. ORBITAL MECHANICS)

Robert E. Eppich Vice President, Technology, American Foundry Society (IRON AND STEEL CASTINGS)

C. James Erickson*

Retired Principal Consultant, Engineering Department, E. I. du Pont de Nemours & Co. (ELECTRICAL ENGINEERING) George H. Ewing* Retired President and Chief Executive Officer, Texas Eastern Gas Pipeline Co. and Transwestern Pipeline Co. (PIPELINE TRANSMISSION) Heimir Fanner Chief Design Engineer, Ariel Corp. (COMPRESSORS) Erich A. Farber Distinguished Service Professor Emeritus, Director Emeritus of Solar Energy and Energy Conversion Lab., University of Florida [STIRLING (HOT AIR) ENGINES. SOLAR ENERGY. DIRECT ENERGY CONVERSION] Raymond E. Farrell, Esq. Partner; Carter, DeLuca, Farrell and Schmidt, LLP, Melville, NY (PATENTS, TRADEMARKS, AND COPYRIGHTS) D. W. Fellenz* University of Cincinnati (SPACE-VEHICLE TRAJECTORIES, FLIGHT MECHANICS, AND PERFORMANCE. ATMOSPHERIC ENTRY) Chuck Fennell Program Manager, Dalton Foundries (FOUNDARY PRACTICE AND EQUIPMENT) Arthur J. Fiehn* Late Retired Vice President, Project Operations Division, Burns & Roe, Inc. (COST OF ELECTRIC POWER) Sanford Fleeter McAllister Distinguished Professor, School of Mechanical Engineering, Purdue University (JET PROPULSION AND AIRCRAFT PROPELLERS) Luc G. Fréchette Professor of Mechanical Engineering, Université de Sherbrooke, Canada [AN INTRODUCTION TO MICROELECTROMECHANICAL SYSTEMS (MEMS)] William L. Gamble Professor Emeritus of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign (CEMENT, MORTAR, AND CONCRETE. REINFORCED CONCRETE DESIGN AND CONSTRUCTION) Robert F. Gambon Power Plant Design and Development Consultant (COST OF ELECTRIC POWER) Burt Garofab Senior Engineer, Pittston Corp. (MINES, HOISTS, AND SKIPS. LOCOMOTIVE HAULAGE, COAL MINES) Siamak Ghofranian Senior Engineer, Rockwell Aerospace (DOCKING OF TWO FREEFLYING SPACECRAFT) Samuel V. Glorioso Section Chief, Metallic Materials, Johnson Space Center, NASA (STRESS CORROSION CRACKING) Norman Goldberg Consulting Engineer, Economides and Goldberg (AIR-CONDITIONING, HEATING, AND VENTILATING) Andrew Goldenberg Professor of Mechanical and Industrial Engineering, University of Toronto, Canada; President and CEO of Engineering Service Inc. (ESI) Toronto (ROBOTICS, MECHATRONICS, AND INTELLIGENT AUTOMATION) David T. Goldman* Late Deputy Manager, U.S. Department of Energy, Chicago Operations Office (MEASURING UNITS) Frank E. Goodwin Executive Vice President, ILZRO, Inc. (BEARING METALS. LOWMELTING-POINT METALS AND ALLOYS. ZINC AND ZINC ALLOYS) Don Graham Manager, Turning Products, Carboloy, Inc. (CEMENTED CARBIDES) David W. Green Supervisory Research General Engineer, Forest Products Lab., USDA (WOOD) Leonard M. Grillo Principal, Grillo Engineering Co. (MUNICIPAL WASTE COMBUSTION) Walter W. Guy Chief, Crew and Thermal Systems Division, Johnson Space Center, NASA (SPACECRAFT LIFE SUPPORT AND THERMAL MANAGEMENT) Marsbed Hablanian Retired Manager of Engineering and R&D, Varian Vacuum Technologies (HIGH-VACUUM PUMPS) Christopher P. Hansen Structures and Mechanism Engineer, NASA Johnson Space Center (PORTABLE HYPERBARIC CHAMBER) Harold V. Hawkins* Late Manager, Product Standards and Services, Columbus McKinnon Corp. (DRAGGING, PULLING, AND PUSHING. PIPELINE FLEXURE STRESSES) Keith L. Hawthorne Vice President—Technology, Transportation Technology Center, Inc. (RAILWAY ENGINEERING) V. Terrey Hawthorne Late Senior Engineer, LTK Engineering Services (RAILWAY ENGINEERING) J. Edmund Hay U.S. Department of the Interior (EXPLOSIVES) Terry L. Henshaw Consulting Engineer, Magnolia, TX (DISPLACEMENT PUMPS. 
CENTRIFUGAL PUMPS) Roland Hernandez Research Engineer, Forest Products Lab., USDA (WOOD) David T. Holmes Manager of Engineering, Lift-Tech International Div. of Columbus McKinnon Corp. (MONORAILS. OVERHEAD TRAVELING CRANES) Hoyt C. Hottel Late Professor Emeritus, Massachusetts Institute of Technology (RADIANT HEAT TRANSFER) Michael W. Hyer Professor of Engineering Science & Mechanics, Virginia Polytechnic Institute and State University (MECHANICS OF COMPOSITE MATERIALS) Timothy J. Jacobs Research Fellow, Department of Mechanical Engineering, University of Michigan (INTERNAL COMBUSTION ENGINES) Michael W. M. Jenkins Professor, Aerospace Design, Georgia Institute of Technology (AERONAUTICS) Peter K. Johnson Consultant (POWDERED METALS) Randolph T. Johnson Naval Surface Warfare Center (ROCKET FUELS) Robert L. Johnston Branch Chief, Materials, Johnson Space Center, NASA (METALLIC MATERIALS FOR AEROSPACE APPLICATIONS. MATERIALS FOR USE IN HIGH-PRESSURE OXYGEN SYSTEMS)

Byron M. Jones*

Retired Assistant Professor of Electrical Engineering, School of Engineering, University of Tennessee at Chattanooga (ELECTRONICS) Scott K. Jones Professor, Department of Accounting & MIS, Alfred Lerner College of Business & Economics, University of Delaware (COST ACCOUNTING) Robert Jorgensen Engineering Consultant (FANS) Serope Kalpakjian Professor Emeritus of Mechanical and Materials Engineering, Illinois Institute of Technology (MACHINING PROCESSES AND MACHINE TOOLS) Igor J. Karassik* Late Senior Consulting Engineer, Ingersoll Dresser Pump Co. (CENTRIFUGAL AND AXIAL FLOW PUMPS) Jonathan D. Kemp Vibration Consultant, Acentech, Inc., Cambridge, MA (SOUND, NOISE, AND ULTRASONICS) J. Randolph Kissell President, The TGB Partnership (ALUMINUM AND ITS ALLOYS) John B. Kitto, Jr. Babcock & Wilcox Co. (STEAM BOILERS) Andrew C. Klein Professor of Nuclear Engineering, Oregon State University; Director of Training, Education and Research Partnership, Idaho National Laboratories (NUCLEAR POWER. ENVIRONMENTAL CONTROL. OCCUPATIONAL SAFETY AND HEALTH. FIRE PROTECTION) Doyle Knight Professor of Mechanical and Aerospace Engineering, Rutgers University (INTRODUCTION TO COMPUTATIONAL FLUID MECHANICS) Ronald M. Kohner President, Landmark Engineering Services, Ltd. (DERRICKS) Ezra S. Krendel Emeritus Professor of Operations Research and Statistics, University of Pennsylvania (HUMAN FACTORS AND ERGONOMICS. MUSCLE GENERATED POWER) A. G. Kromis* University of Cincinnati (SPACE-VEHICLE TRAJECTORIES, FLIGHT MECHANICS, AND PERFORMANCE) Srirangam Kumaresan Biomechanics Institute, Santa Barbara, California (HUMAN INJURY TOLERANCE AND ANTHROPOMETRIC TEST DEVICES) L. D. Kunsman* Late Fellow Engineer, Research Labs, Westinghouse, Electric Corp. (NONFERROUS METALS AND ALLOYS. METALS AND ALLOYS FOR NUCLEAR ENERGY APPLICATIONS) Colin K. Larsen Vice President, Blue Giant U.S.A. Corp. (SURFACE HANDLING) Stan Lebow Research Forest Products Technologist, Forest Products Lab., USDA (WOOD) John H. Lewis Engineering Consultant; Formerly Engineering Staff, Pratt & Whitney Division, United Technologies Corp.; Adjunct Associate Professor, Hartford Graduate Center, Renssealear Polytechnic Institute (GAS TURBINES) Jackie Jie Li Professor of Mechanical Engineering, The City College of the City University of New York (FERROELECTRICS/PIEZOELECTRICS AND SHAPE MEMORY ALLOYS) Peter E. Liley Professor Emeritus of Mechanical Engineering, Purdue University (THERMODYNAMICS, THERMODYNAMIC PROPERTIES OF SUBSTANCES) James P. Locke Flight Surgeon, NASA Johnson Space Center (PORTABLE HYPERBARIC CHAMBER) Ernst K. H. Marburg Manager, Product Standards and Service, Columbus McKinnon Corp. (LIFTING, HOISTING, AND ELEVATING. DRAGGING, PULLING, AND PUSHING. LOADING, CARRYING, AND EXCAVATING) Larry D. Means President, Means Engineering and Consulting (WIRE ROPE) Leonard Meirovitch University Distinguished Professor Emeritus, Department of Engineering Science and Mechanics, Virginia Polytechnic Institute and State University (VIBRATION) George W. Michalec Late Consulting Engineer (GEARING) Duane K. Miller Manager, Engineering Services, Lincoln Electric Co. (WELDING AND CUTTING) Patrick C. Mock Principal Electron Optical Scientist, Science and Engineering Associates, Inc. (OPTICS) Thomas L. Moser Deputy Associate Administrator, Office of Space Flight, NASA Headquarters, NASA (SPACE-VEHICLE STRUCTURES) George J. 
Moshos* Professor Emeritus of Computer and Information Science, New Jersey Institute of Technology (COMPUTERS) Eduard Muljadi Senior Engineer, National Wind Technology Center, National Renewable Energy Laboratory (WIND POWER) Otto Muller-Girard* Consulting Engineer (INSTRUMENTS) James W. Murdock Late Consulting Engineer (MECHANICS OF FLUIDS) Gregory V. Murphy Head and Professor, Department of Electrical Engineering, College of Engineering, Tuskegee University (AUTOMATIC CONTROLS) Joseph F. Murphy Research General Engineer, Forest Products Lab., USDA (WOOD) John Nagy Retired Supervisory Physical Scientist, U.S. Department of Labor, Mine Safety and Health Administration (DUST EXPLOSIONS) B. W. Niebel* Professor Emeritus of Industrial Engineering, The Pennsylvania State University (INDUSTRIAL ECONOMICS AND MANAGEMENT) James J. Noble Formerly Research Associate Professor of Chemical and Biological Engineering, Tufts University (RADIANT HEAT TRANSFER) Charles Osborn Business Manager, Precision Cleaning Division, PTI Industries, Inc. (PRECISION CLEANING) D. J. Patterson Professor of Mechanical Engineering, Emeritus, University of Michigan (INTERNAL COMBUSTION ENGINES) Harold W. Paxton United States Steel Professor Emeritus, Carnegie Mellon University (IRON AND STEEL) Richard W. Perkins Professor Emeritus of Mechanical, Aerospace, and Manufacturing Engineering, Syracuse University (WOODCUTTING TOOLS AND MACHINES) W. R. Perry* University of Cincinnati (ORBITAL MECHANICS. SPACE-VEHICLE TRAJECTORIES, FLIGHT MECHANICS, AND PERFORMANCE)

Kenneth A. Phair Senior Project Engineer, Shaw Stone & Webster (GEOTHERMAL POWER)

Orvis E. Pigg Section Head, Structural Analysis, Johnson Space Center, NASA (SPACE VEHICLE STRUCTURES)

Henry O. Pohl Chief, Propulsion and Power Division, Johnson Space Center, NASA (SPACE PROPULSION)

Nicholas R. Rafferty Retired Technical Associate, E. I. du Pont de Nemours & Co., Inc. (ELECTRICAL ENGINEERING) Rama Ramakumar PSO/Albrecht Naeter Professor and Director, Engineering Energy Laboratory, Oklahoma State University (WIND POWER) Pascal M. Rapier* Scientist III, Retired, Lawrence Berkeley Laboratory (ENVIRONMENTAL CONTROL. OCCUPATIONAL SAFETY AND HEALTH. FIRE PROTECTION) James D. Redmond President, Technical Marketing Services, Inc. (STAINLESS STEELS) Darrold E. Roen Late Manager, Sales & Special Engineering & Government Products, John Deere (OFF-HIGHWAY VEHICLES) Ivan L. Ross* International Manager, Chain Conveyor Division, ACCO (OVERHEAD CONVEYORS) Robert J. Ross Supervisory Research General Engineer, Forest Products Lab., USDA (WOOD) J. W. Russell* University of Cincinnati (SPACE-VEHICLE TRAJECTORIES, FLIGHT MECHANICS, AND PERFORMANCE. LUNAR- AND INTERPLANETARY FLIGHT MECHANICS) A. J. Rydzewski Consultant, DuPont Engineering, E. I. du Pont de Nemours and Company (MECHANICAL REFRIGERATION) Ali M. Sadegh Professor of Mechanical Engineering, The City College of The City University of New York (MECHANICS OF MATERIALS. NONMETALLIC MATERIALS. MECHANISM. MACHINE ELEMENTS. SURFACE TEXTURE DESIGNATION, PRODUCTION, AND QUALITY CONTROL. INTRODUCTION TO BIOMECHANICS. AIR-INFLATED FABRIC STRUCTURES. RAPID PROTOTYPING.) Anthony Sances, Jr. Biomechanics Institute, Santa Barbara, CA (HUMAN INJURY TOLERANCE AND ANTHROPOMETRIC TEST DEVICES) C. Edward Sandifer Professor, Western Connecticut State University, Danbury, CT (MATHEMATICS) Erwin M. Saniga Dana Johnson Professor of Information Technology and Professor of Operations Management, University of Delaware (OPERATIONS MANAGEMENT) Adel F. Sarofim Presidential Professor of Chemical Engineering, University of Utah (RADIANT HEAT TRANSFER) Martin D. Schlesinger Late Consultant, Wallingford Group, Ltd. (FUELS) John R. Schley Manager, Technical Marketing, RMI Titanium Co. (TITANIUM AND ZIRCONIUM) Matthew S. Schmidt Senior Engineer, Rockwell Aerospace (DOCKING OF TWO FREEFLYING SPACECRAFT)


William C. Schneider Retired Assistant Director Engineering/Senior Engineer, NASA Johnson Space Center; Visiting Professor, Texas A&M University (ASTRONAUTICS)

James D. Shearouse, III Late Senior Development Engineer, The Dow Chemical Co. (MAGNESIUM AND MAGNESIUM ALLOYS)

David A. Shifler Metallurgist, MERA Metallurgical Services (CORROSION) Rajiv Shivpuri Professor of Industrial, Welding, and Systems Engineering, Ohio State University (PLASTIC WORKING OF METALS)

James C. Simmons Senior Vice President, Business Development, Core Furnace Systems Corp. (ELECTRIC FURNACES AND OVENS)

William T. Simpson Research Forest Products Technologist, Forest Products Lab., USDA (WOOD)

Kenneth A. Smith Edward R. Gilliland Professor of Chemical Engineering, Massachusetts Institute of Technology (TRANSMISSION OF HEAT BY CONDUCTION AND CONVECTION)

Lawrence H. Sobel* University of Cincinnati (VIBRATION OF STRUCTURES) James G. Speight Western Research Institute (FUELS) Robert D. Steele Project Manager, Voith Siemens Hydro Power Generation, Inc. (HYDRAULIC TURBINES)

Stephen R. Swanson Professor of Mechanical Engineering, University of Utah (FIBER COMPOSITE MATERIALS)

John Symonds* Fellow Engineer, Retired, Oceanic Division, Westinghouse Electric Corp. (MECHANICAL PROPERTIES OF MATERIALS)

Peter L. Tea, Jr. Professor of Physics Emeritus, The City College of the City University of New York (MECHANICS OF SOLIDS)

Anton TenWolde Supervisory Research Physicist, Forest Products Lab., USDA (WOOD) W. David Teter Retired Professor of Civil Engineering, University of Delaware (SURVEYING) Michael C. Tracy Rear Admiral, U.S. Navy (MARINE ENGINEERING) John H. Tundermann Former Vice President, Research and Technology, INCO Intl., Inc. (METALS AND ALLOYS FOR USE AT ELEVATED TEMPERATURES. NICKEL AND NICKEL ALLOYS)

Charles O. Velzy Consultant (MUNICIPAL WASTE COMBUSTION) Harry C. Verakis Supervisory Physical Scientist, U.S. Department of Labor, Mine Safety and Health Administration (DUST EXPLOSIONS)

Arnold S. Vernick Former Associate, Geraghty & Miller, Inc. (WATER) Robert J. Vondrasek* Assistant Vice President of Engineering, National Fire Protection Assoc. (COST OF ELECTRIC POWER)

Michael W. Washo Senior Engineer, Retired, Eastman Kodak Co. (BEARINGS WITH ROLLING CONTACT)

Larry F. Wieserman Senior Technical Supervisor, ALCOA (ALUMINUM AND ITS ALLOYS) Robert H. White Supervisory Wood Scientist, Forest Products Lab., USDA (WOOD) John W. Wood, Jr. Manager, Technical Services, Garlock Klozure (PACKING, GASKETS, AND SEALS)

Symbols and Abbreviations

For symbols of chemical elements, see Sec. 6; for abbreviations applying to metric weights and measures and SI units, Sec. 1; SI unit prefixes are listed on p. 1–19.
Pairs of parentheses, brackets, etc., are frequently used in this work to indicate corresponding values. For example, the statement that “the cost per kW of a 30,000-kW plant is $86; of a 15,000-kW plant, $98; and of an 8,000-kW plant, $112,” is condensed as follows: The cost per kW of a 30,000 (15,000) [8,000]-kW plant is $86 (98) [112].
In the citation of references readers should always attempt to consult the latest edition of referenced publications.

A or Å   Angstrom unit = 10^-10 m; 3.937 × 10^-11 in
A   mass number (= N + Z); ampere
AA   arithmetical average
AAA   Am. Automobile Assoc.
AAMA   American Automobile Manufacturers’ Assoc.
AAR   Assoc. of Am. Railroads
AAS   Am. Astronautical Soc.
ABAI   Am. Boiler & Affiliated Industries
abs   absolute
a.c.   aerodynamic center
a-c, ac   alternating current
ACI   Am. Concrete Inst.
ACM   Assoc. for Computing Machinery
ACRMA   Air Conditioning and Refrigerating Manufacturers Assoc.
ACS   Am. Chemical Soc.
ACSR   aluminum cable steel-reinforced
ACV   air cushion vehicle
A.D.   anno Domini (in the year of our Lord)
AEC   Atomic Energy Commission (U.S.)
a-f, af   audio frequency
AFBMA   Anti-friction Bearings Manufacturers’ Assoc.
AFS   Am. Foundrymen’s Soc.
AGA   Am. Gas Assoc.
AGMA   Am. Gear Manufacturers’ Assoc.
ahp   air horsepower
AIChE   Am. Inst. of Chemical Engineers
AIEE   Am. Inst. of Electrical Engineers (see IEEE)
AIME   Am. Inst. of Mining Engineers
AIP   Am. Inst. of Physics
AISC   American Institute of Steel Construction, Inc.
AISE   Am. Iron & Steel Engineers
AISI   Am. Iron and Steel Inst.
Al. Assn.   Aluminum Association
a.m.   ante meridiem (before noon)
a-m, am   amplitude modulation
Am. Mach.   Am. Machinist (New York)
AMA   Acoustical Materials Assoc.
AMCA   Air Moving & Conditioning Assoc., Inc.
amu   atomic mass unit
AN   ammonium nitrate (explosive); Army-Navy Specification
AN-FO   ammonium nitrate-fuel oil (explosive)
ANC   Army-Navy Civil Aeronautics Committee

ANS   Am. Nuclear Soc.
ANSI   American National Standards Institute
antilog   antilogarithm of
API   Am. Petroleum Inst.
approx   approximately
APWA   Am. Public Works Assoc.
AREA   Am. Railroad Eng. Assoc.
ARI   Air Conditioning and Refrigeration Inst.
ARS   Am. Rocket Soc.
ASCE   Am. Soc. of Civil Engineers
ASHRAE   Am. Soc. of Heating, Refrigerating, and Air Conditioning Engineers
ASLE   Am. Soc. of Lubricating Engineers
ASM   Am. Soc. of Metals
ASME   Am. Soc. of Mechanical Engineers
ASST   Am. Soc. of Steel Treating
ASTM   Am. Soc. for Testing and Materials
ASTME   Am. Soc. of Tool & Manufacturing Engineers
atm   atmosphere
Auto. Ind.   Automotive Industries (New York)
avdp   avoirdupois
avg, ave   average
AWG   Am. Wire Gage
AWPA   Am. Wood Preservation Assoc.
AWS   American Welding Soc.
AWWA   American Water Works Assoc.
b   barns
bar   barometer
B&S   Brown & Sharp (gage); Beams and Stringers
bbl   barrels
B.C.   before Christ
B.C.C.   body centered cubic
Bé   Baumé (degrees)
B.G.   Birmingham gage (hoop and sheet)
bgd   billions of gallons per day
BHN   Brinnell Hardness Number
bhp   brake horsepower
BLC   boundary layer control
B.M.   board measure; bench mark
bmep   brake mean effective pressure
B of M, BuMines   Bureau of Mines


BOD bp Bq bsfc BSI Btu Btub, Btu/h bu Bull. Buweaps BWG c C C CAB CAGI cal C-B-R CBS cc, cm3 CCR c to c cd c.f. cf. cfh, ft3/h cfm, ft3/min C.F.R. cfs, ft3/s cg cgs Chm. Eng. chu C.I. cir cir mil cm CME C.N. coef COESA col colog const cos cos1 cosh cosh1 cot cot1 coth coth1 covers c.p. cp cp CP CPH cpm cycles/min cps, cycles/s CSA csc csc1 csch csch1 cu cyl

biochemical oxygen demand boiling point bequerel brake specific fuel consumption British Standards Inst. British thermal units Btu per hr bushels Bulletin Bureau of Weapons, U.S. Navy Birmingham wire gage velocity of light degrees Celsius (centigrade) coulomb Civil Aeronautics Board Compressed Air & Gas Inst. calories chemical, biological & radiological (filters) Columbia Broadcasting System cubic centimetres critical compression ratio center to center candela centrifugal force confer (compare) cubic feet per hour cubic feet per minute Cooperative Fuel Research cubic feet per second center of gravity centimetre-gram-second Chemical Eng’g (New York) centrigrade heat unit cast iron circular circular mils centimetres Chartered Mech. Engr. (IMechE) cetane number coefficient U.S. Committee on Extension to the Standard Atmosphere column cologarithm of constant cosine of angle whose cosine is, inverse cosine of hyperbolic cosine of inverse hyperbolic cosine of cotangent of angle whose cotangent is (see cos1) hyperbolic cotangent of inverse hyperbolic cotangent of coversed sine of circular pitch; center of pressure candle power coef of performance chemically pure close packed hexagonal cycles per minute cycles per second Canadian Standards Assoc. cosecant of angle whose cosecant is (see cos1) hyperbolic cosecant of inverse hyperbolic cosecant of cubic cylinder

db, dB d-c, dc def deg diam. (dia) DO D2O d.p. DP DPH DST d 2 tons DX e EAP EDR EEI eff e.g. ehp EHV El. Wld. elec elong emf Engg. Engr. ENT EP ERDA Eq. est etc. et seq. eV evap exp exsec ext F F FAA F.C. FCC F.C.C. ff. fhp Fig. F.I.T. f-m, fm F.O.B. FP FPC fpm, ft/min fps ft/s F.S. FSB fsp ft fc fL ft  lb g g gal

decibel direct current definition degrees diameter dissolved oxygen deuterium (heavy water) double pole Diametral pitch diamond pyramid hardness daylight saving time breaking strength, d  chain wire diam. in. direct expansion base of Napierian logarithmic system ( 2.7182) equivalent air pressure equivalent direct radiation Edison Electric Inst. efficiency exempli gratia (for example) effective horsepower extra high voltage Electrical World (New York) electric elongation electromotive force Engineering (London) The Engineer (London) emergency negative thrust extreme pressure (lubricant) Energy Research & Development Administration (successor to AEC; see also NRC) equation estimated et cetera (and so forth) et sequens (and the following) electron volts evaporation exponential function of exterior secant of external degrees Fahrenheit farad Federal Aviation Agency fixed carbon, % Federal Communications Commission; Federal Constructive Council face-centered-cubic (alloys) following (pages) friction horsepower figure Federal income tax frequency modulation free on board (cars) fore perpendicular Federal Power Commission feet per minute foot-pound-second system feet per second Federal Specifications Federal Specifications Board fiber saturation point feet foot candles foot lamberts foot-pounds acceleration due to gravity grams gallons

gc GCA g·cal gd G.E. GEM GFI G.M. GMT GNP gpcd gpd gpm, gal/min gps, gal/s gpt H h h HEPA h-f, hf hhv horiz hp h-p HPAC hp·hr hr, h HSS H.T. HTHW Hz IACS IAeS ibid. ICAO ICC ICE ICI I.C.T. I.D., ID i.e. IEC IEEE IES i-f, if IGT ihp IMechE imep Imp in., in in·lb, in·lb INA Ind. & Eng. Chem. int i-p, ip ipm, in/min ipr IPS IRE IRS ISO isoth ISTM IUPAC

gigacycles per second ground-controlled approach gram-calories Gudermannian of General Electric Co. ground effect machine gullet feed index General Motors Co. Greenwich Mean Time gross national product gallons per capita day gallons per day, grams per denier gallons per minute gallons per second grams per tex henry Planck’s constant  6.624  1027 org-sec Planck’s constant, h  h/2 high efficiency particulate matter high frequency high heat value horizontal horsepower high-pressure Heating, Piping, & Air Conditioning (Chicago) horsepower-hour hours high speed steel heat-treated high temperature hot water hertz  1 cycle/s (cps) International Annealed Copper Standard Institute of Aerospace Sciences ibidem (in the same place) International Civil Aviation Organization Interstate Commerce Commission Inst. of Civil Engineers International Commission on Illumination International Critical Tables inside diameter id est (that is) International Electrotechnical Commission Inst. of Electrical & Electronics Engineers (successor to AIEE, q.v.) Illuminating Engineering Soc. intermediate frequency Inst. of Gas Technology indicated horsepower Inst. of Mechanical Engineers indicated mean effective pressure Imperial inches inch-pounds Inst. of Naval Architects Industrial & Eng’g Chemistry (Easton, PA) internal intermediate pressure inches per minute inches per revolution iron pipe size Inst. of Radio Engineers (see IEEE) Internal Revenue Service International Organization for Standardization isothermal International Soc. for Testing Materials International Union of Pure & Applied Chemistry

J J&P Jour. JP k K K kB kc kcps kg kg  cal kg  m kip kips km kmc kmcps kpsi ksi kts kVA kW kWh L l, L £ lb L.B.P. lhv lim lin ln loc. cit. log LOX l-p, lp LPG lpw, lm/W lx L.W.L. lm m M mA Machy. max MBh mc m.c. Mcf mcps Mech. Eng. mep METO me V MF mhc mi MIL-STD min mip MKS MKSA mL ml, mL mlhc mm

joule joists and planks Journal jet propulsion fuel isentropic exponent; conductivity degrees Kelvin (Celsius abs) Knudsen number kilo Btu (1000 Btu) kilocycles kilocycles per second kilograms kilogram-calories kilogram-metres 1000 lb or 1 kilo-pound thousands of pounds kilometres kilomegacycles per second kilomegacycles per second thousands of pounds per sq in one kip per sq in, 1000 psi (lb/in2) knots kilovolt-amperes kilowatts kilowatt-hours lamberts litres Laplace operational symbol pounds length between perpendiculars low heat value limit linear Napierian logarithm of loco citato (place already cited) common logarithm of liquid oxygen explosive low pressure liquified petroleum gas lumens per watt lux load water line lumen metres thousand; Mach number; moisture, % milliamperes Machinery (New York) maximum thousands of Btu per hr megacycles per second moisture content thousand cubic feet megacycles per second Mechanical Eng’g (ASME) mean effective pressure maximum, except during take-off million electron volts maintenance factor mean horizontal candles mile U.S. Military Standard minutes; minimum mean indicated pressure metre-kilogram-second system metre-kilogram-second-ampere system millilamberts millilitre  1.000027 cm3 mean lower hemispherical candles millimetres


mm-free mmf mol mp MPC mph, mi/h MRT ms msc MSS mu MW MW day MWT n N N Ns NA NAA NACA NACM NASA nat. NBC NBFU NBS NCN NDHA NEC®

NEMA NFPA NIST NLGI nm No. (Nos.) NPSH NRC NTP O.D., OD O.H. O.N. op. cit. OSHA OSW OTS oz p. (pp.) Pa P.C. PE PEG P.E.L. PETN pf PFI PIV p.m. PM P.N. ppb PPI ppm press

mineral matter free magnetomotive force mole melting point maximum permissible concentration miles per hour mean radiant temperature manuscript; milliseconds mean spherical candles Manufacturers Standardization Soc. of the Valve & Fittings Industry micron, micro megawatts megawatt day mean water temperature polytropic exponent number (in mathematical tables) number of neutrons; newton specific speed not available National Assoc. of Accountants National Advisory Committee on Aeronautics (see NASA) National Assoc. of Chain Manufacturers National Aeronautics and Space Administration natural National Broadcasting Company National Board of Fire Underwriters National Bureau of Standards (see NIST) nitrocarbonitrate (explosive) National District Hearing Assoc. National Electric Code® (National Electrical Code® and NEC® are registered trademarks of the National Fire Protection Association, Inc., Quincy, MA.) National Electrical Manufacturers Assoc. National Fire Protection Assoc. National Institute of Standards and Technology National Lubricating Grease Institute nautical miles number(s) net positive suction head Nuclear Regulator Commission (successor to AEC; see also ERDA) normal temperature and pressure outside diameter (pipes) open-hearth (steel) octane number opere citato (work already cited) Occupational Safety & Health Administration Office of Saline Water Office of Technical Services, U.S. Dept. of Commerce ounces page (pages) pascal propulsive coefficient polyethylene polyethylene glycol proportional elastic limit an explosive power factor Pipe Fabrication Inst. peak inverse voltage post meridiem (after noon) preventive maintenance performance number parts per billion plan position indicator parts per million pressure

Proc. PSD psi, lb/in2 psia psig pt PVC Q qt q.v. r R R rad RBE R-C RCA R&D RDX rem rev r-f, rf RMA rms rpm, r/min rps, r/s RSHF ry. s s S SAE sat SBI scfm SCR sec sec–1 Sec. sech sech–1 segm SE No. SEI sfc sfm, sfpm shp SI sin sin–1 sinh sinh–1 SME SNAME SP sp specif sp gr sp ht spp SPS sq sr SSF SSU std

Proceedings power spectral density, g2/cps lb per sq in lb per sq in. abs lb per sq in. gage point; pint polyvinyl chloride 1018 Btu quarts quod vide (which see) roentgens gas constant deg Rankine (Fahrenheit abs); Reynolds number radius; radiation absorbed dose; radian see rem resistor-capacitor Radio Corporation of America research and development cyclonite, a military explosive Roentgen equivalent man (formerly RBE) revolutions radio frequency Rubber Manufacturers Assoc. square root of mean square revolutions per minute revolutions per second room sensible heat factor railway entropy seconds sulfur, %; siemens Soc. of Automotive Engineers saturated steel Boiler Inst. standard cu ft per min silicon controlled rectifier secant of angle whose secant is (see cos–1) Section hyperbolic secant of inverse hyperbolic secant of segment steam emulsion number Structural Engineering Institute specific fuel consumption, lb per hphr surface feet per minute shaft horsepower International System of Units (Le Systéme International d’Unites) sine of angle whose sine is (see cos–1) hyperbolic sine of inverse hyperbolic sine of Society of Manufacturing Engineers (successor to ASTME) Soc. of Naval Architects and Marine Engineers static pressure specific specification specific gravity specific heat species unspecified (botanical) standard pipe size square steradian sec Saybolt Furol seconds Saybolt Universal (same as SUS) standard

SUS SWG T TAC tan tan1 tanh tanh1 TDH TEL temp THI thp TNT torr TP tph tpi TR Trans. T.S. tsi ttd UHF UKAEA UL ult UMS USAF USCG USCS USDA USFPL USGS USHEW USN USP

Saybolt Universal seconds (same as SSU) Standard (British) wire gage tesla Technical Advisory Committee on Weather Design Conditions (ASHRAE) tangent of angle whose tangent is (see cos1) hyperbolic tangent of inverse hyperbolic tangent of total dynamic head tetraethyl lead temperature temperature-humidity (discomfort) index thrust horsepower trinitrotoluol (explosive)  1 mm Hg  1.332 millibars (1/760) atm  (1.013250/760) dynes per cm2 total pressure tons per hour turns per in transmitter-receiver Transactions tensile strength; tensile stress tons per sq in terminal temperature difference ultra high frequency United Kingdom Atomic Energy Authority Underwriters’ Laboratory ultimate universal maintenance standards U.S. Air Force U.S. Coast Guard U.S. Commercial Standard; U.S. Customary System U.S. Dept. of Agriculture U.S. Forest Products Laboratory U.S. Geologic Survey U.S. Dept. of Health, Education & Welfare U.S. Navy U.S. pharmacopoeia

USPHS USS USSG UTC V VCF VCI VDI vel vers vert VHF VI viz. V.M. vol VP vs. W Wb W&M w.g. WHO W.I. W.P.A. wt yd Y.P. yr Y.S. z Zeit.  mc s, s m mm

U.S. Public Health Service United States Standard U.S. Standard Gage Coordinated Universal Time volt visual comfort factor visual comfort index Verein Deutscher Ingenieure velocity versed sine of vertical very high frequency viscosity index videlicet (namely) volatile matter, % volume velocity pressure versus watt weber Washburn & Moen wire gage water gage World Health Organization wrought iron Western Pine Assoc. weight yards yield point year(s) yield strength; yield stress atomic number, figure of merit Zeitschrift mass defect microcurie Boltzmann constant micro ( 106 ), as in ms micrometre (micron)  106 m (103 mm) ohm

MATHEMATICAL SIGNS AND SYMBOLS

+   plus (sign of addition); positive
−   minus (sign of subtraction); negative
± (∓)   plus or minus (minus or plus)
×   times, by (multiplication sign)
·   multiplied by
÷   sign of division
/   divided by
:   ratio sign, divided by, is to
::   equals, as (proportion)
<   less than
>   greater than
≪   much less than
≫   much greater than
=   equals
≡   identical with
~   similar to
≈   approximately equals
≅   approximately equals, congruent
≤   equal to or less than
≥   equal to or greater than
≠   not equal to
→   approaches
∝   varies as
∞   infinity
√   square root of
∛   cube root of
∴   therefore
∥   parallel to
( ) [ ] { }   parentheses, brackets and braces; quantities enclosed by them to be taken together in multiplying, dividing, etc.
AB   length of line from A to B
π   pi (= 3.14159)
°   degrees
′   minutes
″   seconds
∠   angle
dx   differential of x
Δ   (delta) difference
Δx   increment of x
∂u/∂x   partial derivative of u with respect to x
∫   integral of
∫_a^b   integral of, between limits a and b
∮   line integral around a closed path
Σ   (sigma) summation of
f(x), F(x)   functions of x
exp x (= e^x)   [e = 2.71828 (base of natural, or Napierian, logarithms)]
∇   del or nabla, vector differential operator
∇²   Laplacian operator
£   Laplace operational symbol
4!   factorial 4 = 4 × 3 × 2 × 1
|x|   absolute value of x
ẋ   first derivative of x with respect to time
ẍ   second derivative of x with respect to time
A × B   vector product; magnitude of A times magnitude of B times sine of the angle from A to B; AB sin ∠AB
A · B   scalar product; magnitude of A times magnitude of B times cosine of the angle from A to B; AB cos ∠AB

The Editors

EUGENE A. AVALLONE, Editor, is Professor of Mechanical Engineering, Emeritus, The City College of the City University of New York. He has been engaged for many years as a consultant to industry and to a number of local and national governmental agencies.
THEODORE BAUMEISTER, III, Editor, is now retired from Du Pont where he was an internal consultant. His specialties are operations research, business decision making, and long-range planning. He has also taught financial modeling in the United States, South America, and the Far East.
ALI M. SADEGH, Editor, is Professor of Mechanical Engineering, The City College of the City University of New York. He is also Director of the Center for Advanced Engineering Design and Development. He is actively engaged in research in the areas of machine design, manufacturing, and biomechanics. He is a Fellow of ASME and SME.


Preface to the Eleventh Edition

The evolutionary trends underlying modern engineering practice are grounded not only on the tried and true principles and techniques of the past, but also on more recent and current advances. Thus, in the preparation of the eleventh edition of “Marks’,” the Editors have considered the broad enterprise falling under the rubric of “Mechanical Engineering” and have added to and/or amended the contents to include subject areas that will be of maximum utility to the practicing engineer. That said, the Editors note that the publication of this eleventh edition has been accomplished through the combined and coordinated efforts of contributors, readers, and the Editors. First, we recognize, with pleasure, the input from our many contributors—past, continuing, and those newly engaged. Their contributions have been prepared with care, and are authoritative, informative, and concise. Second, our readers, who are practitioners in their own wise, have found that the global treatment of the subjects presented in the “Marks’” permits of great utility and serves as a convenient ready reference. The reading public has had access to “Marks’” since 1916, when Lionel S. Marks edited the first edition. This eleventh edition follows 90 years later. During the intervening years, “Marks’” and “Handbook for Mechanical Engineers” have become synonymous to a wide readership which includes mechanical engineers, engineers in the associated disciplines, and others. Our readership derives from a wide spectrum of interests, and it appears many find the “Marks’” useful as they pursue their professional endeavors. The Editors consider it a given that every successive edition must balance the requests to broaden the range or depth of subject matter printed, the incorporation of new material which will be useful to the widest possible audience, and the requirement to keep the size of the Handbook reasonable and manageable. We are aware that the current engineering practitioner learns quickly that the revolutionary developments of the recent past soon become standard practice. By the same token, it is prudent to realize that as a consequence of rapid developments, some cutting-edge technologies prove to have a short shelf life and soon are regarded as obsolescent—if not obsolete. The Editors are fortunate to have had, from time to time, input from readers and reviewers, who have proffered cogent commentary and suggestions; a number are included in this edition. Indeed, the synergy between Editors, contributors, and readers has been instrumental in the continuing usefulness of successive editions of “Marks’ Standard Handbook for Mechanical Engineers.” The reader will note that a considerable portion of the tabular data and running text continue to be presented in dual units; i.e., USCS and SI. The date for a projected full transition to SI units is not yet firm, and the “Marks’” reflects that. We look to the future in that regard. Society is in an era of information technology, as manifest by the practicing engineer’s working tools. For example: the ubiquitous personal computer, its derivative use of software programs of a vast variety and number, printers, computer-aided design and drawing, universal access to the Internet, and so on. It is recognized, too, that the great leaps forward which


are thereby enhanced still require the engineer to exercise sound and rational judgment as to the reliability of the solutions provided. Last, the Handbook is ultimately the responsibility of the Editors. The utmost care has been exercised to avoid errors, but if any inadvertently are included, the Publisher and Editors will appreciate being so informed. Corrections will be incorporated into subsequent printings. Ardsley, NY Newark, DE Franklin Lakes, NJ

EUGENE A. AVALLONE THEODORE BAUMEISTER III ALI M. SADEGH

Contents

For the detailed contents of any section consult the title page of that section.

Contributors  ix
The Editors  xiii
Preface to the Eleventh Edition  xv
Preface to the First Edition  xvii
Symbols and Abbreviations  xix

1. Mathematical Tables and Measuring Units  1-1
   1.1 Mathematical Tables  1-1
   1.2 Measuring Units  1-16
2. Mathematics  2-1
   2.1 Mathematics  2-2
   2.2 Computers  2-40
3. Mechanics of Solids and Fluids  3-1
   3.1 Mechanics of Solids  3-2
   3.2 Friction  3-20
   3.3 Mechanics of Fluids  3-29
   3.4 Vibration  3-61
4. Heat  4-1
   4.1 Thermodynamics  4-2
   4.2 Thermodynamic Properties of Substances  4-32
   4.3 Radiant Heat Transfer  4-63
   4.4 Transmission of Heat by Conduction and Convection  4-79
5. Strength of Materials  5-1
   5.1 Mechanical Properties of Materials  5-2
   5.2 Mechanics of Materials  5-14
   5.3 Pipeline Flexure Stresses  5-51
   5.4 Nondestructive Testing  5-57
   5.5 Experimental Stress and Strain Analysis  5-63
   5.6 Mechanics of Composite Materials  5-71
6. Materials of Engineering  6-1
   6.1 General Properties of Materials  6-3
   6.2 Iron and Steel  6-12
   6.3 Iron and Steel Castings  6-34
   6.4 Nonferrous Metals and Alloys; Metallic Specialties  6-46
   6.5 Corrosion  6-92
   6.6 Paints and Protective Coatings  6-111
   6.7 Wood  6-115
   6.8 Nonmetallic Materials  6-131
   6.9 Cement, Mortar, and Concrete  6-162
   6.10 Water  6-171
   6.11 Lubricants and Lubrication  6-180
   6.12 Plastics  6-189
   6.13 Fiber Composite Materials  6-206
7. Fuels and Furnaces  7-1
   7.1 Fuels  7-2
   7.2 Carbonization of Coal and Gas Making  7-30
   7.3 Combustion Furnaces  7-43
   7.4 Municipal Waste Combustion  7-48
   7.5 Electric Furnaces and Ovens  7-54
8. Machine Elements  8-1
   8.1 Mechanism  8-3
   8.2 Machine Elements  8-10
   8.3 Gearing  8-83
   8.4 Fluid-Film Bearings  8-111
   8.5 Bearings with Rolling Contact  8-127
   8.6 Packings, Gaskets, and Seals  8-133
   8.7 Pipe, Pipe Fittings, and Valves  8-138
9. Power Generation  9-1
   9.1 Sources of Energy  9-3
   9.2 Steam Boilers  9-29
   9.3 Steam Engines  9-56
   9.4 Steam Turbines  9-58
   9.5 Power-Plant Heat Exchangers  9-78
   9.6 Internal-Combustion Engines  9-93
   9.7 Gas Turbines  9-127
   9.8 Nuclear Power  9-138
   9.9 Hydraulic Turbines  9-154
10. Materials Handling  10-1
   10.1 Materials Holding, Feeding, and Metering  10-2
   10.2 Lifting, Hoisting, and Elevating  10-4
   10.3 Dragging, Pulling, and Pushing  10-22
   10.4 Loading, Carrying, and Excavating  10-26
   10.5 Conveyor Moving and Handling  10-42
   10.6 Automatic Guided Vehicles and Robots  10-63
   10.7 Material Storage and Warehousing  10-69
11. Transportation  11-1
   11.1 Automotive Engineering  11-3
   11.2 Railway Engineering  11-18
   11.3 Marine Engineering  11-40
   11.4 Aeronautics  11-58
   11.5 Jet Propulsion and Aircraft Propellers  11-83
   11.6 Astronautics  11-104
   11.7 Pipeline Transmission  11-139
   11.8 Containerization  11-149
12. Building Construction and Equipment  12-1
   12.1 Industrial Plants  12-2
   12.2 Structural Design of Buildings  12-19
   12.3 Reinforced Concrete Design and Construction  12-37
   12.4 Air Conditioning, Heating, and Ventilating  12-49
   12.5 Illumination  12-88
   12.6 Sound, Noise, and Ultrasonics  12-96
13. Manufacturing Processes  13-1
   13.1 Foundry Practice and Equipment  13-3
   13.2 Plastic Working of Metals  13-9
   13.3 Welding and Cutting  13-29
   13.4 Machining Processes and Machine Tools  13-50
   13.5 Surface Texture Designation, Production, and Quality Control  13-72
   13.6 Woodcutting Tools and Machines  13-77
   13.7 Precision Cleaning  13-80
14. Fans, Pumps, and Compressors  14-1
   14.1 Displacement Pumps  14-2
   14.2 Centrifugal Pumps  14-15
   14.3 Compressors  14-26
   14.4 High-Vacuum Pumps  14-39
   14.5 Fans  14-46
15. Electrical and Electronics Engineering  15-1
   15.1 Electrical Engineering  15-2
   15.2 Electronics  15-68
16. Instruments and Controls  16-1
   16.1 Instruments  16-2
   16.2 Automatic Controls  16-21
   16.3 Surveying  16-52
17. Industrial Engineering  17-1
   17.1 Operations Management  17-3
   17.2 Cost Accounting  17-11
   17.3 Engineering Statistics and Quality Control  17-18
   17.4 Methods Engineering  17-25
   17.5 Cost of Electric Power  17-31
   17.6 Human Factors and Ergonomics  17-39
   17.7 Automatic Manufacturing  17-43
18. The Regulatory Environment  18-1
   18.1  18.2  18.3  18.4

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

. . . . . . .

. . . .

18-2 18-18 18-22 18-27

19. Refrigeration, Cryogenics, and Optics . . . . . . . . . . . . . . . . .

19-1

19.1 19.2 19.3

Environmental Control . . . . . . . . . . . Occupational Safety and Health . . . . Fire Protection . . . . . . . . . . . . . . . . . . Patents, Trademarks, and Copyrights

. . . . . . .

. . . .

Mechanical Refrigeration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cryogenics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Optics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

19-2 19-26 19-41

20. Emerging Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

20-1

20.1 An Introduction to Microelectromechanical Systems (MEMS) 20.2 Introduction to Nanotechnology . . . . . . . . . . . . . . . . . . . . . . . 20.3 Ferroelectrics/Piezoelectrics and Shape Memory Alloys . . . . 20.4 Introduction to the Finite-Element Method . . . . . . . . . . . . . . . 20.5 Computer-Aided Design, Computer-Aided Engineering, and Variational Design . . . . . . . . . . . . . . . . . . 20.6 Introduction to Computational Fluid Dynamics . . . . . . . . . . . 20.7 Experimental Fluid Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . 20.8 Introduction to Biomechanics . . . . . . . . . . . . . . . . . . . . . . . . . 20.9 Human Injury Tolerance and Anthropometric Test Devices . . 20.10 Air-Inflated Fabric Structures . . . . . . . . . . . . . . . . . . . . . . . . . 20.11 Robotics, Mechatronics, and Intelligent Automation . . . . . . . 20.12 Rapid Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.13 Miscellany . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Index follows Section 20

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

20-3 20-13 20-20 20-28

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

20-44 20-51 20-63 20-79 20-104 20-108 20-118 20-132 20-135

vii

Section 1
Mathematical Tables and Measuring Units

BY
GEORGE F. BAUMEISTER President, EMC Process Co., Newport, DE
JOHN T. BAUMEISTER Manager, Product Compliance Test Center, Unisys Corp.

1.1 MATHEMATICAL TABLES
by George F. Baumeister
Segments of Circles . . . . . 1-2
Regular Polygons . . . . . 1-4
Binomial Coefficients . . . . . 1-4
Compound Interest and Annuities . . . . . 1-5
Statistical Distributions . . . . . 1-9
Decimal Equivalents . . . . . 1-15

1.2 MEASURING UNITS
by John T. Baumeister
U.S. Customary System (USCS) . . . . . 1-16
Metric System . . . . . 1-17
The International System of Units (SI) . . . . . 1-17
Systems of Units . . . . . 1-24
Temperature . . . . . 1-25
Terrestrial Gravity . . . . . 1-25
Mohs Scale of Hardness . . . . . 1-25
Time . . . . . 1-25
Density and Relative Density . . . . . 1-26
Conversion and Equivalency Tables . . . . . 1-27

1.1 MATHEMATICAL TABLES
by George F. Baumeister

REFERENCES FOR MATHEMATICAL TABLES: Dwight, “Mathematical Tables of Elementary and Some Higher Mathematical Functions,” McGraw-Hill. Dwight, “Tables of Integrals and Other Mathematical Data,” Macmillan. Jahnke and Emde, “Tables of Functions,” B. G. Teubner, Leipzig, or Dover. Pierce-Foster, “A Short Table of Integrals,” Ginn. “Mathematical Tables from Handbook of Chemistry and Physics,” Chemical Rubber Co. “Handbook of Mathematical Functions,” NBS.


Table 1.1.1 Segments of Circles, Given h/c
Given: h = height; c = chord. To find the diameter of the circle, the length of arc, or the area of the segment, form the ratio h/c, and find from the table the value of (diam/c), (arc/c), or (area/h × c); then, by a simple multiplication,
diam = c × (diam/c)     arc = c × (arc/c)     area = h × c × (area/h × c)
The table gives also the angle subtended at the center, and the ratio of h to D.
h/c

Diam/c

.00 1 2 3 4

25.010 12.520 8.363 6.290

.05 6 7 8 9

5.050 4.227 3.641 3.205 2.868

.10 1 2 3 4

2.600 2.383 2.203 2.053 1.926

.15 6 7 8 9

1.817 1.723 1.641 1.569 1.506

.20 1 2 3 4

1.450 1.400 1.356 1.317 1.282

.25 6 7 8 9

1.250 1.222 1.196 1.173 1.152

.30 1 2 3 4

1.133 1.116 1.101 1.088 1.075

.35 6 7 8 9

1.064 1.054 1.046 1.038 1.031

.40 1 2 3 4

1.025 1.020 1.015 1.011 1.008

.45 6 7 8 9

1.006 1.003 1.002 1.001 1.000

.50

1.000

Diff

12490 *4157 *2073 *1240 *823 *586 *436 *337 *268 *217 *180 *150 *127 *109 *94 *82 *72 *63 56 50 44 39 35 32 28 26 23 21 19 17 15 13 13 11 10 8 8 7 6 5 5 4 3 2 3 1 1 1 0

* Interpolation may be inaccurate at these points.

Arc c 1.000 1.000 1.001 1.002 1.004 1.007 1.010 1.013 1.017 1.021 1.026 1.032 1.038 1.044 1.051 1.059 1.067 1.075 1.084 1.094 1.103 1.114 1.124 1.136 1.147 1.159 1.171 1.184 1.197 1.211 1.225 1.239 1.254 1.269 1.284 1.300 1.316 1.332 1.349 1.366 1.383 1.401 1.419 1.437 1.455 1.474 1.493 1.512 1.531 1.551 1.571

Diff 0 1 1 2 3 3 3 4 4 5 6 6 6 7 8 8 8 9 10 9 11 10 12 11 12 12 13 13 14 14 14 15 15 15 16 16 16 17 17 17 18 18 18 18 19 19 19 19 20 20

Area h3c .6667 .6667 .6669 .6671 .6675 .6680 .6686 .6693 .6701 .6710 .6720 .6731 .6743 .6756 .6770 .6785 .6801 .6818 .6836 .6855 .6875 .6896 .6918 .6941 .6965 .6989 .7014 .7041 .7068 .7096 .7125 .7154 .7185 .7216 .7248 .7280 .7314 .7348 .7383 .7419 .7455 .7492 .7530 .7568 .7607 .7647 .7687 .7728 .7769 .7811 .7854

Diff 0 2 2 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 24 25 27 27 28 29 29 31 31 32 32 34 34 35 36 36 37 38 38 39 40 40 41 41 42 43

Central angle, v 0.008 4.58 9.16 13.73 18.30 22.848 27.37 31.88 36.36 40.82 45.248 49.63 53.98 58.30 62.57 66.808 70.98 75.11 79.20 83.23 87.218 91.13 95.00 98.81 102.56 106.268 109.90 113.48 117.00 120.45 123.868 127.20 130.48 133.70 136.86 139.978 143.02 146.01 148.94 151.82 154.648 157.41 160.12 162.78 165.39 167.958 170.46 172.91 175.32 177.69 180.008

Diff 458 458 457 457 454 453 451 448 446 442 439 435 432 427 423 418 413 409 403 399 392 387 381 375 370 364 358 352 345 341 334 328 322 316 311 305 299 293 288 282 277 271 266 261 256 251 245 241 237 231

h Diam .0000 .0004 .0016 .0036 .0064 .0099 .0142 .0192 .0250 .0314 .0385 .0462 .0545 .0633 .0727 .0826 .0929 .1036 .1147 .1263 .1379 .1499 .1622 .1746 .1873 .2000 .2128 .2258 .2387 .2517 .2647 .2777 .2906 .3034 .3162 .3289 .3414 .3538 .3661 .3783 .3902 .4021 .4137 .4252 .4364 .4475 .4584 .4691 .4796 .4899 .5000

Diff 4 12 20 28 35 43 50 58 64 71 77 83 88 94 99 103 107 111 116 116 120 123 124 127 127 128 130 129 130 130 130 129 128 128 127 125 124 123 122 119 119 116 115 112 111 109 107 105 103 101
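The tabulated ratios can also be computed directly from the geometry of the segment. The following is a minimal Python sketch (the function name is illustrative, not part of the handbook), valid for h/c up to 0.5 as in the table:

import math

def segment_from_h_c(h, c):
    # Radius from the chord-height relation: R = (c^2/4 + h^2) / (2h)
    R = (c * c / 4.0 + h * h) / (2.0 * h)
    diam = 2.0 * R
    v = 2.0 * math.asin(c / (2.0 * R))        # central angle, radians (valid while h <= R)
    arc = R * v                               # length of arc
    area = 0.5 * R * R * (v - math.sin(v))    # area of the segment
    return diam, arc, area, math.degrees(v)

# Check against the h/c = 0.25 row: diam/c = 1.250, arc/c = 1.159,
# area/(h*c) = 0.6989, central angle = 106.26 deg, h/diam = 0.2000
d, s, a, ang = segment_from_h_c(0.25, 1.0)
print(round(d, 3), round(s, 3), round(a / 0.25, 4), round(ang, 2), round(0.25 / d, 4))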


Table 1.1.2 Segments of Circles, Given h/D
Given: h = height; D = diameter of circle. To find the chord, the length of arc, or the area of the segment, form the ratio h/D, and find from the table the value of (chord/D), (arc/D), or (area/D²); then, by a simple multiplication,
chord = D × (chord/D)     arc = D × (arc/D)     area = D² × (area/D²)
This table gives also the angle subtended at the center, the ratio of the arc of the segment to the whole circumference, and the ratio of the area of the segment to the area of the whole circle.
h/D

Arc/D

.00 1 2 3 4

0.000 .2003 .2838 .3482 .4027

.05 6 7 8 9

.4510 .4949 .5355 .5735 .6094

.10 1 2 3 4

.6435 .6761 .7075 .7377 .7670

.15 6 7 8 9

.7954 .8230 .8500 .8763 .9021

.20 1 2 3 4

0.9273 0.9521 0.9764 1.0004 1.0239

.25 6 7 8 9

1.0472 1.0701 1.0928 1.1152 1.1374

.30 1 2 3 4

1.1593 1.1810 1.2025 1.2239 1.2451

.35 6 7 8 9

1.2661 1.2870 1.3078 1.3284 1.3490

.40 1 2 3 4

1.3694 1.3898 1.4101 1.4303 1.4505

.45 6 7 8 9

1.4706 1.4907 1.5108 1.5308 1.5508

.50

1.5708

Diff 2003 *835 *644 *545 *483 *439 *406 *380 *359 *341 *326 *314 *302 *293 *284 276 270 263 258 252 248 243 240 235 233 229 227 224 222 219 217 215 214 212 210 209 208 206 206 204 204 203 202 202 201 201 201 200 200 200

Area D2 .0000 .0013 .0037 .0069 .0105 .0147 .0192 .0242 .0294 .0350 .0409 .0470 .0534 .0600 .0668 .0739 .0811 .0885 .0961 .1039 .1118 .1199 .1281 .1365 .1449 .1535 .1623 .1711 .1800 .1890 .1982 .2074 .2167 .2260 .2355 .2450 .2546 .2642 .2739 .2836 .2934 .3032 .3130 .3229 .3328 .3428 .3527 .3627 .3727 .3827 .3927

* Interpolation may be inaccurate at these points.

Diff 13 24 32 36 42 45 50 52 56 59 61 64 66 68 71 72 74 76 78 79 81 82 84 84 86 88 88 89 90 92 92 93 93 95 95 96 96 97 97 98 98 98 99 99 100 99 100 100 100 100

Central angle, v 0.008 22.96 32.52 39.90 46.15 51.688 56.72 61.37 65.72 69.83 73.748 77.48 81.07 84.54 87.89 91.158 94.31 97.40 100.42 103.37 106.268 109.10 111.89 114.63 117.34 120.008 122.63 125.23 127.79 130.33 132.848 135.33 137.80 140.25 142.67 145.088 147.48 149.86 152.23 154.58 156.938 159.26 161.59 163.90 166.22 168.528 170.82 173.12 175.41 177.71 180.008

Diff 2296 * 956 * 738 * 625 * 553 *

504 465 * 435 * 411 * 391 *

*

374 359 * 347 * 335 * 326 *

316 309 302 295 289 284 279 274 271 266 263 260 256 254 251 249 247 245 242 241 240 238 237 235 235 233 233 231 232 230 230 230 229 230 229

Chord D .0000 .1990 .2800 .3412 .3919 .4359 .4750 .5103 .5426 .5724 .6000 .6258 .6499 .6726 .6940 .7141 .7332 .7513 .7684 .7846 .8000 .8146 .8285 .8417 .8542 .8660 .8773 .8879 .8980 .9075 .9165 .9250 .9330 .9404 .9474 .9539 .9600 .9656 .9708 .9755 .9798 .9837 .9871 .9902 .9928 .9950 .9968 .9982 .9992 .9998 1.0000

Diff *1990 *810 *612 *507 *440 *391 *353 *323 *298 *276 *258 *241 *227 *214 *201 *191 *181 *171 162 154 146 139 132 125 118 113 106 101 95 90 85 80 74 70 65 61 56 52 47 43 39 34 31 26 22 18 14 10 6 2

Arc Circum .0000 .0638 .0903 .1108 .1282 .1436 .1575 .1705 .1826 .1940 .2048 .2152 .2252 .2348 .2441 .2532 .2620 .2706 .2789 .2871 .2952 .3031 .3108 .3184 .3259 .3333 .3406 .3478 .3550 .3620 .3690 .3759 .3828 .3896 .3963 .4030 .4097 .4163 .4229 .4294 .4359 .4424 .4489 .4553 .4617 .4681 .4745 .4809 .4873 .4936 .5000

Diff *638 *265 *205 *174 *154 *139 *130 121 114 108 104 100 96 93 91 88 86 83 82 81 79 77 76 75 74 73 72 72 70 70 69 69 68 67 67 67 66 66 65 65 65 65 64 64 64 64 64 64 63 64

Area Circle .0000 .0017 .0048 .0087 .0134 .0187 .0245 .0308 .0375 .0446 .0520 .0598 .0680 .0764 .0851 .0941 .1033 .1127 .1224 .1323 .1424 .1527 .1631 .1738 .1846 .1955 .2066 .2178 .2292 .2407 .2523 .2640 .2759 .2878 .2998 .3119 .3241 .3364 .3487 .3611 .3735 .3860 .3986 .4112 .4238 .4364 .4491 .4618 .4745 .4873 .5000

Diff 17 31 39 47 53 58 63 67 71 74 78 82 84 87 90 92 94 97 99 101 103 104 107 108 109 111 112 114 115 116 117 119 119 120 121 122 123 123 124 124 125 126 126 126 126 127 127 127 128 127


Table 1.1.3 Regular Polygons
n = number of sides
v = 360°/n = angle subtended at the center by one side
a = length of one side = 2R sin (v/2) = 2r tan (v/2)
R = radius of circumscribed circle = a [½ csc (v/2)] = r sec (v/2)
r = radius of inscribed circle = R cos (v/2) = a [½ cot (v/2)]
Area = a² (¼ n cot (v/2)) = R² (½ n sin v) = r² (n tan (v/2))
n

v

3 4 5 6

120° 90° 72° 60°

Area/a²
Area/R²
Area/r²
R/a
R/r
a/R
a/r
r/R
r/a

0.4330 1.000 1.721 2.598

1.299 2.000 2.378 2.598

5.196 4.000 3.633 3.464

0.5774 0.7071 0.8507 1.0000

2.000 1.414 1.236 1.155

1.732 1.414 1.176 1.000

3.464 2.000 1.453 1.155

0.5000 0.7071 0.8090 0.8660

0.2887 0.5000 0.6882 0.8660

3.634 4.828 6.182 7.694

2.736 2.828 2.893 2.939

3.371 3.314 3.276 3.249

1.152 1.307 1.462 1.618

1.110 1.082 1.064 1.052

0.8678 0.7654 0.6840 0.6180

0.9631 0.8284 0.7279 0.6498

0.9010 0.9239 0.9397 0.9511

1.038 1.207 1.374 1.539

7 8 9 10

51°.43 45° 40° 36°

12 15 16 20

30° 24° 22°.50 18°

11.20 17.64 20.11 31.57

3.000 3.051 3.062 3.090

3.215 3.188 3.183 3.168

1.932 2.405 2.563 3.196

1.035 1.022 1.020 1.013

0.5176 0.4158 0.3902 0.3129

0.5359 0.4251 0.3978 0.3168

0.9659 0.9781 0.9808 0.9877

1.866 2.352 2.514 3.157

24 32 48 64

15° 11°.25 7°.50 5°.625

45.58 81.23 183.1 325.7

3.106 3.121 3.133 3.137

3.160 3.152 3.146 3.144

3.831 5.101 7.645 10.19

1.009 1.005 1.002 1.001

0.2611 0.1960 0.1308 0.0981

0.2633 0.1970 0.1311 0.0983

0.9914 0.9952 0.9979 0.9968

3.798 5.077 7.629 10.18
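The polygon relations quoted above are easy to verify numerically; a minimal Python sketch (illustrative function name):

import math

def regular_polygon(n, a=1.0):
    # v = 360 deg / n, the central angle subtended by one side
    v = 2.0 * math.pi / n
    R = a / (2.0 * math.sin(v / 2.0))     # a = 2 R sin(v/2)
    r = a / (2.0 * math.tan(v / 2.0))     # a = 2 r tan(v/2)
    area = 0.25 * n * a * a / math.tan(v / 2.0)
    return R, r, area

# Check against the n = 6 row: Area/a^2 = 2.598, R/a = 1.0000, r/a = 0.8660
R, r, area = regular_polygon(6)
print(round(area, 3), round(R, 4), round(r, 4))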

Table 1.1.4 Binomial Coefficients
(n)0 = 1     (n)1 = n     (n)2 = n(n − 1)/(1 × 2)     (n)3 = n(n − 1)(n − 2)/(1 × 2 × 3)     and, in general, (n)r = n(n − 1)(n − 2) ··· [n − (r − 1)]/(1 × 2 × 3 × ··· × r). Other notations: nCr = C(n, r) = (n)r

n

(n)0

(n)1

(n)2

(n)3

(n)4

(n)5

(n)6

(n)7

(n)8

(n)9

(n)10

(n)11

(n)12

(n)13

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

⋅⋅⋅⋅⋅⋅

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 4 10 20 35 56 84 120 165 220 286 364 455

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 5 15 35 70 126 210 330 495 715 1001 1365

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 6 21 56 126 252 462 792 1287 2002 3003

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 7 28 84 210 462 924 1716 3003 5005

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 8 36 120 330 792 1716 3432 6435

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 9 45 165 495 1287 3003 6435

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 10 55 220 715 2002 5005

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 11 66 286 1001 3003

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 12 78 364 1365

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 13 91 455

⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ ⋅⋅⋅⋅⋅⋅ 1 14 105

1 3 6 10 15 21 28 36 45 55 66 78 91 105

NOTE: For n = 14, (n)14 = 1; for n = 15, (n)14 = 15, and (n)15 = 1.
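Rows of Table 1.1.4 can be regenerated with the standard library; a one-line sketch:

from math import comb

print([comb(15, r) for r in range(8)])   # 1, 15, 105, 455, 1365, 3003, 5005, 6435 -- the n = 15 row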


Table 1.1.5 Compound Interest. Amount of a Given Principal
The amount A at the end of n years of a given principal P placed at compound interest today is A = P × x or A = P × y, according as the interest (at the rate of r percent per annum) is compounded annually or continuously; the factor x or y being taken from the following tables.
Values of x (interest compounded annually: A = P × x)
Years

r = 2

3

4

5

6

7

1 2 3 4 5

1.0200 1.0404 1.0612 1.0824 1.1041

1.0300 1.0609 1.0927 1.1255 1.1593

1.0400 1.0816 1.1249 1.1699 1.2167

1.0500 1.1025 1.1576 1.2155 1.2763

1.0600 1.1236 1.1910 1.2625 1.3382

1.0700 1.1449 1.2250 1.3108 1.4026

1.0800 1.1664 1.2597 1.3605 1.4693

1.1000 1.2100 1.3310 1.4641 1.6105

1.1200 1.2544 1.4049 1.5735 1.7623

6 7 8 9 10

1.1262 1.1487 1.1717 1.1951 1.2190

1.1941 1.2299 1.2668 1.3048 1.3439

1.2653 1.3159 1.3686 1.4233 1.4802

1.3401 1.4071 1.4775 1.5513 1.6289

1.4185 1.5036 1.5938 1.6895 1.7908

1.5007 1.6058 1.7182 1.8385 1.9672

1.5869 1.7138 1.8509 1.9990 2.1589

1.7716 1.9487 2.1436 2.3579 2.5937

1.9738 2.2107 2.4760 2.7731 3.1058

11 12 13 14 15

1.2434 1.2682 1.2936 1.3195 1.3459

1.3842 1.4258 1.4685 1.5126 1.5580

1.5395 1.6010 1.6651 1.7317 1.8009

1.7103 1.7959 1.8856 1.9799 2.0789

1.8983 2.0122 2.1329 2.2609 2.3966

2.1049 2.2522 2.4098 2.5785 2.7590

2.3316 2.5182 2.7196 2.9372 3.1722

2.8531 3.1384 3.4523 3.7975 4.1772

3.4785 3.8960 4.3635 4.8871 5.4736

16 17 18 19 20

1.3728 1.4002 1.4282 1.4568 1.4859

1.6047 1.6528 1.7024 1.7535 1.8061

1.8730 1.9479 2.0258 2.1068 2.1911

2.1829 2.2920 2.4066 2.5270 2.6533

2.5404 2.6928 2.8543 3.0256 3.2071

2.9522 3.1588 3.3799 3.6165 3.8697

3.4259 3.7000 3.9960 4.3157 4.6610

4.5950 5.0545 5.5599 6.1159 6.7275

6.1304 6.8660 7.6900 8.6128 9.6463

25 30 40 50 60

1.6406 1.8114 2.2080 2.6916 3.2810

2.0938 2.4273 3.2620 4.3839 5.8916

2.6658 3.2434 4.8010 7.1067 10.520

3.3864 4.3219 7.0400 11.467 18.679

4.2919 5.7435 10.286 18.420 32.988

5.4274 7.6123 14.974 29.457 57.946

6.8485 10.063 21.725 46.902 101.26

8

10

12

10.835 17.449 45.259 117.39 304.48

17.000 29.960 93.051 289.00 897.60

10

12

NOTE: This table is computed from the formula x = [1 + (r/100)]^n.

Values of y (interest compounded continuously: A = P × y)
Years

r = 2

3

4

5

6

7

1 2 3 4 5

1.0202 1.0408 1.0618 1.0833 1.1052

1.0305 1.0618 1.0942 1.1275 1.1618

1.0408 1.0833 1.1275 1.1735 1.2214

1.0513 1.1052 1.1618 1.2214 1.2840

1.0618 1.1275 1.1972 1.2712 1.3499

1.0725 1.1503 1.2337 1.3231 1.4191

1.0833 1.1735 1.2712 1.3771 1.4918

1.1052 1.2214 1.3499 1.4918 1.6487

1.1275 1.2712 1.4333 1.6161 1.8221

6 7 8 9 10

1.1275 1.1503 1.1735 1.1972 1.2214

1.1972 1.2337 1.2712 1.3100 1.3499

1.2712 1.3231 1.3771 1.4333 1.4918

1.3499 1.4191 1.4918 1.5683 1.6487

1.4333 1.5220 1.6161 1.7160 1.8221

1.5220 1.6323 1.7507 1.8776 2.0138

1.6161 1.7507 1.8965 2.0544 2.2255

1.8221 2.0138 2.2255 2.4596 2.7183

2.0544 2.3164 2.6117 2.9447 3.3201

11 12 13 14 15

1.2461 1.2712 1.2969 1.3231 1.3499

1.3910 1.4333 1.4770 1.5220 1.5683

1.5527 1.6161 1.6820 1.7507 1.8221

1.7333 1.8221 1.9155 2.0138 2.1170

1.9348 2.0544 2.1815 2.3164 2.4596

2.1598 2.3164 2.4843 2.6645 2.8577

2.4109 2.6117 2.8292 3.0649 3.3201

3.0042 3.3201 3.6693 4.0552 4.4817

3.7434 4.2207 4.7588 5.3656 6.0496

16 17 18 19 20

1.3771 1.4049 1.4333 1.4623 1.4918

1.6161 1.6653 1.7160 1.7683 1.8221

1.8965 1.9739 2.0544 2.1383 2.2255

2.2255 2.3396 2.4596 2.5857 2.7183

2.6117 2.7732 2.9447 3.1268 3.3201

3.0649 3.2871 3.5254 3.7810 4.0552

3.5966 3.8962 4.2207 4.5722 4.9530

4.9530 5.4739 6.0496 6.6859 7.3891

6.8210 7.6906 8.6711 9.7767 11.023

25 30 40 50 60

1.6487 1.8221 2.2255 2.7183 3.3201

2.1170 2.4596 3.3201 4.4817 6.0496

2.7183 3.3201 4.9530 7.3891 11.023

3.4903 4.4817 7.3891 12.182 20.086

4.4817 6.0496 11.023 20.086 36.598

5.7546 8.1662 16.445 33.115 66.686

7.3891 11.023 24.533 54.598 121.51

FORMULA: y = e^[(r/100) × n].

8

12.182 20.086 54.598 148.41 403.43

20.086 36.598 121.51 403.43 1339.4
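Both factors are one-line computations (the Table 1.1.6 factors that follow are simply the reciprocals x′ = 1/x and y′ = 1/y); a minimal Python sketch:

import math

def amount_factors(r_percent, n_years):
    x = (1.0 + r_percent / 100.0) ** n_years       # annual compounding, A = P * x
    y = math.exp((r_percent / 100.0) * n_years)    # continuous compounding, A = P * y
    return x, y

x, y = amount_factors(6, 10)
print(round(x, 4), round(y, 4))   # 1.7908 and 1.8221, as in the r = 6, 10-year rows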

Table 1.1.6 Principal Which Will Amount to a Given Sum
The principal P which, if placed at compound interest today, will amount to a given sum A at the end of n years is P = A × x′ or P = A × y′, according as the interest (at the rate of r percent per annum) is compounded annually or continuously; the factor x′ or y′ being taken from the following tables.
Values of x′ (interest compounded annually: P = A × x′)
Years

r = 2

3

4

5

6

7

8

10

12

1 2 3 4 5

.98039 .96117 .94232 .92385 .90573

.97087 .94260 .91514 .88849 .86261

.96154 .92456 .88900 .85480 .82193

.95238 .90703 .86384 .82270 .78353

.94340 .89000 .83962 .79209 .74726

.93458 .87344 .81630 .76290 .71299

.92593 .85734 .79383 .73503 .68058

.90909 .82645 .75131 .68301 .62092

.89286 .79719 .71178 .63552 .56743

6 7 8 9 10

.88797 .87056 .85349 .83676 .82035

.83748 .81309 .78941 .76642 .74409

.79031 .75992 .73069 .70259 .67556

.74622 .71068 .67684 .64461 .61391

.70496 .66506 .62741 .59190 .55839

.66634 .62275 .58201 .54393 .50835

.63017 .58349 .54027 .50025 .46319

.56447 .51316 .46651 .42410 .38554

.50663 .45235 .40388 .36061 .32197

11 12 13 14 15

.80426 .78849 .77303 .75788 .74301

.72242 .70138 .68095 .66112 .64186

.64958 .62460 .60057 .57748 .55526

.58468 .55684 .53032 .50507 .48102

.52679 .49697 .46884 .44230 .41727

.47509 .44401 .41496 .38782 .36245

.42888 .39711 .36770 .34046 .31524

.35049 .31863 .28966 .26333 .23939

.28748 .25668 .22917 .20462 .18270

16 17 18 19 20

.72845 .71416 .70016 .68643 .67297

.62317 .60502 .58739 .57029 .55368

.53391 .51337 .49363 .47464 .45639

.45811 .43630 .41552 .39573 .37689

.39365 .37136 .35034 .33051 .31180

.33873 .31657 .29586 .27651 .25842

.29189 .27027 .25025 .23171 .21455

.21763 .19784 .17986 .16351 .14864

.16312 .14564 .13004 .11611 .10367

25 30 40 50 60

.60953 .55207 .45289 .37153 .30478

.47761 .41199 .30656 .22811 .16973

.37512 .30832 .20829 .14071 .09506

.29530 .23138 .14205 .08720 .05354

.23300 .17411 .09722 .05429 .03031

.18425 .13137 .06678 .03395 .01726

.14602 .09938 .04603 .02132 .00988

.09230 .05731 .02209 .00852 .00328

.05882 .03338 .01075 .00346 .00111

FORMULA: x′ = [1 + (r/100)]^(−n) = 1/x.

Values of y′ (interest compounded continuously: P = A × y′)
Years

r = 2

3

4

5

6

7

8

10

12

1 2 3 4 5

.98020 .96079 .94176 .92312 .90484

.97045 .94176 .91393 .88692 .86071

.96079 .92312 .88692 .85214 .81873

.95123 .90484 .86071 .81873 .77880

.94176 .88692 .83527 .78663 .74082

.93239 .86936 .81058 .75578 .70469

.92312 .85214 .78663 .72615 .67032

.90484 .81873 .74082 .67032 .60653

.88692 .78663 .69768 .61878 .54881

6 7 8 9 10

.88692 .86936 .85214 .83527 .81873

.83527 .81058 .78663 .76338 .74082

.78663 .75578 .72615 .69768 .67032

.74082 .70469 .67032 .63763 .60653

.69768 .65705 .61878 .58275 .54881

.65705 .61263 .57121 .53259 .49659

.61878 .57121 .52729 .48675 .44933

.54881 .49659 .44933 .40657 .36788

.48675 .43171 .38289 .33960 .30119

11 12 13 14 15

.80252 .78663 .77105 .75578 .74082

.71892 .69768 .67706 .65705 .63763

.64404 .61878 .59452 .57121 .54881

.57695 .54881 .52205 .49659 .47237

.51685 .48675 .45841 .43171 .40657

.46301 .43171 .40252 .37531 .34994

.41478 .38289 .35345 .32628 .30119

.33287 .30119 .27253 .24660 .22313

.26714 .23693 .21014 .18637 .16530

16 17 18 19 20

.72615 .71177 .69768 .68386 .67032

.61878 .60050 .58275 .56553 .54881

.52729 .50662 .48675 .46767 .44933

.44933 .42741 .40657 .38674 .36788

.38289 .36059 .33960 .31982 .30119

.32628 .30422 .28365 .26448 .24660

.27804 .25666 .23693 .21871 .20190

.20190 .18268 .16530 .14957 .13534

.14661 .13003 .11533 .10228 .09072

25 30 40 50 60

.60653 .54881 .44933 .36788 .30119

.47237 .40657 .30119 .22313 .16530

.36788 .30119 .20190 .13534 .09072

.28650 .22313 .13534 .08208 .04979

.22313 .16530 .09072 .04979 .02732

.17377 .12246 .06081 .03020 .01500

.13534 .09072 .04076 .01832 .00823

.08208 .04979 .01832 .00674 .00248

.04979 .02732 .00823 .00248 .00075

FORMULA: y′ = e^[−(r/100) × n] = 1/y.

Table 1.1.7 Amount of an Annuity
The amount S accumulated at the end of n years by a given annual payment Y set aside at the end of each year is S = Y × v, where the factor v is to be taken from the following table (interest at r percent per annum, compounded annually).
Values of v
Years

r = 2

3

4

5

10

12

1 2 3 4 5

1.0000 2.0200 3.0604 4.1216 5.2040

1.0000 2.0300 3.0909 4.1836 5.3091

1.0000 2.0400 3.1216 4.2465 5.4163

1.0000 2.0500 3.1525 4.3101 5.5256

1.0000 2.0600 3.1836 4.3746 5.6371

6

1.0000 2.0700 3.2149 4.4399 5.7507

7

1.0000 2.0800 3.2464 4.5061 5.8666

8

1.0000 2.1000 3.3100 4.6410 6.1051

1.0000 2.1200 3.3744 4.7793 6.3528

6 7 8 9 10

6.3081 7.4343 8.5830 9.7546 10.950

6.4684 7.6625 8.8923 10.159 11.464

6.6330 7.8983 9.2142 10.583 12.006

6.8019 8.1420 9.5491 11.027 12.578

6.9753 8.3938 9.8975 11.491 13.181

7.1533 8.6540 10.260 11.978 13.816

7.3359 8.9228 10.637 12.488 14.487

7.7156 9.4872 11.436 13.579 15.937

8.1152 10.089 12.300 14.776 17.549

11 12 13 14 15

12.169 13.412 14.680 15.974 17.293

12.808 14.192 15.618 17.086 18.599

13.486 15.026 16.627 18.292 20.024

14.207 15.917 17.713 19.599 21.579

14.972 16.870 18.882 21.015 23.276

15.784 17.888 20.141 22.550 25.129

16.645 18.977 21.495 24.215 27.152

18.531 21.384 24.523 27.975 31.772

20.655 24.133 28.029 32.393 37.280

16 17 18 19 20

18.639 20.012 21.412 22.841 24.297

20.157 21.762 23.414 25.117 26.870

21.825 23.698 25.645 27.671 29.778

23.657 25.840 28.132 30.539 33.066

25.673 28.213 30.906 33.760 36.786

27.888 30.840 33.999 37.379 40.995

30.324 33.750 37.450 41.446 45.762

35.950 40.545 45.599 51.159 57.275

42.753 48.884 55.750 63.440 72.052

25 30 40 50 60

32.030 40.568 60.402 84.579 114.05

36.459 47.575 75.401 112.80 163.05

41.646 56.085 95.026 152.67 237.99

47.727 66.439 120.80 209.35 353.58

54.865 79.058 154.76 290.34 533.13

63.249 94.461 199.64 406.53 813.52

73.106 113.28 259.06 573.77 1253.2

98.347 164.49 442.59 1163.9 3034.8

133.33 241.33 767.09 2400.0 7471.6

FORMULA: v = {[1 + (r/100)]^n − 1}/(r/100) = (x − 1)/(r/100).

Table 1.1.8 Annuity Which Will Amount to a Given Sum (Sinking Fund)
The annual payment Y which, if set aside at the end of each year, will amount with accumulated interest to a given sum S at the end of n years is Y = S × v′, where the factor v′ is given below (interest at r percent per annum, compounded annually).
Values of v′
Years

r = 2

3

4

5

6

7

8

10

12

1 2 3 4 5

1.0000 .49505 .32675 .24262 .19216

1.0000 .49261 .32353 .23903 .18835

1.0000 .49020 .32035 .23549 .18463

1.0000 .48780 .31721 .23201 .18097

1.0000 .48544 .31411 .22859 .17740

1.0000 .48309 .31105 .22523 .17389

1.0000 .48077 .30803 .22192 .17046

1.0000 .47619 .30211 .21547 .16380

1.0000 .47170 .29635 .20923 .15741

6 7 8 9 10

.15853 .13451 .11651 .10252 .09133

.15460 .13051 .11246 .09843 .08723

.15076 .12661 .10853 .09449 .08329

.14702 .12282 .10472 .09069 .07950

.14336 .11914 .10104 .08702 .07587

.13980 .11555 .09747 .08349 .07238

.13632 .11207 .09401 .08008 .06903

.12961 .10541 .08744 .07364 .06275

.12323 .09912 .08130 .06768 .05698

11 12 13 14 15

.08218 .07456 .06812 .06260 .05783

.07808 .07046 .06403 .05853 .05377

.07415 .06655 .06014 .05467 .04994

.07039 .06283 .05646 .05102 .04634

.06679 .05928 .05296 .04758 .04296

.06336 .05590 .04965 .04434 .03979

.06008 .05270 .04652 .04130 .03683

.05396 .04676 .04078 .03575 .03147

.04842 .04144 .03568 .03087 .02682

16 17 18 19 20

.05365 .04997 .04670 .04378 .04116

.04961 .04595 .04271 .03981 .03722

.04582 .04220 .03899 .03614 .03358

.04227 .03870 .03555 .03275 .03024

.03895 .03544 .03236 .02962 .02718

.03586 .03243 .02941 .02675 .02439

.03298 .02963 .02670 .02413 .02185

.02782 .02466 .02193 .01955 .01746

.02339 .02046 .01794 .01576 .01388

25 30 40 50 60

.03122 .02465 .01656 .01182 .00877

.02743 .02102 .01326 .00887 .00613

.02401 .01783 .01052 .00655 .00420

.02095 .01505 .00828 .00478 .00283

.01823 .01265 .00646 .00344 .00188

.01581 .01059 .00501 .00246 .00123

.01368 .00883 .00386 .00174 .00080

.01017 .00608 .00226 .00086 .00033

.00750 .00414 .00130 .00042 .00013

FORMULA: v′ = (r/100)/{[1 + (r/100)]^n − 1} = 1/v.

Table 1.1.9 Present Worth of an Annuity
The capital C which, if placed at interest today, will provide for a given annual payment Y for a term of n years before it is exhausted is C = Y × w, where the factor w is given below (interest at r percent per annum, compounded annually).
Values of w
Years

r = 2

3

4

5

6

7

8

10

12

1 2 3 4 5

.98039 1.9416 2.8839 3.8077 4.7135

.97087 1.9135 2.8286 3.7171 4.5797

.96154 1.8861 2.7751 3.6299 4.4518

.95238 1.8594 2.7232 3.5460 4.3295

.94340 1.8334 2.6730 3.4651 4.2124

.93458 1.8080 2.6243 3.3872 4.1002

.92593 1.7833 2.5771 3.3121 3.9927

.90909 1.7355 2.4869 3.1699 3.7908

.89286 1.6901 2.4018 3.0373 3.6048

6 7 8 9 10

5.6014 6.4720 7.3255 8.1622 8.9826

5.4172 6.2303 7.0197 7.7861 8.5302

5.2421 6.0021 6.7327 7.4353 8.1109

5.0757 5.7864 6.4632 7.1078 7.7217

4.9173 5.5824 6.2098 6.8017 7.3601

4.7665 5.3893 5.9713 6.5152 7.0236

4.6229 5.2064 5.7466 6.2469 6.7101

4.3553 4.8684 5.3349 5.7590 6.1446

4.1114 4.5638 4.9676 5.3282 5.6502

11 12 13 14 15

9.7868 10.575 11.348 12.106 12.849

9.2526 9.9540 10.635 11.296 11.938

8.7605 9.3851 9.9856 10.563 11.118

8.3064 8.8633 9.3936 9.8986 10.380

7.8869 8.3838 8.8527 9.2950 9.7122

7.4987 7.9427 8.3577 8.7455 9.1079

7.1390 7.5361 7.9038 8.2442 8.5595

6.4951 6.8137 7.1034 7.3667 7.6061

5.9377 6.1944 6.4235 6.6282 6.8109

16 17 18 19 20

13.578 14.292 14.992 15.678 16.351

12.561 13.166 13.754 14.324 14.877

11.652 12.166 12.659 13.134 13.590

10.838 11.274 11.690 12.085 12.462

10.106 10.477 10.828 11.158 11.470

9.4466 9.7632 10.059 10.336 10.594

8.8514 9.1216 9.3719 9.6036 9.8181

7.8237 8.0216 8.2014 8.3649 8.5136

6.9740 7.1196 7.2497 7.3658 7.4694

25 30 40 50 60

19.523 22.396 27.355 31.424 34.761

17.413 19.600 23.115 25.730 27.676

15.622 17.292 19.793 21.482 22.623

14.094 15.372 17.159 18.256 18.929

12.783 13.765 15.046 15.762 16.161

11.654 12.409 13.332 13.801 14.039

9.0770 9.4269 9.7791 9.9148 9.9672

7.8431 8.0552 8.2438 8.3045 8.3240

10.675 11.258 11.925 12.233 12.377

FORMULA: w = {1 − [1 + (r/100)]^(−n)}/(r/100) = v/x.

Table 1.1.10 Annuity Provided for by a Given Capital
The annual payment Y provided for a term of n years by a given capital C placed at interest today is Y = C × w′ (interest at r percent per annum, compounded annually; the fund supposed to be exhausted at the end of the term).
Values of w′
Years

r = 2

1 2 3 4 5

1.0200 .51505 .34675 .26262 .21216

6 7 8 9 10

3

4

5

6

7

8

10

12

1.0300 .52261 .35353 .26903 .21835

1.0400 .53020 .36035 .27549 .22463

1.0500 .53780 .36721 .28201 .23097

1.0600 .54544 .37411 .28859 .23740

1.0700 .55309 .38105 .29523 .24389

1.0800 .56077 .38803 .30192 .25046

1.1000 .57619 .40211 .31547 .26380

1.1200 .59170 .41635 .32923 .27741

.17853 .15451 .13651 .12252 .11133

.18460 .16051 .14246 .12843 .11723

.19076 .16661 .14853 .13449 .12329

.19702 .17282 .15472 .14069 .12950

.20336 .17914 .16104 .14702 .13587

.20980 .18555 .16747 .15349 .14238

.21632 .19207 .17401 .16008 .14903

.22961 .20541 .18744 .17364 .16275

.24323 .21912 .20130 .18768 .17698

11 12 13 14 15

.10218 .09456 .08812 .08260 .07783

.10808 .10046 .09403 .08853 .08377

.11415 .10655 .10014 .09467 .08994

.12039 .11283 .10646 .10102 .09634

.12679 .11928 .11296 .10758 .10296

.13336 .12590 .11965 .11434 .10979

.14008 .13270 .12652 .12130 .11683

.15396 .14676 .14078 .13575 .13147

.16842 .16144 .15568 .15087 .14682

16 17 18 19 20

.07365 .06997 .06670 .06378 .06116

.07961 .07595 .07271 .06981 .06722

.08582 .08220 .07899 .07614 .07358

.09227 .08870 .08555 .08275 .08024

.09895 .09544 .09236 .08962 .08718

.10586 .10243 .09941 .09675 .09439

.11298 .10963 .10670 .10413 .10185

.12782 .12466 .12193 .11955 .11746

.14339 .14046 .13794 .13576 .13388

25 30 40 50 60

.05122 .04465 .03656 .03182 .02877

.05743 .05102 .04326 .03887 .03613

.06401 .05783 .05052 .04655 .04420

.07095 .06505 .05828 .05478 .05283

.07823 .07265 .06646 .06344 .06188

.08581 .08059 .07501 .07246 .07123

.09368 .08883 .08386 .08174 .08080

.11017 .10608 .10226 .10086 .10033

.12750 .12414 .12130 .12042 .12013

FORMULA: w′ = (r/100)/{1 − [1 + (r/100)]^(−n)} = 1/w = v′ + (r/100).
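Tables 1.1.7 to 1.1.10 all derive from the single growth factor x = [1 + (r/100)]^n; a minimal Python sketch of the four factors (illustrative function name):

def annuity_factors(r_percent, n_years):
    i = r_percent / 100.0
    x = (1.0 + i) ** n_years
    v = (x - 1.0) / i            # amount of an annuity (Table 1.1.7), S = Y * v
    v_prime = 1.0 / v            # sinking fund (Table 1.1.8), Y = S * v'
    w = (1.0 - 1.0 / x) / i      # present worth of an annuity (Table 1.1.9), C = Y * w
    w_prime = 1.0 / w            # capital recovery (Table 1.1.10), Y = C * w', and w' = v' + i
    return v, v_prime, w, w_prime

v, vp, w, wp = annuity_factors(6, 10)
print(round(v, 3), round(vp, 5), round(w, 4), round(wp, 5))
# 13.181, 0.07587, 7.3601, 0.13587 -- matching the r = 6, 10-year rows of the four tables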

Table 1.1.11 Ordinates of the Normal Density Function
f(x) = (1/√(2π)) e^(−x²/2)
x

.00

.01

.02

.03

.04

.05

.06

.07

.08

.09

.0 .1 .2 .3 .4

.3989 .3970 .3910 .3814 .3683

.3989 .3965 .3902 .3802 .3668

.3989 .3961 .3894 .3790 .3653

.3988 .3956 .3885 .3778 .3637

.3986 .3951 .3876 .3765 .3621

.3984 .3945 .3867 .3752 .3605

.3982 .3939 .3857 .3739 .3589

.3980 .3932 .3847 .3725 .3572

.3977 .3925 .3836 .3712 .3555

.3973 .3918 .3825 .3697 .3538

.5 .6 .7 .8 .9

.3521 .3332 .3123 .2897 .2661

.3503 .3312 .3101 .2874 .2637

.3485 .3292 .3079 .2850 .2613

.3467 .3271 .3056 .2827 .2589

.3448 .3251 .3034 .2803 .2565

.3429 .3230 .3011 .2780 .2541

.3410 .3209 .2989 .2756 .2516

.3391 .3187 .2966 .2732 .2492

.3372 .3166 .2943 .2709 .2468

.3352 .3144 .2920 .2685 .2444

1.0 1.1 1.2 1.3 1.4

.2420 .2179 .1942 .1714 .1497

.2396 .2155 .1919 .1691 .1476

.2371 .2131 .1895 .1669 .1456

.2347 .2107 .1872 .1647 .1435

.2323 .2083 .1849 .1626 .1415

.2299 .2059 .1826 .1604 .1394

.2275 .2036 .1804 .1582 .1374

.2251 .2012 .1781 .1561 .1354

.2227 .1989 .1758 .1539 .1334

.2203 .1965 .1736 .1518 .1315

1.5 1.6 1.7 1.8 1.9

.1295 .1109 .0940 .0790 .0656

.1276 .1092 .0925 .0775 .0644

.1257 .1074 .0909 .0761 .0632

.1238 .1057 .0893 .0748 .0620

.1219 .1040 .0878 .0734 .0608

.1200 .1023 .0863 .0721 .0596

.1182 .1006 .0848 .0707 .0584

.1163 .0989 .0833 .0694 .0573

.1154 .0973 .0818 .0681 .0562

.1127 .0957 .0804 .0669 .0551

2.0 2.1 2.2 2.3 2.4

.0540 .0440 .0355 .0283 .0224

.0529 .0431 .0347 .0277 .0219

.0519 .0422 .0339 .0270 .0213

.0508 .0413 .0332 .0264 .0208

.0498 .0404 .0325 .0258 .0203

.0488 .0396 .0317 .0252 .0198

.0478 .0387 .0310 .0246 .0194

.0468 .0379 .0303 .0241 .0189

.0459 .0371 .0297 .0235 .0184

.0449 .0363 .0290 .0229 .0180

2.5 2.6 2.7 2.8 2.9

.0175 .0136 .0104 .0079 .0060

.0171 .0132 .0101 .0077 .0058

.0167 .0129 .0099 .0075 .0056

.0163 .0126 .0096 .0073 .0055

.0158 .0122 .0093 .0071 .0053

.0154 .0119 .0091 .0069 .0051

.0151 .0116 .0088 .0067 .0050

.0147 .0113 .0086 .0065 .0048

.0143 .0110 .0084 .0063 .0047

.0139 .0107 .0081 .0061 .0046

3.0 3.1 3.2 3.3 3.4

.0044 .0033 .0024 .0017 .0012

.0043 .0032 .0023 .0017 .0012

.0042 .0031 .0022 .0016 .0012

.0040 .0030 .0022 .0016 .0011

.0039 .0029 .0021 .0015 .0011

.0038 .0028 .0020 .0015 .0010

.0037 .0027 .0020 .0014 .0010

.0036 .0026 .0019 .0014 .0010

.0035 .0025 .0018 .0013 .0009

.0034 .0025 .0018 .0013 .0009

3.5 3.6 3.7 3.8 3.9

.0009 .0006 .0004 .0003 .0002

.0008 .0006 .0004 .0003 .0002

.0008 .0006 .0004 .0003 .0002

.0008 .0005 .0004 .0003 .0002

.0008 .0005 .0004 .0003 .0002

.0007 .0005 .0004 .0002 .0002

.0007 .0005 .0003 .0002 .0002

.0007 .0005 .0003 .0002 .0002

.0007 .0005 .0003 .0002 .0001

.0006 .0004 .0003 .0002 .0001

NOTE: x is the value in the left-hand column + the value in the top row. f(x) is the value in the body of the table. Example: x = 2.14; f(x) = 0.0404.
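The ordinates follow directly from the density formula; a minimal Python sketch:

import math

def normal_pdf(x):
    # f(x) = exp(-x^2/2) / sqrt(2*pi)
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

print(round(normal_pdf(2.14), 4))   # 0.0404, the worked example above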

Table 1.1.12 Cumulative Normal Distribution
F(x) = ∫ from −∞ to x of (1/√(2π)) e^(−t²/2) dt
x

.00

.01

.02

.03

.04

.05

.06

.07

.08

.09

.0 .1 .2 .3 .4

.5000 .5398 .5793 .6179 .6554

.5040 .5438 .5832 .6217 .6591

.5080 .5478 .5871 .6255 .6628

.5120 .5517 .5910 .6293 .6664

.5160 .5557 .5948 .6331 .6700

.5199 .5596 .5987 .6368 .6736

.5239 .5636 .6026 .6406 .6772

.5279 .5675 .6064 .6443 .6808

.5319 .5714 .6103 .6480 .6844

.5359 .5735 .6141 .6517 .6879

.5 .6 .7 .8 .9

.6915 .7257 .7580 .7881 .8159

.6950 .7291 .7611 .7910 .8186

.6985 .7324 .7642 .7939 .8212

.7019 .7357 .7673 .7967 .8238

.7054 .7389 .7703 .7995 .8264

.7088 .7422 .7734 .8023 .8289

.7123 .7454 .7764 .8051 .8315

.7157 .7486 .7793 .8078 .8340

.7190 .7517 .7823 .8106 .8365

.7224 .7549 .7852 .8133 .8389

1.0 1.1 1.2 1.3 1.4

.8413 .8643 .8849 .9032 .9192

.8438 .8665 .8869 .9049 .9207

.8461 .8686 .8888 .9066 .9222

.8485 .8708 .8906 .9082 .9236

.8508 .8729 .8925 .9099 .9251

.8531 .8749 .8943 .9115 .9265

.8554 .8770 .8962 .9131 .9279

.8577 .8790 .8980 .9147 .9292

.8599 .8810 .8997 .9162 .9306

.8621 .8830 .9015 .9177 .9319

1.5 1.6 1.7 1.8 1.9

.9332 .9452 .9554 .9641 .9713

.9345 .9463 .9564 .9649 .9719

.9357 .9474 .9573 .9656 .9726

.9370 .9484 .9582 .9664 .9732

.9382 .9495 .9591 .9671 .9738

.9394 .9505 .9599 .9678 .9744

.9406 .9515 .9608 .9686 .9750

.9418 .9525 .9616 .9693 .9756

.9429 .9535 .9625 .9699 .9761

.9441 .9545 .9633 .9706 .9767

2.0 2.1 2.2 2.3 2.4

.9772 .9812 .9861 .9893 .9918

.9778 .9826 .9864 .9896 .9920

.9783 .9830 .9868 .9898 .9922

.9788 .9834 .9871 .9901 .9925

.9793 .9838 .9875 .9904 .9927

.9798 .9842 .9878 .9906 .9929

.9803 .9846 .9881 .9909 .9931

.9808 .9850 .9884 .9911 .9932

.9812 .9854 .9887 .9913 .9934

.9817 .9857 .9890 .9916 .9936

2.5 2.6 2.7 2.8 2.9

.9938 .9953 .9965 .9974 .9981

.9940 .9955 .9966 .9975 .9982

.9941 .9956 .9967 .9976 .9982

.9943 .9957 .9968 .9977 .9983

.9945 .9959 .9969 .9977 .9984

.9946 .9960 .9970 .9978 .9984

.9948 .9961 .9971 .9979 .9985

.9949 .9962 .9972 .9979 .9985

.9951 .9963 .9973 .9980 .9986

.9952 .9964 .9974 .9981 .9986

3.0 3.1 3.2 3.3 3.4

.9986 .9990 .9993 .9995 .9997

.9987 .9991 .9993 .9995 .9997

.9987 .9991 .9994 .9995 .9997

.9988 .9991 .9994 .9996 .9997

.9988 .9992 .9994 .9996 .9997

.9989 .9992 .9994 .9996 .9997

.9989 .9992 .9994 .9996 .9997

.9989 .9992 .9995 .9996 .9997

.9990 .9993 .9995 .9996 .9997

.9990 .9993 .9995 .9997 .9998

NOTE: x = (a − m)/s, where a is the observed value, m is the mean, and s is the standard deviation. x is the value in the left-hand column + the value in the top row. F(x) is the probability that a point will be less than or equal to x. F(x) is the value in the body of the table. Example: The probability that an observation will be less than or equal to 1.04 is .8508. NOTE: F(−x) = 1 − F(x).
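The cumulative probabilities can be reproduced with the error function in Python's standard library; a minimal sketch:

import math

def normal_cdf(x):
    # F(x) via the identity F(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(round(normal_cdf(1.04), 4))          # 0.8508, the worked example above
print(round(1.0 - normal_cdf(-1.04), 4))   # F(-x) = 1 - F(x) gives the same value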

Table 1.1.13 Cumulative Chi-Square Distribution
F(t) = ∫ from 0 to t of x^((n − 2)/2) e^(−x/2) dx / {2^(n/2) [(n − 2)/2]!}
F
n
1 2 3 4 5

.005

.010

.025

.050

.100

.250

.500

.750

.900

.950

.975

.990

.995

.000039 .0100 .0717 .207 .412

.00016 .0201 .155 .297 .554

.00098 .0506 .216 .484 .831

.0039 .103 .352 .711 1.15

.0158 .211 .584 1.06 1.61

.101 .575 1.21 1.92 2.67

.455 1.39 2.37 3.36 4.35

1.32 2.77 4.11 5.39 6.63

2.70 4.61 6.25 7.78 9.24

3.84 5.99 7.81 9.49 11.1

5.02 7.38 9.35 11.1 12.8

6.62 9.21 11.3 13.3 15.1

7.86 10.6 12.8 14.9 16.7

5.35 6.35 7.34 8.34 9.34

7.84 9.04 10.2 11.4 12.5

10.6 12.0 13.4 14.7 16.0

12.6 14.1 15.5 16.9 18.3

14.4 16.0 17.5 19.0 20.5

16.8 18.5 20.1 21.7 23.2

18.5 20.3 22.0 23.6 25.2

6 7 8 9 10

.676 .989 1.34 1.73 2.16

.872 1.24 1.65 2.09 2.56

1.24 1.69 2.18 2.70 3.25

1.64 2.17 2.73 3.33 3.94

2.20 2.83 3.49 4.17 4.87

3.45 4.25 5.07 5.90 6.74

11 12 13 14 15

2.60 3.07 3.57 4.07 4.60

3.05 3.57 4.11 4.66 5.23

3.82 4.40 5.01 5.63 6.26

4.57 5.23 5.89 6.57 7.26

5.58 6.30 7.04 7.79 8.55

7.58 8.44 9.30 10.2 11.0

10.3 11.3 12.3 13.3 14.3

13.7 14.8 16.0 17.1 18.2

17.3 18.5 19.8 21.1 22.3

19.7 21.0 22.4 23.7 25.0

21.9 23.3 24.7 26.1 27.5

24.7 26.2 27.7 29.1 30.6

26.8 28.3 29.8 31.3 32.8

16 17 18 19 20

5.14 5.70 6.26 6.84 7.43

5.81 6.41 7.01 7.63 8.26

6.91 7.56 8.23 8.91 9.59

7.96 8.67 9.39 10.1 10.9

9.31 10.1 10.9 11.7 12.4

11.9 12.8 13.7 14.6 15.5

15.3 16.3 17.3 18.3 19.3

19.4 20.5 21.6 22.7 23.8

23.5 24.8 26.0 27.2 28.4

26.3 27.6 28.9 30.1 31.4

28.8 30.2 31.5 32.9 34.2

32.0 33.4 34.8 36.2 37.6

34.3 35.7 37.2 38.6 40.0

21 22 23 24 25

8.03 8.64 9.26 9.89 10.5

8.90 9.54 10.2 10.9 11.5

10.3 11.0 11.7 12.4 13.1

11.6 12.3 13.1 13.8 14.6

13.2 14.0 14.8 15.7 16.5

16.3 17.2 18.1 19.0 19.9

20.3 21.3 22.3 23.3 24.3

24.9 26.0 27.1 28.2 29.3

29.6 30.8 32.0 33.2 34.4

32.7 33.9 35.2 36.4 37.7

35.5 36.8 38.1 39.4 40.6

38.9 40.3 41.6 43.0 44.3

41.4 42.8 44.2 45.6 46.9

26 27 28 29 30

11.2 11.8 12.5 13.1 13.8

12.2 12.9 13.6 14.3 15.0

13.8 14.6 15.3 16.0 16.8

15.4 16.2 16.9 17.7 18.5

17.3 18.1 18.9 19.8 20.6

20.8 21.7 22.7 23.6 24.5

25.3 26.3 27.3 28.3 29.3

30.4 31.5 32.6 33.7 34.8

35.6 36.7 37.9 39.1 40.3

38.9 40.1 41.3 42.6 43.8

41.9 43.2 44.5 45.7 47.0

45.6 47.0 48.3 49.6 50.9

48.3 49.6 51.0 52.3 53.7

NOTE: n is the number of degrees of freedom. Values for t are in the body of the table. Example: The probability that, with 16 degrees of freedom, a point will be ≤ 23.5 is .900.
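Entries of Table 1.1.13 can be checked by numerical integration of the chi-square density; math.gamma supplies [(n − 2)/2]! = Γ(n/2). A sketch using Simpson's rule, adequate for n ≥ 2:

import math

def chi2_cdf(t, n, steps=2000):
    norm = 2.0 ** (n / 2.0) * math.gamma(n / 2.0)   # 2^(n/2) * [(n - 2)/2]!

    def dens(x):
        return x ** ((n - 2) / 2.0) * math.exp(-x / 2.0) / norm

    h = t / steps
    total = dens(0.0) + dens(t)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * dens(k * h)
    return total * h / 3.0

print(round(chi2_cdf(23.5, 16), 3))   # ~0.900, the worked example above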

Table 1.1.14 Cumulative “Student’s” Distribution
F(t) = ∫ from −∞ to t of {[(n − 1)/2]! / ([(n − 2)/2]! √(πn))} (1 + x²/n)^(−(n + 1)/2) dx

F n

.75

.90

.95

.975

.99

.995

.9995

1 2 3 4 5

1.000 .816 .765 .741 .727

3.078 1.886 1.638 1.533 1.476

6.314 2.920 2.353 2.132 2.015

12.70 4.303 3.182 2.776 2.571

31.82 6.965 4.541 3.747 3.365

63.66 9.925 5.841 4.604 4.032

636.3 31.60 12.92 8.610 6.859

6 7 8 9 10

.718 .711 .706 .703 .700

1.440 1.415 1.397 1.383 1.372

1.943 1.895 1.860 1.833 1.812

2.447 2.365 2.306 2.262 2.228

3.143 2.998 2.896 2.821 2.764

3.707 3.499 3.355 3.250 3.169

5.959 5.408 5.041 4.781 4.587

11 12 13 14 15

.697 .695 .694 .692 .691

1.363 1.356 1.350 1.345 1.341

1.796 1.782 1.771 1.761 1.753

2.201 2.179 2.160 2.145 2.131

2.718 2.681 2.650 2.624 2.602

3.106 3.055 3.012 2.977 2.947

4.437 4.318 4.221 4.140 4.073

16 17 18 19 20

.690 .689 .688 .688 .687

1.337 1.333 1.330 1.328 1.325

1.746 1.740 1.734 1.729 1.725

2.120 2.110 2.101 2.093 2.086

2.583 2.567 2.552 2.539 2.528

2.921 2.898 2.878 2.861 2.845

4.015 3.965 3.922 3.883 3.850

21 22 23 24 25

.686 .686 .685 .685 .684

1.323 1.321 1.319 1.318 1.316

1.721 1.717 1.714 1.711 1.708

2.080 2.074 2.069 2.064 2.060

2.518 2.508 2.500 2.492 2.485

2.831 2.819 2.807 2.797 2.787

3.819 3.792 3.768 3.745 3.725

26 27 28 29 30

.684 .684 .683 .683 .683

1.315 1.314 1.313 1.311 1.310

1.706 1.703 1.701 1.699 1.697

2.056 2.052 2.048 2.045 2.042

2.479 2.473 2.467 2.462 2.457

2.779 2.771 2.763 2.756 2.750

3.707 3.690 3.674 3.659 3.646

40 60 120

.681 .679 .677

1.303 1.296 1.289

1.684 1.671 1.658

2.021 2.000 1.980

2.423 2.390 2.385

2.704 2.660 2.617

3.551 3.460 3.373

NOTE: n is the number of degrees of freedom. Values for t are in the body of the table. Example: The probability that, with 16 degrees of freedom, a point will be ≤ 2.921 is .995. NOTE: F(−t) = 1 − F(t).
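The same numerical check applies to Table 1.1.14; here [(n − 1)/2]! and [(n − 2)/2]! are Γ((n + 1)/2) and Γ(n/2). A sketch (the lower limit is truncated at −40, which is adequate for the moderate n of the table):

import math

def t_cdf(t, n, steps=4000, lower=-40.0):
    c = math.gamma((n + 1) / 2.0) / (math.gamma(n / 2.0) * math.sqrt(math.pi * n))

    def dens(x):
        return c * (1.0 + x * x / n) ** (-(n + 1) / 2.0)

    h = (t - lower) / steps
    total = dens(lower) + dens(t)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * dens(lower + k * h)
    return total * h / 3.0

print(round(t_cdf(2.921, 16), 3))   # ~0.995, the worked example above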

Table 1.1.15 Cumulative F Distribution
m degrees of freedom in numerator; n in denominator
G(F) = ∫ from 0 to F of {[(m + n − 2)/2]! m^(m/2) n^(n/2) x^((m − 2)/2)} / {[(m − 2)/2]! [(n − 2)/2]! (n + mx)^((m + n)/2)} dx
Upper 5% points (F.95)

Degrees of freedom for denominator

Degrees of freedom for numerator 1

2

3

4

5

6

7

8

9

10

12

15

20

24

30

40

60

120



1 2 3 4 5

161 18.5 10.1 7.71 6.61

200 19.0 9.55 6.94 5.79

216 19.2 9.28 6.59 5.41

225 19.2 9.12 6.39 5.19

230 19.3 9.01 6.26 5.05

234 19.3 8.94 6.16 4.95

237 19.4 8.89 6.09 4.88

239 19.4 8.85 6.04 4.82

241 19.4 8.81 6.00 4.77

242 19.4 8.79 5.96 4.74

244 19.4 8.74 5.91 4.68

246 19.4 8.70 5.86 4.62

248 19.4 8.66 5.80 4.56

249 19.5 8.64 5.77 4.53

250 19.5 8.62 5.75 4.50

251 19.5 8.59 5.72 4.46

252 19.5 8.57 5.69 4.43

253 19.5 8.55 5.66 4.40

254 19.5 8.53 5.63 4.37

6 7 8 9 10

5.99 5.59 5.32 5.12 4.96

5.14 4.74 4.46 4.26 4.10

4.76 4.35 4.07 3.86 3.71

4.53 4.12 3.84 3.63 3.48

4.39 3.97 3.69 3.48 3.33

4.28 3.87 3.58 3.37 3.22

4.21 3.79 3.50 3.29 3.14

4.15 3.73 3.44 3.23 3.07

4.10 3.68 3.39 3.18 3.02

4.06 3.64 3.35 3.14 2.98

4.00 3.57 3.28 3.07 2.91

3.94 3.51 3.22 3.01 2.85

3.87 3.44 3.15 2.94 2.77

3.84 3.41 3.12 2.90 2.74

3.81 3.38 3.08 2.86 2.70

3.77 3.34 3.04 2.83 2.66

3.74 3.30 3.01 2.79 2.62

3.70 3.27 2.97 2.75 2.58

3.67 3.23 2.93 2.71 2.54

11 12 13 14 15

4.84 4.75 4.67 4.60 4.54

3.98 3.89 3.81 3.74 3.68

3.59 3.49 3.41 3.34 3.29

3.36 3.26 3.18 3.11 3.06

3.20 3.11 3.03 2.96 2.90

3.09 3.00 2.92 2.85 2.79

3.01 2.91 2.83 2.76 2.71

2.95 2.85 2.77 2.70 2.64

2.90 2.80 2.71 2.65 2.59

2.85 2.75 2.67 2.60 2.54

2.79 2.69 2.60 2.53 2.48

2.72 2.62 2.53 2.46 2.40

2.65 2.54 2.46 2.39 2.33

2.61 2.51 2.42 2.35 2.29

2.57 2.47 2.38 2.31 2.25

2.53 2.43 2.34 2.27 2.20

2.49 2.38 2.30 2.22 2.16

2.45 2.34 2.25 2.18 2.11

2.40 2.30 2.21 2.13 2.07

16 17 18 19 20

4.49 4.45 4.41 4.38 4.35

3.63 3.59 3.55 3.52 3.49

3.24 3.20 3.16 3.13 3.10

3.01 2.96 2.93 2.90 2.87

2.85 2.81 2.77 2.74 2.71

2.74 2.70 2.66 2.63 2.60

2.66 2.61 2.58 2.54 2.51

2.59 2.55 2.51 2.48 2.45

2.54 2.49 2.46 2.42 2.39

2.49 2.45 2.41 2.38 2.35

2.42 2.38 2.34 2.31 2.28

2.35 2.31 2.27 2.23 2.20

2.28 2.23 2.19 2.16 2.12

2.24 2.19 2.15 2.11 2.08

2.19 2.15 2.11 2.07 2.04

2.15 2.10 2.06 2.03 1.99

2.11 2.06 2.02 1.98 1.95

2.06 2.01 1.97 1.93 1.90

2.01 1.96 1.92 1.88 1.84

21 22 23 24 25

4.32 4.30 4.28 4.26 4.24

3.47 3.44 3.42 3.40 3.39

3.07 3.05 3.03 3.01 2.99

2.84 2.82 2.80 2.78 2.76

2.68 2.66 2.64 2.62 2.60

2.57 2.55 2.53 2.51 2.49

2.49 2.46 2.44 2.42 2.40

2.42 2.40 2.37 2.36 2.34

2.37 2.34 2.32 2.30 2.28

2.32 2.30 2.27 2.25 2.24

2.25 2.23 2.20 2.18 2.16

2.18 2.15 2.13 2.11 2.09

2.10 2.07 2.05 2.03 2.01

2.05 2.03 2.01 1.98 1.96

2.01 1.98 1.96 1.94 1.92

1.96 1.94 1.91 1.89 1.87

1.92 1.89 1.86 1.84 1.82

1.87 1.84 1.81 1.79 1.77

1.81 1.78 1.76 1.73 1.71

30 40 60 120 ∞

4.17 4.08 4.00 3.92 3.84

3.32 3.23 3.15 3.07 3.00

2.92 2.84 2.76 2.68 2.60

2.69 2.61 2.53 2.45 2.37

2.53 2.45 2.37 2.29 2.21

2.42 2.34 2.25 2.18 2.10

2.33 2.25 2.17 2.09 2.01

2.27 2.18 2.10 2.02 1.94

2.21 2.12 2.04 1.96 1.88

2.16 2.08 1.99 1.91 1.83

2.09 2.00 1.92 1.83 1.75

2.01 1.92 1.84 1.75 1.67

1.93 1.84 1.75 1.66 1.57

1.89 1.79 1.70 1.61 1.52

1.84 1.74 1.65 1.55 1.46

1.79 1.69 1.59 1.50 1.39

1.74 1.64 1.53 1.43 1.32

1.68 1.58 1.47 1.35 1.22

1.62 1.51 1.39 1.25 1.00

Upper 1% points (F.99)

Degrees of freedom for denominator

Degrees of freedom for numerator 1

2

3

4

5

6

7

8

9

10

12

15

20

24

30

40

60

120



1 2 3 4 5

4052 98.5 34.1 21.2 16.3

5000 99.0 30.8 18.0 13.3

5403 99.2 29.5 16.7 12.1

5625 99.2 28.7 16.0 11.4

5764 99.3 28.2 15.5 11.0

5859 99.3 27.9 15.2 10.7

5928 99.4 27.7 15.0 10.5

5982 99.4 27.5 14.8 10.3

6023 99.4 27.3 14.7 10.2

6056 99.4 27.2 14.5 10.1

6106 99.4 27.1 14.4 9.89

6157 99.4 26.9 14.2 9.72

6209 99.4 26.7 14.0 9.55

6235 99.5 26.6 13.9 9.47

6261 99.5 26.5 13.8 9.38

6287 99.5 26.4 13.7 9.29

6313 99.5 26.3 13.7 9.20

6339 99.5 26.2 13.6 9.11

6366 99.5 26.1 13.5 9.02

6 7 8 9 10

13.7 12.2 11.3 10.6 10.0

10.9 9.55 8.65 8.02 7.56

9.78 8.45 7.59 6.99 6.55

9.15 7.85 7.01 6.42 5.99

8.75 7.46 6.63 6.06 5.64

8.47 7.19 6.37 5.80 5.39

8.26 6.99 6.18 5.61 5.20

8.10 6.84 6.03 5.47 5.06

7.98 6.72 5.91 5.35 4.94

7.87 6.62 5.81 5.26 4.85

7.72 6.47 5.67 5.11 4.71

7.56 6.31 5.52 4.96 4.56

7.40 6.16 5.36 4.81 4.41

7.31 6.07 5.28 4.73 4.33

7.23 5.99 5.20 4.65 4.25

7.14 5.91 5.12 4.57 4.17

7.06 5.82 5.03 4.48 4.08

6.97 5.74 4.95 4.40 4.40

6.88 5.65 4.86 4.31 3.91

11 12 13 14 15

9.65 9.33 9.07 8.86 8.68

7.21 6.93 6.70 6.51 6.36

6.22 5.95 5.74 5.56 5.42

5.67 5.41 5.21 5.04 4.89

5.32 5.06 4.86 4.70 4.56

5.07 4.82 4.62 4.46 4.32

4.89 4.64 4.44 4.28 4.14

4.74 4.50 4.30 4.14 4.00

4.63 4.39 4.19 4.03 3.89

4.54 4.30 4.10 3.94 3.80

4.40 4.16 3.96 3.80 3.67

4.25 4.01 3.82 3.66 3.52

4.10 3.86 3.66 3.51 3.37

4.02 3.78 3.59 3.43 3.29

3.94 3.70 3.51 3.35 3.21

3.86 3.62 3.43 3.27 3.13

3.78 3.54 3.34 3.18 3.05

3.69 3.45 3.25 3.09 2.96

3.60 3.36 3.17 3.00 2.87

16 17 18 19 20

8.53 8.40 8.29 8.19 8.10

6.23 6.11 6.01 5.93 5.85

5.29 5.19 5.09 5.01 4.94

4.77 4.67 4.58 4.50 4.43

4.44 4.34 4.25 4.17 4.10

4.20 4.10 4.01 3.94 3.87

4.03 3.93 3.84 3.77 3.70

3.89 3.79 3.71 3.63 3.56

3.78 3.68 3.60 3.52 3.46

3.69 3.59 3.51 3.43 3.37

3.55 3.46 3.37 3.30 3.23

3.41 3.31 3.23 3.15 3.09

3.26 3.16 3.08 3.00 2.94

3.18 3.08 3.00 2.92 2.86

3.10 3.00 2.92 2.84 2.78

3.02 2.92 2.84 2.76 2.69

2.93 2.83 2.75 2.67 2.61

2.84 2.75 2.66 2.58 2.52

2.75 2.65 2.57 2.49 2.42

21 22 23 24 25

8.02 7.95 7.88 7.82 7.77

5.78 5.72 5.66 5.61 5.57

4.87 4.82 4.76 4.72 4.68

4.37 4.31 4.26 4.22 4.18

4.04 3.99 3.94 3.90 3.86

3.81 3.76 3.71 3.67 3.63

3.64 3.59 3.54 3.50 3.46

3.51 3.45 3.41 3.36 3.32

3.40 3.35 3.30 3.26 3.22

3.31 3.26 3.21 3.17 3.13

3.17 3.12 3.07 3.03 2.99

3.03 2.98 2.93 2.89 2.85

2.88 2.83 2.78 2.74 2.70

2.80 2.75 2.70 2.66 2.62

2.72 2.67 2.62 2.58 2.53

2.64 2.58 2.54 2.49 2.45

2.55 2.50 2.45 2.40 2.36

2.46 2.40 2.35 2.31 2.27

2.36 2.31 2.26 2.21 2.17

30 40 60 120 ∞

7.56 7.31 7.08 6.85 6.63

5.39 5.18 4.98 4.79 4.61

4.51 4.31 4.13 3.95 3.78

4.02 3.83 3.65 3.48 3.32

3.70 3.51 3.34 3.17 3.02

3.47 3.29 3.12 2.96 2.80

3.30 3.12 2.95 2.79 2.64

3.17 2.99 2.82 2.66 2.51

3.07 2.89 2.72 2.56 2.41

2.98 2.80 2.63 2.47 2.32

2.84 2.66 2.50 2.34 2.18

2.70 2.52 2.35 2.19 2.04

2.55 2.37 2.20 2.03 1.88

2.47 2.29 2.12 1.95 1.79

2.39 2.20 2.03 1.86 1.70

2.30 2.11 1.94 1.76 1.59

2.21 2.02 1.84 1.66 1.47

2.11 1.92 1.73 1.53 1.32

2.01 1.80 1.60 1.38 1.00

NOTE: m is the number of degrees of freedom in the numerator of F; n is the number of degrees of freedom in the denominator of F. Values for F are in the body of the table. G is the probability that a point, with m and n degrees of freedom, will be ≤ F. Example: With 2 and 5 degrees of freedom, the probability that a point will be ≤ 13.3 is .99. SOURCE: “Chemical Engineers’ Handbook,” 5th edition, by R. H. Perry and C. H. Chilton, McGraw-Hill, 1973. Used with permission.
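Table 1.1.15 can be verified the same way from the F density, with the factorials again supplied by math.gamma; a sketch for m ≥ 2:

import math

def f_cdf(F, m, n, steps=4000):
    c = (math.gamma((m + n) / 2.0) * m ** (m / 2.0) * n ** (n / 2.0)
         / (math.gamma(m / 2.0) * math.gamma(n / 2.0)))

    def dens(x):
        return c * x ** ((m - 2) / 2.0) / (n + m * x) ** ((m + n) / 2.0)

    h = F / steps
    total = dens(0.0) + dens(F)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * dens(k * h)
    return total * h / 3.0

print(round(f_cdf(13.3, 2, 5), 2))   # ~0.99, the worked example above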

Table 1.1.16 Standard Distribution of Residuals
a = any positive quantity
y = the number of residuals which are numerically less than a
r = the probable error of a single observation
n = number of observations

Table 1.1.17

a/r

y/n

0.0 1 2 3 4

.000 .054 .107 .160 .213

0.5 6 7 8 9

.264 .314 .363 .411 .456

1.0 1 2 3 4

.500 .542 .582 .619 .655

1.5 6 7 8 9

.688 .719 .748 .775 .800

2.0 1 2 3 4

.823 .843 .862 .879 .895

Diff 54 53 53 53 51 50 49 48 45 44 42 40 37 36 33 31 29 27 25 23

a/r

y/n

2.5 6 7 8 9

.908 .921 .931 .941 .950

3.0 1 2 3 4

.957 .963 .969 .974 .978

3.5 6 7 8 9

.982 .985 .987 .990 .991

4.0

.993

5.0

.999

Diff 13 10 10 9 7 6 6 5 4 4 3 2 3 1 2 6

20 19 17 16 13

Factors for Computing Probable Error Bessel

n

Peters

Bessel

n

0.6745/√(n − 1)
0.6745/√[n(n − 1)]
0.8453/√[n(n − 1)]
0.8453/[n √(n − 1)]

2 3 4

.6745 .4769 .3894

.4769 .2754 .1947

.5978 .3451 .2440

.4227 .1993 .1220

5 6 7 8 9

.3372 .3016 .2754 .2549 .2385

.1508 .1231 .1041 .0901 .0795

.1890 .1543 .1304 .1130 .0996

.0845 .0630 .0493 .0399 .0332

10 11 12 13 14

.2248 .2133 .2034 .1947 .1871

.0711 .0643 .0587 .0540 .0500

.0891 .0806 .0736 .0677 .0627

.0282 .0243 .0212 .0188 .0167

15 16 17 18 19

.1803 .1742 .1686 .1636 .1590

.0465 .0435 .0409 .0386 .0365

.0583 .0546 .0513 .0483 .0457

.0151 .0136 .0124 .0114 .0105

20 21 22 23 24

.1547 .1508 .1472 .1438 .1406

.0346 .0329 .0314 .0300 .0287

.0434 .0412 .0393 .0376 .0360

.0097 .0090 .0084 .0078 .0073

25 26 27 28 29

.1377 .1349 .1323 .1298 .1275

.0275 .0265 .0255 .0245 .0237

.0345 .0332 .0319 .0307 .0297

.0069 .0065 .0061 .0058 .0055

Peters

0.6745/√(n − 1)
0.6745/√[n(n − 1)]
0.8453/√[n(n − 1)]
0.8453/[n √(n − 1)]

30 31 32 33 34

.1252 .1231 .1211 .1192 .1174

.0229 .0221 .0214 .0208 .0201

.0287 .0277 .0268 .0260 .0252

.0052 .0050 .0047 .0045 .0043

35 36 37 38 39

.1157 .1140 .1124 .1109 .1094

.0196 .0190 .0185 .0180 .0175

.0245 .0238 .0232 .0225 .0220

.0041 .0040 .0038 .0037 .0035

40 45

.1080 .1017

.0171 .0152

.0214 .0190

.0034 .0028

50 55

.0964 .0918

.0136 .0124

.0171 .0155

.0024 .0021

60 65

.0878 .0843

.0113 .0105

.0142 .0131

.0018 .0016

70 75

.0812 .0784

.0097 .0091

.0122 .0113

.0015 .0013

80 85

.0759 .0736

.0085 .0080

.0106 .0100

.0012 .0011

90 95

.0715 .0696

.0075 .0071

.0094 .0089

.0010 .0009

100

.0678

.0068

.0085

.0008
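The Bessel and Peters factors of Table 1.1.17 are simple functions of the number of observations n; a minimal Python sketch:

import math

def probable_error_factors(n):
    return (0.6745 / math.sqrt(n - 1),         # Bessel, single observation
            0.6745 / math.sqrt(n * (n - 1)),   # Bessel, mean
            0.8453 / math.sqrt(n * (n - 1)),   # Peters, single observation
            0.8453 / (n * math.sqrt(n - 1)))   # Peters, mean

print([round(f, 4) for f in probable_error_factors(10)])
# [0.2248, 0.0711, 0.0891, 0.0282], as in the n = 10 row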

Table 1.1.18 Decimal Equivalents
Common fractions

From minutes and seconds into decimal parts of a degree 0r 1 2 3 4 5r 6 7 8 9 10r 1 2 3 4 15r 6 7 8 9 20r 1 2 3 4 25r 6 7 8 9 30r 1 2 3 4 35r 6 7 8 9 40r 1 2 3 4 45r 6 7 8 9 50r 1 2 3 4 55r 6 7 8 9 60r

08.0000 .0167 .0333 .05 .0667 .0833 .10 .1167 .1333 .15 08.1667 .1833 .20 .2167 .2333 .25 .2667 .2833 .30 .3167 08.3333 .35 .3667 .3833 .40 .4167 .4333 .45 .4667 .4833 08.50 .5167 .5333 .55 .5667 .5833 .60 .6167 .6333 .65 08.6667 .6833 .70 .7167 .7333 .75 .7667 .7833 .80 .8167 08.8333 .85 .8667 .8833 .90 .9167 .9333 .95 .9667 .9833 1.00

0s 1 2 3 4 5s 6 7 8 9 10s 1 2 3 4 15s 6 7 8 9 20s 1 2 3 4 25s 6 7 8 9 30s 1 2 3 4 35s 6 7 8 9 40s 1 2 3 4 45s 6 7 8 9 50s 1 2 3 4 55s 6 7 8 9 60s

From decimal parts of a degree into minutes and seconds (exact values) 08.0000 .0003 .0006 .0008 .0011 .0014 .0017 .0019 .0022 .0025 08.0028 .0031 .0033 .0036 .0039 .0042 .0044 .0047 .005 .0053 08.0056 .0058 .0061 .0064 .0067 .0069 .0072 .0075 .0078 .0081 08.0083 .0086 .0089 .0092 .0094 .0097 .01 .0103 .0106 .0108 08.0111 .0114 .0117 .0119 .0122 .0125 .0128 .0131 .0133 .0136 08.0139 .0142 .0144 .0147 .015 .0153 .0156 .0158 .0161 .0164 08.0167

08.00 1 2 3 4 08.05 6 7 8 9 08.10 1 2 3 4 08.15 6 7 8 9 08.20 1 2 3 4 08.25 6 7 8 9 08 .30 1 2 3 4 08.35 6 7 8 9 08.40 1 2 3 4 08.45 6 7 8 9 08.50

0r 0r 36s 1r 12s 1r 48s 2r 24s 3r 3r 36s 4r 12s 4r 48s 5r 24s 6r 6r 36s 7r 12s 7r 48s 8r 24s 9r 9r 36s 10r 12s 10r 48s 11r 24s 12r 12r 36s 13r 12s 13r 48s 14r 24s 15r 15r 36s 16r 12s 16r 48s 17r 24s 18r 18r 36s 19r 12s 19r 48s 20r 24s 21r 21r 36s 22r 12s 22r 48s 23r 24s 24r 24r 36s 25r 12s 25r 48s 26r 24s 27r 27r 36s 28r 12s 28r 48s 29r 24s 30r

08.50 1 2 3 4 08.55 6 7 8 9 08.60 1 2 3 4 08.65 6 7 8 9 08.70 1 2 3 4 08.75 6 7 8 9 08.80 1 2 3 4 08.85 6 7 8 9 08.90 1 2 3 4 08.95 6 7 8 9 18.00

08.000 1 2 3 4 08.005 6 7 8 9 08.010

0s.0 3s.6 7s.2 10s.8 14s.4 18s 21s.6 25s.2 28s.8 32s.4 36s

8 ths 30r 30r 36s 31r 12s 31r 48s 32r 24s 33r 33r 36s 34r 12s 34r 48s 35r 24s 36r 36r 36s 37r 12s 37r 48s 38r 24s 39r 39r 36s 40r 12s 40r 48s 41r 24s 42r 42r 36s 43r 12s 43r 48s 44r 24s 45r 45r 36s 46r 12s 46r 48s 47r 24s 48r 48r 36s 49r 12s 49r 48s 50r 24s 51r 51r 36s 52r 12s 52r 48s 53r 24s 54r 54r 36s 55r 12s 55r 48s 56r 24s 57r 57r 36s 58r 12s 58r 48s 59r 24s 60r

Common fractions (8ths, 16ths, 32nds, and 64ths) and their exact decimal values
  1/64  = .015625    17/64 = .265625    33/64 = .515625    49/64 = .765625
  1/32  = .03125     9/32  = .28125     17/32 = .53125     25/32 = .78125
  3/64  = .046875    19/64 = .296875    35/64 = .546875    51/64 = .796875
  1/16  = .0625      5/16  = .3125      9/16  = .5625      13/16 = .8125
  5/64  = .078125    21/64 = .328125    37/64 = .578125    53/64 = .828125
  3/32  = .09375     11/32 = .34375     19/32 = .59375     27/32 = .84375
  7/64  = .109375    23/64 = .359375    39/64 = .609375    55/64 = .859375
  1/8   = .125       3/8   = .375       5/8   = .625       7/8   = .875
  9/64  = .140625    25/64 = .390625    41/64 = .640625    57/64 = .890625
  5/32  = .15625     13/32 = .40625     21/32 = .65625     29/32 = .90625
  11/64 = .171875    27/64 = .421875    43/64 = .671875    59/64 = .921875
  3/16  = .1875      7/16  = .4375      11/16 = .6875      15/16 = .9375
  13/64 = .203125    29/64 = .453125    45/64 = .703125    61/64 = .953125
  7/32  = .21875     15/32 = .46875     23/32 = .71875     31/32 = .96875
  15/64 = .234375    31/64 = .484375    47/64 = .734375    63/64 = .984375
  1/4   = .25        1/2   = .50        3/4   = .75
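A brief Python sketch (illustrative only, not part of the handbook) reproduces both halves of Table 1.1.18: conversion between decimal degrees and minutes and seconds, and the exact decimal value of a common fraction.

    def deg_to_dms(decimal_degrees):
        # Decimal degrees to (degrees, minutes, seconds); 0.01 deg -> (0, 0, 36.0)
        d = int(decimal_degrees)
        minutes_total = (decimal_degrees - d) * 60
        m = int(minutes_total)
        s = (minutes_total - m) * 60
        return d, m, round(s, 6)

    def dms_to_deg(d, m=0, s=0.0):
        # Degrees, minutes, seconds to decimal degrees; (0, 15, 0) -> 0.25
        return d + m / 60 + s / 3600

    print(deg_to_dms(0.255))      # (0, 15, 18.0)
    print(dms_to_deg(0, 1, 0))    # 0.016666..., which the table rounds to .0167
    print(17 / 64)                # 0.265625, the exact decimal for 17/64
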

1.2 MEASURING UNITS by John T. Baumeister REFERENCES: “International Critical Tables,” McGraw-Hill. “Smithsonian Physical Tables,” Smithsonian Institution. “Landolt-Börnstein: Zahlenwerte und Funktionen aus Physik, Chemie, Astronomie, Geophysik und Technik,” Springer. “Handbook of Chemistry and Physics,” Chemical Rubber Co. “Units and Systems of Weights and Measures; Their Origin, Development, and Present Status,” NBS LC 1035 (1976). “Weights and Measures Standards of the United States, a Brief History,” NBS Spec. Pub. 447 (1976). “Standard Time,” Code of Federal Regulations, Title 49. “Fluid Meters, Their Theory and Application,” 6th ed., chaps. 1–2, ASME, 1971. H.E. Huntley, “Dimensional Analysis,” Richard & Co., New York, 1951. “U.S. Standard Atmosphere, 1962,” Government Printing Office. Public Law 89-387, “Uniform Time Act of 1966.” Public Law 94-168, “Metric Conversion Act of 1975.” ASTM E380-91a, “Use of the International Standards of Units (SI) (the Modernized Metric System).” The International System of Units,” NIST Spec. Pub. 330. “Guide for the Use of the International System of Units (SI),” NIST Spec. Pub. 811. “Guidelines for Use of the Modernized Metric System,” NBS LC 1120. “NBS Time and Frequency Dissemination Services,” NBS Spec. Pub. 432. “Factors for High Precision Conversion,” NBS LC 1071. American Society of Mechanical Engineers SI Series, ASME SI 19. Jespersen and FitzRandolph, “From Sundials to Atomic Clocks: Understanding Time and Frequency,” NBS, Monograph 155. ANSI/IEEE Std 268-1992, “American National Standard for Metric Practice.”

U.S. CUSTOMARY SYSTEM (USCS)

The USCS, often called the “inch-pound system,” is the system of units most commonly used for measures of weight and length (Table 1.2.1). The units are identical for practical purposes with the corresponding English units, but the capacity measures differ from those used in the British Commonwealth, the U.S. gallon being defined as 231 cu in and the bushel as 2,150.42 cu in, whereas the corresponding British Imperial units are, respectively, 277.42 cu in and 2,219.36 cu in (1 Imp gal = 1.2 U.S. gal, approx; 1 Imp bu = 1.03 U.S. bu, approx).

Table 1.2.1  U.S. Customary Units

Units of length
  12 inches = 1 foot
  3 feet = 1 yard
  5 1/2 yards = 16 1/2 feet = 1 rod, pole, or perch
  40 poles = 220 yards = 1 furlong
  8 furlongs = 1,760 yards = 5,280 feet = 1 mile
  3 miles = 1 league
  4 inches = 1 hand
  9 inches = 1 span

Nautical units
  6,076.11549 feet = 1 international nautical mile
  6 feet = 1 fathom
  120 fathoms = 1 cable length
  1 nautical mile per hr = 1 knot

Surveyor's or Gunter's units
  7.92 inches = 1 link
  100 links = 66 ft = 4 rods = 1 chain
  80 chains = 1 mile
  33 1/3 inches = 1 vara (Texas)

Units of area
  144 square inches = 1 square foot
  9 square feet = 1 square yard
  30 1/4 square yards = 1 square rod, pole, or perch
  160 square rods = 10 square chains = 43,560 square feet = 5,645 sq varas (Texas) = 1 acre
  640 acres = 1 square mile = 1 “section” of U.S. government-surveyed land
  1 circular inch = area of circle 1 inch in diameter = 0.7854 sq in
  1 square inch = 1.2732 circular inches
  1 circular mil = area of circle 0.001 in in diameter
  1,000,000 cir mils = 1 circular inch

Units of volume
  1,728 cubic inches = 1 cubic foot
  231 cubic inches = 1 gallon
  27 cubic feet = 1 cubic yard
  1 cord of wood = 128 cubic feet
  1 perch of masonry = 16 1/2 to 25 cu ft

Liquid or fluid measurements
  4 gills = 1 pint
  2 pints = 1 quart
  4 quarts = 1 gallon
  7.4805 gallons = 1 cubic foot
  (There is no standard liquid barrel; by trade custom, 1 bbl of petroleum oil, unrefined = 42 gal. The capacity of the common steel barrel used for refined petroleum products and other liquids is 55 gal.)

Apothecaries' liquid measurements
  60 minims = 1 liquid dram or drachm
  8 drams = 1 liquid ounce
  16 ounces = 1 pint

Water measurements
  The miner's inch is a unit of water volume flow no longer used by the Bureau of Reclamation. It is used within particular water districts where its value is defined by statute. Specifically, within many of the states of the West the miner's inch is 1/50 cubic foot per second. In others it is equal to 1/40 cubic foot per second, while in the state of Colorado 38.4 miner's inches are equal to 1 cubic foot per second. In SI units, these correspond to .32  106 m3/s, .409  106 m3/s, and .427  106 m3/s, respectively.

Dry measures
  2 pints = 1 quart
  8 quarts = 1 peck
  4 pecks = 1 bushel
  1 std bbl for fruits and vegetables = 7,056 cu in or 105 dry qt, struck measure

Shipping measures
  1 Register ton = 100 cu ft
  1 U.S. shipping ton = 40 cu ft = 32.14 U.S. bu or 31.14 Imp bu
  1 British shipping ton = 42 cu ft = 32.70 Imp bu or 33.75 U.S. bu

Board measurements
  (Based on nominal not actual dimensions; see Table 12.2.8)
  1 board foot = 144 cu in = volume of board 1 ft sq and 1 in thick
  The international log rule, based upon 1/4-in kerf, is expressed by the formula
      X = 0.904762(0.22 D² − 0.71 D)
  where X is the number of board feet in a 4-ft section of a log and D is the top diameter in inches. In computing the number of board feet in a log, the taper is taken at 1/2 in per 4 ft linear, and separate computation is made for each 4-ft section.
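The following short Python routine (an illustrative sketch, not taken from the handbook) applies the international log rule as just described: the formula is evaluated for each 4-ft section, with the diameter increased by the assumed taper of 1/2 in per 4 ft for sections farther from the top.

    def board_feet_international(top_diam_in, log_length_ft, taper_per_4ft_in=0.5):
        # X = 0.904762 * (0.22*D**2 - 0.71*D) board feet per 4-ft section (1/4-in kerf).
        total = 0.0
        d = top_diam_in
        for _ in range(int(log_length_ft // 4)):
            total += 0.904762 * (0.22 * d * d - 0.71 * d)
            d += taper_per_4ft_in          # diameter grows toward the butt of the log
        return total

    print(round(board_feet_international(12.0, 16.0), 1))   # roughly 97 board feet
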

Weights
(The grain is the same in all systems.)

Avoirdupois weights
  16 drams = 437.5 grains = 1 ounce
  16 ounces = 7,000 grains = 1 pound
  100 pounds = 1 cental
  2,000 pounds = 1 short ton
  2,240 pounds = 1 long ton
  1 std lime bbl, small = 180 lb net
  1 std lime bbl, large = 280 lb net
  Also (in Great Britain):
  14 pounds = 1 stone
  2 stone = 28 pounds = 1 quarter
  4 quarters = 112 pounds = 1 hundredweight (cwt)
  20 hundredweight = 1 long ton

Troy weights
  24 grains = 1 pennyweight (dwt)
  20 pennyweights = 480 grains = 1 ounce
  12 ounces = 5,760 grains = 1 pound
  1 assay ton = 29,167 milligrams, or as many milligrams as there are troy ounces in a ton of 2,000 lb avoirdupois. Consequently, the number of milligrams of precious metal yielded by an assay ton of ore gives directly the number of troy ounces that would be obtained from a ton of 2,000 lb avoirdupois.

Apothecaries' weights
  20 grains = 1 scruple
  3 scruples = 60 grains = 1 dram
  8 drams = 1 ounce
  12 ounces = 5,760 grains = 1 pound

Weight for precious stones
  1 carat = 200 milligrams (used by almost all important nations)

Circular measures
  60 seconds = 1 minute
  60 minutes = 1 degree
  90 degrees = 1 quadrant
  360 degrees = circumference
  57.2957795 degrees (= 57°17′44.806″) = 1 radian (or angle having arc of length equal to radius)

METRIC SYSTEM

In the United States the name “metric system” of length and mass units is commonly taken to refer to a system that was developed in France about 1800. The unit of length was equal to 1/10,000,000 of a quarter meridian (north pole to equator) and named the metre. A cube 1/10th metre on a side was the litre, the unit of volume. The mass of water filling this cube was the kilogram, or standard of mass; i.e., 1 litre of water = 1 kilogram of mass. Metal bars and weights were constructed conforming to these prescriptions for the metre and kilogram. One bar and one weight were selected to be the primary representations. The kilogram and the metre are now defined independently, and the litre, although for many years defined as the volume of a kilogram of water at the temperature of its maximum density, 4°C, and under a pressure of 76 cm of mercury, is now equal to 1 cubic decimeter. In 1866, the U.S. Congress formally recognized metric units as a legal system, thereby making their use permissible in the United States. In 1893, the Office of Weights and Measures (now the National Bureau of Standards), by executive order, fixed the values of the U.S. yard and pound in terms of the meter and kilogram, respectively, as 1 yard = 3,600/3,937 m; and 1 lb = 0.453 592 4277 kg. By agreement in 1959 among the national standards laboratories of the English-speaking nations, the relations in use now are: 1 yd = 0.9144 m, whence 1 in = 25.4 mm exactly; and 1 lb = 0.453 592 37 kg, or 1 lb = 453.59 g (nearly).

THE INTERNATIONAL SYSTEM OF UNITS (SI)

In October 1960, the Eleventh General (International) Conference on Weights and Measures redefined some of the original metric units and expanded the system to include other physical and engineering units. This expanded system is called, in French, Le Système International d’Unités (abbreviated SI), and in English, The International System of Units.

The Metric Conversion Act of 1975 codifies the voluntary conversion of the U.S. to the SI system. It is expected that in time all units in the United States will be in SI form. For this reason, additional tables of units, prefixes, equivalents, and conversion factors are included below (Tables 1.2.2 and 1.2.3). SI consists of seven base units, two supplementary units, a series of derived units consistent with the base and supplementary units, and a series of approved prefixes for the formation of multiples and submultiples of the various units (see Tables 1.2.2 and 1.2.3). Multiple and submultiple prefixes in steps of 1,000 are recommended. (See ASTM E380-91a for further details.)
Base and supplementary units are defined [NIST Spec. Pub. 330 (2001)] as:
Metre  The metre is defined as the length of path traveled by light in a vacuum during a time interval 1/299 792 458 of a second.
Kilogram  The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram.
Second  The second is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom.
Ampere  The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible cross section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per metre of length.
Kelvin  The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.
Mole  The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12. (When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.)
Candela  The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.
Radian  The unit of measure of a plane angle with its vertex at the center of a circle and subtended by an arc equal in length to the radius.
Steradian  The unit of measure of a solid angle with its vertex at the center of a sphere and enclosing an area of the spherical surface equal to that of a square with sides equal in length to the radius.
SI conversion factors are listed in Table 1.2.4 alphabetically (adapted from ASTM E380-91a, “Standard Practice for Use of the International System of Units (SI) (the Modernized Metric System)”). Conversion factors are written as a number greater than one and less than ten with six or fewer decimal places. This number is followed by the letter E (for exponent), a plus or minus symbol, and two digits which indicate the power of 10 by which the number must be multiplied to obtain the correct value. For example: 3.523 907 E−02 is 3.523 907 × 10⁻², or 0.035 239 07. An asterisk (*) after the sixth decimal place indicates that the conversion factor is exact and that all subsequent digits are zero. All other conversion factors have been rounded off.
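As a small illustration (not part of the handbook), the factor just cited can be applied directly in a program; the constant below is the Table 1.2.4 entry for the U.S. bushel.

    BUSHEL_TO_CUBIC_METRE = 3.523907e-02   # "3.523 907 E-02" from Table 1.2.4

    def bushels_to_cubic_metres(bushels):
        # Multiply the quantity in the "to convert from" unit by the factor.
        return bushels * BUSHEL_TO_CUBIC_METRE

    print(bushels_to_cubic_metres(10))   # about 0.352 m^3
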

Table 1.2.2  SI Units

Base units*
  Length ........................ metre (m)
  Mass .......................... kilogram (kg)
  Time .......................... second (s)
  Electric current .............. ampere (A)
  Thermodynamic temperature ..... kelvin (K)
  Amount of substance ........... mole (mol)
  Luminous intensity ............ candela (cd)

Supplementary units*
  Plane angle ................... radian (rad)
  Solid angle ................... steradian (sr)

Derived units*
  Quantity                              Unit                           SI symbol   Formula
  Acceleration                          metre per second squared                   m/s²
  Activity (of a radioactive source)    disintegration per second                  (disintegration)/s
  Angular acceleration                  radian per second squared                  rad/s²
  Angular velocity                      radian per second                          rad/s
  Area                                  square metre                               m²
  Density                               kilogram per cubic metre                   kg/m³
  Electric capacitance                  farad                          F           A·s/V
  Electrical conductance                siemens                        S           A/V
  Electric field strength               volt per metre                             V/m
  Electric inductance                   henry                          H           V·s/A
  Electric potential difference         volt                           V           W/A
  Electric resistance                   ohm                            Ω           V/A
  Electromotive force                   volt                           V           W/A
  Energy                                joule                          J           N·m
  Entropy                               joule per kelvin                           J/K
  Force                                 newton                         N           kg·m/s²
  Frequency                             hertz                          Hz          1/s
  Illuminance                           lux                            lx          lm/m²
  Luminance                             candela per square metre                   cd/m²
  Luminous flux                         lumen                          lm          cd·sr
  Magnetic field strength               ampere per metre                           A/m
  Magnetic flux                         weber                          Wb          V·s
  Magnetic flux density                 tesla                          T           Wb/m²
  Magnetic potential difference         ampere                         A
  Power                                 watt                           W           J/s
  Pressure                              pascal                         Pa          N/m²
  Quantity of electricity               coulomb                        C           A·s
  Quantity of heat                      joule                          J           N·m
  Radiant intensity                     watt per steradian                         W/sr
  Specific heat capacity                joule per kilogram-kelvin                  J/(kg·K)
  Stress                                pascal                         Pa          N/m²
  Thermal conductivity                  watt per metre-kelvin                      W/(m·K)
  Velocity                              metre per second                           m/s
  Viscosity, dynamic                    pascal-second                              Pa·s
  Viscosity, kinematic                  square metre per second                    m²/s
  Voltage                               volt                           V           W/A
  Volume                                cubic metre                                m³
  Wave number                           reciprocal metre                           1/m
  Work                                  joule                          J           N·m

Units in use with the SI†
  Time:        minute (min), 1 min = 60 s; hour (h), 1 h = 60 min = 3,600 s; day (d), 1 d = 24 h = 86,400 s
  Plane angle: degree (°), 1° = π/180 rad; minute‡ (′), 1′ = (1/60)° = (π/10,800) rad; second‡ (″), 1″ = (1/60)′ = (π/648,000) rad
  Volume:      litre (L), 1 L = 1 dm³ = 10⁻³ m³
  Mass:        metric ton (t), 1 t = 10³ kg; unified atomic mass unit§ (u), 1 u = 1.660 54 × 10⁻²⁷ kg
  Energy:      electronvolt§ (eV), 1 eV = 1.602 18 × 10⁻¹⁹ J

* ASTM E380-91a. † These units are not part of SI, but their use is both so widespread and important that the International Committee for Weights and Measures in 1969 recognized their continued use with the SI (see NIST Spec. Pub. 330). ‡ Use discouraged, except for special fields such as cartography. § Values in SI units obtained experimentally. These units are to be used in specialized fields only.

Table 1.2.3  SI Prefixes*

  Multiplication factor                                        Prefix    SI symbol
  1 000 000 000 000 000 000 000 000 = 10²⁴                     yotta     Y
  1 000 000 000 000 000 000 000 = 10²¹                         zetta     Z
  1 000 000 000 000 000 000 = 10¹⁸                             exa       E
  1 000 000 000 000 000 = 10¹⁵                                 peta      P
  1 000 000 000 000 = 10¹²                                     tera      T
  1 000 000 000 = 10⁹                                          giga      G
  1 000 000 = 10⁶                                              mega      M
  1 000 = 10³                                                  kilo      k
  100 = 10²                                                    hecto†    h
  10 = 10¹                                                     deka†     da
  0.1 = 10⁻¹                                                   deci†     d
  0.01 = 10⁻²                                                  centi†    c
  0.001 = 10⁻³                                                 milli     m
  0.000 001 = 10⁻⁶                                             micro     µ
  0.000 000 001 = 10⁻⁹                                         nano      n
  0.000 000 000 001 = 10⁻¹²                                    pico      p
  0.000 000 000 000 001 = 10⁻¹⁵                                femto     f
  0.000 000 000 000 000 001 = 10⁻¹⁸                            atto      a
  0.000 000 000 000 000 000 001 = 10⁻²¹                        zepto     z
  0.000 000 000 000 000 000 000 001 = 10⁻²⁴                    yocto     y

* ANSI/IEEE Std 268-1992. † To be avoided where practical.
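Prefixes are most often applied programmatically in steps of 1,000, as recommended above. The sketch below is illustrative only (it covers just part of the range of Table 1.2.3) and picks the largest prefix that leaves a numeric part of at least 1.

    PREFIXES = [(1e12, "T"), (1e9, "G"), (1e6, "M"), (1e3, "k"),
                (1.0, ""), (1e-3, "m"), (1e-6, "µ"), (1e-9, "n"), (1e-12, "p")]

    def with_si_prefix(value, unit):
        # Format a value using the largest prefix that keeps the number >= 1.
        for factor, symbol in PREFIXES:
            if abs(value) >= factor:
                return f"{value / factor:g} {symbol}{unit}"
        return f"{value:g} {unit}"

    print(with_si_prefix(0.000047, "m"))   # 47 µm
    print(with_si_prefix(2.5e7, "Pa"))     # 25 MPa
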

Table 1.2.4

SI Conversion Factors

To convert from abampere abcoulomb abfarad abhenry abmho abohm abvolt acre-foot (U.S. survey)a acre (U.S. survey)a ampere, international U.S. (AINTUS)b ampere, U.S. legal 1948 (AUS48) ampere-hour angstrom are astronomical unit atmosphere (normal) atmosphere (technical  1 kgf/cm2) bar barn barrel (for crude petroleum, 42 gal) board foot British thermal unit (International Table)c British thermal unit (mean) British thermal unit (thermochemical) British thermal unit (398F) British thermal unit (598F) British thermal unit (608F) Btu (thermochemical)/foot2-second Btu (thermochemical)/foot2-minute Btu (thermochemical)/foot2-hour Btu (thermochemical)/inch2-second Btu (thermochemical)  in/s  ft2  8F (k, thermal conductivity) Btu (International Table)  in/s  ft2  8F (k, thermal conductivity) Btu (thermochemical)  in/h  ft2  8F (k, thermal conductivity) Btu (International Table)  in/h  ft2  8F (k, thermal conductivity) Btu (International Table)/ft2 Btu (thermochemical)/ft2 Btu (International Table)/h  ft2  8F (C, thermal conductance) Btu (thermochemical)/h  ft2  8F (C, thermal conductance) Btu (International Table)/pound-mass

to

Multiply by

ampere (A) coulomb (C) farad (F) henry (H) siemens (S) ohm ( ) volt (V) metre3 (m3) metre2 (m2) ampere (A) ampere (A) coulomb (C) metre (m) metre2 (m2) metre (m) pascal (Pa) pascal (Pa) pascal (Pa) metre2 (m2) metre3 (m3) metre3 (m3) joule (J) joule (J) joule (J) joule (J) joule (J) joule (J) watt/metre2 (W/m2) watt/metre2 (W/m2) watt/metre2 (W/m2) watt/metre2 (W/m2) watt/metre-kelvin (W/m  K)

1.000 000*E01 1.000 000*E01 1.000 000*E09 1.000 000*E09 1.000 000*E09 1.000 000*E09 1.000 000*E08 1.233 489 E03 4.046 873 E03 9.998 43 E01 1.000 008 E00 3.600 000*E03 1.000 000*E10 1.000 000*E02 1.495 98 E11 1.013 25 E05 9.806 650*E04 1.000 000*E05 1.000 000*E28 1.589 873 E01 2.359 737 E03 1.055 056 E03 1.055 87 E03 1.054 350 E03 1.059 67 E03 1.054 80 E03 1.054 68 E03 1.134 893 E04 1.891 489 E02 3.152 481 E00 1.634 246 E06 5.188 732 E02

watt/metre-kelvin (W/m  K)

5.192 204 E02

watt/metre-kelvin (W/m  K)

1.441 314 E01

watt/metre-kelvin (W/m  K)

1.442 279 E01

joule/metre2 (J/m2) joule/metre2 (J/m2) watt/metre2-kelvin (W/m2  K)

1.135 653 E04 1.134 893 E04 5.678 263 E00

watt/metre2-kelvin (W/m2  K)

5.674 466 E00

joule/kilogram (J/kg)

2.326 000*E03

Table 1.2.4

SI Conversion Factors

(Continued )

To convert from

to

Multiply by

Btu (thermochemical)/pound-mass Btu (International Table)/lbm  8F (c, heat capacity) Btu (thermochemical)/lbm  8F (c, heat capacity) Btu (International Table)/s  ft2  8F Btu (thermochemical)/s  ft2  8F Btu (International Table)/hour Btu (thermochemical)/second Btu (thermochemical)/minute Btu (thermochemical)/hour bushel (U.S.) calorie (International Table) calorie (mean) calorie (thermochemical) calorie (158C) calorie (208C) calorie (kilogram, International Table) calorie (kilogram, mean) calorie (kilogram, thermochemical) calorie (thermochemical)/centimetre2minute cal (thermochemical)/cm2 cal (thermochemical)/cm2  s cal (thermochemical)/cm  s  8C cal (International Table)/g cal (International Table)/g  8C cal (thermochemical)/g cal (thermochemical)/g  8C calorie (thermochemical)/second calorie (thermochemical)/minute carat (metric) centimetre of mercury (08C) centimetre of water (48C) centipoise centistokes chain (engineer or ramden) chain (surveyor or gunter) circular mil cord coulomb, international U.S. (CINTUS)b coulomb, U.S. legal 1948 (CUS48) cup curie day (mean solar) day (sidereal) degree (angle) degree Celsius degree centigrade degree Fahrenheit degree Fahrenheit deg F  h  ft2/Btu (thermochemical) (R, thermal resistance) deg F  h  ft2/Btu (International Table) (R, thermal resistance) degree Rankine dram (avoirdupois) dram (troy or apothecary) dram (U.S. fluid) dyne dyne-centimetre dyne-centimetre2 electron volt EMU of capacitance EMU of current EMU of electric potential EMU of inductance EMU of resistance ESU of capacitance ESU of current ESU of electric potential ESU of inductance

joule/kilogram (J/kg) joule/kilogram-kelvin (J/kg  K)

2.324 444 E03 4.186 800*E03

joule/kilogram-kelvin (J/kg  K)

4.184 000*E03

watt/metre -kelvin (W/m  K) watt/metre2-kelvin (W/m2  K) watt (W) watt (W) watt (W) watt (W) metre3 (m3) joule (J) joule (J) joule (J) joule (J) joule (J) joule (J) joule (J) joule (J) watt/metre2 (W/m2)

2.044 175 E04 2.042 808 E04 2.930 711 E01 1.054 350 E03 1.757 250 E01 2.928 751 E01 3.523 907 E02 4.186 800*E00 4.190 02 E00 4.184 000*E00 4.185 80 E00 4.181 90 E00 4.186 800*E03 4.190 02 E03 4.184 000*E03 6.973 333 E02

joule/metre2 (J/m2) watt/metre2 (W/m2) watt/metre-kelvin (W/m  K) joule/kilogram (J/kg) joule/kilogram-kelvin (J/kg  K) joule/kilogram (J/kg) joule/kilogram-kelvin (J/kg  K) watt (W) watt (W) kilogram (kg) pascal (Pa) pascal (Pa) pascal-second (Pa  s) metre2/second (m2/s) meter (m) meter (m) metre2 (m2) metre3 (m3) coulomb (C)

4.184 000*E04 4.184 000*E04 4.184 000*E02 4.186 800*E03 4.186 800*E03 4.184 000*E03 4.184 000*E03 4.184 000*E00 6.973 333 E02 2.000 000*E04 1.333 22 E03 9.806 38 E01 1.000 000*E03 1.000 000*E06 3.048* E01 2.011 684 E01 5.067 075 E10 3.624 556 E00 9.998 43 E01

coulomb (C) metre3 (m3) becquerel (Bq) second (s) second (s) radian (rad) kelvin (K) kelvin (K) degree Celsius kelvin (K) kelvin-metre2/watt (K  m2/W)

1.000 008 E00 2.365 882 E04 3.700 000*E10 8.640 000 E04 8.616 409 E04 1.745 329 E02 tK  t8C  273.15 tK  t8C  273.15 t8C  (t8F  32)/1.8 tK  (t8F  459.67)/1.8 1.762 280 E01

kelvin-metre2/watt (K  m2/ W)

1.761 102 E01

kelvin (K) kilogram (kg) kilogram (kg) kilogram (kg) newton (N) newton-metre (N  m) pascal (Pa) joule (J) farad (F) ampere (A) volt (V) henry (H) ohm ( ) farad (F) ampere (A) volt (V) henry (H)

tK  t8R/1.8 1.771 845 E03 3.887 934 E03 3.696 691 E06 1.000 000*E05 1.000 000*E07 1.000 000*E01 1.602 18 E19 1.000 000*E09 1.000 000*E01 1.000 000*E08 1.000 000*E09 1.000 000*E09 1.112 650 E12 3.335 6 E10 2.997 9 E02 8.987 552 E11

2

2

Table 1.2.4

SI Conversion Factors

To convert from ESU of resistance erg erg/centimetre2-second erg/second farad, international U.S. (FINTUS) faraday (based on carbon 12) faraday (chemical) faraday (physical) fathom (U.S. survey)a fermi (femtometer) fluid ounce (U.S.) foot foot (U.S. survey)a foot3/minute foot3/second foot3 (volume and section modulus) foot2 foot4 (moment of section)d foot/hour foot/minute foot/second foot2/second foot of water (39.28F) footcandle footcandle footlambert foot-pound-force foot-pound-force/hour foot-pound-force/minute foot-pound-force/second foot-poundal ft2/h (thermal diffusivity) foot/second2 free fall, standard furlong gal gallon (Canadian liquid) gallon (U.K. liquid) gallon (U.S. dry) gallon (U.S. liquid) gallon (U.S. liquid)/day gallon (U.S. liquid)/minute gamma gauss gilbert gill (U.K.) gill (U.S.) grade grade grain (1/7,000 lbm avoirdupois) gram gram/centimetre3 gram-force/centimetre2 hectare henry, international U.S. (HINTUS) hogshead (U.S.) horsepower (550 ft  lbf/s) horsepower (boiler) horsepower (electric) horsepower (metric) horsepower (water) horsepower (U.K.) hour (mean solar) hour (sidereal) hundredweight (long) hundredweight (short) inch inch2 inch3 (volume and section modulus) inch3/minute inch4 (moment of section)d inch/second inch of mercury (328F)

(Continued ) to ohm ( ) joule (J) watt/metre2 (W/m2) watt (W) farad (F) coulomb (C) coulomb (C) coulomb (C) metre (m) metre (m) metre3 (m3) metre (m) metre (m) metre3/second (m3/s) metre3/second (m3/s) metre3 (m3) metre2 (m2) metre4 (m4) metre/second (m/s) metre/second (m/s) metre/second (m/s) metre2/second (m2/s) pascal (Pa) lumen/metre2 (lm/m2) lux (lx) candela/metre2 (cd/m2) joule (J) watt (W) watt (W) watt (W) joule (J) metre2/second (m2/s) metre/second2 (m/s2) metre/second2 (m/s2) metre (m) metre/second2 (m/s2) metre3 (m3) metre3 (m3) metre3 (m3) metre3 (m3) metre3/second (m3/s) metre3/second (m3/s) tesla (T) tesla (T) ampere-turn metre3 (m3) metre3 (m3) degree (angular) radian (rad) kilogram (kg) kilogram (kg) kilogram/metre3 (kg/m3) pascal (Pa) metre2 (m2) henry (H) metre3 (m3) watt (W) watt (W) watt (W) watt (W) watt (W) watt (W) second (s) second (s) kilogram (kg) kilogram (kg) metre (m) metre2 (m2) metre3 (m3) metre3/second (m3/s) metre4 (m4) metre/second (m/s) pascal (Pa)

Multiply by 8.987 552 E11 1.000 000*E07 1.000 000*E03 1.000 000*E07 9.995 05 E01 9.648 531 E04 9.649 57 E04 9.652 19 E04 1.828 804 E00 1.000 000*E15 2.957 353 E05 3.048 000*E01 3.048 006 E01 4.719 474 E04 2.831 685 E02 2.831 685 E02 9.290 304*E02 8.630 975 E03 8.466 667 E05 5.080 000*E03 3.048 000*E01 9.290 304*E02 2.988 98 E03 1.076 391 E01 1.076 391 E01 3.426 259 E00 1.355 818 E00 3.766 161 E04 2.259 697 E02 1.355 818 E00 4.214 011 E02 2.580 640*E05 3.048 000*E01 9.806 650*E00 2.011 68 *E02 1.000 000*E02 4.546 090 E03 4.546 092 E03 4.404 884 E03 3.785 412 E03 4.381 264 E08 6.309 020 E05 1.000 000*E09 1.000 000*E04 7.957 747 E01 1.420 653 E04 1.182 941 E04 9.000 000*E01 1.570 796 E02 6.479 891*E05 1.000 000*E03 1.000 000*E03 9.806 650*E01 1.000 000*E04 1.000 495 E00 2.384 809 E01 7.456 999 E02 9.809 50 E03 7.460 000*E02 7.354 99 E02 7.460 43 E02 7.457 0 E02 3.600 000*E03 3.590 170 E03 5.080 235 E01 4.535 924 E01 2.540 000*E02 6.451 600*E04 1.638 706 E05 2.731 177 E07 4.162 314 E07 2.540 000*E02 3.386 38 E03

Table 1.2.4

SI Conversion Factors

(Continued )

To convert from inch of mercury (608F) inch of water (39.28F) inch of water (608F) inch/second2 joule, international U.S. (JINTUS)b joule, U.S. legal 1948 (JUS48) kayser kelvin kilocalorie (thermochemical)/minute kilocalorie (thermochemical)/second kilogram-force (kgf ) kilogram-force-metre kilogram-force-second2/metre (mass) kilogram-force/centimetre2 kilogram-force/metre3 kilogram-force/millimetre2 kilogram-mass kilometre/hour kilopond kilowatt hour kilowatt hour, international U.S. (kWhINTUS)b kilowatt hour, U.S. legal 1948 (kWhUS48) kip (1,000 lbf ) kip/inch2 (ksi) knot (international) lambert langley league, nautical (international and U.S.) league (U.S. survey)a league, nautical (U.K.) light year (365.2425 days) link (engineer or ramden) link (surveyor or gunter) litree lux maxwell mho microinch micron (micrometre) mil mile, nautical (international and U.S.) mile, nautical (U.K.) mile (international) mile (U.S. survey)a mile2 (international) mile2 (U.S. survey)a mile/hour (international) mile/hour (international) millimetre of mercury (08C) minute (angle) minute (mean solar) minute (sidereal) month (mean calendar) oersted ohm, international U.S. ( INT–US) ohm-centimetre ounce-force (avoirdupois) ounce-force-inch ounce-mass (avoirdupois) ounce-mass (troy or apothecary) ounce-mass/yard2 ounce (avoirdupois)(mass)/inch3 ounce (U.K. fluid) ounce (U.S. fluid) parsec peck (U.S.) pennyweight perm (08C) perm (23 8C)

to

Multiply by

pascal (Pa) pascal (Pa) pascal (Pa) metre/second2 (m/s2) joule (J) joule (J) 1/metre (1/m) degree Celsius watt (W) watt (W) newton (N) newton-metre (N  m) kilogram (kg) pascal (Pa) pascal (Pa) pascal (Pa) kilogram (kg) metre/second (m/s) newton (N) joule (J) joule (J)

3.376 85 E03 2.490 82 E02 2.488 4 E02 2.540 000*E02 1.000 182 E00 1.000 017 E00 1.000 000*E02 tC  tK  273.15 6.973 333 E01 4.184 000*E03 9.806 650*E00 9.806 650*E00 9.806 650*E00 9.806 650*E04 9.806 650*E00 9.806 650*E06 1.000 000*E00 2.777 778 E01 9.806 650*E00 3.600 000*E06 3.600 655 E06

joule (J)

3.600 061 E06

newton (N) pascal (Pa) metre/second (m/s) candela/metre2 (cd/m2) joule/metre2 (J/m2) metre (m) metre (m) metre (m) metre (m) metre (m) metre (m) metre3 (m3) lumen/metre2 (lm/m2) weber (Wb) siemens (S) metre (m) metre (m) metre (m) metre (m) metre (m) metre (m) metre (m) metre2 (m2) metre2 (m2) metre/second (m/s) kilometre/hour pascal (Pa) radian (rad) second (s) second (s) second (s) ampere/metre (A/m) ohm ( ) ohm-metre (  m) newton (N) newton-metre (N  m) kilogram (kg) kilogram (kg) kilogram/metre2 (kg/m2) kilogram/metre3 (kg/m3) metre3 (m3) metre3 (m3) metre (m) metre3 (m3) kilogram (kg) kilogram/pascal-secondmetre2 (kg/Pa  s  m2) kilogram/pascal-secondmetre2 (kg/Pa  s  m2)

4.448 222 E03 6.894 757 E06 5.144 444 E01 3.183 099 E03 4.184 000*E04 5.556 000*E03 4.828 041 E03 5.559 552*E03 9.460 54 E15 3.048* E01 2.011 68* E01 1.000 000*E03 1.000 000*E00 1.000 000*E08 1.000 000*E00 2.540 000*E08 1.000 000*E06 2.540 000*E05 1.852 000*E03 1.853 184*E03 1.609 344*E03 1.609 347 E03 2.589 988 E06 2.589 998 E06 4.470 400*E01 1.609 344*E00 1.333 224 E02 2.908 882 E04 6.000 000 E01 5.983 617 E01 2.268 000 E06 7.957 747 E01 1.000 495 E00 1.000 000*E02 2.780 139 E01 7.061 552 E03 2.834 952 E02 3.110 348 E02 3.390 575 E02 1.729 994 E03 2.841 306 E05 2.957 353 E05 3.085 678 E16 8.809 768 E03 1.555 174 E03 5.721 35 E11 5.745 25 E11

Table 1.2.4

SI Conversion Factors

To convert from perm-inch (08C) perm-inch (238C) phot pica (printer’s) pint (U.S. dry) pint (U.S. liquid) point (printer’s) poise (absolute viscosity) poundal poundal/foot2 poundal-second/foot2 pound-force (lbf avoirdupois) pound-force-inch pound-force-foot pound-force-foot/inch pound-force-inch/inch pound-force/inch pound-force/foot pound-force/foot2 pound-force/inch2 (psi) pound-force-second/foot2 pound-mass (lbm avoirdupois) pound-mass (troy or apothecary) pound-mass-foot2 (moment of inertia) pound-mass-inch2 (moment of inertia) pound-mass/foot2 pound-mass/second pound-mass/minute pound-mass/foot3 pound-mass/inch3 pound-mass/gallon (U.K. liquid) pound-mass/gallon (U.S. liquid) pound-mass/foot-second quart (U.S. dry) quart (U.S. liquid) rad (radiation dose absorbed) rem (dose equivalent) rhe rod (U.S. survey)a roentgen second (angle) second (sidereal) section (U.S. survey)a shake slug slug/foot3 slug/foot-second statampere statcoulomb statfarad stathenry statmho statohm statvolt stere stilb stokes (kinematic viscosity) tablespoon teaspoon ton (assay) ton (long, 2,240 lbm) ton (metric) ton (nuclear equivalent of TNT) ton (register) ton (short, 2,000 lbm) ton (short, mass)/hour ton (long, mass)/yard3 tonne torr (mm Hg, 08C) township (U.S. survey)a unit pole

(Continued ) to

Multiply by

kilogram/pascal-secondmetre (kg/Pa  s  m) kilogram/pascal-secondmetre (kg/Pa  s  m) lumen/metre2 (lm/m2) metre (m) metre3 (m3) metre3 (m3) metre pascal-second (Pa  s) newton (N) pascal (Pa) pascal-second (Pa  s) newton (N) newton-metre (N  m) newton-metre (N  m) newton-metre/metre (N  m/m) newton-metre/metre (N  m/m) newton/metre (N/m) newton/metre (N/m) pascal (Pa) pascal (Pa) pascal-second (Pa  s) kilogram (kg) kilogram (kg) kilogram-metre2 (kg  m2) kilogram-metre2 (kg  m2) kilogram/metre2 (kg/m2) kilogram/second (kg/s) kilogram/second (kg/s) kilogram/metre3 (kg/m3) kilogram/metre3 (kg/m3) kilogram/metre3 (kg/m3) kilogram/metre3 (kg/m3) pascal-second (Pa  s) metre3 (m3) metre3 (m3) gray (Gy) sievert (Sv) metre2/newton-second (m2/N  s) metre (m) coulomb/kilogram (C/kg) radian (rad) second (s) metre2 (m2) second (s) kilogram (kg) kilogram/metre3 (kg/m3) pascal-second (Pa  s) ampere (A) coulomb (C) farad (F) henry (H) siemens (S) ohm ( ) volt (V) metre3 (m3) candela/metre2 (cd/m2) metre2/second (m2/s) metre3 (m3) metre3 (m3) kilogram (kg) kilogram (kg) kilogram (kg) joule (J) metre3 (m3) kilogram (kg) kilogram/second (kg/s) kilogram/metre3 (kg/m3) kilogram (kg) pascal (Pa) metre2 (m2) weber (Wb)

1.453 22 E12 1.459 29 E12 1.000 000*E04 4.217 518 E03 5.506 105 E04 4.731 765 E04 3.514 598 E04 1.000 000*E01 1.382 550 E01 1.488 164 E00 1.488 164 E00 4.448 222 E00 1.129 848 E01 1.355 818 E00 5.337 866 E01 4.448 222 E00 1.751 268 E02 1.459 390 E01 4.788 026 E01 6.894 757 E03 4.788 026 E01 4.535 924 E01 3.732 417 E01 4.214 011 E02 2.926 397 E04 4.882 428 E00 4.535 924 E01 7.559 873 E03 1.601 846 E01 2.767 990 E04 9.977 637 E01 1.198 264 E02 1.488 164 E00 1.101 221 E03 9.463 529 E04 1.000 000*E02 1.000 000*E02 1.000 000*E01 5.029 210 E00 2.580 000*E04 4.848 137 E06 9.972 696 E01 2.589 998 E06 1.000 000*E08 1.459 390 E01 5.153 788 E02 4.788 026 E01 3.335 641 E10 3.335 641 E10 1.112 650 E12 8.987 552 E11 1.112 650 E12 8.987 552 E11 2.997 925 E02 1.000 000*E00 1.000 000*E04 1.000 000*E04 1.478 676 E05 4.928 922 E06 2.916 667 E02 1.016 047 E03 1.000 000*E03 4.184 000*E09 2.831 685 E00 9.071 847 E02 2.519 958 E01 1.328 939 E03 1.000 000*E03 1.333 22 E02 9.323 994 E07 1.256 637 E07

Table 1.2.4

SI Conversion Factors

(Continued )

To convert from

to b

volt, international U.S. (VINTUS) volt, U.S. legal 1948 (VUS48) watt, international U.S. (WINTUS)b watt, U.S. legal 1948 (WUS48) watt/centimetre2 watt-hour watt-second yard yard2 yard3 yard3/minute year (calendar) year (sidereal) year (tropical)

Multiply by

volt (V) volt (V) watt (W) watt (W) watt/metre2 (W/m2) joule (J) joule (J) metre (m) metre2 (m2) metre3 (m3) metre3/second (m3/s) second (s) second (s) second (s)

1.000 338 E00 1.000 008 E00 1.000 182 E00 1.000 017 E00 1.000 000*E04 3.600 000*E03 1.000 000*E00 9.144 000*E01 8.361 274 E01 7.645 549 E01 1.274 258 E02 3.153 600*E07 3.155 815 E07 3.155 693 E07

Based on the U.S. survey foot (1 ft  1,200/3,937 m). b In 1948 a new international agreement was reached on absolute electrical units, which changed the value of the volt used in this country by about 300 parts per million. Again in 1969 a new base of reference was internationally adopted making a further change of 8.4 parts per million. These changes (and also changes in ampere, joule, watt, coulomb) require careful terminology and conversion factors for exact use of old information. Terms used in this guide are: Volt as used prior to January 1948—volt, international U.S. (VINTUS) Volt as used between January 1948 and January 1969—volt, U.S. legal 1948 (VINT48) Volt as used since January 1969—volt (V) Identical treatment is given the ampere, coulomb, watt, and joule. c This value was adopted in 1956. Some of the older International Tables use the value 1.055 04 E03. The exact conversion factor is 1.055 055 852 62*E03. d Moment of inertia of a plane section about a specified axis. e In 1964, the General Conference on Weights and Measures adopted the name “litre” as a special name for the cubic decimetre. Prior to this decision the litre differed slightly (previous value, 1.000028 dm3), and in expression of precision, volume measurement, this fact must be kept in mind. a

SYSTEMS OF UNITS

The principal units of interest to mechanical engineers can be derived from three base units which are considered to be dimensionally independent of each other. The British “gravitational system,” in common use in the United States, uses units of length, force, and time as base units and is also called the “foot-pound-second system.” The metric system, on the other hand, is based on the meter, kilogram, and second, units of length, mass, and time, and is often designated as the “MKS system.” During the nineteenth century a metric “gravitational system,” based on a kilogram-force (also called a “kilopond”), came into general use. With the development of the International System of Units (SI), based as it is on the original metric system for mechanical units, and the general requirements by members of the European Community that only SI units be used, it is anticipated that the kilogram-force will fall into disuse to be replaced by the newton, the SI unit of force. Table 1.2.5 gives the base units of four systems with the corresponding derived unit given in parentheses.
In the definitions given below, the “standard kilogram body” refers to the international kilogram prototype, a platinum-iridium cylinder kept in the International Bureau of Weights and Measures in Sèvres, just outside Paris. The “standard pound body” is related to the kilogram by a precise numerical factor: 1 lb = 0.453 592 37 kg. This new “unified” pound has replaced the somewhat smaller Imperial pound of the United Kingdom and the slightly larger pound of the United States (see NBS Spec. Pub. 447). The “standard locality” means sea level, 45° latitude, or more strictly any locality in which the acceleration due to gravity has the value 9.80665 m/s² = 32.1740 ft/s², which may be called the standard acceleration (Table 1.2.6).
The pound force is the force required to support the standard pound body against gravity, in vacuo, in the standard locality; or, it is the force which, if applied to the standard pound body, supposed free to move, would give that body the “standard acceleration.” The word pound is used for the unit of both force and mass and consequently is ambiguous. To avoid uncertainty, it is desirable to call the units “pound force” and “pound mass,” respectively.
The slug has been defined as that mass which will accelerate at 1 ft/s² when acted upon by a one pound force. It is therefore equal to 32.1740 pound-mass.
The kilogram force is the force required to support the standard kilogram against gravity, in vacuo, in the standard locality; or, it is the force which, if applied to the standard kilogram body, supposed free to move, would give that body the “standard acceleration.” The word kilogram is used for the unit of both force and mass and consequently is ambiguous. It is for this reason that the General Conference on Weights and Measures declared (in 1901) that the kilogram was the unit of mass, a concept incorporated into SI when it was formally approved in 1960.
The dyne is the force which, if applied to the standard gram body, would give that body an acceleration of 1 cm/s²; i.e., 1 dyne = 1/980.665 of a gram force.
The newton is that force which will impart to a 1-kilogram mass an acceleration of 1 m/s².
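A short numerical sketch (illustrative, not from the handbook) ties these definitions together through the standard acceleration; all constants are those given in the text above.

    G0_SI = 9.80665            # standard acceleration, m/s^2
    G0_FPS = 32.1740           # standard acceleration, ft/s^2
    LBM_TO_KG = 0.45359237     # 1 pound-mass, kg

    kilogram_force_N = 1.0 * G0_SI        # 1 kgf supports 1 kg: 9.80665 N
    pound_force_N = LBM_TO_KG * G0_SI     # 1 lbf supports 1 lbm: about 4.448 N
    slug_in_lbm = G0_FPS                  # 1 slug = 32.1740 pound-mass
    dyne_in_gram_force = 1.0 / 980.665    # 1 dyne = 1/980.665 gram-force

    print(kilogram_force_N, round(pound_force_N, 6), slug_in_lbm, dyne_in_gram_force)
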

Table 1.2.5  Systems of Units
(Derived units are shown in parentheses.)

  Quantity   Dimension   British “gravitational   Metric “gravitational   CGS        SI
                         system”                  system”                 system     system
  Length     L           1 ft                     1 m                     1 cm       1 m
  Mass       M           (1 slug)                                         1 g        1 kg
  Force      F           1 lb                     1 kg                    (1 dyne)   (1 N)
  Time       T           1 s                      1 s                     1 s        1 s

Table 1.2.6  Acceleration of Gravity g

  Latitude, deg   g, m/s²   g, ft/s²   g/g₀         Latitude, deg   g, m/s²   g, ft/s²   g/g₀
   0              9.780     32.088     0.9973        50             9.811     32.187     1.0004
  10              9.782     32.093     0.9975        60             9.819     32.215     1.0013
  20              9.786     32.108     0.9979        70             9.826     32.238     1.0020
  30              9.793     32.130     0.9986        80             9.831     32.253     1.0024
  40              9.802     32.158     0.9995        90             9.832     32.258     1.0026

NOTE: Correction for altitude above sea level: −3 mm/s² for each 1,000 m; −0.003 ft/s² for each 1,000 ft.
SOURCE: U.S. Coast and Geodetic Survey, 1912.
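The table and its note can be combined in a short routine; the following Python sketch (illustrative only) takes the sea-level value at the nearest tabulated latitude and applies the altitude correction of about 3 mm/s² per 1,000 m.

    G_SEA_LEVEL = {0: 9.780, 10: 9.782, 20: 9.786, 30: 9.793, 40: 9.802,
                   50: 9.811, 60: 9.819, 70: 9.826, 80: 9.831, 90: 9.832}

    def local_g(latitude_deg, altitude_m=0.0):
        # Nearest tabulated latitude, then subtract ~3 mm/s^2 per 1,000 m of altitude.
        nearest = min(G_SEA_LEVEL, key=lambda lat: abs(lat - latitude_deg))
        return G_SEA_LEVEL[nearest] - 3.0e-3 * altitude_m / 1000.0

    print(round(local_g(42, 1500), 4))   # 9.802 (40 deg row) - 0.0045 = 9.7975
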

TEMPERATURE

The SI unit for thermodynamic temperature is the kelvin, K, which is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. Thus 273.16 K is the fixed (base) point on the kelvin scale.
Another unit used for the measurement of temperature is degrees Celsius (formerly centigrade), °C. The relation between a thermodynamic temperature T and a Celsius temperature t is
    t = T − 273.15 K (the ice point of water)
Thus the unit Celsius degree is equal to the unit kelvin, and a difference of temperature would be the same on either scale.
In the USCS, temperature is measured in degrees Fahrenheit, °F. The relation between the Celsius and the Fahrenheit scales is
    t°C = (t°F − 32)/1.8
(For temperature-conversion tables, see Sec. 4.)
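These relations are easily coded; the short Python sketch below (illustrative, not part of the handbook) implements the Celsius, Fahrenheit, and kelvin conversions just given.

    def fahrenheit_to_celsius(t_f):
        return (t_f - 32) / 1.8

    def celsius_to_kelvin(t_c):
        return t_c + 273.15

    def fahrenheit_to_kelvin(t_f):
        return (t_f + 459.67) / 1.8

    print(fahrenheit_to_celsius(212.0))   # 100.0
    print(celsius_to_kelvin(0.0))         # 273.15
    print(fahrenheit_to_kelvin(32.0))     # 273.15
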

TERRESTRIAL GRAVITY

Standard acceleration of gravity is g₀ = 9.80665 m per sec per sec, or 32.1740 ft per sec per sec. This value g₀ is assumed to be the value of g at sea level and latitude 45°.

MOHS SCALE OF HARDNESS

This scale is an arbitrary one which is used to describe the hardness of several mineral substances on a scale of 1 through 10 (Table 1.2.7). The given number indicates a higher relative hardness compared with that of substances below it, and a lower relative hardness than those above it. For example, an unknown substance is scratched by quartz, but it, in turn, scratches feldspar. The unknown has a hardness of between 6 and 7 on the Mohs scale.

Table 1.2.7  Mohs Scale of Hardness
  1. Talc         5. Apatite      8. Topaz
  2. Gypsum       6. Feldspar     9. Sapphire
  3. Calc-spar    7. Quartz      10. Diamond
  4. Fluorspar

TIME Kinds of Time Three kinds of time are recognized by astronomers: sidereal, apparent solar, and mean solar time. The sidereal day is the interval between two consecutive transits of some fixed celestial object across any given meridian, or it is the interval required by the earth to make one complete revolution on its axis. The interval is constant, but it is inconvenient as a time unit because the noon of the sidereal day occurs at all hours of the day and night. The apparent solar day is the interval between two consecutive transits of the sun across any given meridian. On account of the variable distance between the sun and earth, the variable speed of the earth in its orbit, the effect of the moon, etc., this interval is not constant and consequently cannot be kept by any simple mechanisms, such as clocks or watches. To overcome the objection noted above, the mean solar day was devised. The mean solar day is

the length of the average apparent solar day. Like the sidereal day it is constant, and like the apparent solar day its noon always occurs at approximately the same time of day. By international agreement, beginning Jan. 1, 1925, the astronomical day, like the civil day, is from midnight to midnight. The hours of the astronomical day run from 0 to 24, and the hours of the civil day usually run from 0 to 12 A.M. and 0 to 12 P.M. In some countries the hours of the civil day also run from 0 to 24.
The Year  Three different kinds of year are used: the sidereal, the tropical, and the anomalistic. The sidereal year is the time taken by the earth to complete one revolution around the sun from a given star to the same star again. Its length is 365 days, 6 hours, 9 minutes, and 9 seconds. The tropical year is the time included between two successive passages of the vernal equinox by the sun, and since the equinox moves westward 50.2 seconds of arc a year, the tropical year is shorter by 20 minutes 23 seconds in time than the sidereal year. As the seasons depend upon the earth's position with respect to the equinox, the tropical year is the year of civil reckoning. The anomalistic year is the interval between two successive passages of the perihelion, viz., the time of the earth's nearest approach to the sun. The anomalistic year is used only in special calculations in astronomy.
The Second  Although the second is ordinarily defined as 1/86,400 of the mean solar day, this is not sufficiently precise for many scientific purposes. Scientists have adopted more precise definitions for specific purposes: in 1956, one in terms of the length of the tropical year 1900 and, more recently, in 1967, one in terms of a specific atomic frequency. Frequency is the reciprocal of time for 1 cycle; the unit of frequency is the hertz (Hz), defined as 1 cycle/s.
The Calendar  The Gregorian calendar, now used in most of the civilized world, was adopted in Catholic countries of Europe in 1582 and in Great Britain and her colonies Jan. 1, 1752. The average length of the Gregorian calendar year is 365 1⁄4 − 3⁄400 days, or 365.2425 days. This is equivalent to 365 days, 5 hours, 49 minutes, 12 seconds. The length of the tropical year is 365.2422 days, or 365 days, 5 hours, 48 minutes, 46 seconds. Thus the Gregorian calendar year is longer than the tropical year by 0.0003 day, or 26 seconds. This difference amounts to 1 day in slightly more than 3,300 years and can properly be neglected.
Standard Time  Prior to 1883, each city of the United States had its own time, which was determined by the time of passage of the sun across the local meridian. A system of standard time had been used since its first adoption by the railroads in 1883 but was first legalized on Mar. 19, 1918, when Congress directed the Interstate Commerce Commission to establish limits of the standard time zones. Congress took no further steps until the Uniform Time Act of 1966 was enacted, followed with an amendment in 1972. This legislation, referred to as “the Act,” transferred the regulation and enforcement of the law to the Department of Transportation. By the legislation of 1918, with some modifications by the Act, the contiguous United States is divided into four time zones, each of which, theoretically, was to span 15 degrees of longitude. The first, the Eastern zone, extends from the Atlantic westward to include most of Michigan and Indiana, the eastern parts of Kentucky and Tennessee, Georgia, and Florida, except the west half of the panhandle.
Eastern standard time is


based upon the mean solar time of the 75th meridian west of Greenwich, and is 5 hours slower than Greenwich Mean Time (GMT). (See also discussion of UTC below.) The second or Central zone extends westward to include most of North Dakota, about half of South Dakota and Nebraska, most of Kansas, Oklahoma, and all but the two most westerly counties of Texas. Central standard time is based upon the mean solar time of the 90th meridian west of Greenwich, and is 6 hours slower than GMT. The third or Mountain zone extends westward to include Montana, most of Idaho, one county of Oregon, Utah, and Arizona. Mountain standard time is based upon the mean solar time of the 105th meridian west of Greenwich, and is 7 hours slower than GMT. The fourth or Pacific zone includes all of the remaining 48 contiguous states. Pacific standard time is based on the mean solar time of the 120th meridian west of Greenwich, and is 8 hours slower than GMT. Exact locations of boundaries may be obtained from the Department of Transportation. In addition to the above four zones there are four others that apply to the noncontiguous states and islands. The most easterly is the Atlantic zone, which includes Puerto Rico and the Virgin Islands, where the time is 4 hours slower than GMT. Eastern standard time is used in the Panama Canal strip. To the west of the Pacific time zone there are the Yukon, the Alaska-Hawaii, and Bering zones where the times are, respectively, 9, 10, and 11 hours slower than GMT. The system of standard time has been adopted in all civilized countries and is used by ships on the high seas. The Act directs that from the first Sunday in April to the last Sunday in October, the time in each zone is to be advanced one hour for advanced time or daylight saving time (DST). However, any state-bystate enactment may exempt the entire state from using advanced time. By this provision Arizona and Hawaii do not observe advanced time (as of 1973). By the 1972 amendment to the Act, a state split by a timezone boundary may exempt from using advanced time all that part which is in one zone without affecting the rest of the state. By this amendment, 80 counties of Indiana in the Eastern zone are exempt from using advanced time, while 6 counties in the northwest corner and 6 counties in the southwest, which are in Central zone, do observe advanced time. Pursuant to its assignment of carrying out the Act, the Department of Transportation has stipulated that municipalities located on the boundary between the Eastern and Central zones are in the Central zone; those on the boundary between the Central and Mountain zones are in the Mountain zone (except that Murdo, SD, is in the Central zone); those on the boundary between Mountain and Pacific time zones are in the Mountain zone. In such places, when the time is given, it should be specified as Central, Mountain, etc. Standard Time Signals The National Institute of Standards and Technology broadcasts time signals from station WWV, Ft. Collins, CO, and from station WWVH, near Kekaha, Kaui, HI. The broadcasts by WWV are on radio carrier frequencies of 2.5, 5, 10, 15, and 20 MHz, while those by WWVH are on radio carrier frequencies of 2.5, 5, 10, and 15 MHz. Effective Jan. 1, 1975, time announcements by both WWV and WWVH are referred to as Coordinated Universal Time, UTC, the international coordinated time scale used around the world for most timekeeping purposes. 
UTC is generated by reference to International Atomic Time (TAI), which is determined by the Bureau International de l’Heure on the basis of atomic clocks operating in various establishments in accordance with the definition of the second. Since the difference between UTC and TAI is defined to be a whole number of seconds, a “leap second” is periodically added to or subtracted from UTC to take into account variations in the rotation of the earth. Time (i.e., clock time) is given in terms of 0 to 24 hours a day, starting with 0000 at midnight at Greenwich zero longitude. The beginning of each 0.8-second-long audio tone marks the end of an announced time interval. For example, at 2:15 P.M., UTC, the voice announcement would be: “At the tone fourteen hours fifteen minutes Coordinated Universal Time,” given during the last 7.5 seconds of each minute. The tone markers from both stations are given simultaneously, but owing to propagation interferences may not be received simultaneously.

Beginning 1 minute after the hour, a 600-Hz signal is broadcast for about 45 s. At 2 min after the hour, the standard musical pitch of 440 Hz is broadcast for about 45 s. For the remaining 57 min of the hour, alternating tones of 600 and 500 Hz are broadcast for the first 45 s of each minute (see NIST Spec. Pub. 432). The time signal can also be received via long-distance telephone service from Ft. Collins. In addition to providing the musical pitch, these tone signals may be of use as markers for automated recorders and other such devices.

DENSITY AND RELATIVE DENSITY

Density of a body is its mass per unit volume. With SI units densities are in kilograms per cubic meter. However, giving densities in grams per cubic centimeter has been common. With the USCS, densities are given in pounds mass per cubic foot.

Table 1.2.8  Relative Densities at 60°/60°F Corresponding to Degrees API and Weights per U.S. Gallon at 60°F
(Calculated from the formula: relative density = 141.5/(131.5 + deg API))

Relative density

Lb per U.S. gallon

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55

1.0000 0.9930 0.9861 0.9792 0.9725 0.9659 0.9593 0.9529 0.9465 0.9402 0.9340 0.9279 0.9218 0.9159 0.9100 0.9042 0.8984 0.8927 0.8871 0.8816 0.8762 0.8708 0.8654 0.8602 0.8550 0.8498 0.8448 0.8398 0.8348 0.8299 0.8251 0.8203 0.8155 0.8109 0.8063 0.8017 0.7972 0.7927 0.7883 0.7839 0.7796 0.7753 0.7711 0.7669 0.7628 0.7587

8.328 8.270 8.212 8.155 8.099 8.044 7.989 7.935 7.882 7.830 7.778 7.727 7.676 7.627 7.578 7.529 7.481 7.434 7.387 7.341 7.296 7.251 7.206 7.163 7.119 7.076 7.034 6.993 6.951 6.910 6.870 6.830 6.790 6.752 6.713 6.675 6.637 6.600 6.563 6.526 6.490 6.455 6.420 6.385 6.350 6.316

Degrees API

Relative density

Lb per U.S. gallon

56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100

0.7547 0.7507 0.7467 0.7428 0.7389 0.7351 0.7313 0.7275 0.7238 0.7201 0.7165 0.7128 0.7093 0.7057 0.7022 0.6988 0.6953 0.6919 0.6886 0.6852 0.6819 0.6787 0.6754 0.6722 0.6690 0.6659 0.6628 0.6597 0.6566 0.6536 0.6506 0.6476 0.6446 0.6417 0.6388 0.6360 0.6331 0.6303 0.6275 0.6247 0.6220 0.6193 0.6166 0.6139 0.6112

6.283 6.249 6.216 6.184 6.151 6.119 6.087 6.056 6.025 5.994 5.964 5.934 5.904 5.874 5.845 5.817 5.788 5.759 5.731 5.703 5.676 5.649 5.622 5.595 5.568 5.542 5.516 5.491 5.465 5.440 5.415 5.390 5.365 5.341 5.316 5.293 5.269 5.246 5.222 5.199 5.176 5.154 5.131 5.109 5.086

NOTE: The weights in this table are weights in air at 608F with humidity 50 percent and pressure 760 mm.

Table 1.2.9  Relative Densities at 60°/60°F Corresponding to Degrees Baumé for Liquids Lighter than Water and Weights per U.S. Gallon at 60°F
(Calculated from the formula: relative density 60°/60°F = 140/(130 + deg Baumé))

Relative density

Lb per gallon

Degrees Baumé

Relative density

Lb per gallon

10.0 11.0 12.0 13.0

1.0000 0.9929 0.9859 0.9790

8.328 8.269 8.211 8.153

56.0 57.0 58.0 59.0

0.7527 0.7487 0.7447 0.7407

6.266 6.233 6.199 6.166

14.0 15.0 16.0 17.0

0.9722 0.9655 0.9589 0.9524

8.096 8.041 7.986 7.931

60.0 61.0 62.0 63.0

0.7368 0.7330 0.7292 0.7254

6.134 6.102 6.070 6.038

18.0 19.0 20.0 21.0

0.9459 0.9396 0.9333 0.9272

7.877 7.825 7.772 7.721

64.0 65.0 66.0 67.0

0.7216 0.7179 0.7143 0.7107

6.007 5.976 5.946 5.916

22.0 23.0 24.0 25.0

0.9211 0.9150 0.9091 0.9032

7.670 7.620 7.570 7.522

68.0 69.0 70.0 71.0

0.7071 0.7035 0.7000 0.6965

5.886 5.856 5.827 5.798

26.0 27.0 28.0 29.0

0.8974 0.8917 0.8861 0.8805

7.473 7.425 7.378 7.332

72.0 73.0 74.0 75.0

0.6931 0.6897 0.6863 0.6829

5.769 5.741 5.712 5.685

30.0 31.0 32.0 33.0

0.8750 0.8696 0.8642 0.8589

7.286 7.241 7.196 7.152

76.0 77.0 78.0 79.0

0.6796 0.6763 0.6731 0.6699

5.657 5.629 5.602 5.576

34.0 35.0 36.0 37.0

0.8537 0.8485 0.8434 0.8383

7.108 7.065 7.022 6.980

80.0 81.0 82.0 83.0

0.6667 0.6635 0.6604 0.6573

5.549 5.522 5.497 5.471

38.0 39.0 40.0 41.0

0.8333 0.8284 0.8235 0.8187

6.939 6.898 6.857 6.817

84.0 85.0 86.0 87.0

0.6542 0.6512 0.6482 0.6452

5.445 5.420 5.395 5.370

42.0 43.0 44.0 45.0

0.8140 0.8092 0.8046 0.8000

6.777 6.738 6.699 6.661

88.0 89.0 90.0 91.0

0.6422 0.6393 0.6364 0.6335

5.345 5.320 5.296 5.272

46.0 47.0 48.0 49.0

0.7955 0.7910 0.7865 0.7821

6.623 6.586 6.548 6.511

92.0 93.0 94.0 95.0

0.6306 0.6278 0.6250 0.6222

5.248 5.225 5.201 5.178

50.0 51.0 52.0 53.0 54.0 55.0

0.7778 0.7735 0.7692 0.7650 0.7609 0.7568

6.476 6.440 6.404 6.369 6.334 6.300

96.0 97.0 98.0 99.0 100.0

0.6195 0.6167 0.6140 0.6114 0.6087

5.155 5.132 5.100 5.088 5.066


equal volumes, the ratio of the molecular weight of the gas to that of air may be used as the relative density of the gas. When this is done, the molecular weight of air may be taken as 28.9644.
The relative density of liquids is usually measured by means of a hydrometer. In addition to a scale reading in relative density as defined above, other arbitrary scales for hydrometers are used in various trades and industries. The most common of these are the API and Baumé. The API (American Petroleum Institute) scale is approved by the American Petroleum Institute, the ASTM, the U.S. Bureau of Mines, and the National Bureau of Standards and is recommended for exclusive use in the U.S. petroleum industry, superseding the Baumé scale for liquids lighter than water. The relation between API degrees and relative density (see Table 1.2.8) is expressed by the following equation:

    Degrees API = 141.5/(rel dens 60°/60°F) − 131.5
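A short Python sketch (illustrative only) applies this equation together with the corresponding Baumé relations of Tables 1.2.9 and 1.2.10; the 8.328 lb/gal figure is the tabulated weight of a liquid of relative density 1.0000 at 60°F.

    WATER_LB_PER_US_GAL_60F = 8.328   # first row of Tables 1.2.8 and 1.2.9

    def relative_density_from_api(deg_api):
        # Table 1.2.8: rel dens 60/60 deg F = 141.5 / (131.5 + deg API)
        return 141.5 / (131.5 + deg_api)

    def relative_density_from_baume_light(deg_baume):
        # Table 1.2.9 (liquids lighter than water): 140 / (130 + deg Baume)
        return 140.0 / (130.0 + deg_baume)

    def relative_density_from_baume_heavy(deg_baume):
        # Table 1.2.10 (liquids heavier than water): 145 / (145 - deg Baume)
        return 145.0 / (145.0 - deg_baume)

    def lb_per_us_gallon(rel_density):
        # Approximate weight per U.S. gallon at 60 deg F, as tabulated.
        return rel_density * WATER_LB_PER_US_GAL_60F

    rd = relative_density_from_api(30)
    print(round(rd, 4), round(lb_per_us_gallon(rd), 3))   # 0.8762 7.297 (cf. Table 1.2.8)
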

The relative densities corresponding to the indications of the Baumé hydrometer are given in Tables 1.2.9 and 1.2.10.

Table 1.2.10  Relative Densities at 60°/60°F Corresponding to Degrees Baumé for Liquids Heavier than Water
(Calculated from the formula: relative density 60°/60°F = 145/(145 − deg Baumé))

Relative density

Degrees Baumé

Relative density

Degrees Baumé

Relative density

0 1 2 3

1.0000 1.0069 1.0140 1.0211

24 25 26 27

1.1983 1.2083 1.2185 1.2288

48 49 50 51

1.4948 1.5104 1.5263 1.5426

4 5 6 7

1.0284 1.0357 1.0432 1.0507

28 29 30 31

1.2393 1.2500 1.2609 1.2719

52 53 54 55

1.5591 1.5761 1.5934 1.6111

8 9 10 11

1.0584 1.0662 1.0741 1.0821

32 33 34 35

1.2832 1.2946 1.3063 1.3182

56 57 58 59

1.6292 1.6477 1.6667 1.6860

12 13 14 15

1.0902 1.0985 1.1069 1.1154

36 37 38 39

1.3303 1.3426 1.3551 1.3679

60 61 62 63

1.7059 1.7262 1.7470 1.7683

16 17 18 19

1.1240 1.1328 1.1417 1.1508

40 41 42 43

1.3810 1.3942 1.4078 1.4216

64 65 66 67

1.7901 1.8125 1.8354 1.8590

20 21 22 23

1.1600 1.1694 1.1789 1.1885

44 45 46 47

1.4356 1.4500 1.4646 1.4796

68 69 70

1.8831 1.9079 1.9333

CONVERSION AND EQUIVALENCY TABLES Note for Use of Conversion Tables (Tables 1.2.11 through 1.2.34)

Relative density is the ratio of the density of one substance to that of a

second (or reference) substance, both at some specified temperature. Use of the earlier term specific gravity for this quantity is discouraged. For solids and liquids water is almost universally used as the reference substance. Physicists use a reference temperature of 4°C (= 39.2°F); U.S. engineers commonly use 60°F. With the introduction of SI units, it may be found desirable to use 59°F, since 59°F and 15°C are equivalents. For gases, relative density is generally the ratio of the density of the gas to that of air, both at the same temperature, pressure, and dryness (as regards water vapor). Because equal numbers of moles of gases occupy

Subscripts after any figure, 0s, 9s, etc., mean that that figure is to be repeated the indicated number of times; for example, 0.0₃4971 stands for 0.0004971.
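The condensed notation can be expanded mechanically; a tiny Python helper (illustrative only):

    def expand_condensed_zeros(digits, zero_count):
        # e.g. expand_condensed_zeros("4971", 3) -> 0.0004971
        return float("0." + "0" * zero_count + digits)

    print(expand_condensed_zeros("4971", 3))   # 0.0004971
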


Table 1.2.11  Length Equivalents

            Centimetres   Inches    Feet      Yards     Metres    Chains      Kilometres   Miles
1 cm     =  1             0.3937    0.03281   0.01094   0.01      0.0₃4971    10⁻⁵         0.0₅6214
1 in     =  2.540         1         0.08333   0.02778   0.0254    0.001263    0.0₄254      0.0₄1578
1 ft     =  30.48         12        1         0.3333    0.3048    0.01515     0.0₃3048     0.0₃1894
1 yd     =  91.44         36        3         1         0.9144    0.04545     0.0₃9144     0.0₃5682
1 m      =  100           39.37     3.281     1.0936    1         0.04971     0.001        0.0₃6214
1 chain  =  2012          792       66        22        20.12     1           0.02012      0.0125
1 km     =  100000        39370     3281      1093.6    1000      49.71       1            0.6214
1 mile   =  160934        63360     5280      1760      1609      80          1.609        1

(As used by metrology laboratories for precise measurements, including measurements of surface texture)*

Angstrom units Å

Surface texture (U.S.), microinch min

Light bands,† monochromatic helium light count ‡

Surface texture foreign, mm

Precision measurements, § 0.0001 in

Close-tolerance measurements, 0.001 in (mils)

Metric unit, mm

USCS unit, in

1 254 2937.5 10,000 25,400 254,000 10,000,000 254,000,000

0.003937 1 11.566 39.37 100 1000 39,370 1,000,000

0.0003404 0.086 1 3.404 8.646 86.46 3404 86,460

0.0001 0.0254 0.29375 1 2.54 25.4 1000 25,400

0.043937 0.01 0.11566 0.3937 1 10 393.7 10,000

0.053937 0.001 0.011566 0.03937 0.1 1 39.37 1000

0.061 0.04254 0.0329375 0.001 0.00254 0.0254 1 25.4

0.083937 0.051 0.0411566 0.043937 0.0001 0.001 0.03937 1

* Computed by J. A. Broadston. † One light band equals one-half the corresponding wavelength. Visible-light wavelengths range from red at 6,500 Å to violet at 4,100 Å. ‡ One helium light band = 0.000011661 in = 2,937.5 Å; one krypton-86 light band = 0.0000119 in = 3,022.5 Å; one mercury-198 light band = 0.00001075 in = 2,730 Å. § The designations “precision measurements,” etc., are not necessarily used in all metrology laboratories.

Table 1.2.12  Conversion of Lengths*

Inches to millimetres

Millimetres to inches

Feet to metres

Metres to feet

Yards to metres

Metres to yards

Miles to kilometres

Kilometres to miles

1 2 3 4

25.40 50.80 76.20 101.60

0.03937 0.07874 0.1181 0.1575

0.3048 0.6096 0.9144 1.219

3.281 6.562 9.843 13.12

0.9144 1.829 2.743 3.658

1.094 2.187 3.281 4.374

1.609 3.219 4.828 6.437

0.6214 1.243 1.864 2.485

5 6 7 8 9

127.00 152.40 177.80 203.20 228.60

0.1969 0.2362 0.2756 0.3150 0.3543

1.524 1.829 2.134 2.438 2.743

16.40 19.69 22.97 26.25 29.53

4.572 5.486 6.401 7.315 8.230

5.468 6.562 7.655 8.749 9.843

6.047 9.656 11.27 12.87 14.48

3.107 3.728 4.350 4.971 5.592

* EXAMPLE: 1 in = 25.40 mm.

Common fractions of an inch to millimetres (from 1⁄64 to 1 in) 64ths

Millimetres

64ths

Millimetres

64th

Millimetres

64ths

Millimetres

64ths

Millimetres

64ths

Millimetres

1 2 3 4

0.397 0.794 1.191 1.588

13 14 15 16

5.159 5.556 5.953 6.350

25 26 27 28

9.922 10.319 10.716 11.112

37 38 39 40

14.684 15.081 15.478 15.875

49 50 51 52

19.447 19.844 20.241 20.638

57 58 59 60

22.622 23.019 23.416 23.812

5 6 7 8

1.984 2.381 2.778 3.175

17 18 19 20

6.747 7.144 7.541 7.938

29 30 31 32

11.509 11.906 12.303 12.700

41 42 43 44

16.272 16.669 17.066 17.462

53 54 55 56

21.034. 21.431 21.828 22.225

61 62 63 64

24.209 24.606 25.003 25.400

9 10 11 12

3.572 3.969 4.366 4.762

21 22 23 24

8.334 8.731 9.128 9.525

33 34 35 36

13.097 13.494 13.891 14.288

45 46 47 48

17.859 18.256 18.653 19.050

0

1

2

3

4

5

6

7

8

9

.0 .1 .2 .3 .4

2.540 5.080 7.620 10.160

0.254 2.794 5.334 7.874 10.414

0.508 3.048 5.588 8.128 10.668

0.762 3.302 5.842 8.382 10.922

1.016 3.556 6.096 8.636 11.176

1.270 3.810 6.350 8.890 11.430

1.524 4.064 6.604 9.144 11.684

1.778 4.318 6.858 9.398 11.938

2.032 4.572 7.112 9.652 12.192

2.286 4.826 7.366 9.906 12.446

.5 .6 .7 .8 .9

12.700 15.240 17.780 20.320 22.860

12.954 15.494 18.034 20.574 23.114

13.208 15.748 18.288 20.828 23.368

13.462 16.002 18.542 21.082 23.622

13.716 16.256 18.796 21.336 23.876

13.970 16.510 19.050 21.590 24.130

14.224 16.764 19.304 21.844 24.384

14.478 17.018 19.558 22.098 24.638

14.732 17.272 19.812 22.352 24.892

14.986 17.526 20.066 22.606 25.146

0.

1.

2.

3.

4.

5.

6.

7.

8.

9.

0 1 2 3 4

0.3937 0.7874 1.1811 1.5748

0.0394 0.4331 0.8268 1.2205 1.6142

0.0787 0.4724 0.8661 1.2598 1.6535

0.1181 0.5118 0.9055 1.2992 1.6929

0.1575 0.5512 0.9449 1.3386 1.7323

0.1969 0.5906 0.9843 1.3780 1.7717

0.2362 0.6299 1.0236 1.4173 1.8110

0.2756 0.6693 1.0630 1.4567 1.8504

0.3150 0.7087 1.1024 1.4961 1.8898

0.3543 0.7480 1.1417 1.5354 1.9291

5 6 7 8 9

1.9685 2.3622 2.7559 3.1496 3.5433

2.0079 2.4016 2.7953 3.1890 3.5827

2.0472 2.4409 2.8346 3.2283 3.6220

2.0866 2.4803 2.8740 3.2677 3.6614

2.1260 2.5197 2.9134 3.3071 3.7008

2.1654 2.5591 2.9528 3.3465 3.7402

2.2047 2.5984 2.9921 3.3858 3.7795

2.2441 2.6378 3.0315 3.4252 3.8189

2.2835 2.6772 3.0709 3.4646 3.8583

2.3228 2.7165 3.1102 3.5039 3.8976

Decimals of an inch to millimetres (0.01 to 0.99 in)

Millimetres to decimals of an inch (from 1 to 99 mm)


Table 1.2.13  Area Equivalents
(1 hectare = 100 ares = 10,000 centiares or square metres)

Square metres

Square inches

Square feet

Square yards

Square rods

Square chains

Roods

Acres

Square miles or sections

1 0.036452 0.09290 0.8361 25.29 404.7 1012 4047 2589988

1550 1 144 1296 39204 627264 1568160 6272640

10.76 0.006944 1 9 272.25 4356 10890 43560 27878400

1.196 0.037716 0.1111 1 30.25 484 1210 4840 3097600

0.0395 0.042551 0.003673 0.03306 1 16 40 160 102400

0.002471 0.051594 0.032296 0.002066 0.0625 1 2.5 10 6400

0.039884 0.066377 0.049183 0.038264 0.02500 0.4 1 4 2560

0.032471 0.061594 0.042296 0.0002066 0.00625 0.1 0.25 1 640

0.063861 0.092491 0.073587 0.063228 0.059766 0.0001562 0.033906 0.001562 1

Table 1.2.14

Conversion of Areas*

Sq in to sq cm

Sq cm to sq in

Sq ft to sq m

Sq m to sq ft

Sq yd to sq m

Sq m to sq yd

Acres to hectares

Hectares to acres

Sq mi to sq km

Sq km to sq mi

1 2 3 4

6.452 12.90 19.35 25.81

0.1550 0.3100 0.4650 0.6200

0.0929 0.1858 0.2787 0.3716

10.76 21.53 32.29: 43.06

0.8361 1.672 2.508 3.345

1.196 2.392 3.588 4.784

0.4047 0.8094 1.214 1.619

2.471 4.942 7.413 9.884

2.590 5.180 7.770 10.360

0.3861 0.7722 1.158 1.544

5 6 7 8 9

32.26 38.71 45.16 51.61 58.06

0.7750 0.9300 1.085 1.240 1.395

0.4645 0.5574 0.6503 0.7432 0.8361

53.82 64.58 75.35 86.11 96.88

4.181 5.017 5.853 6.689 7.525

5.980 7.176 8.372 9.568 10.764

2.023 2.428 2.833 3.237 3.642

12.355 14.826 17.297 19.768 22.239

12.950 15.540 18.130 20.720 23.310

1.931 2.317 2.703 3.089 3.475

* EXAMPLE: 1 in² = 6.452 cm².

Table 1.2.15

Volume and Capacity Equivalents

Cubic inches

Cubic feet

Cubic yards

U.S. Apothecary fluid ounces

Liquid

Dry

U.S. gallons

U.S. bushels

Cubic decimetres or litres

1 1728 46656 1.805 57.75 67.20 231 2150 61.02

0.035787 1 27 0.001044 0.03342 0.03889 0.1337 1.244 0.03531

0.042143 0.03704 1 0.043868 0.001238 0.001440 0.004951 0.04609 0.001308

0.5541 957.5 25853 1 32 37.24 128 1192 33.81

0.01732 29.92 807.9 0.03125 1 1.164 4 37.24 1.057

0.01488 25.71 694.3 0.02686 0.8594 1 3.437 32 0.9081

0.024329 7.481 202.2 0.007812 0.25 0.2909 1 9.309 0.2642

0.034650 0.8036 21.70 0.038392 0.02686 0.03125 0.1074 1 0.02838

0.01639 28.32 764.6 0.02957 0.9464 1.101 3.785 35.24 1

Table 1.2.16

U.S. quarts

Conversion of Volumes or Cubic Measure*

Cu in to mL

mL to cu in

Cu ft to cu m

Cu m to cu ft

Cu yd to cu m

Cu m to cu yd

Gallons to cu ft

Cu ft to gallons

1 2 3 4

16.39 32.77 49.16 65.55

0.06102 0.1220 0.1831 0.2441

0.02832 0.05663 0.08495 0.1133

35.31 70.63 105.9 141.3

0.7646 1.529 2.294 3.058

1.308 2.616 3.924 5.232

0.1337 0.2674 0.4010 0.5347

7.481 14.96 22.44 29.92

5 6 7 8 9

81.94 98.32 114.7 131.1 147.5

0.3051 0.3661 0.4272 0.4882 0.5492

0.1416 0.1699 0.1982 0.2265 0.2549

176.6 211.9 247.2 282.5 317.8

3.823 4.587 5.352 6.116 6.881

6.540 7.848 9.156 10.46 11.77

0.6684 0.8021 0.9358 1.069 1.203

37.40 44.88 52.36 59.84 67.32

* EXAMPLE: 1 in³ = 16.39 mL.

Table 1.2.17

Conversion of Volumes or Capacities*

Fluid ounces to mL

mL to fluid ounces

Liquid pints to litres

Litres to liquid pints

Liquid quarts to litres

Litres to liquid quarts

Gallons to litres

Litres to gallons

Bushels to hectolitres

Hectolitres to bushels

1 2 3 4

29.57 59.15 88.72 118.3

0.03381 0.06763 0.1014 0.1353

0.4732 0.9463 1.420 1.893

2.113 4.227 6.340 8.454

0.9463 1.893 2.839 3.785

1.057 2.113 3.170 4.227

3.785 7.571 11.36 15.14

0.2642 0.5284 0.7925 1.057

0.3524 0.7048 1.057 1.410

2.838 5.676 8.513 11.35

5 6 7 8 9

147.9 177.4 207.0 236.6 266.2

0.1691 0.2092 0.2367 0.2705 0.3043

2.366 2.839 3.312 3.785 4.259

4.732 5.678 6.624 7.571 8.517

5.284 6.340 7.397 8.454 9.510

18.93 22.71 26.50 30.28 34.07

1.321 1.585 1.849 2.113 2.378

1.762 2.114 2.467 2.819 3.171

14.19 17.03 19.86 22.70 25.54

10.57 12.68 14.79 16.91 19.02

* EXAMPLE: 1 fluid oz = 29.57 mL.

Table 1.2.18

Mass Equivalents Ounces

Pounds

Tons

Kilograms

Grains

Troy and apoth

Avoirdupois

Troy and apoth

Avoirdupois

Short

Long

Metric

1 0.046480 0.03110 0.02835 0.3732 0.4536 907.2 1016 1000

15432 1 480 437.5 5760 7000 1406 156804 15432356

32.15 0.022083 1 0.9115 12 14.58 29167 32667 32151

35.27 0.022286 1.09714 1 13.17 16 3203 35840 35274

2.6792 0.031736 0.08333 0.07595 1 1.215 2431 2722 2679

2.205 0.031429 0.06857 0.0625 0.8229 1 2000 2240 2205

0.021102 0.077143 0.043429 0.043125 0.034114 0.0005 1 1.12 1.102

0.039842 0.076378 0.043061 0.042790 0.033673 0.034464 0.8929 1 0.9842

0.001 0.076480 0.043110 0.042835 0.033732 0.034536 0.9072 1.016 1

Table 1.2.19

Conversion of Masses*

Grams to ounces (avdp)

Pounds (avdp) to kilograms

Kilograms to pounds (avdp)

Short tons (2000 lb) to metric tons

2.205 4.409 6.614 8.818

0.907 1.814 2.722 3.629

1.102 2.205 3.307 4.409

1.016 2.032 3.048 4.064

0.984 1.968 2.953 3.937

4.536 5.443 6.350 7.257 8.165

5.512 6.614 7.716 8.818 9.921

5.080 6.096 7.112 8.128 9.144

4.921 5.905 6.889 7.874 8.858

Grains to grams

Grams to grains

Ounces (avdp) to grams

1 2 3 4

0.06480 0.1296 0.1944 0.2592

15.43 30.86 46.30 61.73

28.35 56.70 85.05 113.40

0.03527 0.07055 0.1058 0.1411

0.4536 0.9072 1.361 1.814

5 6 7 8 9

0.3240 0.3888 0.4536 0.5184 0.5832

77.16 92.59 108.03 123.46 138.89

141.75 170.10 198.45 226.80 255.15

0.1764 0.2116 0.2469 0.2822 0.3175

2.268 2.722 3.175 3.629 4.082

11.02 13.23 15.43 17.64 19.84

Metric tons (1000 kg) to short tons

Long tons (2240 lb) to metric tons

Metric tons to long tons

* EXAMPLE: 1 grain = 0.06480 grams.

Table 1.2.20  Pressure Equivalents
(Columns of mercury at 0°C and columns of water at 15°C, both with g = 9.80665 m/s²)

                     Pascals    Bars        Poundsf    Atmos-     cm         in         cm        in
                     (N/m²)     (10⁵ N/m²)  per in²    pheres     mercury    mercury    water     water
1 pascal          =  1          0.00001     0.000145   0.00001    0.00075    0.000295   0.01021   0.00402
1 bar             =  100000     1           14.504     0.9869     75.01      29.53      1020.7    401.8
1 poundf per in²  =  6894.8     0.068948    1          0.06805    5.171      2.036      70.37     27.703
1 atmosphere      =  101326     1.0132      14.696     1          76.000     29.92      1034      407.1
1 cm mercury      =  1333       0.0133      0.1934     0.01316    1          0.3937     13.61     5.357
1 in mercury      =  3386       0.03386     0.4912     0.03342    2.540      1          34.56     13.61
1 cm water        =  97.98      0.0009798   0.01421    0.000967   0.07349    0.02893    1         0.3937
1 in water        =  248.9      0.002489    0.03609    0.002456   0.1867     0.07349    2.540     1

Table 1.2.21

Conversion of Pressures*

Lb/in2 to bars

Bars to lb/in2

Lb/in2 to atmospheres

Atmospheres to lb/in2

Bars to atmospheres

Atmospheres to bars

1 2 3 4

0.06895 0.13790 0.20684 0.27579

14.504 29.008 43.511 58.015

0.06805 0.13609 0.20414 0.27218

14.696 29.392 44.098 58.784

0.98692 1.9738 2.9607 3.9477

1.01325 2.0265 3.0397 4.0530

5 6 7 8 9

0.34474 0.41368 0.48263 0.55158 0.62053

72.519 87.023 101.53 116.03 130.53

0.34823 0.40826 0.47632 0.54436 0.61241

73.480 88.176 102.87 117.57 132.26

4.9346 5.9215 6.9085 7.8954 8.8823

5.0663 6.0795 7.0927 8.1060 9.1192

* EXAMPLE: 1 lb/in² = 0.06895 bar.

Table 1.2.22

Velocity Equivalents

cm/s

m/s

m/min

km/h

ft/s

ft/min

mi/h

Knots

1 100 1.667 27.78 30.48 0.5080 44.70 51.44

0.01 1 0.01667 0.2778 0.3048 0.005080 0.4470 0.5144

0.6 60 1 16.67 18.29 0.3048 26.82 30.87

0.036 3.6 0.06 1 1.097 0.01829 1.609 1.852

0.03281 3.281 0.05468 0.9113 1 0.01667 1.467 1.688

1.9685 196.85 3.281 54.68 60 1 88 101.3

0.02237 2.237 0.03728 0.6214 0.6818 0.01136 1 1.151

0.01944 1.944 0.03240 0.53996 0.59248 0.00987 0.86898 1

Table 1.2.23

Conversion of Linear and Angular Velocities*

cm/s to ft/min

ft/min to cm/s

cm/s to mi/h

mi/h to cm/s

ft/s to mi/h

mi/h to ft/s

rad/s to r/min

r/min to rad/s

1 2 3 4

1.97 3.94 5.91 7.87

0.508 1.016 1.524 2.032

0.0224 0.0447 0.0671 0.0895

44.70 89.41 134.1 178.8

0.682 1.364 2.045 2.727

1.47 2.93 4.40 5.87

9.55 19.10 28.65 38.20

0.1047 0.2094 0.3142 0.4189

5 6 7 8 9

9.84 11.81 13.78 15.75 17.72

2.540 3.048 3.556 4.064 4.572

0.1118 0.1342 0.1566 0.1790 0.2013

223.5 268.2 312.9 357.6 402.3

3.409 4.091 4.773 5.455 6.136

7.33 8.80 10.27 11.73 13.20

47.75 57.30 66.84 76.39 85.94

0.5236 0.6283 0.7330 0.8378 0.9425

* EXAMPLE: 1 cm/s = 1.97 ft/min.

Table 1.2.24

Acceleration Equivalents

cm/s²

m/s2

m/(h  s)

km/(h  s)

ft/(h  s)

ft/s2

ft/min2

mi/(h  s)

knots/s

1 100 0.02778 27.78 0.008467 30.48 0.008467 44.70 51.44

0.01 1 0.0002778 0.2778 0.00008467 0.3048 0.00008467 0.4470 0.5144

36.00 3600 1 1000 0.3048 1097 0.3048 1609 1852

0.036 3.6 0.001 1 0.0003048 1.097 0.0003048 1.609 1.852

118.1 11811 3.281 3281 1 3600 1 5280 6076

0.03281 3.281 0.0009113 0.9113 0.0002778 1 0.0002778 1.467 1.688

118.1 11811 3.281 3281 1 3600 1 5280 6076

0.02237 2.237 0.0006214 0.6214 0.0001894 0.6818 0.0001894 1 1.151

0.01944 1.944 0.0005400 0.5400 0.0001646 0.4572 0.0001646 0.8690 1


Table 1.2.25

Conversion of Accelerations*

cm/s2 to ft/min2

km/(h  s) to mi/(h  s)

km/(h  s) to knots/s

ft/s2 to mi/(h  s)

ft/s2 to knots/s

ft/min2 to cm/s2

mi/(h  s) to km/(h  s)

mi/(h  s) to knots/s

knots/s to mi/(h  s)

knots/s to km/(h  s)

1 2 3 4 5

118.1 236.2 354.3 472.4 590.6

0.6214 1.243 1.864 2.485 3.107

0.5400 1.080 1.620 2.160 2.700

0.6818 1.364 2.045 2.727 3.409

0.4572 0.9144 1.372 1.829 2.286

0.008467 0.01693 0.02540 0.03387 0.04233

1.609 3.219 4.828 6.437 8.046

0.8690 1.738 2.607 3.476 4.345

1.151 2.302 3.452 4.603 5.754

1.852 3.704 5.556 7.408 9.260

6 7 8 9

708.7 826.8 944.9 1063

3.728 4.350 4.971 5.592

3.240 3.780 4.320 4.860

4.091 4.772 5.454 6.136

2.743 3.200 3.658 4.115

0.05080 0.05927 0.06774 0.07620

9.656 11.27 12.87 14.48

5.214 6.083 6.952 7.821

6.905 8.056 9.206 10.36

11.11 12.96 14.82 16.67

* EXAMPLE: 1 cm/s² = 118.1 ft/min².

Table 1.2.26

Energy or Work Equivalents

Joules or Newton-metres

Kilogramfmetres

1 9.80665 1.356 3.600  106 2.648  106 2.6845  106 101.33 4186.8 1055

0.10197 1 0.1383 3.671  105 270000 2.7375  105 10.333 426.9 107.6

Table 1.2.27

Foot-poundsf

Kilowatt hours

Metric horsepowerhours

Horsepowerhours

Litreatmospheres

Kilocalories

British thermal units

0.7376 7.233 1 2.655  106 1.9529  106 1.98  106 74.74 3088 778.2

0.062778 0.052724 0.063766 1 0.7355 0.7457 0.042815 0.001163 0.032931

0.063777 0.0537037 0.0651206 1.3596 1 1.0139 0.043827 0.001581 0.033985

0.063725 0.053653 0.0650505 1.341 0.9863 1 0.043775 0.001560 0.033930

0.009869 0.09678 0.01338 35528 26131 26493 1 41.32 10.41

0.032388 0.002342 0.033238 859.9 632.4 641.2 0.02420 1 0.25200

0.039478 0.009295 0.001285 3412 2510 2544 0.09604 3.968 1

Conversion of Energy, Work, Heat*

Ft  lbf to joules

Joules to ft  lbf

Ft  lbf to Btu

Btu to ft  lbf

Kilogramfmetres to kilocalories

Kilocalories to kilogramfmetres

Joules to calories

Calories to joules

1 2 3 4

1.3558 2.7116 4.0674 5.4232

0.7376 1.4751 2.2127 2.9503

0.001285 0.002570 0.003855 0.005140

778.2 1,556 2,334 3,113

0.002342 0.004685 0.007027 0.009369

426.9 853.9 1,281 1,708

0.2388 0.4777 0.7165 0.9554

4.187 8.374 12.56 16.75

5 6 7 8 9

6.7790 8.1348 9.4906 10.8464 12.2022

3.6879 4.4254 5.1630 5.9006 6.6381

0.006425 0.007710 0.008995 0.01028 0.01156

3,891 4,669 5,447 6,225 7,003

0.01172 0.01405 0.01640 0.01874 0.02108

2,135 2,562 2,989 3,415 3,842

1.194 1.433 1.672 1.911 2.150

20.93 25.12 29.31 33.49 37.68

* EXAMPLE: 1 ft · lbf = 1.3558 J.

Table 1.2.28

Power Equivalents

Horsepower

Kilowatts

Metric horsepower

Kgf  m per s

Ft  lbf per s

Kilocalories per s

Btu per s

1 1.341 0.9863 0.01315 0.00182 5.615 1.415

0.7457 1 0.7355 0.009807 0.001356 4.187 1.055

1.014 1.360 1 0.01333 0.00184 5.692 1.434

76.04 102.0 75 1 0.1383 426.9 107.6

550 737.6 542.5 7.233 1 3088 778.2

0.1781 0.2388 0.1757 0.002342 0.033238 1 0.2520

0.7068 0.9478 0.6971 0.009295 0.001285 3.968 1

Table 1.2.29

Conversion of Power*

Horsepower to kilowatts

Kilowatts to horsepower

Metric horsepower to kilowatts

Kilowatts to metric horsepower

Horsepower to metric horsepower

Metric horsepower to horsepower

1 2 3 4

0.7457 1.491 2.237 2.983

1.341 2.682 4.023 5.364

0.7355 1.471 2.206 2.942

1.360 2.719 4.079 5.438

1.014 2.028 3.042 4.055

0.9863 1.973 2.959 3.945

5 6 7 8 9

3.729 4.474 5.220 5.966 6.711

6.705 8.046 9.387 10.73 12.07

3.677 4.412 5.147 5.883 6.618

6.798 8.158 9.520 10.88 12.24

5.069 6.083 7.097 8.111 9.125

4.932 5.918 6.904 7.891 8.877

* EXAMPLE: 1 hp = 0.7457 kW.

Table 1.2.30

Table 1.2.31

Density Equivalents*

Grams per mL

Lb per cu in

Lb per cu ft

Short tons (2,000 lb) per cu yd

1 27.68 0.01602 1.187 0.1198

0.03613 1 0.035787 0.04287 0.004329

62.43 1728 1 74.7 7.481

0.8428 23.33 0.0135 1 0.1010

Lb per U.S. gal 8.345 231 0.1337 9.902 1

* EXAMPLE: 1 g per mL = 62.43 lb per cu ft.

Table 1.2.32

Conversion of Density Grams per mL to lb per cu ft

Lb per cu ft to grams per mL

Grams per mL to short tons per cu yd

Short tons per cu yd to grams per mL

1 2

62.43 124.86

0.01602 0.03204

0.8428 1.6856

1.187 2.373

3 4

187.28 249.71

0.04805 0.06407

2.5283 3.3711

3.560 4.746

5 6

312.14 374.57

0.08009 0.09611

4.2139 5.0567

5.933 7.119

7 8

437.00 499.43

0.11213 0.12814

5.8995 6.7423

8.306 9.492

9 10

561.85 624.28

0.14416 0.16018

7.5850 8.4278

10.679 11.866

Thermal Conductivity

Calories per cm  s  8C

Watts per cm  8C

Calories per cm  h  8C

Btu  ft per ft2  h  8F

Btu  in per ft2  day  8F

1 0.2388 0.0002778 0.004134 0.00001435

4.1868 1 0.001163 0.01731 0.00006009

3,600 860 1 14.88 0.05167

241.9 57.79 0.0672 1 0.00347

69,670 16,641 19.35 288 1

Table 1.2.33

Thermal Conductance

Calories per cm2  s  8C

Watts per cm2  8C

Calories per cm2  h  8C

Btu per ft2  h  8F

Btu per ft2  day  8F

1 0.2388 0.0002778 0.0001356 0.000005651

4.1868 1 0.001163 0.0005678 0.00002366

3,600 860 1 0.4882 0.02034

7,373 1,761 2.048 1 0.04167

176,962 42,267 49.16 24 1

Table 1.2.34

Heat Flow

Calories per cm2  s

Watts per cm2

Calories per cm2  h

Btu per ft2  h

Btu per ft2  day

1 0.2388 0.0002778 0.00007535 0.000003139

4.1868 1 0.001163 0.0003154 0.00001314

3,600 860 1 0.2712 0.01130

13,272 3,170 3.687 1 0.04167

318,531 76,081 88.48 24 1

Section 2
Mathematics

BY
C. EDWARD SANDIFER  Professor, Western Connecticut State University, Danbury, CT
THOMAS J. COCKERILL  Advisory Engineer, International Business Machines, Inc.

2.1 MATHEMATICS by C. Edward Sandifer Sets, Numbers, and Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2 Significant Figures and Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4 Geometry, Areas, and Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5 Permutations and Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10 Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11 Trigonometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-14 Analytical Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-18 Differential and Integral Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-24 Series and Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-30 Ordinary Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31 Partial Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-34 Vector Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-34 Theorems about Line and Surface Integrals . . . . . . . . . . . . . . . . . . . . . . . . 2-35

Laplace and Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35 Special Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37 Numerical Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-38 2.2 COMPUTERS by Thomas J. Cockerill Computer Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-40 Computer Data Structures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-41 Computer Organization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-43 Distributed Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-45 Relational Database Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-47 Software Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48 Software Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-50


2.1 MATHEMATICS by C. Edward Sandifer REFERENCES: Conte and DeBoor, “Elementary Numerical Analysis: An Algorithmic Approach,” McGraw-Hill. Boyce and DiPrima, “Elementary Differential Equations and Boundary Value Problems,” Wiley. Hamming, “Numerical Methods for Scientists and Engineers,” McGraw-Hill. Kreyszig, “Advanced Engineering Mathematics,” Wiley.

SETS, NUMBERS, AND ARITHMETIC

Sets and Elements

The concept of a set appears throughout modern mathematics. A set is a well-defined list or collection of objects and is generally denoted by capital letters, A, B, C, . . . . The objects composing the set are called elements and are denoted by lowercase letters, a, b, x, y, . . . . The notation x ∈ A is read “x is an element of A” and means that x is one of the objects composing the set A.
There are two basic ways to describe a set. The first way is to list the elements of the set, e.g.,

    A = {2, 4, 6, 8, 10}

This often is not practical for very large sets. The second way is to describe properties which determine the elements of the set, e.g.,

    A = {even numbers from 2 to 10}

This method is sometimes awkward since a single set may sometimes be described in several different ways. In describing sets, the symbol : is read “such that.” The expression

    B = {x : x is an even integer, x > 1, x < 11}

is read “B equals the set of all x such that x is an even integer, x is greater than 1, and x is less than 11.”
Two sets, A and B, are equal, written A = B, if they contain exactly the same elements. The sets A and B above are equal. If two sets, X and Y, are not equal, it is written X ≠ Y.
Subsets  A set C is a subset of a set A, written C ⊆ A, if each element in C is also an element in A. It is also said that C is contained in A. Any set is a subset of itself. That is, A ⊆ A always. A is said to be an “improper subset of itself.” Otherwise, if C ⊆ A and C ≠ A, then C is a proper subset of A.
Two theorems are important about subsets:

    (Fundamental theorem of set equality)
    If X ⊆ Y and Y ⊆ X, then X = Y                    (2.1.1)

    (Transitivity)
    If X ⊆ Y and Y ⊆ Z, then X ⊆ Z                    (2.1.2)

Universe and Empty Set  In an application of set theory, it often happens that all sets being considered are subsets of some fixed set, say integers or vectors. This fixed set is called the universe and is sometimes denoted U. It is possible that a set contains no elements at all. The set with no elements is called the empty set or the null set and is denoted ∅.
Set Operations  New sets may be built from given sets in several ways. The union of two sets, denoted A ∪ B, is the set of all elements belonging to A or to B, or to both:

    A ∪ B = {x : x ∈ A or x ∈ B}                       (2.1.3)

The union has the properties:

    A ⊆ A ∪ B    and    B ⊆ A ∪ B

The intersection is denoted A ∩ B and consists of all elements, each of which belongs to both A and B:

    A ∩ B = {x : x ∈ A and x ∈ B}                      (2.1.4)

The intersection has the properties:

    A ∩ B ⊆ A    and    A ∩ B ⊆ B
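Python's built-in set type implements these operations directly, which makes the subset properties easy to verify for a small universe. The sketch below is illustrative only (the universe and the two sets are arbitrary choices); it also checks one of the De Morgan laws listed in Table 2.1.1.

    U = set(range(1, 11))            # universe: the integers 1 through 10
    A = {2, 4, 6, 8, 10}
    B = {x for x in U if x > 5}

    union = A | B                    # A ∪ B
    inter = A & B                    # A ∩ B

    assert A <= union and B <= union         # A ⊆ A ∪ B and B ⊆ A ∪ B
    assert inter <= A and inter <= B         # A ∩ B ⊆ A and A ∩ B ⊆ B
    assert U - (A | B) == (U - A) & (U - B)  # De Morgan: ~(A ∪ B) = ~A ∩ ~B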

If A ∩ B = ∅, then A and B are called disjoint.
In general, a union makes a larger set and an intersection makes a smaller set.
The complement of a set A is the set of all elements in the universe set which are not in A. This is written

    ~A = {x : x ∈ U, x ∉ A}

The difference of two sets, denoted A − B, is the set of all elements which belong to A but do not belong to B.
Algebra on Sets  The operations of union, intersection, and complement obey certain laws known as Boolean algebra. Using these laws, it is possible to convert an expression involving sets into other equivalent expressions. The laws of Boolean algebra are given in Table 2.1.1.
Venn Diagrams  To give a pictorial representation of a set, Venn diagrams are often used. Regions in the plane are used to correspond to sets, and areas are shaded to indicate unions, intersections, and complements. Examples of Venn diagrams are given in Fig. 2.1.1. The laws of Boolean algebra and their relation to Venn diagrams are particularly important in programming and in the logic of computer searches.

Fig. 2.1.1  Venn diagrams.

Table 2.1.1  Laws of Boolean Algebra

1. Idempotency:      A ∪ A = A                              A ∩ A = A
2. Associativity:    (A ∪ B) ∪ C = A ∪ (B ∪ C)              (A ∩ B) ∩ C = A ∩ (B ∩ C)
3. Commutativity:    A ∪ B = B ∪ A                          A ∩ B = B ∩ A
4. Distributivity:   A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)        A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
5. Identity:         A ∪ ∅ = A      A ∪ U = U               A ∩ U = A      A ∩ ∅ = ∅
6. Complement:       A ∪ ~A = U     ~(~A) = A               A ∩ ~A = ∅
                     ~U = ∅         ~∅ = U
7. DeMorgan's laws:  ~(A ∪ B) = ~A ∩ ~B                     ~(A ∩ B) = ~A ∪ ~B

Numbers

Numbers are the basic instruments of computation. It is by operations on numbers that calculations are made. There are several different kinds of numbers.
Natural numbers, or counting numbers, denoted N, are the whole numbers greater than zero. Sometimes zero is included as a natural number. Any two natural numbers may be added or multiplied to give another natural number, but subtracting them may produce a negative


number, which is not a natural number, and dividing them may produce a fraction, which is not a natural number. When computers are used, it is important to know what kind of number is being used so the correct data type will be used as well.
Integers, or whole numbers, are denoted by Z. They include both positive and negative numbers and zero. Integers may be added, subtracted, and multiplied, but division might not produce an integer.
Real numbers, denoted R, are essentially all values which it is possible for a measurement to take, or all possible lengths for line segments. Rational numbers are real numbers that are the quotient of two integers, for example, 11⁄78. Irrational numbers are not the quotient of two integers, for example, π and √2. Within the real numbers, it is always possible to add, subtract, multiply, and divide (except division by zero).
Complex numbers, or imaginary numbers, denoted C, are an extension of the real numbers that include the square root of −1, denoted i. Within the real numbers, only positive numbers have square roots. Within the complex numbers, all numbers have square roots. Any complex number z can be written uniquely as z = x + iy, where x and y are real. Then x is the real part of z, denoted Re(z), and y is the imaginary part, denoted Im(z). The complex conjugate, or simply conjugate, of a complex number z is z̄ = x − iy. If z = x + iy and w = u + iv, then z and w may be manipulated as follows:

    z + w = (x + u) + i(y + v)
    z − w = (x − u) + i(y − v)
    zw = xu − yv + i(xv + yu)
    z/w = [xu + yv + i(yu − xv)]/(u² + v²)

As sets, the following relation exists among these different kinds of numbers:

    N ⊆ Z ⊆ R ⊆ C

Functions

A function f is a rule that relates two sets A and B. Given an element x of the set A, the function assigns a unique element y from the set B. This is written

    y = f(x)

The set A is called the domain of the function, and the set B is called the range. It is possible for A and B to be the same set. Functions are usually described by giving the rule. For example, f(x) = 3x + 4 is a rule for a function with range and domain both equal to R. Given a value, say, 2, from the domain, f(2) = 3(2) + 4 = 10.
If two functions f and g have the same range and domain and if the ranges are numbers, then f and g may be added, subtracted, multiplied, or divided according to the rules of the range. If f(x) = 3x + 4 and g(x) = sin(x) and both have range and domain equal to R, then

    (f + g)(x) = 3x + 4 + sin(x)    and    (f/g)(x) = (3x + 4)/sin x

Dividing functions occasionally leads to complications when one of the functions assumes a value of zero. In the example f/g above, this occurs when x = 0. The quotient cannot be evaluated for x = 0 although the quotient function is still meaningful. In this case, the function f/g is said to have a pole at x = 0.
Polynomial functions are functions of the form

    f(x) = Σ (i = 0 to n) aᵢ xⁱ

where aₙ ≠ 0. The domain and range of polynomial functions are always either R or C. The number n is the degree of the polynomial. Polynomials of degree 0 or 1 are called linear; of degree 2 they are called parabolic or quadratic; and of degree 3 they are called cubic.
The values of f for which f(x) = 0 are called the roots of f. A polynomial of degree n has at most n roots. There is exactly one exception to this rule: If f(x) = 0 is the constant zero function, the degree of f is zero, but f has infinitely many roots.
Roots of polynomials of degree 1 are found as follows: Suppose the polynomial is f(x) = ax + b. Set f(x) = 0 and solve for x. Then x = −b/a.
Roots of polynomials of degree 2 are often found using the quadratic formula. If f(x) = ax² + bx + c, then the two roots of f are given by the quadratic formula:

    x₁ = [−b + √(b² − 4ac)]/(2a)    and    x₂ = [−b − √(b² − 4ac)]/(2a)

Roots of a polynomial of degree 3 fall into two types.
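The quadratic formula translates directly into code. The sketch below is an illustration, not a handbook procedure; it uses Python's cmath.sqrt so that a negative discriminant yields the complex-conjugate pair of roots rather than an error.

    import cmath

    def quadratic_roots(a, b, c):
        # Roots of ax^2 + bx + c = 0 from the quadratic formula.
        disc = cmath.sqrt(b * b - 4 * a * c)
        return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

    print(quadratic_roots(1, -3, 2))   # ((2+0j), (1+0j))
    print(quadratic_roots(1, 0, 1))    # (1j, -1j): complex conjugate pair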

Equations of the Third Degree with Term in x² Absent
Solution: After dividing through by the coefficient of x³, any equation of this type can be written x³ = Ax + B. Let p = A/3 and q = B/2. The general solution is as follows:
CASE 1. q² − p³ positive. One root is real, viz.,

    x₁ = ∛(q + √(q² − p³)) + ∛(q − √(q² − p³))

The other two roots are imaginary.
CASE 2. q² − p³ = zero. Three roots real, but two of them equal:

    x₁ = 2∛q        x₂ = −∛q        x₃ = −∛q

CASE 3. q² − p³ negative. All three roots are real and distinct. Determine an angle u between 0° and 180° such that cos u = q/(p√p). Then

    x₁ = 2√p cos (u/3)
    x₂ = 2√p cos (u/3 + 120°)
    x₃ = 2√p cos (u/3 + 240°)

Graphical Solution: Plot the curve y₁ = x³ and the straight line y₂ = Ax + B. The abscissas of the points of intersection will be the roots of the equation.
Equations of the Third Degree (General Case)
Solution: The general cubic equation, after dividing through by the coefficient of the highest power, may be written x³ + ax² + bx + c = 0. To get rid of the term in x², let x = x₁ − a/3. The equation then becomes x₁³ = Ax₁ + B, where A = 3(a/3)² − b and B = −2(a/3)³ + b(a/3) − c. Solve this equation for x₁ by the method above, and then find x itself from x = x₁ − (a/3).
Graphical Solution: Without getting rid of the term in x², write the equation in the form x³ = −a[x + (b/2a)]² + [a(b/2a)² − c], and solve by the graphical method.


Computer Solutions: Equations of degree 3 or higher are frequently solved by using computer algebra systems such as Mathematica or Maple.
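For routine numerical work a general polynomial root finder is often simpler than the closed-form solutions. Assuming NumPy is available, numpy.roots accepts the coefficients in descending powers; the example below is illustrative only.

    import numpy as np

    # x^3 - 6x^2 + 11x - 6 = 0 has roots 1, 2, 3
    print(np.roots([1.0, -6.0, 11.0, -6.0]))   # approximately [3. 2. 1.]

    # Depressed cubic x^3 = Ax + B with A = 7, B = -6, i.e. x^3 - 7x + 6 = 0
    print(np.roots([1.0, 0.0, -7.0, 6.0]))     # approximately [-3. 2. 1.]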

Arithmetic

When numbers, functions, or vectors are manipulated, they always obey certain properties, regardless of the types of the objects involved. Elements may be added or subtracted only if they are in the same universe set. Elements in different universes may sometimes be multiplied or divided, but the result may be in a different universe. Regardless of the universe sets involved, the following properties hold true:
1. Associative laws. a + (b + c) = (a + b) + c, a(bc) = (ab)c
2. Identity laws. 0 + a = a, 1a = a
3. Inverse laws. a − a = 0, a/a = 1
4. Distributive law. a(b + c) = ab + ac
5. Commutative laws. a + b = b + a, ab = ba
Certain universes, for example, matrices, do not obey the commutative law for multiplication.

SIGNIFICANT FIGURES AND PRECISION

Number of Significant Figures  In engineering computations, the data are ordinarily the result of measurement and are correct only to a limited number of significant figures. Each of the numbers 3.840 and 0.003840 is said to be given “correct to four figures”; the true value lies in the first case between 3.8395 and 3.8405, and in the second case between 0.0038395 and 0.0038405. The absolute error is less than 0.001 in the first case, and less than 0.000001 in the second; but the relative error is the same in both cases, namely, an error of less than “one part in 3,840.”
If a number is written as 384,000, the reader is left in doubt whether the number of correct significant figures is 3, 4, 5, or 6. This doubt can be removed by writing the number as 3.84 × 10⁵, or 3.840 × 10⁵, or 3.8400 × 10⁵, or 3.84000 × 10⁵.
In any numerical computation, the possible or desirable degree of accuracy should be decided on and the computation should then be so arranged that the required number of significant figures, and no more, is secured. Carrying out the work to a larger number of places than is justified by the data is to be avoided, (1) because the form of the results leads to an erroneous impression of their accuracy and (2) because time and labor are wasted in superfluous computation.
The unit value of the least significant figure in a number is its precision. The number 123.456 has six significant figures and has precision 0.001.
Two ways to represent a real number are as fixed-point or as floating-point, also known as “scientific notation.” In scientific notation, a number is represented as a product of a mantissa and a power of 10. The mantissa has its first significant figure either immediately before or immediately after the decimal point, depending on which convention is being used. The power of 10 used is called the exponent. The number 123.456 may be represented as either

    0.123456 × 10³    or    1.23456 × 10²

Fixed-point representations tend to be more convenient when the quantities involved will be added or subtracted or when all measurements are taken to the same precision. Floating-point representations are more convenient for very large or very small numbers or when the quantities involved will be multiplied or divided.
Many different numbers may share the same representation. For example, 0.05 may be used to represent, with precision 0.01, any value between 0.045000 and 0.054999. The largest value a number represents, in this case 0.0549999, is sometimes denoted x*, and the smallest is denoted x_*.
An awareness of precision and significant figures is necessary so that answers correctly represent their accuracy.
Multiplication and Division  A product or quotient should be written with the smallest number of significant figures of any of the factors involved. The product often has a different precision than the factors, but the significant figures must not increase.
EXAMPLES. (6.)(8.) = 48 should be written as 50 since the factors have one significant figure. There is a loss of precision from 1 to 10. 0.6 × 0.8 = 0.48 should be written as 0.5 since the factors have one significant figure. There is a gain of precision from 0.1 to 0.01.
Addition and Subtraction  A sum or difference should be represented with the same precision as the least precise term involved. The number of significant figures may change.
EXAMPLES. 3.14 + 0.001 = 3.141 should be represented as 3.14 since the least precise term has precision 0.01. 3.14 + 0.1 = 3.24 should be represented as 3.2 since the least precise term has precision 0.1.
Loss of Significant Figures  Addition and subtraction may result in serious loss of significant figures and resultant large relative errors if the sums are near zero. For example,

3.15 − 3.14 = 0.01 shows a loss from three significant figures to just one. Where it is possible, calculations and measurements should be planned so that loss of significant figures can be avoided.
Mixed Calculations  When an expression involves both products and sums, significant figures and precision should be noted for each term or factor as it is calculated, so that correct significant figures and precision for the result are known. The calculation should be performed to as much precision as is available and should be rounded to the correct precision when the calculation is finished. This process is frequently done incorrectly, particularly when calculators or computers provide many decimal places in their result but provide no clue as to how many of those figures are significant.
Significant Figures in Evaluating Functions  If y = f(x), then the correct number of significant figures in y depends on the number of significant figures in x and on the behavior of the function f in the neighborhood of x. In general, y should be represented so that all of f(x), f(x_*), and f(x*) are between y_* and y*.
EXAMPLES.
    √1.95 = 1.39642,  √2.00 = 1.41421,  √2.05 = 1.43178,  so √2.0 = 1.4
    sin(0.5°) = 0.00872,  sin(1.0°) = 0.01745,  sin(1.5°) = 0.02617,  so sin(1°) = 0.0
    sin(89.5°) = 0.99996,  sin(90.0°) = 1.00000,  sin(90.5°) = 0.99996,  so sin(90°) = 1.0000
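Calculators and computers normally report far more digits than are significant. A small helper that rounds a result to a stated number of significant figures, such as the sketch below (ours, not the handbook's), makes it easier to report values like those in the examples above.

    import math

    def round_sig(x, sig):
        # Round x to the given number of significant figures.
        if x == 0:
            return 0.0
        exponent = math.floor(math.log10(abs(x)))
        return round(x, sig - 1 - exponent)

    print(round_sig(0.48, 1))      # 0.5
    print(round_sig(1.41421, 2))   # 1.4
    print(round_sig(0.99996, 5))   # 0.99996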

Note that in finding sin(90°), there was a gain in significant figures from two to five and also a gain in precision. This tends to happen when f′(x) is close to zero. On the other hand, precision and significant figures are often lost when f′(x) or f″(x) are large.
Rearrangement of Formulas  Often a formula may be rewritten in order to avoid a loss of significant figures. In using the quadratic formula to find the roots of a polynomial, significant figures may be lost if ax² + bx + c has a root near zero. The quadratic formula may be rearranged as follows:
1. Use the quadratic formula to find the root that is not close to 0. Call this root x₁.
2. Then x₂ = c/(ax₁).
If f(x) = √(x + 1) − √x, then loss of significant figures occurs if x is large. This can be eliminated by “rationalizing the numerator” as follows:

    [√(x + 1) − √x][√(x + 1) + √x] / [√(x + 1) + √x] = 1/[√(x + 1) + √x]

and this has no loss of significant figures.
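The effect of the rearrangement is easy to demonstrate numerically. In the hedged sketch below, both forms of f(x) = √(x + 1) − √x are evaluated in double precision; for large x the naive form loses most of its significant figures to cancellation while the rationalized form does not.

    import math

    def f_naive(x):
        return math.sqrt(x + 1) - math.sqrt(x)          # subtracts nearly equal numbers

    def f_stable(x):
        return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))  # rationalized form, no cancellation

    x = 1.0e12
    print(f_naive(x))    # suffers from cancellation in the trailing digits
    print(f_stable(x))   # approximately 5.0e-7, accurate to full precision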


There is an almost unlimited number of “tricks” for rearranging formulas to avoid loss of significant figures, but many of these are very similar to the tricks used in calculus to evaluate limits.


GEOMETRY, AREAS, AND VOLUMES

Geometrical Theorems
Right Triangles  a² + b² = c². (See Fig. 2.1.2.) ∠A + ∠B = 90°. p² = mn. a² = mc. b² = nc.
Oblique Triangles  Sum of angles = 180°. An exterior angle = sum of the two opposite interior angles (Fig. 2.1.2).
The Circle  An angle that is inscribed in a semicircle is a right angle (Fig. 2.1.6). A tangent is perpendicular to the radius drawn to the point of contact.

Fig. 2.1.2 Right triangle.

The medians, joining each vertex with the middle point of the opposite side, meet in the center of gravity G (Fig. 2.1.3), which trisects each median.

Fig. 2.1.6 Angle inscribed in a semicircle.

Fig. 2.1.7 Dihedral angle.

Dihedral Angles The dihedral angle between two planes is measured by a plane angle formed by two lines, one in each plane, perpendicular to the edge (Fig. 2.1.7). (For solid angles, see Surfaces and Volumes of Solids.) In a tetrahedron, or triangular pyramid, the four medians, joining each vertex with the center of gravity of the opposite face, meet in a point, the center of gravity of the tetrahedron; this point is 3⁄4 of the way from any vertex to the center of gravity of the opposite face. The Sphere (See also Surfaces and Volumes of Solids.) If AB is a diameter, any plane perpendicular to AB cuts the sphere in a circle, of which A and B are called the poles. A great circle on the sphere is formed by a plane passing through the center. Geometrical Constructions To Bisect a Line AB (Fig. 2.1.8) (1) From A and B as centers, and with equal radii, describe arcs intersecting at P and Q, and draw PQ, which will bisect AB in M. (2) Lay off AC  BD  approximately half of AB, and then bisect CD.

Fig. 2.1.3 Triangle showing medians and center of gravity.

The altitudes meet in a point called the orthocenter, O. The perpendiculars erected at the midpoints of the sides meet in a point C, the center of the circumscribed circle. (In any triangle G, O, and C lie in line, and G is two-thirds of the way from O to C.) The bisectors of the angles meet in the center of the inscribed circle (Fig. 2.1.4).

Fig. 2.1.4 Triangle showing bisectors of angles.

The largest side of a triangle is opposite the largest angle; it is less than the sum of the other two sides. Similar Figures Any two similar figures, in a plane or in space, can be placed in “perspective,” i.e., so that straight lines joining corresponding points of the two figures will pass through a common point (Fig. 2.1.5). That is, of two similar figures, one is merely an enlargement of the other. Assume that each length in one figure is k times the corresponding length in the other; then each area in the first figure is k2 times the corresponding area in the second, and each volume in the first figure is k3 times the corresponding volume in the second. If two lines are cut by a set of parallel lines (or parallel planes), the corresponding segments are proportional.

Fig. 2.1.8 Bisectors of a line.

Fig. 2.1.9 Construction of a line parallel to a given line.

To Draw a Parallel to a Given Line l through a Given Point A (Fig. 2.1.9) With point A as center draw an arc just touching the line l; with any point O of the line as center, draw an arc BC with the same radius. Then a line through A touching this arc will be the required parallel. Or, use a straightedge and triangle. Or, use a sheet of celluloid with a set of lines parallel to one edge and about 1⁄4 in apart ruled upon it. To Draw a Perpendicular to a Given Line from a Given Point A Outside the Line (Fig. 2.1.10) (1) With A as center, describe an arc

cutting the line at R and S, and bisect RS at M. Then M is the foot of the perpendicular. (2) If A is nearly opposite one end of the line, take any point B of the line and bisect AB in O; then with O as center, and OA or OB as radius, draw an arc cutting the line in M. Or, (3) use a straightedge and triangle.

Fig. 2.1.10 Construction of a line perpendicular to a given line from a point not on the line.

Fig. 2.1.5 Similar figures.

To Erect a Perpendicular to a Given Line at a Given Point P (1) Lay off PR  PS (Fig. 2.1.11), and with R and S as centers draw arcs


intersecting at A. Then PA is the required perpendicular. (2) If P is near the end of the line, take any convenient point O (Fig. 2.1.12) above the line as center, and with radius OP draw an arc cutting the line at Q. Produce QO to meet the arc at A; then PA is the required perpendicular. (3) Lay off PB  4 units of any scale (Fig. 2.1.13); from P and B as centers lay off PA  3 and BA  5; then APB is a right angle.

Fig. 2.1.11 Construction of a line perpendicular to a given line from a point on the line.

Fig. 2.1.12 Construction of a line perpendicular to a given line from a point on the line.

To Divide a Line AB into n Equal Parts (Fig. 2.1.14) Through A draw a line AX at any angle, and lay off n equal steps along this line. Connect the last of these divisions with B, and draw parallels through the other divisions. These parallels will divide the given line into n equal parts. A similar method may be used to divide a line into parts which shall be proportional to any given numbers. To Bisect an Angle AOB (Fig. 2.1.15) Lay off OA  OB. From A and B as centers, with any convenient radius, draw arcs meeting at M; then OM is the required bisector.

Fig. 2.1.13 Construction of a line perpendicular to a given line from a point on the line.

Fig. 2.1.14 Division of a line into equal parts.

To draw the bisector of an angle when the vertex of the angle is not accessible. Parallel to the given lines a, b, and equidistant from them, draw two lines a, b which intersect; then bisect the angle between a and b. To Inscribe a Hexagon in a Circle (Fig. 2.1.16) Step around the circumference with a chord equal to the radius. Or, use a 60 triangle.

Fig. 2.1.17 Hexagon circumscribed about a circle.

Fig. 2.1.18 Construction of a polygon with a given side.

To Draw a Common Tangent to Two Given Circles (Fig. 2.1.20) Let C and c be centers and R and r the radii (R  r). From C as center, draw two concentric circles with radii R  r and R  r; draw tangents to

Fig. 2.1.19 Construction of a tangent to a circle.

Fig. 2.1.20 Construction of a tangent common to two circles.

these circles from c; then draw parallels to these lines at distance r. These parallels will be the required common tangents. To Draw a Circle through Three Given Points A, B, C, or to find the center of a given circular arc (Fig. 2.1.21) Draw the perpendicular bisectors of AB and BC; these will meet at the center, O.

Fig. 2.1.21 Construction of a circle passing through three given points. To Draw a Circle through Two Given Points A, B, and Touching a Given Circle (Fig. 2.1.22) Draw any circle through A and B, cutting

the given circle at C and D. Let AB and CD meet at E, and let ET be tangent from E to the circle just drawn. With E as center, and radius ET, draw an arc cutting the given circle at P and Q. Either P or Q is the required point of contact. (Two solutions.) Fig. 2.1.15 Bisection of an angle.

Fig. 2.1.16 Hexagon inscribed in a circle.

To Circumscribe a Hexagon about a Circle (Fig. 2.1.17) Draw a chord AB equal to the radius. Bisect the arc AB at T. Draw the tangent at T (parallel to AB), meeting OA and OB at P and Q. Then draw a circle with radius OP or OQ and inscribe in it a hexagon, one side being PQ. To Construct a Polygon of n Sides, One Side AB Being Given

(Fig. 2.1.18) With A as center and AB as radius, draw a semicircle, and divide it into n parts, of which n  2 parts (counting from B) are to be used. Draw rays from A through these points of division, and complete the construction as in the figure (in which n  7). Note that the center of the polygon must lie in the perpendicular bisector of each side. To Draw a Tangent to a Circle from an external point A (Fig. 2.1.19) Bisect AC in M; with M as center and radius MC, draw arc cutting circle in P; then P is the required point of tangency.

Fig. 2.1.22 Construction of a circle through two given points and touching a given circle. To Draw a Circle through One Given Point, A, and Touching Two Given Circles (Fig. 2.1.23) Let S be a center of similitude for the two

given circles, i.e., the point of intersection of two external (or internal)


Rectangle (Fig. 2.1.28) Area  ab  1⁄2D2 sin u, where u  angle between diagonals D, D. Rhombus (Fig. 2.1.29) Area  a2 sin C  1⁄2D1D2, where C  angle between two adjacent sides; D1, D2  diagonals.

Fig. 2.1.28 Rectangle.

Fig. 2.1.29 Rhombus.

Parallelogram (Fig. 2.1.30) Area  bh  ab sin C  1⁄2D1D2 sin u, where u  angle between diagonals D1 and D2. Trapezoid (Fig. 2.1.31) Area  1⁄2(a  b) h where bases a and b are parallel. Fig. 2.1.23 Construction of a circle through a given point and touching two given circles. To Draw an Annulus Which Shall Contain a Given Number of Equal Contiguous Circles (Fig. 2.1.24) (An annulus is a ring-shaped area

enclosed between two concentric circles.) Let R  r and R  r be the inner and outer radii of the annulus, r being the radius of each of the n circles. Then the required relation between these quantities is given by r  R sin (180/n), or r  (R  r) [sin (180/n)]/[1  sin (180/n)].

Fig. 2.1.30 Parallelogram.

Fig. 2.1.31 Trapezoid.

Any Quadrilateral (Fig. 2.1.32) Area  1⁄2D1D2 sin u.

Fig. 2.1.24  Construction of an annulus containing a given number of contiguous circles.
Fig. 2.1.32  Quadrilateral.

Lengths and Areas of Plane Figures
Right Triangle (Fig. 2.1.25)  a² + b² = c². Area = ½ab = ½a² cot A = ½b² tan A = ¼c² sin 2A.
Equilateral Triangle (Fig. 2.1.26)  Area = ¼a²√3 = 0.43301a².

Fig. 2.1.25  Right triangle.
Fig. 2.1.26  Equilateral triangle.

Regular Polygons  n = number of sides; v = 360°/n = angle subtended at center by one side; a = length of one side = 2R sin (v/2) = 2r tan (v/2); R = radius of circumscribed circle = 0.5a csc (v/2) = r sec (v/2); r = radius of inscribed circle = R cos (v/2) = 0.5a cot (v/2); area = 0.25a²n cot (v/2) = 0.5R²n sin v = r²n tan (v/2). Areas of regular polygons are tabulated in Table 1.1.3.
Circle  Area = πr² = ½Cr = ¼Cd = ¼πd² = 0.785398d², where r = radius, d = diameter, C = circumference = 2πr = πd.
Annulus (Fig. 2.1.33)  Area = π(R² − r²) = π(D² − d²)/4 = 2πR₁b, where R₁ = mean radius = ½(R + r), and b = R − r.
Any Triangle (Fig. 2.1.27)
    s = ½(a + b + c),  t = ½(m₁ + m₂ + m₃)
    r = √[(s − a)(s − b)(s − c)/s] = radius of inscribed circle
    R = ½a/sin A = ½b/sin B = ½c/sin C = radius of circumscribed circle
    Area = ½ × base × altitude = ½ah = ½ab sin C = rs = abc/(4R)
         = ±½[(x₁y₂ − x₂y₁) + (x₂y₃ − x₃y₂) + (x₃y₁ − x₁y₃)]
where (x₁, y₁), (x₂, y₂), (x₃, y₃) are coordinates of vertices.
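Both triangle-area formulas above are convenient to program. The short Python sketch below (illustrative only) computes the area first from the three sides, using Area = rs with r as given above, and then from vertex coordinates, using the determinant form.

    import math

    def area_from_sides(a, b, c):
        # Area = rs, with s = (a + b + c)/2 and r = sqrt((s-a)(s-b)(s-c)/s).
        s = 0.5 * (a + b + c)
        return math.sqrt(s * (s - a) * (s - b) * (s - c))

    def area_from_coords(p1, p2, p3):
        # Area = +/- 1/2 [(x1 y2 - x2 y1) + (x2 y3 - x3 y2) + (x3 y1 - x1 y3)].
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        return 0.5 * abs((x1 * y2 - x2 * y1) + (x2 * y3 - x3 * y2) + (x3 * y1 - x1 * y3))

    print(area_from_sides(3, 4, 5))                    # 6.0
    print(area_from_coords((0, 0), (3, 0), (0, 4)))    # 6.0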

Fig. 2.1.33 Annulus.

(Fig. 2.1.34) Area  1⁄2rs  pr2A/360  1⁄2r2 rad A, where rad A  radian measure of angle A, and s  length of arc  r rad A. Sector

Fig. 2.1.27 Triangle.

Fig. 2.1.34 Sector.


Segment (Fig. 2.1.35) Area  1⁄2r2(rad A  sin A)  1⁄2[r(s  c)  ch], where rad A radian measure of angle A. For small arcs, s  1⁄3(8c  c), where c  chord of half of the arc (Huygens’ approximation). Areas of segments are tabulated in Tables 1.1.1 and 1.1.2.

Right Circular Cylinder (Fig. 2.1.40) Volume  pr2h  Bh. Lateral area  2prh  Ph. Here B  area of base; P  perimeter of base.

Fig. 2.1.39 Regular prism.

Fig. 2.1.35 Segment. Ribbon bounded by two parallel curves (Fig. 2.1.36). If a straight line AB moves so that it is always perpendicular to the path traced by its middle point G, then the area of the ribbon or strip thus generated is equal to the length of AB times the length of the path traced by G. (It is assumed that the radius of curvature of G’s path is never less than 1⁄2AB, so that successive positions of generating line will not intersect.)

Fig. 2.1.40 Right circular cylinder.

Truncated Right Circular Cylinder (Fig. 2.1.41) Volume  pr2h  Bh. Lateral area  2prh  Ph. Here h  mean height  1⁄2(h  h ); B  area of base; P  perimeter of base. 1 2

Fig. 2.1.41 Truncated right circular cylinder. Fig. 2.1.36 Ribbon. Ellipse (Fig. 2.1.37) Area of ellipse  pab. Area of shaded segment  xy  ab sin1 (x/a). Length of perimeter of ellipse  p(a  b)K, where K  (1  1⁄4m2  1⁄64m4  1⁄256m6  . . .), m  (a  b)/(a  b).

For m  0.1 K  1.002 For m  0.6 K  1.092

0.2 1.010 0.7 1.127

0.3 1.023 0.8 1.168

0.4 1.040 0.9 1.216

Any Prism or Cylinder (Fig. 2.1.42) Volume  Bh  Nl. Lateral area  Ql. Here l  length of an element or lateral edge; B  area of base; N  area of normal section; Q  perimeter of normal section.

0.5 1.064 1.0 1.273 Fig. 2.1.42 Any prism or cylinder. Special Ungula of a Right Cylinder (Fig. 2.1.43) Volume  2⁄3r2H. Lateral area  2rH. r  radius. (Upper surface is a semiellipse.)
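The series for the ellipse perimeter given above converges quickly for moderate eccentricities and is simple to evaluate. The Python sketch below is an illustration under the stated truncation (four terms of the series); the function name is ours.

    import math

    def ellipse_perimeter(a, b):
        # Perimeter = pi (a + b) K, with K = 1 + m^2/4 + m^4/64 + m^6/256 and m = (a - b)/(a + b).
        m = (a - b) / (a + b)
        k = 1.0 + m**2 / 4.0 + m**4 / 64.0 + m**6 / 256.0
        return math.pi * (a + b) * k

    print(round(ellipse_perimeter(1.0, 1.0), 4))   # 6.2832 (a circle of radius 1: 2 pi r)
    print(round(ellipse_perimeter(2.0, 1.0), 3))   # about 9.688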

Fig. 2.1.37 Ellipse. Hyperbola (Fig. 2.1.38) In any hyperbola, shaded area A  ab ln [(x/a)  (y/b)]. In an equilateral hyperbola (a  b), area A  a2 sinh1 (y/a)  a2 cosh1 (x/a). Here x and y are coordinates of point P. Fig. 2.1.43 Special ungula of a right circular cylinder. Any Ungula of a right circular cylinder (Figs. 2.1.44 and 2.1.45) Volume  H(2⁄3 a3 cB)/(r c)  H[a(r2  1⁄3a2) r2c rad u]/ (r c). Lateral area  H(2ra cs)/(r c)  2rH(a c rad u)/

Fig. 2.1.38 Hyperbola.

For lengths and areas of other curves see Analytical Geometry. Surfaces and Volumes of Solids Regular Prism (Fig. 2.1.39) Volume  1⁄2nrah  Bh. Lateral area  nah  Ph. Here n  number of sides; B  area of base; P  perimeter of base.

Fig. 2.1.44 Ungula of a right circular cylinder.

Fig. 2.1.45 Ungula of a right circular cylinder.


(r c). If base is greater (less) than a semicircle, use  () sign. r  radius of base; B  area of base; s  arc of base; u  half the angle subtended by arc s at center; rad u  radian measure of angle u. Regular Pyramid (Fig. 2.1.46) Volume  1⁄3 altitude  area of base  1⁄6hran. Lateral area  1⁄2 slant height  perimeter of base  1⁄2san. Here r  radius of inscribed circle; a  side (of regular polygon); n  number of sides; s 5 2r 2 1 h2. Vertex of pyramid directly above center of base.


pd 2  lateral area of circumscribed cylinder. Here r  radius; 3 d 5 2r 5 diameter 5 2 6V/p 5 2A/p. Hollow Sphere or spherical shell. Volume 5 4⁄3psR 3 2 r 3d  1⁄6psD 3 2 d 3d 5 4pR 2 t 1 1⁄3pt 3. Here R, r  outer and inner radii; 1 D,d  outer and inner diameters; t  thickness  R  r; R1  mean radius  1⁄2(R  r). Any Spherical Segment. Zone (Fig. 2.1.50) Volume  1⁄6phs3a 2 1 3a 2 1 h2d. Lateral area (zone)  2prh. Here r  radius of 1 sphere. If the inscribed frustum of a cone is removed from the spherical segment, the volume remaining is 1⁄6phc2, where c  slant height of frustum  2h2 1 sa 2 a1d2.

Fig. 2.1.50 Any spherical segment. Fig. 2.1.46 Regular pyramid. Right Circular Cone Volume  1⁄3pr2h. Lateral area  prs. Here

r  radius of base; h  altitude; s 5 slant height 5 2r 2 1 h2. Frustum of Regular Pyramid (Fig. 2.1.47) Volume  1⁄6hran[1  (a/a)  (a/a)2]. Lateral area  slant height  half sum of perimeters of bases  slant height  perimeter of midsection  1⁄2sn(r  r). Here r,r radii of inscribed circles; s 5 2sr 2 rrd2 1 h2; a,a  sides of lower and upper bases; n  number of sides. Frustum of Right Circular Cone (Fig. 2.1.48) Volume  1⁄3pr 2 h[1  (r/r)  (r/r)2]  1⁄3ph(r 2  rr  r2)  1⁄4ph[r  r)2  1⁄3(r  r)2]. Lateral area 5 p ssr 1 rrd; s 5 2sr 2 rrd2 1 h2.

Spherical Segment of One Base. Zone (spherical “cap” of Fig. 2.1.51) Volume  1⁄6ph(3a2  h2 )  1⁄3ph2 (3r  h). Lateral area (of zone)  2prh  p(a2  h2).

NOTE.

a2  h(2r  h), where r  radius of sphere.

Spherical Sector (Fig. 2.1.51) Volume  1⁄3r  area of cap  ⁄3pr2h. Total area  area of cap  area of cone  2prh  pra.

2

NOTE.

a2  h(2r  h).

Spherical Wedge bounded by two plane semicircles and a lune (Fig. 2.1.52). Volume of wedge volume of sphere  u/360. Area of lune area of sphere  u/360. u  dihedral angle of the wedge.

Fig. 2.1.51 Spherical sector. Fig. 2.1.47 Frustum of a regular pyramid.

Fig. 2.1.52 Spherical wedge.

Fig. 2.1.48 Frustum of a right circular cone.

Any Pyramid or Cone  Volume = ⅓Bh. B = area of base; h = perpendicular distance from vertex to plane in which base lies.
Any Pyramidal or Conical Frustum (Fig. 2.1.49)  Volume = ⅓h(B + √(BB′) + B′) = ⅓hB[1 + (P′/P) + (P′/P)²]. Here B, B′ = areas of lower and upper bases; P, P′ = perimeters of lower and upper bases.

Fig. 2.1.49  Pyramidal frustum and conical frustum.

Solid Angles  Any portion of a spherical surface subtends what is called a solid angle at the center of the sphere. If the area of the given portion of spherical surface is equal to the square of the radius, the subtended solid angle is called a steradian, and this is commonly taken as the unit. The entire solid angle about the center is equal to 4π steradians. A so-called “solid right angle” is the solid angle subtended by a quadrantal (or trirectangular) spherical triangle, and a “spherical degree” (now little used) is a solid angle equal to 1⁄90 of a solid right angle. Hence 720 spherical degrees = 1 steregon, or π steradians = 180 spherical degrees. If u = the angle which an element of a cone makes with its axis, then the solid angle of the cone contains 2π(1 − cos u) steradians.
Regular Polyhedra  A = area of surface; V = volume; a = edge.

Name of solid     Bounded by       A/a²       V/a³
Tetrahedron       4 triangles      1.7321     0.1179
Cube              6 squares        6.0000     1.0000
Octahedron        8 triangles      3.4641     0.4714
Dodecahedron      12 pentagons     20.6457    7.6631
Icosahedron       20 triangles     8.6603     2.1917


Ellipsoid (Fig. 2.1.53)  Volume = 4⁄3πabc, where a, b, c = semiaxes.
Torus, or Anchor Ring (Fig. 2.1.54)  Volume = 2π²cr². Area = 4π²cr.

EXAMPLE. The set of four elements (a, b, c, d) has C(4, 2) = 6 two-element subsets, {a, b}, {a, c}, {a, d}, {b, c}, {b, d}, and {c, d}. (Note that {a, c} is the same set as {c, a}.)
Permutations The number of ways k objects may be arranged from a set of n elements is given by
P(n, k) = n!/(n − k)!
EXAMPLE. Two elements from the set (a, b, c, d) may be arranged in P(4, 2) = 12 ways: ab, ac, ad, ba, bc, bd, ca, cb, cd, da, db, and dc. Note that ac is a different arrangement than ca.
Fig. 2.1.54 Torus.
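The counts in these two examples can be checked with the Python standard library. The small sketch below is illustrative only (not part of the handbook) and assumes Python 3.8 or later for math.comb and math.perm.

    from itertools import combinations, permutations
    from math import comb, perm

    elements = ("a", "b", "c", "d")
    # C(4, 2) = 6 two-element subsets (order does not matter)
    print(comb(4, 2), list(combinations(elements, 2)))
    # P(4, 2) = 12 ordered arrangements (ac and ca counted separately)
    print(perm(4, 2), len(list(permutations(elements, 2))))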

Fig. 2.1.53 Ellipsoid.

Volume of a Solid of Revolution (solid generated by rotating an area bounded above by f(x) around the x axis)

V = π ∫ₐᵇ [f(x)]² dx

Area of a Surface of Revolution

A = 2π ∫ₐᵇ y √(1 + (dy/dx)²) dx

Length of Arc of a Plane Curve y = f(x) between values x = a and x = b:

s = ∫ₐᵇ √(1 + (dy/dx)²) dx

If x = f(t) and y = g(t), for a ≤ t ≤ b, then

s = ∫ₐᵇ √((dx/dt)² + (dy/dt)²) dt

P(A|E) = P(A ∩ E)/P(E)

PERMUTATIONS AND COMBINATIONS

The product (1)(2)(3) . . . (n) is written n! and is read “n factorial.” By convention, 0! = 1, and n! is not defined for negative integers. For large values of n, n! may be approximated by Stirling’s formula:
n! ≈ 2.50663 n^(n+1/2) e^(−n)
The binomial coefficient C(n, k), also written (n k), is defined as:

Permutations and combinations are examined in detail in most texts on probability and statistics and on discrete mathematics. If an event can occur in s ways and can fail to occur in f ways, and if all ways are equally likely, then the probability of the event’s occurring is p  s/(s  f ), and the probability of failure is q  f/(s  f )  1  p. The set of all possible outcomes of an experiment is called the sample space, denoted S. Let n be the number of outcomes in the sample set. A subset A of the sample space is called an event. The number of outcomes in A is s. Therefore P(A)  s/n. The probability that A does not occur is P(,A)  q  1  p. Always 0  p  1 and P(S)  1. If two events cannot occur simultaneously, then A ¨ B  , and A and B are said to be mutually exclusive. Then P(A ´ B)  P(A)  P(B). Otherwise, P(A ´ B)  P(A)  P(B)  P(A ¨ B). Events A and B are independent if P(A ¨ B)  P(A)P(B). If E is an event and if P(E)  0, then the probability that A occurs once E has already occurred is called the “conditional probability of A given E,” written P( A|E) and defined as

C(n, k) = n!/[k!(n − k)!]

C(n, k) is read “n choose k” or as “binomial coefficient n-k.” Binomial coefficients have the following properties:
1. C(n, 0) = C(n, n) = 1
2. C(n, 1) = C(n, n − 1) = n
3. C(n + 1, k) = C(n, k) + C(n, k − 1)
4. C(n, k) = C(n, n − k)
Binomial coefficients are tabulated in Sec. 1.
Binomial Theorem

If n is a positive integer, then
(a + b)ⁿ = Σ (k = 0 to n) C(n, k) a^k b^(n−k)

A and E are independent if P(A|E) = P(A). If the outcomes in a sample space X are all numbers, then X, together with the probabilities of the outcomes, is called a random variable. If xi is an outcome, then pi = P(xi). The expected value of a random variable is E(X) = Σ xi pi. The variance of X is V(X) = Σ [xi − E(X)]² pi. The standard deviation is S(X) = √V(X).
The Binomial, or Bernoulli, Distribution If an experiment is repeated n times and the probability of a success on any trial is p, then the probability of k successes among those n trials is

f sn, k, pd 5 Csn, kdp kq n2k Geometric Distribution If an experiment is repeated until it finally succeeds, let x be the number of failures observed before the first success. Let p be the probability of success on any trial and let q  1  p. Then Psx 5 kd 5 q k # p Uniform Distribution If the random variable x assumes the values 1, 2, . . . , n, with equal probabilities, then the distribution is uniform, and

EXAMPLE. The third term of (2x + 3)⁷ is C(7, 4)(2x)^(7−4) 3⁴ = [7!/(4!3!)](2x)³3⁴ = (35)(8x³)(81) = 22,680x³.

P(x = k) = 1/n

Combinations C(n, k) gives the number of ways k distinct objects can be chosen from a set of n elements. This is the number of k-element subsets of a set of n elements.

Hypergeometric Distribution—Sampling without Replacement If a finite population of N elements contains x successes and if n items are selected randomly without replacement, then the probability that k


successes will occur among those n samples is Csk, xdCsN 2 k, n 2 xd hsx; N, n, kd 5 CsN, nd For large values of N, the hypergeometric distribution approaches the binomial distribution, so hsx; N, n, kd < f ¢n, k,

x ≤ N

Poisson Distribution If the average number of successes which occur in a given fixed time interval is m, then let x be the number of successes observed in that time interval. The probability that x  k is

psk, md 5

e2mm x x!

Three-dimensional vectors correspond to points in space, where v1, v2, and v3 are the x, y, and z coordinates of the point, respectively. Two- and three-dimensional vectors may be thought of as having a direction and a magnitude. See the section “Analytical Geometry.” Two vectors u and v are equal if: 1. u and v are the same type (either row or column). 2. u and v have the same dimension. 3. Corresponding components are equal; that is, ui  vi for i  1, 2, . . . , n. Note that the row vectors u 5 s1, 2, 3d

b*(k; n, p) = C(k − 1, n − 1) pⁿ q^(k−n)

The expected values and variances of these distributions are summarized in the following table:

Distribution            E(X)         V(X)
Uniform                  (n + 1)/2    (n² − 1)/12
Binomial                 np           npq
Hypergeometric           nk/N         [nk(N − n)(1 − k/N)]/[N(N − 1)]
Poisson                  m            m
Geometric                q/p          q/p²
Negative binomial        nq/p         nq/p²
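As an illustration of the binomial entries in this table, the short Python sketch below (not from the handbook; the names are illustrative) builds the pmf f(n, k, p) = C(n, k) p^k q^(n−k) and confirms numerically that its mean and variance equal np and npq.

    from math import comb

    def binomial_pmf(n, k, p):
        q = 1.0 - p
        return comb(n, k) * p**k * q**(n - k)

    n, p = 10, 0.3
    pmf = [binomial_pmf(n, k, p) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in enumerate(pmf))
    var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))
    print(round(mean, 6), n * p)               # 3.0   3.0
    print(round(var, 6), n * p * (1 - p))      # 2.1   2.1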

and

v 5 s3, 2, 1d

are not equal since the components are not in the same order. Also,

where e 5 2.71828 c

Negative Binomial Distribution If repeated independent trials have probability of success p, then let x be the trial number upon which success number n occurs. Then the probability that x  k is


u 5 s1, 2, 3d

and

1 v 5 £2≥ 3

are not equal since u is a row vector and v is a column vector. Vector Transpose If u is a row vector, then the transpose of u, written uT, is the column vector with the same components in the same order as u. Similarly, the transpose of a column vector is the row vector with the same components in the same order. Note that (uT )T  u. Vector Addition If u and v are vectors of the same type and the same dimension, then the sum of u and v, written u  v, is the vector obtained by adding corresponding components. In the case of row vectors, u 1 v 5 su1 1 v1, u2 1 v2, . . . , un 1 vnd Scalar Multiplication If a is a number and u is a vector, then the scalar product au is the vector obtained by multiplying each component

of u by a. au 5 sau1, au2, . . . , aund A number by which a vector is multiplied is called a scalar. The negative of vector u is written u, and

LINEAR ALGEBRA

Using linear algebra, it is often possible to express in a single equation a set of relations that would otherwise require several equations. Similarly, it is possible to replace many calculations involving several variables with a few calculations involving vectors and matrices. In general, the equations to which the techniques of linear algebra apply must be linear equations; they can involve no polynomial, exponential, or trigonometric terms. Vectors

A row vector v is a list of numbers written in a row, usually enclosed by parentheses. v 5 sv1, v2, c, vnd A column vector u is a list of numbers written in a column: u1 u2 # u5• # µ # un The numbers ui and vi may be real or complex, or they may even be variables or functions. A vector is sometimes called an ordered n-tuple. In the case where n  2, it may be called an ordered pair. The numbers vi are called components or coordinates of the vector v. The number n is called the dimension of v. Two-dimensional vectors correspond with points in the plane, where v1 is the x coordinate and v2 is the y coordinate of the point v. Twodimensional vectors also correspond with complex numbers, where z  v1  iv2.

−u = (−1)u
The zero vector is the vector with all its components equal to zero.
Arithmetic Properties of Vectors If u, v, and w are vectors of the same type and dimensions, and if a and b are scalars, then vector addition and scalar multiplication obey the following seven rules, known as the properties of a vector space:
1. (u + v) + w = u + (v + w)   associative law
2. u + v = v + u   commutative law
3. u + 0 = u   additive identity
4. u + (−u) = 0   additive inverse
5. a(u + v) = au + av   distributive law
6. (ab)u = a(bu)   associative law of multiplication
7. 1u = u   multiplicative identity
Inner Product or Dot Product If u and v are vectors of the same type and dimension, then their inner product or dot product, written uv or u · v, is the scalar
uv = u1v1 + u2v2 + · · · + unvn
Vectors u and v are perpendicular or orthogonal if uv = 0.
Magnitude There are two equivalent ways to define the magnitude of a vector u, written |u| or ||u||:
|u| = √(u · u)    or    |u| = √(u1² + u2² + · · · + un²)
Cross Product or Outer Product If u and v are three-dimensional vectors, then they have a cross product, also called outer product or vector product:
u × v = (u2v3 − u3v2, u3v1 − u1v3, u1v2 − u2v1)
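The dot product, magnitude, and cross product defined above translate directly into code. The following minimal Python sketch uses plain tuples; the function names are mine, not the handbook's.

    from math import sqrt

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def magnitude(u):
        return sqrt(dot(u, u))

    def cross(u, v):
        # (u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1)
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])

    u, v = (1, 0, 0), (0, 1, 0)
    print(dot(u, v))       # 0, so u and v are orthogonal
    print(magnitude(u))    # 1.0
    print(cross(u, v))     # (0, 0, 1), perpendicular to both u and v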


The cross product u × v is a three-dimensional vector that is perpendicular to both u and v. The cross product is not commutative. In fact,
u × v = −v × u
Cross product and inner product have two properties involving trigonometric functions. If u is the angle between vectors u and v, then
uv = |u| |v| cos u    and    |u × v| = |u| |v| sin u

Matrices

A matrix is a rectangular array of numbers. A matrix A with m rows and n columns may be written a11 a12 a13 c a1n a21 a22 a23 c a2n A 5 • a31 a32 a33 c a3n µ c c c c c am1 am2 am3 c amn The numbers aij are called the entries of the matrix. The first subscript i identifies the row of the entry, and the second subscript j identifies the column. Matrices are denoted either by capital letters, A, B, etc., or by writing the general entry in parentheses, (aij ). The number of rows and the number of columns together define the dimensions of the matrix. The matrix A is an m  n matrix, read “m by n.” A row vector may be considered to be a 1  n matrix, and a column vector may be considered as a n  1 matrix. The rows of a matrix are sometimes considered as row vectors, and the columns may be considered as column vectors. If a matrix has the same number of rows as columns, the matrix is called a square matrix. In a square matrix, the entries aii, where the row index is the same as the column index, are called the diagonal entries. If a matrix has all its entries equal to zero, it is called a zero matrix. If a square matrix has all its entries equal to zero except its diagonal entries, it is called a diagonal matrix. The diagonal matrix with all its diagonal entries equal to 1 is called the identity matrix, and is denoted I, or Inn if it is important to emphasize the dimensions of the matrix. The 2  2 and 3  3 identity matrices are: I232 5 ¢

I2×2 = [[1, 0], [0, 1]]        I3×3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]     (rows listed in brackets)

The entries of a square matrix aij where i > j are said to be below the diagonal. Similarly, those where i < j are said to be above the diagonal. A square matrix with all entries below (resp. above) the diagonal equal to zero is called upper-triangular (resp. lower-triangular).
Matrix Addition Matrices A and B may be added only if they have the same dimensions. Then the sum C = A + B is defined by
cij = aij + bij
That is, corresponding entries of the matrices are added together, just as with vectors. Similarly, matrices may be multiplied by scalars.
Matrix Multiplication Matrices A and B may be multiplied only if the number of columns in A equals the number of rows of B. If A is an m × n matrix and B is an n × p matrix, then the product C = AB is an m × p matrix, defined as follows:
cij = ai1b1j + ai2b2j + · · · + ainbnj

EXAMPLE.
[[1, 2], [5, 6]] × [[3, 4], [7, 8]] = [[1×3 + 2×7, 1×4 + 2×8], [5×3 + 6×7, 5×4 + 6×8]] = [[17, 20], [57, 68]]
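The row-by-column rule is easy to mechanize. This short Python sketch (illustrative only) implements cij = Σ aik bkj and reproduces the 2 × 2 product above.

    def matmul(A, B):
        n, m, p = len(A), len(B), len(B[0])
        assert len(A[0]) == m, "columns of A must equal rows of B"
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
                for i in range(n)]

    A = [[1, 2], [5, 6]]
    B = [[3, 4], [7, 8]]
    print(matmul(A, B))    # [[17, 20], [57, 68]]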

Matrix multiplication is not commutative. Even if A and B are both square, it is hardly ever true that AB  BA. Matrix multiplication does have the following properties: 1. (AB)C  A(BC) associative law 2. AsB 1 Cd 5 AB 1 AC r distributive laws 3. sB 1 CdA 5 BA 1 CA If A is square, then also 4. AI  IA  A multiplicative identity If A is square, then powers of A, AA, and AAA are denoted A2 and A3, respectively. The transpose of a matrix A, written AT, is obtained by writing the rows of A as columns. If A is m  n, then AT is n  m. EXAMPLE.

[[1, 2, 3], [4, 5, 6]]ᵀ = [[1, 4], [2, 5], [3, 6]]

The transpose has the following properties: 1. (AT )T  A 2. (A  B)T  AT  BT 3. (AB)T  BTAT Note that in property 3, the order of multiplication is reversed. If AT  A, then A is called symmetric. Linear Equations

A linear equation in two variables is of the form a1x1 1 a2x2 5 b

or

a1x 1 a2 y 5 b

depending on whether the variables are named x1 and x2 or x and y. In n variables, such an equation has the form a1x1 1 a2x2 1 # # # anxn 5 b Such equations describe lines and planes. Often it is necessary to solve several such equations simultaneously. A set of m linear equations in n variables is called an m  n system of simultaneous linear equations. Systems with Two Variables 1  2 Systems An equation of the form

a1x + a2y = b has infinitely many solutions, which form a straight line in the xy plane. That line has slope −a1/a2 and y intercept b/a2.
2 × 2 Systems A 2 × 2 system has the form
a11x + a12y = b1
a21x + a22y = b2

Solutions to such systems do not always exist. CASE 1. The system has exactly one solution (Fig. 2.1.55a). The lines corresponding to the equations intersect at a single point. This occurs whenever the two lines have different slopes, so they are not

= Σ (k = 1 to n) aik bkj

The entry cij may also be defined as the dot product of row i of A with the transpose of column j of B.

Fig. 2.1.55 Lines corresponding to linear equations. (a) One solution; (b) no solutions; (c) infinitely many solutions.


on the ij entry. Combining pivoting, the properties of the elementary row operations, and the fact:

parallel. In this case, a11 a12 a21 2 a22


a11a22 2 a21a12 2 0

so

CASE 2. The system has no solutions (Fig. 2.1.55b). This occurs whenever the two lines have the same slope and different y intercepts, so they are parallel. In this case,

|In×n| = 1
provides a technique for finding the determinant of n × n matrices.

Find |A| where

a11 a12 a21 5 a22

A = [[1, 2, −4], [5, −3, −7], [3, −2, 3]]

CASE 3. The system has infinitely many solutions (Fig. 2.1.55c). This occurs whenever the two lines coincide. They have the same slope and y intercept. In this case,

First, pivot on the entry in row 1, column 1, in this case, the 1. Multiplying row 1 by 5, then adding row 1 to row 2, we first multiply the determinant by 5, then do not change it:

a11 a12 b1 a21 5 a22 5 b2 The value a11 a22  a21 a12 is called the determinant of the system. A larger n  n system also has a determinant (see below). A system has exactly one solution when its determinant is not zero. 3  2 Systems Any system with more equations than variables is called overdetermined. The only case in which a 3  2 system has exactly one solution is when one of the equations can be derived from the other two. One basic way to solve such a system is to treat any two equations as a 2  2 system and see if the solution to that subsystem of equations is also a solution to the third equation. Matrix Form for Systems of Equations The 2  2 system of linear equations a11x1 1 a12x2 5 b1

a21x1 1 a22x2 5 b2

25|A| 5 3

Next, multiply row 1 by 3⁄5 and add row 1 to row 3: 23|A| 5 3

or as

a11 a12 b x ≤ ¢ 1≤ 5 ¢ 1≤ a21 a22 x2 b2

det A 5 a11a22 2 a21a12 In general, any m  n system of simultaneous linear equations may be written as Ax 5 b where A is an m  n matrix, x is an n-dimensional column vector, and b is an m-dimensional column vector. An n  n (square) system of simultaneous linear equations has exactly one solution whenever its determinant is not zero. Then the system and the matrix A are called nonsingular. If the determinant is zero, the system is called singular. Elementary Row Operations on a Matrix There are three operations on a matrix which change the matrix: 1. Multiply each entry in row i by a scalar k (not zero). 2. Interchange row i with row j. 3. Add row i to row j. Similarly, there are three elementary column operations. The elementary row operations have the following effects on |A|: 1. Multiplying a row (or column) by k multiplies |A| by k. 2. Interchanging two rows (or columns) multiplies |A| by 1. 3. Adding one row (or column) to another does not change |A|. Pivoting, or Reducing, a Column The process of changing the ij entry of a matrix to 1 and changing the rest of column j to zero, by using elementary row operations, is known as reducing column j or as pivoting

26 12 23 26 12 213 13 3 5 3 0 213 13 3 22 3 0 28 15

1 |A| 5 3 0 0

2 24 213 13 3 2 8 15

Next, pivot on the entry in row 2, column 2. Multiplying row 2 by 8⁄13 and then adding row 2 to row 3, we get: 2

1 2 24 1 2 24 8 |A| 5 3 0 8 28 3 5 3 0 8 28 3 13 0 28 15 0 0 7

Next, divide row 2 by 8⁄13.

Ax  b

where A is the 2  2 matrix and x and b are two-dimensional column vectors. Then, the determinant of A, written det A or |A|, is the same as the determinant of the 2  2 system:

23 0 3

Next, divide row 1 by 3:

may be written as a matrix equation as follows: ¢

25 210 20 25 210 20 5 23 27 3 5 3 0 213 13 3 3 22 3 3 22 3

|A| = det [[1, 2, −4], [0, −13, 13], [0, 0, 7]]
The determinant of a triangular matrix is the product of its diagonal elements, in this case (1)(−13)(7) = −91.
Inverses Whenever |A| is not zero, that is, whenever A is nonsingular, then there is another n × n matrix, denoted A⁻¹, read “A inverse,” with the property

AA21 5 A21A 5 In3n Then the n  n system of equations Ax 5 b can be solved by multiplying both sides by A1, so x 5 In 3 nx 5 A21Ax 5 A21b x 5 A21b

so

The matrix A1 may be found as follows: 1. Make a n  2n matrix, with the first n columns the matrix A and the last n columns the identity matrix Inn. 2. Pivot on each of the diagonal entries of this matrix, one after another, using the elementary row operations. 3. After pivoting n times, the matrix will have in the first n columns the identity matrix, and the last n columns will be the matrix A1. EXAMPLE.

Solve the system
x1 + 2x2 − 4x3 = −4
5x1 − 3x2 − 7x3 = 6
3x1 − 2x2 + 3x3 = 11
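The handbook solves this system by hand with pivoting in the passages that follow. As a numerical cross-check, the hedged Python sketch below (not the handbook's method, just an illustration) performs Gaussian elimination with partial pivoting and back substitution; for this system it should return det A = −91 and x = (2, −1, 1), matching the worked example.

    def solve_and_det(A, b):
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]     # augmented matrix
        det = 1.0
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
            if M[pivot][col] == 0:
                raise ValueError("matrix is singular")
            if pivot != col:
                M[col], M[pivot] = M[pivot], M[col]
                det = -det                               # a row swap flips the sign
            det *= M[col][col]
            for r in range(col + 1, n):
                factor = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= factor * M[col][c]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):                   # back substitution
            x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
        return det, x

    A = [[1, 2, -4], [5, -3, -7], [3, -2, 3]]
    b = [-4, 6, 11]
    print(solve_and_det(A, b))    # approximately (-91.0, [2.0, -1.0, 1.0])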


A nonzero vector v satisfying

We must invert the matrix 1 2 24 A 5 £5 23 27≥ 3 22 3

sA 2 xiI dv 5 0

This is the same matrix used in the determinant example above. Adjoin the identity matrix to make a 3  6 matrix 1 2 24 1 £5 23 27 0 3 22 3 0 Perform the elementary row operations in determinant example. STEP 1.

Pivot on row 1, column 1. 1 £0 0

STEP 2.

24 13 15

2 213 28

1 25 23

0 1 0

0 0≥ 1

Pivot on row 2, column 2. 1 0 22 £0 1 21 0 0 7

STEP 3.

0 0 1 0≥ 0 1 exactly the same order as in the

3⁄13

2⁄13 0 21⁄13 0≥ 28⁄13 1

5⁄13 1⁄13

Pivot on row 3, column 3. 1 £0 0

0 1 0

23

⁄91 36⁄91

0 0 1

1⁄91

22⁄91 215⁄91 28⁄91

26⁄91 13⁄91≥ 13⁄91

Now, the inverse matrix appears on the right. To solve the equation,

is called an eigenvector of A associated with the eigenvalue xi. Eigenvectors have the special property Av 5 xiv Any multiple of an eigenvector is also an eigenvector. A matrix is nonsingular when none of its eigenvalues are zero. Rank and Nullity It is possible that the product of a nonzero matrix A and a nonzero vector v is zero. This cannot happen if A is nonsingular. The set of all vectors which become zero when multiplied by A is called the kernel of A. The nullity of A is the dimension of the kernel. It is a measure of how singular a matrix is. If A is an m  n matrix, then the rank of A is defined as n  nullity. Rank is at most m. The technique of pivoting is useful in finding the rank of a matrix. The procedure is as follows: 1. Pivot on each diagonal entry in the matrix, starting with a11. 2. If a row becomes all zero, exchange it with other rows to move it to the bottom of the matrix. 3. If a diagonal entry is zero but the row is not all zero, exchange the column containing the entry with a column to the right not containing a zero in that row. When the procedure can be carried no further, the nullity is the number of rows of zeros in the matrix. EXAMPLE.

Find the rank and nullity of the 3  2 matrix: 1 £2 4

x 5 A21b 23⁄91

so,

x 5 £36⁄91 1⁄91

22⁄91 215⁄91 28⁄91

24 13⁄91≥ £ 6≥ 13⁄91 11 26⁄91

Pivoting on row 1, column 1, yields

s24 3 23 1 6 3 22 1 11 3 26d/91 2 5 £ s24 3 36 1 6 3 215 1 11 3 13d/91≥ 5 £21≥ s24 3 1 1 6 3 28 1 11 3 13d/91 1 The solution to the system is then x1 5 2

1 £0 0

x3 5 1

possible to take the complex conjugate aij* of each entry aij. This is called the conjugate of A and is denoted A*.
1. If aij = aji, then A is symmetric.
2. If aij = −aji, then A is skew, or antisymmetric.
3. If Aᵀ = A⁻¹, then A is orthogonal.
4. If A = A⁻¹, then A is involutory.
5. If A = A*, then A is hermitian.
6. If A = −A*, then A is skew hermitian.
7. If A⁻¹ = A*, then A is unitary.
Eigenvalues and Eigenvectors If A is a square matrix and x is a variable, then the matrix B = A − xI is the characteristic matrix, or eigenmatrix, of A. The determinant |A − xI| is a polynomial of degree n, called the characteristic polynomial of A. The roots of this polynomial, x1, x2, . . . , xn, are the eigenvalues of A. Note that some sources define the characteristic matrix as xI − A. If n is odd, then this multiplies the characteristic equation by −1, but the eigenvalues are not changed.
A = [[−2, 5], [2, 1]]        B = [[−2 − x, 5], [2, 1 − x]]
Then the characteristic polynomial is

1 £0 0

0 1≥ 0

Nullity is therefore 1. Rank is 3  1  2.

If the rank of a matrix is n, so that Rank  nullity  m the matrix is said to be full rank. TRIGONOMETRY Formal Trigonometry Angles or Rotations An angle is generated by the rotation of a ray, as Ox, about a fixed point O in the plane. Every angle has an initial line (OA) from which the rotation started (Fig. 2.1.56), and a terminal line (OB) where it stopped; and the counterclockwise direction of rotation is taken as positive. Since the rotating ray may revolve as often as desired, angles of any magnitude, positive or negative, may be obtained. Two angles are congruent if they may be superimposed so that their initial lines coincide and their terminal lines coincide; i.e., two congruent angles are either equal or differ by some multiple of 360. Two angles are complementary if their sum is 90; supplementary if their sum is 180.

|B| = (−2 − x)(1 − x) − (2)(5) = x² + x − 2 − 10 = x² + x − 12 = (x + 4)(x − 3)
The eigenvalues are −4 and 3.
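For a 2 × 2 matrix the characteristic polynomial is x² − (trace)x + det, so the eigenvalues follow from the quadratic formula. The sketch below (not part of the handbook, and assuming real eigenvalues) reproduces −4 and 3 for the matrix above.

    import math

    def eigenvalues_2x2(a, b, c, d):
        """Eigenvalues of [[a, b], [c, d]], assuming they are real."""
        tr, det = a + d, a * d - b * c
        disc = math.sqrt(tr * tr - 4 * det)
        return (tr - disc) / 2, (tr + disc) / 2

    print(eigenvalues_2x2(-2, 5, 2, 1))    # (-4.0, 3.0), as found above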

0 21≥ 23

Pivoting on row 2, column 2, yields x2 5 21

Special Matrices If A is a matrix of complex numbers, then it is

EXAMPLE.

1 1≥ 1

Fig. 2.1.56 Angle.


(The acute angles of a right-angled triangle are complementary.) If the initial line is placed so that it runs horizontally to the right, as in Fig. 2.1.57, then the angle is said to be an angle in the 1st, 2nd, 3rd, or 4th quadrant according as the terminal line lies across the region marked I, II, III, or IV.

perpendicular from P on OA or OA produced. In the right triangle OMP, the three sides are MP  “side opposite” O (positive if running upward); OM  “side adjacent” to O (positive if running to the right); OP  “hypotenuse” or “radius” (may always be taken as positive); and the six ratios between these sides are the principal trigonometric

Fig. 2.1.57 Circle showing quadrants.

Fig. 2.1.58 Unit circle showing elements used in trigonometric functions.

Units of Angular Measurement

1. Sexagesimal measure. (360 degrees = 1 revolution.) Denoted on many calculators by DEG. 1 degree = 1° = 1/90 of a right angle. The degree is usually divided into 60 equal parts called minutes (′), and each minute into 60 equal parts called seconds (″); while the second is subdivided decimally. But for many purposes it is more convenient to divide the degree itself into decimal parts, thus avoiding the use of minutes and seconds.
2. Centesimal measure. Used chiefly in France. Denoted on calculators by GRAD. (400 grades = 1 revolution.) 1 grade = 1/100 of a right angle. The grade is always divided decimally, the following terms being sometimes used: 1 “centesimal minute” = 1/100 of a grade; 1 “centesimal second” = 1/100 of a centesimal minute. In reading Continental books it is important to notice carefully which system is employed.
3. Radian, or circular, measure. (π radians = 180 degrees.) Denoted by RAD. 1 radian = the angle subtended by an arc whose length is equal to the length of the radius. The radian is constantly used in higher mathematics and in mechanics, and is always divided decimally. Many theorems in calculus assume that angles are being measured in radians, not degrees, and are not true without that assumption.
1 radian = 57°.30 = 57°.2957795131 = 57°17′44″.806247 = 180°/π
1° = 0.01745 . . . radian = 0.01745 32925 radian
1′ = 0.00029 08882 radian
1″ = 0.00000 48481 radian
Table 2.1.2

sine of x 5 sin x 5 opp/hyp 5 MP/OP cosine of x 5 cos x 5 adj/hyp 5 OM/OP tangent of x 5 tan x 5 opp/adj 5 MP/OM cotangent of x 5 cot x 5 adj/opp 5 OM/MP secant of x 5 sec x 5 hyp/adj 5 OP/OM cosecant of x 5 csc x 5 hyp/opp 5 OP/MP The last three are best remembered as the reciprocals of the first three: cot x 5 1/ tan x

sec x 5 1/ cos x

csc x 5 1/ sin x

Trigonometric functions, the exponential functions, and complex numbers are all related by the Euler formula: e^(ix) = cos x + i sin x, where i = √(−1). A special case of this is e^(iπ) = −1. Note that here x must be measured in radians. Variations in the functions as x varies from 0° to 360° are shown in Table 2.1.3. The variations in the sine and cosine are best remembered by noting the changes in the lines MP and OM (Fig. 2.1.59) in the “unit circle” (i.e., a circle with radius = OP = 1), as P moves around the circumference.

Signs of the Trigonometric Functions

If x is in quadrant           I      II     III     IV
sin x and csc x are           +      +      −       −
cos x and sec x are           +      −      −       +
tan x and cot x are           +      −      +       −

Definitions of the Trigonometric Functions Let x be any angle whose initial line is OA and terminal line OP (see Fig. 2.1.58). Drop a

Table 2.1.3

functions of the angle x; thus:

Fig. 2.1.59 Unit circle showing angles in the various quadrants.

Ranges of the Trigonometric Functions, and Values at 30°, 45°, 60°

x in DEG     0° to 90°    90° to 180°   180° to 270°   270° to 360°     30°        45°        60°
x in RAD     (0 to π/2)   (π/2 to π)    (π to 3π/2)    (3π/2 to 2π)     (π/6)      (π/4)      (π/3)
sin x        0 to +1      +1 to 0       0 to −1        −1 to 0          1/2        (1/2)√2    (1/2)√3
csc x        +∞ to +1     +1 to +∞      −∞ to −1       −1 to −∞         2          √2         (2/3)√3
cos x        +1 to 0      0 to −1       −1 to 0        0 to +1          (1/2)√3    (1/2)√2    1/2
sec x        +1 to +∞     −∞ to −1      −1 to −∞       +∞ to +1         (2/3)√3    √2         2
tan x        0 to +∞      −∞ to 0       0 to +∞        −∞ to 0          (1/3)√3    1          √3
cot x        +∞ to 0      0 to −∞       +∞ to 0        0 to −∞          √3         1          (1/3)√3


To Find Any Function of a Given Angle (Reduction to the first quadrant.) It is often required to find the functions of any angle x from a table that includes only angles between 0 and 90. If x is not already between 0 and 360, first “reduce to the first revolution” by simply adding or subtracting the proper multiple of 360 [for any function of (x)  the same function of (x n  360)]. Next reduce to first quadrant per table below.

90 and 180 (p/2 and p)

If x is between

180 from x (p)

270 from x (3p/2)

 sin (x  180)  csc (x  180)  cos (x  180)  sec (x  180)  tan (x  180)  cot (x  180)

 cos (x  270)  sec (x  270)  sin (x  270)  csc (x  270)  cot (x  270)  tan (x  270)

NOTE. The formulas for sine and cosine are best remembered by aid of the unit circle. To Find the Angle When One of Its Functions Is Given In general, there will be two angles between 0 and 360 corresponding to any given function. The rules showing how to find these angles are tabulated below. First find an acute angle x0 such that

sin x   a cos x   a tan x   a cot x   a

sin x0  a cos x0  a tan x0  a cot x0  a

sin x  a cos x  a tan x  a cot x  a

sin x0  a cos x0  a tan x0  a cot x0  a

Then the required angles x1 and x2 will be* x0 x0 x0 x0

and 180  x0 and [360  x0] and [180  x0] and [180  x0]

[180  x0] and [360  x0] 180  x0 and [180  x0] 180  x0 and [360  x0] 180  x0 and [360  x0]

* The angles enclosed in brackets lie outside the range 0 to 180 deg and hence cannot occur as angles in a triangle.

Relations Among the Functions of a Single Angle

sin² x + cos² x = 1
tan x = sin x / cos x        cot x = cos x / sin x = 1/tan x
1 + tan² x = sec² x = 1/cos² x
1 + cot² x = csc² x = 1/sin² x
sin x = √(1 − cos² x) = tan x/√(1 + tan² x) = 1/√(1 + cot² x)
cos x = √(1 − sin² x) = cot x/√(1 + cot² x) = 1/√(1 + tan² x)

sin x − sin y = 2 cos ½(x + y) sin ½(x − y)
cos x + cos y = 2 cos ½(x + y) cos ½(x − y)
cos x − cos y = −2 sin ½(x + y) sin ½(x − y)
tan x + tan y = sin (x + y)/(cos x cos y);        cot x + cot y = sin (x + y)/(sin x sin y)
tan x − tan y = sin (x − y)/(cos x cos y);        cot x − cot y = −sin (x − y)/(sin x sin y)
sin² x − sin² y = cos² y − cos² x = sin (x + y) sin (x − y)
cos² x − sin² y = cos² y − sin² x = cos (x + y) cos (x − y)
sin (45° + x) = cos (45° − x)        tan (45° + x) = cot (45° − x)
sin (45° − x) = cos (45° + x)        tan (45° − x) = cot (45° + x)
In the following transformations, a and b are supposed to be positive, c = √(a² + b²), A = the positive acute angle for which tan A = a/b, and B = the positive acute angle for which tan B = b/a:
a cos x + b sin x = c sin (A + x) = c cos (B − x)
a cos x − b sin x = c sin (A − x) = c cos (B + x)
Functions of Multiple Angles and Half Angles

sin 2x = 2 sin x cos x;        sin x = 2 sin ½x cos ½x
cos 2x = cos² x − sin² x = 1 − 2 sin² x = 2 cos² x − 1
tan 2x = 2 tan x/(1 − tan² x)
sin 3x = 3 sin x − 4 sin³ x;        tan 3x = (3 tan x − tan³ x)/(1 − 3 tan² x)
cos 3x = 4 cos³ x − 3 cos x
sin (nx) = n sin x cosⁿ⁻¹ x − (n)₃ sin³ x cosⁿ⁻³ x + (n)₅ sin⁵ x cosⁿ⁻⁵ x − · · ·
cos (nx) = cosⁿ x − (n)₂ sin² x cosⁿ⁻² x + (n)₄ sin⁴ x cosⁿ⁻⁴ x − · · ·

Functions of the Sum and Difference of Two Angles

cot 2x = (cot² x − 1)/(2 cot x)
where (n)₂, (n)₃, . . . , are the binomial coefficients.
sin ½x = ±√[½(1 − cos x)];        1 − cos x = 2 sin² ½x
cos ½x = ±√[½(1 + cos x)];        1 + cos x = 2 cos² ½x

Functions of Negative Angles sin (−x) = −sin x; cos (−x) = cos x; tan (−x) = −tan x.

sin (x + y) = sin x cos y + cos x sin y
cos (x + y) = cos x cos y − sin x sin y

270 and 360 (3p/2 and 2p)

90 from x (p/2)

The “reduced angle” (x  90, or x  180, or x  270) will in each case be an angle between 0 and 90, whose functions can then be found in the table.

Given

180 and 270 (p and 3p/2)

 cos (x  90)  sec (x  90)  sin (x  90)  csc (x  90)  cot (x  90)  tan (x  90)

Subtract Then sin x csc x cos x sec x tan x cot x

tan (x + y) = (tan x + tan y)/(1 − tan x tan y)
cot (x + y) = (cot x cot y − 1)/(cot x + cot y)
sin (x − y) = sin x cos y − cos x sin y
cos (x − y) = cos x cos y + sin x sin y
tan (x − y) = (tan x − tan y)/(1 + tan x tan y)
cot (x − y) = (cot x cot y + 1)/(cot y − cot x)
sin x + sin y = 2 sin ½(x + y) cos ½(x − y)

tan ½x = ±√[(1 − cos x)/(1 + cos x)] = sin x/(1 + cos x) = (1 − cos x)/sin x
tan (x/2 + 45°) = ±√[(1 + sin x)/(1 − sin x)]
Here the + or − sign is to be used according to the sign of the left-hand side of the equation.

Approximations for sin x, cos x, and tan x For small values of x, x measured in radians, the following approximations hold:
sin x ≈ x        tan x ≈ x        cos x ≈ 1 − x²/2
The following actually hold:
sin x < x < tan x        cos x < (sin x)/x < 1
As x approaches 0, lim [(sin x)/x] = 1.
Inverse Trigonometric Functions The notation sin⁻¹ x (read: arcsine of x, or inverse sine of x) means the principal angle whose sine is x. Similarly for cos⁻¹ x, tan⁻¹ x, etc. (The principal angle means an angle between −90° and +90° in the case of sin⁻¹ and tan⁻¹, and between 0° and 180° in the case of cos⁻¹.)
To find the remaining sides, use
b = a sin B/sin A        c = a sin C/sin A

Or, drop a perpendicular from either B or C on the opposite side, and solve by right triangles. Check: c cos B + b cos C = a.
CASE 2. GIVEN TWO SIDES (say a and b) AND THE INCLUDED ANGLE (C); AND SUPPOSE a > b (Fig. 2.1.63). Method 1: Find c from c² = a² + b² − 2ab cos C; then find the smaller angle, B, from sin B = (b/c) sin C; and finally, find A from A = 180° − (B + C). Check: a cos B + b cos A = c. Method 2: Find ½(A − B) from the law of tangents:
tan ½(A − B) = [(a − b)/(a + b)] cot ½C
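Method 1 of Case 2 is easy to carry out numerically. The following Python sketch (illustrative only; the sample side lengths are mine) applies the law of cosines and then the law of sines, and verifies the check a cos B + b cos A = c.

    import math

    def solve_sas(a, b, C_deg):
        """Given sides a, b (a > b) and included angle C in degrees."""
        C = math.radians(C_deg)
        c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))     # law of cosines
        B = math.degrees(math.asin(b * math.sin(C) / c))           # smaller angle
        A = 180.0 - B - C_deg
        return A, B, c

    A, B, c = solve_sas(a=8.0, b=5.0, C_deg=60.0)
    print(round(c, 4), round(A, 2), round(B, 2))                   # 7.0  81.79  38.21
    check = 8.0 * math.cos(math.radians(B)) + 5.0 * math.cos(math.radians(A))
    print(round(check, 4))                                         # equals c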

Solution of Plane Triangles

The “parts” of a plane triangle are its three sides a, b, c, and its three angles A, B, C (A being opposite a). Two triangles are congruent if all their corresponding parts are equal. Two triangles are similar if their corresponding angles are equal, that is, A1  A2, B1  B2, and C1  C2. Similar triangles may differ in scale, but they satisfy a1/a2  b1/b2  c1/c2. Two different triangles may have two corresponding sides and the angle opposite one of those sides equal (Fig. 2.1.60), and still not be congruent. This is the angle-side-side theorem. Otherwise, a triangle is uniquely determined by any three of its parts, as long as those parts are not all angles. To “solve” a triangle means to find the unknown parts from the known. The fundamental formulas are Law of sines:


and ½(A + B) from ½(A + B) = 90° − C/2; hence A = ½(A + B) + ½(A − B) and B = ½(A + B) − ½(A − B). Then find c from c = a sin C/sin A or c = b sin C/sin B. Check: a cos B + b cos A = c. Method 3: Drop a perpendicular from A to the opposite side, and solve by right triangles.
CASE 3. GIVEN THE THREE SIDES (provided the largest is less than the sum of the other two) (Fig. 2.1.64). Method 1: Find the largest angle A (which may be acute or obtuse) from cos A = (b² + c² − a²)/2bc and then find B and C (which will always be acute) from sin B = b sin A/a and sin C = c sin A/a. Check: A + B + C = 180°.

a/sin A = b/sin B

Law of cosines: c² = a² + b² − 2ab cos C
Fig. 2.1.63 Triangle with two sides and the included angle given.

Fig. 2.1.60 Triangles with an angle, an adjacent side, and an opposite side given. Right Triangles Use the definitions of the trigonometric functions, selecting for each unknown part a relation which connects that unknown with known quantities; then solve the resulting equations. Thus, in Fig. 2.1.61, if C  90, then A  B  90, c2  a2  b2,

sin A 5 a/c tan A 5 a/b

cos A 5 b/c cot A 5 b/a

If A is very small, use tan 1⁄2 A 5 2c 2 bd/sc 1 bd. Oblique Triangles There are four cases. It is highly desirable in all

these cases to draw a sketch of the triangle approximately to scale before commencing the computation, so that any large numerical error may be readily detected.

Fig. 2.1.61 Right triangle.

Fig. 2.1.62 Triangle with two angles and the included side given.

Fig. 2.1.64 Triangle with three sides given.

Method 2: Find A, B, and C from tan ½A = r/(s − a), tan ½B = r/(s − b), tan ½C = r/(s − c), where s = ½(a + b + c), and r = √[(s − a)(s − b)(s − c)/s]. Check: A + B + C = 180°. Method 3: If only one angle, say A, is required, use
sin ½A = √[(s − b)(s − c)/bc]    or    cos ½A = √[s(s − a)/bc]

according as 1⁄2 A is nearer 0 or nearer 90. CASE 4. GIVEN TWO SIDES (say b and c) AND THE ANGLE OPPOSITE ONE OF THEM (B). This is the “ambiguous case” in which there may be two solutions, or one, or none. First, try to find C  c sin B/b. If sin C  1, there is no solution. If sin C = 1, C  90 and the triangle is a right triangle. If sin C 1, this determines two angles C, namely, an acute angle C1, and an obtuse angle C2  180  C1. Then C1 will yield a solution when and only when C1  B 180 (see Case 1); and similarly C2 will yield a solution when and only when C2  B 180 (see Case 1). Other Properties of Triangles (See also Geometry, Areas, and Volumes.) Area  1⁄2ab sin C 5 2sss 2 adss 2 bdss 2 cd 5 rs where s  1⁄2(a  b  c), and r  radius of inscribed circle  2ss 2 adss 2 bdss 2 cd/s. Radius of circumscribed circle  R, where 2R 5 a/sin A 5 b/sin B 5 c/sin C

CASE 1. GIVEN TWO ANGLES (provided their sum is 180) AND ONE SIDE (say a, Fig. 2.1.62). The third angle is known since A  B  C  180.

r = 4R sin ½A sin ½B sin ½C = abc/(4Rs)
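Case 3 (three sides given) can likewise be scripted. The sketch below (not from the handbook) uses Method 2 above, tan ½A = r/(s − a) with r the radius of the inscribed circle, and checks that the angles sum to 180°.

    import math

    def solve_sss(a, b, c):
        """Angles (degrees) of a triangle with sides a, b, c."""
        s = 0.5 * (a + b + c)
        r = math.sqrt((s - a) * (s - b) * (s - c) / s)   # radius of inscribed circle
        A = 2 * math.degrees(math.atan(r / (s - a)))
        B = 2 * math.degrees(math.atan(r / (s - b)))
        C = 2 * math.degrees(math.atan(r / (s - c)))
        return A, B, C

    A, B, C = solve_sss(3.0, 4.0, 5.0)
    print(round(A, 2), round(B, 2), round(C, 2), round(A + B + C, 2))
    # 36.87  53.13  90.0  180.0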


The length of the bisector of the angle C is 2 2absss 2 cd 2ab[sa 1 bd 2 c ] 5 a1b a1b 2

z5

2

The median from C to the middle point of c is m  1 b 2d 2 c2.

closely related to the logarithmic function, and are especially valuable in the integral calculus. sinh21 sy/ad 5 ln sy 1 2y 2 1 a 2d 2 ln a cosh21 sy/ad 5 ln sy 1 2y 2 2 a 2d 2 ln a a1y y tanh21 a 5 1⁄2 ln a 2 y

1⁄2 22sa 2

Hyperbolic Functions

The hyperbolic sine, hyperbolic cosine, etc., of any number x, are functions of x which are closely related to the exponential eˣ, and which have formal properties very similar to those of the trigonometric functions, sine, cosine, etc. Their definitions and fundamental properties are as follows:
sinh x = ½(eˣ − e⁻ˣ)        cosh x = ½(eˣ + e⁻ˣ)        tanh x = sinh x/cosh x
cosh x + sinh x = eˣ        cosh x − sinh x = e⁻ˣ
csch x = 1/sinh x        sech x = 1/cosh x        coth x = 1/tanh x
cosh² x − sinh² x = 1
1 − tanh² x = sech² x
1 − coth² x = −csch² x
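A quick numerical check of these defining relations, using only the exponential function (illustrative Python, not part of the text):

    import math

    x = 0.7
    sinh = 0.5 * (math.exp(x) - math.exp(-x))
    cosh = 0.5 * (math.exp(x) + math.exp(-x))
    print(math.isclose(sinh, math.sinh(x)), math.isclose(cosh, math.cosh(x)))
    print(math.isclose(cosh**2 - sinh**2, 1.0))      # cosh^2 x - sinh^2 x = 1
    print(math.isclose(cosh + sinh, math.exp(x)))    # cosh x + sinh x = e^x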

coth⁻¹ (y/a) = ½ ln [(y + a)/(y − a)]
ANALYTICAL GEOMETRY
The Point and the Straight Line
Rectangular Coordinates (Fig. 2.1.67) Let P1 = (x1, y1), P2 = (x2, y2).

Then, distance P1P2 = √[(x2 − x1)² + (y2 − y1)²]; slope of P1P2 = m = tan u = (y2 − y1)/(x2 − x1); coordinates of midpoint are x = ½(x1 + x2), y = ½(y1 + y2); coordinates of the point 1/nth of the way from P1 to P2 are x = x1 + (1/n)(x2 − x1), y = y1 + (1/n)(y2 − y1). Let m1, m2 be the slopes of two lines; then, if the lines are parallel, m1 = m2; if the lines are perpendicular to each other, m1 = −1/m2.
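These point formulas are one-liners in code. A minimal Python sketch (the function names are mine, not the handbook's):

    import math

    def distance(p1, p2):
        return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

    def slope(p1, p2):
        return (p2[1] - p1[1]) / (p2[0] - p1[0])    # undefined for a vertical line

    def midpoint(p1, p2):
        return (0.5 * (p1[0] + p2[0]), 0.5 * (p1[1] + p2[1]))

    P1, P2 = (1.0, 2.0), (4.0, 6.0)
    print(distance(P1, P2))    # 5.0
    print(slope(P1, P2))       # 1.333...
    print(midpoint(P1, P2))    # (2.5, 4.0)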

sinh s2xd 5 2sinh x coshs2xd 5 cosh x tanh s2xd 5 2tanh x sinh sx 6 yd 5 sinh x cosh y 6 cosh x sinh y cosh sx 6 yd 5 cosh x cosh y 6 sinh x sinh y tanh sx 6 yd 5 stanh x 6 tanh yd/s1 6 tanh x tanh yd sinh 2x 5 2 sinh x cosh x cosh 2x 5 cosh2 x 1 sinh2 x tanh 2x 5 s2 tanh xd/s1 1 tanh2 xd sinh 1⁄2x 5 21⁄2 scosh x 2 1d cosh 1⁄2x 5 21⁄2 scosh x 1 1d tanh 1⁄2x 5 scosh x 2 1d/ssinh xd 5 ssinh xd/scosh x 1 1d

Fig. 2.1.68 Graph of straight line showing intercepts.

Fig. 2.1.67 Graph of straight line. Equations of a Straight Line

1. Intercept form (Fig. 2.1.68). x/a  y/b  1. (a, b  intercepts of the line on the axes.) 2. Slope form (Fig. 2.1.69). y  mx  b. (m  tan u  slope; b  intercept on the y axis.) 3. Normal form (Fig. 2.1.70). x cos v  y sin v  p. (p  perpendicular from origin to line; v  angle from the x axis to p.)

The hyperbolic functions are related to the rectangular hyperbola, x2  y2  a2 (Fig. 2.1.66), in much the same way that the trigonometric functions are related to the circle x2  y2  a2 (Fig. 2.1.65); the analogy, however, concerns not angles but areas. Thus, in either figure, let A Fig. 2.1.69 Graph of straight line showing slope and vertical intercept.

Fig. 2.1.70 Graph of straight line showing perpendicular line from origin.

4. Parallel-intercept form (Fig. 2.1.71). c  intercept on the parallel x  k).

Fig. 2.1.65 Circle.

y2b x 5 (b  y intercept, c2b k

Fig. 2.1.66 Hyperbola.

represent the shaded area, and let u  A/a2 (a pure number). Then for the coordinates of the point P we have, in Fig. 2.1.65, x  a cos u, y  a sin u; and in Fig. 2.1.66, x  a cosh u, y  a sinh u. The inverse hyperbolic sine of y, denoted by sinh1 y, is the number whose hyperbolic sine is y; that is, the notation x  sinh1 y means sinh x  y. Similarly for cosh1 y, tanh1 y, etc. These functions are

Fig. 2.1.71 Graph of straight line showing intercepts on parallel lines.

5. General form. Ax + By + C = 0. [Here a = −C/A, b = −C/B, m = −A/B, cos v = A/R, sin v = B/R, p = −C/R, where R = ±√(A² + B²) (sign to be so chosen that p is positive).]
6. Line through (x1, y1) with slope m. y − y1 = m(x − x1).


7. Line through (x1, y1) and (x2, y2). y − y1 = [(y2 − y1)/(x2 − x1)](x − x1).
8. Line parallel to x axis. y = a; to y axis: x = b.

Angles and Distances If u = angle from the line with slope m1 to the line with slope m2, then
tan u =

sin u. For every value of the parameter u, there corresponds a point (x, y) on the circle. The ordinary equation x2  y2  a2 can be obtained from the parametric equations by eliminating u. The equation of a circle with radius a and center at (h, k) in parametric form will be x 5 h 1 a cos u; y 5 k 1 a sin u.

(m2 − m1)/(1 + m2m1)

If parallel, m1 = m2. If perpendicular, m1m2 = −1. If u = angle between the lines Ax + By + C = 0 and A′x + B′y + C′ = 0, then
cos u = ±(AA′ + BB′)/√[(A² + B²)(A′² + B′²)]
If parallel, A/A′ = B/B′. If perpendicular, AA′ + BB′ = 0. The equation of a line through (x1, y1) and meeting a given line y = mx + b at an angle u, is
y − y1 = [(m + tan u)/(1 − m tan u)](x − x1)

Fig. 2.1.73 Parameters of a circle. The Parabola

The parabola is the locus of a point which moves so that its distance from a fixed line (called the directrix) is always equal to its distance from a fixed point F (called the focus). See Fig. 2.1.74. The point halfway from focus to directrix is the vertex, O. The line through the focus, perpendicular to the directrix, is the principal axis. The breadth of the curve at the focus is called the latus rectum, or parameter,  2p, where p is the distance from focus to directrix.

The distance from (x0, y0) to the line Ax + By + C = 0 is
D = |Ax0 + By0 + C|/√(A² + B²)
where the vertical bars mean “the absolute value of.” The distance from (x0, y0) to a line which passes through (x1, y1) and makes an angle u with the x axis is
D = (x0 − x1) sin u − (y0 − y1) cos u
Polar Coordinates (Fig. 2.1.72) Let (x, y) be the rectangular and

(r, u) the polar coordinates of a given point P. Then x  r cos u; y  r sin u; x2  y2  r2.

Fig. 2.1.74 Graph of parabola. NOTE. Any section of a right circular cone made by a plane parallel to a tangent plane of the cone will be a parabola. Equation of parabola, principal axis along the x axis, origin at vertex

(Fig. 2.1.74): y2  2px.

Polar equation of parabola, referred to F as origin and Fx as axis (Fig. 2.1.75): r  p/(1  cos u). Equation of parabola with principal axis parallel to y axis: y  ax2  bx  c. This may be rewritten, using a technique called completing the

Fig. 2.1.72 Polar coordinates. Transformation of Coordinates If origin is moved to point (x0, y0), the new axes being parallel to the old, x  x0  x, y  y0  y. If axes are turned through the angle u, without change of origin,

x 5 xr cos u 2 yr sin u

y 5 xr sin u 1 yr cos u

square:

b2 b b2 y 5 a Bx 2 1 a x 1 2 R 1 c 2 4a 4a 2

5 a Bx 1

The Circle

b b2 R 1c2 2a 4a

The equation of a circle with center (a, b) and radius r is
(x − a)² + (y − b)² = r²
If the center is at the origin, the equation becomes x² + y² = r². If the circle goes through the origin and the center is on the x axis at the point (r, 0), the equation becomes x² + y² = 2rx. The general equation of a circle is
x² + y² + Dx + Ey + F = 0
It has center at (−D/2, −E/2), and radius = √[(D/2)² + (E/2)² − F] (which may be real, null, or imaginary).
Equations of Circle in Parametric Form It is sometimes convenient to express the coordinates x and y of the moving point P (Fig. 2.1.73) in terms of an auxiliary variable, called a parameter. Thus, if the parameter be taken as the angle u from the x axis to the radius vector OP, then the equations of the circle in parametric form will be x = a cos u; y = a

Fig. 2.1.75 Polar plot of parabola.

Fig. 2.1.76 Vertical parabola showing rays passing through the focus.


Then: vertex is the point [b/2a, c  b2/4a]; latus rectum is p  1/2a; and focus is the point [b/2a, c  b2/4a  1/4a]. A parabola has the special property that lines parallel to its principal axis, when reflected off the inside “surface” of the parabola, will all pass through the focus (Fig. 2.1.76). This property makes parabolas useful in designing mirrors and antennas.

where v is the angle which the tangent at P makes with PF or PF. At end of major axis, R  b2/a  MA; at end of minor axis, R  a2/b  NB (see Fig. 2.1.81).

The Ellipse

The ellipse (as shown in Fig. 2.1.77) has two foci, F and F′, and two directrices, DH and D′H′. If P is any point on the curve, PF + PF′ is constant, = 2a; and PF/PH (or PF′/PH′) is also constant, = e, where e is the eccentricity (e < 1). Either of these properties may be taken as the definition of the curve. The relations between e and the semiaxes a and b are as shown in Fig. 2.1.78. Thus, b² = a²(1 − e²), ae = √(a² − b²), e² = 1 − (b/a)². The semilatus rectum = p = a(1 − e²) = b²/a. Note that b is always less than a, except in the special case of the circle, in which b = a and e = 0.
Fig. 2.1.81 Ellipse showing radius of curvature.
The Hyperbola

The hyperbola has two foci, F and F, at distances ae from the center, and two directrices, DH and DH, at distances a /e from the center (Fig. 2.1.82). If P is any point of the curve, | PF  PF| is constant,  2a; and PF/PH (or PF/PH) is also constant,  e (called the eccentricity), where e  1. Either of these properties may be taken as the Fig. 2.1.78 Ellipse showing semiaxes.

Fig. 2.1.77 Ellipse.

Any section of a right circular cone made by a plane which cuts all the elements of one nappe of the cone will be an ellipse; if the plane is perpendicular to the axis of the cone, the ellipse becomes a circle. Equation of ellipse, center at origin:
x²/a² + y²/b² = 1    or    y = ±(b/a)√(a² − x²)

If P  (x, y) is any point of the curve, PF  a  ex, PF  a  ex. Equations of the ellipse in parametric form: x  a cos u, y  b sin u, where u is the eccentric angle of the point P  (x, y). See Fig. 2.1.81. Polar equation, focus as origin, axes as in Fig. 2.1.79. r  p/(1  e cos u). Equation of the tangent at (x1, y1): b2x1x  a2y1y  a2b2. The line y  mx  k will be a tangent if k 5 6 2a 2m 2 1 b 2.

Ellipse as a Flattened Circle, Eccentric Angle If the ordinates in a circle are diminished in a constant ratio, the resulting points will lie on an ellipse (Fig. 2.1.80). If Q traces the circle with uniform velocity, the corresponding point P will trace the ellipse, with varying velocity. The angle u in the figure is called the eccentric angle of the point P. A consequence of this property is that if a circle is drawn with its horizontal scale different from its vertical scale, it will appear to be an ellipse. This phenomenon is common in computer graphics. The radius of curvature of an ellipse at any point P  (x, y) is

R 5 a b sx /a 1 y /b d 2

4

definition of the curve. The curve has two branches which approach more and more nearly two straight lines called the asymptotes. Each asymptote makes with the principal axis an angle whose tangent is b/a. The relations between e, a, and b are shown in Fig. 2.1.83: b² = a²(e² − 1), ae = √(a² + b²), e² = 1 + (b/a)². The semilatus rectum, or ordinate at the focus, is p = a(e² − 1) = b²/a.

Fig. 2.1.80 Ellipse as a flattened circle.

Fig. 2.1.79 Ellipse in polar form.

2 2

Fig. 2.1.82 Hyperbola.

2

4 3/2

5 p/ sin v 3

Fig. 2.1.83 Hyperbola showing the asymptotes.

Any section of a right circular cone made by a plane which cuts both nappes of the cone will be a hyperbola. Equation of the hyperbola, center as origin:
x²/a² − y²/b² = 1    or    y = ±(b/a)√(x² − a²)


If P  (x, y) is on the right-hand branch, PF  ex  a, PF  ex  a. If P is on the left-hand branch, PF  ex  a, PF  ex  a. Equations of Hyperbola in Parametric Form (1) x  a cosh u, y  b sinh u. Here u may be interpreted as A/ab, where A is the area shaded in Fig. 2.1.84. (2) x  a sec v, y  b tan v, where v is an auxiliary angle of no special geometric interest.

Fig. 2.1.87 Equilateral hyperbola.

Fig. 2.1.84 Hyperbola showing parametric form. Polar equation, referred to focus as origin, axes as in Fig. 2.1.85:

The length a  Th /w is called the parameter of the catenary, or the distance from the lowest point O to the directrix DQ (Fig. 2.1.89). When a is very large, the curve is very flat. The rectangular equation, referred to the lowest point as origin, is y  a [cosh (x/a)  1]. In case of very flat arcs (a large), y  x2/2a    ; s  x  1⁄6x3/a2    , approx, so that in such a case the catenary closely resembles a parabola.

r = p/(1 − e cos u)
Equation of tangent at (x1, y1): b²x1x − a²y1y = a²b². The line y = mx + k will be a tangent if k = ±√(a²m² − b²).

Fig. 2.1.88 Hyperbola with asymptotes as axes.

Fig. 2.1.85 Hyperbola in polar form.

The triangle bounded by the asymptotes and a variable tangent is of constant area,  ab. Conjugate hyperbolas are two hyperbolas having the same asymptotes with semiaxes interchanged (Fig. 2.1.86). The equations of the hyperbola conjugate to x2/a2  y2/b2  1 is x2/a2  y2/b2  1.

Calculus properties of the catenary are often discussed in texts on the calculus of variations (Weinstock, “Calculus of Variations,” Dover; Ewing, “Calculus of Variations with Applications,” Dover).
Problems on the Catenary (Fig. 2.1.89) When any two of the four quantities x, y, s, T/w are known, the remaining two, and also the parameter a, can be found using the following (z = x/a):
a = x/z        s = a sinh z        T = wa cosh z
y/x = (cosh z − 1)/z        s/x = (sinh z)/z        wx/T = z/cosh z
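The step the handbook leaves to tables or trial, solving s/x = (sinh z)/z for z, is easily done by bisection, since (sinh z)/z increases from 1 as z grows. The following hedged Python sketch (names and sample numbers are mine) recovers z and then a, y, and T from the relations above; it assumes s > x, as required for a physical catenary.

    import math

    def solve_catenary(s, x, w):
        target = s / x                         # (sinh z)/z is increasing for z > 0
        lo, hi = 1e-9, 1.0
        while math.sinh(hi) / hi < target:     # expand until the root is bracketed
            hi *= 2.0
        for _ in range(100):                   # bisection
            z = 0.5 * (lo + hi)
            if math.sinh(z) / z < target:
                lo = z
            else:
                hi = z
        a = x / z                              # parameter of the catenary
        y = a * (math.cosh(z) - 1.0)           # ordinate above the lowest point O
        T = w * a * math.cosh(z)               # tension at abscissa x
        return z, a, y, T

    print(solve_catenary(s=12.0, x=10.0, w=1.0))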

Fig. 2.1.86 Conjugate hyperbolas. Equilateral Hyperbola (a  b) Equation referred to principal axes (Fig. 2.1.87): x2  y2  a2. NOTE. p  a (Fig. 2.1.87). Equation referred to asymptotes as axes (Fig. 2.1.88): xy  a2/2.

Asymptotes are perpendicular. Eccentricity  22. Any diameter is equal in length to its conjugate diameter.

Fig. 2.1.89 Catenary. NOTE. If wx/T 0.6627, then there are two values of z, one less than 1.2, and one greater. If wx/T  0.6627, then the problem has no solution.

The Catenary

Given the Length 2L of a Chain Supported at Two Points A and B Not in the Same Level, to Find a (See Fig. 2.1.90; b and c are supposed

The catenary is the curve in which a flexible chain or cord of uniform density will hang when supported by the two ends. Let w  weight of the chain per unit length; T  the tension at any point P; and Th, Tv  the horizontal and vertical components of T. The horizontal component Th is the same at all points of the curve.

NOTE. The coordinates of the midpoint M of AB (see Fig. 2.1.90) are x0  a tanh1 (b/L), y0  (L/tanh z)  a, so that the position of the lowest point is determined.

known.) Let s 2L2 2 b 2d/c 5 s/x; use s/x  sinh z/z to find z. Then a  c/z.


Fig. 2.1.94). For the equations, put b  a in the equations of the epior hypotrochoid, below. Radius of curvature at any point P is R5 At A, R  0; at D, R 5

4asc 6 ad 3 sin 1⁄2u c 6 2a

4asc 6 ad . c 6 2a

Fig. 2.1.90 Catenary with ends at unequal levels. Other Useful Curves

The cycloid is traced by a point on the circumference of a circle which rolls without slipping along a straight line. Equations of cycloid, in parametric form (axes as in Fig. 2.1.91): x = a(rad u − sin u), y = a(1 − cos u), where a is the radius of the rolling circle, and rad u is the radian measure of the angle u through which it has rolled. The radius of curvature at any point P is PC = 4a sin (u/2) = 2√(2ay).

Fig. 2.1.94 Hypocycloid. Special Cases If a  1⁄2c, the hypocycloid becomes a straight line, diameter of the fixed circle (Fig. 2.1.95). In this case the hypotrochoid traced by any point rigidly connected with the rolling circle (not necessarily on the circumference) will be an ellipse. If a  1⁄4c, the curve Fig. 2.1.91 Cycloid.

The trochoid is a more general curve, traced by any point on a radius of the rolling circle, at distance b from the center (Fig. 2.1.92). It is a prolate trochoid if b a, and a curtate or looped trochoid if b  a. The equations in either case are x  a rad u  b sin u, y  a  b cos u.

Fig. 2.1.92 Trochoid.

Fig. 2.1.95 Hypocycloid is straight line when the radius of inside circle is half that of the outside circle.

The epicycloid (or hypocycloid) is a curve generated by a point on the circumference of a circle of radius a which rolls without slipping on the outside (or inside) of a fixed circle of radius c (Fig. 2.1.93 and

generated will be the four-cusped hypocycloid, or astroid (Fig. 2.1.96), whose equation is x2/3  y2/3  c2/3. If a  c, the epicycloid is the cardioid, whose equation in polar coordinates (axes as in Fig. 2.1.97) is r  2c(1  cos u). Length of cardioid  16c. The epitrochoid (or hypotrochoid) is a curve traced by any point rigidly attached to a circle of radius a, at distance b from the center, when this

Fig. 2.1.93 Epicycloid.

Fig. 2.1.96 Astroid.


circle rolls without slipping on the outside (or inside) of a fixed circle of radius c. The equations are a a x 5 sc 6 ad cos ¢ c u ≤ 6 b cos B ¢1 6 c ≤u R a a y 5 sc 6 ad sin ¢ c u ≤ 2 b sin B ¢1 6 c ≤u R


v( angle POQ), are r  c sec v, rad u  tan v  rad v. Here, r  OP, and rad u  radian measure of angle, AOP (Fig. 2.1.98). The spiral of Archimedes (Fig. 2.1.99) is traced by a point P which, starting from O, moves with uniform velocity along a ray OP, while the ray itself revolves with uniform angular velocity about O. Polar equation: r  k rad u, or r  a(u/360). Here a  2pk  the distance measured along a radius, from each coil to the next. The radius of curvature at P is R  (k2  r2)3/2/(2k2  r2). The logarithmic spiral (Fig. 2.1.100) is a curve which cuts the radii from O at a constant angle v, whose cotangent is m. Polar equation: r  aem rad u. Here a is the value of r when u  0. For large negative values of u, the curve winds around O as an asymptotic point. If PT and PN are the tangent and normal at P, the line TON being perpendicular to OP (not shown in figure), then ON  rm, and PN 5 r 21 1 m 2 5 r/ sin v. Radius of curvature at P is PN.

Fig. 2.1.97 Cardioid.

where u  the angle which the moving radius makes with the line of centers; take the upper sign for the epi- and the lower for the hypotrochoid. The curve is called prolate or curtate according as b a or b  a. When b  a, the special case of the epi- or hypocycloid arises.

Fig. 2.1.100 Logarithmic spiral.

The tractrix, or Schiele’s antifriction curve (Fig. 2.1.101), is a curve such that the portion PT of the tangent between the point of contact and the x axis is constant  a. Its equation is

Fig. 2.1.98 Involute of circle.

The involute of a circle is the curve traced by the end of a taut string which is unwound from the circumference of a fixed circle, of radius c. If QP is the free portion of the string at any instant (Fig. 2.1.98), QP will be tangent to the circle at Q, and the length of QP  length of arc QA; hence the construction of the curve. The equations of the curve in parametric form (axes as in figure) are x  c(cos u  rad u sin u), y  c(sin u  rad u cos u), where rad u is the radian measure of the angle u which OQ makes with the x axis. Length of arc AP  1⁄2c(rad u)2; radius of curvature at P is QP. Polar equations, in terms of parameter

a x 5 6a B cosh21 y 2

Å

y 2 1 2 ¢a≤ R

or, in parametric form, x  a(t  tanh t), y  a/cosh t. The x axis is an asymptote of the curve. Length of arc BP  a loge (a /y).

Fig. 2.1.101 Tractrix.

The tractrix describes the path taken by an object being pulled by a string moving along the x axis, where the initial position of the object is B and the opposite end of the string begins at O.

Fig. 2.1.99 Spiral of Archimedes.

Fig. 2.1.102 Lemniscate.


The lemniscate (Fig. 2.1.102) is the locus of a point P the product of whose distances from two fixed points F, F is constant, equal to 1⁄2a2. The distance FFr 5 a 22, Polar equation is r 5 a 2cos 2u. Angle between OP and the normal at P is 2u. The two branches of the curve cross at right angles at O. Maximum y occurs when u  30 and r 5 a/ 22, and is equal to 1⁄4a 22. Area of one loop  a2/2. The helix (Fig. 2.1.103) is the curve of a screw thread on a cylinder of radius r. The curve crosses the elements of the cylinder at a constant angle, v. The pitch, h, is the distance between two coils of the helix, measured along an element of the cylinder; hence h  2pr tan v. Length Fig. 2.1.103 Helix. of one coil  2s2prd2 1 h2 5 2pr> cos v. If the cylinder is rolled out on a plane, the development of the helix will be a straight line, with slope equal to tan v. DIFFERENTIAL AND INTEGRAL CALCULUS Derivatives and Differentials Derivatives and Differentials A function of a single variable x may be denoted by f(x), F(x), etc. The value of the function when x has the value x0 is then denoted by f(x0), F(x0), etc. The derivative of a function y  f(x) may be denoted by f (x), or by dy/dx. The value of the derivative at a given point x  x0 is the rate of change of the function at that point; or, if the function is represented by a curve in the usual way (Fig. 2.1.104), the value of the derivative at any point shows the slope of the curve (i.e., the slope of the tangent to the curve) at that point (positive if the tangent points upward, and negative if it points downward, moving to the right).

Fig. 2.1.104 Curve showing tangent and derivatives.

The increment Δy (read: "delta y") in y is the change produced in y by increasing x from x0 to x0 + Δx; i.e., Δy = f(x0 + Δx) − f(x0). The differential, dy, of y is the value which Δy would have if the curve coincided with its tangent. (The differential, dx, of x is the same as Δx when x is the independent variable.) Note that the derivative depends only on the value of x0, while Δy and dy depend not only on x0 but on the value of Δx as well. The ratio Δy/Δx represents the secant slope, and dy/dx the slope of the tangent (see Fig. 2.1.104). If Δx is made to approach zero, the secant approaches the tangent as a limiting position, so that the derivative is

f′(x) = dy/dx = lim[Δx→0] (Δy/Δx) = lim[Δx→0] {[f(x0 + Δx) − f(x0)]/Δx}
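As an illustration of this limit (an added sketch, not from the handbook; the sample function f(x) = x³ and the point x0 = 2 are arbitrary), the secant slope Δy/Δx can be computed for successively smaller Δx and compared with the exact derivative 3x0² = 12:

def f(x):
    return x ** 3

x0, exact = 2.0, 12.0
for dx in (1.0, 0.1, 0.01, 0.001):
    dy = f(x0 + dx) - f(x0)               # increment in y
    print(dx, dy / dx, dy / dx - exact)   # secant slope tends to the tangent slope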

Also, dy  f(x) dx. The symbol “lim” in connection with x S 0 means “the limit, as x approaches 0, of . . . .” (A constant c is said to be the limit of a variable

u if, whenever any quantity m has been assigned, there is a stage in the variation process beyond which |c  u| is always less than m; or, briefly, c is the limit of u if the difference between c and u can be made to become and remain as small as we please.) To find the derivative of a given function at a given point: (1) If the function is given only by a curve, measure graphically the slope of the tangent at the point in question; (2) if the function is given by a mathematical expression, use the following rules for differentiation. These rules give, directly, the differential, dy, in terms of dx; to find the derivative, dy/dx, divide through by dx. Rules for Differentiation (Here u, v, w, . . . represent any functions of a variable x, or may themselves be independent variables. a is a constant which does not change in value in the same discussion; e  2.71828.) 1. d(a  u)  du 2. d(au)  a du 3. d(u  v  w    )  du  dv  dw     4. d(uv)  u dv  v du dw dv du 5. dsuvwcd 5 suvwcd¢ u 1 v 1 w 1 c≤ v du 2 u dv u 6. d v 5 v2 7. d(um)  mum1 du. Thus, d(u2)  2u du; d(u3)  3u2 du; etc. du 8. d 2u 5 2 2u du 1 9. d ¢ u ≤ 5 2 2 u 10. d(eu)  eu du 11. d(au)  (ln a)au du 12. d ln u 5 du u 13. d log u 5 log e du 5 s0.4343 cd du 10 10 u u 14. d sin u  cos u du 15. d csc u  cot u csc u du 16. d cos u  sin u du 17. d sec u  tan u sec u du 18. d tan u  sec2 u du 19. d cot u  csc2 u du du 21 20. d sin u 5 21 2 u 2 du 21 21. d csc u 5 2 u 2u 2 2 1 du 21 22. d cos u 5 21 2 u 2 du 21 23. d sec u 5 u 2u 2 2 1 du 24. d tan21 u 5 1 1 u2 du 25. d cot 21 u 5 2 1 1 u2 26. d ln sin u  cot u du 2 du 27. d ln tan u 5 sin 2u 28. d ln cos u  tan u du 2 du 29. d ln cot u 5 2 sin 2u 30. d sinh u  cosh u du 31. d csch u  csch u coth u du 32. d cosh u  sinh u du 33. d sech u  sech u tanh u du


34. d tanh u = sech² u du
35. d coth u = −csch² u du
36. d sinh⁻¹ u = du/√(u² + 1)
37. d csch⁻¹ u = −du/(u√(u² + 1))
38. d cosh⁻¹ u = du/√(u² − 1)
39. d sech⁻¹ u = −du/(u√(1 − u²))
40. d tanh⁻¹ u = du/(1 − u²)
41. d coth⁻¹ u = du/(1 − u²)
42. d(u^v) = u^(v−1)(u ln u dv + v du)

Derivatives of Higher Orders The derivative of the derivative is called the second derivative; the derivative of this, the third derivative; and so on. If y = f(x),

f′(x) = Dx y = dy/dx    f″(x) = Dx²y = d²y/dx²    f‴(x) = Dx³y = d³y/dx³    etc.

NOTE. If the notation d²y/dx² is used, this must not be treated as a fraction, like dy/dx, but as an inseparable symbol, made up of a symbol of operation d²/dx² and an operand y.

The geometric meaning of the second derivative is this: if the original function y = f(x) is represented by a curve in the usual way, then at any point where f″(x) is positive, the curve is concave upward, and at any point where f″(x) is negative, the curve is concave downward (Fig. 2.1.105). When f″(x) = 0, the curve usually has a point of inflection.

Fig. 2.1.105 Curve showing concavity.

Functions of two or more variables may be denoted by f(x, y, . . .), F(x, y, . . .), etc. The derivative of such a function u = f(x, y, . . .) formed on the assumption that x is the only variable (y, . . . being regarded for the moment as constants) is called the partial derivative of u with respect to x, and is denoted by fx(x, y), or Dx u, or dx u/dx, or ∂u/∂x. Similarly, the partial derivative of u with respect to y is fy(x, y), or Dy u, or dy u/dy, or ∂u/∂y.

NOTE. In the third notation, dx u denotes the differential of u formed on the assumption that x is the only variable. If the fourth notation, ∂u/∂x, is used, this must not be treated as a fraction like du/dx; the ∂/∂x is a symbol of operation, operating on u, and the "∂x" must not be separated.

Partial derivatives of the second order are denoted by fxx, fxy, fyy, or by Dx²u, Dx(Dy u), Dy²u, or by ∂²u/∂x², ∂²u/∂x ∂y, ∂²u/∂y², the last symbols being "inseparable." Similarly for higher derivatives. Note that fxy = fyx.

If increments Δx, Δy (or dx, dy) are assigned to the independent variables x, y, the increment, Δu, produced in u = f(x, y) is

Δu = f(x + Δx, y + Δy) − f(x, y)

while the differential, du, i.e., the value which Δu would have if the partial derivatives of u with respect to x and y were constant, is given by

du = (fx) · dx + (fy) · dy

Here the coefficients of dx and dy are the values of the partial derivatives of u at the point in question. If x and y are functions of a third variable t, then the equation

du/dt = (fx)(dx/dt) + (fy)(dy/dt)

expresses the rate of change of u with respect to t, in terms of the separate rates of change of x and y with respect to t.

Implicit Functions If f(x, y) = 0, either of the variables x and y is said to be an implicit function of the other. To find dy/dx, either (1) solve for y in terms of x, and then find dy/dx directly; or (2) differentiate the equation through as it stands, remembering that both x and y are variables, and then divide by dx; or (3) use the formula dy/dx = −(fx/fy), where fx and fy are the partial derivatives of f(x, y) at the point in question.

Maxima and Minima

A function of one variable, as y = f(x), is said to have a maximum at a point x = x0 if at that point the slope of the curve is zero and the concavity downward (see Fig. 2.1.106); a sufficient condition for a maximum is f′(x0) = 0 and f″(x0) negative. Similarly, f(x) has a minimum if the slope is zero and the concavity upward; a sufficient condition for a minimum is f′(x0) = 0 and f″(x0) positive. If f″(x0) = 0 and f‴(x0) ≠ 0, the point x0 will be a point of inflection. If f′(x0) = 0 and f″(x0) = 0 and f‴(x0) = 0, the point x0 will be a maximum if f⁗(x0) < 0, and a minimum if f⁗(x0) > 0. It is usually sufficient, however, in any practical case, to find the values of x which make f′(x) = 0, and then decide, from a general knowledge of the curve or the sign of f′(x) to the right and left of x0, which of these values (if any) give maxima or minima, without investigating the higher derivatives.

Fig. 2.1.106 Curve showing maxima and minima.
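In practice the advice above, find the roots of f′(x) = 0 and test the sign of f″(x), is easy to automate. The following sketch is illustrative only (the sample function f(x) = x³ − 12x and the bracketing interval are arbitrary choices); it locates a root of f′ by bisection and classifies it by the sign of f″:

def fprime(x):
    return 3 * x ** 2 - 12        # derivative of f(x) = x^3 - 12x

def fsecond(x):
    return 6 * x

a, b = 0.0, 5.0                   # f' changes sign on this interval
for _ in range(60):
    m = 0.5 * (a + b)
    if fprime(a) * fprime(m) <= 0:
        b = m
    else:
        a = m
x0 = 0.5 * (a + b)
print(x0, "minimum" if fsecond(x0) > 0 else "maximum")   # x0 = 2 is a minimum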

A function of two variables, as u = f(x, y), will have a maximum at a point (x0, y0) if at that point fx = 0, fy = 0, and fxx < 0, fyy < 0; and a minimum if at that point fx = 0, fy = 0, and fxx > 0, fyy > 0; provided, in each case, (fxx)(fyy) − (fxy)² is positive. If fx = 0 and fy = 0, and fxx and fyy have opposite signs, the point (x0, y0) will be a "saddle point" of the surface representing the function.

Indeterminate Forms

In the following paragraphs, f(x), g(x) denote functions which approach 0; F(x), G(x) functions which increase indefinitely; and U(x) a function which approaches 1, when x approaches a definite quantity a. The problem in each case is to find the limit approached by certain combinations of these functions when x approaches a. The symbol S is to be read “approaches” or “tends to.” CASE 1. “0/0.” To find the limit of f(x)/g(x) when f(x) S 0 and g(x) S 0, use the theorem that lim [ f(x)/g(x)]  lim [ f (x)/g(x)],


where f′(x) and g′(x) are the derivatives of f(x) and g(x). This second limit may be easier to find than the first. If f′(x) → 0 and g′(x) → 0, apply the same theorem a second time: lim [f′(x)/g′(x)] = lim [f″(x)/g″(x)], and so on.
CASE 2. "∞/∞." If F(x) → ∞ and G(x) → ∞, then lim [F(x)/G(x)] = lim [F′(x)/G′(x)], precisely as in Case 1.
CASE 3. "0 · ∞." To find the limit of f(x) · F(x) when f(x) → 0 and F(x) → ∞, write lim [f(x) · F(x)] = lim {f(x)/[1/F(x)]} or = lim {F(x)/[1/f(x)]}, then proceed as in Case 1 or Case 2.
CASE 4. The limits of the combinations "0⁰" or [f(x)]^g(x); "1^∞" or [U(x)]^F(x); "∞⁰" or [F(x)]^f(x) may be found since their logarithms are limits of the type evaluated in Case 3.
CASE 5. "∞ − ∞." If F(x) → ∞ and G(x) → ∞, write

lim [F(x) − G(x)] = lim {[1/G(x) − 1/F(x)] / [1/(F(x) · G(x))]}

then proceed as in Case 1. Sometimes it is shorter to expand the functions in series. It should be carefully noticed that expressions like 0/0, ` / ` , etc., do not represent mathematical quantities. Curvature

The radius of curvature R of a plane curve at any point P (Fig. 2.1.107) is the distance, measured along the normal, on the concave side of the curve, to the center of curvature, C, this point being the limiting position of the point of intersection of the normals at P and a neighboring point Q, as Q is made to approach P along the curve. If the equation of the curve is y = f(x),

R = ds/du = [1 + (y′)²]^(3/2) / y″

where ds = √(dx² + dy²) = the differential of arc, u = tan⁻¹[f′(x)] = the angle which the tangent at P makes with the x axis, and y′ = f′(x) and y″ = f″(x) are the first and second derivatives of f(x) at the point P. Note that dx = ds cos u and dy = ds sin u. The curvature, K, at the point P, is K = 1/R = du/ds; i.e., the curvature is the rate at which the angle u is changing with respect to the length of arc s. If the slope of the curve is small, K ≈ f″(x).
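The curvature formula can be checked numerically. The sketch below is an added illustration (the sample curve y = x² is an arbitrary choice); it forms K = y″/[1 + (y′)²]^(3/2) from central differences and compares it with the exact value 2/(1 + 4x²)^(3/2):

def y(x):
    return x ** 2

def curvature(x, h=1e-4):
    yp = (y(x + h) - y(x - h)) / (2 * h)              # y'
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2   # y''
    return ypp / (1 + yp ** 2) ** 1.5

x = 1.0
print(curvature(x))                     # numerical curvature
print(2 / (1 + 4 * x ** 2) ** 1.5)      # exact value for y = x^2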

Fig. 2.1.107 Curve showing radius of curvature.

If the equation of the curve in polar coordinates is r = f(u), where r = radius vector and u = polar angle, then

R = [r² + (r′)²]^(3/2) / [r² − r r″ + 2(r′)²]

where r′ = f′(u) and r″ = f″(u). The evolute of a curve is the locus of its centers of curvature. If one curve is the evolute of another, the second is called the involute of the first.

Indefinite Integrals

An integral of f(x) dx is any function whose differential is f(x) dx, and is denoted by ∫f(x) dx. All the integrals of f(x) dx are included in the expression ∫f(x) dx + C, where ∫f(x) dx is any particular integral, and C is an arbitrary constant. The process of finding (when possible) an integral of a given function consists in recognizing by inspection a function which, when differentiated, will produce the given function; or in transforming the given function into a form in which such recognition is easy. The most common integrable forms are collected in the following brief table; for a more extended list, see Peirce, "Table of Integrals," Ginn, or Dwight, "Table of Integrals and other Mathematical Data," Macmillan, or "CRC Mathematical Tables."

GENERAL FORMULAS
1. ∫a du = a ∫du = au + C
2. ∫(u + v) dx = ∫u dx + ∫v dx
3. ∫u dv = uv − ∫v du   (integration by parts)
4. ∫f(x) dx = ∫f[F(y)] F′(y) dy,  x = F(y)   (change of variables)
5. ∫dy ∫f(x, y) dx = ∫dx ∫f(x, y) dy

FUNDAMENTAL INTEGRALS
6. ∫xⁿ dx = xⁿ⁺¹/(n + 1) + C, when n ≠ −1
7. ∫dx/x = ln x + C = ln cx
8. ∫eˣ dx = eˣ + C
9. ∫sin x dx = −cos x + C
10. ∫cos x dx = sin x + C
11. ∫dx/sin² x = ∫csc² x dx = −cot x + C
12. ∫dx/cos² x = ∫sec² x dx = tan x + C
13. ∫dx/√(1 − x²) = sin⁻¹ x + C = −cos⁻¹ x + C
14. ∫dx/(1 + x²) = tan⁻¹ x + C = −cot⁻¹ x + C

RATIONAL FUNCTIONS
15. ∫(a + bx)ⁿ dx = (a + bx)ⁿ⁺¹/[(n + 1)b] + C

dx 1 1 16. 3 5 ln sa 1 bxd 1 C 5 ln csa 1 bxd a 1 bx b b dx 1 1C 17. 3 n 5 2 except when n 5 1 x sn 2 1dx n21 dx 1 52 1C 18. 3 bsa 1 bxd sa 1 bxd2 dx 11x 19. 3 5 1⁄2 ln 1 C 5 tanh21 x 1 C, when x , 1 12x 1 2 x2 dx x21 20. 3 2 5 1⁄2 ln 1 C 5 2coth21 x 1 C, when x . 1 x11 x 21 dx b 1 5 tan21 a a x b 1 C 21. 3 Ä a 1 bx 2 2ab 2ab 1 bx dx 1 1C 5 ln 22. 3 a 2 bx 2 2 2ab 2ab 2 bx b 1 5 tanh21 a a x b 1 C Ä 2ab



[a . 0, b . 0]


dx 23. 3 5 a 1 2bx 1 cx 2 1

tan21

b 1 cx

1C

∂ [ac 2 b 2 . 0]

2ac 2 b 2 2ac 2 b 2 2 2b 2 ac 2 b 2 cx 1 5 ln 1C 2 2 2 2b 2 ac 2b 2 ac 1 b 1 cx ∂ [b 2 2 ac . 0] b 1 cx 1 tanh21 1C 52 2b 2 2 ac 2b 2 2 ac dx 1 24. 3 52 1 C, when b 2 5 ac b 1 cx a 1 2bx 1 cx 2 sm 1 nxd dx n 25. 3 5 ln sa 1 2bx 1 cx 2d 2c a 1 2bx 1 cx 2 dx mc 2 nb 1 3 a 1 2bx 1 cx 2 c fsxd dx 26. In 3 , if f(x) is a polynomial of higher than the first a 1 2bx 1 cx 2 degree, divide by the denominator before integrating dx 1 27. 3 5 sa 1 2bx 1 cx 2dp 2sac 2 b 2ds p 2 1d b 1 cx 3 sa 1 2bx 1 cx 2dp21 s2p 2 3dc dx 1 2sac 2 b 2ds p 2 1d 3 sa 1 2bx 1 cx 2dp21 sm 1 nxd dx n 28. 3 3 52 2cs p 2 1d sa 1 2bx 1 cx 2dp dx mc 2 nb 1 1 3 sa 1 2bx 1 cx 2dp c sa 1 2bx 1 cx 2d p21 x m21 sa 1 bxdn11 29. 3 x m21 sa 1 bxdn dx 5 sm 1 ndb sm 2 1da m22 2 x sa 1 bxdn dx sm 1 ndb 3 x m sa 1 bxdn na 5 1 x m21 sa 1 bxdn21 dx m1n m 1 n3

2 30. 3 2a 1 bx dx 5 s 2a 1 bxd3 1 C 3b dx 2 5 2a 1 bx 1 C 31. 3 b 2a 1 bx sm 1 nxd dx

33. 3

2a 1 bx dx

5

2 s3mb 2 2an 1 nbxd 2a 1 bx 1 C 3b 2

sm 1 nxd 2a 1 bx

; substitute y 5 2a 1 bx, and use 21 and 22

n

35. 36. 37. 38.

f sx, 2a 1 bxd

n

dx; substitute 2a 1 bx 5 y n Fsx, 2a 1 bxd dx x x 5 sin 21 a 1 C 5 2 cos 21 a 1 C 3 2a 2 2 x 2 dx x 5 ln sx 1 2a 2 1 x 2d 1 C 5 sinh21 a 1 C 3 2a 2 1 x 2 dx x 5 ln sx 1 2x 2 2 a 2d 1 C 5 cosh21 a 1 C 3 2 2 2x 2 a dx 3 2a 1 2bx 1 cx 2 1 5 ln sb 1 cx 1 2c 2a 1 2bx 1 cx 2d 1 C, where c . 0 2c

34. 3

5 5

1 2c 1 2c 21

sinh21 cosh

b 1 cx

2ac 2 b 2 21 b 1 cx 2b 2 2 ac b 1 cx 21

1 C, when ac 2 b 2 . 0 1 C, when b 2 2 ac . 0

sin 1 C, when c , 0 22c 2b 2 2 ac sm 1 nxd dx n 5 c 2a 1 2bx 1 cx 2 39. 3 2a 1 2bx 1 cx 2 mc 2 nb 1 3 c

dx

2a 1 2bx 1 cx 2 m m22 m21 sm 2 1da x x dx dx x X 5 mc 2 40. 3 3 X mc 2a 1 2bx 1 cx 2 s2m 2 1db x m21 2 2 3 X dx when X 5 2a 1 2bx 1 cx mc x a2 41. 3 2a 2 1 x 2 dx 5 2a 2 1 x 2 1 ln sx 1 2a 2 1 x 2d 1 C 2 2 x x a2 5 2a 2 1 x 2 1 sinh21 a 1 C 2 2 x x a2 42. 3 2a 2 2 x 2 dx 5 2a 2 2 x 2 1 sin 21 a 1 C 2 2 x a2 43. 3 2x 2 2 a 2 dx 5 2x 2 2 a 2 2 ln sx 1 2x 2 2 a 2d 1 C 2 2 x x a2 5 2x 2 2 a 2 2 cosh21 a 1 C 2 2 44. 3 2a 1 2bx 1 cx 2 dx 5

b 1 cx 2a 1 2bx 1 cx 2 2c dx ac 2 b 2 1C 1 2c 3 2a 1 2bx 1 cx 2

TRANSCENDENTAL FUNCTIONS ax 45. 3 a x dx 5 1C ln a 46. 3 x neax dx 5

IRRATIONAL FUNCTIONS

32. 3

5


nsn 2 1d x neax n # # # 6 n! d 1C a c1 2 ax 1 a 2x 2 2 a nx n

47. 3 ln x dx 5 x ln x 2 x 1 C ln x ln x 1 48. 3 2 dx 5 2 x 2 x 1 C x sln xdn 1 49. 3 x dx 5 sln xdn11 1 C n11 50. 3 sin 2 x dx 5 21⁄4 sin 2x 1 1⁄2 x 1 C 5 21⁄2 sin x cos x 1 1⁄2x 1 C 51. 3 cos 2 x dx 5 1⁄4 sin 2x 1 1⁄2x 1 C 5 1⁄2 sin x cos x 1 1⁄2x 1 C cos mx 52. 3 sin mx dx 5 2 m 1 C 53. 3 cos mx dx 5

sin mx m 1C

cos sm 1 ndx cos sm 2 ndx 54. 3 sin mx cos nx dx 5 2 2 1C 2sm 1 nd 2sm 2 nd sin sm 2 ndx sin sm 1 ndx 55. 3 sin mx sin nx dx 5 2 1C 2sm 2 nd 2sm 1 nd sin sm 2 ndx sin sm 1 ndx 56. 3 cos mx cos nx dx 5 1 1C 2sm 2 nd 2sm 1 nd


dy A 1 B cos x 1 C sin x 77. 3 dx 5 A 3 a 1 p cos y a 1 b cos x 1 c sin x cos y dy 1 sB cos u 1 C sin ud 3 a 1 p cos y sin y dy 2 sB sin u 2 C cos ud 3 where b 5 p cos u, c 5 p a 1 p cos y’ sin u and x 2 u 5 y a sin bx 2 b cos bx ax 78. 3 eax sin bx dx 5 e 1C a2 1 b 2 a cos bx 1 b sin bx ax 79. 3 eax cos bx dx 5 e 1C a2 1 b 2

57. 3 tan x dx 5 2ln cos x 1 C 58. 3 cot x dx 5 ln sin x 1 C dx x 59. 3 5 ln tan 1 C 2 sin x dx x p 60. 3 cos x 5 ln tan a 1 b 1 C 4 2 dx x 61. 3 5 tan 1 C 1 1 cos x 2 dx x 62. 3 5 2 cot 1 C 1 2 cos x 2

80. 3 sin21 x dx 5 x sin21 x 1 21 2 x 2 1 C

63. 3 sin x cos x dx 5 1⁄2 sin x 1 C 2

dx 64. 3 5 ln tan x 1 C sin x cos x cos x sin n21 x n21 65.* 3 sin n x dx 5 2 1 n 3 sin n22 x dx n sin x cos n21 x n21 66.* 3 cos n x dx 5 1 n 3 cos n22 x dx n tan n21 x 67. 3 tan n x dx 5 2 3 tan n22 x dx n21 cot n21 x 68. 3 cot n x dx 5 2 2 3 cot n22 x dx n21 dx dx cos x n22 52 1 69. 3 n 2 1 3 sin n22 x sin n x sn 2 1d sin n21 x dx dx sin x n22 5 1 70. 3 cos n x n 2 1 3 cos n22 x sn 2 1d cos n21 x 71.† 3 sin p x cos q x dx 5

sin

p11

x cos p1q

q21

72.† 3 sin 2p x cos q x dx 5 2

73.† 3 sin p x cos 2q x dx 5

sin

sin

2q11

x cos x q21 q2p22 1 sin p x cos 2q12 x dx q21 3

dx a2b 2 74. 3 tan 21 a tan 1⁄2xb 1 C, 5 a 1 b cos x Äa 1 b 2a 2 2 b 2 when a 2 . b 2, b 1 a cos x 1 sin x 2b 2 2 a 2 1 5 ln 1 C, a 1 b cos x 2b 2 2 a 2 when a 2 , b 2,

b2a tanh21 a tan 1⁄2 xb 1 C, when a 2 , b 2 Äb 1 a 2b 2 a cos x dx dx x a 75. 3 5 2 3 1C a 1 b cos x b b a 1 b cos x sin x dx 1 76. 3 5 2 ln sa 1 b cos xd 1 C a 1 b cos x b 5

2

2

83. 3 cot 21 x dx 5 x cot 21 x 1 1⁄2 ln s1 1 x 2d 1 C 84. 3 sinh x dx 5 cosh x 1 C 85. 3 tanh x dx 5 ln cosh x 1 C 86. 3 cosh x dx 5 sinh x 1 C 87. 3 coth x dx 5 ln sinh x 1 C 88. 3 sech x dx 5 2 tan 21 sexd 1 C 89. 3 csch x dx 5 ln tanh sx/2d 1 C 90. 3 sinh2 x dx 5 1⁄2 sinh x cosh x 2 1⁄2 x 1 C 91. 3 cosh2 x dx 5 1⁄2 sinh x cosh x 1 1⁄2 x 1 C

q11

x cos x p21 p2q22 sin 2p12 x cos q x dx 1 p21 3

p11

82. 3 tan21 x dx 5 x tan21 x 2 1⁄2 ln s1 1 x 2d 1 C

x

q21 sin p21 x cos q11 x 1 sin p x cos q22 x dx 5 2 p 1 q3 p1q p21 sin p22 x cos q x dx 1 p 1 q3 2p11

81. 3 cos21 x dx 5 x cos21 x 2 21 2 x 2 1 C

2

* If n is an odd number, substitute cos x  z or sin x  z. † If p or q is an odd number, substitute cos x  z or sin x  z.

92. 3 sech2 x dx 5 tanh x 1 C 93. 3 csch2 x dx 5 2coth x 1 C Hints on Using Integral Tables It happens with frustrating frequency that no integral table lists the integral that needs to be evaluated. When this happens, one may (a) seek a more complete integral table, (b) appeal to mathematical software, such as Mathematica, Maple, MathCad or Derive, (c) use numerical or approximate methods, such as Simpson’s rule (see section “Numerical Methods”), or (d) attempt to transform the integral into one which may be evaluated. Some hints on such transformation follow. For a more complete list and more complete explanations, consult a calculus text, such as Thomas, “Calculus and Analytic Geometry,” Addison-Wesley, or Anton, “Calculus with Analytic Geometry,” Wiley. One or more of the following “tricks” may be successful.
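Open-source computer algebra now serves the same purpose as the commercial packages listed in (b). As a hedged illustration (SymPy is assumed to be available; it is not mentioned in the handbook), the following reproduces table entry 47, ∫ ln x dx = x ln x − x + C, together with one integral found by inspection:

import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(sp.log(x), x))            # x*log(x) - x
print(sp.integrate(x * sp.exp(-x ** 2), x))  # -exp(-x**2)/2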

TRIGONOMETRIC SUBSTITUTIONS
1. If an integrand contains √(a² − x²), substitute x = a sin u, and √(a² − x²) = a cos u.
2. Substitute x = a tan u and √(x² + a²) = a sec u.
3. Substitute x = a sec u and √(x² − a²) = a tan u.
COMPLETING THE SQUARE
4. Rewrite ax² + bx + c = a[x + b/(2a)]² + (4ac − b²)/(4a); then substitute u = x + b/(2a) and B = (4ac − b²)/(4a).


PARTIAL FRACTIONS 5. For a ratio of polynomials, where the denominator has been completely factored into linear factors pi(x) and quadratic factors qj(x), and where the degree of the numerator is less than the degree of the denominator, then rewrite r(x)/[p1(x) . . . pn(x)q1(x) . . . qm(x)]  A1 /p1(x)  . . .  An /pn(x)  (B1x  C1)/q1(x) . . .  (Bm x  Cm)/qm(x). INTEGRATION BY PARTS 6. Change the integral using the formula 3 u dv 5 uv 2 3 v du where u and dv are chosen so that (a) v is easy to find from dv, and (b) v du is easier to find than u dv. Kasube suggests (“A Technique for Integration by Parts,” Am. Math. Month., vol. 90, no. 3, Mar. 1983): Choose u in the order of preference LIATE, that is, Logarithmic, Inverse trigonometric, Algebraic, Trigonometric, Exponential. EXAMPLE. Find 3 x ln x dx. The logarithmic ln x has higher priority than does the algebraic x, so let u  ln (x) and dv  x dx. Then du  (1/x) dx; v 5 x 2/2, so 3 x ln x dx 5 uv 2 3 v du 5 sx 2/2d ln x 2 3 sx 2/2ds1/xd dx 5 (x2/2)

Properties of Definite Integrals

Definite Integrals The definite integral of f (x) dx from x  a to b

b

a

a

b

3 5 23 ;

c

b

b

a

c

a

3 1 3 5 3

MEAN-VALUE THEOREM FOR INTEGRALS b

b

a

a

3 Fsxdf sxd dx 5 FsXd 3 f sxd dx provided f(x) does not change sign from x  a to x  b; here X is some (unknown) value of x intermediate between a and b. MEAN VALUE. The mean value of f(x) with respect to x, between a and b, is f 5

b 1 f sxd dx 3 b2a a x5b

THEOREM ON CHANGE OF VARIABLE. In evaluating 3

fsxd dx, f(x) dx

x5a

may be replaced by its value in terms of a new variable t and dt, and x  a and x  b by the corresponding values of t, provided that throughout the interval the relation between x and t is a one-to-one correspondence (i.e., to each value of x there corresponds one and only one value of t, and to each value of t there corresponds one and only x5b

one value of x). So 3

t5gsbd

f sxd dx 5 3

x5a

ln x 2 3 x/2 dx 5 sx 2/2d ln x 2 x 2/4 1 C.

x  b, denoted by 3 f sxd dx, is the limit (as n increases indefinitely)


f sgstdd grstd dt.

t5gsad

DIFFERENTIATION WITH RESPECT TO THE UPPER LIMIT. If b is variable, then b

3 f sxd dx is a function b, whose derivative is a

d b f sxd dx 5 f sbd db 3a

a

of a sum of n terms: b

[ f sx1d x 1 f sx2d x 1 f sx3d x 1 # # # 1f sxnd x] 3 f sxd dx 5 nlim S`

DIFFERENTIATION WITH RESPECT TO A PARAMETER

a

built up as follows: Divide the interval from a to b into n equal parts, and call each part  x,  (b  a)/n; in each of these intervals take a value of x (say, x1, x2, . . . , xn), find the value of the function f(x) at each of these points, and multiply it by x, the width of the interval; then take the limit of the sum of the terms thus formed, when the number of terms increases indefinitely, while each individual term approaches zero. b

Geometrically, 3 fsxd dx is the area bounded by the curve y  f(x), a

the x axis, and the ordinates x  a and x  b (Fig. 2.1.108); i.e., briefly, the “area under the curve, from a to b.” The fundamental theorem for the evaluation of a definite integral is the following:

b 'f sx, cd ' b f sx, cd dx 5 3 dx 'c 3a 'c a

Functions Defined by Definite Integrals The following definite

integrals have received special names:

when k2 1.

a

i.e., the definite integral is equal to the difference between two values of any one of the indefinite integrals of the function in question. In other words, the limit of a sum can be found whenever the function can be integrated.

Fig. 2.1.108 Graph showing areas to be summed during integration.

dx 21 2 k 2 sin2 x u

2. Elliptic integral of the second kind 5 Esu, kd 5 3 21 2 k 2 sin2 x 0

dx, when k2 1. 3, 4. Complete elliptic integrals of the first and second kinds; put u  p/2 in (1) and (2).

b

3 f sxd dx 5 B 3 f sxd dxR x5b 2 B 3 f sxd dxR x5a

u

0

1. Elliptic integral of the first kind  Fsu, kd 5 3

5. The probability integral 5

2 2p

x

2x 3 e dx. 2

0 `

6. The gamma function  snd 5 3 x n21e2x dx. 0

Approximate Methods of Integration. Mechanical Quadrature

(See also section "Numerical Methods.")
1. Use Simpson's rule (see also Scarborough, "Numerical Mathematical Analyses," Johns Hopkins Press); a short code sketch follows the next paragraph.
2. Expand the function in a converging power series, and integrate term by term.
3. Plot the area under the curve y = f(x) from x = a to x = b on squared paper, and measure this area roughly by "counting squares."

Double Integrals The notation ∫∫ f(x, y) dy dx means ∫[∫ f(x, y) dy] dx, the limits of integration in the inner, or first, integral being functions of x (or constants).
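Method 1 above is easily coded. The sketch below is illustrative only (the test integral ∫₀^π sin x dx = 2 and the panel count are arbitrary choices); it applies the composite Simpson's rule with an even number of subintervals:

import math

def simpson(f, a, b, n=100):
    if n % 2:
        n += 1                     # Simpson's rule needs an even number of panels
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

print(simpson(math.sin, 0.0, math.pi))   # approximately 2.000000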


EXAMPLE. To find the weight of a plane area whose density, w, is variable, say w  f(x, y). The weight of a typical element, dx dy, is f(x, y) dx dy. Keeping x and dx constant and summing these elements from, say, y  F1(x) to y  F2(x), as determined by the shape of the boundary (Fig. 2.1.109), the weight of a typical strip perpendicular to the x axis is y5F2sxd

dx 3

f sx, yd dy

y5F1sxd

Finally, summing these strips from, say, x  a to x  b, the weight of the whole area is x5b

3 x5a

y5F2sxd

B dx 3

f sx, yd dy R or, briefly, 3 3 f sx, yd dy dx

y5F1sxd

Fig. 2.1.109 Graph showing areas to be summed during double integration.

xn has limit zero. A series of partial sums of an alternating sequence is called an alternating series. THEOREM. An alternating series converges whenever the sequence xn has limit zero. A series is a geometric series if its terms are of the form ar n. The value r is called the ratio of the series. Usually, for geometric series, the index is taken to start with n  0 instead of n  1. THEOREM. A geometric series with xn  arn, n  0, 1, 2, . . . , converges if and only if 1 r 1, and then the limit of the series is a/(1  r). The partial sums of a geometric series are sn  a(1  r n)/ (1  r). The series defined by the sequence xn  1/n, n  1, 2, . . . , is called the harmonic series. The harmonic series diverges. A series with each term xn  0 is called a “positive series.” There are a number of tests to determine whether or not a positive series sn converges. 1. Comparison test. If c1  c2      cn is a positive series that converges, and if 0 xn cn, then the series x1  x2      xn also converges. If d1  d2      dn diverges and xn  dn, then x1  x2      xn also diverges. 2. Integral test. If f(t) is a strictly decreasing function and f(n)  xn, `

then the series sn and the integral 3 f std dt either both converge or both 1

Triple Integrals The notation 3 3 3 f sx, y, zd dz dy dx means

3 b 3 B 3 f sx, y, zd dz R dyr dx Such integrals are known as volume integrals. EXAMPLE. To find the mass of a volume which has variable density, say, w  f(x, y, z). If the shape of the volume is described by a x b, F1(x) y F2(x), and G1(x, y) z G2(x, y), then the mass is given by F2sxd

b

3 3 a

F1sxd

G2sx, yd

3

diverge. 3. P test. The series defined by xn  1/np converges if p  1 and diverges if p  1 or p 1. If p  1, then this is the harmonic series. 4. Ratio test. If the limit of the sequence xn1/xn  r, then the series diverges if r  1, and it converges if 0 r 1. The test is inconclusive if r  1. 5. Cauchy root test. If L is the limit of the nth root of the nth term, lim x 1/n n , then the series converges if L 1 and diverges if L  1. If L  1, then the test is inconclusive. A power series is an expression of the form a0  a1x  a2x2      `

f sx, y, zd dz dy dx

G1sx, yd

SERIES AND SEQUENCES Sequences

A sequence is an ordered list of numbers, x1, x2, . . . , xn, . . . . An infinite sequence is an infinitely long list. A sequence is often defined by a function f(n), n  1, 2, . . . . The formula defining f(n) is called the general term of the sequence. The variable n is called the index of the sequence. Sometimes the index is taken to start with n  0 instead of n  1. A sequence converges to a limit L if the general term f(n) has limit L as n goes to infinity. If a sequence does not have a unique limit, the sequence is said to “diverge.” There are two fundamental ways a function can diverge: (1) It may become infinitely large, in which case the sequence is said to be “unbounded,” or (2) it may tend to alternate among two or more values, as in the sequence xn  (1)n. A sequence alternates if its odd-numbered terms are positive and its even-numbered terms are negative, or vice versa. Series

A series is a sequence of sums. The terms of the sums are another sequence, x1, x2, . . . . Then the series is the sequence defined by n

sn  x1  x2      xn  g xi. The sequence sn is also called the i51

sequence of partial sums of the series.

If the sequence of partial sums converges (resp. diverges), then the series is said to converge (resp. diverge). If the limit of a series is S, then the sequence defined by rn  S  sn is called the “error sequence” or the “sequence of truncation errors.” Convergence of Series THEOREM. If a series sn  x1  x2      xn converges, then it is necessary (but not sufficient) that the sequence

anxn     or g aix i. i50

The range of values of x for which a power series converges is the interval of convergence of the power series. General Formulas of Maclaurin and Taylor If f(x) and all its derivatives are continuous in the neighborhood of the point x  0 (or x  a), then, for any value of x in this neighborhood, the function f(x) may be expressed as a power series arranged according to ascending powers of x (or of x  a), as follows: f sxd 5 f s0d 1

f sxd 5 f sad 1

f rs0d f - s0d 3 f ss0d 2 x1 x 1 x 1c 1! 2! 3! f sn21d s0d n21 1 sPndx n 1 x sn 2 1d!

(Maclaurin)

f rsad f - sad f ssad sx 2 ad 1 sx 2 ad2 1 sx 2 ad31 1! 2! 3!

c1

f sn21d sad sn 2 1d!

sx 2 adn21 1 sQ ndsx 2 adn

(Taylor)

Here (Pn)xn, or (Qn)(x  a)n, is called the remainder term; the values of the coefficients Pn and Qn may be expressed as follows: Pn 5 [ f snd ssxd]/n! 5 [s1 2 tdn21f snd stxd]/sn 2 1d!

Q n 5 5 f snd[a 1 ssx 2 ad]6/n!

5 5 s1 2 tdn21f snd[a 1 tsx 2 ad]6/sn 2 1d!

where s and t are certain unknown numbers between 0 and 1; the s form is due to Lagrange, the t form to Cauchy. The error due to neglecting the remainder term is less than sPndx n, or sQ nd(x  a)n, where Pn, or Q n, is the largest value taken on by Pn, or


Qn, when s or t ranges from 0 to 1. If this error, which depends on both n and x, approaches 0 as n increases (for any given value of x), then the general expression with remainder becomes (for that value of x) a convergent infinite series. The sum of the first few terms of Maclaurin's series gives a good approximation to f(x) for values of x near x = 0; Taylor's series gives a similar approximation for values near x = a. The Maclaurin series of some important functions are given below. Power series may be differentiated term by term, so the derivative of a power series a0 + a1x + a2x² + · · · + anxⁿ + · · · is a1 + 2a2x + · · · + nanx^(n−1) + · · · . The power series of the derivative has the same interval of convergence, except that the endpoints may or may not be included in the interval.
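As an added numerical illustration (not from the handbook; the choice of eˣ, the expansion point 0, and the truncation orders are arbitrary), partial sums of the Maclaurin series converge rapidly near x = 0:

import math

def maclaurin_exp(x, n_terms):
    # partial sum of e^x = 1 + x + x^2/2! + ...
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

x = 0.5
for n in (2, 4, 8):
    approx = maclaurin_exp(x, n)
    print(n, approx, abs(approx - math.exp(x)))   # error falls quickly with n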

`

1 5 g xn 12x n50 ` sm 1 n 2 1d! 1 511 g xn m s1 2 xd n51 sm 2 1d!n!

ln a

x x2 x3 x4 1 1 1 1c 1! 2! 3! 4! 2

tan 21 y 5 y 2

y3 y5 y7 1 2 1c 7 3 5

[21 # y # 11]

cot 21 y 5 1⁄2p 2 tan21 y.

x4 x6 x2 1c 1 1 2! 4! 6!

[2` , x , `]

21 , x , 1

sinh21 y 5 y 2

y3 3y 5 5y 7 1 2 1c 40 112 6

[21 , y , 11]

tanh21 y 5 y 1

y3 y7 y5 1c 1 1 7 3 5

[21 , y , 11]

3

x3 x4 x5 c x2 1 2 1 2 3 4 5

[21 , x , 11]

x3 x4 x5 c x2 2 2 2 2 2 3 4 5

[21 , x , 11] [21 , x , 11]

x11 1 1 1 1 b 5 2a x 1 3 1 5 1 7 1 c b x21 7x 3x 5x [x , 21 or 11 , x] x21 1 x21 1 x21 1 a b 1 a b 1 cd x11 3 x11 5 x11 3

5

[0 , x , `] 3 x x 1 ln sa 1 xd 5 ln a 1 2 c 1 a b 2a 1 x 3 2a 1 x 5 x 1 a b 1 cd 5 2a 1 x

Series for the Trigonometric Functions In the following formulas, all angles must be expressed in radians. If D  the number of degrees in the angle, and x  its radian measure, then x  0.017453D.

x3 x5 x7 1 2 1c 3! 7! 5! x2 x4 x6 x8 cos x 5 1 2 1 2 1 2c 2! 4! 8! 6!

ORDINARY DIFFERENTIAL EQUATIONS

An ordinary differential equation is one which contains a single independent variable, or argument, and a single dependent variable, or function, with its derivatives of various orders. A partial differential equation is one which contains a function of several independent variables, and its partial derivatives of various orders. The order of a differential equation is the order of the highest derivative which occurs in it. A solution of a differential equation is any relation among the variables, involving no derivatives, though possibly involving integrations which, when substituted in the given equation, will satisfy it. The general solution of an ordinary differential equation of the nth order will contain n arbitrary constants. If specific values of the arbitrary constants are chosen, then a solution is called a particular solution. For most problems, all possible particular solutions to a differential equation may be found by choosing values for the constants in a general solution. In some cases, however, other solutions exist. These are called singular solutions. EXAMPLE. The differential equation (yy)2  a2  y2  0 has general solution (x  c)2  y2  a2, where c is an arbitrary constant. Additionally, it has the two singular solutions y  a and y  a. The singular solutions form two parallel lines tangent to the family of circles given by the general solution.

The example illustrates a general property of singular solutions; at each point on a singular solution, the singular solution is tangent to some curve given in the general solution. Methods of Solving Ordinary Differential Equations

[0 , a , 1`, 2a , x , 1`]

sin x 5 x 2

[21 # y # 11]

cosh x 5 1 1

m m 3 c m 2 x1 x 1 x 1 1! 2! 3! [a . 0, 2` , x , 1` ]

1

y3 5y 7 3y 5 1 1c 1 40 112 6

21 , x , 1

[2` , x , 1` ]

11x x7 x3 x5 1 1 cb b 5 2ax 1 1 7 12x 3 5

ln x 5 2 c

sin 21 y 5 y 1

[2` , x , `]

where m  ln a  (2.3026)(log10 a).

ln a

[2p , x , 1p]

x3 x5 x7 1 1 1c 3! 7! 5!

Exponential and Logarithmic Series

ln s1 2 xd 5 2x 2

2x 5 x7 x3 x 1 2 2 2c cot x 5 x 2 2 3 45 945 4725

sinh x 5 x 1

Geometrical Series

ln s1 1 xd 5 x 2

[2p/2 , x , 1p/2]

Series for the Hyperbolic Functions (x a pure number)

The range of values of x for which each of the series is convergent is stated at the right of the series.

a x 5 emx 5 1 1

62x 9 x3 2x 5 17x 7 1 1 1 1c 3 15 315 2835

cos21 y 5 1⁄2p 2 sin21 y;

Series Expansions of Some Important Functions

ex 5 1 1

tan x 5 x 1


[2` , x , 1`] [2` , x , 1`]

DIFFERENTIAL EQUATIONS OF THE FIRST ORDER
1. If possible, separate the variables; i.e., collect all the x's and dx on one side, and all the y's and dy on the other side; then integrate both sides, and add the constant of integration.
2. If the equation is homogeneous in x and y, the value of dy/dx in terms of x and y will be of the form dy/dx = f(y/x). Substituting y = xt will enable the variables to be separated.
Solution: logₑ x = ∫ dt/[f(t) − t] + C.
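Whichever analytic method applies, a quick numerical check is often worthwhile. The sketch below is illustrative only (the equation dy/dx = xy, the step count, and the use of Euler's method are arbitrary choices, not the handbook's procedure); separating the variables gives the closed form y = y0·e^(x²/2), which the marching solution should approach:

import math

def euler(rhs, x0, y0, x_end, n=1000):
    # Euler's method for dy/dx = rhs(x, y)
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * rhs(x, y)
        x += h
    return y

print(euler(lambda x, y: x * y, 0.0, 1.0, 1.0))   # numerical value at x = 1
print(math.exp(0.5))                              # exact value from separation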


3. The expression f(x, y) dx  F(x, y) dy is an exact differential if 'fsx, yd 'Fsx, yd ( P, say). In this case the solution of f(x, y) dx  5 'y 'x F(x, y) dy  0 is f sx, yd dx 1 [Fsx, yd 2 P dx] dy 5 C Fsx, yd dy 1 [ f sx, yd 2 P dy] dx 5 C

or

dx 4. Linear differential equation of the first order: 1 f sxd # y 5 dx F(x). Solution: y 5 e2P[ePFsxd dx 1 C ], where P 5 f sxd dx dy 1 f sxd # y 5 Fsxd # y n. Substituting dx y1  n  v gives (dv/dx)  (1  n)f(x) . v  (1  n)F(x), which is linear in v and x. 6. Clairaut’s equation: y  xp  f (p), where p  dy/dx. The solution consists of the family of lines given by y  Cx  f(C), where C is any constant, together with the curve obtained by eliminating p between the equations y  xp  f( p) and x  f ( p)  0, where f ( p) is the derivative of f ( p). 7. Riccati’s equation. p  ay2  Q(x)y  R(x)  0, where p  dy/dx can be reduced to a second-order linear differential equation (d2u/dx2)  Q(x)(du/dx)  R(x)  0 by the substitution y  du/dx. 8. Homogeneous equations. A function f (x, y) is homogeneous of degree n if f(rx, ry)  rm f(x, y), for all values of r, x, and y. In practice, this means that f(x, y) looks like a polynomial in the two variables x and y, and each term of the polynomial has total degree m. A differential equation is homogeneous if it has the form f(x, y)  0, with f homogeneous. (xy  x2) dx  y2 dy  0 is homogeneous. Cos (xy) dx  y2 dy  0 is not. If an equation is homogeneous, then either of the substitutions y  vx or x  vy will transform the equation into a separable equation. 9. dy/dx  f [(ax  by  c)/(dx  ey  g)] is reduced to a homogeneous equation by substituting u  ax  by  c, v  dx  ey  g, if ae  bd  0, and z  ax  by, w  dx  ey if ae  bd  0. 5. Bernoulli’s equation:

DIFFERENTIAL EQUATIONS OF THE SECOND ORDER 10. Dependent variable missing. If an equation does not involve the variable y, and is of the form F(x, dy/dx, d 2y/dx2)  0, then it can be reduced to a first-order equation by substituting p  dy/dx and dp/dx  d 2y/dx2. 11. Independent variable missing. If the equation is of the form F(y, dy/dx, d 2y/dx 2)  0, and so is missing the variable x, then it can be reduced to a first-order equation by substituting p  dy/dx and p(dp/dy)  d 2y/dx2. d 2y 5 2n2y. 12. dx 2 Solution: y  C1 sin (nx  C2), or y  C3 sin nx  C4 cos nx. d 2y 5 1n2y. 13. dx 2 Solution: y  C1 sinh (nx  C2), or y  C3 enx  C4enx. d 2y 5 f syd. 14. dx 2 dy 1 C2, where P 5 3 f syd dy. Solution: x 5 3 2C1 1 2P d 2y 5 fsxd. 15. dx 2 Solution: or

y 5 3 P dx 1 C1x 1 C2

where P 5 3 f sxd dx,

y 5 xP 2 3 x f sxd dx 1 C1x 1 C2.

dy dy d 2y dz ≤ . Putting 5 z, 2 5 , dx dx dx dx dz zdz x5 3 and y5 3 1 C1, 1 C2 fszd fszd then eliminate z from these two equations. dy d 2y 1 a 2y 5 0. 17. The equation for damped vibrations: 2 1 2b dx dx CASE 1. If a2  b2  0, let m 5 2a 2 2 b 2. Solution: 16.

d 2y

dx 2

5f¢

y 5 C1e2bx sin smx 1 C2d or

y 5 e2bx[C3 sin smxd 1 C4 cos smxd]

CASE 2. If a2  b2  0, solution is y  ebx(C1  C2 x). CASE 3. If a2  b2 0, let n 5 2b 2 2 a 2. Solution: y 5 C1e2bx sinh snx 1 C2d or y 5 C3e2sb1ndx 1 C4e2sb2ndx dy d 2y 1 2b 1 a 2y 5 c. 18. dx dx 2 c Solution: y 5 2 1 y1, where y1  the solution of the corresponding a equation with second member zero [see type 17 above]. dy d 2y 1 2b 1 a 2y 5 c sin skxd. 19. dx dx 2 Solution: y  R sin (kx  S)  y1 where R 5 c/ 2sa 2 2 k 2d2 1 4b 2k 2, tan S  2bk/(a2  k2), and y1  the solution of the corresponding equation with second member zero [see type 17 above]. dy d 2y 1 2b 1 a 2y 5 fsxd. 20. dx dx 2 Solution: y  R sin (kx  S)  y1 where R 5 c/ 2sa 2 2 k 2d2 1 4b 2k 2, tan S  2bk/(a2  k2), and y1  the solution of the corresponding equation with second member zero [see type 17 above]. If b2 a2, 1 B em1x 3 e2m1x f sxd dx 2 em2x 3 e2m2x f sxd dxR 2 2b 2 2 a 2 where m1 5 2b 1 2b 2 2 a 2 and m2 5 2b 2 2b 2 2 a 2. y0 5

If b2 a2, let m 5 2a 2 2 b 2, then 1 y0 5 m e2bx B sin smxd 3 ebx cos smxd # f sxd dx 2 cos smxd 3 ebx sin smxd # f sxd dxd If b 2 5 a 2, y0 5 e2bx B x 3 ebxf sxd dx 2 3 x # ebx f sxd dxR. Types 17 to 20 are examples of linear differential equations with constant coefficients. The solutions of such equations are often found most simply by the use of Laplace transforms. (See Franklin, “Fourier Methods,” pp. 198–229, McGraw-Hill or Kreyszig, “Advanced Engineering Mathematics,” Wiley.) Linear Equations

For the linear equation of the nth order An sxd d ny/dx n 1 An21 sxd d n21y/dx n21 1 c 1 A1 sxddy/dx 1 A0 sxdy 5 Esxd the general solution is y  u  c1u1  c2u2      cnun. Here u, the particular integral, is any solution of the given equation, and u1, u2, . . . , un form a fundamental system of solutions of the homogeneous equation obtained by replacing E(x) by zero. A set of solutions is fundamental, or independent, if its Wronskian determinant W(x) is not


zero, where u1 u r1 # Wsxd 5 6 # #

u2 u r2 # # #

u sn21d u sn21d 1 2

c u n c ur n c # c # 6 c # c u sn21d n

For any n functions, W(x)  0 if some one ui is linearly dependent on the others, as un  k1u1  k2u2      kn1un1 with the coefficients ki constant. And for n solutions of a linear differential equation of the nth order, if W(x)  0, the solutions are linearly independent. Constant Coefficients To solve the homogeneous equation of the nth order Andny/dxn  An1 dn1 y/dxn1      A1dy/dx  A0 y  0, An  0, where An, An1, . . . , A0 are constants, find the roots of the auxiliary equation An p n 1 An21 p n21 1 c1 A1 p 1 A0 5 0 For each simple real root r, there is a term cerx in the solution. The terms of the solution are to be added together. When r occurs twice among the n roots of the auxiliary equation, the corresponding term is erx(c1  c2x). When r occurs three times, the corresponding term is erx(c1  c2x  c3x2), and so forth. When there is a pair of conjugate complex roots a  bi and a  bi, the real form of the terms in the solution is eax(c1 cos bx  d1 sin bx). When the same pair occurs twice, the corresponding term is eax[(c1  c2 x) cos bx  (d1  d2 x) sin bx], and so forth. Consider next the general nonhomogeneous linear differential equation of order n, with constant coefficients, or And ny/dx n 1 An21d n21y/dx n21 1 # # # 1 A1dy/dx 1 A0 y 5 Esxd We may solve this by adding any particular integral to the complementary function, or general solution, of the homogeneous equation obtained by replacing E(x) by zero. The complementary function may be found from the rules just given. And the particular integral may be found by the methods of the following paragraphs. Undetermined Coefficients In the last equation, let the right member E(x) be a sum of terms each of which is of the type k, k cos bx, k sin bx, keax, kx, or more generally, kxmeax, kxmeax cos bx, or kxmeax sin bx. Here m is zero or a positive integer, and a and b are any real numbers. Then the form of the particular integral I may be predicted by the following rules. CASE 1. E(x) is a single term T. Let D be written for d/dx, so that the given equation is P(D)y  E(x), where P(D)  AnDn  An  1Dn  1      A1D  A0 y. With the term T associate the simplest polynomial Q(D) such that Q(D)T  0. For the particular types k, etc., Q(D) will be D, D2  b2, D2  b2, D  a, D2; and for the general types kxm eax, etc., Q(D) will be (D  a)m1, (D2  2aD  a2  b2)m+1, (D2  2aD  a2  b2)m1. Thus Q(D) will always be some power of a first- or second-degree factor, Q(D)  Fv, F  D  a, or F  D2 – 2aD  a2  b2. Use the method described under Constant Coefficients to find the terms in the solution of P(D)y  0 and also the terms in the solution of Q(D)P(D)y  0. Then assume the particular integral I is a linear combination with unknown coefficients of those terms in the solution of Q(D)P(D)y  0 which are not in the solution of P(D)y  0. Thus if Q(D)  Fq and F is not a factor of P(D), assume I  (Ax q1  Bx q2      L)eax when F  D  a, and assume I  (Ax q1  Bx q2      L)eax cos bx  (Mx q1  Nx q2      R)eax sin bx when F  D2  2aD  a2  b2. When F is a factor of P(D) and the highest power of F which is a divisor of P(D) is Fk, try the I above multiplied by xk. CASE 2. E(x) is a sum of terms. With each term in E(x), associate a polynomial Q(D)  Fq as before. Arrange in one group all the terms that have the same F. The particular integral of the given equation will be the sum of solutions of equations each of which has one group on the right. For any one such equation, the form of the particular integral


is given as for Case 1, with q the highest power of F associated with any term of the group on the right. After the form has been found in Case 1 or 2, the unknown coefficients follow when we substitute back in the given differential equation, equate coefficients of like terms, and solve the resulting system of simultaneous equations. Variation of Parameters. Whenever a fundamental system of solutions u1, u2, . . . , un for the homogeneous equation is known, a particular integral of An sxdd ny/dx n 1 An21 sxdd n21y/dx n21 1 c 1 A1 sxddy/dx 1 A0 sxdy 5 Esxd may be found in the form y 5 vkuk. In this and the next few summations, k runs from 1 to n. The vk are functions of x, found by integrating their derivatives vrk, and these derivatives are the solutions of the n simultaneous equations v kr uk 5 0, v kr u kr 5 0, v kr us k 5 0, c , v rku sn22d 5 0, An sxdv rku sn21d 5 Esxd. To find the vk from vk  k k v kr dx 1 ck, any choice of constants will lead to a particular integral. x

The special choice vk 5 3 v rk dx leads to the particular integral having 0

y, y, y, . . . . , y (n 1) each equal to zero when x  0. The Cauchy-Euler Equidimensional Equation This has the form knx nd ny/dx n 1 kn21x n21d n21y/dx n21 1 c 1 k1 x dy/dx 1 k0 y 5 Fsxd

The substitution x  et, which makes x dy/dx 5 dy/dt x kd ky/dx k 5 sd/dt 2 k 1 1d c sd/dt 2 2dsd/dt 2 1d dy/dt transforms this into a linear differential equation with constant coefficients. Its solution y  g(t) leads to y  g(ln x) as the solution of the given Cauchy-Euler equation. Bessel’s Equation The general Bessel equation of order n is: x 2ys 1 xyr 1 sx 2 2 n2dy 5 0 This equation has general solution y 5 AJn sxd 1 BJ2n sxd when n is not an integer. Here, Jn(x) and Jn(x) are Bessel functions (see section on Special Functions). In case n  0, Bessel’s equation has solution ` s21dkH x 2k k y 5 AJ0 sxd 1 B B J0 sxd ln sxd 2 g R 22k sk!d2 k51 where Hk is the kth partial sum of the harmonic series, 1  1⁄2  1⁄3      1/k. In case n  1, the solution is y 5 AJ1 sxd 1 B bJ1 sxd ln sxd 1 1/x `

s21dk sHk 1 Hk21dx 2k21

k51

22kk!sk 2 1d!

2 Bg

Rr

In case n  1, n is an integer, solution is `

s21dk11 sn 2 1d!x 2k2n R 2k112n k!s1 2 ndk k50 2 2k1n ` s21dk11 sH 1 H k k11dx 1 1⁄2 B g Rr 2k1n 2 k!sk 1 nd! k50

y 5 AJn sxd 1 B bJn sxd ln sxd 1 B g

Solutions to Bessel’s equation may be given in several other forms, often exploiting the relation between Hk and ln (k) or the so-called Euler constant. General Method of Power Series Given a general differential equation F(x, y, y, . . .)  0, the solution may be expanded as a Maclaurin series, so y 5 `n50anx n, where an  f (n)(0)/n!. The power


series for y may be differentiated formally, so that y′ = Σ_{n=1}^∞ n a_n x^(n−1) = Σ_{n=0}^∞ (n + 1)a_{n+1} xⁿ, and y″ = Σ_{n=2}^∞ n(n − 1)a_n x^(n−2) = Σ_{n=0}^∞ (n + 1)(n + 2)a_{n+2} xⁿ. Substituting these series into the equation F(x, y, y′, . . .) = 0 often gives useful recursive relationships, giving the value of a_n in terms of previous values. If approximate solutions are useful, then it may be sufficient to take the first few terms of the Maclaurin series as a solution to the equation.

EXAMPLE. Consider y″ − y′ + xy = 0. The procedure gives Σ_{n=0}^∞ (n + 1)(n + 2)a_{n+2} xⁿ − Σ_{n=0}^∞ (n + 1)a_{n+1} xⁿ + x Σ_{n=0}^∞ a_n xⁿ = Σ_{n=0}^∞ (n + 1)(n + 2)a_{n+2} xⁿ − Σ_{n=0}^∞ (n + 1)a_{n+1} xⁿ + Σ_{n=1}^∞ a_{n−1} xⁿ = (2a2 − a1)x⁰ + Σ_{n=1}^∞ [(n + 1)(n + 2)a_{n+2} − (n + 1)a_{n+1} + a_{n−1}] xⁿ = 0. Thus 2a2 − a1 = 0 and, for n > 0, (n + 1)(n + 2)a_{n+2} − (n + 1)a_{n+1} + a_{n−1} = 0. Thus, a0 and a1 may be determined arbitrarily, but thereafter, the values of a_n are determined recursively.
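The recursion just derived is easy to evaluate. The sketch below is an added illustration (the starting values a0 = 1, a1 = 0 and the truncation order are arbitrary); it generates the coefficients for y″ − y′ + xy = 0 and sums the truncated series at a sample point:

def series_coefficients(a0, a1, n_max):
    a = [a0, a1, a1 / 2.0]                     # from 2*a2 - a1 = 0
    for n in range(1, n_max - 1):
        # (n + 1)(n + 2) a_{n+2} - (n + 1) a_{n+1} + a_{n-1} = 0
        a.append(((n + 1) * a[n + 1] - a[n - 1]) / ((n + 1) * (n + 2)))
    return a

coeffs = series_coefficients(1.0, 0.0, 10)
x = 0.3
print(sum(c * x ** k for k, c in enumerate(coeffs)))   # truncated series value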

PARTIAL DIFFERENTIAL EQUATIONS

Partial differential equations (PDEs) arise when there are two or more independent variables. Two notations are common for the partial derivatives involved in PDEs, the “del” or fraction notation, where the first partial derivative of f with respect to x would be written ' f/'x, and the subscript notation, where it would be written fx. In the same way that ordinary differential equations often involve arbitrary constants, solutions to PDEs often involve arbitrary functions. EXAMPLE. fxy  0 has as its general solution g(x)  h(y). The function g does not depend on y, so gy  0. Similarly, fx  0.

PDEs usually involve boundary or initial conditions dictated by the application. These are analogous to initial conditions in ordinary differential equations. In solving PDEs, it is seldom feasible to find a general solution and then specialize that general solution to satisfy the boundary conditions, as is done with ordinary differential equations. Instead, the boundary conditions usually play a key role in the solution of a problem. A notable exception is the case of linear, homogeneous PDEs, since they have the property that if f1 and f2 are solutions, then f1 + f2 is also a solution. The wave equation is one such equation, and this property is the key to the solution described in the section "Fourier Series." Often it is difficult to find exact solutions to PDEs, so it is necessary to resort to approximations or numerical solutions. (See also "Numerical Methods" below.)

Classification of PDEs

Linear A PDE is linear if it involves only first derivatives, and then only to the first power. The general form of a linear PDE, in two independent variables, x and y, and the dependent variable z, is P(x, y, z)fx + Q(x, y, z)fy = R(x, y, z), and it will have a solution of the form z = f(x, y) if its solution is a function, or F(x, y, z) = 0 if the solution is not a function.

Elliptic Laplace's equation fxx + fyy = 0 and Poisson's equation fxx + fyy = g(x, y) are the prototypical elliptic equations. They have analogs in more than two variables. They do not explicitly involve the variable time and generally describe steady-state or equilibrium conditions: gravitational potential, where boundary conditions are distributions of mass; electrical potential, where boundary conditions are electrical charges; or equilibrium temperatures, where boundary conditions are points where the temperature is held constant.

Parabolic Tt = Txx + Tyy represents the dynamic condition of diffusion or heat conduction, where T(x, y, t) usually represents the temperature at time t at the point (x, y). Note that when the system reaches steady state, the temperature is no longer changing, so Tt = 0, and this becomes Laplace's equation.

Hyperbolic Wave propagation is described by equations of the type utt = c²(uxx + uyy), where c is the velocity of waves in the medium.
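As an added sketch of a numerical approach to a parabolic problem (this scheme is not given in the handbook; the grid sizes, end conditions, and the stability ratio r = Δt/Δx² ≤ ½ are assumptions of the example), the one-dimensional analog Tt = Txx can be advanced by explicit finite differences:

def heat_step(T, r):
    # one explicit step of T_t = T_xx; the ends are held at temperature 0
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
    return new

nx, dx = 21, 0.05
dt = 0.4 * dx * dx                  # satisfies r = dt/dx**2 <= 0.5
T = [0.0] * nx
T[nx // 2] = 1.0                    # initial hot spot at the midpoint
for _ in range(50):
    T = heat_step(T, dt / dx ** 2)
print(max(T))                       # the peak diffuses and decays with time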

VECTOR CALCULUS Vector Fields A vector field is a function that assigns a vector to each point in a region. If the region is two-dimensional, then the vectors assigned are two-dimensional, and the vector field is a two-dimensional vector field, denoted F(x, y). In the same way, a three-dimensional vector field is denoted F(x, y, z). A three-dimensional vector field can always be written:

F(x, y, z) = f1(x, y, z)i + f2(x, y, z)j + f3(x, y, z)k

where i, j, and k are the basis vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1), respectively. The functions f1, f2, and f3 are called coordinate functions of F.

Gradient If f(x, y) is a function of two variables, then the gradient is the associated vector field grad(f) = fx i + fy j. Similarly, the gradient of a function of three variables is fx i + fy j + fz k. At any point, the direction of the gradient vector points in the direction in which the function is increasing most rapidly, and the magnitude is the rate of increase in that direction.

Parameterized Curves If C is a curve from a point A to a point B, either in two dimensions or in three dimensions, then a parameterization of C is a vector-valued function r(t) = r1(t)i + r2(t)j + r3(t)k which satisfies r(a) = A, r(b) = B, and r(t) is on the curve C for a ≤ t ≤ b. It is also necessary that the function r(t) be continuous and one-to-one. A given curve C has many different parameterizations. The derivative of a parameterization r(t) is a vector-valued function r′(t) = r1′(t)i + r2′(t)j + r3′(t)k. The derivative is the velocity function of the parameterization. It is always tangent to the curve C, and the magnitude is the speed of the parameterization.

Line Integrals If F is a vector field, C is a curve, and r(t) is a parameterization of C, then the line integral, or work integral, of F along C is

W = ∫_C F · dr = ∫_a^b F(r(t)) · r′(t) dt

This is sometimes called the work integral because if F is a force field, then W is the amount of work necessary to move an object along the curve C from A to B. Divergence and Curl The divergence of a vector field F is div F  f1x  f2y  f3z. If F represents the flow of a fluid, then the divergence at a point represents the rate at which the fluid is expanding at that point. Vector fields with div F  0 are called incompressible. The curl of F is curl F 5 s f3y 2 f2z d i 1 s f1z 2 f3x d j 1 s f2x 2 f1y dk If F is a two-dimensional vector field, then the first two terms of the curl are zero, so the curl is just curl F 5 s f2x 2 f1y d k If F represents the flow of a fluid, then the curl represents the rotation of the fluid at a given point. Vector fields with curl F  0 are called irrotational.
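The work integral lends itself to direct numerical evaluation. The sketch below is an added illustration only (the field F = (−y, x), the unit-circle parameterization, and the sample count are arbitrary choices); the exact value of this particular integral is 2π:

import math

def work_integral(F, r, rprime, a, b, n=1000):
    # midpoint-rule approximation of the line integral of F along r(t)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        Fx, Fy = F(*r(t))
        dx, dy = rprime(t)
        total += (Fx * dx + Fy * dy) * h
    return total

F = lambda x, y: (-y, x)
r = lambda t: (math.cos(t), math.sin(t))
rprime = lambda t: (-math.sin(t), math.cos(t))
print(work_integral(F, r, rprime, 0.0, 2 * math.pi))   # about 6.2832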

Two important facts relate div, grad, and curl: 1. div (curl F)  0 2. curl (grad f )  0 Conservative Vector Fields A vector field F  f1i  f2 j  f3k is conservative if all of the following are satisfied: f1y 5 f2x

f1z 5 f3x

and

f2z 5 f3y

If F is a two-dimensional vector field, then the second and third conditions are always satisfied, and so only the first condition must be checked. Conservative vector fields have three important properties: 1. 3 F # dr has the same value regardless of what curve C is chosen that C

connects the points A and B. This property is called path independence. 2. F is the gradient of some function f (x, y, z). 3. Curl F  0.


In the special case that F is a conservative vector field, if F = grad(f), then

∫_C F · dr = f(B) − f(A)

Properties of Laplace Transforms Fssd 5 ls fstdd

f(t)

THEOREMS ABOUT LINE AND SURFACE INTEGRALS

Two important theorems relate line integrals with double integrals. If R is a region in the plane and if C is the curve tracing the boundary of R in the positive (counterclockwise) direction, and if F is a continuous vector field with continuous first partial derivatives, line integrals on C are related to double integrals on R by Green’s theorem and the divergence theorem. Green’s Theorem

∫_C F · dr = ∫∫_R curl(F) · dS

The right-hand double integral may also be written as ∫∫_R |curl(F)| dA.

2. 3. 4. 5.

Name of rule

`

2st 3 e f std dt

1. f(t)

C

C

Table 2.1.5


Definition

0

fstd 1 gstd k f (t) f rstd f sstd

6. f -std 7. 3f std dt

Fssd 1 Gssd kF(s) sFssd 2 f s01d s 2Fssd 2 sf s01d 2 f rs01d s 3Fssd 2 s 2f s01d 2 sf rs01d 2 f ss01d

Addition Scalar multiples Derivative laws

s1/sdFssd

Integral law

1 s1/sd 3f std dt|01 8. 9. 10. 11. 12.

f(bt) eatf (t) f * gstd ua stdfst 2 ad 2 t f st d

(1/b)F(s/b) Fss 2 ad F(s)G(s) Fssde2at Frssd

Change of scale First shifting Convolution Second shifting Derivative in s

R

Green’s theorem describes the total rotation of a vector field in two different ways, on the left in terms of the boundary of the region and on the right in terms of the rotation at each point within the region. Divergence Theorem

∫_C F · N ds = ∫∫_R div(F) dA

where N is the so-called normal vector field to the curve C. The divergence theorem describes the expansion of a region in two distinct ways, on the left in terms of the flux across the boundary of the region and on the right in terms of the expansion at each point within the region. Both Green’s theorem and the divergence theorem have corresponding theorems involving surface integrals and volume integrals in three dimensions. LAPLACE AND FOURIER TRANSFORMS Laplace Transforms The Laplace transform is used to convert equations involving a time variable t into equations involving a frequency

Table 2.1.4

Fssd 5 ls fstdd

12. 13. 14. 15.

a/s 1/s 0 t , a e2as/S ua std 5 e 1 t.a da std 5 u ra std e2as at e 1/ss 2 ad s1>rde2t/r 1/srs 1 1d 2at ke k/ss 1 ad a/ss 2 1 a 2d sin at s/ss 2 1 a 2d cos at b/[ss 1 ad2 1 b 2] e2at sin bt e at ebt 1 2 a2b a2b ss 2 adss 2 bd 1/s 2 t t2 2/s 3 tn n!/s n11 ta sa 1 1d/s a11

16. 17. 18. 19. 20. 21. 22.

sinh at cosh at t neat t cos at t sin at sin at 2 at cos at arctan a/s

11.

Therefore, Fssd 5 l[ f(t)]. The Laplace integral is defined as

a/ss 2 2 a 2d s/ss 2 2 a 2d n!/ss 2 adn11 ss 2 2 a 2d/ss 2 1 a 2d2 2as/ss 2 1 a 2d2 2a 3/ss 2 1 a 2d2 ssin atd/t

`

`

0

0

l 5 3 e2st dt. Therefore, l[ f std] 5 3 e2stf std dt Name of function

1. a 2. 1

4. 5. 6. 7. 8. 9. 10.

f(t)  a function of time s  a complex variable of the form ss 1 jvd F(s)  an equation expressed in the transform variable s, resulting from operating on a function of time with the Laplace integral l  an operational symbol indicating that the quantity which it prefixes is to be transformed into the frequency domain f(0)  the limit from the positive direction of f(t) as t approaches zero  f(0 )  the limit from the negative direction of f(t) as t approaches zero

Laplace Transforms

f(t)

3.

variable s. There are essentially three reasons for doing this: (1) higherorder differential equations may be converted to purely algebraic equations, which are more easily solved; (2) boundary conditions are easily handled; and (3) the method is well-suited to the theory associated with the Nyquist stability criteria. In Laplace-transformation mathematics the following symbols and equations are used (Tables 2.1.4 and 2.1.5):

Direct Transforms EXAMPLE. f std 5 sin bt

Heavyside or step function

`

Dirac or impulse function

l[ f std] 5 ls sin btd 5 3 sin bt e2st dt 0

but

sin bt 5 l ssin btd 5 5

Gamma function (see “Special Functions”)

ejbt 2 e2jbt 2j

where

j 2 5 21

1 ` jbt se 2 e2jbtde2st dt 2j 30 ` 1 21 ¢ ≤ es2s1jbdt 2 2j s 2 jb 0

2 5

` 1 21 ¢ ≤ es2s2jbdt 2 2j s 1 jb 0

b s 2 1 b2

Table 2.1.4 lists the transforms of common time-variable expressions. Some special functions are frequently encountered when using Laplace methods.
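As a cross-check of such tables (an added illustration; SymPy is assumed to be available and is not part of the handbook), a computer algebra system reproduces standard entries such as the transforms of sin bt and t·e^(at):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b = sp.symbols('a b', positive=True)

print(sp.laplace_transform(sp.sin(b * t), t, s, noconds=True))       # b/(b**2 + s**2)
print(sp.laplace_transform(t * sp.exp(a * t), t, s, noconds=True))   # equals 1/(s - a)**2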


The Heaviside, or step, function u_a(t), sometimes written u(t − a), is zero for all t < a and 1 for all t > a. Its value at t = a is defined differently in different applications, as 0, 1/2, or 1, or it is simply left undefined. In Laplace applications, the value of a function at a single point does not matter. The Heaviside function describes a force which is "off" until time t = a and then instantly goes "on."
The Dirac delta function, or impulse function, d_a(t), sometimes written d(t − a), is the derivative of the Heaviside function. Its value is always zero, except at t = a, where its value is "positive infinity." It is sometimes described as a "point mass function." The delta function describes an impulse or an instantaneous transfer of momentum.
The derivative of the Dirac delta function is called the dipole function. It is less frequently encountered.
The convolution f * g(t) of two functions f(t) and g(t) is defined as

    f * g(t) = \int_0^t f(u)\, g(t - u)\,du

Laplace transforms are often used to solve differential equations arising from so-called linear systems. Many vibrating systems and electrical circuits are linear systems. If an input function f_i(t) describes the forces exerted upon a system and a response or output function f_o(t) describes the motion of the system, then the transfer function T(s) = F_o(s)/F_i(s). Linear systems have the special property that the transfer function is independent of the input function, within the elastic limits of the system. Therefore,

    \frac{F_o(s)}{F_i(s)} = \frac{G_o(s)}{G_i(s)}

This gives a technique for describing the response of a system to a complicated input function if its response to a simple input function is known.

EXAMPLE. Solve y'' + 2y' − 3y = 8e^t subject to initial conditions y(0) = 2 and y'(0) = 0.
Let y = f(t) and Y = F(s). Take Laplace transforms of both sides, substitute for y(0) and y'(0), and get

    s^2 Y - 2s + 2(sY - 2) - 3Y = \frac{8}{s - 1}

Solve for Y, apply partial fractions, and get

    Y = \frac{2s^2 + 2s + 4}{(s + 3)(s - 1)^2} = \frac{1}{s + 3} + \frac{s + 1}{(s - 1)^2} = \frac{1}{s + 3} + \frac{1}{s - 1} + \frac{2}{(s - 1)^2}

Using the tables of transforms to find what function has Y as its transform, we get

    y = e^{-3t} + e^t + 2te^t
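The solution of this example can be verified symbolically. The following is a minimal Python/SymPy sketch (illustrative only, not part of the handbook):

```python
# Minimal sketch (not from the handbook): check that y = e^{-3t} + e^t + 2 t e^t
# satisfies y'' + 2y' - 3y = 8 e^t with y(0) = 2 and y'(0) = 0.

import sympy as sp

t = sp.symbols("t")
y = sp.exp(-3 * t) + sp.exp(t) + 2 * t * sp.exp(t)

residual = sp.diff(y, t, 2) + 2 * sp.diff(y, t) - 3 * y - 8 * sp.exp(t)
print(sp.simplify(residual))                      # -> 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))     # -> 2, 0
```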

EXAMPLE. A vibrating system responds to an input function f_i(t) = sin t with a response f_o(t) = sin 2t. Find the system response to the input g_i(t) = sin 2t.
Apply the invariance of the transfer function, and get

    G_o(s) = \frac{F_o G_i}{F_i} = \frac{4(s^2 + 1)}{(s^2 + 4)^2} = 2\left[\frac{2}{s^2 + 2^2}\right] - \frac{12}{16}\left[\frac{16}{(s^2 + 2^2)^2}\right]

Applying formulas 8 and 21 from Table 2.1.4 of Laplace transforms,

    g_o(t) = 2\sin 2t - \tfrac{3}{4}\sin 2t + \tfrac{3}{2}\, t \cos 2t

Inversion When an equation has been transformed, an explicit solution for the unknown may be directly determined through algebraic manipulation. In automatic-control design, the equation is usually the differential equation describing the system, and the unknown is either the output quantity or the error. The solution gained from the transformed equation is expressed in terms of the complex variable s. For many design or analysis purposes, the solution in s is sufficient, but in some cases it is necessary to retransform the solution in terms of time. The process of passing from the complex-variable (frequency-domain) expression to that of time (time-domain) is called an inverse transformation. It is represented symbolically as

    \lambda^{-1} F(s) = f(t)

For any f(t) there is only one direct transform, F(s). For any given F(s) there is only one inverse transform f(t). Therefore, tables are generally used for determining inverse transforms. Very complete tables of inverse transforms may be found in Gardner and Barnes, "Transients in Linear Systems."
As an example of the inverse procedure, consider an equation of the form

    K = a\,x(t) + \frac{1}{b}\int x(t)\,dt

It is desired to obtain an expression for x(t) resulting from an instantaneous change in the quantity K. Transforming the last equation yields

    \frac{K}{s} = X(s)\,a + \frac{X(s)}{bs} + \frac{f^{-1}(0+)}{s}

If f^{-1}(0+)/s = 0, then

    X(s) = \frac{K/a}{s + 1/ab}
    x(t) = \lambda^{-1}[X(s)] = \lambda^{-1}\left[\frac{K/a}{s + 1/ab}\right]

From Table 2.1.4,

    x(t) = \frac{K}{a}\, e^{-t/ab}

Fourier Coefficients Fourier coefficients are used to analyze periodic functions in terms of sines and cosines. If f(x) is a function with period 2L, then the Fourier coefficients are defined as

    a_n = \frac{1}{L}\int_{-L}^{L} f(s) \cos\frac{n\pi s}{L}\,ds        n = 0, 1, 2, ...
    b_n = \frac{1}{L}\int_{-L}^{L} f(s) \sin\frac{n\pi s}{L}\,ds        n = 1, 2, ...

Equivalently, the bounds of integration may be taken as 0 to 2L instead of −L to L. Many treatments take L = π or L = 1. Then the Fourier theorem states that

    f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n \cos\left(\frac{n\pi x}{L}\right) + b_n \sin\left(\frac{n\pi x}{L}\right)\right]

The series on the right is called the "Fourier series of the function f(x)." The convergence of the Fourier series is usually rapid, so that the function f(x) is usually well approximated by the sum of the first few terms of the series.
Examples of the Fourier Series If y = f(x) is the curve in Figs. 2.1.110 to 2.1.112, then in Fig. 2.1.110,

    y = \frac{h}{2} - \frac{4h}{\pi^2}\left(\cos\frac{\pi x}{c} + \frac{1}{9}\cos\frac{3\pi x}{c} + \frac{1}{25}\cos\frac{5\pi x}{c} + \cdots\right)

Fig. 2.1.110 Saw-tooth curve.

In Fig. 2.1.111,

    y = \frac{4h}{\pi}\left(\sin\frac{\pi x}{c} + \frac{1}{3}\sin\frac{3\pi x}{c} + \frac{1}{5}\sin\frac{5\pi x}{c} + \cdots\right)

Fig. 2.1.111 Step-function curve or square wave.

In Fig. 2.1.112,

    y = \frac{2h}{\pi}\left(\sin\frac{\pi x}{c} - \frac{1}{2}\sin\frac{2\pi x}{c} + \frac{1}{3}\sin\frac{3\pi x}{c} - \cdots\right)

Fig. 2.1.112 Linear-sweep curve.

Fourier Transform A nonperiodic function f(x) requires two functions to describe its Fourier transform:

    A(w) = \int_{-\infty}^{\infty} f(x) \cos wx\,dx
    B(w) = \int_{-\infty}^{\infty} f(x) \sin wx\,dx

Then the Fourier integral equation is

    f(x) = \int_0^{\infty} \left[A(w) \cos wx + B(w) \sin wx\right]dw

The complex Fourier transform of f(x) is defined as

    F(w) = \int_{-\infty}^{\infty} f(x)\, e^{iwx}\,dx

Then the complex Fourier integral equation is

    f(x) = \frac{1}{2\pi}\int_0^{\infty} F(w)\, e^{-iwx}\,dw
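As a numerical illustration of the transform pair (a minimal Python sketch, not part of the handbook), A(w) for the even function f(x) = e^{-|x|} can be approximated by a simple sum and compared with its known closed form 2/(1 + w^2):

```python
# Minimal sketch (not from the handbook): approximate A(w) = integral of f(x) cos(wx) dx
# for f(x) = exp(-|x|); the exact cosine transform is 2 / (1 + w^2).

import math

def A(w, xmax=40.0, dx=1e-3):
    # midpoint sum over [-xmax, xmax]; the integrand decays fast enough to truncate
    n = int(2 * xmax / dx)
    s = 0.0
    for i in range(n):
        x = -xmax + (i + 0.5) * dx
        s += math.exp(-abs(x)) * math.cos(w * x) * dx
    return s

for w in (0.0, 1.0, 2.0):
    print(w, A(w), 2.0 / (1.0 + w * w))   # numerical vs exact: 2.0, 1.0, 0.4
```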

If the Fourier coefficients of a function f(x) are known, then the coefficients of the derivative f'(x) can be found, when they exist, as follows:

    a'_n = n b_n        b'_n = -n a_n

where a'_n and b'_n are the Fourier coefficients of f'(x).
The complex Fourier coefficients are defined by

    c_n = \tfrac{1}{2}(a_n - i b_n)        c_0 = \tfrac{1}{2} a_0        c_{-n} = \tfrac{1}{2}(a_n + i b_n)

Then the complex form of the Fourier theorem is

    f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i n \pi x / L}

Heat Equation The Fourier transform may be used to solve the one-dimensional heat equation u_t(x, t) = u_{xx}(x, t), for t > 0, given the initial condition u(x, 0) = f(x). Let F(s) be the complex Fourier transform of f(x), and let U(s, t) be the complex Fourier transform of u(x, t). Then the transform of u_t(x, t) is dU(s, t)/dt. Transforming u_t(x, t) = u_{xx}(x, t) yields dU/dt + s^2 U = 0 and U(s, 0) = F(s). Solving this using the Laplace transform gives U(s, t) = F(s) e^{-s^2 t}. Applying the complex Fourier integral equation, which gives u(x, t) in terms of U(s, t), gives

    u(x, t) = \frac{1}{2\pi}\int_0^{\infty} U(s, t)\, e^{-isx}\,ds = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_0^{\infty} f(y)\, e^{is(y - x)}\, e^{-s^2 t}\,ds\,dy

Applying the Euler formula, e^{ix} = cos x + i sin x,

    u(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_0^{\infty} f(y) \cos\left(s(y - x)\right) e^{-s^2 t}\,ds\,dy

Wave Equation Fourier series are often used in the solution of the wave equation a^2 u_{xx} = u_{tt}, where 0 < x < L, t > 0, and the initial conditions are u(x, 0) = f(x) and u_t(x, 0) = g(x). This describes the position of a vibrating string of length L, fixed at both ends, with initial position f(x) and initial velocity g(x). The constant a is the velocity at which waves are propagated along the string, and is given by a^2 = T/p, where T is the tension in the string and p is the mass per unit length of the string.
If f(x) is extended to the interval -L < x < L by setting f(-x) = -f(x), then f may be considered periodic of period 2L. Its Fourier coefficients are

    a_n = 0        b_n = \frac{1}{L}\int_{-L}^{L} f(x) \sin\frac{n\pi x}{L}\,dx

The solution to the wave equation is

    u(x, t) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L} \cos\frac{n\pi a t}{L}

SPECIAL FUNCTIONS

Gamma Function The gamma function is a generalization of the factorial function. It arises in Laplace transforms of polynomials, in continuous probability, in applications involving fractals and fractional derivatives, and in the solution to certain differential equations. It is defined by the improper integral

    \Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\,dt

The integral converges for x > 0 and diverges otherwise. The function is extended to all negative values, except negative integers, by the relation

    \Gamma(x + 1) = x\,\Gamma(x)

The gamma function is related to the factorial function by

    \Gamma(n + 1) = n!        n = 1, 2, ...

for all positive integers n. An important value of the gamma function is

    \Gamma(0.5) = \pi^{1/2}

Other values of the gamma function are found in CRC Standard Mathematical Tables and similar tables.
Beta Function The beta function is a function of two variables and is a generalization of the binomial coefficients. It is closely related to


the gamma function. It is defined by the integral

    B(x, y) = \int_0^1 t^{x-1} (1 - t)^{y-1}\,dt        for x, y > 0

The beta function can also be represented as a trigonometric integral, by substituting t = sin^2 θ, as

    B(x, y) = 2\int_0^{\pi/2} (\sin\theta)^{2x-1} (\cos\theta)^{2y-1}\,d\theta

The beta function is related to the gamma function by the relation

    B(x, y) = \frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x + y)}

This relation shows that B(x, y) = B(y, x).
Bernoulli Functions The Bernoulli functions are a sequence of periodic functions of period 1 used in approximation theory. Note that for any number x, [x] represents the largest integer less than or equal to x; for example, [3.14] = 3 and [-1.2] = -2. The Bernoulli functions B_n(x) are defined recursively as follows:
1. B_0(x) = 1
2. B_1(x) = x - [x] - 1/2
3. B_{n+1} is defined so that B'_{n+1}(x) = B_n(x) and so that B_{n+1} is periodic of period 1.
B_1 is a special case of the linear-sweep curve (Fig. 2.1.112).
Bessel Functions of the First Kind Bessel functions of the first kind arise in the solution of Bessel's equation of order v:

    x^2 y'' + x y' + (x^2 - v^2) y = 0

When this is solved using series methods, the recursive relations define the Bessel functions of the first kind of order v:

    J_v(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,\Gamma(v + k + 1)} \left(\frac{x}{2}\right)^{v + 2k}

Chebyshev Polynomials The Chebyshev polynomials arise in the solution of PDEs of the form

    (1 - x^2) y'' - x y' + n^2 y = 0

and in approximation theory. They are defined as follows:

    T_0(x) = 1        T_1(x) = x        T_2(x) = 2x^2 - 1        T_3(x) = 4x^3 - 3x

For n ≥ 3, they are defined recursively by the relation

    T_{n+1}(x) - 2x\,T_n(x) + T_{n-1}(x) = 0

Chebyshev polynomials are said to be orthogonal because they have the property

    \int_{-1}^{1} \frac{T_n(x)\,T_m(x)}{(1 - x^2)^{1/2}}\,dx = 0        for n ≠ m
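The recurrence is easy to exercise directly. The following is a minimal Python sketch (illustrative only, not part of the handbook) that generates T_0 through T_5 as coefficient lists:

```python
# Minimal sketch (not from the handbook): generate Chebyshev polynomials from the
# recurrence T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x); coefficients stored lowest power first.

def cheb_polys(n_max):
    polys = [[1], [0, 1]]                     # T_0(x) = 1, T_1(x) = x
    for _ in range(2, n_max + 1):
        tn, tn1 = polys[-1], polys[-2]
        new = [0] * (len(tn) + 1)
        for i, c in enumerate(tn):            # add 2x * T_n(x)
            new[i + 1] += 2 * c
        for i, c in enumerate(tn1):           # subtract T_{n-1}(x)
            new[i] -= c
        polys.append(new)
    return polys

for n, p in enumerate(cheb_polys(5)):
    print(n, p)   # e.g. T_2 -> [-1, 0, 2] (2x^2 - 1), T_3 -> [0, -3, 0, 4] (4x^3 - 3x)
```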

NUMERICAL METHODS
Introduction Classical numerical analysis is based on polynomial approximation of the infinite operations of integration, differentiation, and interpolation. The objective of such analyses is to replace difficult or impossible exact computations with easier approximate computations. The challenge is to make the approximate computations short enough and accurate enough to be useful. Modern numerical analysis includes Fourier methods, including the fast Fourier transform (FFT), and many problems involving the way computers perform calculations. Modern aspects of the theory are changing very rapidly.
Errors Actual value = calculated value + error. There are several sources of errors in a calculation: mistakes, round-off errors, truncation errors, and accumulation errors.
Round-off errors arise from the use of a number not sufficiently accurate to represent the actual value of the number, for example, using 3.14159 to represent the irrational number π, or using 0.56 to represent 9/16 or 0.5625.
Truncation errors arise when a finite number of steps are used to approximate an infinite number of steps, for example, when the first n terms of a series are used instead of the infinite series.
Accumulation errors occur when an error in one step is carried forward into another step. For example, if x = 0.994 has been previously rounded to 0.99, then 1 − x will be calculated as 0.01, while its true value is 0.006. An error of less than 1 percent is accumulated into an error of over 50 percent in just one step. Accumulation errors are particularly characteristic of methods involving recursion or iteration, where the same step is performed many times, with the results of one iteration used as the input for the next.
Simultaneous Linear Equations The matrix equation Ax = b can be solved directly by finding A^{-1}, or it can be solved iteratively, by the method of iteration in total steps:
1. If necessary, rearrange the rows of the equation so that there are no zeros on the diagonal of A.
2. Take as initial approximations for the values of x_i:

    x_1^{(0)} = b_1/a_{11}        x_2^{(0)} = b_2/a_{22}        ...        x_n^{(0)} = b_n/a_{nn}

3. For successive approximations, take

    x_i^{(k+1)} = \left(b_i - a_{i1} x_1^{(k)} - \cdots - a_{in} x_n^{(k)}\right) / a_{ii}        (the term with a_{ii} omitted from the sum)

Repeat step 3 until successive approximations for the values of x_i reach the specified tolerance. A property of iteration by total steps is that it is self-correcting: that is, it can recover both from mistakes and from accumulation errors.
Zeros of Functions An iterative procedure for solving an equation f(x) = 0 is the Newton-Raphson method. The algorithm is as follows (a short code sketch of this iteration appears after the Fibonacci search discussion below):
1. Choose a first estimate of a root, x_0.
2. Let x_{k+1} = x_k − f(x_k)/f'(x_k). Repeat step 2 until the estimate x_k converges to a root r.
3. If there are other roots of f(x), then let g(x) = f(x)/(x − r) and seek roots of g(x).
False Position If two values x_0 and x_1 are known, such that f(x_0) and f(x_1) have opposite signs, then an iterative procedure for finding a root between x_0 and x_1 is the method of false position.
1. Let m = [f(x_1) − f(x_0)]/(x_1 − x_0).
2. Let x_2 = x_1 − f(x_1)/m.
3. Find f(x_2).
4. If f(x_2) and f(x_1) have the same sign, then let x_1 = x_2. Otherwise, let x_0 = x_2.
5. If x_1 is not a good enough estimate of the root, then return to step 1.
Functional Equalities To solve an equation of the form f(x) = g(x), use the methods above to find roots of the equation f(x) − g(x) = 0.
Maxima One method for finding the maximum of a function f(x) on an interval [a, b] is to find the roots of the derivative f'(x). The maximum of f(x) occurs at a root or at an endpoint a or b.
Fibonacci Search An iterative procedure for searching for maxima works if f(x) is unimodal on [a, b]. That is, f has only one maximum, and no other local maxima, between a and b. This procedure takes advantage of the so-called golden ratio, r = 0.618034 = (√5 − 1)/2, which arises from the Fibonacci sequence.
1. If a is a sufficiently good estimate of the maximum, then stop. Otherwise, proceed to step 2.
2. Let x_1 = ra + (1 − r)b, and let x_2 = (1 − r)a + rb. Note x_1 < x_2. Find f(x_1) and f(x_2).
   a. If f(x_1) = f(x_2), then let a = x_1 and b = x_2, and go to step 1.
   b. If f(x_1) < f(x_2), then let a = x_1, and go to step 1.
   c. If f(x_1) > f(x_2), then let b = x_2, and return to step 1.
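The Newton-Raphson step referenced above is shown here as a minimal Python sketch (not part of the handbook; function names and the test function are illustrative):

```python
# Minimal sketch (not from the handbook) of the Newton-Raphson iteration
# x_{k+1} = x_k - f(x_k)/f'(x_k), illustrated on f(x) = x^2 - 2.

def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:          # successive estimates have converged
            return x
    raise RuntimeError("did not converge")

root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # ~1.414213562..., the positive root of x^2 - 2
```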


In cases b and c, computation is saved since the new value of one of x_1 and x_2 will have been used in the previous step. It has been proved that the Fibonacci search is the fastest possible of the general "cutting" type of searches.
Steepest Ascent If z = f(x, y) is to be maximized, then the method of steepest ascent takes advantage of the fact that the gradient, grad(f), always points in the direction in which f is increasing the fastest.
1. Let (x_0, y_0) be an initial guess of the maximum of f.
2. Let e be an initial step size, usually taken to be small.
3. Let (x_{k+1}, y_{k+1}) = (x_k, y_k) + e grad f(x_k, y_k)/|grad f(x_k, y_k)|.
4. If f(x_{k+1}, y_{k+1}) is not greater than f(x_k, y_k), then replace e with e/2 (cut the step size in half) and reperform step 3.
5. If (x_k, y_k) is a sufficiently accurate estimate of the maximum, then stop. Otherwise, repeat step 3.
Minimization The theory of minimization exactly parallels the theory of maximization, since minimizing z = f(x) occurs at the same value of x as maximizing w = −f(x).
Numerical Differentiation In general, numerical differentiation should be avoided where possible, since differentiation tends to be very sensitive to small errors in the value of the function f(x). There are several approximations to f'(x), involving a "step size" h usually taken to be small:


    f'(x) \approx \frac{f(x + h) - f(x)}{h}
    f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}
    f'(x) \approx \frac{f(x + 2h) + f(x + h) - f(x - h) - f(x - 2h)}{6h}

Other formulas are possible. If a derivative is to be calculated from an equally spaced sequence of measured data, y_1, y_2, . . . , y_n, then the above formulas may be adapted by taking y_i = f(x_i). Then h = x_{i+1} − x_i is the distance between measurements.
Since there are usually noise or measurement errors in measured data, it is often necessary to smooth the data, expecting that errors will be averaged out. Elementary smoothing is by simple averaging, where a value y_i is replaced by an average before the derivative is calculated. Examples include:

    y_i \leftarrow \frac{y_{i+1} + y_i + y_{i-1}}{3}
    y_i \leftarrow \frac{y_{i+2} + y_{i+1} + y_i + y_{i-1} + y_{i-2}}{5}
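A minimal Python sketch (illustrative only, not part of the handbook) of the centered two-point difference applied to lightly smoothed samples:

```python
# Minimal sketch (not from the handbook): centered difference f'(x) ~ [f(x+h)-f(x-h)]/(2h)
# applied to equally spaced measurements after a 3-point smoothing average.

def smooth3(y):
    # replace interior points by (y[i-1] + y[i] + y[i+1]) / 3; endpoints kept as-is
    return [y[0]] + [(y[i - 1] + y[i] + y[i + 1]) / 3.0 for i in range(1, len(y) - 1)] + [y[-1]]

def central_diff(y, h):
    # derivative estimates at the interior points only
    return [(y[i + 1] - y[i - 1]) / (2.0 * h) for i in range(1, len(y) - 1)]

h = 0.1
xs = [i * h for i in range(11)]
ys = [x * x for x in xs]                 # f(x) = x^2, so f'(x) = 2x
print(central_diff(smooth3(ys), h))      # close to 2x at the interior points
```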

More information may be found in the literature under the topics linear filters, digital signal processing, and smoothing techniques.

Numerical Integration

Numerical integration requires a great deal of calculation and is usually done with the aid of a computer. All the methods described here, and many others, are widely available in packaged computer software. There is often a temptation to use whatever software is available without first checking that it really is appropriate. For this reason, it is important that the user be familiar with the methods being used and that he or she ensure that the error terms are tolerably small.
Trapezoid Rule If an interval a ≤ x ≤ b is divided into subintervals x_0, x_1, . . . , x_n, then the definite integral

    \int_a^b f(x)\,dx

may be approximated by

    \sum_{i=1}^{n} \left[f(x_i) + f(x_{i-1})\right]\frac{x_i - x_{i-1}}{2}

If the values x_i are equally spaced at distance h and if f_i is written for f(x_i), then the above formula reduces to

    \frac{h}{2}\left[f_0 + 2f_1 + 2f_2 + \cdots + 2f_{n-1} + f_n\right]

The error in the trapezoid rule is given by

    |E_n| \le \frac{(b - a)^3 |f''(t)|}{12 n^2}

where t is some value a ≤ t ≤ b.
Simpson's Rule The most widely used rule for numerical integration approximates the curve with parabolas. The interval a ≤ x ≤ b must be divided into n/2 subintervals, each of length 2h, where n is an even number. Using the notation above, the integral is approximated by

    \frac{h}{3}\left[f_0 + 4f_1 + 2f_2 + 4f_3 + \cdots + 4f_{n-1} + f_n\right]

The error term for Simpson's rule is given by |E_n| < n h^5 |f^{(4)}(t)|/180, where a < t < b. Simpson's rule is generally more accurate than the trapezoid rule.
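Both composite rules are compact to implement. A minimal Python sketch (not part of the handbook), checked on the integral of x^3 over [0, 1], whose exact value is 0.25:

```python
# Minimal sketch (not from the handbook): composite trapezoid and Simpson rules
# on equally spaced points.

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):                  # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3.0

f = lambda x: x ** 3
print(trapezoid(f, 0.0, 1.0, 100))   # ~0.250025 (error proportional to 1/n^2)
print(simpson(f, 0.0, 1.0, 100))     # 0.25 (Simpson's rule is exact for cubics)
```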

Ordinary Differential Equations

Modified Euler Method Consider a first-order differential equation dy/dx = f(x, y) and initial condition y = y_0 at x = x_0. Take the x_i equally spaced, with x_{i+1} − x_i = h. Then the method is:
1. Set n = 0.
2. y'_n = f(x_n, y_n) and y''_n = f_x(x_n, y_n) + y'_n f_y(x_n, y_n), where f_x and f_y denote partial derivatives.
3. y'_{n+1} = f(x_{n+1}, y_{n+1}).
Predictor steps:
4. For n > 0, y*_{n+1} = y_{n-1} + 2h y'_n.
5. y*'_{n+1} = f(x_{n+1}, y*_{n+1}).
Corrector steps:
6. y#_{n+1} = y_n + [y*'_{n+1} + y'_n] h/2.
7. y#'_{n+1} = f(x_{n+1}, y#_{n+1}).
8. If the required accuracy has not yet been obtained for y_{n+1} and y'_{n+1}, then substitute y# for y*, in all its forms, and repeat the corrector steps. Otherwise, set n = n + 1 and return to step 2.
Other predictor-corrector methods are described in the literature.
Runge-Kutta Methods These make up a family of widely used methods for ordinary differential equations. Given dy/dx = f(x, y) and h = interval size, the third-order method (error proportional to h^4) is:

    k_0 = h f(x_n, y_n)
    k_1 = h f(x_n + h/2,\; y_n + k_0/2)
    k_2 = h f(x_n + h,\; y_n + 2k_1 - k_0)
    y_{n+1} = y_n + \frac{k_0 + 4k_1 + k_2}{6}

Higher-order Runge-Kutta methods are described in the literature. In general, higher-order methods yield smaller error terms.
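A minimal Python sketch (not part of the handbook) of the third-order Runge-Kutta step above, applied to dy/dx = y with y(0) = 1, whose exact solution is e^x:

```python
# Minimal sketch (not from the handbook): third-order Runge-Kutta step
#   k0 = h f(x, y); k1 = h f(x + h/2, y + k0/2); k2 = h f(x + h, y + 2k1 - k0)
#   y_next = y + (k0 + 4 k1 + k2) / 6
# applied to dy/dx = y, y(0) = 1.

import math

def rk3_step(f, x, y, h):
    k0 = h * f(x, y)
    k1 = h * f(x + h / 2.0, y + k0 / 2.0)
    k2 = h * f(x + h, y + 2.0 * k1 - k0)
    return y + (k0 + 4.0 * k1 + k2) / 6.0

f = lambda x, y: y
x, y, h = 0.0, 1.0, 0.1
while x < 1.0 - 1e-12:
    y = rk3_step(f, x, y, h)
    x += h
print(y, math.e)   # about 2.7182 vs e = 2.71828...
```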


Partial Differential Equations

Approximate solutions to partial differential equations require even more computational effort than ordinary differential equations. A typical problem involves finding the value of a function of two or more variables on the interior of a region, given the values of the function on the boundary of that region and also given a partial differential equation that the function must satisfy.
A typical solution strategy involves representing the continuous problem by its values on a finite set of points, called nodes, within the region, and representing the differential equation by difference equations on the value of the function at the nodes. The boundary value conditions are represented by assigning values to those nodes at or near the boundary of the region.

Fig. 2.1.113 Rectangular net. (The figure shows the nodes z_{i,j} of a rectangular net centered on z_{0,0}, with neighbors z_{±1,0}, z_{0,±1}, z_{±1,±1}, z_{±2,0}, and z_{0,±2}.)

The pattern of nodes within the region is called a net. A rectangular net is shown in Fig. 2.1.113. There is also an extensive theory of triangular nets. Some techniques use an irregular pattern of nodes, with the nodes more concentrated in more critical parts of the region. Such techniques are called finite element methods. If the distance between nodes in a net is given by a, then the various partial derivatives at a value z_{0,0} = f(x_0, y_0) may be given by

    \frac{\partial z}{\partial x} \approx \frac{1}{2a}\left(z_{1,0} - z_{-1,0}\right)
    \frac{\partial z}{\partial y} \approx \frac{1}{2a}\left(z_{0,1} - z_{0,-1}\right)
    \frac{\partial^2 z}{\partial x^2} \approx \frac{1}{a^2}\left(z_{1,0} - 2z_{0,0} + z_{-1,0}\right)
    \frac{\partial^2 z}{\partial x\,\partial y} \approx \frac{1}{4a^2}\left(z_{1,1} - z_{-1,1} + z_{-1,-1} - z_{1,-1}\right)
    \frac{\partial^2 z}{\partial y^2} \approx \frac{1}{a^2}\left(z_{0,1} - 2z_{0,0} + z_{0,-1}\right)

Other formulas are possible, and there are also formulas for higher-order partial derivatives. Applying the appropriate difference formulas to each node in a net gives an equation for each node. If there are n nodes, then n linear equations involving n unknowns arise. Though n is likely to be rather large, the system of linear equations is sparse, that is, most of the entries in the n × n matrix that describes the system are zero. Special methods exist for solving sparse systems of simultaneous linear equations relatively efficiently.

EXAMPLE. Solve the generalized Laplace equation

    \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = g(x, y)

where g(x, y) is a given function of x and y. At each point (x_i, y_j) on the interior of the region, we approximate the partial derivatives as

    \frac{\partial^2 z}{\partial x^2} \approx \frac{1}{a^2}\left(z_{i+1,j} - 2z_{i,j} + z_{i-1,j}\right)        and        \frac{\partial^2 z}{\partial y^2} \approx \frac{1}{a^2}\left(z_{i,j+1} - 2z_{i,j} + z_{i,j-1}\right)

This gives a difference equation

    \frac{1}{a^2}\left(z_{i+1,j} + z_{i-1,j} + z_{i,j+1} + z_{i,j-1} - 4z_{i,j}\right) = g(x_i, y_j)

that must be satisfied at each point (x_i, y_j). Relaxation methods are a family of techniques for finding approximate solutions to such systems of equations. They involve finding the points (x_i, y_j) for which the difference equations are farthest from being true, and adjusting or relaxing the value of z_{i,j} so that the equation is satisfied at that point.
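A minimal Python sketch (illustrative only, not the handbook's algorithm verbatim) of a simple relaxation sweep for the difference equation above on a square net, with g = 0 and fixed boundary values:

```python
# Minimal sketch (not from the handbook): Gauss-Seidel-style relaxation of
#   z[i+1][j] + z[i-1][j] + z[i][j+1] + z[i][j-1] - 4*z[i][j] = 0
# on an N x N net with prescribed boundary values.

N = 20
z = [[0.0] * N for _ in range(N)]
for j in range(N):
    z[0][j] = 100.0              # one boundary edge held at 100; the others at 0

for sweep in range(500):
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            # relax z[i][j] so that its difference equation is satisfied exactly
            z[i][j] = 0.25 * (z[i + 1][j] + z[i - 1][j] + z[i][j + 1] + z[i][j - 1])

print(round(z[N // 2][N // 2], 3))   # interior value after the relaxation sweeps
```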

2.2 COMPUTERS by Thomas J. Cockerill REFERENCES: Manuals from Computer Manufacturers. Knuth, “The Art of Computer Programming,” vols 1, 2, and 3, Addison-Wesley. Yourdon and Constantine, “Structured Design,” Prentice-Hall. DeMarco, “Structured Analysis and System Specification,” Prentice-Hall. Moshos, “Data Communications,” West Publishing. Date, “An Introduction to Database Systems,” 4th ed., Addison-Wesley. Wiener and Sincovec, “Software Engineering with Modula-2 and ADA,” Wiley. Hamming, “Numerical Methods for Scientists and Engineers,” McGraw-Hill. Bowers and Sedore, “SCEPTRE: A Computer Program for Circuit and System Analysis,” Prentice-Hall. Tannenbaum, “Operating Systems,” Prentice-Hall. Lister, “Fundamentals of Operating Systems, 3d ed., SpringerVerlag. American National Standard Programming Language FORTRAN, ANSI X3.198-1992. Jensen and Wirth, “PASCAL: User Manual and Report,” Springer. Communications, Journal, and Computer Surveys, ACM Computer Society. Computer, Spectrum, IEEE.

COMPUTER PROGRAMMING Machine Types

Computers are machines used for automatically processing information represented by mechanical, electrical, or optical means. They may be

classified as analog or digital according to the techniques used to represent and process the information. Analog computers represent information as physically measurable, continuous quantities and process the information by components that have been interconnected to form an analogous model of the problem to be solved. Digital computers, on the other hand, represent information as discrete physical states which have been encoded into symbolic formats, and process the information by sequences of operational steps which have been preplanned to solve the given problem. When compared to analog computers, digital computers have the advantages of greater versatility in solving scientific, engineering, and commercial problems that involve numerical and nonnumerical information; of an accuracy dictated by significant digits rather than that which can be measured; and of exact reproducibility of results that stay unvitiated by small, random fluctuations in the physical signals. In the past, multiple-purpose analog computers offered advantages of speed and cost in solving a sophisticated class of complex problems dealing with networks of differential equations, but these advantages have disappeared with the advances in solid-state computers. Other than the


occasional use of analog techniques for embedding computations as part of a larger system, digital techniques now account almost exclusively for the technology used in computers. Digital information may be represented as a series of incremental, numerical steps which may be manipulated to position control devices using stepping motors. Digital information may also be encoded into symbolic formats representing digits, alphabetic characters, arithmetic numbers, words, linguistic constructs, points, and pictures which may be processed by a variety of mechanized operators. Machines organized in this manner can handle a more general class of both numerical and nonnumerical problems and so form by far the most common type of digital machines. In fact, the term computer has become synonymous with this type of machine. Digital Machines

Digital machines consist of two kinds of circuits: memory cells, which effectively act to delay signals until needed, and logical units, which perform basic Boolean operations such as AND, OR, NOT, XOR, NAND, and NOR. Memory circuits can be simply defined as units where information can be stored and retrieved on demand. Configurations assembled from the Boolean operators provide the macro operators and functions available to the machine user through encoded instructions. Both data and the instructions for processing the data can be stored in memory. Each unit of memory has an address at which the contents can be retrieved, or “read.” The read operation makes the contents at an address available to other parts of the computer without destroying the contents in memory. The contents at an address may be changed by a write operation which inserts new information after first nullifying the previous contents. Some types of memory, called read-only memory (ROM), can be read from but not written to. They can only be changed at the factory. Abstractly, the address and the contents at the address serve roles analogous to a variable and the value of the variable. For example, the equation z  x  y specifies that the value of x added to the value of y will produce the value of z. In a similar way, the machine instruction whose format might be: add, address 1, address 2, address 3 will, when executed, add the contents at address 1 to the contents at address 2 and store the result at address 3. As in the equation where the variables remain unaltered while the values of the variables may be changed, the addresses in the instruction remain unaltered while the contents at the address may change. Computers differ from other kinds of mechanical and electrical machines in that computers perform work on information rather than on forces and displacements. A common form of information is numbers. Numbers can be encoded into a mechanized form and processed by the four rules of arithmetic (  , ,  , ). But numbers are only one kind of information that can be manipulated by the computer. Given an encoded alphabet, words and languages can be formed and the computer can be used to perform such processes as information storage and retrieval, translation, and editing. Given an encoded representation of points and lines, the computer can be used to perform such functions as drawing, recognizing, editing, and displaying graphs, patterns, and pictures. Because computers have become easily accessible, engineers and scientists from every discipline have reformatted their professional activities to mechanize those aspects which can supplant human thought and decision. In this way, mechanical processes can be viewed as augmenting human physical skills and strength, and information processes can be viewed as augmenting human mental skills and intelligence. COMPUTER DATA STRUCTURES Binary Notation

Digital computers represent information by strings of digits which assume one of two values: 0 or 1. These units of information are called bits, a word contracted from the term binary digits. A string of bits may represent either numerical or nonnumerical information.


In order to achieve efficiency in handling the information, the computer groups the bits together into units containing a fixed number of bits which can be referenced as discrete units. By encoding and formatting these units of information, the computer can act to process them. Units of 8 bits, called bytes, are common. A byte can be used to encode the basic symbolic characters which provide the computer with input-output information such as the alphabet, decimal digits, punctuation marks, and special characters. Bit groups may be organized into larger units of 4 bytes (32 bits) called words, or even larger units of 8 bytes called double words; and sometimes into smaller units of 2 bytes called half words. Besides encoding numerical information and other linguistic constructs, these units are used to encode a repertoire of machine instructions. Older machines and special-purpose machines may have other word sizes.
Computers process numerical information represented as binary numbers. The binary numbering system uses a positional notation similar to the decimal system. For example, the decimal number 596.37 represents the value 5 × 10^2 + 9 × 10^1 + 6 × 10^0 + 3 × 10^−1 + 7 × 10^−2. The value assigned to any of the 10 possible digits in the decimal system depends on its position relative to the decimal point (a weight of 10 to a zero or positive exponent is assigned to the digits appearing to the left of the decimal point, and a weight of 10 to a negative exponent is applied to digits to the right of the decimal point). In a similar manner, a binary number uses a radix of 2 and two possible digits: 0 and 1. The radix point in the positional notation separates the whole from the fractional part of the number, just as in the decimal system. The binary number 1011.011 represents the value 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 + 0 × 2^−1 + 1 × 2^−2 + 1 × 2^−3.
The operators available in the computer for setting up the solution of a problem are encoded into the instructions of the machine. The instruction repertoire always includes the usual arithmetic operators to handle numerical calculations. These instructions operate on data encoded in the binary system. However, this is not a serious operational problem, since the user specifies the numbers in the decimal system or by mnemonics, and the computer converts these formats into its own internal binary representation. On occasions when one must express a number directly in the binary system, the number of digits needed to represent a numerical value becomes a handicap. In these situations, a radix of 8 or 16 (called the octal or hexadecimal system, respectively) constitutes a more convenient system. Starting with the digit to the left or with the digit to the right of the radix point, groups of 3 or 4 binary digits can be easily converted to equivalent octal or hexadecimal digits, respectively. Appending nonsignificant 0s as needed to the rightmost and leftmost part of the number to complete the set of 3 or 4 binary digits may be necessary. Table 2.2.1 lists the conversions of binary digits to their equivalent octal and hexadecimal representations. In the hexadecimal system, the letters A through F augment the set of decimal digits to represent the digits for 10 through 15. The following examples illustrate the conversion between binary numbers and octal or hexadecimal numbers using the table.

Table 2.2.1 Binary-Hexadecimal and Binary-Octal Conversion

Binary    Hexadecimal        Binary    Octal
0000      0                  000       0
0001      1                  001       1
0010      2                  010       2
0011      3                  011       3
0100      4                  100       4
0101      5                  101       5
0110      6                  110       6
0111      7                  111       7
1000      8
1001      9
1010      A
1011      B
1100      C
1101      D
1110      E
1111      F

Table 2.2.2 Schemes for Encoding Decimal Digits

Decimal digit    BCD      Excess-3    4221 code
0                0000     0011        0000
1                0001     0100        0001
2                0010     0101        0010
3                0011     0110        0011
4                0100     0111        0110
5                0101     1000        1001
6                0110     1001        1100
7                0111     1010        1101
8                1000     1011        1110
9                1001     1100        1111

binary number         011  110  101  .  001  111  100
octal number            3    6    5  .    1    7    4

binary number         011  0110  1111  0101  .  0011  1110
hexadecimal number      3     6     F     5  .     3     E
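The same conversions can be reproduced quickly with a short program. The following is a minimal Python sketch (not part of the handbook):

```python
# Minimal sketch (not from the handbook): binary <-> octal/hexadecimal conversion
# of the integer parts of the examples above, using Python's radix parsing.

n = int("011110101", 2)            # binary 011 110 101
print(format(n, "o"))              # -> '365' (octal)

m = int("011011011110101", 2)      # binary 011 0110 1111 0101
print(format(m, "x").upper())      # -> '36F5' (hexadecimal)

# Fractional parts convert by grouping bits after the radix point:
# .001 111 100 (binary) -> .174 (octal), since 001 = 1, 111 = 7, 100 = 4.
```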

Formats for Numerical Data

Fig. 2.2.1 ASCII code set.

Three different formats are used to represent numerical information internal to the computer: fixed-point, encoded decimal, and floating-point. A word or half word in fixed-point format is given as a string of 0s and 1s representing a binary number. The program infers the position of the radix point (immediately to the right of the word representing integers, and immediately to the left of the word representing fractions). Algebraic numbers have several alternate forms: 1's complement, 2's complement, and signed-magnitude. Most often 1's and 2's complement forms are adopted because they lead to a simplification in the hardware needed to perform the arithmetic operations. The sign of a 1's complement number can be changed by replacing the 0s with 1s and the 1s with 0s. To change the sign of a 2's complement number, reverse the digits as with a 1's complement number and then add a 1 to the resulting binary number. Signed-magnitude numbers use the common representation of an explicit + or − sign by encoding the sign in the leftmost bit as a 0 or 1, respectively.
Many computers provide an encoded-decimal representation as a convenience for applications needing a decimal system. Table 2.2.2 gives three out of over 8000 possible schemes used to encode decimal digits in which 4 bits represent each decade value. Many other codes are possible using more bits per decade, but four bits per decimal digit are common because two decimal digits can then be encoded in one byte. The particular scheme selected depends on the properties needed by the devices in the application.
The floating-point format is a mechanized version of the scientific notation ±M × 10^±E, where M and E represent the signed mantissa and signed exponent of the number. This format makes possible the use of a machine word to encode a large range of numbers. The signed mantissa and signed exponent occupy a portion of the word. The exponent is implied as a power of 2 or 16 rather than of 10, and the radix point is implied to the left of the mantissa. After each operation, the machine adjusts the exponent so that a nonzero digit appears in the most significant digit of the mantissa. That is, the mantissa is normalized so that its value lies in the range 1/b ≤ M < 1, where b is the implied base of the number system (e.g., 1/2 ≤ M < 1 for a radix of 2, and 1/16 ≤ M < 1 for a radix of 16). Since the zero in this notation has many logical representations, the format uses a standard recognizable form for zero, with a zero mantissa and a zero exponent, in order to avoid any ambiguity.
When calculations need greater precision, floating-point numbers use a two-word representation. The first word contains the exponent and mantissa as in the one-word floating point. Precision is increased by appending the extra word to the mantissa. The terms single precision and double precision make the distinction between the one- and two-word representations for floating-point numbers, although extended precision would be a more accurate term for the two-word form since the added word more than doubles the number of significant digits. The equivalent decimal precision of a floating-point number depends on the number n of bits used for the unsigned mantissa and on the implied base b (binary, octal, or hexadecimal). This can be simply expressed in equivalent decimal digits p as: 0.301(n − log2 b) < p < 0.301 n. For example, a 32-bit number using 7 bits for the signed exponent of an implied base of 16, 1 bit for the sign of the mantissa, and 24 bits for the value of the mantissa gives a precision of 6.02 to 7.22 equivalent decimal digits. The fractional parts indicate that some 7-digit and some 8-digit numbers cannot be represented with a mantissa of 24 bits. On the other hand, a double-precision number formed by adding another word of 32 bits to the 24-bit mantissa gives a precision of 15.65 to 16.85 equivalent decimal digits. The range r of possible values in floating-point notation depends on the number of bits used to represent the exponent and the implied radix. For example, for a signed exponent of 7 bits and an implied base of 16, then 16^−64 ≤ r ≤ 16^63.
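The decimal-precision bounds quoted above can be checked directly. The following is a minimal Python sketch (illustrative only, not part of the handbook):

```python
# Minimal sketch (not from the handbook): equivalent decimal precision p of an
# n-bit mantissa with implied base b, using 0.301(n - log2 b) < p < 0.301 n.

import math

def precision_bounds(n_bits, base):
    lo = 0.301 * (n_bits - math.log2(base))
    hi = 0.301 * n_bits
    return lo, hi

print(precision_bounds(24, 16))   # ~(6.02, 7.22): single precision, 24-bit base-16 mantissa
print(precision_bounds(56, 16))   # ~(15.65, 16.86): double precision (24 + 32 bits)
```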

Logical elements, also called Boolean elements, have two possible values which simply represent 0 or 1, true or false, yes or no, OFF or ON, etc. These values may be conveniently encoded by a single bit. A large variety of codes are used to represent the alphabet, digits, punctuation marks, and other special symbols. The most popular ones are the 7-bit ASCII code and the 8-bit EBCDIC code. ASCII and EBCDIC find their genesis in punch-tape and punch-card technologies, respectively, where each character was encoded as a combination of punched holes in a column. Both have now evolved into accepted standards represented by a combination of 0s and 1s in a byte. Figure 2.2.1 shows the ASCII code. (ASCII stands for American Standard Code for Information Interchange.) The possible 128 bit patterns divide the code into 96 graphic characters (although the codes 0100000 and 1111111 do not represent any printable graphic symbol) and 32 control characters which represent nonprintable characters used in communications, in controlling peripheral machines, or in expanding the code set with other characters or fonts. The graphic codes and the control codes are organized so that subsets of usable codes with fewer bits can be formed and still maintain the pattern. Data Structure Types

The above types of numerical and nonnumerical data formats are recognized and manipulated by the hardware operations of the computer. Other more complex data structures may be programmed into the


computer by building upon these primitive data types. The programmable data structures might include arrays, defined as ordered lists of elements of identical type; sets, defined as unordered lists of elements of identical type; records, defined as ordered lists of elements that need not be of the same type; files, defined as sequential collections of identical records; and databases, defined as organized collections of different records or file types.

COMPUTER ORGANIZATION

Principal Components

The principal components of a computer system consist of a central processing unit (referred to as the CPU or platform), its working memory, user interface, file storage, and a collection of add-ons and peripheral devices. A computer system can be viewed as a library of collected data and packages of assembled sequences of instructions that can be executed in the prescribed order by the CPU to solve specific problems or perform utility functions for the users. These sequences are variously called programs, subprograms, routines, subroutines, procedures, functions, etc. Collectively they are called software and are directly accessible to the CPU through the working memory. The file devices act analogously to a bookshelf—they store information until it is needed. Only after a program and its data have been transferred from the file devices or from peripheral devices to the working memory can the individual instructions and data be addressed and executed to perform their intended functions. The CPU functions to monitor the flow of data and instructions into and out of memory during program execution, control the order of instruction execution, decode the operation, locate the operand(s) needed, and perform the operation specified. Two characteristics of the memory and storage components dictate the roles they play in the computer system. They are access time, defined as the elapsed time between the instant a read or write operation has been initiated and the instant the operation is completed, and size, defined by the number of bytes in a module. The faster the access time, the more costly per bit of memory or storage, and the smaller the module. The principal types of memory and storage components from the fastest to the slowest are registers which operate as an integral part of the CPU, cache and main memory which form the working memory, and mass and archival storage which serve for storing files. The interrelationships among the components in a computer system and their primary performance parameters will be given in context in the following discussion. However, hundreds of manufacturers of computers and computer products have a stake in advancing the technology and adding new functionality to maintain their competitive edge. In such an environment, no performance figures stay current. With this caveat, performance figures given should not be taken as absolutes but only as an indication of how each component contributes to the performance of the total system. Throughout the discussion (and in the computer world generally), prefixes indicating large numbers are given by the symbols k for kilo (103), M for mega (106), G for giga (109), and T for tera (1012). For memory units, however, these symbols have a slightly altered meaning. Memories are organized in binary units whereby powers of two form the basis for all addressing schemes. According, k refers to a memory size of 1024 (210) units. Similarly M refers to 10242 (1,048,576), G refers to 10243, and T refers to 10244. For example, 1-Mbyte memory indicates a size of 1,048,576 bytes. Memory

The main memory, also known as random access memory (RAM), is organized into fixed-size bit cells (words, bytes, half words, or double words) which can be located by address and whose contents contain the instructions and data currently being executed. The CPU acts to address the individual memory cells during program execution and transfers their contents to and from its internal registers. Optionally, the working memory may contain an auxiliary memory, called cache, which is faster than the main memory. Cache operates on the premise that data and instructions that will shortly be needed are located near those currently being used. If the information is not found in the cache, then it is transferred from the main memory. The effective average access time offered by the combined configuration of RAM and cache results in a more powerful (faster) computer.

Central Processing Unit

The CPU makes available a repertoire of instructions which the user uses to set up the problem solutions. Although the specific format for instructions varies among machines, the following illustrates the pattern:


name: operator, operand(s) The name designates an address whose contents contain the operator and one or more operands. The operator encodes an operation permitted by the hardware of the CPU. The operand(s) refer to the entities used in the operation which may be either data or another instruction specified by address. Some instructions have implied operand(s) and use the bits which would have been used for operand(s) to modify the operator. To begin execution of a program, the CPU first loads the instructions serially by address into the memory either from a peripheral device, or more frequently, from storage. The internal structure of the CPU contains a number of memory registers whose number, while relatively few, depend on the machine’s organization. The CPU acts to transfer the instructions one at a time from memory into a designated register where the individual bits can be interpreted and executed by the hardware. The actions of the following steps in the CPU, known as the fetch-execute cycle, control the order of instruction execution. Step 1: Manually or automatically under program control load the address of the starting instruction into a register called the program register (PR). Step 2: Fetch and copy the contents at the address in PR into a register called the program content register (PCR). Step 3: Prepare to fetch the next instruction by augmenting PR to the next address in normal sequence. Step 4: Interpret the instruction in PCR, retrieve the operands, execute the encoded operation, and then return to step 2. Note that the executed instruction may change the address in PR to start a different instruction sequence. The speed of machines can be compared by calculating the average execution time of an instruction. Table 2.2.3 illustrates a typical instruction mix used in calculating the average. The instruction mix gives the relative frequency each instruction appears in a compiled list of typical programs and so depends on the types of problems one expects the machine to solve (e.g., scientific, commercial, or combination). The equation t 5 g witi i

expresses the average instruction execution time t as a function of the execution time ti for instruction i having a relative frequency wi in the instruction mix. The reciprocal of t measures the processor’s performance as the average number of instructions per second (ips) it can execute. Table 2.2.3

Instruction Mix

i    Instruction type               Weight w_i
1    Add: Floating point            0.07
2    Add: Fixed point               0.16
3    Multiply: Floating point       0.06
4    Load/store register            0.12
5    Shift: One character           0.11
6    Branch: Conditional            0.21
7    Branch: Unconditional          0.17
8    Move 3 words in memory         0.10
     Total                          1.00
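The weighted average t = Σ_i w_i t_i and its reciprocal (instructions per second) follow directly from the table. The following is a minimal Python sketch; the per-instruction times t_i used here are hypothetical, chosen only to illustrate the calculation:

```python
# Minimal sketch (not from the handbook): average instruction time t = sum(w_i * t_i)
# using the Table 2.2.3 weights; the per-instruction times below are assumed values.

weights  = [0.07, 0.16, 0.06, 0.12, 0.11, 0.21, 0.17, 0.10]
times_ns = [4.0, 1.0, 6.0, 1.0, 1.0, 2.0, 1.0, 3.0]     # hypothetical t_i, in nanoseconds

t_avg = sum(w * t for w, t in zip(weights, times_ns))
print(t_avg)                   # average instruction execution time, ns
print(1.0e9 / t_avg)           # reciprocal: average instructions per second (ips)
```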

For machines designed to support scientific and engineering calculations, the floating-point arithmetic operations dominate the time needed to execute an average instruction mix. The speed for such machines is


given by the average number of floating-point operations which can be executed per second (flops). This measure, however, can be misleading in comparing different machine models. For example, when the machine has been configured with a cluster of processors cooperating through a shared memory, the rate of the configuration (measured in flops) represents the simple sum of the individual processors’ flops rates. This does not reflect the amount of parallelism that can be realized within a given problem. To compare the performance of different machine models, users often assemble and execute a suite of programs which characterize their particular problem load. This idea has been refined so that in 1992 two suites of benchmark programs representing typical scientific, mathematical, and engineering applications were standardized: Specint92 for integer operations, and Specfp92 for floating-point operations. Since these benchmark programs were established, many more standardized benchmarks have been created to model critical workloads and applications, and the set is constantly being revised and expanded. Performance ratings for computers are often reported on the basis of results from running a selected subset of the available benchmark programs. Computer performance depends on a number of interrelated factors used in their design and fabrication, among them compactness, bus size, clock frequency, instruction set size, and number of coprocessors. The speed that energy can be transmitted through a wire, bounded theoretically at 3  1010 cm/s, limits the ultimate speed at which the electronic circuits can operate. The further apart the electronic elements are from each other, the slower the operations. Advances in integrated circuits have produced compact microprocessors operating in the nanosecond range. The microprocessor’s bus size (the width of its data path, or the number of bits that can be sent simultaneously in parallel) affect its performance in two ways: by the number of memory cells that can be directly addressed, and by the number of bits each memory reference can fetch and process at a time. For example, a 16-bit microprocessor can reference 216 16-bit memory cells and process 16 bits at a time. In order to handle the individual bits, the number of transistors that must be packed into the microprocessor goes up geometrically with the width of the data path. The earliest microprocessors were 8-bit devices, meaning that every memory reference retrieved 8 bits. To retrieve more bits, say 16, 32, or 64 bits, the 8-bit microprocessor had to make multiple references. Microprocessors have become more powerful as the packing technology has improved. While normally the circuits operate asynchronously, a computer clock times the sequencing of the instructions. Clock speed is given in hertz (Hz, one cycle per second). Each instruction takes an integral number of cycles to complete, with one cycle being the minimum. If an instruction completes its operations in the middle of a cycle, the start of the next instruction must wait for the beginning of the next cycle. Two schemes are used to implement the computer instruction set in the microprocessors. The more traditional complex instruction set computer (CISC) microprocessors implement by hard-wiring some 300 instruction types. 
Perhaps surprisingly, the faster alternative approach, the reduced instruction set computer (RISC), implements only about 10 to 30 percent of the instruction types by hard wiring, and implements the remaining, more complex instructions by programming them at the factory into read-only memory. Since the fewer hard-wired instructions are more frequently used and can operate faster in the simpler, more compact RISC environment, the average instruction time is reduced. Achieving even greater effectiveness and speed calls for more complex coordination in the execution of instructions and data. In one scheme, microprocessors in a cluster share a common memory to form the machine organization (a multiprocessor or parallel processor). The total work, which may come from a single program or independent programs, is parceled out to the individual machines, which operate independently but are coordinated to work in parallel with the other machines in the cluster. Faster speeds can be achieved when the individual processors can work on different parts of the problem or can be assigned to those parts of the problem solution for which they have been especially designed (e.g., input-output operations or computational

operations). Two other schemes, pipelining and array processing, divide an instruction into the separate tasks that must be performed to complete its execution. A pipelining machine executes the tasks concurrently on consecutive pieces of data. An array processor executes the tasks of the different instructions in a sequence simultaneously and coordinates their completion (which might mean abandoning a partially completed instruction if it had been initiated prematurely). These schemes are usually associated with the larger and faster machines. User Interface

The user interface is provided to enable the computer user to initiate or terminate tasks, to interrogate the computer to determine the status of tasks during execution, to give and receive instructions, and to otherwise monitor the operation of the system. The user interface is continuously evolving, but commonly consists of a relatively slow speed keyboard input, a display, and a pointing device. Important display characteristics for usability are the size of the screen and the resolution. The display has its own memory, which refreshes and controls the display. Examples of display devices that have seen common usage are cathode-ray tube (CRT) displays, flatpanel displays, and thin-film displays. For convenience and manual speed, a pointing device such as a mouse, trackball, or touchpad can be used to control the movement of the cursor on the display. The pointing device can also be used to locate and select options available on the display. File Devices

File devices serve to store libraries of directly accessible programs and data in electronically or optically readable formats. The file devices record the information in large blocks rather than by individual addresses. To be used, the blocks must first be transferred into the working memory. Depending on how selected blocks are located, file devices are categorized as sequential or direct-access. On sequential devices the computer locates the information by searching the file from the beginning. Direct-access devices, on the other hand, position the read-write mechanism directly at the location of the needed information. Searching on these devices works concurrently with the CPU and the other devices making up the computer configuration. Magnetic or optical disks that offer a wide choice of options form the commercial direct-access devices. Some file devices are read only and are used to deliver programs or data to be installed or only referenced by the user. Examples of read-only devices are CD-ROM and DVD-ROM. Other file devices can be both read from and written to. Example of read-write devices are magnetic disk drives, CD RJW drives, and DVD-RW drives. The recording surface consists of a platter (or platters) of recording material mounted on a common spindle rotated at high speed. The read-write heads may be permanently positioned along the radius of the platter or may be mounted on a common arm that can be moved radially to locate any specified track of information. Information is recorded on the tracks circumferentially using fixed-size blocks called pages or sectors. Pages divide the storage and memory space alike into blocks of 4096 bytes so that program transfers can be made without creating unusable space. Sectors nominally describe the physical division of the storage space into equal segments for easier positioning of the read-write heads. The access time for retrieving information from a disk depends on three separately quoted factors, called seek time, latency time, and transfer time. Seek time gives the time needed to position the read-write heads from their current track position to the track containing the information. Since the faster fixed-head disks require no radial motion, only latency and transfer time need to be factored into the total access time for these devices. Latency time is the time needed to locate the start of the information along the circumferential track. This time depends on the speed of revolution of the disk, and, on average, corresponds to the time required to revolve the platter half a turn. The average latency time can be reduced by repeating the information several times around the track. Transfer time, usually quoted as a rate, gives the rate at which information can be transferred to memory after it has been located. There is a large variation in transfer rates depending on the disk system selected.


Computer architects sometimes refer to file storage as mass storage or archival storage, depending on whether or not the libraries can be kept off-line from the system and mounted when needed. Disk drives with mountable platter(s) and tape drives constitute the archival storage. Sealed disks that often have fixed heads for faster access are the medium of choice for mass storage. Peripheral Devices and Add-ons

Peripheral devices function as self-contained external units that work on line to the computer to provide or receive information or to control the flow of information. Add-ons are a special class of units whose circuits can be integrated into the circuitry of the computer hardware to augment the basic functionality of the processors. Section 15 covers the electronic technology associated with these devices. An input device may be defined as any device that provides a machine-readable source of information. For engineering work, the most common forms of input are touch-tone dials, mark sensing, bar codes, and keyboards (usually in conjunction with a printing mechanism or video scope). Many bench instruments have been reconfigured to include digital devices to provide direct input to computers. Because of the datahandling capabilities of the computer, these instruments can be simpler, smaller, and less expensive than the hand instruments they replace. New devices have also been introduced: devices for visual measurement of distance, area, speed, and coordinate position of an object; or for inspecting color or shades of gray for computer-guided vision. Other methods of input that are finding greater acceptance include handwriting recognition, printed character recognition, voice digitizers, and picture digitizers. Traditionally, output devices play the role of producing displays for the interpretation of results. A large variety of printers, graphical plotters, video displays, and audio sets have been developed for this purpose. A variety of actuators have been developed for driving control mechanisms. For complex numerical control, programmable controllers (called PLCs) can simultaneously control and update data from multiple tasks. These electronically driven mechanisms and controllers, working with input devices, make possible systems for automatic testing of products, real-time control, and robotics with learning and adaptive capabilities.

Computer Sizes

Computer size refers not only to the physical size but also to the number of electronic elements in the system, and so reflects the performance of the system. Between the two ends of the spectrum, from the largest and fastest to the smallest and slowest, are machines that vary in speed and complexity. Although no nomenclature has been universally adopted that indicates computer size, the following descriptions illustrate a few generally understood terms used for some common configurations. The choice of which computer is appropriate often requires serious evaluation. In the cases where serious evaluation is required, it is necessary to evaluate common performance benchmarks for the machine. In addition, it may be necessary to evaluate the machine by using a mix of applications designed to simulate current usage of existing machines or the expected usage of the proposed machine.
Personal computers (PCs) have been made possible by the advances in solid-state technology. The name applies to computers that can fit the total complement of hardware on a desktop and operate as stand-alone systems so as to provide immediate dedicated services to an individual user. This popular computer size has been generally credited for spreading computer literacy in today's society. Because of its commercial success, many peripheral devices, add-ons, and software products have been (and are continually being) developed. Laptop PCs are personal computers that have the low weight and size of a briefcase and can easily be transported when peripherals are not immediately needed.
The term workstation describes computer systems which have been designed to support complex engineering, scientific, or business applications in a professional environment. Although a top-of-the-line PC or a PC connected as a peripheral to another computer can function like a workstation, one can expect a machine designed as a workstation to offer higher performance than a PC and to support the more specialized peripherals and sophisticated professional software. Nevertheless, the boundary between PCs and workstations changes as the technology advances.
Notebook PCs and the smaller-sized palmtop PCs are portable, battery-operated machines. These machines find excellent use as portable PCs in some applications and as data acquisition systems. However, their undersized keyboards and small displays may limit their usefulness for sustained operations.
Computers larger than a PC or a workstation, called mainframes (and sometimes minis or maxis, depending on size), serve to support multiusers and multiapplications. A remotely accessible computing center may house several mainframes which either operate alone or cooperate with each other. Their high speed, large memories, and high reliability allow them to handle complex programs, and they have been especially well suited to applications that do not allow for any significant downtime, such as banking operations, business transactions, and reservation systems.
At the upper extreme end of the computer spectrum is the supercomputer, the class of the fastest machines that can address large, complex scientific/engineering problems which cannot reasonably be transferred to other machines. Obviously this class of computer must have cache and main memory sizes and speeds commensurate with the speed of the platform. While mass memory sizes must also be large, computers which support large databases may often have larger memories than do supercomputers. Large, complex technical problems must be run with high-precision arithmetic. Because of this, performance is measured in double-precision flops. To realize the increasing demand for higher performance, the designers of supercomputers work at the edge of available technology, especially in the use of multiple processors. With multiple processors, however, performance depends as much on the time spent in communication between processors as on the computational speed of the individual processors. In the final analysis, to muster the supercomputer's inherent speed, the development of the software becomes the problem. Some users report that software must often be hand-tailored to the specific problem. The power of the machines, however, can replace years of work in analysis and experimentation.

Organization of Data Facilities

A distributed computer system can be defined as a collection of computer resources which are remotely located from each other and are interconnected to cooperate in providing their respective services. The resources include both the equipment and the software. Resources distributed to reside near the vicinity where the data is collected or used have an obvious advantage over centralization. But to provide information in a timely and reliable manner, these islands of automation must be integrated. The size and complexity of an enterprise served by a distributed information system can vary from a single-purpose office to a multipleplant conglomerate. An enterprise is defined as a system which has been created to accomplish a mission in its environment and whose goals involve risk. Internally it consists of organized functions and facilities which have been prepared to provide its services and accomplish its mission. When stimulated by an external entity, the enterprise acts to produce its planned response. An enterprise must handle both the flow of material (goods) and the flow of information. The information system tracks the material in the material system, but itself handles only the enterprise’s information. The technology for distributing and integrating the total information system comes under the industrial strategy known as computer-integrated business (CIB) or computer-integrated manufacturing (CIM). The following reasons have been cited for developing CIB and CIM: Most data generated locally has only local significance. Data integrity resides where it is generated.


The quality and consistency of operational decisions demand not only that all parts of the system work with the same data but also that they receive it in a reliable and timely manner.
If a local processor fails, it may disrupt local operations, but the remaining system should continue to function independently.
Small cohesive processors can be best managed and maintained locally.
Through standards, selection of local processes can be made from the best products in a competitive market, and these can be integrated into the total system.
Obsolete processors can be replaced by processors implemented with more advanced technology that conform to standards, without the cost of tailoring the products to the existing system.

Figure 2.2.2 depicts the total information system of an enterprise. The database consists of the organized collection of data the processors use in their operations. Because of differences in their communication requirements, the automated procedures are shown separated into those used in the office and those used on the production floor. In a business environment, the front office operations and back office operations make this separation. While all processes have critical deadlines, the production floor handles real-time operations, defined as processes which must complete their response within a critical deadline or else the results of the operations become moot. This places different constraints on the local-area networks (LANs) that serve the communication needs within the office or within the production floor. To communicate with entities outside the enterprise, the enterprise uses a wide-area network (WAN), normally made up from available public network facilities. For efficient and effective operation, the processes must be interconnected by the communications facilities to share the data in the database and so integrate the services provided.

Fig. 2.2.2 Composite view of an enterprise's information system.

Communication Channels

A communication channel provides the connecting path for transmitting signals between a computing system and a remotely located application. Physically the channel may be formed by a wire line using copper, coaxial cable, or optical-fiber cable; by a wireless line using radio, microwave, or communication satellites; or by a combination of these lines. Capacity, defined as the maximum rate at which information can be transmitted, characterizes a channel independently of the physical form of the line. Theoretically, an ideal noiseless channel that does not distort the signals has a channel capacity C given by

C = 2W

where C is in pulses per second and W is the channel bandwidth. For digital transmission, the Hartley-Shannon theorem sets the capacity of a channel limited by the presence of gaussian noise such as the thermal noise inherent in the components. The formula

C = W log2 (1 + S/N)

gives the capacity C in bits/s in terms of the signal-to-noise ratio S/N and the bandwidth W. Since the signal-to-noise ratio is normally given in decibels divisible by 3 (e.g., 12, 18, 21, 24), the following formula provides a workable approximation to the formula above:

C = W (S/N)dB / 3

where (S/N)dB is the signal-to-noise ratio expressed in decibels. Other forms of noise, signal distortions, and the methods of signal modulation reduce this theoretical capacity appreciably. Nominal transmission speeds for electronic channels vary from 1000 bits/s to almost 20 Mbits/s.
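These relations are easily evaluated numerically. The short Python fragment below is not part of the handbook text; the 3000-Hz bandwidth and 30-dB signal-to-noise ratio are assumed values, roughly typical of a voice-grade line, chosen only to compare the Hartley-Shannon capacity with the decibel approximation.

import math

def capacity_bits_per_s(bandwidth_hz, snr_db):
    # Hartley-Shannon capacity C = W log2(1 + S/N).
    snr_linear = 10.0 ** (snr_db / 10.0)        # convert decibels to a power ratio
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def capacity_db_approximation(bandwidth_hz, snr_db):
    # Workable approximation C = W (S/N)dB / 3.
    return bandwidth_hz * snr_db / 3.0

W, snr = 3000.0, 30.0                            # assumed example values
print(capacity_bits_per_s(W, snr))               # about 29,900 bits/s
print(capacity_db_approximation(W, snr))         # 30,000 bits/s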

Fiber optics, however, forms an almost noise-free medium. The transmission speed in fiber optics depends on the amount a signal spreads because of the multiple reflected paths it takes from its source to its destination. Advances in fiber technology have reduced this spread to give extremely high rates. Effectively, the speeds available in today's optical channels make possible the transmission over a common channel, using digital techniques, of all forms of information: text, voice, and pictures.

Besides agreeing on speed, the transmitter and receiver must agree on the mode of transmission and on the timing of the signals. For stations located remotely from each other, transmission occurs by organizing the bits into groups and transferring them, one bit after another, in a serial mode. One scheme, called asynchronous or start-stop transmission, uses separate start and stop signals to frame a small group of bits representing a character. Separate but identical clocks at the transmitter and receiver time the signals. For transmission of larger blocks at faster rates, the stations use synchronous transmission, which embeds the clock information within the transmitted bits.

Communication Layer Model

Figure 2.2.3 depicts two remotely located stations that must cooperate through communication in accomplishing their respective tasks. The communications substructure provides the communication services needed by the application. The application tasks themselves, however, are outside the scope of the communication substructure. The distinction here is similar to that in a telephone system, which is not concerned with the application other than to provide the needed communication service. The figure shows the communication facilities packaged into a hierarchical modular layer architecture in which each node contains identical kinds of functions at the same layer level. The layer functions represent abstractions of real facilities, but need not represent specific hardware or software. The entities at a layer provide communication services to the layer above or can request the services available from the layer below. The services provided or requested are available only at service points which are identified by addresses at the boundaries that interface the adjacent layers.

The top and bottom levels of the layered structure are unique. The topmost layer interfaces and provides the communication services to the noncommunication functions performed at a node dealing with the application task (the user's program). This layer also requests communication services from the layer below. The bottom layer does not have a lower layer through which it can request communication services. This layer acts to create and recognize the physical signals transmitted between the bottom entities of the communicating partners (it arranges the actual transmission). The medium that provides the path for the transfer of signals (a wire, usually) connects the service access points at the bottom layers, but itself lies outside the layer structure.

Virtual communication occurs between peer entities, those at the same level. Peer-to-peer communication must conform to the layer protocol, defined as the rules and conventions used to exchange information. Actual physical communication proceeds from the upper layers to the bottom, through the communication medium (wire), and then up through the layer structure of the cooperating node. Since the entities at each layer both transmit and receive data, the protocol between peer layers controls both input and output data, depending on the direction of transmission. The transmitting entities accomplish this by appending control information to each data unit that they pass to the layer below. This control information is later interpreted and removed by the peer entities receiving the data unit.

Fig. 2.2.3 Communication layer architecture.
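The header-appending behavior just described can be sketched in a few lines of Python. The layer names and the bracketed "headers" below are illustrative assumptions, not taken from any particular protocol standard.

LAYERS = ["transport", "network", "data link"]   # assumed subset of layer names

def transmit(message):
    # Each transmitting layer appends its control information (a header) to the
    # data unit it receives from the layer above.
    unit = message
    for layer in LAYERS:                         # downward through the stack
        unit = "[" + layer + "]" + unit
    return unit                                  # handed to the physical medium

def receive(unit):
    # Each receiving peer interprets and removes the header added by its partner.
    for layer in reversed(LAYERS):               # upward through the peer stack
        header = "[" + layer + "]"
        assert unit.startswith(header), "layer protocol violation"
        unit = unit[len(header):]
    return unit

frame = transmit("customer order 1234")
print(frame)             # [data link][network][transport]customer order 1234
print(receive(frame))    # customer order 1234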

Communication Standards

Table 2.2.4 lists a few of the hundreds of forums seeking to develop and adopt voluntary standards or to coordinate standards activities. Often users establish standards by agreement that fixes some existing practice. The ISO, however, has described a seven-layer model, called the Reference Model for Open Systems Interconnection (OSI), for coordinating and expediting the development of new implementation standards. The term open systems refers to systems that allow devices to be interconnected and to communicate with each other by conforming to common implementation standards. The ISO model is not of itself an implementation standard, nor does it provide a basis for appraising existing implementations, but it partitions the communication facilities into layers of related function which can be independently standardized by different teams of experts. Its importance lies in the fact that both vendors and users have agreed to provide and accept implementation standards that conform to this model.

Table 2.2.4  Some Groups Involved with Communication Standards

CCITT     Comité Consultatif International Télégraphique et Téléphonique
ISO       International Organization for Standardization
ANSI      American National Standards Institute
EIA       Electronic Industries Association
IEEE      Institute of Electrical and Electronics Engineers
MAP/TOP   Manufacturing Automation Protocols and Technical and Office Protocols Users Group
NIST      National Institute of Standards and Technology

The following lists the names the ISO has given the layers in its model, together with a brief description of their roles.

Application layer provides no services to the other layers but serves as the interface for the specialized communication that may be required by the actual application, such as file transfer, message handling, virtual terminal, or job transfer.

Presentation layer relieves the node from having to conform to a particular syntactical representation of the data by converting the data formats to those needed by the layer above.

Session layer coordinates the dialogue between nodes, including arranging several sessions to use the same transport layer at one time.


Transport layer establishes and releases the connections between peers to provide for data transfer services such as throughput, transit delays, connection setup delays, error rate control, and assessment of resource availability.

Network layer provides for the establishment, maintenance, and release of the route whereby a node directs information toward its destination.

Data link layer is concerned with the transfer of information that has been organized into larger blocks by creating and recognizing the block boundaries.

Physical layer generates and detects the physical signals representing the bits, and safeguards the integrity of the signals against faulty transmission or lack of synchronization.

The IEEE has formulated several implementation standards for office or production floor LANs that conform to the lower two layers of the ISO model. The functions assigned to the ISO data link layer have been distributed over two sublayers, a logical link control (LLC) upper sublayer that generates and interprets the link control commands, and a medium access control (MAC) lower sublayer that frames the data units and acquires the right to access the medium. From this structure, the IEEE has formulated standards for the MAC sublayer and ISO physical layer combination, and a common standard for the LLC sublayer.

CSMA/CD standardized the access method developed by the Xerox Corporation under its trademark Ethernet. The nodes in the network are attached to a common bus. All nodes hear every message transmitted, but accept only those messages addressed to themselves. When a node has a message to transmit, it listens for the line to be free of other traffic before it initiates transmission. However, more than one node may detect the free line and may start to transmit. In this situation the signals will collide and produce a detectable change in the energy level present in the line. Even after a station detects a collision it must continue to transmit to make sure that all stations hear the collision (all data frames must be of sufficient length to be present simultaneously on the line as they pass each station). On hearing a collision, all stations that are transmitting wait a random length of time and then attempt to retransmit.

The MAP/TOP (Manufacturing Automation Protocols and Technical and Office Protocols) Users Group started under the auspices of General Motors and Boeing Information Systems and now has a membership of many thousands of national and international corporations. The corporations in this group have made a commitment to open systems that will allow them to select the best products through standards, agreed to by the group, that will meet their respective requirements. These standards have also been adopted by NIST for governmentwide use under the title Government Open Systems Interconnections Profile (GOSIP).

The common carriers who offer WAN communication services through their public networks have also developed packet-switching networks for public use. Packet switching transmits data in a purely digital format, which, when embellished, can replace the common circuit-switching technology used in analog communications such as voice. A packet is a fixed-sized block of digital data with embedded control information. The network serves to deliver the packets to their destination in an efficient and reliable manner.

Wireless network capabilities for computers based on the IEEE 802.11 set of standards evolved rapidly.
The 802.11 standards enabled the implementation of wireless LANs having performance comparable to that of wired LANs. A wireless LAN is a set of wireless access points attached to a LAN that allow computers to access the LAN. The coverage of the wireless access points can overlap, allowing mobility within a region of coverage and allowing transfer from one access point to another if the computer is moved from one coverage region to another.

RELATIONAL DATABASE TECHNOLOGY

Design Concepts

As computer hardware has evolved from small working memories and tape storage to large working memories and large disk storage, so has database technology moved from accessing and processing of a single,


sequential file to that of multiple, random-access files. A relational database can be defined as an organized collection of interconnected tables or records. The records appear like the flat files of older technology. In each record the information is in columns (fields), which identify attributes, and rows (tuples), which list particular instances of the attributes. One column (or more), known as the primary key, identifies each row. Obviously, the primary key must be unique for each row.

If the data is to be handled in an efficient and orderly way, the records cannot be organized in a helter-skelter fashion, such as simply transporting existing flat files into relational tables. To avoid problems in maintaining and using the database, redundancy should be eliminated by storing each fact at only one place so that, when making additions or deletions, one need not worry about duplicates throughout the database. This goal can be realized by organizing the records into what is known as the third normal form. A record is in the third normal form if and only if all nonkey attributes are mutually independent and fully dependent on the primary key.

The advantages of relational databases, assuming proper normalization, are:

Each fact can be stored exactly once.
The integrity of the data resides locally, where it is generated and can best be managed.
The tables can be physically distributed yet interconnected.
Each user can be given his/her own private view of the database without altering its physical structure.
New applications involving only a part of the total database can be developed independently.
The system can be automated to find the best path through the database for the specified data.
Each table can be used in many applications by employing simple operators without having to transfer and manipulate data superfluous to the application.
A large, comprehensive system can evolve from phased design of local systems.
New tables can be added without corrupting everyone's view of the data.
The data in each table can be protected differently for each user (read-only, write-only).
The tables can be made inaccessible to all users who do not have the right to know.
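As a small, hypothetical illustration of the normalization argument, the Python fragment below shows a customer's city stored redundantly in every order row, and the same facts split into two tables, each with its own primary key, so that the city is stored exactly once; the table names and values are invented for illustration.

# Unnormalized: the customer's city is repeated in every order row, so a change
# of address must be hunted down in many places.
orders_flat = [
    {"order_no": 1, "cust_id": "C7", "city": "Akron", "part": "bearing"},
    {"order_no": 2, "cust_id": "C7", "city": "Akron", "part": "shaft"},
]

# Normalized: each fact is stored once; an order refers to its customer by key.
customers = {"C7": {"city": "Akron"}}                      # primary key: cust_id
orders = [
    {"order_no": 1, "cust_id": "C7", "part": "bearing"},   # primary key: order_no
    {"order_no": 2, "cust_id": "C7", "part": "shaft"},
]

customers["C7"]["city"] = "Dayton"    # an address change is now a single update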

Relational Database Operators

A database system contains the structured collection of data, an on-line catalog and dictionary of data items, and facilities to access and use the data. The system allows users to:

Add new tables
Remove old tables
Insert new data into existing tables
Delete data from existing tables
Retrieve selected data
Manipulate data extracted from several tables
Create specialized reports

As might be expected, these systems include a large collection of operators and built-in functions in addition to those normally used in mathematics. Because of the similarity between database tables and mathematical sets, special set-like operators have been developed to manipulate tables. Table 2.2.5 lists seven typical table operators. The list of functions would normally also include such things as count, sum, average, find the maximum in a column, and find the minimum in a column. A rich collection of report generators offers powerful and flexible capabilities for producing tabular listings, text, graphics (bar charts, pie charts, point plots, and continuous plots), pictorial displays, and voice output.

Table 2.2.5  Relational Database Operators

Operator      Input                                          Output
Select        A table and a condition                        A table of all tuples that satisfy the given condition
Project       A table and an attribute                       A table of all values in the specified attribute
Union         Two tables                                     A table of all unique tuples appearing in one table or the other
Intersection  Two tables                                     A table of all tuples the given tables have in common
Difference    Two tables                                     A table of all tuples appearing in the first and not in the second table
Join          Two tables and a condition                     A table concatenating the attributes of the tuples that satisfy the given condition
Divide        A table, two attributes, and a list of values  A table of values appearing in one specified attribute of the given table when the table has tuples that satisfy every value in the list in the other given attribute
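A minimal sketch of three of the operators in Table 2.2.5 follows, written here in Python over lists of dictionaries; the table contents are invented for illustration, and real database systems implement these operators very differently.

parts = [
    {"part": "bearing", "material": "steel",  "mass": 0.4},
    {"part": "bushing", "material": "bronze", "mass": 0.1},
    {"part": "shaft",   "material": "steel",  "mass": 2.5},
]
suppliers = [{"material": "steel", "supplier": "Acme"}]

def select(table, condition):
    # All tuples that satisfy the given condition.
    return [row for row in table if condition(row)]

def project(table, attribute):
    # All values appearing in the specified attribute.
    return sorted({row[attribute] for row in table})

def join(left, right, condition):
    # Concatenate the attributes of tuple pairs that satisfy the condition.
    return [{**l, **r} for l in left for r in right if condition(l, r)]

print(select(parts, lambda r: r["material"] == "steel"))
print(project(parts, "material"))                       # ['bronze', 'steel']
print(join(parts, suppliers, lambda l, r: l["material"] == r["material"]))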

SOFTWARE ENGINEERING

Programming Goals

Software engineering encompasses the methodologies for analyzing program requirements and for structuring programs to meet the requirements over their life cycle. The objectives are to produce programs that are:

Well documented
Easily read
Proved correct
Bug- (error-) free
Modifiable and maintainable
Implementable in modules

Control-Flow Diagrams

A control-flow diagram, popularly known as a flowchart, depicts all possible sequences of a program during execution by representing the control logic as a directed graph with labeled nodes. The theory associated with flowcharts has been refined so that programs can be structured to meet the above objectives. Without loss of generality, the nodes in a flowchart can be limited to the three types shown in Fig. 2.2.4. A function may be either a transformer which converts input data values into output data values or a transducer which converts the data's morphological form. A label placed in the rectangle specifies the function's action. A predicate node acts to bifurcate the path through the node. A question labels the diamond representing a predicate node. The answer to the question yields a binary value: 0 or 1, yes or no, ON or OFF. One of the output lines is selected accordingly. A connector serves to rejoin separated paths. Normally the circle representing a connector does not contain a label, but when the flowchart is used to document a computer program it may be convenient to label the connector.

Structured programming theory models all programs by their flowcharts by placing minor restrictions on their lines and nodes. Specifically, a flowchart is called a proper program if it has precisely one input and one output line, and for every node there exists a path from the input line through the node to the output line. The restriction prohibiting multiple input or output lines can easily be circumvented by funneling the lines through collector nodes. The other restriction simply discards unwanted program structures, since a program with a path that does not reach the output may not terminate.



Fig. 2.2.4 Basic flowchart nodes.

Not all proper programs exhibit the desirable properties of meeting the objectives listed above. Figure 2.2.5 lists a group of proper programs whose graphs have been identified as being well structured and useful as basic building blocks for creating other well-structured programs. The name assigned to each of these graphs suggests the process it represents. CASE is just a convenient way of showing multiple IFTHENELSEs more compactly.

Fig. 2.2.5 Basic flowchart building blocks.

The structured programming theorem states: any proper program can be reconfigured to an equivalent program producing the same transformation of the data by a flowchart containing at most the graphs labeled BLOCK, IFTHENELSE, and REPEATUNTIL. Every proper program has one input line and one output line, like a function block. The synthesis of more complex well-structured programs is achieved by substituting any of the three building blocks mentioned in the theorem for a function node. In fact, any of the basic building blocks would do just as well. A program so structured will appear as a block of function nodes with a top-down control flow. Because of the top-down structure, the arrow points are not normally shown.

Figure 2.2.6 illustrates the expansion of a program to find the roots of ax² + bx + c = 0. The flowchart is shown in three levels of detail.

Fig. 2.2.6 Illustration of a control-flow diagram.
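A text rendering of such a structured program is sketched below in Python. It is built from nested function blocks and IFTHENELSE decisions in the spirit of Fig. 2.2.6, although the detailed handling of the degenerate and complex-root cases is an assumption, not taken from the figure.

import math

def roots(a, b, c):
    # Roots of a*x**2 + b*x + c = 0, written as function BLOCKs and IFTHENELSE decisions.
    if a == 0:                                   # degenerate: the equation is linear
        return (-c / b,) if b != 0 else ()
    disc = b * b - 4.0 * a * c                   # BLOCK: form the discriminant
    if disc >= 0:                                # IFTHENELSE: real or complex roots
        r = math.sqrt(disc)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))
    r = math.sqrt(-disc)
    return (complex(-b, r) / (2 * a), complex(-b, -r) / (2 * a))

print(roots(1.0, -3.0, 2.0))    # (2.0, 1.0)
print(roots(1.0, 2.0, 5.0))     # complex conjugate pair (-1+2j), (-1-2j)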

Data-Flow Diagrams

Data-flow diagrams structure the actions of a program into a network by tracking the data as it passes through the program. They depict the interworkings of a system by the processes performing the work and the communication between the processes. Data-flow diagrams have proved valuable in analyzing existing or new systems to determine the system requirements and in designing systems to meet those requirements.

Figure 2.2.7 shows the four basic elements used to construct a data-flow diagram. The roles each element plays in the system are:

Rectangular boxes lie outside the system and represent the input data sources or output data sinks that communicate with the system. The sources and sinks are also called terminators.
Circles (bubbles) represent processes or actions performed by the system in accomplishing its function.
Twin parallel lines represent a data file used to collect and store data from among the processes or from a process over time, which can be later recalled.
Arcs or vectors connect the other elements and represent data flows.

A label placed with each element makes clear its role in the system. The circles contain verbs and the other elements contain nouns. The arcs tie the system together. An arc between a terminator and a process represents input to or output from the system. An arc between two processes represents output from one process which is input to the other. An arc between a process and a file represents data gathered by the process and stored in the file, or retrieval of data from the file.

Analysis starts with a contextual view of the system studied in its environment. The contextual view gives the name of the system, the collection of terminators, and the data flows that provide the system inputs and outputs, all accompanied by a statement of the system objective. Details on the terminators and data they provide may also be described by text, but often the picture suffices. It is understood that the form of the input and output may not be dictated by the designer, since they often involve organizations outside the system.

Fig. 2.2.7 Data-flow diagram elements.



Fig. 2.2.8 Illustration of a data-flow diagram.

Typical inputs in industrial systems include customer orders, payment checks, purchase orders, requests for quotations, etc. Figure 2.2.8a illustrates a context diagram for a repair shop. Figure 2.2.8b gives many more operational details showing how the parts of the system interact to accomplish the system's objectives. The designer can restructure the internal processes and the formats of the data flows. The bubbles in a diagram can be broken down into further details to be shown in another data-flow diagram. This can be repeated level after level until the processes become manageable and understandable. To complete the system description, each bubble in the data-flow charts is accompanied by a control-flow diagram or its equivalent to describe the algorithm used to accomplish the actions and a data dictionary describing the items in the data flows and in the databases.

The techniques of data-flow diagrams lend themselves beautifully to the analysis of existing systems. In a complex system it would be unusual for an individual to know all the details, but all system participants know their respective roles: what they receive, whence they receive it, what they do, what they send, and where they send it. By carefully structuring interviews, the complete system can be synthesized to any desired level of detail. Moreover, each system component can be verified, because what is sent from one process must be received by another and what is received by a process must be used by the process. To automate the total system or parts of the system, control bubbles containing transition diagrams can be implemented to control the timing of the processes.

SOFTWARE SYSTEMS

Software Techniques

Two basic operations form the heart of nonnumerical techniques such as those found in handling large database tables. One basic operation, called sorting, collates the information in a table by reordering the items by their key into a specified order. The other basic operation, called searching, seeks to find items in a table whose keys have the same or a related value as a given argument. The search operation may or may not be successful, but in either case further operations follow the search (e.g., retrieve, insert, replace).

One must recognize that computers cannot do mathematics. They can perform a few basic operations such as the four rules of arithmetic, but even in this case the operations are approximations. In fact, computers represent long integers, long rationals, and all the irrational numbers like π and e only as approximations. While computer arithmetic and the computer representation of numbers exceed the precision one commonly uses, the size of problems solved in a computer and the number of operations that are performed can produce misleading results with large computational errors.

Since the computer can handle only the four rules of arithmetic, complex functions must be approximated by polynomials or rational fractions. A rational fraction is a polynomial divided by another polynomial. From these curve-fitting techniques, a variety of weighted-average formulas can be developed to approximate the definite integral of a function. These formulas are used in the procedures for solving differential and integral equations. While differentiation can also be expressed by these techniques, it is seldom used, since the errors become unacceptable.

Taking advantage of the machine's speed and accuracy, one can solve nonlinear equations by trial and error. For example, one can use the Newton-Raphson method to find successive approximations to the roots of an equation. The computer is programmed to perform the calculations needed in each iteration and to terminate the procedure when it has converged on a root. More sophisticated routines can be found in the libraries for finding real, multiple, and complex roots of an equation.

Matrix techniques have been commercially programmed into libraries of prepared modules which can be integrated into programs written in all popular engineering programming languages. These libraries not only contain excellent routines for solving simultaneous linear equations and finding the eigenvalues of characteristic matrices, but also embody procedures guarding against ill-conditioned matrices which lead to large computational errors. Special matrix techniques called relaxation are used to solve partial differential equations on the computer. A typical problem requires setting up a grid of hundreds or thousands of points to describe the region and expressing the equation at each point by finite-difference methods. The resulting matrix is very sparse with a regular pattern of nonzero elements. The form of the matrix circumvents the need for handling large arrays of numbers in the computer and avoids problems in computational accuracy normally found in dealing with extremely large matrices.

The computer is an excellent tool for handling optimization problems. Mathematically these problems are formulated as problems in finding the maximum or minimum of a nonlinear equation. The excellent techniques that have been developed can deal effectively with the unique complexities these problems have, such as saddle points which represent both a maximum and a minimum.
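A minimal sketch of the Newton-Raphson iteration mentioned above is given below in Python; the sample equation, tolerance, and iteration limit are assumptions chosen only for illustration.

def newton_raphson(f, dfdx, x0, tol=1e-10, max_iter=50):
    # Successive approximations x_{n+1} = x_n - f(x_n)/f'(x_n).
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:                      # converged on a root
            return x
    raise RuntimeError("did not converge")

# Example: a root of f(x) = x**3 - 2x - 5, a classical test equation.
root = newton_raphson(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
print(root)    # about 2.0945515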
Another class of problems, called linear programming problems, is characterized by linear constraints on many variables which plot into regions outlined by multidimensional planes (in the two-dimensional case, the region is a plane area enclosed by straight lines). Techniques have been developed to find the optimal solution of the variables satisfying some given value or cost objective function. The solution to the problem proceeds by searching the corners of the region defined by the constraining equations to find points which represent minimum points of a cost function or maximum points of a value function.


The best known and most widely used techniques for solving statistical problems are those of linear statistics. These involve the techniques of least squares (otherwise known as regression). For some problems these techniques do not suffice, and more specialized techniques involving nonlinear statistics must be used, albeit a solution may not exist.

Artificial intelligence (AI) is the study and implementation of programs that model knowledge systems and exhibit aspects of intelligence in problem solving. Typical areas of application are in learning, linguistics, pattern recognition, decision making, and theorem proving. In AI, the computer serves to search a collection of heuristic rules to find a match with a current situation and to make inferences or otherwise reorganize knowledge into more useful forms. AI techniques have been utilized to build sophisticated systems, called expert systems, to aid in producing a timely response in problems involving a large number of complex conditions.

Operating Systems



The operating system provides the services that support the needs that computer programs have in common during execution. Any list of services would include those needed to configure the resources that will be made available to the users; to attach hardware units (e.g., memory modules, storage devices, coprocessors, and peripheral devices) to the existing configuration; to detach modules; to assign default parameters to the hardware and software units; to set up and schedule users' tasks so as to resolve conflicts and optimize throughput; to control system input and output devices; to protect the system and users' programs from themselves and from each other; to manage storage space in the file devices; to protect file devices from faults and illegal use; to account for the use of the system; and to handle in an orderly way any exception which might be encountered during program execution. A well-designed operating system provides these services in a user-friendly environment and yet makes itself and the computer operating staff transparent to the user.

The design of a computer operating system depends on the number of users which can be expected. The focus of single-user systems relies on the monitor to provide a user-friendly system through dialog menus with icons, mouse operations, and templets. Table 2.2.6 lists some popular operating systems for PCs by their trademark names. The design of a multiuser system attempts to give each user the impression that he/she is the lone user of the system. In addition to providing the accoutrements of a user-friendly system, the design focuses on the order of processing the jobs in an attempt to treat each user in a fair and equitable fashion. The basic issues for determining the order of processing center on the selection of job queues: the number of queues (a simple queue or a mix of queues), the method used in scheduling the jobs in the queue (first come-first served, shortest job next, or explicit priorities), and the internal handling of the jobs in the queue (batch, multiprogramming, or timesharing).

Table 2.2.6  Some Popular PC Operating Systems

Trademark    Supplier
Windows      Microsoft Corp.
Unix         Unix Systems Laboratory Inc.
Sun/OS       Sun Microsystems Inc.
Macintosh    Apple Computer Inc.
Linux        Multiple suppliers

Batch operating systems process jobs in a sequential order. Jobs are collected in batches and entered into the computer with individual job instructions which the operating system interprets to set up the job, to allocate the resources needed, to process the job, and to provide the input/output. The operating system processes each job to completion in the order it appears in the batch. In the event a malfunction or fault occurs during execution, the operating system terminates the job currently being executed in an orderly fashion before initiating the next job in sequence.

Multiprogramming operating systems process several jobs concurrently. A job may be initiated any time the memory and other resources which it needs become available. Many jobs may be simultaneously active in the system and maintained in a partial state of completion. The order of execution depends on the priority assignments. Jobs are executed to completion or put into a wait state until a pending request for service has been satisfied. It should be noted that, while the CPU can execute only a single program at any moment of time, operations with peripheral and storage devices can occur concurrently.

Timesharing operating systems process jobs in a way similar to multiprogramming except for the added feature that each job is given a short slice of the available time to complete its tasks. If the job has not been completed within its time slice or if it requests a service from an external device, it is put into a wait status and control passes to the next job. Effectively, the length of the time slice determines the priority of the job.
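The effect of the queue-scheduling choice mentioned earlier (first come-first served versus shortest job next) can be suggested with a short Python sketch; the job names and run times are assumed for illustration only.

# Each job is (name, run_time). Shortest job next reorders the queue by run time;
# first come-first served leaves it in arrival order.
jobs = [("payroll", 9.0), ("report", 1.0), ("backup", 4.0)]

def mean_waiting_time(queue):
    waits, clock = [], 0.0
    for _, run_time in queue:
        waits.append(clock)          # time spent waiting before the job starts
        clock += run_time
    return sum(waits) / len(waits)

fcfs = list(jobs)                                    # first come-first served
sjn = sorted(jobs, key=lambda job: job[1])           # shortest job next
print(mean_waiting_time(fcfs))   # about 6.3 time units
print(mean_waiting_time(sjn))    # 2.0 time units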

Program Preparation Facilities

For the user, the crucial part of a language system is the grammar, which specifies the language syntax and semantics that give the symbols and rules used to compose acceptable statements and the meaning associated with the statements. Compared to natural languages, computer languages are more precise, have a simpler structure, and have a clearer syntax and semantics that allows no ambiguities in what one writes or what one means. For a program to be executed, it must eventually be translated into a sequence of basic machine instructions.

The statements written by a user must first be put on some machine-readable medium or typed on a keyboard for entry into the machine. The translator (compiler) program accepts these statements as input and translates (compiles) them into a sequence of basic machine instructions which form the executable version of the program. After that, the translated (compiled) program can be run. During the execution of a program, a run-time program must also be present in the memory. The purpose of the run-time system is to perform services that the user's program may require. For example, in case of a program fault, the run-time system will identify the error and terminate the program in an orderly manner. Some language systems do not have a separate compiler to produce machine-executable instructions. Instead the run-time system interprets the statements as written, converts them into a pseudo-code, and executes the coded version. Commonly needed functions are made available as prepared modules, either as an integral part of the language or from stored libraries. The documentation of these functions must be studied carefully to assure correct selection and utilization.

Languages may be classified as procedure-oriented or problem-oriented. With procedure-oriented languages, all the detailed steps must be specified by the user. These languages are usually characterized as being more verbose than problem-oriented languages, but are more flexible and can deal with a wider range of problems. Problem-oriented languages deal with more specialized classes of problems. The elements of problem-oriented languages are usually familiar to a knowledgeable professional and so are easier to learn and use than procedure-oriented languages.

The most elementary form of a procedure-oriented language is called an assembler. This class of language permits a computer program to be written directly in basic computer instructions using mnemonic operators and symbolic operands. The assembler's translator converts these instructions into machine-usable form. A further refinement of an assembler permits the use of macros. A macro identifies, by an assigned name and a list of formal parameters, a sequence of computer instructions written in the assembler's format and stored in its subroutine library. The macroassembler includes these macro instructions in the translated program along with the instructions written by the programmer.

Besides these basic language systems there exists a large variety of other language systems. These are called higher-level language systems since they permit more complex statements than are permitted by a


macroassembler. They can also be used on machines produced by different manufacturers or on machines with different instruction repertoires.

One language of historical value is ALGOL 60. It is a landmark in the theoretical development of computer languages. It was designed and standardized by an international committee whose goal was to formulate a language suitable for publishing computer algorithms. Its importance lies in the many language features it introduced which are now common in the more recent languages which succeeded it and in the scientific notation which was used to define it.

FORTRAN (FORmula TRANslator) was one of the first languages catering to the engineering and scientific community, where algebraic formulas specify the computations used within the program. It has been standardized several times. Each version has expanded the language features and has removed undesirable features which lead to unstructured programs.

The PASCAL language couples the ideas of ALGOL 60 to those of structured programming. By allowing only appropriate statement types, it guarantees that any program written in the language will be well structured. In addition, the language introduced new data types and allows programmers to define new complex data structures based on the primitive data types.

The definition of the Ada language was sponsored by the Department of Defense as an all-encompassing language for the development and maintenance of very large, software-intensive projects over their life cycle. While it meets software engineering objectives in a manner similar to Pascal, it has many other features not normally found in programming languages. Like other attempts to formulate very large all-inclusive languages, it is difficult to learn and has not found popular favor. Nevertheless, its many unique features make it especially valuable in implementing programs which cannot be easily implemented in other languages (e.g., programs for parallel computations in embedded computers). By edict, subsets of Ada were forbidden.

Modula-2 was designed to retain the inherent simplicity of PASCAL but include many of the advanced features of Ada. Its advantage lies in implementing large projects involving many programmers. The compilers for this language have rigorous interface cross-checking mechanisms to avoid poor interfaces between components. Another troublesome area is in the implicit use of global data. Modula-2 retains the Ada facilities that allow programmers to share data while avoiding incorrect modification of the data in different program units.

The C language was developed by AT&T's Bell Laboratories and subsequently standardized by ANSI. It has a reputation for translating programs into compact and fast code, and for allowing program segments to be precompiled. Its strength rests in the flexibility of the language; for example, it permits statements from other languages to be included in-line in a C program and it offers the largest selection of operators that mirror those available in an assembly language. Because of its flexibility, programs written in C can become unreadable.

The C++ language was also developed by AT&T Bell Laboratories and standardized by ANSI. The C++ language was designed as an extension of C so that the concepts required to design and implement a program would be expressed more directly and concisely. Use of classes to embody the design concept is one of the primary concepts of C++.

Problem-oriented languages have been developed for every discipline.
A language might deal with a specialized application within an engineering field, or it might deal with a whole gamut of applications covering one or more fields. A class of problem-oriented languages that deserves special mention are those for solving problems in discrete simulation. GPSS, Simscript, and SIMULA are among the most popular. A simulation (another word for model) of a system is used whenever it is desirable to watch a succession of many interrelated events or when there is interplay between the system under study and outside forces. Examples are problems in human-machine interaction and in the modeling of business systems. Typical human-machine problems are the servicing of automatic equipment by a crew of operators (to study crew size and assignments, typically), or responses by shared maintenance crews to equipment subject to unpredictable (random) breakdown. Business models often involve

transportation and warehousing studies. A business model could also study the interactions between a business and the rest of the economy, such as competitive buying in a raw materials market or competitive marketing of products by manufacturers.

Physical or chemical systems may also be modeled. For example, to study the application of automatic control valves in pipelines, the computer model consists of the control system, the valves, the piping system, and the fluid properties. Such a model, when tested, can indicate whether fluid hammer will occur or whether valve action is fast enough. It can also be used to predict pressure and temperature conditions in the fluid when subject to the valve actions.

Another class of problem-oriented languages makes the computer directly accessible to the specialist with little additional training. This is achieved by permitting the user to describe problems to the computer in terms that are familiar in the discipline of the problem and for which the language is designed. Two approaches are used. Figures 2.2.9 and 2.2.10 illustrate these.

One approach sets up the computer program directly from the mathematical equations. In fact, problems were formulated in this manner in the past, where analog computers were especially well suited. Anyone familiar with analog computers finds the transition to these languages easy. Figure 2.2.9 illustrates this approach using the MIMIC language to write the program for the solution of the initial-value problem

M ÿ + Z ẏ + K y = 1     with     ẏ(0) = y(0) = 0

MIMIC is a digital simulation language used to solve systems of ordinary differential equations. The key step in setting up the solution is to isolate the highest-order derivative on the left-hand side of the equation and equate it to an expression composed of the remaining terms. For the equation above, this results in

ÿ = (1 - Z ẏ - K y)/M

The highest-order derivative is obtained by equating it to the expression on the right-hand side of the equation. The lower-order derivatives in the expression are generated successively by integrating the highest-order derivative. The MIMIC language permits the user to write these statements in a format closely resembling mathematical notation.

The alternate approach used in problem-oriented languages permits the setup to be described to the computer directly from the block diagram of the physical system. Figure 2.2.10 illustrates this approach using the SCEPTRE language. SCEPTRE statements are written under headings and subheadings which identify the type of component being described. This language may be applied to network problems of electrical digital-logic elements, mechanical translational or rotational elements, or transfer-function blocks. The translator for this language develops and sets up the equations directly from this description of the network diagram, and so relieves the user from the mathematical aspects of the problem.

MIMIC statements                  Explanation

DY2 = (1. - Z*DY1 - K*Y)/M        Differential equation to be solved. "*" is used for multiplication; DY2, DY1, and Y are defined mnemonics for ÿ, ẏ, and y.
DY1 = INT(DY2, 0.)                INT(A, B) is used to perform integration. It forms successive values of B + ∫A dt.
Y = INT(DY1, 0.)
FIN(T, 10.)                       T is a reserved name representing the independent variable. This statement will terminate execution when T ≥ 10.
CON(M, K, Z)                      Values must be furnished for M, K, and Z. An input with these values must appear after the END card.
PLO(T, DY2)                       Three point plots are produced on the line printer: ÿ, ẏ, and y vs. t.
PLO(T, DY1)
PLO(T, Y)
END                               Necessary last statement.

Fig. 2.2.9 Illustration of a MIMIC program.
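For comparison with the MIMIC listing of Fig. 2.2.9, the same initial-value problem can be programmed in a general-purpose language. The Python sketch below uses a simple Euler integration with assumed parameter values and step size; MIMIC's built-in integration routine is more sophisticated.

# Solve M*y'' + Z*y' + K*y = 1 with y(0) = y'(0) = 0 by rewriting the equation
# as y'' = (1 - Z*y' - K*y)/M and integrating twice.
M, Z, K = 1.0, 0.5, 4.0        # assumed values; MIMIC reads these from a data card
dt, t_final = 0.001, 10.0      # assumed step size and stop time

t, y, dy1 = 0.0, 0.0, 0.0      # dy1 plays the role of DY1, the first derivative
while t < t_final:
    dy2 = (1.0 - Z * dy1 - K * y) / M    # the MIMIC statement for DY2
    dy1 += dy2 * dt                      # INT(DY2, 0.)
    y += dy1 * dt                        # INT(DY1, 0.)
    t += dt

print(y)    # approaches the static deflection 1/K = 0.25 as the motion damps out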

(a) A node is assigned to: ground, any mass, or a point between two elements. The prefix of an element name specifies its type, i.e., M for mass, K for spring, D for damper, and R for force.

(b) SCEPTRE statements        Explanation

MECHANICAL DESCRIPTION
ELEMENTS
M1, 1 - 3 = 10.               Specifies the elements and their positions in the diagram using the node numbers.
K1, 1 - 2 = 40.
D1, 2 - 3 = .5
R1, 1 - 3 = 7.32
OUTPUT
SM1, VM1                      Results are listed on the line printer. The prefix on the element specifies the quantity to be listed: S for displacement, V for velocity.
RUN CONTROL
STOPTIME = 10.                TIME is a reserved name for the independent variable. The statement will terminate execution of the program when TIME is equal to or greater than 10.
END                           Necessary statement.

Fig. 2.2.10 Illustration of SCEPTRE program. (a) Problem to be solved; (b) SCEPTRE program.

Application Packages

An application package differs from a language in that its components have been organized to solve problems in a particular application rather than to create the components themselves. The user interacts with the package by initiating the operations and providing the data. From an operational view, packages are built to minimize or simplify interactions with the users by using a menu to initiate operations and entering the data through templets.

Perhaps the most widely used application package is the word processor. The objective of a word processor is to allow users to compose text in an electronically stored format which can be corrected or modified, and from which a hard copy can be produced on demand. Besides the basic typewriter operations, it contains functions to manipulate text in blocks or columns, to create headers and footers, to number pages, to find and correct words, to format the data in a variety of ways, to create labels, and to merge blocks of text together. The better word processors have an integrated dictionary, a spelling checker to find and correct misspelled words, a grammar checker to find grammatical errors, and a thesaurus. They often have facilities to prepare complex mathematical equations and to include and manipulate graphical artwork, including editing color pictures. When enough page- and document-formatting capability has been added, the programs are known as desktop publishing programs.

One of the programs that contributed to the early acceptance of personal computers was the spread sheet program. These programs simulate the common spread sheet with its columns and rows of interrelated data. The computerized approach has the advantage that the equations are stored, so that the results of a change in data can be shown quickly after any change is made in the data. Modern spread sheet programs have many capabilities, including the ability to obtain information from other spread sheets, to produce a variety of reports, and to prepare equations which have complicated logical aspects.
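The stored-equation behavior that distinguishes a spread sheet from a static table can be suggested in a few lines of Python; the cell names and formulas below are assumptions for illustration only.

cells = {
    "unit_cost": 12.50,
    "quantity": 40,
    "total": lambda get: get("unit_cost") * get("quantity"),
}

def value(name):
    # A cell holds either a number or a stored equation over the other cells.
    entry = cells[name]
    return entry(value) if callable(entry) else entry

print(value("total"))       # 500.0
cells["quantity"] = 55      # change one datum ...
print(value("total"))       # ... and the dependent result is recomputed: 687.5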


Tools for project management have been organized into commercially available application packages. The objectives of these programs are the planning, scheduling, and controlling of the time-oriented activities describing the projects. There are two basically similar techniques used in these packages. One, called CPM (critical path method), assumes that the project activities can be estimated deterministically. The other, called PERT (program evaluation and review technique), assumes that the activities can be estimated probabilistically. Both take into account such items as the requirement that certain tasks cannot start before the completion of other tasks. The concepts of critical path and float are crucial, especially in scheduling the large projects that these programs are used for. In both cases tools are included for estimating project schedules, estimating the resources needed and their schedules, and representing the project activities in graphical as well as tabular form.

A major use of the digital computer is in data reduction, data analysis, and visualization of data. In installations where large amounts of data are recorded and kept, it is often advisable to reduce the amount of data by ganging the data together, by averaging the data with numerical filters to reduce the amount of noise, or by converting the data to a more appropriate form for storage, analysis, visualization, or future processing. This application has been expanded to produce systems for evaluation, automatic testing, and fault diagnosis by coupling the data acquisition equipment to special peripherals that automatically measure and record the data in a digital format and report the data as meaningful, nonphysically measurable parameters associated with a mathematical model.

Computer-aided design/computer-aided manufacturing (CAD/CAM) is an integrated collection of software tools which have been designed to make way for innovative methods of fabricating customized products to meet customer demands. The goal of modern manufacturing is to process orders placed for different products sooner and faster, and to fabricate them without retooling. CAD has the tools for prototyping a design and setting up the factory for production. Working within a framework of agile manufacturing facilities that features automated vehicles, handling robots, assembly robots, and welding and painting robots, the factory sets itself up for production under computer control. Production starts with the receipt of an order on which customers may pick options such as color, size, shapes, and features. Manufacturing proceeds with greater flexibility, quality, and efficiency in producing an increased number of products with a reduced workforce. Effectively, CAD/CAM provides for the ultimate just-in-time (JIT) manufacturing.

Collaboration software is a set of software tools designed to enhance timely and effective communication within a software preparation group. The use of collaboration software has been an important means of improving the speed and effectiveness of communication for local teams and especially for geographically dispersed teams. With such software, all team members have access to current information such as financial or experimental data, reports, and presentations. Proposals and updates can be provided rapidly to all team members in response to changing conditions. Working within the framework of the collaboration tools can also provide the team with a history that can be very valuable in addressing new problems and in providing new team members with the information required to allow them to quickly become effective members of the team.
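A minimal sketch of the critical-path (earliest-finish) calculation at the heart of the CPM packages described above is given below in Python; the activities, durations, and precedence relations are assumed for illustration.

# Activities with deterministic durations and predecessor lists (the CPM assumption).
activities = {
    "design":   (5.0, []),
    "tooling":  (8.0, ["design"]),
    "order":    (3.0, ["design"]),
    "assemble": (4.0, ["tooling", "order"]),
}

earliest_finish = {}

def finish_time(name):
    # Earliest finish = duration + latest earliest finish among all predecessors.
    if name not in earliest_finish:
        duration, preds = activities[name]
        start = max((finish_time(p) for p in preds), default=0.0)
        earliest_finish[name] = start + duration
    return earliest_finish[name]

project_length = max(finish_time(a) for a in activities)
print(project_length)    # 17.0: design -> tooling -> assemble is the critical path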
Two other types of application package illustrate the versatility of data management techniques. One type ties on-line equipment to a computer for collecting real-time data from the production lines. An animated, pictorial display of the production lines forms the heart of the system, allowing supervision in a central control station to continuously track operations. The other type collects time-series data from the various activities in an enterprise. It assists in what is known as management by exception. It is especially useful where the detailed data is so voluminous that it is feasible to examine it only in summaries. The data elements are processed and stored in various levels of detail in a seamless fashion. The system stores the reduced data and connects it to the detailed data from which it was derived. The application package allows management, through simple computer operations, to detect a problem at a higher level and to locate and pinpoint its cause through examination of successively lower levels.

Section

3

Mechanics of Solids and Fluids BY

PETER L. TEA, JR. Professor of Physics Emeritus, The City College of The City University of New York
VITTORIO (RINO) CASTELLI Senior Research Fellow, Retired, Xerox Corp.; Engineering Consultant
J. W. MURDOCK Late Consulting Engineer
LEONARD MEIROVITCH University Distinguished Professor Emeritus, Department of Engineering Science and Mechanics, Virginia Polytechnic Institute and State University

3.1 MECHANICS OF SOLIDS by Peter L. Tea, Jr.
Physical Mechanics . . . 3-2
Systems and Units of Measurements . . . 3-2
Statics of Rigid Bodies . . . 3-3
Center of Gravity . . . 3-6
Moment of Inertia . . . 3-7
Kinematics . . . 3-10
Dynamics of Particles . . . 3-14
Work and Energy . . . 3-17
Impulse and Momentum . . . 3-18
Gyroscopic Motion and the Gyroscope . . . 3-19

3.2 FRICTION by Vittorio (Rino) Castelli
Static and Kinetic Coefficients of Friction . . . 3-20
Rolling Friction . . . 3-25
Friction of Machine Elements . . . 3-25

3.3 MECHANICS OF FLUIDS by J. W. Murdock
Fluids and Other Substances . . . 3-30
Fluid Properties . . . 3-31
Fluid Statics . . . 3-33
Fluid Kinematics . . . 3-36
Fluid Dynamics . . . 3-37
Dimensionless Parameters . . . 3-41
Dynamic Similarity . . . 3-43
Dimensional Analysis . . . 3-44
Forces of Immersed Objects . . . 3-46
Flow in Pipes . . . 3-47
Piping Systems . . . 3-50
ASME Pipeline Flowmeters . . . 3-53
Pitot Tubes . . . 3-57
ASME Weirs . . . 3-57
Open-Channel Flow . . . 3-59
Flow of Liquids from Tank Openings . . . 3-60
Water Hammer . . . 3-61
Computational Fluid Dynamics (CFD) . . . 3-61

3.4 VIBRATION by Leonard Meirovitch
Single-Degree-of-Freedom Systems . . . 3-61
Multi-Degree-of-Freedom Systems . . . 3-70
Distributed-Parameter Systems . . . 3-72
Approximate Methods for Distributed Systems . . . 3-75
Vibration-Measuring Instruments . . . 3-78


3.1 MECHANICS OF SOLIDS by Peter L. Tea, Jr. REFERENCES: Beer and Johnston, “Mechanics for Engineers,” McGraw-Hill. Ginsberg and Genin, “Statics and Dynamics,” Wiley. Higdon and Stiles, “Engineering Mechanics,” Prentice-Hall. Holowenko, “Dynamics of Machinery,” Wiley. Housner and Hudson, “Applied Mechanics,” Van Nostrand. Meriam, “Statics and Dynamics,” Wiley. Mabie and Ocvirk, “Mechanisms and Dynamics of Machinery,” Wiley. Synge and Griffith, “Principles of Mechanics,” McGraw-Hill. Timoshenko and Young, “Advanced Dynamics,” McGraw-Hill. Timoshenko and Young, “Engineering Mechanics,” McGraw-Hill.

PHYSICAL MECHANICS Definitions Force is the action of one body on another which will cause acceleration

of the second body unless acted on by an equal and opposite action counteracting the effect of the first body. It is a vector quantity.
Time is a measure of the sequence of events. In newtonian mechanics it is an absolute quantity. In relativistic mechanics it is relative to the frames of reference in which the sequence of events is observed. The common unit of time is the second.
Inertia is that property of matter which causes a resistance to any change in the motion of a body.
Mass is a quantitative measure of inertia.
Acceleration of Gravity Every object which falls in a vacuum at a given position on the earth's surface will have the same acceleration g. Accurate values of the acceleration of gravity as measured relative to the earth's surface include the effect of the earth's rotation and flattening at the poles. The international gravity formula for the acceleration of gravity at the earth's surface is g = 32.0881(1 + 0.005288 sin² φ − 0.0000059 sin² 2φ) ft/s², where φ is latitude in degrees. For extreme accuracy, the local acceleration of gravity must also be corrected for the presence of large water or land masses and for height above sea level. The absolute acceleration of gravity for a nonrotating earth discounts the effect of the earth's rotation and is rarely used, except outside the earth's atmosphere. If g0 represents the absolute acceleration at sea level, the absolute value at an altitude h is g = g0R²/(R + h)², where R is the radius of the earth, approximately 3,960 mi (6,373 km).
Weight is the resultant force of attraction on the mass of a body due to a gravitational field. On the earth, units of weight are based upon an acceleration of gravity of 32.1740 ft/s² (9.80665 m/s²).
Linear momentum is the product of mass and the linear velocity of a particle and is a vector. The moment of the linear-momentum vector about a fixed axis is the angular momentum of the particle about that fixed axis. For a rigid body rotating about a fixed axis, angular momentum is defined as the product of moment of inertia and angular velocity, each measured about the fixed axis.
An increment of work is defined as the product of an incremental displacement and the component of the force vector in the direction of the displacement or the component of the displacement vector in the direction of the force. The increment of work done by a couple acting on a body during a rotation of dθ in the plane of the couple is dU = M dθ.
Energy is defined as the capacity of a body to do work by reason of its motion or configuration (see Work and Energy).
A vector is a directed line segment that has both magnitude and direction. In script or text, a vector V is distinguished from a scalar V by boldface type. The magnitude of the vector is the scalar V = |V|.
A frame of reference is a specified set of geometric conditions to which other locations, motion, and time are referred. In newtonian mechanics, the fixed stars are referred to as the primary (inertial) frame of reference. Relativistic mechanics denies the existence of a primary

reference frame and holds that all reference frames must be described relative to each other. SYSTEMS AND UNITS OF MEASUREMENTS

In absolute systems, the units of length, mass, and time are considered fundamental quantities, and all other units including that of force are derived. In gravitational systems, the units of length, force, and time are considered fundamental quantities, and all other units including that of mass are derived. In the SI system of units, the unit of mass is the kilogram (kg) and the unit of length is the metre (m). A force of one newton (N) is derived as the force that will give 1 kilogram an acceleration of 1 m/s². In the English engineering system of units, the unit of mass is the pound mass (lbm) and the unit of length is the foot (ft). A force of one pound (1 lbf) is the force that gives a pound mass (1 lbm) an acceleration equal to the standard acceleration of gravity on the earth, 32.1740 ft/s² (9.80665 m/s²). A slug is the mass that will be accelerated 1 ft/s² by a force of 1 lbf. Therefore, 1 slug = 32.1740 lbm. When described in the gravitational system, mass is a derived unit, being the constant of proportionality between force and acceleration, as determined by Newton's second law.

NEWTON'S LAWS
I. If a balanced force system acts on a particle at rest, it will remain at rest. If a balanced force system acts on a particle in motion, it will remain in motion in a straight line without acceleration.
II. If an unbalanced force system acts on a particle, it will accelerate in proportion to the magnitude and in the direction of the resultant force.
III. When two particles exert forces on each other, these forces are equal in magnitude, opposite in direction, and collinear.
Fundamental Equation The basic relation between mass, acceleration, and force is contained in Newton's second law of motion. As applied to a particle of mass, F = ma, force = mass × acceleration. This equation is a vector equation, since the direction of F must be the direction of a, as well as having F equal in magnitude to ma. An alternative form of Newton's second law states that the resultant force is equal to the time rate of change of momentum, F = d(mv)/dt.
Law of the Conservation of Mass The mass of a body remains unchanged by any ordinary physical or chemical change to which it may be subjected. (True in classical mechanics.)
Law of the Conservation of Energy The principle of conservation of energy requires that the total mechanical energy of a system remain unchanged if it is subjected only to forces which depend on position or configuration.
Law of the Conservation of Momentum The linear momentum of a system of bodies is unchanged if there is no resultant external force on the system. The angular momentum of a system of bodies about a fixed axis is unchanged if there is no resultant external moment about this axis.
Law of Mutual Attraction (Gravitation) Two particles attract each other with a force F proportional to their masses m1 and m2 and inversely proportional to the square of the distance r between them, or F = Gm1m2/r², in which G is the gravitational constant. The value of the gravitational constant is G = 6.673 × 10⁻¹¹ m³/(kg·s²) in SI or absolute units, or G = 3.44 × 10⁻⁸ lbf·ft²/slug² in engineering gravitational units. It should be pointed out that the unit of force F in the SI system is the newton and is derived, while the unit force in the gravitational system is the pound-force and is a fundamental quantity.

EXAMPLE. Each of two solid steel spheres 6 in in diam will weigh 32.0 lb on the earth's surface. This is the force of attraction between the earth and the steel sphere. The force of mutual attraction between the spheres if they are just touching is 0.000000136 lbf. (For spheres, center-to-center distance may be used.)
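
A quick numerical check of this example, using the engineering-gravitational value of G quoted above (the variable names are arbitrary):

G = 3.44e-8        # lbf*ft^2/slug^2
W = 32.0           # weight of each sphere, lbf
g = 32.1740        # ft/s^2
m = W / g          # mass of each sphere, slugs
r = 0.5            # center-to-center distance for touching 6-in-diam spheres, ft

F = G * m * m / r**2
print(f"F = {F:.3e} lbf")   # about 1.36e-07 lbf, i.e., 0.000000136 lbf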

STATICS OF RIGID BODIES General Considerations

If the forces acting on a rigid body do not produce any acceleration, they must neutralize each other, i.e., form a system of forces in equilibrium. Equilibrium is said to be stable when the body with the forces acting upon it returns to its original position after being displaced a very small amount from that position; unstable when the body tends to move still farther from its original position than the very small displacement; and neutral when the forces retain their equilibrium when the body is in its new position. External and Internal Forces The forces by which the individual particles of a body act on each other are known as internal forces. All other forces are called external forces. If a body is supported by other bodies while subject to the action of forces, deformations and forces will be produced at the points of support or contact and these internal forces will be distributed throughout the body until equilibrium exists and the body is said to be in a state of tension, compression, or shear. The forces exerted by the body on the supports are known as reactions. They are equal in magnitude and opposite in direction to the forces with which the supports act on the body, known as supporting forces. The supporting forces are external forces applied to the body. In considering a body at a definite section, it will be found that all the internal forces act in pairs, the two forces being equal and opposite. The external forces act singly. General Law When a body is at rest, the forces acting externally to it must form an equilibrium system. This law will hold for any part of the body, in which case the forces acting at any section of the body become external forces when the part on either side of the section is considered alone. In the case of a rigid body, any two forces of the same magnitude, but acting in opposite directions in any straight line, may be added or removed without change in the action of the forces acting on the body, provided the strength of the body is not affected.


Resultant of Any Number of Forces Applied to a Rigid Body at the Same Point Resolve each of the given forces F into components

along three rectangular coordinate axes. If A, B, and C are the angles made with XX, YY, and ZZ, respectively, by any force F, the components will be F cos A along XX, F cos B along YY, F cos C along ZZ; add the components of all the forces along each axis algebraically and obtain ΣF cos A = ΣX along XX, ΣF cos B = ΣY along YY, and ΣF cos C = ΣZ along ZZ. The resultant R = √[(ΣX)² + (ΣY)² + (ΣZ)²]. The angles made by the resultant with the three axes are Ar with XX, Br with YY, Cr with ZZ, where

cos Ar = ΣX/R        cos Br = ΣY/R        cos Cr = ΣZ/R

The direction of the resultant can be determined by plotting the algebraic sums of the components. If the forces are all in the same plane, the components of each of the forces along one of the three axes (say ZZ) will be 0; i.e., angle Cr = 90° and R = √[(ΣX)² + (ΣY)²], cos Ar = ΣX/R, and cos Br = ΣY/R. For equilibrium, it is necessary that R = 0; i.e., ΣX, ΣY, and ΣZ must each be equal to zero.
General Law In order that a number of forces acting at the same point shall be in equilibrium, the algebraic sum of their components along any three coordinate axes must each be equal to zero. When the forces all act in the same plane, the algebraic sum of their components along any two coordinate axes must each equal zero.
When the Forces Form a System in Equilibrium Three unknown forces can be determined if the lines of action of the forces are all known and are in different planes. If the forces are all in the same plane, the lines of action being known, only two unknown forces can be determined. If the lines of action of the unknown forces are not known, only one unknown force can be determined in either case.
Couples and Moments
Couple Two parallel forces of equal magnitude (Fig. 3.1.3) which act in opposite directions and are not collinear form a couple. A couple cannot be reduced to a single force.
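
The component-summation rule for concurrent forces given above is easy to mechanize. The following sketch (force magnitudes and direction cosines are arbitrary sample values) sums the rectangular components and reports the magnitude and direction cosines of the resultant; a zero resultant indicates equilibrium.

import math

# Each force: (magnitude, (cos A, cos B, cos C)); sample data only.
forces = [
    (100.0, (1.0, 0.0, 0.0)),
    ( 50.0, (0.0, 1.0, 0.0)),
    ( 80.0, (0.5774, 0.5774, 0.5774)),
]

SX = sum(F * ca for F, (ca, cb, cc) in forces)
SY = sum(F * cb for F, (ca, cb, cc) in forces)
SZ = sum(F * cc for F, (ca, cb, cc) in forces)

R = math.sqrt(SX**2 + SY**2 + SZ**2)
if R:
    print(f"R = {R:.1f}  cos Ar = {SX/R:.4f}  cos Br = {SY/R:.4f}  cos Cr = {SZ/R:.4f}")
else:
    print("R = 0: the concurrent force system is in equilibrium")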

Composition, Resolution, and Equilibrium of Forces

The resultant of several forces acting at a point is a force which will produce the same effect as all the individual forces acting together.
Forces Acting on a Body at the Same Point The resultant R of two forces F1 and F2 applied to a rigid body at the same point is represented in magnitude and direction by the diagonal of the parallelogram formed by F1 and F2 (see Figs. 3.1.1 and 3.1.2).

R = √(F1² + F2² + 2F1F2 cos α)
sin α1 = (F2 sin α)/R        sin α2 = (F1 sin α)/R

When α = 90°, R = √(F1² + F2²), sin α1 = F2/R, and sin α2 = F1/R. When α = 0°, R = F1 + F2, and when α = 180°, R = F1 − F2; in these two cases the forces act in the same straight line.

A force R may be resolved into two component forces intersecting anywhere on R and acting in the same plane as R, by the reverse of the operation shown by Figs. 3.1.1 and 3.1.2; and by repeating the operation with the components, R may be resolved into any number of component forces intersecting R at the same point and in the same plane.

Fig. 3.1.1

Fig. 3.1.2

Fig. 3.1.3

Displacement and Change of a Couple The forces forming a couple may be moved about and their magnitude and direction changed, provided they always remain parallel to each other and remain in either the original plane or one parallel to it, and provided the product of one of the forces and the perpendicular distance between the two is constant and the direction of rotation remains the same.
Moment of a Couple The moment of a couple is the product of the magnitude of one of the forces and the perpendicular distance between the lines of action of the forces. Fa = moment of couple; a = arm of couple. If the forces are measured in pounds and the distance a in feet, the unit of rotation moment is the foot-pound. If the force is measured in newtons and the distance in metres, the unit is the newton-metre. In the cgs system the unit of rotation moment is 1 cm-dyne. Rotation moments of couples acting in the same plane are conventionally considered to be positive for counterclockwise moments and negative for clockwise moments, although it is only necessary to be consistent within a given problem. The magnitude, direction, and sense of rotation of a couple are completely determined by its moment axis, or moment vector, which is a line drawn perpendicular to the plane in which the couple acts, with an arrow indicating the direction from which the couple will appear to have right-handed rotation; the length of the line represents the magnitude of the moment of the couple. Figure 3.1.4a shows a counterclockwise couple, designated +. Figure 3.1.4b shows a clockwise couple, designated −.




a single vector (having no specific line of action) perpendicular to the plane of the couple. If you let the relaxed fingers of your right hand represent the “swirl” of the forces in the couple, your stiff right thumb will indicate the direction of the vector. Vector couples may be added vectorially in the same manner by which concurrent forces are added. Couples lying in the same or parallel planes are added algebraically. Let 28 lbf·ft (38 N·m), 42 lbf·ft (57 N·m), and 70 lbf·ft (95 N·m) be the moments of three couples in the same or parallel planes; their resultant is a single couple lying in the same or in a parallel plane, whose moment is M = 28 − 42 + 70 = 56 lbf·ft (M = 38 − 57 + 95 = 76 N·m).


Fig. 3.1.4


Fig. 3.1.5

If the polygon formed by the moment vectors of several couples closes itself, the couples form an equilibrium system. Two couples will balance each other

when they lie in the same or parallel planes and have the same moment in magnitude, but opposite in sign. Combination of a Couple and a Single Force in the Same Plane

(Fig. 3.1.5) Given a force F = 18 lbf (80 N) acting as shown at distance x from YY, and a couple whose moment is −180 lbf·ft (−244 N·m) in the same or a parallel plane, to find the resultant. A couple may be changed to any other couple in the same or a parallel plane having the same moment and same sign. Let the couple consist of two forces of 18 lbf (80 N) each and let the arm be 10 ft (3.05 m). Place the couple in such a manner that one of its forces is opposed to the given force at p. This force of the couple and the given force being of the same magnitude and opposite in direction will neutralize each other, leaving the other force of the couple acting at a distance of 10 ft (3.05 m) from p and parallel and equal to the given force 18 lbf (80 N).
General Rule The resultant of a couple and a single force lying in the same or parallel planes is a single force, equal in magnitude, in the same direction and parallel to the single force, and acting at a distance from the line of action of the single force equal to the moment of the couple divided by the single force. The moment of the resultant force about any point on the line of action of the given single force must be of the same sense as that of the couple, positive if the moment of the couple is positive, and negative if the moment of the couple is negative. If the moment of the couple in Fig. 3.1.5 had been + instead of −, the resultant would have been a force of 18 lbf (80 N) acting in the same direction and parallel to F, but at a distance of 10 ft (3.05 m) to the right of it, making the moment of the resultant about any point on F positive. To effect a parallel displacement of a single force F over a distance a, a couple whose moment is Fa must be added to the system. The sense of the couple will depend upon which way it is desired to displace force F.
The moment of a force with respect to a point is the product of the force F and the perpendicular distance from the point to the line of action of the force.
The Moment of a Force with Respect to a Straight Line If the force is resolved into components parallel and perpendicular to the given line, the moment of the force with respect to the line is the product of the magnitude of the perpendicular component and the distance from its line of action to the given line.
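
In code, the General Rule for combining a couple and a single coplanar force reduces to one division; the numbers below are those used in the Fig. 3.1.5 discussion, and the left/right wording simply follows that discussion.

F = 18.0      # lbf, the given single force
M = -180.0    # lbf*ft, moment of the couple (negative = clockwise here)

d = abs(M) / F                       # displacement of the line of action, ft
side = "left" if M < 0 else "right"  # sense follows the sign convention used in the text
print(f"Resultant: {F} lbf, parallel to the given force, {d} ft to the {side} of it")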

of action makes angles Ar, Br, and Cr with axes XX, YY, and ZZ, where cos Ar = ΣX/R, cos Br = ΣY/R, and cos Cr = ΣZ/R; and there are three couples which may be combined by their moment vectors into a single resultant couple having the moment Mr = √[(ΣMx)² + (ΣMy)² + (ΣMz)²], whose moment vector makes angles of Am, Bm, and Cm with axes XX, YY, and ZZ, such that cos Am = ΣMx/Mr, cos Bm = ΣMy/Mr, cos Cm = ΣMz/Mr. If this single resulting couple is in the same plane as the single resulting force at the origin or a plane parallel to it, the system may be reduced to a single force R acting at a distance from R equal to Mr/R. If the couple and force are not in the same or parallel planes, it is impossible to reduce the system to a single force. If R = 0, i.e., if ΣX, ΣY, and ΣZ all equal zero, the system will reduce to a single couple whose moment is Mr. If Mr = 0, i.e., if ΣMx, ΣMy, and ΣMz all equal zero, the resultant will be a single force R. When the forces are all in the same plane, the cosine of one of the angles Ar, Br, or Cr = 0, say, Cr = 90°. Then R = √[(ΣX)² + (ΣY)²], Mr = √(Mx² + My²), and the final resultant is a force equal and parallel to R, acting at a distance from R equal to Mr/R. A system of forces in the same plane can always be replaced by either a couple or a single force. If R = 0 and Mr ≠ 0, the resultant is a couple. If Mr = 0 and R ≠ 0, the resultant is a single force. A rigid body is in equilibrium when acted upon by a system of forces whenever R = 0 and Mr = 0, i.e., when the following six conditions hold true: ΣX = 0, ΣY = 0, ΣZ = 0, ΣMx = 0, ΣMy = 0, and ΣMz = 0. When the system of forces is in the same plane, equilibrium prevails when the following three conditions hold true: ΣX = 0, ΣY = 0, ΣM = 0.
Forces Applied to Support Rigid Bodies

The external forces in equilibrium acting upon a body may be statically determinate or indeterminate according to the number of unknown forces existing. When the forces are all in the same plane and act at a common point, two unknown forces may be determined if their lines of action are known, or one unknown force if its line of action is unknown. When the forces are all in the same plane and are parallel, two unknown forces may be determined if the lines of action are known, or one unknown force if its line of action is unknown. When the forces are anywhere in the same plane, three unknown forces may be determined if their lines of action are known, if they are not parallel or do not pass through a common point; if the lines of action are unknown, only one unknown force can be determined. If the forces all act at a common point but are in different planes, three unknown forces can be determined if the lines of action are known, one if unknown. If the forces act in different planes but are parallel, three unknown forces can be determined if their lines of action are known, one if unknown. The first step in the solution of problems in statics is the determination of the supporting forces. The following data are required for the complete knowledge of supporting forces: magnitude, direction, and point of application. According to the nature of the problem, none, one, or two of these quantities are known. One Fixed Support The point of application, direction, and magnitude of the load are known. See Fig. 3.1.6. As the body on which the forces act is in equilibrium, the supporting force P must be equal in magnitude and opposite in direction to the resultant of the loads L. In the case of a rolling surface, the point of application of the support is obtained from the center of the connecting bolt A (Fig. 3.1.7), both the direction and magnitude being unknown. The point of application and

Forces with Different Points of Application Composition of Forces If each force F is resolved into components parallel to three rectangular coordinate axes XX, YY, ZZ, the magnitude of the resultant is R = √[(ΣX)² + (ΣY)² + (ΣZ)²], and its line

Fig. 3.1.6

Fig. 3.1.7


line of action of the support at B are known, being determined by the rollers. When three forces acting in the same plane on the same rigid body are in equilibrium, their lines of action must pass through the same point O. The load L is known in magnitude and direction. The line of action of the support at B is known on account of the rollers. The point of application of the support at A is known. The three forces are in equilibrium and are in the same plane; therefore, the lines of action must meet at the point O. In the case of the rolling surfaces shown in Fig. 3.1.8, the direction of the support at A is known, the magnitude unknown. The line of action and point of application of the supporting force at B are known, its


closing sides of the individual triangles, the magnitude and direction of the resultant R of any number of forces in the same plane and intersecting

Fig. 3.1.10

at a single point can be found. In Fig. 3.1.11 the lines representing the forces start from point O, and in the force polygon (Fig. 3.1.12) they are joined in any order, the arrows showing their directions following around the polygon in the same direction. The magnitude of the resultant at the point of application of the forces is represented by the closing side R of the force polygon; its direction, as shown by the arrow, is counter to that in the other sides of the polygon. If the forces are in equilibrium, R must equal zero, i.e., the force polygon must close.

Fig. 3.1.8

Fig. 3.1.9

magnitude unknown. The lines of action of the three forces must meet in a point, and the supporting force at A must be perpendicular to the plane XX. If a member of a truss or frame in equilibrium is pinned at two points and loaded at these two points only, the line of action of the forces exerted on the member or by the member at these two points must be along a line connecting the pins. If the external forces acting upon a rigid body in equilibrium are all in the same plane, the equations ΣX = 0, ΣY = 0, and ΣM = 0 must be satisfied. When trusses, frames, and other structures are under discussion, these equations are usually used as ΣV = 0, ΣH = 0, ΣM = 0, where V and H represent vertical and horizontal components, respectively. In Fig. 3.1.9, the directions and points of application of the supporting forces are known, but all three are of unknown magnitudes. Including load L, there are four forces, and the three unknown magnitudes may be determined by ΣX = 0, ΣY = 0, and ΣM = 0. The supports are said to be statically determinate when the laws of equilibrium are sufficient for their determination. When the conditions are not sufficient for the determination of the supports or other forces, the structure is said to be statically indeterminate; the unknown forces can then be determined from considerations involving the deformation of the material. When several bodies are so connected to one another as to make up a rigid structure, the forces at the points of connection must be considered as internal forces and are not taken into consideration in the determination of the supporting forces for the structure as a whole. The distortion of any practically rigid structure under its working loads is so small as to be negligible when determining supporting forces. When the forces acting at the different joints in a built-up structure cannot be determined by dividing the structure up into parts, the structure is said to be statically indeterminate internally. A structure may be statically indeterminate internally and still be statically determinate externally.
Fundamental Problems in Graphical Statics

A force may be represented by a straight line in a determined position, and its magnitude by the length of the straight line. The direction in which it acts may be indicated by an arrow. Polygon of Forces The parallelogram of two forces intersecting each other (see Figs. 3.1.4 and 3.1.5) leads directly to the graphic composition by means of the triangle of forces. In Fig. 3.1.10, R is called the closing side, and represents the resultant of the forces F1 and F2 in magnitude and direction. Its position is given by the point of application O. By means of repeated use of the triangle of forces and by omitting the

Fig. 3.1.11

Fig. 3.1.12

If in a closed polygon one of the forces is reversed in direction, this force becomes the resultant of all the others. Determination of Stresses in Members of a Statically Determinate Plane Structure with Loads at Rest

It will be assumed that the loads are applied at the joints of the structure, i.e., at the points where the different members are connected, and that the connections are pins with no friction. The stresses in the members must then be along lines connecting the pins, unless any member is loaded at more than two points by pin connections.
Equilibrium In order that the whole structure should be in equilibrium, it is necessary that the external forces (loads and supports) shall form a balanced system. Graphical and analytical methods are both of service.
Supporting Forces When the supporting forces are to be determined, it is not necessary to pay any attention to the makeup of the structure under consideration so long as it is practically rigid; the loads may be taken as they occur, or the resultant of the loads may be used instead. When the stresses in the members of the structure are being determined, the loads must be distributed at the joints where they belong.
Method of Joints When all the external forces have been determined, any joint at which there are not more than two unknown forces may be taken and these unknown forces determined by the methods of the stress polygon, resolution, or moments. In Fig. 3.1.13, let O be the joint of a structure and F be the only known force; but let O1 and O2 be two members of the structure joined at O. Then the lines of action of the unknown forces are known and their magnitude may be determined (1) by a stress polygon which, for equilibrium, must close; (2) by resolution into H and V components, using the conditions of equilibrium ΣH = 0, ΣV = 0; or (3) by moments, using any convenient point on the line of action of O1 and O2 and the condition of equilibrium ΣM = 0. No more than two unknown forces can be determined. In this manner, proceeding from joint to joint, the stresses in all the members of the truss can usually be determined if the structure is statically determinate internally.
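
A minimal numerical version of the resolution method at a single joint: two unknown member forces, written as ΣH = 0 and ΣV = 0 and solved as a 2 × 2 linear system. The load and member directions are invented for illustration; tension is taken as positive.

import math

P = 1000.0                                   # downward load at the joint, lbf (assumed)
th1, th2 = math.radians(30.0), 0.0           # directions of members O1 and O2 from the joint

# Sum H = 0 and Sum V = 0 written as a*S = b for the member forces S1, S2.
a = [[math.cos(th1), math.cos(th2)],
     [math.sin(th1), math.sin(th2)]]
b = [0.0, P]

det = a[0][0] * a[1][1] - a[0][1] * a[1][0]  # Cramer's rule for the 2x2 system
S1 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
S2 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
print(f"S1 = {S1:.0f} lbf, S2 = {S2:.0f} lbf")   # 2000 and -1732 (negative = compression)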



Method of Sections The structure may be divided into parts by passing a section through it cutting some of its members; one part may then be treated as a rigid body and the external forces acting upon it determined. Some of these forces will be the stresses in the members themselves. For example, let xx (Fig. 3.1.14) be a section taken through a truss loaded at P1, P2, and P3, and supported on rollers at S. As the whole truss is in equilibrium, any part of it must be also, and consequently the part shown to the left of xx must be in equilibrium under the action of the forces acting externally to it. Three of these forces are the

Quadrant, AB (Fig. 3.1.18a) x0 = y0 = 2r/π = 0.6366r. Semicircumference, AC (Fig. 3.1.18b) y0 = 2r/π = 0.6366r; x0 = 0.

Fig. 3.1.16

stresses in the members aa, bb, and bc, and are the unknown forces to be determined. They can be determined by applying the conditions of equilibrium of forces acting in the same plane but not at the same point, ΣH = 0, ΣV = 0, ΣM = 0. The three unknown forces can be determined only if they are not parallel or do not pass through the same point; if, however, the forces are parallel or meet in a point, two unknown forces only can be determined. Sections may be passed through a structure cutting members in any convenient manner, as a rule, however, cutting not more than three members, unless members are unloaded. For the determination of stresses in framed structures, see Sec. 12.2.

CENTER OF GRAVITY

Consider a three-dimensional body of any size, shape, and weight, but preferably thin and flat. If it is suspended as in Fig. 3.1.15 by a cord from any point A, it will be in equilibrium under the action of the tension in the cord and the weight W. If the experiment is repeated by suspending the body from point B, it will again be in equilibrium. Use a plumb bob through point A and draw the “first line” on the body. If the lines of action of the weight were marked in each case, they would be concurrent at a point G known as the center of gravity or center of mass. Whenever the density of the body is uniform, it will be a constant factor and like geometric shapes of different densities will have the same center of gravity. The term centroid is used in this case since the location of the center of gravity is of geometric concern only. If densities are nonuniform, like geometric shapes will have the same centroid but different centers of gravity.

Fig. 3.1.17

Fig. 3.1.18a

Fig. 3.1.18b

Fig. 3.1.19

CENTROIDS OF PLANE AREAS
Triangle Centroid lies at the intersection of the lines joining the vertices with the midpoints of the sides, and at a distance from any side equal to one-third of the corresponding altitude.
Parallelogram Centroid lies at the point of intersection of the diagonals.
Trapezoid (Fig. 3.1.20) Centroid lies on the line joining the middle points m and n of the parallel sides. The distances ha and hb are

ha = h(a + 2b)/[3(a + b)]        hb = h(2a + b)/[3(a + b)]

Draw BE = a and CF = b; EF will then intersect mn at the centroid.
Any Quadrilateral The centroid of any quadrilateral may be determined by the general rule for areas, or graphically by dividing it into two sets of triangles by means of the diagonals. Find the centroid of each of the four triangles and connect the centroids of the triangles belonging to the same set. The intersection of these lines will be the centroid of the area. Thus, in Fig. 3.1.21, O, O1, O2, and O3 are, respectively, the centroids of the triangles ABD, ABC, BDC, and ACD. The intersection of O1O3 with OO2 gives the centroid of the quadrilateral area.
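
The trapezoid result above can be checked directly; the dimensions here are arbitrary, and the two distances always add up to the altitude h.

# Centroid of a trapezoid with parallel sides a and b and altitude h (Fig. 3.1.20 notation).
a, b, h = 2.0, 6.0, 3.0
ha = h * (a + 2 * b) / (3 * (a + b))   # distance of centroid from side a
hb = h * (2 * a + b) / (3 * (a + b))   # distance of centroid from side b
print(ha, hb, ha + hb)                 # 1.75 1.25 3.0 -- centroid lies nearer the longer side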


Combination of Arcs and Straight Line (Fig. 3.1.19) AD and BC are two quadrants of radius r. y0 = {(AB)r + 2[0.5πr(r − 0.6366r)]}/[AB + 2(0.5πr)]. (Symmetrical about YY.)

Fig. 3.1.13

Fig. 3.1.14

Fig. 3.1.15

Centroids of Technically Important Lines, Areas, and Solids

CENTROIDS OF LINES
Straight Lines The centroid is at its middle point.
Circular Arc AB (Fig. 3.1.16) x0 = r sin c/rad c; y0 = 2r sin² ½c/rad c. (rad c = angle c measured in radians.)
Circular Arc AC (Fig. 3.1.17) x0 = r sin c/rad c; y0 = 0.


Fig. 3.1.20

Fig. 3.1.21

Segment of a Circle (Fig. 3.1.22) x0 = 2⁄3 r sin³ c/(rad c − cos c sin c). A segment may be considered to be a sector from which a triangle is subtracted, and the general rule applied.
Sector of a Circle (Fig. 3.1.23) x0 = 2⁄3 r sin c/rad c; y0 = 4⁄3 r sin² ½c/rad c.
Semicircle x0 = 4⁄3 r/π = 0.4244r; y0 = 0.
Quadrant (90° sector) x0 = y0 = 4⁄3 r/π = 0.4244r.
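
The sector and segment formulas code up directly; here c is taken as the half-angle in radians, and the semicircle and quadrant checks reproduce the values quoted above.

import math

def sector_centroid(r, c):
    """Distance from the center to the centroid of a circular sector, along its bisector."""
    return (2.0 / 3.0) * r * math.sin(c) / c

def segment_centroid(r, c):
    """Same, for a circular segment of half-angle c."""
    return (2.0 / 3.0) * r * math.sin(c) ** 3 / (c - math.sin(c) * math.cos(c))

r = 1.0
print(sector_centroid(r, math.pi / 2))    # semicircle: 4r/(3*pi) = 0.4244r
print(sector_centroid(r, math.pi / 4))    # quadrant, measured along its bisector: 0.6002r
print(segment_centroid(r, math.pi / 2))   # a segment of half-angle 90 deg is the semicircle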

Parabolic Half Segment (Fig. 3.1.24) Area ABO: x0 = 3⁄5 x1; y0 = 3⁄8 y1.
Parabolic Spandrel (Fig. 3.1.24) Area AOC: x′0 = 3⁄10 x1; y′0 = 3⁄4 y1.

Fig. 3.1.22

Truncated Circular Cone If h is the height of the frustum and R and r the radii of the bases, the distance from the surface of the base whose radius is R to the centroid is h(R² + 2Rr + 3r²)/[4(R² + Rr + r²)].

Fig. 3.1.23

Fig. 3.1.27

Segment of a Sphere (Fig. 3.1.28) Volume ABC: x0 = 3(2r − h)²/[4(3r − h)].

Fig. 3.1.28

Quadrant of an Ellipse (Fig. 3.1.25) Area OAB: x0 = 4⁄3 (a/π); y0 = 4⁄3 (b/π).
The centroid of a figure such as that shown in Fig. 3.1.26 may be determined as follows: Divide the area OABC into a number of parts by lines drawn perpendicular to the axis XX, e.g., 1-1, 2-2, 3-3, etc. These parts will be approximately either triangles, rectangles, or trapezoids. The area of each division may be obtained by taking the product of its

Hemisphere x0 = 3r/8.
Hollow Hemisphere x0 = 3(R⁴ − r⁴)/[8(R³ − r³)], where R and r are,

Fig. 3.1.24


respectively, the outer and inner radii.
Sector of a Sphere (Fig. 3.1.28) Volume OABCO: x′0 = 3⁄8 (2r − h).
Ellipsoid, with Semiaxes a, b, and c For each octant, distance from center of gravity to each of the bounding planes = 3⁄8 × length of semiaxis perpendicular to the plane considered.
The formulas given for the determination of the centroid of lines and areas can be used to determine the areas and volumes of surfaces and solids of revolution, respectively, by employing the theorems of Pappus, Sec. 2.1.
Determination of Center of Gravity of a Body by Experiment The center of gravity may be determined by hanging the body up from different points and plumbing down; the point of intersection of the plumb lines will give the center of gravity. It may also be determined as shown in Fig. 3.1.29. The body is placed on knife-edges which rest on platform scales. The sum of the weights registered on the two scales (w1 + w2) must equal the weight (w) of the body. Taking a moment axis at either end (say, O), w2A/w = x0 = distance from O to the plane containing the center of gravity. (See also Fig. 3.1.15 and accompanying text.)

Fig. 3.1.26

Fig. 3.1.25

mean height and its base. The centroid of each area may be obtained as previously shown. The sum of the moments of all the areas about XX and YY, respectively, divided by the sum of the areas will give approximately the distances from the center of gravity of the whole area to the axes XX and YY. The greater the number of areas taken, the more nearly exact the result.
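
The strip procedure just described is easily carried out numerically. The sketch below applies it to an area bounded by a parabola (an arbitrary choice, made so that the exact answer is known); each vertical strip contributes its area and its moments about the two axes.

# Approximate centroid of the area under y = x**2, 0 <= x <= 1, by thin vertical strips.
# Exact values for comparison: A = 1/3, x0 = 3/4, y0 = 3/10.
n = 1000
dx = 1.0 / n
area = mx = my = 0.0
for i in range(n):
    x = (i + 0.5) * dx        # mid-ordinate of the strip
    y = x * x                 # strip height from the bounding curve
    dA = y * dx
    area += dA
    mx += x * dA              # moment of the strip about the y axis
    my += (y / 2.0) * dA      # moment about the x axis (strip centroid at y/2)

print(f"A = {area:.4f}  x0 = {mx / area:.4f}  y0 = {my / area:.4f}")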

Fig. 3.1.29

CENTROIDS OF SOLIDS
Prism or Cylinder with Parallel Bases The centroid lies in the center of the line connecting the centers of gravity of the bases.
Oblique Frustum of a Right Circular Cylinder (Fig. 3.1.27) Let 1 2 3 4 be the plane of symmetry. The distance from the base to the centroid is ½h + (r² tan² c)/8h, where c is the angle of inclination of the oblique section to the base. The distance of the centroid from the axis of the cylinder is r² tan c/4h.
Pyramid or Cone The centroid lies in the line connecting the centroid of the base with the vertex and at a distance of one-fourth of the altitude above the base.
Truncated Pyramid If h is the height of the truncated pyramid and A and B the areas of its bases, the distance of its centroid from the surface of A is

h(A + 2√(AB) + 3B)/[4(A + √(AB) + B)]

Graphical Determination of the Centroids of Plane Areas See Fig. 3.1.40.

MOMENT OF INERTIA

The moment of inertia of a solid body with respect to a given axis is the limit of the sum of the products of the masses of each of the elementary particles into which the body may be conceived to be divided and the square of their distance from the given axis. If dm = dw/g represents the mass of an elementary particle and y its distance from an axis, the moment of inertia I of the body about this axis will be I = ∫y² dm = ∫y² dw/g. The moment of inertia may be expressed in weight units (Iw = ∫y² dw), in which case the moment of inertia in weight units, Iw, is equal to the moment of inertia in mass units, I, multiplied by g.



If I = k²m, the quantity k is called the radius of gyration or the radius of inertia.

If a body is considered to be composed of a number of parts, its moment of inertia about an axis is equal to the sum of the moments of inertia of the several parts about the same axis, or I = I1 + I2 + I3 + ⋯ + In.
The moment of inertia of an area with respect to a given axis that lies in the plane of the area is the limit of the sum of the products of the elementary areas into which the area may be conceived to be divided and the square of their distance (y) from the axis in question. I = ∫y² dA = k²A, where k = radius of gyration. The quantity ∫y² dA is more properly referred to as the second moment of area since it is not a measure of inertia in a true sense. Formulas for moments of inertia and radii of gyration of various areas follow later in this section.
Relation between the Moments of Inertia of an Area and a Solid

The moment of inertia of a solid of elementary thickness about an axis is equal to the moment of inertia of the area of one face of the solid about the same axis multiplied by the mass per unit volume of the solid times the elementary thickness of the solid.
Moments of Inertia about Parallel Axes The moment of inertia of an area or solid about any given axis is equal to the moment of inertia about a parallel axis through the center of gravity plus the square of the distance between the two axes times the area or mass. In Fig. 3.1.30a, the moment of inertia of the area ABCD about axis YY is equal to I0 (or the moment of inertia about Y0Y0 through the center of gravity of the area and parallel to YY) plus x0²A, where A = area of ABCD. In Fig. 3.1.30b, the moment of inertia of the mass m about YY = I0 + x0²m. Y0Y0 passes through the centroid of the mass and is parallel to YY.
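
A brief numerical illustration of the transfer (parallel-axis) rule for an area, using an arbitrary rectangle:

# Rectangle b wide and h deep: centroidal moment of inertia, then transfer to a
# parallel axis a distance d away. Dimensions are arbitrary.
b, h, d = 2.0, 6.0, 4.0
A = b * h
I0 = b * h**3 / 12.0      # about the centroidal axis parallel to the width b
I = I0 + d**2 * A         # about the displaced parallel axis
print(I0, I)              # 36.0 228.0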

Principal Moments of Inertia In every plane area, a given point being taken as the origin, there is at least one pair of rectangular axes

Fig. 3.1.31

Fig. 3.1.32

in the plane of the area about one of which the moment of inertia is a maximum, and a minimum about the other. These moments of inertia are called the principal moments of inertia, and the axes about which they are taken are the principal axes of inertia. One of the conditions for principal moments of inertia is that the product of inertia Ixy shall equal zero. Axes of symmetry of an area are always principal axes of inertia.
Relation between Products of Inertia and Parallel Axes In Fig. 3.1.33, X0X0 and Y0Y0 pass through the center of gravity of the area parallel to the given axes XX and YY. If Ixy is the product of inertia for XX and YY, and Ix0y0 that for X0X0 and Y0Y0, then Ixy = Ix0y0 + abA.


Fig. 3.1.33 Mohr’s Circle The principal moments of inertia and the location of the principal axes of inertia for any point of a plane area may be established graphically as follows. Given at any point A of a plane area (Fig. 3.1.34), the moments of inertia Ix and Iy about axes X and Y, and the product of inertia Ixy relative to X and Y. The graph shown in Fig. 3.1.34b is plotted on rectangular coordinates with moments of inertia as abscissas and products of inertia

Fig. 3.1.30

Polar Moment of Inertia The polar moment of inertia (Fig. 3.1.31) is taken about an axis perpendicular to the plane of the area. Referring to Fig. 3.1.31, if Iy and Ix are the moments of inertia of the area A about YY and XX, respectively, then the polar moment of inertia Ip = Ix + Iy, or the polar moment of inertia is equal to the sum of the moments of inertia about any two axes at right angles to each other in the plane of the area and intersecting at the pole.
Product of Inertia This quantity will be represented by Ixy = ∫∫xy dy dx, where x and y are the coordinates of any elementary part into which the area may be conceived to be divided. Ixy may be positive or negative, depending upon the position of the area with respect to the coordinate axes XX and YY.
Relation between Moments of Inertia about Axes Inclined to Each Other Referring to Fig. 3.1.32, let Iy and Ix be the moments of inertia

of the area A about YY and XX, respectively, I′y and I′x the moments about Y′Y′ and X′X′, and Ixy and I′xy the products of inertia for XX and YY, and X′X′ and Y′Y′, respectively. Also, let c be the angle between the respective pairs of axes, as shown. Then,

I′y = Iy cos² c + Ix sin² c + Ixy sin 2c
I′x = Ix cos² c + Iy sin² c − Ixy sin 2c
I′xy = [(Ix − Iy)/2] sin 2c + Ixy cos 2c

Fig. 3.1.34

as ordinates. Lay out Oa = Ix and ab = Ixy (upward for positive products of inertia, downward for negative). Lay out Oc = Iy and cd = negative of Ixy. Draw a circle with bd as diameter. This is Mohr's circle. The maximum moment of inertia is I′x = Of; the minimum moment of inertia is I′y = Og. The principal axes of inertia are located as follows. From axis AX (Fig. 3.1.34a) lay out the angular distance θ = ½∠bef. This locates axis AX′, one principal axis of inertia (I′x = Of). The other principal axis of inertia is AY′, perpendicular to AX′ (I′y = Og).
The moment of inertia of any area may be considered to be made up of the sum or difference of the known moments of inertia of simple figures. For example, the dimensioned figure shown in Fig. 3.1.35 represents the section of a rolled shape with hole oprs and may be divided
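
The principal moments located graphically by Mohr's circle can also be computed from the circle's center and radius; the sketch below does so for arbitrary values of Ix, Iy, and Ixy (the sign of the rotation angle depends on the product-of-inertia convention adopted).

import math

Ix, Iy, Ixy = 40.0, 10.0, 15.0                  # assumed values, consistent units

center = (Ix + Iy) / 2.0
radius = math.sqrt(((Ix - Iy) / 2.0) ** 2 + Ixy ** 2)
I_max, I_min = center + radius, center - radius
theta = 0.5 * math.atan2(2.0 * Ixy, Ix - Iy)    # rotation from XX to a principal axis

print(f"I_max = {I_max:.2f}  I_min = {I_min:.2f}  theta = {math.degrees(theta):.1f} deg")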


into the semicircle abc, rectangle edkg, and triangles mfg and hkl, from which the rectangle oprs is to be subtracted. Referring to axis XX,

Ixx = π4⁴/8 for semicircle abc + (2 × 11³)/3 for rectangle edkg + 2[(5 × 3³)/36 + 10²(5 × 3)/2] for the two triangles mfg and hkl

From the sum of these there is to be subtracted Ixx = [(2 × 3³)/12 + 4²(2 × 3)] for the rectangle oprs. If the moment of inertia of the whole area is required about an axis parallel to XX, but passing through the center of gravity of the whole area, I0 = Ixx − x0²A, where x0 = distance from XX to the center of gravity. The moments of inertia of built-up sections used in structural work may be found in the same manner, the moments of inertia of the different rolled sections being given in Sec. 12.2.

Fig. 3.1.35

Moments of Inertia of Solids For moments of inertia of solids about parallel axes, Ix = I0 + x0²m.
Moment of Inertia with Reference to Any Axis Let a mass particle dm of a body have x, y, and z as coordinates, XX, YY, and ZZ being the coordinate axes and O the origin. Let X′X′ be any axis passing through the origin and making angles of A, B, and C with XX, YY, and ZZ, respectively. The moment of inertia with respect to this axis then becomes equal to

I′x = cos² A ∫(y² + z²) dm + cos² B ∫(z² + x²) dm + cos² C ∫(x² + y²) dm − 2 cos B cos C ∫yz dm − 2 cos C cos A ∫zx dm − 2 cos A cos B ∫xy dm

Let the moment of inertia about XX = Ix = ∫(y² + z²) dm, about YY = Iy = ∫(z² + x²) dm, and about ZZ = Iz = ∫(x² + y²) dm. Let the products of inertia about the three coordinate axes be

Iyz = ∫yz dm

Izx = ∫zx dm


Solid right circular cone about an axis through its apex and perpendicular to its axis: I = 3M[(r²/4) + h²]/5. (h = altitude of cone, r = radius of base.)
Solid right circular cone about its axis of revolution: I = 3Mr²/10.
Ellipsoid with semiaxes a, b, and c: I about diameter 2c (z axis) = 4πmabc(a² + b²)/15. [Equation of ellipsoid: (x²/a²) + (y²/b²) + (z²/c²) = 1.]
Ring with Circular Section (Fig. 3.1.36) Iyy = ½mπ²Ra²(4R² + 3a²); Ixx = mπ²Ra²[R² + (5a²/4)].

Fig. 3.1.36

Fig. 3.1.37

Approximate Moments of Inertia of Solids In order to determine the moment of inertia of a solid, it is necessary to know all its dimensions. In the case of a rod of mass M (Fig. 3.1.37) and length l, with shape and size of the cross section unknown, making the approximation that the weight is all concentrated along the axis of the rod, the moment of inertia about YY will be Iyy = ∫(M/l)x² dx (limits 0 to l) = Ml²/3.
A thin plate may be treated in the same way (Fig. 3.1.38): Iyy = ∫(M/l)x² dx (limits 0 to l) = ⅓Ml².
Thin Ring, or Cylinder (Fig. 3.1.39) Assume the mass M of the ring or cylinder to be concentrated at a distance r from O. The moment of inertia about an axis through O perpendicular to the plane of the ring or along the axis of the cylinder will be I = Mr²; this will be greater than the exact moment of inertia, and r is sometimes taken as the distance from O to the center of gravity of the cross section of the rim.

Ixy = ∫xy dm

Then the moment of inertia I′x becomes equal to

I′x = Ix cos² A + Iy cos² B + Iz cos² C − 2Iyz cos B cos C − 2Izx cos C cos A − 2Ixy cos A cos B

The moment of inertia of any solid may be considered to be made up of the sum or difference of the moments of inertia of simple solids of which the moments of inertia are known.
Moments of Inertia of Important Solids (Homogeneous)

w = weight per unit of volume of the body
m = w/g = mass per unit of volume of the body
M = W/g = total mass of body
r = radius
I = moment of inertia (mass units)
Iw = I × g = moment of inertia (weight units)
Solid circular cylinder about its axis: I = πr⁴ma/2 = Mr²/2. (a = length of axis of cylinder.)
Solid circular cylinder about an axis through the center of gravity and perpendicular to axis of cylinder: I = M[r² + (a²/3)]/4.
Hollow circular cylinder about its axis: I = M(r1² + r2²)/2 (r1 and r2 = outer and inner radii).
Thin-walled hollow circular cylinder about its axis: I = Mr².
Solid sphere about a diameter: I = 8πmr⁵/15 = 2Mr²/5.
Thin hollow sphere about a diameter: I = 2Mr²/3.
Thick hollow sphere about a diameter: I = 8πm(r1⁵ − r2⁵)/15. (r1 and r2 are outer and inner radii.)
Rectangular prism about an axis through center of gravity and perpendicular to a face whose dimensions are a and b: I = M(a² + b²)/12.
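
Two of the tabulated results are checked numerically below, confirming that the density form and the total-mass form agree; the density and dimensions are arbitrary.

import math

m = 0.005          # mass per unit volume (arbitrary)
r, a = 3.0, 10.0   # radius and cylinder length (arbitrary)

M_cyl = m * math.pi * r**2 * a
print(math.pi * r**4 * m * a / 2.0, M_cyl * r**2 / 2.0)           # solid cylinder about its axis

M_sph = m * (4.0 / 3.0) * math.pi * r**3
print(8.0 * math.pi * m * r**5 / 15.0, 2.0 * M_sph * r**2 / 5.0)  # solid sphere about a diameter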

Fig. 3.1.38

Fig. 3.1.39

Flywheel Effect The moment of inertia of a solid is often called flywheel effect in the solution of problems dealing with rotating bodies. Graphical Determination of the Centroids and Moments of Inertia of Plane Areas Required to find the center of gravity of the area MNP

(Fig. 3.1.40) and its moment of inertia about any axis XX. Draw any line SS parallel to XX and at a distance d from it. Draw a number of lines such as AB and EF across the figure parallel to XX. From E and F draw ER and FT perpendicular to SS. Select as a pole any

Fig. 3.1.40



point on XX, preferably the point nearest the area, and draw OR and OT, cutting EF at E′ and F′. If the same construction is repeated, using other lines parallel to XX, a number of points will be obtained, which, if connected by a smooth curve, will give the area M′N′P′. Project E′ and F′ onto SS by lines E′R′ and F′T′. Join R′ and T′ with O, obtaining E″ and F″; connect the points obtained using other lines parallel to XX and obtain an area M″N″P″. The area M′N′P′ × d = moment of area MNP about the line XX, and the distance from XX to the centroid of MNP = area M′N′P′ × d/area MNP. Also, area M″N″P″ × d² = moment of inertia of MNP about XX. The areas M′N′P′ and M″N″P″ can best be obtained by use of a planimeter.

KINEMATICS

Kinematics is the study of the motion of bodies without reference to the

forces causing that motion or the mass of the bodies. The displacement of a point is the directed distance that a point has moved on a geometric path from a convenient origin. It is a vector, having both magnitude and direction, and is subject to all the laws and characteristics attributed to vectors. In Fig. 3.1.41, the displacement of the point A from the origin O is the directed distance O to A, symbolized by the vector s. The velocity of a point is the time rate of change of displacement, or v = ds/dt. The acceleration of a point is the time rate of change of velocity, or a = dv/dt.

A velocity-time curve offers a convenient means for the study of acceleration. The slope of the curve at any point will represent the acceleration at that time. In Fig. 3.1.43a the slope is constant; so the acceleration must be constant. In the case represented by the full line, the acceleration is positive; so the velocity is increasing. The dotted line shows a negative acceleration and therefore a decreasing velocity. In Fig. 3.1.43b the slope of the curve varies from point to point; so the acceleration must also vary. At p and q the slope is zero; therefore, the acceleration of the point at the corresponding times must also be zero. The area under the velocity-time curve between any two ordinates such as NL and HT will represent the distance moved in time interval LT. In the case of the uniformly accelerated motion shown by the full line in Fig. 3.1.43a, the area LNHT is ½(NL + HT) × (OT − OL) = mean velocity multiplied by the time interval = space passed over during this time interval. In Fig. 3.1.43b the mean velocity can be obtained from the equation of the curve by means of the calculus, or graphically by approximation of the area.

Fig. 3.1.43

An acceleration-time curve (Fig. 3.1.44) may be constructed by plotting accelerations as ordinates, and times as abscissas. The area under this curve between any two ordinates will represent the total increase in velocity during the time interval. The area ABCD represents the total increase in velocity between time t1 and time t2. General Expressions Showing the Relations between Space, Time, Velocity, and Acceleration for Rectilinear Motion

SPECIAL MOTIONS

Fig. 3.1.41

The kinematic definitions of velocity and acceleration involve the four variables, displacement, velocity, acceleration, and time. If we eliminate the variable of time, a third equation of motion is obtained, ds/v = dt = dv/a. This differential equation, together with the definitions of velocity and acceleration, make up the three kinematic equations of motion, v = ds/dt, a = dv/dt, and a ds = v dv. These differential equations are usually limited to the scalar form when expressed together, since the last can only be properly expressed in terms of the scalar dt. The first two, since they are definitions for velocity and acceleration, are vector equations. A space-time curve offers a convenient means for the study of the motion of a point that moves in a straight line. The slope of the curve at any point will represent the velocity at that time. In Fig. 3.1.42a the slope is constant, as the graph is a straight line; the velocity is therefore uniform. In Fig. 3.1.42b the slope of the curve varies from point to point, and the velocity must also vary. At p and q the slope is zero; therefore, the velocity of the point at the corresponding times must also be zero.

Fig. 3.1.42

Uniform Motion If the velocity is constant, the acceleration must be zero, and the point has uniform motion. The space-time curve becomes a straight line inclined toward the time axis (Fig. 3.1.42a). The velocity-time curve becomes a straight line parallel to the time axis. For this motion a = 0, v = constant, and s = s0 + vt.
Uniformly Accelerated or Retarded Motion If the velocity is not uniform but the acceleration is constant, the point has uniformly accelerated motion; the acceleration may be either positive or negative. The space-time curve becomes a parabola and the velocity-time curve becomes a straight line inclined toward the time axis (Fig. 3.1.43a). The acceleration-time curve becomes a straight line parallel to the time axis. For this motion a = constant, v = v0 + at, s = s0 + v0t + ½at². If the point starts from rest, v0 = 0. Care should be taken concerning the sign + or − for acceleration.
Composition and Resolution of Velocities and Acceleration
Resultant Velocity A velocity is said to be the resultant of two other velocities when it is represented by a vector that is the geometric sum of the vectors representing the other two velocities. This is the parallelogram of motion. In Fig. 3.1.45, v is the resultant of v1 and v2
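
The constant-acceleration relations are summarized in the short sketch below; the starting values are arbitrary.

# Uniformly accelerated rectilinear motion: v = v0 + a*t, s = s0 + v0*t + a*t**2/2.
s0, v0, a = 0.0, 10.0, 3.2          # ft, ft/s, ft/s^2 (assumed)
for t in (0.0, 1.0, 2.0, 5.0):
    v = v0 + a * t
    s = s0 + v0 * t + 0.5 * a * t**2
    print(f"t = {t:3.1f} s   v = {v:6.2f} ft/s   s = {s:7.2f} ft")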

Fig. 3.1.44

Fig. 3.1.45


and is represented by the diagonal of a parallelogram of which v1 and v2 are the sides; or it is the third side of a triangle of which v1 and v2 are the other two sides.
Polygon of Motion  The parallelogram of motion may be extended to the polygon of motion. Let v1, v2, v3, v4 (Fig. 3.1.46a) show the directions of four velocities imparted in the same plane to point O. If the lines v1, v2, v3, v4 (Fig. 3.1.46b) are drawn parallel to and proportional to the velocities imparted to point O, v will represent the resultant velocity imparted to O. It will make no difference in what order the velocities are taken in constructing the motion polygon. As long as the arrows showing the direction of the motion follow each other in order about the polygon, the resultant velocity of the point will be represented in magnitude by the closing side of the polygon, but opposite in direction.


in the path is resolved by means of a parallelogram into components tangent and normal to the path, the normal acceleration an = v²/ρ, where ρ = radius of curvature of the path at the point in question, and the tangential acceleration at = dv/dt, where v = velocity tangent to the path at the same point. a = √(an² + at²). The normal acceleration is constantly directed toward the concave side of the path.

Fig. 3.1.48

Fig. 3.1.46

Resolution of Velocities  Velocities may be resolved into component velocities in the same plane, as shown by Fig. 3.1.47. Let the velocity of

EXAMPLE. Figure 3.1.49 shows a point moving in a curvilinear path. At p1 the velocity is v1; at p2 the velocity is v2. If these velocities are drawn from pole O (Fig. 3.1.49b), Δv will be the difference between v2 and v1. The acceleration during travel p1p2 will be Δv/Δt, where Δt is the time interval. The approximation becomes closer to instantaneous acceleration as shorter intervals Δt are employed.

point O be vr. In Fig. 3.1.47a this velocity is resolved into two components in the same plane as vr and at right angles to each other, vr = √(v1² + v2²). In Fig. 3.1.47b the components are in the same plane as vr, but are not at right angles to each other. In this case, vr = √(v1² + v2² + 2v1v2 cos B). If the components v1 and v2 and angle B are known, the direction of vr can be determined: sin bOc = (v1/vr) sin B and sin cOa = (v2/vr) sin B. Where v1 and v2 are at right angles to each other, sin B = 1.

Fig. 3.1.49

Fig. 3.1.47
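The resultant and its direction can be evaluated directly from the relations above; a small Python sketch, with illustrative magnitudes and angle B:

```python
import math

# Resultant of two coplanar velocities v1 and v2 separated by angle B (law of cosines),
# and the angles the resultant makes with each component.  Values are illustrative.

def resultant(v1, v2, B_deg):
    B = math.radians(B_deg)
    vr = math.sqrt(v1**2 + v2**2 + 2 * v1 * v2 * math.cos(B))
    ang_from_v2 = math.degrees(math.asin(v1 / vr * math.sin(B)))  # angle bOc
    ang_from_v1 = math.degrees(math.asin(v2 / vr * math.sin(B)))  # angle cOa
    return vr, ang_from_v1, ang_from_v2

vr, a1, a2 = resultant(30.0, 40.0, 90.0)   # right angle: vr = sqrt(30^2 + 40^2) = 50
print(vr, a1, a2)
```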

Resultant Acceleration  Accelerations may be combined and resolved in the same manner as velocities, but in this case the lines or vectors represent accelerations instead of velocities. If the acceleration had components of magnitude a1 and a2, the magnitude of the resultant acceleration would be a = √(a1² + a2² + 2a1a2 cos B), where B is the angle between the vectors a1 and a2.

Curvilinear Motion in a Plane

The linear velocity v = ds/dt of a point in curvilinear motion is the same as for rectilinear motion. Its direction is tangent to the path of the point. In Fig. 3.1.48a, let P1P2P3 be the path of a moving point and V1, V2, V3 represent its velocity at points P1, P2, P3, respectively. If O is taken as a pole (Fig. 3.1.48b) and vectors V1, V2, V3 representing the velocities of the point at P1, P2, and P3 are drawn, the curve connecting the terminal points of these vectors is known as the hodograph of the motion. This velocity diagram is applicable only to motions all in the same plane. Acceleration  Tangents to the curve (Fig. 3.1.48b) indicate the directions of the instantaneous velocities. The direction of the tangents does not, as a rule, coincide with the direction of the accelerations as represented by tangents to the path. If the acceleration a at some point

The acceleration Δv/Δt can be resolved into normal and tangential components leading to an = Δvn/Δt, normal to the path, and at = Δvt/Δt, tangential to the path.

Velocity and acceleration may be expressed in polar coordinates such that v = √(vr² + vθ²) and a = √(ar² + aθ²). Figure 3.1.50 may be used to explain the r and θ coordinates.
EXAMPLE. At P1 the velocity is v1, with components v1r in the r direction and v1θ in the θ direction. At P2 the velocity is v2, with components v2r in the r direction and v2θ in the θ direction. It is evident that the difference in velocities v2 − v1 = Δv will have components Δvr and Δvθ, giving rise to accelerations ar and aθ in a time interval Δt.

In polar coordinates, vr = dr/dt, ar = d²r/dt² − r(dθ/dt)², vθ = r(dθ/dt), and aθ = r(d²θ/dt²) + 2(dr/dt)(dθ/dt). If a point P moves on a circular path of radius r with an angular velocity of ω and an angular acceleration of α, the linear velocity of the point P is v = ωr and the two components of the linear acceleration are an = v²/r = ω²r = vω and at = αr. If the angular velocity is constant, the point P travels equal circular paths in equal intervals of time. The projected displacement, velocity, and acceleration of the point P on the x and y axes are sinusoidal functions of time, and the motion is said to be harmonic motion. Angular velocity is usually expressed in radians per second, and when the number (N) of revolutions traversed per minute (r/min) by the point P is known, the angular velocity of the radius r is ω = 2πN/60 = 0.10472N.
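A short sketch of these circular-motion relations, converting N r/min to ω and evaluating v, an, and at (illustrative values):

```python
import math

# Point on a circular path: convert N r/min to angular velocity and evaluate the
# linear velocity and acceleration components given above.

def circular_motion(N_rpm, r, alpha=0.0):
    omega = 2 * math.pi * N_rpm / 60          # = 0.10472*N, rad/s
    v = omega * r                              # linear velocity
    a_n = omega**2 * r                         # normal (centripetal) component, = v*omega
    a_t = alpha * r                            # tangential component
    return omega, v, a_n, a_t

omega, v, a_n, a_t = circular_motion(300, 0.5)   # 300 r/min on a 0.5-m radius
print(f"omega = {omega:.3f} rad/s, v = {v:.3f} m/s, a_n = {a_n:.1f} m/s^2")
```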



Fig. 3.1.50

Fig. 3.1.51

In Fig. 3.1.51, let the angular velocity of the line OP be a constant ω. Let the point P start at X′ and move to P in time t. Then the angle θ = ωt. If OP = r, OA = s = r cos ωt. The velocity v of the point A on the x axis will equal ds/dt = −ωr sin ωt, and the acceleration a = dv/dt = −ω²r cos ωt. The period τ is the time necessary for the point P to complete one cycle of motion, τ = 2π/ω, and it is also equal to the time necessary for A to complete a full cycle on the x axis from X′ to X and return.

line and relating the motion of all other parts of the rigid body to these motions. If a rigid body moves so that a straight line connecting any two of its particles remains parallel to its original position at all times, it is said to have translation. In rectilinear translation, all points move in straight lines. In curvilinear translation, all points move on congruent curves but without rotation. Rotation is defined as angular motion about an axis, which may or may not be fixed. Rigid body motion in which the paths of all particles lie on parallel planes is called plane motion.

Curvilinear Motion in Space

Angular Motion

If three dimensions are used, velocities and accelerations may be resolved into components not in the same plane by what is known as the parallelepiped of motion. Three coordinate systems are widely used: cartesian, cylindrical, and spherical. In cartesian coordinates, v = √(vx² + vy² + vz²) and a = √(ax² + ay² + az²). In cylindrical coordinates, the radius vector R of displacement lies in the rz plane, which is at an angle θ with the xz plane. Referring to (a) of Fig. 3.1.52, the θ coordinate is perpendicular to the rz plane. In this system v = √(vr² + vθ² + vz²) and a = √(ar² + aθ² + az²), where vr = dr/dt, ar = d²r/dt² − r(dθ/dt)², vθ = r(dθ/dt), and aθ = r(d²θ/dt²) + 2(dr/dt)(dθ/dt). In spherical coordinates, the three coordinates are the R coordinate, the θ coordinate, and the φ coordinate as in (b) of Fig. 3.1.52. The velocity and acceleration are v = √(vR² + vθ² + vφ²)

Angular displacement is the change in angular position of a given line as

measured from a convenient reference line. In Fig. 3.1.53, consider the motion of the line AB as it moves from its original position A′B′. The angle between lines AB and A′B′ is the angular displacement of line AB, symbolized as θ. It is a directed quantity and is a vector. The usual notation used to designate angular displacement is a vector normal to

and a = √(aR² + aθ² + aφ²), where vR = dR/dt, vφ = R(dφ/dt), vθ = R cos φ (dθ/dt), aR = d²R/dt² − R(dφ/dt)² − R cos²φ (dθ/dt)², aφ = R(d²φ/dt²) + R cos φ sin φ (dθ/dt)² + 2(dR/dt)(dφ/dt), and aθ = R cos φ (d²θ/dt²) + 2[(dR/dt) cos φ − R sin φ (dφ/dt)] dθ/dt.

Fig. 3.1.53

Fig. 3.1.52 Motion of Rigid Bodies

A body is said to be rigid when the distances between all its particles are invariable. Theoretically, rigid bodies do not exist, but materials used in engineering are rigid under most practical working conditions. The motion of a rigid body can be completely described by knowing the angular motion of a line on the rigid body and the linear motion of a point on this

the plane in which the angular displacement occurs. The length of the vector is proportional to the magnitude of the angular displacement. For a rigid body moving in three dimensions, the line AB may have angular motion about any three orthogonal axes. For example, the angular displacement can be described in cartesian coordinates as θ = θx + θy + θz, where θ = √(θx² + θy² + θz²). Angular velocity is defined as the time rate of change of angular displacement, ω = dθ/dt. Angular velocity may also have components about any three orthogonal axes. Angular acceleration is defined as the time rate of change of angular velocity, α = dω/dt = d²θ/dt². Angular acceleration may also have components about any three orthogonal axes. The kinematic equations of angular motion of a line are analogous to those for the motion of a point. In referring to Table 3.1.1, ω = dθ/dt, α = dω/dt, and α dθ = ω dω. Substitute θ for s, ω for v, and α for a.

Motion of a Rigid Body in a Plane

Plane motion is the motion of a rigid body such that the paths of all

particles of that rigid body lie on parallel planes.



Table 3.1.1

Given s = f(t):     v = ds/dt;    a = d²s/dt²
Given v = f(t):     s = s0 + ∫ v dt (from t0 to t);    a = dv/dt
Given a = f(t):     v = v0 + ∫ a dt (from t0 to t);    s = s0 + ∫∫ a dt dt (from t0 to t)
Given a = f(s, v):  a = v dv/ds;    ∫ v dv (from v0 to v) = ∫ a ds (from s0 to s);    s = s0 + ∫ (v/a) dv (from v0 to v)

Instantaneous Axis  When the axis about which any body may be considered to rotate changes its position, any one position is known as an instantaneous axis, and the line through all positions of the instantaneous axis as the centrode. When the velocity of two points in the same plane of a rigid body having plane motion is known, the instantaneous axis for the body will be at the intersection of the lines drawn from each point and perpendicular to its velocity. See Fig. 3.1.54, in which A and B are two points on the rod AB, v1 and v2 representing their velocities. O is the instantaneous axis for AB; therefore point C will have its velocity along a line perpendicular to OC. Linear velocities of points in a body rotating about an instantaneous axis are proportional to their distances from this axis. In Fig. 3.1.54, v1 : v2 : v3 = AO : OB : OC. If the velocities of A and B were parallel, the lines OA and OB would also be parallel and there would be no instantaneous axis. The motion of the rod would be translation, and all points would be moving with the same velocity in parallel straight lines. If a body has plane motion, the components of the velocities of any two points in the body along the straight line joining them must be equal. Ax

must be equal to By and Cz in Fig. 3.1.54.
EXAMPLE. In Fig. 3.1.55a, the velocities of points A and B are known—they are v1 and v2, respectively. To find the instantaneous axis of the body, perpendiculars AO and BO are drawn. O, at the intersection of the perpendiculars, is the instantaneous axis of the body. To find the velocity of any other point, like C, line OC is drawn and v3 erected perpendicular to OC with magnitude equal to v1(CO/AO). The angular velocity of the body will be ω = v1/AO or v2/BO or v3/CO. The instantaneous axis of a wheel rolling on a rack without slipping (Fig. 3.1.55b) lies at the point of contact O, which has zero linear velocity. All points of the wheel will have velocities perpendicular to radii to O and proportional in magnitudes to their respective distances from O.

Another way to describe the plane motion of a rigid body is with the use of relative motion. In Fig. 3.1.56 the velocity of point A is v1. The angular velocity of the line AB is ωAB. The velocity of B relative to A is ωAB × rAB. Point B is considered to be moving on a circular path around A as a center. The direction of the relative velocity of B to A would be tangent to the circular path in the direction that ωAB would make B move. The velocity of B is the vector sum of the velocity of A added to the velocity of B relative to A, vB = vA + vB/A. The acceleration of B is the vector sum of the acceleration of A added to the acceleration of B relative to A, aB = aA + aB/A. Care must be taken to include the complete relative acceleration of B to A. If B is considered to move on a circular path about A, with a velocity relative to A, it will have an acceleration relative to A that has both normal and tangential components: aB/A = (aB/A)n + (aB/A)t.

Fig. 3.1.56

If B is a point on a path which lies on the same rigid body as the line AB, a particle P traveling on the path will have a velocity vP at the instant P passes over point B such that vP = vA + vB/A + vP/B, where the velocity vP/B is the velocity of P relative to point B. The particle P will have an acceleration aP at the instant P passes over the point B such that aP = aA + aB/A + aP/B + 2ωAB × vP/B. The term aP/B is the acceleration of P relative to the path at point B. The last term 2ωAB × vP/B is frequently referred to as the Coriolis acceleration. The direction is always normal to the path in a sense which would rotate the head of the vector vP/B about its tail in the direction of the angular velocity of the rigid body AB.
EXAMPLE. In Fig. 3.1.57, arm AB is rotating counterclockwise about A with a constant angular velocity of 38 r/min or 4 rad/s, and the slider moves outward with a velocity of 10 ft/s (3.05 m/s). At an instant when the slider P is 30 in (0.76 m) from the center A, the acceleration of the slider will have two components. One component is the normal acceleration directed toward the center A. Its magnitude is ω²r = 4²(30/12) = 40 ft/s² [ω²r = 4²(0.76) = 12.2 m/s²]. The second is the Coriolis acceleration directed normal to the arm AB, upward and to the left. Its magnitude is 2ωv = 2(4)(10) = 80 ft/s² [2ωv = 2(4)(3.05) = 24.4 m/s²].
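A quick check of the two acceleration components in this example:

```python
# Rotating-arm example: normal component omega^2*r toward A and Coriolis
# component 2*omega*v normal to the arm, in both USCS and SI units.

omega = 4.0                 # rad/s (approximately 38 r/min)
r_ft, r_m = 30 / 12, 0.76   # slider position
v_ft, v_m = 10.0, 3.05      # slider velocity outward along the arm

a_normal_ft = omega**2 * r_ft        # 40 ft/s^2, directed toward the center A
a_normal_m = omega**2 * r_m          # 12.2 m/s^2
a_coriolis_ft = 2 * omega * v_ft     # 80 ft/s^2, normal to the arm
a_coriolis_m = 2 * omega * v_m       # 24.4 m/s^2

print(a_normal_ft, a_coriolis_ft)                         # 40.0 80.0
print(round(a_normal_m, 1), round(a_coriolis_m, 1))       # 12.2 24.4
```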

Fig. 3.1.57

Fig. 3.1.54

Fig. 3.1.55

General Motion of a Rigid Body

The general motion of a point moving in a coordinate system which is itself in motion is complicated and can best be summarized by using



vector notation. Referring to Fig. 3.1.58, let the point P be displaced a vector distance R from the origin O of a moving reference frame x, y, z which has a velocity vo and an acceleration ao. If point P has a velocity and an acceleration relative to the moving reference frame, let these be vr and ar. The angular velocity of the moving reference frame is ω, and

Fig. 3.1.58

plane is (3/5)(90) − (4/5)(36) − 9.36 = 15.84 lbf (70.46 N) downward. F = ma = (W/g)a = (90/g)a; therefore, a = 0.176g = 5.66 ft/s² (1.725 m/s²). In SI units, F = ma; 70.46 = 40.8a; and a = 1.725 m/s². The body is acted upon by

constant forces and starts from rest; therefore, v = ∫ a dt (from 0 to 5 s), and at the end of 5 s the velocity would be 28.35 ft/s (8.91 m/s).
EXAMPLE 2. The force with which a rope acts on a body is equal and opposite to the force with which the body acts on the rope, and each is equal to the tension in the rope. In Fig. 3.1.60a, neglecting the weight of the pulley and the rope, the tension in the cord must be the force of 27 lbf. For the 18-lb mass, the unbalanced force is 27 − 18 = 9 lbf in the upward direction, i.e., 27 − 18 = (18/g)a, and a = 16.1 ft/s² upward. In Fig. 3.1.60b the 27-lb force is replaced by a 27-lb mass. The unbalanced force is still 27 − 18 = 9 lbf, but it now acts on two masses so that 27 − 18 = (45/g)a and a = 6.44 ft/s². The 18-lb mass is accelerated upward, and the 27-lb mass is accelerated downward. The tension in the rope is equal to 18 lbf plus the unbalanced force necessary to give it an upward acceleration of g/5, or T = 18 + (18/g)(g/5) = 21.6 lbf. The tension is also equal to 27 lbf less the unbalanced force necessary to give it a downward acceleration of g/5, or T = 27 − (27/g)(g/5) = 21.6 lbf.
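A quick check of Example 2 (Fig. 3.1.60b), treating the weights in lbf and g = 32.17 ft/s²:

```python
# An 18-lb mass and a 27-lb mass connected over a weightless, frictionless pulley.

g = 32.17
w1, w2 = 18.0, 27.0

a = (w2 - w1) / ((w1 + w2) / g)          # unbalanced force acting on both masses
T = w1 + (w1 / g) * a                    # tension from the 18-lb side
T_check = w2 - (w2 / g) * a              # tension from the 27-lb side

print(round(a, 2), round(T, 1), round(T_check, 1))   # 6.43 21.6 21.6
```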

the origin of the moving reference frame is displaced a vector distance R1 from the origin of a primary (fixed) reference frame X, Y, Z. The velocity and acceleration of P are vP = vo + ω × R + vr and aP = ao + (dω/dt) × R + ω × (ω × R) + 2ω × vr + ar.
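These two vector relations translate directly into code; the following sketch uses NumPy cross products with illustrative vectors (not taken from the text):

```python
import numpy as np

# Velocity and acceleration of a point P observed from a fixed frame when its motion
# is described in a rotating, translating frame:
#   vP = vo + w x R + vr
#   aP = ao + (dw/dt) x R + w x (w x R) + 2 w x vr + ar

def absolute_motion(vo, ao, w, w_dot, R, vr, ar):
    vo, ao, w, w_dot, R, vr, ar = map(np.asarray, (vo, ao, w, w_dot, R, vr, ar))
    vP = vo + np.cross(w, R) + vr
    aP = ao + np.cross(w_dot, R) + np.cross(w, np.cross(w, R)) + 2 * np.cross(w, vr) + ar
    return vP, aP

vP, aP = absolute_motion(vo=[1, 0, 0], ao=[0, 0, 0], w=[0, 0, 2], w_dot=[0, 0, 0],
                         R=[0.5, 0, 0], vr=[0, 0.3, 0], ar=[0, 0, 0])
print(vP, aP)
```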

DYNAMICS OF PARTICLES

Consider a particle of mass m subjected to the action of forces F1, F2, F3, . . . , whose vector resultant is R = ΣF. According to Newton's first law of motion, if R = 0, the body is acted on by a balanced force system, and it will either remain at rest or move uniformly in a straight line. If R ≠ 0, Newton's second law of motion states that the body will accelerate in the direction of and proportional to the magnitude of the resultant R. This may be expressed as ΣF = ma. If the resultant of the force system has components in the x, y, and z directions, the resultant acceleration will have proportional components in the x, y, and z directions so that ΣFx = max, ΣFy = may, and ΣFz = maz. If the resultant of the force system varies with time, the acceleration will also vary with time. In rectilinear motion, the acceleration and the direction of the unbalanced force must be in the direction of motion. Forces must be in balance and the acceleration equal to zero in any direction other than the direction of motion.
EXAMPLE 1. The body in Fig. 3.1.59 has a mass of 90 lbm (40.8 kg) and is subjected to an external horizontal force of 36 lbf (160 N) applied in the direction shown. The coefficient of friction between the body and the inclined plane is 0.1. Required, the velocity of the body at the end of 5 s, if it starts from rest.



Fig. 3.1.60

In SI units, in Fig. 3.1.60a, the unbalanced force is 120 − 80 = 40 N, in the upward direction, i.e., 120 − 80 = 8.16a, and a = 4.9 m/s² (16.1 ft/s²). In Fig. 3.1.60b the unbalanced force is still 40 N, but it now acts on the two masses so that 120 − 80 = 20.4a and a = 1.96 m/s² (6.44 ft/s²). The tension in the rope is the weight of the 8.16-kg mass in newtons plus the unbalanced force necessary to give it an upward acceleration of 1.96 m/s², T = 9.807(8.16) + (8.16)(1.96) = 96 N (21.6 lbf).

General Formulas for the Motion of a Body under the Action of a Constant Unbalanced Force

Let s = distance, ft; a = acceleration, ft/s²; v = velocity, ft/s; v0 = initial velocity, ft/s; h = height, ft; F = force; m = mass; w = weight; g = acceleration due to gravity.

Initial velocity = 0:
F = ma = (w/g)a
v = at
s = ½at² = ½vt
v = √(2as) = √(2gh) (falling freely from rest)

Fig. 3.1.59

First determine all the forces acting externally on the body. These are the applied force F = 36 lbf (160 N), the weight W = 90 lbf (400 N), and the force with which the plane reacts on the body. The latter force can be resolved into component forces, one normal and one parallel to the surface of the plane. Motion will be downward along the plane since a static analysis will show that the body will slide downward unless the static coefficient of friction is greater than 0.269. In the direction normal to the surface of the plane, the forces must be balanced. The normal force is (3/5)(36) + (4/5)(90) = 93.6 lbf (416 N). The frictional force is 93.6 × 0.1 = 9.36 lbf (41.6 N). The unbalanced force acting on the body along the

Initial velocity = v0:
F = ma = (w/g)a
v = v0 + at
s = v0t + ½at² = ½v0t + ½vt

If a body is to be moved in a straight line by a force, the line of action of this force must pass through its center of gravity.

General Rule for the Solution of Problems When the Forces Are Constant in Magnitude and Direction

Resolve all the forces acting on the body into two components, one in the direction of the body’s motion and one at right angles to it. Add the


components in the direction of the body's motion algebraically and find the unbalanced force, if any exists. In curvilinear motion, a particle moves along a curved path, and the resultant of the unbalanced force system may have components in directions other than the direction of motion. The acceleration in any given direction is proportional to the component of the resultant in that direction. It is common to utilize orthogonal coordinate systems such as cartesian coordinates, polar coordinates, and normal and tangential coordinates in analyzing forces and accelerations.
EXAMPLE. A conical pendulum consists of a weight suspended from a cord or light rod and made to rotate in a horizontal circle about a vertical axis with a constant angular velocity of N r/min. For any given constant speed of rotation, the angle θ, the radius r, and the height h will have fixed values. Looking at Fig. 3.1.61, we see that the forces in the vertical direction must be balanced, T cos θ = w. The forces in the direction normal to the circular path of rotation are unbalanced such that T sin θ = (w/g)an = (w/g)ω²r. Substituting r = l sin θ in this last equation gives the value of the tension in the cord, T = (w/g)lω². Dividing the second equation by the first and substituting tan θ = r/h yields the additional relation h = g/ω².
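A small sketch of the conical-pendulum relations for an assumed speed, cord length, and weight (illustrative values only):

```python
import math

# Conical pendulum: h = g/omega^2, cos(theta) = h/l, r = l*sin(theta),
# T = (w/g)*l*omega^2.  Valid when omega^2 >= g/l so that h <= l.

def conical_pendulum(N_rpm, l, w, g=32.17):
    omega = 2 * math.pi * N_rpm / 60
    h = g / omega**2                       # height of the apex above the circle
    theta = math.acos(min(h / l, 1.0))
    r = l * math.sin(theta)
    T = (w / g) * l * omega**2             # cord tension
    return omega, h, math.degrees(theta), r, T

print(conical_pendulum(N_rpm=60, l=2.0, w=10.0))
```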


to the body to constantly deviate it toward the axis. This deviating force is known as centripetal force. The equal and opposite resistance offered by the body to the connection is called the centrifugal force. The acceleration toward the axis necessary to keep a particle moving in a circle about that axis is v²/r; therefore, the force necessary is ma = mv²/r = wv²/gr = wπ²N²r/900g, where N = r/min. This force is constantly directed toward the axis. The centrifugal force of a solid body revolving about an axis is the same as if the whole mass of the body were concentrated at its center of gravity.

Centrifugal force = wv²/gr = mv²/r = wω²r/g, where w and m are the weight and mass of the whole body, r is the distance from the axis about which the body is rotating to the center of gravity of the body, ω the angular velocity of the body about the axis in rad/s, and v the linear velocity of the center of gravity of the body.

Balancing

A rotating body is said to be in standing balance when its center of gravity coincides with the axis upon which it revolves. Standing balance may be obtained by resting the axle carrying the body upon two horizontal plane surfaces, as in Fig. 3.1.63. If the center of gravity of the wheel A lies on the axis of the shaft B, there will be no movement, but if the center of gravity does not lie on the axis of the shaft, the shaft will roll until the center of gravity of the wheel comes directly under the

Fig. 3.1.63

Fig. 3.1.61

An unresisted projectile has a motion compounded of the vertical motion of a falling body, and of the horizontal motion due to the horizontal component of the velocity of projection. In Fig. 3.1.62 the only force acting after the projectile starts is gravity, which causes an acceleration g downward. The horizontal component of the original velocity v0 is not changed by gravity. The projectile will rise until the velocity

axis of the shaft. The center of gravity may be brought to the axis of the shaft by adding or taking away weight at proper points on the diameter passing through the center of gravity and the center of the shaft. Weights may be added to or subtracted from any part of the wheel so long as its center of gravity is brought to the center of the shaft. A rotating body may be in standing balance and not in dynamic balance. In Fig. 3.1.64, AA and BB are two disks whose centers of gravity are at o and p, respectively. The shaft and the disks are in standing balance if the disks are of the same weight and the distances of o and p from the center of the shaft are equal, and o and p lie in the same axial plane but on opposite sides of the shaft. Let the weight of each disk be w and the distances of o and p from the center of the shaft each be equal to r.

Fig. 3.1.62

given to it by gravity is equal to the vertical component of the starting velocity v0, and the equation v0 sin θ = gt gives the time t required to reach the highest point in the curve. The same time will be taken in falling if the surface XX is level, and the projectile will therefore be in flight 2t s. The distance s = (v0 cos θ)(2t), and the maximum height of ascent h = (v0 sin θ)²/2g. The expressions for the coordinates of any point on the path of the projectile are x = (v0 cos θ)t and y = (v0 sin θ)t − ½gt², giving y = x tan θ − gx²/(2v0² cos²θ) as the equation of the path. The radius of curvature at the highest point may be found by using the general expression v² = gr and solving for r, v being taken equal to v0 cos θ.
Simple Pendulum  The period of oscillation is τ = 2π√(l/g), where l is the length of the pendulum and the length of the swing is not great compared to l.
Centrifugal and Centripetal Forces  When a body revolves about an axis, some connection must exist capable of applying force enough

Fig. 3.1.64

The force exerted on the shaft by AA is equal to wω²r/g, where ω is the angular velocity of the shaft. Also, the force exerted on the shaft by BB = wω²r/g. These two equal and opposite parallel forces act at a distance x apart and constitute a couple, with a moment tending to rotate the shaft, as shown by the arrows, of (wω²r/g)x. A couple cannot be balanced by a single force; so two forces at least must be added to or subtracted from the system to get dynamic balance.
Systems of Particles  The principles of motion for a single particle can be extended to cover a system of particles. In this case, the vector resultant of all external forces acting on the system of particles must equal the total mass of the system times the acceleration of the mass center, and the direction of the resultant must be the direction of the acceleration of the mass center. This is the principle of motion of the mass center.



Rotation of Solid Bodies in a Plane about Fixed Axes

For a rigid body revolving in a plane about a fixed axis, the resultant moment about that axis must be equal to the product of the moment of inertia (about that axis) and the angular acceleration, ΣM0 = I0α. This is a

general statement which includes the particular case of rotation about an axis that passes through the center of gravity. Rotation about an Axis Passing through the Center of Gravity

The rotation of a body about its center of gravity can only be caused or changed by a couple. See Fig. 3.1.65. If a single force F is applied to the wheel, the axis immediately acts on the wheel with an equal force to prevent translation, and the result is a couple (moment Fr) acting on the body and causing rotation about its center of gravity.

Center of Percussion  The distance from the axis of suspension to the center of percussion is q0 = I/(mrG), where I = moment of inertia of the body about its axis of suspension and rG is the distance from the axis of suspension to the center of gravity of the body.
EXAMPLES. 1. Find the center of percussion of the homogeneous rod (Fig. 3.1.67) of length L and mass m, suspended at XX.
q0 = I/(mrG)
At t2: Q2 = 0.61 × 0.08727 √(2 × 32.17 × 8) = 1.208 ft³/s, and t2 − t1 = 205.4 s.

WATER HAMMER

Equations  Water hammer is the series of shocks, sounding like hammer blows, produced by suddenly reducing the flow of a fluid in a pipe. Consider a fluid flowing frictionlessly in a rigid pipe of uniform area A with a velocity V. The pipe has a length L, an inlet pressure p1, and a pressure p2 at L. At length L, there is a valve which can suddenly reduce the velocity at L to V − ΔV. The equivalent mass rate of flow of a pressure wave traveling at sonic velocity c is M = ρAc. From the impulse-momentum equation, M(V2 − V1) = p2A2 − p1A1; for this application, (ρAc)(V − ΔV − V) = p2A − p1A, or the increase in pressure Δp = −ρcΔV. When the liquid is flowing in an elastic pipe, the wave velocity becomes

c = √{Es / (ρ[1 + (Es/Ep)(Do + Di)/(Do − Di)])}

1. For the rigid pipe,

Δp = −ρcΔV = −(1.937)(4,860)(−10) = 94,138 lbf/ft² = 94,138/144 = 653.8 lbf/in² (4.507 × 10⁶ N/m²)

2. For the elastic pipe,

c = √{(319,000 × 144) / (1.937[1 + (319,000/28.5 × 10⁶)(3.500 + 3.067)/(3.500 − 3.067)])} = 4,504 ft/s

Δp = −(1.937)(4,504)(−10) = 87,242 lbf/ft² = 605.9 lbf/in² (4.177 × 10⁶ N/m²)
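A numerical check of the two pressure-rise calculations above (rigid and elastic pipe), using the example's data for water:

```python
import math

# Water-hammer pressure rise for a sudden velocity reduction dV, using dp = rho*c*dV,
# with the elastic-pipe correction for the wave speed shown above.

rho = 1.937            # slug/ft^3 (water)
dV = 10.0              # ft/s velocity reduction

# Rigid pipe, c = 4,860 ft/s:
dp_rigid = rho * 4860 * dV / 144                      # lbf/in^2

# Elastic steel pipe: Es = 319,000 lb/in^2 (water), Ep = 28.5e6 lb/in^2,
# Do = 3.500 in, Di = 3.067 in:
Es, Ep, Do, Di = 319_000.0, 28.5e6, 3.500, 3.067
c = math.sqrt(Es * 144 / (rho * (1 + (Es / Ep) * (Do + Di) / (Do - Di))))
dp_elastic = rho * c * dV / 144

print(round(dp_rigid, 1), round(c), round(dp_elastic, 1))
# about 653.7, 4503, 605.7 -- compare 653.8, 4,504, and 605.9 in the example
```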


3. Maximum time for closure: t = 2L/c = 2 × 200/4,860 = 0.0823 s, or less than 1/10 s.

COMPUTATIONAL FLUID DYNAMICS (CFD)

The partial differential equations of fluid motion are very difficult to solve. CFD methods are utilized to provide discrete approximations for the solution of those equations. A brief introduction to CFD is included in Section 20.6. The references therein will guide the reader further.

3.4 VIBRATION

by Leonard Meirovitch

REFERENCES: Harris, "Shock and Vibration Handbook," 3d ed., McGraw-Hill. Thomson, "Theory of Vibration with Applications," Prentice Hall. Meirovitch, "Fundamentals of Vibration," McGraw-Hill. Meirovitch, "Principles and Techniques of Vibrations," Prentice-Hall.

SINGLE-DEGREE-OF-FREEDOM SYSTEMS

Discrete System Components  A system is defined as an aggregation of components acting together as one entity. The components of a vibratory mechanical system are of three different types, and they relate forces to displacements, velocities, and accelerations. The component relating forces to displacements is known as a spring (Fig. 3.4.1a). For a linear spring the force Fs is proportional to the elongation δ = x2 − x1, or

Fs = kδ = k(x2 − x1)

(3.4.1)

where k represents the spring constant, or the spring stiffness, and x 1 and x 2 are the displacements of the end points. The component relating

forces to velocities is called a viscous damper or a dashpot (Fig. 3.4.1b). It consists of a piston fitting loosely in a cylinder filled with liquid so that the liquid can flow around the piston when it moves relative to the cylinder. The relation between the damper force and the velocity of the piston relative to the cylinder is

Fd = c(ẋ2 − ẋ1)    (3.4.2)

in which c is the coefficient of viscous damping; note that dots denote derivatives with respect to time. Finally, the relation between forces and accelerations is given by Newton's second law of motion:

Fm = mẍ    (3.4.3)

where m is the mass (Fig. 3.4.1c). The spring constant k, coefficient of viscous damping c, and mass m represent physical properties of the components and are the system parameters. By implication, these properties are concentrated at points,



thus they are lumped, or discrete, parameters. Note that springs and dampers are assumed to be massless and masses are assumed to be rigid. Springs can be arranged in parallel and in series. Then, the proportionality constant between the forces and the end points is known as an

Table 3.4.1  Equivalent Spring Constants

Fig. 3.4.1

equivalent spring constant and is denoted by keq, as shown in Table 3.4.1.

Certain elastic components, although distributed over a given line segment, can be regarded as lumped with an equivalent spring constant given by keq = F/δ, where δ is the deflection at the point of application of the force F. A similar relation can be given for springs in torsion. Table 3.4.1 lists the equivalent spring constants for a variety of components.
Equation of Motion  The dynamic behavior of many engineering systems can be approximated with good accuracy by the mass-damper-spring model shown in Fig. 3.4.2. Using Newton's second law in conjunction with Eqs. (3.4.1)–(3.4.3) and measuring the displacement x(t) from the static equilibrium position, we obtain the differential equation of motion

mẍ(t) + cẋ(t) + kx(t) = F(t)    (3.4.4)

which is subject to the initial conditions x(0) = x0, ẋ(0) = v0, where x0 and v0 are the initial displacement and initial velocity, respectively. Equation (3.4.4) is in terms of a single coordinate, namely x(t); the system of Fig. 3.4.2 is therefore said to be a single-degree-of-freedom system.
Free Vibration of Undamped Systems  Assuming zero damping and external forces and dividing Eq. (3.4.4) through by m, we obtain

ẍ + ωn²x = 0    ωn = √(k/m)    (3.4.5)

In this case, the vibration is caused by the initial excitations alone. The solution of Eq. (3.4.5) is

x(t) = A cos (ωnt − φ)    (3.4.6)

which represents simple sinusoidal, or simple harmonic, oscillation with amplitude A, phase angle φ, and frequency

ωn = √(k/m)    rad/s    (3.4.7)

Systems described by equations of the type (3.4.5) are called harmonic oscillators. Because the frequency of oscillation represents an inherent property of the system, independent of the initial excitation, ωn is called the natural frequency. On the other hand, the amplitude and phase angle do depend on the initial displacement and velocity, as follows:

A = √(x0² + (v0/ωn)²)    φ = tan⁻¹(v0/x0ωn)    (3.4.8)

The time necessary to complete one cycle of motion defines the period

T = 2π/ωn    seconds    (3.4.9)

The reciprocal of the period provides another definition of the natural frequency, namely,

fn = 1/T = ωn/2π    Hz    (3.4.10)

where Hz denotes hertz [1 Hz = 1 cycle per second (cps)]. A large variety of vibratory systems behave like harmonic oscillators, many of them when restricted to small amplitudes. Table 3.4.2 shows a variety of harmonic oscillators together with their respective natural frequency.
Free Vibration of Damped Systems  Let F(t) = 0 and divide through by m. Then Eq. (3.4.4) reduces to

ẍ(t) + 2ζωnẋ(t) + ωn²x(t) = 0    (3.4.11)

where

ζ = c/2mωn    (3.4.12)

is the damping factor, a nondimensional quantity. The nature of the motion depends on ζ. The most important case is that in which 0 < ζ < 1.

Fig. 3.4.2
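A short sketch of Eqs. (3.4.5) to (3.4.10) for an undamped system with assumed mass, stiffness, and initial conditions (illustrative values):

```python
import math

# Undamped single-degree-of-freedom free vibration: natural frequency, period,
# and the response amplitude/phase from the initial conditions x0 and v0.

def free_vibration(m, k, x0, v0):
    wn = math.sqrt(k / m)                      # rad/s, Eq. (3.4.5)
    T = 2 * math.pi / wn                       # s, Eq. (3.4.9)
    fn = 1 / T                                 # Hz, Eq. (3.4.10)
    A = math.sqrt(x0**2 + (v0 / wn)**2)        # amplitude, Eq. (3.4.8)
    phi = math.atan2(v0, x0 * wn)              # phase angle, Eq. (3.4.8)

    def x(t):                                  # response, Eq. (3.4.6)
        return A * math.cos(wn * t - phi)

    return wn, T, fn, A, phi, x

wn, T, fn, A, phi, x = free_vibration(m=2.0, k=800.0, x0=0.01, v0=0.5)
print(round(wn, 2), round(fn, 2), round(A, 4), round(x(0.0), 4))   # x(0) returns x0
```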

Table 3.4.2  Harmonic Oscillators and Natural Frequencies


In this case, the system is said to be underdamped and the solution of Eq. (3.4.11) is

x(t) = Ae^(−ζωnt) cos (ωdt − φ)    (3.4.13)

where

ωd = (1 − ζ²)^(1/2) ωn    (3.4.14)

is the frequency of damped free vibration and

T = 2π/ωd    (3.4.15)

is the period of damped oscillation. The amplitude and phase angle depend on the initial displacement and velocity, as follows:

A = √(x0² + (ζωnx0 + v0)²/ωd²)    φ = tan⁻¹[(ζωnx0 + v0)/x0ωd]    (3.4.16)

The motion described by Eq. (3.4.13) represents decaying oscillation, where the term Ae^(−ζωnt) can be regarded as a time-dependent amplitude, providing an envelope bounding the harmonic oscillation. When ζ > 1, the solution represents aperiodic decay. The case ζ = 1 represents critical damping, and

cc = 2mωn    (3.4.17)

is the critical damping coefficient, although there is nothing critical about it. It merely represents the borderline between oscillatory decay and aperiodic decay. In fact, cc is the smallest damping coefficient for which the motion is aperiodic. When ζ > 1, the system is said to be overdamped.

Logarithmic Decrement

Quite often the damping factor is not known and must be determined experimentally. In the case in which the system is underdamped, this can be done conveniently by plotting x(t) versus t (Fig. 3.4.3) and measuring the response at two different times

Fig. 3.4.3

separated by a complete period. Let the times be t1 and t1 + T, introduce the notation x(t1) = x1, x(t1 + T) = x2, and use Eq. (3.4.13) to obtain

x1/x2 = [Ae^(−ζωnt1) cos (ωdt1 − φ)] / [Ae^(−ζωn(t1 + T)) cos [ωd(t1 + T) − φ]] = e^(ζωnT)    (3.4.18)

where cos [ωd(t1 + T) − φ] = cos (ωdt1 − φ + 2π) = cos (ωdt1 − φ). Equation (3.4.18) yields the logarithmic decrement

δ = ln (x1/x2) = ζωnT = 2πζ/√(1 − ζ²)    (3.4.19)

which can be used to obtain the damping factor

ζ = δ/√((2π)² + δ²)    (3.4.20)

For small damping, the logarithmic decrement is also small, and the damping factor can be approximated by ζ ≈ δ/2π.
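A small sketch of Eqs. (3.4.19) and (3.4.20), estimating ζ from two assumed peak readings one period apart:

```python
import math

# Damping factor from the logarithmic decrement of two successive peaks.
# Peak values are illustrative.

x1, x2 = 1.00, 0.72                 # measured amplitudes one full period T apart
delta = math.log(x1 / x2)           # logarithmic decrement
zeta = delta / math.sqrt((2 * math.pi)**2 + delta**2)   # exact, Eq. (3.4.20)
zeta_small = delta / (2 * math.pi)                      # small-damping approximation

print(round(delta, 4), round(zeta, 4), round(zeta_small, 4))
```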
h. Then, considering the two linear interpolation functions

φ1(ξ) = ξ    φ2(ξ) = 1 − ξ    (3.4.150)

the displacement at point ξ can be expressed as

v(ξ) = ae−1φ1(ξ) + aeφ2(ξ)    (3.4.151)

where ae−1 and ae are the nodal displacements for element e. Using Eqs. (3.4.143) and changing variables from x to ξ, we can write the element stiffness and mass coefficients

keij = (1/h) ∫ EA (dφi/dξ)(dφj/dξ) dξ    meij = h ∫ m φiφj dξ    (ξ from 0 to 1; i, j = 1, 2)    (3.4.152)

The assembled global stiffness and mass matrices are banded:

K = (EA/h) ×
    |  2  −1   0  ⋯   0     0    |
    | −1   2  −1  ⋯   0     0    |
    |  0  −1   2  ⋯   0     0    |
    |  ⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯   |
    |  0   0   0  ⋯   2    −1    |
    |  0   0   0  ⋯  −1  Kh/EA   |    (3.4.154)

M = (hm/6) ×
    | 4  1  0  ⋯  0  0 |
    | 1  4  1  ⋯  0  0 |
    | 0  1  4  ⋯  0  0 |
    | ⋯⋯⋯⋯⋯⋯⋯⋯ |
    | 0  0  0  ⋯  4  1 |
    | 0  0  0  ⋯  1  2 |

For beams in bending, the displacements consist of one translation and one rotation per node; the interpolation functions are the Hermite cubics

φ1(ξ) = 3ξ² − 2ξ³    φ2(ξ) = ξ² − ξ³    φ3(ξ) = 1 − 3ξ² + 2ξ³    φ4(ξ) = −ξ + 2ξ² − ξ³    (3.4.155)

and the element stiffness and mass coefficients are

keij = (1/h³) ∫ EI (d²φi/dξ²)(d²φj/dξ²) dξ    meij = h ∫ m φiφj dξ    (ξ from 0 to 1; i, j = 1, 2, 3, 4)    (3.4.156)

yielding typical element stiffness and mass matrices

Ke = (EI/h³) ×
    |  12    6  −12    6 |
    |   6    4   −6    2 |
    | −12   −6   12   −6 |
    |   6    2   −6    4 |

Me = (hm/420) ×
    | 156   22    54  −13 |
    |  22    4    13   −3 |
    |  54   13   156  −22 |
    | −13   −3   −22    4 |    (3.4.157)

Fig. 3.4.29
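The element matrices above lend themselves to direct programming. The following sketch (not from the handbook) assembles linear rod elements for a uniform rod fixed at x = 0 and free at x = L and extracts natural frequencies; EA is the axial rigidity and m the mass per unit length, and the standard element forms (EA/h)[[1, −1], [−1, 1]] and (mh/6)[[2, 1], [1, 2]] assemble into the banded K and M shown above (the trailing Kh/EA entry in the handbook's K reflects a boundary spring, which is omitted here).

```python
import numpy as np

# Assemble axial-rod finite elements and solve the eigenproblem K a = w^2 M a
# for a uniform rod fixed at x = 0 and free at x = L.

def rod_frequencies(EA, m, L, n_elem):
    h = L / n_elem
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    me = (m * h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    n_nodes = n_elem + 1
    K = np.zeros((n_nodes, n_nodes))
    M = np.zeros((n_nodes, n_nodes))
    for e in range(n_elem):                 # assemble element by element
        sl = slice(e, e + 2)
        K[sl, sl] += ke
        M[sl, sl] += me
    K, M = K[1:, 1:], M[1:, 1:]             # drop the fixed node at x = 0
    lam = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(lam.real))       # natural frequencies, rad/s

w = rod_frequencies(EA=1.0, m=1.0, L=1.0, n_elem=20)
print(w[:3])     # lowest value approaches pi/2 = 1.5708 for the fixed-free rod
```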

The treatment of two-dimensional problems, such as for membranes and plates, is considerably more complex (see Meirovitch, "Principles and Techniques of Vibration," Prentice-Hall) than for one-dimensional problems. The various steps involved in the finite element method lend themselves to ready computer programming. There are many computer codes available commercially; one widely used is NASTRAN.

VIBRATION-MEASURING INSTRUMENTS

Typical quantities to be measured include acceleration, velocity, displacement, frequency, damping, and stress. Vibration implies motion, so that there is a great deal of interest in transducers capable of measuring motion relative to the inertial space. The basic transducer of many vibration-measuring instruments is a mass-damper-spring enclosed in a case together with a device, generally electrical, for measuring the displacement of the mass relative to the case, as shown in Fig. 3.4.29. The equation for the displacement z(t) of the mass relative to the case is

mz̈(t) + cż(t) + kz(t) = −mÿ(t)    (3.4.158)

where y(t) is the displacement of the case relative to the inertial space. If this displacement is harmonic, y(t) = Y sin ωt, then by analogy with Eq. (3.4.35) the response is

z(t) = Y(ω/ωn)² |G(ω)| sin (ωt − φ) = Z(ω) sin (ωt − φ)    (3.4.159)

so that the magnitude factor Z(ω)/Y = (ω/ωn)²|G(ω)| is as plotted in Fig. 3.4.9 and the phase angle φ is as in Fig. 3.4.4. The plot of Z(ω)/Y

Fig. 3.4.30

versus ω/ωn is shown again in Fig. 3.4.30 on a scale more suited to our purposes. Accelerometers are high-natural-frequency instruments. Their usefulness is limited to a frequency range well below resonance. Indeed, for small values of ω/ωn, Eq. (3.4.159) yields the approximation Z(ω) ≈ (ω/ωn)²Y.
e = ∫ dl/l (from lo to l) = ln (l/lo). In this equation, l is the instantaneous length, while lo is the original length. In terms of the normal strain, the natural strain becomes e = ln (1 + eo). Since it is assumed that the volume remains constant, l/lo = Ao/A, and so the natural stress becomes S = P/A = (P/Ao)(1 + eo). Ao is the original cross-sectional area. If the natural stress is plotted against strain on log-log paper, the graph is very nearly a straight line. The plastic-range relation is thus approximated by S = Keⁿ, where the proportionality factor K and the strain-hardening coefficient n are determined from best fits to experimental data. Values of K and n determined by Low and Garofalo (Proc. Soc. Exp. Stress Anal., vol. IV, no. 2, 1947) are given in Table 5.2.21.

Table 5.2.21  Constants K and n for Sheet Materials

Material                     Treatment                  K, lb/in²    n
0.05%C rimmed steel          Annealed                   77,100       0.261
0.05%C killed steel          Annealed and tempered      73,100       0.234
Decarburized 0.05%C steel    Annealed in wet H2         75,500       0.284
0.05/0.07% phos. low C       Annealed                   93,330       0.156
SAE 4130                     Annealed                   169,400      0.118
SAE 4130                     Normalized and tempered    154,500      0.156
Type 430 stainless           Annealed                   143,000      0.229
Alcoa 24-S                   Annealed                   55,900       0.211
Reynolds R-301               Annealed                   48,450       0.211
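As a brief illustration of the power-law relation S = Keⁿ, the following sketch uses the annealed type 430 stainless constants from Table 5.2.21 and an assumed 10 percent engineering strain:

```python
import math

# Natural (true) strain from engineering strain, and the power-law plastic stress.
# K and n are the Table 5.2.21 values for annealed type 430 stainless.

K, n = 143_000.0, 0.229
eo = 0.10                      # engineering (normal) strain, assumed
e = math.log(1 + eo)           # natural strain, e = ln(1 + eo)
S = K * e**n                   # natural (true) stress, lb/in^2

print(round(e, 4), round(S))   # about 0.0953 and 83,500 lb/in^2
```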

The geometry of Fig. 5.2.73 can be used to arrive at a second approximate relation S = So + (ep − eo) tan θ = So(1 − H/E) + epH, where H = tan θ is a kind of plastic modulus.

Fig. 5.2.73

The deformation theory of plastic flow for the general case of combined stress is developed using the above concepts. Certain additional assumptions involved include: principal plastic-strain directions are the same as principal stress directions; the elastic strain is negligible compared to plastic strain; and the ratios of the three principal shearing strains—(e1 − e2), (e2 − e3), (e3 − e1)—to the principal shearing stresses—(S1 − S2)/2, (S2 − S3)/2, (S3 − S1)/2—are equal. The relations between the principal strains and stresses in terms of the simple tension quantities become

e1 = (e/S)[S1 − (S2 + S3)/2]
e2 = (e/S)[S2 − (S3 + S1)/2]
e3 = (e/S)[S3 − (S1 + S2)/2]

If these equations are added, the plastic-flow theory is expressed

S/e = √{[(S1 − S2)² + (S2 − S3)² + (S3 − S1)²]/2} / √{2(e1² + e2² + e3²)/3}

In the above equation, √{[(S1 − S2)² + (S2 − S3)² + (S3 − S1)²]/2} = Se and √{2(e1² + e2² + e3²)/3} = ee are the effective, or significant, stress and strain, respectively.

EXAMPLE. An annealed, stainless-steel type 430 tank has a 41-in inside diameter and has a wall 0.375 in thick. The ultimate strength of the stainless steel is 85,000 lb/in². Compute the maximum strain as well as the pressure at fracture. The tank constitutes a biaxial stress field where S1 = pd/(2t), S2 = pd/(4t), and S3 = 0. Taking the power stress-strain relation Se = Keeⁿ, the ratio ee/Se = Se^((1−n)/n)/K^(1/n); thus

e1 = [Se^((1−n)/n)/K^(1/n)][S1 − (S2 + S3)/2] = [Se^((1−n)/n)/K^(1/n)](3/4)S1
e2 = [Se^((1−n)/n)/K^(1/n)][S2 − (S3 + S1)/2] = 0
e3 = [Se^((1−n)/n)/K^(1/n)][S3 − (S1 + S2)/2] = −e1

The maximum-shear theory, which is applicable to a ductile material under combined stress, is acceptable here. Thus rupture will occur at S1 − S3 = Su, and

Se = (1/√2)[(S1 − S2)² + (S2 − S3)² + (S3 − S1)²]^(1/2) = (1/√2)[(S1/2)² + (S1/2)² + S1²]^(1/2) = √(3/4) S1 = √(3/4) Su

e1 = (3/4)[√(3/4) Su]^((1−n)/n) Su/K^(1/n) = (3/4)^[(1 + 0.229)/0.458](85,000/143,000)^(1/0.229)
   = 0.0475 in/in (0.0475 cm/cm)

Since

Su = S1 = pd/(2t), then p = 2tSu/d

or

p = (2 × 0.375 × 85,000)/41 = 1,550 lb/in² (109 kgf/cm²)
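A quick check of this example, using the simplified expression for e1 derived above and p = 2tSu/d:

```python
# Type 430 stainless tank: maximum strain from the effective stress-strain relation
# and the bursting pressure from the maximum-shear theory.

Su = 85_000.0        # ultimate strength, lb/in^2
K, n = 143_000.0, 0.229
d, t = 41.0, 0.375   # inside diameter and wall thickness, in

e1 = (3 / 4) ** ((1 + n) / (2 * n)) * (Su / K) ** (1 / n)   # maximum strain
p = 2 * t * Su / d                                          # fracture pressure, lb/in^2

print(round(e1, 4), round(p))   # about 0.0476 and 1555 -- the text rounds to 0.0475 and 1,550
```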

ROTATING DISKS

Rotating circular disks may be of various profiles, of constant or variable thickness, with or without centrally and noncentrally located holes, and with radial, tangential, and shearing stresses. Solution starts with the differential equations of equilibrium and compatibility and the subsequent application of appropriate boundary conditions for the derivation of working-stress equations. If the disk thickness is small compared with the diameter, the variation of stress with thickness can be assumed to be negligible, and symmetry eliminates the shearing stress. In the rotating case, the disk weight is neglected, but its inertia force becomes the body-force term in the equilibrium equations. Thus solved, the stress components in a solid disk become

σr = [(3 + μ)/8] ρω²(R² − r²)

σθ = [(3 + μ)/8] ρω²R² − [(1 + 3μ)/8] ρω²r²


where μ = Poisson's ratio; ρ = mass density, lb·s²/in⁴; ω = angular speed, rad/s; R = outside disk radius; and r = radius to point in question. The largest stresses occur at the center of the solid disk and are

σr = σθ = [(3 + μ)/8] ρω²R²


A disk with a central hole of radius rh (no external forces) is subjected to the following stresses:

σr = [(3 + μ)/8] ρω²(R² + rh² − R²rh²/r² − r²)

σθ = [(3 + μ)/8] ρω²{R² + rh² + R²rh²/r² − [(1 + 3μ)/(3 + μ)]r²}

The maximum radial stress σr|M occurs at r = √(Rrh), and

σr|M = [(3 + μ)/8] ρω²(R − rh)²

The largest tangential stress σθ|M exists at the inner boundary, and

σθ|M = [(3 + μ)/4] ρω²{R² + [(1 − μ)/(3 + μ)]rh²}

As the hole radius rh approaches zero, the tangential stress assumes a value twice that at the center of a rotating solid disk, given above.
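A small helper implementing the hollow-disk formulas above; the density, speed, and radii are illustrative, not from the text:

```python
# Stresses in a rotating disk with a central hole.
# mu = Poisson's ratio, rho = mass density (lb*s^2/in^4), omega in rad/s,
# R = outer radius, rh = hole radius, r = radius of interest (all in inches).

def hollow_disk_stresses(mu, rho, omega, R, rh, r):
    c = (3 + mu) / 8 * rho * omega**2
    s_r = c * (R**2 + rh**2 - R**2 * rh**2 / r**2 - r**2)
    s_t = c * (R**2 + rh**2 + R**2 * rh**2 / r**2 - (1 + 3 * mu) / (3 + mu) * r**2)
    return s_r, s_t

mu, rho, omega, R, rh = 0.3, 7.324e-4, 500.0, 10.0, 2.0
r_max = (R * rh) ** 0.5                                   # location of the peak radial stress
print(hollow_disk_stresses(mu, rho, omega, R, rh, r_max))
print(hollow_disk_stresses(mu, rho, omega, R, rh, rh))    # tangential peak at the bore
```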

5.3 PIPELINE FLEXURE STRESSES
by Harold V. Hawkins

EDITOR'S NOTE: The almost universal availability and utilization of personal computers in engineering practice has led to the development of many competing and complementary forms of piping stress analysis software. Their use is widespread, and individual packaged software allows analysis and design to take into account static and dynamic conditions, restraint conditions, aboveground and buried configurations, etc. The reader is referred to the technical literature for the most suitable and current software available for use in solving the immediate problems at hand. The brief discussion in this section addresses the fundamental concepts entailed and sets forth the solution of simple systems as an exercise in application of the principles.

REFERENCES: Shipman, Design of Steam Piping to Care for Expansion, Trans. ASME, 1929. Wahl, Stresses and Reactions in Expansion Pipe Bends, Trans. ASME, 1927. Hovgaard, The Elastic Deformation of Pipe Bends, Jour. Math. Phys., Nov. 1926, Oct. 1928, and Dec. 1929. M. W. Kellogg Co., "The Design of Piping Systems," Wiley. For details of pipe and pipe fittings see Sec. 8.7.

Nomenclature (see Figs. 5.3.1 and 5.3.2)

M0 = end moment at origin, in·lb (N·m)
M = max moment, in·lb (N·m)
Fx = end reaction at origin in x direction, lb (N)
Fy = end reaction at origin in y direction, lb (N)
Sl = (Mr/I)α = max unit longitudinal flexure stress, lb/in² (N/m²)
St = (Mr/I)β = max unit transverse flexure stress, lb/in² (N/m²)
Ss = (Mr/I)γ = max unit shearing stress, lb/in² (N/m²)
Δx = relative deflection of ends of pipe parallel to x direction caused by either temperature change or support movement, or both, in (m)

Fig. 5.3.1

Fig. 5.3.2

Δy = same as Δx but parallel to y direction, in (m). Note that Δx and Δy are positive if under the change in temperature the end opposite the origin tends to move in a positive x or y direction, respectively.
t = wall thickness of pipe, in (m)
r = mean radius of pipe cross section, in (m)
λ = constant = tR/r²
I = moment of inertia of pipe cross section about pipe centerline, in⁴ (m⁴)
E = modulus of elasticity of pipe at actual working temperature, lb/in² (N/m²)
K = flexibility index of pipe. K = 1 for all straight pipe sections; K = (10 + 12λ²)/(1 + 12λ²) for all curved pipe sections where λ > 0.335 (see Fig. 5.3.3)
α, β, γ = ratios of actual max longitudinal flexure, transverse flexure, and shearing stresses to Mr/I for curved sections of pipe (see Fig. 5.3.3)

Fig. 5.3.3 Flexure constants of initially curved pipes.



A, B, C, F, G, H = constants given by Table 5.3.2
θ = angle of intersection between tangents to direction of pipe at reactions
Δθ = change in θ caused by movements of supports, or by temperature change, or both, rad
ds = an infinitesimal element of length of pipe
s = length of a particular curved section of pipe, in (m)
R = radius of curvature of pipe centerline, in (m)

other, or partly fixed. If the reactions at one end of the pipe are known, the moment distribution in the entire pipe then can be obtained by simple statics. Since an initially curved pipe is more flexible than indicated by its moment of inertia, the constant K is introduced. Its value may be taken from Fig. 5.3.3, or computed from the equation given below. K = 1 for all straight pipe sections, since they act according to the simple flexure theory. In Fig. 5.3.3 are given the flexure constants K, α, β, and γ for initially curved pipes as functions of the quantity λ = tR/r². The flexure constants are derived from the equations.

General Discussion

Under the effect of changes in temperature of the pipeline, or of movement of support reactions (either translation or rotation), or both, the determination of stress distribution in a pipe becomes a statically indeterminate problem. In general the problem may be solved by a slight modification of the standard arch theory: Δx = K ∫ My ds/(EI), Δy = K ∫ Mx ds/(EI), and Δθ = K ∫ M ds/(EI), where the constant K is introduced to correct for the increased flexibility of a curved pipe, and where the integration is over the entire length of pipe between supports. In Table 5.3.1 are given equations derived by this method for moment and thrust at one reaction point for pipes in one plane that are fully fixed, hinged at both ends, hinged at one end and fixed at the

K = (10 + 12λ²)/(1 + 12λ²)    when λ > 0.335
α = (2/3)K √[(5 + 6λ²)/18]    when λ ≤ 1.472
α = K(6λ² − 1)/(6λ² + 5)    when λ > 1.472
β = 18λ/(1 + 12λ²)
γ = [8λ − 36λ³ + (32λ² + 20/3) √((4/3)λ² + 5/18)] ÷ (1 + 12λ²)    when λ < 0.58
γ = (12λ² + 18λ − 2)/(1 + 12λ²)    when λ > 0.58
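A small helper evaluating these flexure constants for a given λ, assuming the piecewise ranges stated above (for λ < 0.335 the handbook reads K from Fig. 5.3.3):

```python
import math

# Flexure constants for an initially curved pipe as functions of lambda = t*R/r^2.
# The K expression applies for lambda > 0.335; K = 1 and alpha = 1 for straight pipe.

def flexure_constants(lam):
    K = (10 + 12 * lam**2) / (1 + 12 * lam**2)
    if lam <= 1.472:
        alpha = (2 / 3) * K * math.sqrt((5 + 6 * lam**2) / 18)
    else:
        alpha = K * (6 * lam**2 - 1) / (6 * lam**2 + 5)
    beta = 18 * lam / (1 + 12 * lam**2)
    if lam < 0.58:
        gamma = (8 * lam - 36 * lam**3
                 + (32 * lam**2 + 20 / 3) * math.sqrt((4 / 3) * lam**2 + 5 / 18)) / (1 + 12 * lam**2)
    else:
        gamma = (12 * lam**2 + 18 * lam - 2) / (1 + 12 * lam**2)
    return K, alpha, beta, gamma

print(flexure_constants(0.5))    # a moderately curved, thin-walled bend
```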

Table 5.3.1  General Equations for Pipelines in One Plane (See Figs. 5.3.1 and 5.3.2)

Both ends fully fixed

M0 5 Fx 5 Fy 5

Both ends hinged

2ABF 1 CGH 2 B 2H 2 A2G 2 CF 2 EI xsCH 2 A2d 1 EI ysBH 2 AFd 2ABF 1 CGH 2 B 2H 2 A2G 2 CF 2 EI xsBH 2 AFd 1 EI ysGH 2 F 2d 2ABF 1 CGH 2 B 2H 2 A2G 2 CF 2

M0 5

EI x F GH 2 F 2

Fx 5

EI x H GH 2 F 2

Fy 5 0 u 5 0

M0 5 0

M0 5 0

Fy 5 u 5

EI x C 1 EI y B CG 2 B 2 EI x B 1 EI y G CG 2 B 2 xsAB 2 CF d 1 ysAG 2 BF d CG 2 B 2

Fx 5

Fy 5 u 5

M0 5 Fx 5 Fy 5

u 5

EI x C 1 EI y B CG 2 B 2 EI x B 1 EI y G CG 1 B 2 uxsAB 2 CF d 1 ysAG 2 BF d CG 2 B 2 EI xsCF 2 ABd 1 EI ysBF 2 AGd 1 EI usCG 2 B 2d 2ABF 1 CGH 2 A3G 2 CF 2 2 B 2H EI xsCH 2 A2d 1 EI ysBH 2 AFd 1 EI usCF 2 ABd 2ABF 1 CGH 2 A2G 2 CF 2 2 B 2H EI xsBH 2 AFd 1 EI ysGH 2 F 2d 1 EI usBF 2 AGd 2ABF 1 CGH 2 A2G 2 CF 2 2 B 2H

EI x G

Fy 5 0

M0 5 0 Fx 5

In general for any specific rotation u and movement x and y . . .

EI xsCF 2 ABd 1 EI ysBF 2 AGd

u 5 0

Fx 5

Origin end only hinged, other end fully fixed

Symmetric about y-axis

Unsymmetric

2x F G

PIPELINE FLEXURE STRESSES

The increased flexibility of the curved pipe is brought about by the tendency of its cross section to flatten. This flattening causes a transverse flexure stress whose maximum is Sr. Because the maximum longitudinal and maximum transverse stresses do not occur at the same point in the pipe’s cross section, the resulting maximum shear is not one-half the difference of Sl and St; it is Ss. In the straight sections of the pipe, a  1, the transverse stress disappears, and l  1⁄2. This discussion of Ss does not include the uniform transverse or longitudinal tension stresses induced by the internal pressure in the pipe; their effects should be added if appreciable. Table 5.3.2 gives values of the constants A, B, C, F, G, and H for use in equations listed in Table 5.3.1. The values may be used (1) for the solution of any pipeline or (2) for the derivation of equations for standard shapes composed of straight sections and arcs of circles as of Fig. 5.3.5. Equations for shapes not given may be obtained by algebraic addition of those given. All measurements are from the left-hand end of the pipeline. Reactions and stresses are greatly influenced by end conditions. Formulas are given to cover the extreme conditions. The following suggestions and comments should be considered when laying out a pipeline: Avoid expansion bends, and design the entire pipeline to take care of its own expansion. The movement of the equipment to which the ends of the pipeline are attached must be included in the x and y of the equations. Maximum flexibility is obtained by placing supports and anchors so that they will not interfere with the natural movement of the pipe. That shape is most efficient in which the maximum length of pipe is working at the maximum safe stress. Excessive bending moment at joints is more likely to cause trouble than excessive stresses in pipe walls. Hence, keep pipe joints away from points of high moment. Reactions and stresses are greatly influenced by flattening of the cross section of the curved portions of the pipeline. It is recommended that cold springing allowances be discounted in stress calculations. Application to Two- and Three-Plane Pipelines Pipelines in more than one plane may be solved by the successive application of the preceding data, dividing the pipeline into two or more one-plane lines. EXAMPLE 1. The unsymmetric pipeline of Fig. 5.3.4 has fully fixed ends. From Table 5.3.2 use K  1 for all sections, since only straight segments are involved. Upon introduction of a  120 in (3.05 m), b  60 in (1.52 m), and c  180 in (4.57 m), into the preceding relations (Table 5.3.3) for A, B, C, F, G, H, the equations for the reactions at 0 from Table 5.3.1 become

5-53

M 2 5 M 1 2 Fxb 5 EI xs23.5333 3 1024d 1 EI ys21.1215 3 1024d M 3 5 M 2 1 Fyc 5 EI xs12.1345 3 1024d 1 EI ys11.2854 3 1024d Thus the maximum moment M occurs at 3. The total maximum longitudinal fiber stress (a  1 for straight pipe)

Sl 5

Fx 2prt

6

M 3r I

There is no transverse flexure stress since all sections are straight. The maximum shearing stress is either (1) one-half of the maximum longitudinal fiber stress as given above, (2) one-half of the hoop-tension stress caused by an internal radial pressure that might exist in the pipe, or (3) one-half the difference of the maximum longitudinal fiber stress and hoop-tension stress, whichever of these three possibilities is numerically greatest. EXAMPLE 2. The equations of Table 5.3.1 may be employed to develop the solution of generalized types of pipe configurations for which Fig. 5.3.5 is a typical example. If only temperature changes are considered, the reactions for the right-angle pipeline (Fig. 5.3.5) may be determined from the following equations: M 0 5 C1EI x/R2 Fx 5 C2EI x/R3 Fy 5 C3EI x/R3 In these equations, x is the x component of the deflection between reaction points caused by temperature change only. The values of C1, C2, and C3 are given in Fig. 5.3.6 for K  1 and K  2. For other values of K, interpolation may be employed.

Fig. 5.3.5

Right angle pipeline.

M 0 5 EI xs27.1608 3 10 d 1 EI ys28.3681 3 10 d 25

25

Fx 5 EI xs11.0993 3 1025d 1 EI ys13.1488 3 1026d Fy 5 EI xs13.1488 3 1026d 1 EI ys11.33717 3 1026d Also it follows that M 1 5 M 0 1 Fya 5 EI xs13.0625 3 1024d 1 EI ys17.6799 3 1025d

Fig. 5.3.4

EXAMPLE 3. With a/R  20 and b/R  3, the value of C1 is 0.185 for K  1 and 0.165 for K  2. If K  1.75, the interpolated value of C1 is 0.175. Elimination of Flexure Stresses Pipeline flexure stresses that normally would result from movement of supports or from the tendency of the pipes to expand under temperature change often may be avoided entirely through the use of expansion joints (Sec. 8.7). Their use may simplify both the design of the pipeline and the support structure. When using expansion joints, the following suggestions should be considered: (1) select the expansion joint carefully for maximum temperature range (and deflection) expected so as to prevent damage to expansion fitting; (2) provide guides to limit movement at the expansion joint to direction permitted by joint; (3) provide adequate anchors at one end of each straight section or along its midlength, forcing movement to occur at the expansion joint yet providing adequate support for the pipeline; (4) mount expansion joints adjacent to an anchor point to prevent sagging of the pipeline under its own weight and do not depend upon the expansion joint for stiffness—it is intended to be flexible; (5) give consideration to effects of corrosion, since the corrugated character of expansion joints makes cleaning difficult.

5-54

Table 5.3.2

Values of A, B C, F, G, and H for Various Piping Elements A 5 Kx ds s sx 1 x2d 2 1

B 5 K xy ds

C 5 Kx 2 ds

F 5 K y ds

G 5 K y 2 ds

H 5 K  ds

2

s 1 x1x2 ≤ 3

sy

Fy

Ax

s s y 1 y2d 2 1



s 2 sx 1 x1x2 1 x 22d 3 1

s s y 1 y2d 2 1

s 2 sy 1 y1y2 1 y 22d 3 1

spy 1 2RdKR

Fy 1 ¢ 2y 1

spy 2 2RdKR

p Fy 2 ¢ 2y 2 R≤ KR2 2

Ay



sx

A sy 1 y2d 2 1

s sx 1 x2d 2 1

A sy 1 y2d 3 1

s s2 1 y1y2 ≤ 3

s

s

s 1 sx1y1 1 x2y2d 6 2R A¢y 1 p ≤

R2 A¢x 1 ≤ 2x

pKRx 2R A¢y 2 p ≤ ¢

px 2 R≤ KR 2

Ay 1 ¢x 2

R ≤ KR2 2

Ax 1 ¢

pR 2 x≤ KR2 4

¢

py 1 R≤ KR 2

Fy 1 ¢

p R≤ KR2 2

pR 1 y≤ KR2 4

pKR

pKR 2

Ay 2 ¢x 1 ¢

¢

px 1 R≤ KR 2

px 2 R≤ KR 2

R ≤ KR2 2 Ax 1 ¢

Ay 2 ¢x 1

R ≤ KR2 2

Ay 2 ¢x 2

R ≤ KR2 2

Ay 1 B ¢1 2

Ay 2 B ¢1 2

R R KR2 4

pR 2 x≤ KR2 4

Ax 1 B ¢

22 ≤x 2 2

1

Ay 2 B ¢1 2

2

[rsu2 2 u1d

x 22 R KR2 2

¢

py 2 R≤ KR 2

Fy 1 ¢

pR 2 y≤ KR2 4

¢

py 2 R≤ KR 2

Fy 1 ¢

pR 2 y≤ KR2 4

B

py 22 1 ¢1 2 ≤ RR KR 4 2

Fy 1 B ¢ 1 2 1 ¢

B

py 22 2 ¢1 2 ≤ RR KR 4 2

py 22 B 1 ¢1 2 ≤ RR KR 4 2

R R KR2 4

Ax 1 B ¢

p 1 1 ≤R 8 4 2

x 22 R KR2 2

B

py 22 2 ¢1 2 ≤ RR KR 4 2

R R KR2 4

Ay 2 Bxscos u2 2 cos u1d R ssin2 u2 2 sin2 u1dR KR2 2

2

R ssin 2 u2 2 sin 2 u1d 4 R 2 su2 2 u1dR KR2 2

[ysu2 2 u1d 2 Rscos u2 2 cos u1d] KR

22 ≤y 2

p 1 2 ≤ RR KR2 8 4

22 ≤y 2

p 1 2 ≤ RR KR2 8 4

Fy 2 B yscos u2 2 cos u1d 1

pKR 4

p 1 2 ≤ RR KR2 8 4

Fy 2 B ¢ 1 2 2 ¢

Ax 2 B xssin u2 2 sin u1d

22 ≤y 2

22 Fy 1 B ¢ 1 2 ≤y 2 1 ¢

pKR 2

p 1 2 ≤ RR KR2 8 4

Fy 2 B ¢ 1 2 2 ¢

2 Rssin u2 2 sin u1d] KR 1

pR 1 y≤ KR2 4

R R KR2 4

22 ≤x 2 1

Fy 1 ¢

p 1 1 ≤R 8 4

22 ≤x Ay 1 B ¢1 2 2

px R ¢ ≤ KR 2 4 22

py 1 R≤ KR 2

pR 1 x≤ KR2 4

22 ≤x 2 2

px R ¢ ≤ KR 2 4 22

Ax 1 ¢

¢

R ssin 2 u2 2 sin 2 u1d 4 R 2 su2 2 u1dR KR2 2

su2 2 u1dKR

5-55

5-56

PIPELINE FLEXURE STRESSES

Fig. 5.3.6 Reactions for right-angle pipelines.

Table 5.3.3 Example 1 Showing Determination of Integrals

Values of integrals

Part of pipe    A                           B                         C                               F            G            H
0–1             a²/2                        0                         a³/3                            0            0            a
1–2             ab                          ab²/2                     a²b                             b²/2         b³/3         b
2–3             (c/2)(2a + c)               (bc/2)(2a + c)            c³/3 + ac(a + c)                bc           b²c          c
Total 0–3       a²/2 + ab + (c/2)(2a + c)   ab²/2 + (bc/2)(2a + c)    a³/3 + a²b + c³/3 + ac(a + c)   b²/2 + bc    b³/3 + b²c   a + b + c
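As a check on the table entries above, the integrals can be evaluated numerically. The sketch below assumes a geometry inferred from the entries themselves (Fig. 5.3.4 is not reproduced): part 0–1 along the x axis from (0, 0) to (a, 0), part 1–2 from (a, 0) to (a, b), and part 2–3 from (a, b) to (a + c, b), with K = 1 for straight pipe; the segment lengths are arbitrary test values.

```python
# Numerical check of the Table 5.3.3 entries.  With K = 1 for straight pipe,
# A = int(x ds), B = int(x*y ds), C = int(x**2 ds), F = int(y ds),
# G = int(y**2 ds), and H = int(ds) along the pipe centerline.

def segment_integrals(p0, p1, n=20000):
    """Approximate (A, B, C, F, G, H) along the straight segment p0 -> p1, K = 1."""
    (x0, y0), (x1, y1) = p0, p1
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    ds = length / n
    a = b = c = f = g = h = 0.0
    for i in range(n):
        t = (i + 0.5) / n                    # midpoint rule
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        a += x * ds
        b += x * y * ds
        c += x * x * ds
        f += y * ds
        g += y * y * ds
        h += ds
    return a, b, c, f, g, h

if __name__ == "__main__":
    a_len, b_len, c_len = 10.0, 6.0, 4.0     # assumed segment lengths
    parts = [((0.0, 0.0), (a_len, 0.0)),
             ((a_len, 0.0), (a_len, b_len)),
             ((a_len, b_len), (a_len + c_len, b_len))]
    numeric = [sum(vals) for vals in zip(*(segment_integrals(p0, p1) for p0, p1 in parts))]
    a, b, c = a_len, b_len, c_len
    closed = [a**2/2 + a*b + c/2*(2*a + c),          # A, total 0-3
              a*b**2/2 + b*c/2*(2*a + c),            # B
              a**3/3 + a**2*b + c**3/3 + a*c*(a + c),  # C
              b**2/2 + b*c,                          # F
              b**3/3 + b**2*c,                       # G
              a + b + c]                             # H
    for name, num, cl in zip("ABCFGH", numeric, closed):
        print(f"{name}: numerical {num:10.3f}   closed-form {cl:10.3f}")
```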

5.4 NONDESTRUCTIVE TESTING
by Donald D. Dodge

REFERENCES: Various authors, “Nondestructive Testing Handbook,” 8 vols., American Society for Nondestructive Testing. Boyer, “Metals Handbook,” vol. 11, American Society for Metals. Heuter and Bolt, “Sonics,” Wiley. Krautkramer, “Ultrasonic Testing of Materials,” Springer-Verlag. Spanner, “Acoustic Emission: Techniques and Applications,” Intex, American Society for Nondestructive Testing. Crowther, “Handbook of Industrial Radiography,” Arnold. Wiltshire, “A Further Handbook of Industrial Radiography,” Arnold. “Standards,” vol. 03.03, ASTM. Boiler and Pressure Vessel Code, Secs. III, V, XI, ASME. ASME Handbook, “Metals Engineering—Design,” McGraw-Hill. SAE Handbook, Secs. J358, J359, J420, J425–J428, J1242, J1267, SAE. Materials Evaluation, Jour. Am. Soc. Nondestructive Testing.

Nondestructive tests are those tests that determine the usefulness, serviceability, or quality of a part or material without limiting its usefulness. Nondestructive tests are used in machinery maintenance to avoid costly unscheduled loss of service due to fatigue or wear; they are used in manufacturing to ensure product quality and minimize costs. Consideration of test requirements early in the design of a product may facilitate testing and minimize testing cost. Nearly every form of energy is used in nondestructive tests, including all wavelengths of the electromagnetic spectrum as well as vibrational mechanical energy. Physical properties, composition, and structure are determined; flaws are detected; and thickness is measured. These tests are here divided into the following basic methods: magnetic particle, penetrant, radiographic, ultrasonic, eddy current, acoustic emission, microwave, and infrared. Numerous techniques are utilized in the application of each test method. Table 5.4.1 gives a summary of many nondestructive test methods.

MAGNETIC PARTICLE METHODS

Magnetic particle testing is a nondestructive method for detecting discontinuities at or near the surface in ferromagnetic materials. After the test object is properly magnetized, finely divided magnetic particles are applied to its surface. When the object is properly oriented to the induced magnetic field, a discontinuity creates a leakage flux which attracts and holds the particles, forming a visible indication. Magnetic-field direction and character are dependent upon how the magnetizing force is applied and upon the type of current used. For best sensitivity, the magnetizing current must flow in a direction parallel to the principal direction of the expected defect. Circular fields, produced by passing current through the object, are almost completely contained within the test object. Longitudinal fields, produced by coils or yokes, create external poles and a general-leakage field. Alternating, direct, or half-wave direct current may be used for the location of surface defects. Half-wave direct current is most effective for locating subsurface defects. Magnetic particles may be applied dry or as a wet suspension in a liquid such as kerosene or water. Colored dry powders are advantageous when testing for subsurface defects and when testing objects that have rough surfaces, such as castings, forgings, and weldments. Wet particles are preferred for detection of very fine cracks, such as fatigue, stress corrosion, or grinding cracks. Fluorescent wet particles are used to inspect objects with the aid of ultraviolet light. Fluorescent inspection is widely used because of its greater sensitivity. Application of particles while magnetizing current is on (continuous method) produces stronger indications than those obtained if the particles are applied after the current is shut off (residual method). Interpretation of subsurface-defect indications requires experience. Demagnetization of the test object after inspection is advisable.
Magnetic flux leakage is a variation whereby leakage flux due to flaws is detected electronically via a Hall-effect sensor. Computerized signal interpretation and data imaging techniques are employed.
Electrified particle testing indicates minute cracks in nonconducting materials. Particles of calcium carbonate are positively charged as they are blown through a spray gun at the test object. If the object is metal-backed, such as porcelain enamel, no preparation other than cleaning is necessary. When it is not metal-backed, the object must be dipped in an aqueous penetrant solution and dried. The penetrant remaining in cracks provides a mobile electron supply for the test. A readily visible powder indication forms at a crack owing to the attraction of the positively charged particles.

PENETRANT METHODS

Liquid penetrant testing is used to locate flaws open to the surface of nonporous materials. The test object must be thoroughly cleaned before testing. Penetrating liquid is applied to the surface of a test object by a brush, spray, flow, or dip method. A time allowance (1 to 30 min) is required for liquid penetration of surface flaws. Excess penetrant is then carefully removed from the surface, and an absorptive coating, known as developer, is applied to the object to draw penetrant out of flaws, thus showing their location, shape, and approximate size. The developer is typically a fine powder, such as talc, usually in suspension in a liquid. Penetrating-liquid types are (1) for test in visible light, and (2) for test under ultraviolet light (3,650 Å). Sensitivity of penetrant testing is greatest when a fluorescent penetrant is used and the object is


Table 5.4.1 Nondestructive Test Methods*

Method

Measures or detects

Applications

Advantages

Limitations

Acoustic emission

Crack initiation and growth rate Internal cracking in welds during cooling Boiling or cavitation Friction or wear Plastic deformation Phase transformations

Pressure vessels Stressed structures Turbine or gearboxes Fracture mechanics research Weldments Sonic-signature analysis

Remote and continuous surveillance Permanent record Dynamic (rather than static) detection of cracks Portable Triangulation techniques to locate flaws

Transducers must be placed on part surface Highly ductile materials yield lowamplitude emissions Part must be stressed or operating Interfering noise needs to be filtered out

Acoustic-impact (tapping)

Debonded areas or delaminations in metal or nonmetal composites or laminates Cracks under bolt or fastener heads Cracks in turbine wheels or turbine blades Loose rivets or fastener heads Crushed core

Brazed or adhesive-bonded structures Bolted or riveted assemblies Turbine blades Turbine wheels Composite structures Honeycomb assemblies

Portable Easy to operate May be automated Permanent record or positive meter readout No couplant required

Part geometry and mass influences test results Impactor and probe must be repositioned to fit geometry of part Reference standards required Pulser impact rate is critical for repeatability

D-Sight (Diffracto)

Enhances visual inspection for surface abnormalities such as dents, protrusions, or waviness Crushed core Lap joint corrosion Cold-worked holes Cracks

Detect impact damage to composites or honeycomb corrosion in aircraft lap joints Automotive bodies for waviness

Portable Fast, flexible Noncontact Easy to use Documentable

Part surface must reflect light or be wetted with a fluid

Eddy current

Surface and subsurface cracks and seams Alloy content Heat-treatment variations Wall thickness, coating thickness Crack depth Conductivity Permeability

Tubing Wire Ball bearings “Spot checks” on all types of surfaces Proximity gage Metal detector Metal sorting Measure conductivity in % IACS

No special operator skills required High speed, low cost Automation possible for symmetric parts Permanent-record capability for symmetric parts No couplant or probe contact required

Conductive materials Shallow depth of penetration (thin walls only) Masked or false indications caused by sensitivity to variations such as part geometry Reference standards required Permeability variations

Magneto-optic eddycurrent imager

Cracks Corrosion thinning in aluminum

Aluminum aircraft Structure

Real-time imaging Approximately 4-in area coverage

Frequency range of 1.6 to 100 kHz Surface contour Temperature range of 32 to 90°F Directional sensitivity to cracks

Eddy-sonic

Debonded areas in metalcore or metal-faced honeycomb structures Delaminations in metal laminates or composites Crushed core

Metal-core honeycomb Metal-faced honeycomb Conductive laminates such as boron or graphitefiber composites Bonded-metal panels

Portable Simple to operate No couplant required Locates far-side debonded areas Access to only one surface required May be automated

Specimen or part must contain conductive materials to establish eddy-current field Reference standards required Part geometry

Electric current

Cracks Crack depth Resistivity Wall thickness Corrosion-induced wall thinning

Metallic materials Electrically conductive materials Train rails Nuclear fuel elements Bars, plates, other shapes

Access to only one surface required Battery or dc source Portable

Edge effect Surface contamination Good surface contact required Difficult to automate Electrode spacing Reference standards required

Electrified particle

Surface flaws in nonconducting material Through-to-metal pinholes on metal-backed material Tension, compression, cyclic cracks Brittle-coating stress cracks

Glass Porcelain enamel Nonhomogeneous materials such as plastic or asphalt coatings Glass-to-metal seals

Portable Useful on materials not practical for penetrant inspection

Poor resolution on thin coatings False indications from moisture streaks or lint Atmospheric conditions High-voltage discharge

Table 5.4.1 Nondestructive Test Methods* (Continued)

Method

Measures or detects

Applications

Advantages

Limitations

Filtered particle

Cracks Porosity Differential absorption

Porous materials such as clay, carbon, powdered metals, concrete Grinding wheels High-tension insulators Sanitary ware

Colored or fluorescent particles Leaves no residue after baking part over 400°F Quickly and easily applied Portable

Size and shape of particles must be selected before use Penetrating power of suspension medium is critical Particle concentration must be controlled Skin irritation

Infrared (radiometry) (thermography)

Hot spots Lack of bond Heat transfer Isotherms Temperature ranges

Brazed joints Adhesive-bonded joints Metallic platings or coatings; debonded areas or thickness Electrical assemblies Temperature monitoring

Sensitive to 0.1°F temperature variation Permanent record or thermal picture Quantitative Remote sensing; need not contact part Portable

Emissivity Liquid-nitrogen-cooled detector Critical time-temperature relationship Poor resolution for thick specimens Reference standards required

Leak testing

Leaks: Helium Ammonia Smoke Water Air bubbles Radioactive gas Halogens

Joints: Welded Brazed Adhesive-bonded Sealed assemblies Pressure or vacuum chambers Fuel or gas tanks

High sensitivity to extremely small, light separations not detectable by other NDT methods Sensitivity related to method selected

Accessibility to both surfaces of part required Smeared metal or contaminants may prevent detection Cost related to sensitivity

Magnetic particle

Surface and slightly subsurface flaws; cracks, seams, porosity, inclusions Permeability variations Extremely sensitive for locating small tight cracks

Ferromagnetic materials; bar, plate, forgings, weldments, extrusions, etc.

Advantage over penetrant is that it indicates subsurface flaws, particularly inclusions Relatively fast and low-cost May be portable

Alignment of magnetic field is critical Demagnetization of parts required after tests Parts must be cleaned before and after inspection Masking by surface coatings

Magnetic field (also magnetic flux leakage)

Cracks Wall thickness Hardness Coercive force Magnetic anisotropy Magnetic field Nonmagnetic coating thickness on steel

Ferromagnetic materials Ship degaussing Liquid-level control Treasure hunting Wall thickness of nonmetallic materials Material sorting

Measurement of magnetic material properties May be automated Easily detects magnetic objects in nonmagnetic material Portable

Permeability Reference standards required Edge effect Probe lift-off

Microwave (300 MHz– 300 GHz)

Cracks, holes, debonded areas, etc., in nonmetallic parts Changes in composition, degree of cure, moisture content Thickness measurement Dielectric constant Loss tangent

Reinforced plastics Chemical products Ceramics Resins Rubber Wood Liquids Polyurethane foam Radomes

Between radio waves and infrared in electromagnetic spectrum Portable Contact with part surface not normally required Can be automated

Will not penetrate metals Reference standards required Horn-to-part spacing critical Part geometry Wave interference Vibration

Liquid penetrants (dye or fluorescent)

Flaws open to surface of parts; cracks, porosity, seams, laps, etc. Through-wall leaks

All parts with nonabsorbing surfaces (forgings, weldments, castings, etc.). Note: Bleed-out from porous surfaces can mask indications of flaws

Low cost Portable Indications may be further examined visually Results easily interpreted

Surface films such as coatings, scale, and smeared metal may prevent detection of flaws Parts must be cleaned before and after inspection Flaws must be open to surface

Fluoroscopy (cinefluorography) (kinefluorography)

Level of fill in containers Foreign objects Internal components Density variations Voids, thickness Spacing or position

Flow of liquids Presence of cavitation Operation of valves and switches Burning in small solid-propellant rocket motors

High-brightness images Real-time viewing Image magnification Permanent record Moving subject can be observed

Costly equipment Geometric unsharpness Thick specimens Speed of event to be studied Viewing area Radiation hazard

Table 5.4.1 Nondestructive Test Methods* (Continued)

Method

Measures or detects

Applications

Advantages

Limitations

Neutron radiology (thermal neutrons from reactor, accelerator, or Californium 252)

Hydrogen contamination of titanium or zirconium alloys Defective or improperly loaded pyrotechnic devices Improper assembly of metal, nonmetal parts Corrosion products

Pyrotechnic devices Metallic, nonmetallic assemblies Biological specimens Nuclear reactor fuel elements and control rods Adhesive-bonded structures

High neutron absorption by hydrogen, boron, lithium, cadmium, uranium, plutonium Low neutron absorption by most metals Complement to X-ray or gamma-ray radiography

Very costly equipment Nuclear reactor or accelerator required Trained physicists required Radiation hazard Nonportable Indium or gadolinium screens required

Gamma radiology (cobalt 60, iridium 192)

Internal flaws and variations, porosity, inclusions, cracks, lack of fusion, geometry variations, corrosion thinning Density variations Thickness, gap, and position

Usually where X-ray machines are not suitable because source cannot be placed in part with small openings and/or power source not available Panoramic imaging

Low initial cost Permanent records; film Small sources can be placed in parts with small openings Portable Low contrast

One energy level per source Source decay Radiation hazard Trained operators needed Lower image resolution Cost related to source size

X-ray radiology

Internal flaws and variations; porosity, inclusions, cracks, lack of fusion, geometry variations, corrosion Density variations Thickness, gap, and position Misassembly Misalignment

Castings Electrical assemblies Weldments Small, thin, complex wrought products Nonmetallics Solid-propellant rocket motors Composites Container contents

Permanent records; film Adjustable energy levels (5 kV–25 meV) High sensitivity to density changes No couplant required Geometry variations do not affect direction of X-ray beam

High initial costs Orientation of linear flaws in part may not be favorable Radiation hazard Depth of flaw not indicated Sensitivity decreases with increase in scattered radiation

Radiometry X-ray, gamma ray, beta ray (transmission or backscatter)

Wall thickness Plating thickness Variations in density or composition Fill level in cans or containers Inclusions or voids

Sheet, plate, strip, tubing Nuclear reactor fuel rods Cans or containers Plated parts Composites

Fully automatic Fast Extremely accurate In-line process control Portable

Radiation hazard Beta ray useful for ultrathin coatings only Source decay Reference standards required

Reverse-geometry digital X-ray

Cracks Corrosion Water in honeycomb Carbon epoxy honeycomb Foreign objects

Aircraft structure

High-resolution 10⁶-pixel image with high contrast

Access to both sides of object Radiation hazard

X-ray computed tomography (CT)

Small density changes Cracks Voids Foreign objects

Solid-propellant rocket motors Rocket nozzles Jet-engine parts Turbine blades

Measures X-ray opacity of object along many paths

Very expensive Trained operator Radiation hazard

Shearography electronic

Lack of bond Delaminations Plastic deformation Strain Crushed core Impact damage Corrosion in Al honeycomb

Composite-metal honeycomb Bonded structures Composite structures

Large area coverage Rapid setup and operation Noncontacting Video image easy to store

Requires vacuum thermal, ultrasonic, or microwave stressing of structure to cause surface strain

Thermal (thermochromic paint, liquid crystals)

Lack of bond Hot spots Heat transfer Isotherms Temperature ranges Blockage in coolant passages

Brazed joints Adhesive-bonded joints Metallic platings or coatings Electrical assemblies Temperature monitoring

Very low initial cost Can be readily applied to surfaces which may be difficult to inspect by other methods No special operator skills

Thin-walled surfaces only Critical time-temperature relationship Image retentivity affected by humidity Reference standards required

Sonic (less than 0.1 MHz)

Debonded areas or delaminations in metal or nonmetal composites or laminates Cohesive bond strength under controlled conditions Crushed or fractured core Bond integrity of metal insert fasteners

Metal or nonmetal composite or laminates brazed or adhesivebonded Plywood Rocket-motor nozzles Honeycomb

Portable Easy to operate Locates far-side debonded areas May be automated Access to only one surface required

Surface geometry influences test results Reference standards required Adhesive or core-thickness variations influence results

Table 5.4.1 Nondestructive Test Methods* (Continued)

Method

Measures or detects

Applications

Advantages

Limitations

Ultrasonic (0.1–25 MHz)

Internal flaws and variations; cracks, lack of fusion, porosity, inclusions, delaminations, lack of bond, texturing Thickness or velocity Poisson’s ratio, elastic modulus

Metals Welds Brazed joints Adhesive-bonded joints Nonmetallics In-service parts

Most sensitive to cracks Test results known immediately Automating and permanent-record capability Portable High penetration capability

Couplant required Small, thin, or complex parts may be difficult to inspect Reference standards required Trained operators for manual inspection Special probes

Thermoelectric probe

Thermoelectric potential Coating thickness Physical properties Thompson effect P-N junctions in semiconductors

Metal sorting Ceramic coating thickness on metals Semiconductors

Portable Simple to operate Access to only one surface required

Hot probe Difficult to automate Reference standards required Surface contaminants Conductive coatings

* From Donald J. Hagemaier, “Metal Progress Databook,” Douglas Aircraft Co., McDonnell-Douglas Corp., Long Beach, CA.

observed in a semidarkened location. After testing, the penetrant and developer are removed by washing with water, sometimes aided by an emulsifier, or with a solvent.
In filtered particle testing, cracks in porous objects (100 mesh or smaller) are indicated by the difference in absorption between a cracked and a flaw-free surface. A liquid containing suspended particles is sprayed on a test object. If a crack exists, particles are filtered out and concentrate at the surface as liquid flows into the additional absorbent area created by the crack. Fluorescent or colored particles are used to locate flaws in unfired dried clay, certain fired ceramics, concrete, some powdered metals, carbon, and partially sintered tungsten and titanium carbides.

RADIOGRAPHIC METHODS

Radiographic test methods employ X-rays, gamma rays, or similar

penetrating radiation to reveal flaws, voids, inclusions, thickness, or structure of objects. Electromagnetic energy wavelengths in the range of 0.01 to 10 Å (1 Å = 10⁻⁸ cm) are used to examine the interior of opaque materials. Penetrating radiation proceeds from its source in straight lines to the test object. Rays are differentially absorbed by the object, depending upon the energy of the radiation and the nature and thickness of the material. X-rays of a variety of wavelengths result when high-speed electrons in a vacuum tube are suddenly stopped. An X-ray tube contains a heated filament (cathode) and a target (anode); radiation intensity is almost directly proportional to filament current (mA); tube voltage (kV) determines the penetration capability of the rays. As tube voltage increases, shorter wavelengths and more intense X-rays are produced. When the energy of penetrating radiation increases, the difference in attenuation between materials decreases. Consequently, more film-image contrast is obtained at lower voltage, and a greater range of thickness can be radiographed at one time at higher voltage. Gamma rays of a specific wavelength are emitted from the disintegrating nuclei of natural radioactive elements, such as radium, and from a variety of artificial radioactive isotopes produced in nuclear reactors. Cobalt 60 and iridium 192 are commonly used for industrial radiography. The half-life of an isotope is the time required for half of the radioactive material to decay. This time ranges from a few hours to many years. Radiographs are photographic records produced by the passage of penetrating radiation onto a film. A void or reduced mass appears as a darker image on the film because of the lesser absorption of energy and the resulting additional exposure of the film. The quantity of X-rays absorbed by a material generally increases as the atomic number increases.
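The decay of a gamma-ray source follows directly from its half-life. The sketch below shows that arithmetic; the half-lives used (roughly 74 days for iridium 192 and 5.27 years for cobalt 60) and the elapsed times are approximate illustrative values, not figures quoted in this section.

```python
# Fraction of a radiographic source's activity remaining after an elapsed time,
# given its half-life: remaining = 0.5 ** (elapsed / half_life).

def remaining_fraction(elapsed_days: float, half_life_days: float) -> float:
    """Fraction of the original activity left after elapsed_days."""
    return 0.5 ** (elapsed_days / half_life_days)

# Approximate half-lives, assumed for illustration:
HALF_LIVES_DAYS = {"Ir-192": 74.0, "Co-60": 5.27 * 365.25}

if __name__ == "__main__":
    for source, half_life in HALF_LIVES_DAYS.items():
        for elapsed in (30.0, 90.0, 365.0):  # days
            frac = remaining_fraction(elapsed, half_life)
            print(f"{source}: {frac:6.1%} of activity remains after {elapsed:5.0f} days")
```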

A radiograph is a shadow picture, since X-rays and gamma rays follow the laws of light in shadow formation. Four factors determine the best geometric sharpness of a picture: (1) The effective focal-spot size of the radiation source should be as small as possible. (2) The sourceto-object distance should be adequate for proper definition of the area of the object farthest from the film. (3) The film should be as close as possible to the object. (4) The area of interest should be in the center of and perpendicular to the X-ray beams and parallel to the X-ray film. Radiographic films vary in speed, contrast, and grain size. Slow films generally have smaller grain size and produce more contrast. Slow films are used where optimum sharpness and maximum contrast are desired. Fast films are used where objects with large differences in thickness are to be radiographed or where sharpness and contrast can be sacrificed to shorten exposure time. Exposure of a radiographic film comes from direct radiation and scattered radiation. Direct radiation is desirable, image-forming radiation; scattered radiation, which occurs in the object being X-rayed or in neighboring objects, produces undesirable images on the film and loss of contrast. Intensifying screens made of 0.005- or 0.010-in- (0.13-mm or 0.25-mm) thick lead are often used for radiography at voltages above 100 kV. The lead filters out much of the low-energy scatter radiation. Under action of X-rays or gamma rays above 88 kV, a lead screen also emits electrons which, when in intimate contact with the film, produce additional coherent darkening of the film. Exposure time can be materially reduced by use of intensifying screens above and below the film. Penetrameters are used to indicate the contrast and definition which exist in a radiograph. The type generally used in the United States is a small rectangular plate of the same material as the object being Xrayed. It is uniform in thickness (usually 2 percent of the object thickness) and has holes drilled through it. ASTM specifies hole diameters 1, 2, and 4 times the thickness of the penetrameter. Step, wire, and bead penetrameters are also used. (See ASTM Materials Specification E94.) Because of the variety of factors that affect the production and measurements of an X-ray image, operating factors are generally selected from reference tables or graphs which have been prepared from test data obtained for a range of operating conditions. All materials may be inspected by radiographic means, but there are limitations to the configurations of materials. With optimum techniques, wires 0.0001 in (0.003 mm) in diameter can be resolved in small electrical components. At the other extreme, welded steel pressure vessels with 20-in (500-mm) wall thickness can be routinely inspected by use of high-energy accelerators as a source of radiation. Neutron radiation penetrates extremely dense materials such as lead more readily than X-rays or gamma rays but is attenuated by lighter-atomic-weight materials such as plastics, usually because of their hydrogen content. Radiographic standards are published by ASTM, ASME, AWS, and API, primarily for detecting lack of penetration or lack of fusion in welded objects. Cast-metal objects are radiographed to detect


conditions such as shrink, porosity, hot tears, cold shuts, inclusions, coarse structure, and cracks. The usual method of utilizing penetrating radiation employs film. However, Geiger counters, semiconductors, phosphors (fluoroscopy), photoconductors (xeroradiography), scintillation crystals, and vidicon tubes (image intensifiers) are also used. Computerized digital radiography is an expanding technology. The dangers connected with exposure of the human body to X-rays and gamma rays should be fully understood by any person responsible for the use of radiation equipment. NIST is a prime source of information concerning radiation safety. NRC specifies maximum permissible exposure to be a 1.25 R/1⁄4 year. ULTRASONIC METHODS Ultrasonic nondestructive test methods employ high-frequency mechanical vibrational energy to detect and locate structural discontinuities or differences and to measure thickness of a variety of materials. An electric pulse is generated in a test instrument and transmitted to a transducer, which converts the electric pulse into mechanical vibrations. These low-energy-level vibrations are transmitted through a coupling liquid into the test object, where the ultrasonic energy is attenuated, scattered, reflected, or resonated to indicate conditions within material. Reflected, transmitted, or resonant sound energy is reconverted to electrical energy by a transducer and returned to the test instrument, where it is amplified. The received energy is then usually displayed on a cathoderay tube. The presence, position, and amplitude of echoes indicate conditions of the test-object material. Materials capable of being tested by ultrasonic energy are those which transmit vibrational energy. Metals are tested in dimensions of up to 30 ft (9.14 m). Noncellular plastics, ceramics, glass, new concrete, organic materials, and rubber can be tested. Each material has a characteristic sound velocity, which is a function of its density and modulus (elastic or shear). Material characteristics determinable through ultrasonics include structural discontinuities, such as flaws and unbonds, physical constants and metallurgical differences, and thickness (measured from one side). A common application of ultrasonics is the inspection of welds for inclusions, porosity, lack of penetration, and lack of fusion. Other applications include location of unbond in laminated materials, location of fatigue cracks in machinery, and medical applications. Automatic testing is frequently performed in manufacturing applications. Ultrasonic systems are classified as either pulse-echo, in which a single transducer is used, or through-transmission, in which separate sending and receiving transducers are used. Pulse-echo systems are more common. In either system, ultrasonic energy must be transmitted into, and received from, the test object through a coupling medium, since air will not efficiently transmit ultrasound of these frequencies. Water, oil, grease, and glycerin are commonly used couplants. Two types of testing are used: contact and immersion. In contact testing, the transducer is placed directly on the test object. In immersion testing, the transducer and test object are separated from one another in a tank filled with water or by a column of water or by a liquid-filled wheel. Immersion testing eliminates transducer wear and facilitates scanning of the test object. Scanning systems have paper-printing or computerized video equipment for readout of test information. 
Ultrasonic transducers are piezoelectric units which convert electric energy into acoustic energy and convert acoustic energy into electric energy of the same frequency. Quartz, barium titanate, lithium sulfate, lead metaniobate, and lead zirconate titanate are commonly used transducer crystals, which are generally mounted with a damping backing in a housing. Transducers range in size from 1⁄16 to 5 in (0.15 to 12.7 cm) and are circular or rectangular. Ultrasonic beams can be focused to improve resolution and definition. Transducer characteristics and beam patterns are dependent upon frequency, size, crystal material, and construction. Test frequencies used range from 40 kHz to 200 MHz. Flaw-detection and thickness-measurement applications use frequencies between

500 kHz and 25 MHz, with 2.25 and 5 MHz being most commonly employed for flaw detection. Low frequencies (40 kHz to 1.0 MHz) are used on materials of low elastic modulus or large grain size. High frequencies (2.25 to 25 MHz) provide better resolution of smaller defects and are used on fine-grain materials and thin sections. Frequencies above 25 MHz are employed for investigation and measurement of physical properties related to acoustic attenuation. Wave-vibrational modes other than longitudinal are effective in detecting flaws that do not present a reflecting surface to the ultrasonic beam, or other characteristics not detectable by the longitudinal mode. They are useful also when large areas of plates must be examined. Wedges of plastic, water, or other material are inserted between the transducer face and the test object to convert, by refraction, to shear, transverse, surface, or Lamb vibrational modes. As in optics, Snell’s law expresses the relationship between incident and refracted beam angles; i.e., the ratio of the sines of the angles from the normal, of the incident and refracted beams in two mediums, is equal to the ratio of the mode acoustic velocities in the two mediums. Limiting conditions for ultrasonic testing may be the test-object shape, surface roughness, grain size, material structure, flaw orientation, selectivity of discontinuities, and the skill of the operator. Test sensitivity is less for cast metals than for wrought metals because of grain size and surface differences. Standards for acceptance are published in many government, national society, and company specifications (see references above). Evaluation is made by comparing (visually or by automated electronic means) received signals with signals obtained from reference blocks containing flat bottom holes between 1⁄64 and 8⁄64 in (0.040 and 0.32 cm) in diameter, or from parts containing known flaws, drilled holes, or machined notches.
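Snell's law as stated above is easy to apply numerically for an angle-beam (wedge) setup. In the sketch below, the wedge and shear-wave velocities are representative assumed values, not data from this section.

```python
import math

# Snell's law for an angle-beam (wedge) setup:
# sin(incident) / sin(refracted) = v_wedge / v_test_object.

V_WEDGE = 2730.0        # m/s, longitudinal velocity in an acrylic wedge (assumed)
V_STEEL_SHEAR = 3240.0  # m/s, shear-wave velocity in steel (assumed)

def refracted_angle_deg(incident_deg: float, v1: float = V_WEDGE,
                        v2: float = V_STEEL_SHEAR) -> float:
    """Refracted-beam angle from the normal in the test object."""
    s = math.sin(math.radians(incident_deg)) * v2 / v1
    if s >= 1.0:
        raise ValueError("beyond the critical angle; this mode is not refracted")
    return math.degrees(math.asin(s))

def incident_angle_deg(refracted_deg: float, v1: float = V_WEDGE,
                       v2: float = V_STEEL_SHEAR) -> float:
    """Wedge (incident) angle needed to produce the desired refracted angle."""
    return math.degrees(math.asin(math.sin(math.radians(refracted_deg)) * v1 / v2))

if __name__ == "__main__":
    print(f"incidence for a 45 deg shear wave in steel: {incident_angle_deg(45.0):.1f} deg")
    print(f"shear-wave angle from a 35 deg wedge:       {refracted_angle_deg(35.0):.1f} deg")
```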

EDDY CURRENT METHODS

Eddy current nondestructive tests are based upon correlation between electromagnetic properties and physical or structural properties of a test object. Eddy currents are induced in metals whenever they are brought into an ac magnetic field. These eddy currents create a secondary magnetic field, which opposes the inducing magnetic field. The presence of discontinuities or material variations alters eddy currents, thus changing the apparent impedance of the inducing coil or of a detection coil. Coil impedance indicates the magnitude and phase relationship of the eddy currents to their inducing magnetic-field current. This relationship is dependent upon the mass, conductivity, permeability, and structure of the metal and upon the frequency, intensity, and distribution of the alternating magnetic field. Conditions such as heat treatment, composition, hardness, phase transformation, case depth, cold working, strength, size, thickness, cracks, seams, and inhomogeneities are indicated by eddy current tests. Correlation data must usually be obtained to determine whether test conditions for desired characteristics of a particular test object can be established. Because of the many factors which cause variation in electromagnetic properties of metals, care must be taken that the instrument response to the condition of interest is not nullified or duplicated by variations due to other conditions. Alternating-current frequencies between 1 and 5,000,000 Hz are used for eddy current testing. Test frequency determines the depth of current penetration into the test object, owing to the ac phenomenon of “skin effect.” One “standard depth of penetration” is the depth at which the eddy currents are equal to 37 percent of their value at the surface. In a plane conductor, depth of penetration varies inversely as the square root of the product of conductivity, permeability, and frequency (a numerical sketch of this relationship is given at the end of this subsection). High-frequency eddy currents are more sensitive to surface flaws or conditions, while low-frequency eddy currents are sensitive also to deeper internal flaws or conditions. Test coils are of three general types: the circular coil, which surrounds an object; the bobbin coil, which is inserted within an object; and the probe coil, which is placed on the surface of an object. Coils are further classified as absolute, when testing is conducted without direct comparison with a reference object in another coil; or differential, when


comparison is made through use of two coils connected in series opposition. Many variations of these coil types are utilized. Axial length of a circular test coil should not be more than 4 in (10.2 cm), and its shape should correspond closely to the shape of the test object for best results. Coil diameter should be only slightly larger than the test-object diameter for consistent and useful results. Coils may be of the air-core or magnetic-core type. Instrumentation for the analysis and presentation of electric signals resulting from eddy current testing includes a variety of means, ranging from meters to oscilloscopes to computers. Instrument meter or alarm circuits are adjusted to be sensitive only to signals of a certain electrical phase or amplitude, so that selected conditions are indicated while others are ignored. Automatic and automated testing is one of the principal advantages of the method. Thickness measurement of metallic and nonmetallic coatings on metals is performed using eddy current principles. Coating thicknesses measured typically range from 0.0001 to 0.100 in (0.00025 to 0.25 cm). For measurement to be possible, coating conductivity must differ from that of the base metal.
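The standard depth of penetration mentioned above varies inversely as the square root of the product of conductivity, permeability, and frequency; in SI units this is commonly written δ = 1/√(πfμσ). The sketch below evaluates that expression for an assumed aluminum conductivity; the numbers are illustrative, not data from this section.

```python
import math

MU_0 = 4.0e-7 * math.pi  # permeability of free space, H/m

def standard_depth_m(freq_hz: float, conductivity_s_per_m: float,
                     relative_permeability: float = 1.0) -> float:
    """Standard depth of penetration, where eddy-current density falls to about
    37 percent of its surface value: delta = 1 / sqrt(pi * f * mu * sigma)."""
    mu = relative_permeability * MU_0
    return 1.0 / math.sqrt(math.pi * freq_hz * mu * conductivity_s_per_m)

if __name__ == "__main__":
    sigma_aluminum = 3.5e7  # S/m, assumed representative conductivity
    for f in (1e3, 1e4, 1e5, 1e6):  # test frequencies, Hz
        print(f"f = {f:9.0f} Hz -> standard depth about "
              f"{standard_depth_m(f, sigma_aluminum) * 1000.0:.3f} mm in aluminum")
```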

MICROWAVE METHODS

Microwave test methods utilize electromagnetic energy to determine

characteristics of nonmetallic substances, either solid or liquid. Frequencies used range from 1 to 3,000 GHz. Microwaves generated in a test instrument are transmitted by a waveguide through air to the test object. Analysis of reflected or transmitted energy indicates certain material characteristics, such as moisture content, composition, structure, density, degree of cure, aging, and presence of flaws. Other applications include thickness and displacement measurement in the range of 0.001 in (0.0025 cm) to more than 12 in (30.4 cm). Materials that can be tested include most solid and liquid nonmetals, such as chemicals, minerals, plastics, wood, ceramics, glass, and rubber.

INFRARED METHODS

Infrared nondestructive tests involve the detection of infrared electromagnetic energy emitted by a test object. Infrared radiation is produced naturally by all matter at all temperatures above absolute zero.

5-63

Materials emit radiation at varying intensities, depending upon their temperature and surface characteristics. A passive infrared system detects the natural radiation of an unheated test object, while an active system employs a source to heat the test object, which then radiates infrared energy to a detector. Sensitive indication of temperature or temperature distribution through infrared detection is useful in locating irregularities in materials, in processing, or in the functioning of parts. Emission in the infrared range of 0.8 to 15 μm is collected optically, filtered, detected, and amplified by a test instrument which is designed around the characteristics of the detector material. Temperature variations on the order of 0.1°F can be indicated by meter or graphic means. Infrared theory and instrumentation are based upon radiation from a blackbody; therefore, emissivity correction must be made electrically in the test instrument or arithmetically from instrument readings.

ACOUSTIC SIGNATURE ANALYSIS

Acoustic signature analysis involves the analysis of sound energy emitted from an object to determine characteristics of the object. The object may be a simple casting or a complex manufacturing system. A passive test is one in which sonic energy is transmitted into the object. In this case, a mode of resonance is usually detected to correlate with cracks or structure variations, which cause a change in effective modulus of the object, such as a nodular iron casting. An active test is one in which the object emits sound as a result of being struck or as a result of being in operation. In this case, characteristics of the object may be correlated to damping time of the sound energy or to the presence or absence of a certain frequency of sound energy. Bearing wear in rotating machinery can often be detected prior to actual failure, for example. More complex analytical systems can monitor and control manufacturing processes, based upon analysis of emitted sound energy. Acoustic emission is a technology distinctly separate from acoustic signature analysis and is one in which strain produces bursts of energy in an object. These are detected by ultrasonic transducers coupled to the object. Growth of microcracks, and other flaws, as well as incipient failure, is monitored by counting the pulses of energy from the object or recording the time rate of the pulses of energy in the ultrasonic range (usually a discrete frequency between 1 kHz and 1 MHz).

5.5 EXPERIMENTAL STRESS AND STRAIN ANALYSIS
by Gary L. Cloud

REFERENCES: Cloud, “Optical Methods of Engineering Analysis,” revised second printing, Cambridge University Press, New York, 1998. Dally and Riley, “Experimental Stress Analysis,” 4th ed., College House Enterprises, Knoxville, Tenn., 2005. Kobayashi (ed.), “Handbook on Experimental Mechanics,” Prentice-Hall, Englewood Cliffs, N.J., 1987.

Experimental stress and strain analysis, otherwise called experimental mechanics, is the quantitative measurement of the response of bodies and structures to stimuli, often in the form of applied loads. Applications of experimental mechanics are numerous and include, e.g., biomechanics, geomechanics, fracture mechanics, infrastructure, materials characterization, manufacturing processes, transducer design, motion analysis, mechanical design, and nondestructive testing. Many problems faced by analysts and designers stretch the limitations of analytical and numerical techniques, so experimental solutions are needed. Hybrid approaches wherein experimental data are used as input to a numerical solution that is later subjected to experimental validation are becoming more commonplace. The major problem that must be faced in measurement of strain in structures is that,

for most engineering materials, the strain induced by loads is very small, the change in length per unit gage length being on the order of parts per million (microstrain), so a sensitivity of 1 to 10 microstrain is required. A direct way to measure stress does not exist, except, possibly, by photoelasticity. To determine stress, the strain must be measured and converted to stress by use of the stress-strain relations for the material. Many techniques are available to attack problems in experimental mechanics, and engineers should be familiar with a spectrum of methods so that the most appropriate one for a given situation might be chosen. Some techniques have evolved, through long usage, into well-known standard approaches, while others are evolving or have not yet been thoroughly accepted by industrial practitioners. The former class includes resistance strain gages, photoelasticity, geometric moiré, and brittle coatings. Those less known or still in evolution include holographic interferometry, speckle interferometry, speckle shearography, acoustic emission, thermography, digital photoelasticity, digital image correlation, and moiré interferometry. This section describes in


some detail three of the standard methods and the basic functions of several of the more advanced methods. Systems, supplies, information, and software for implementing these techniques are available from vendors.

RESISTANCE STRAIN GAGES

A resistance strain gage consists of a length of a fine conducting wire or foil, often configured as a grid, that is intimately attached to the specimen. As the specimen is deformed, the gage is deformed by the same amount, so that its resistance changes. If the relationship between resistance change and strain is known for the gage material, then the observed resistance change can be converted to specimen strain at the gage location. The relationship between strain and resistance change is

e = (1/F)(ΔR/R)

where e is the strain along the gage axis, R is the gage resistance, ΔR is the change of gage resistance with strain, and F is the gage factor for the given gage. Gage factor F is determined by the gage manufacturer through batch testing in a known uniaxial stress field and is provided to the gage user, as is the gage resistance. Strain gages are employed in two distinct ways. First, they are used for direct measurement of strain. Second, they are commonly used in transducing devices in which strain measurements are used as indicators of some other physical parameter such as load, pressure, or acceleration. Strain gages of many types, sizes, and configurations are available. The most common type is the bonded metallic gage, in which the sensing element is a thin foil that is mounted on a plastic substrate. Strain gages must be carefully selected for an application in accordance with manufacturers’ recommendations. Perfect bonding between gage and specimen is necessary, and rigorous but simple cleaning, bonding, wiring, and protective coating practices must be followed. Gage factors for common metallic gages are about 2; since the strain is small, the resistance change per unit gage resistance is also small. An important part of a strain gage installation is the interface circuit that converts the small resistance change to a voltage that can be amplified and measured. Two basic interface circuits are commonly used, usually with additions that add capability and flexibility.
Wheatstone Bridge The Wheatstone bridge interface circuit is shown in Fig. 5.5.1. In the usual unbalanced operational mode, the variable resistor R2 is first adjusted to give 0-V signal output EO. Then, as the resistance of the gage R1 is changed by ΔR1, the signal voltage changes proportionately. Often more than one of, and perhaps all, the resistors in the circuit are strain gages. For four identical active gages, where E is the supply voltage and F is the gage factor, the relationship between output voltage and gage strains is

EO = (EF/4)(−e1 + e2 − e3 + e4)
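As a minimal numerical sketch of the two relations above: the gage factor of 2.0 is the typical value mentioned in the text, while the 350-Ω gage resistance, the 5-V excitation, and the resistance change are assumed illustrative values.

```python
GAGE_FACTOR = 2.0        # typical foil-gage value (text: "about 2")
GAGE_RESISTANCE = 350.0  # ohms, assumed
EXCITATION = 5.0         # volts, assumed bridge supply E

def strain_from_resistance_change(delta_r: float, r: float = GAGE_RESISTANCE,
                                  f: float = GAGE_FACTOR) -> float:
    """e = (1/F)(dR/R)"""
    return delta_r / (f * r)

def bridge_output_single_gage(strain: float, e_supply: float = EXCITATION,
                              f: float = GAGE_FACTOR) -> float:
    """Magnitude of EO = (E*F/4)*e for one active gage (e2 = e3 = e4 = 0)."""
    return e_supply * f * strain / 4.0

if __name__ == "__main__":
    delta_r = 0.7  # ohms, assumed measured resistance change
    e = strain_from_resistance_change(delta_r)
    print(f"strain = {e * 1e6:.0f} microstrain")                           # 1000 microstrain
    print(f"bridge output = {bridge_output_single_gage(e) * 1e3:.2f} mV")  # 2.50 mV
```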

This voltage is usually amplified and recorded by a voltmeter, an oscilloscope, or a digital DAQ system. Dedicated strain gage readout devices

Fig. 5.5.1 Wheatstone bridge circuit for resistance strain gages.

and/or the use of circuit calibration allows direct indication in strain units. The output signal is proportional to the excitation voltage applied to the bridge, so a stable power supply must be used to limit noise and to maintain constant circuit sensitivity. Either dc or ac excitation can be used; the latter facilitates the use of filtering and carrier amplification to reduce noise. An advantage of the bridge circuit is that more than one active strain gage can be used to multiply output or to cancel unwanted inputs such as temperature effects. For example, consider the case where surface strain in a beam is to be measured. Two gages can be used. The one on the top of the beam might be subjected to tensile strain, and the one on the bottom will experience compressive strain. The gages are wired into the bridge circuit in adjacent arms, e.g., R1 and R4. The bridge output will then be twice what will be seen if only one gage is used. Any effect of axial loads on the beam will be canceled, as will temperature effects, because they have the same sign. The unbalanced bridge circuit can be used to measure either static or dynamic strain. It is the most widely used basic circuit, being implemented with many extra features in most commercial strain indicators and DAQ systems.
Potentiometer Circuit The potentiometer circuit, also called the ballast circuit or the voltage divider circuit, is shown in its basic form in Fig. 5.5.2. This circuit is particularly simple to fabricate. The function of the ballast resistor in the circuit is to limit the current through the strain gage while still giving usable output. Usually, this circuit is used with only one strain gage, so care must be taken that unwanted inputs, such as temperature effects, are eliminated in other ways. A disadvantage of the potentiometer circuit is that there is no way to initially balance it so that the starting output voltage is zero.

Fig. 5.5.2 Potentiometer circuit for resistance strain gages.

The small

change in output voltage that is caused by the strain gage will be superimposed on a relatively large bias voltage, meaning that one is faced with the problem of measuring accurately the small change in a large quantity. This fact limits the use of the circuit to problems involving rapidly varying or dynamic strains. A capacitor placed in the circuit acts as a high-pass filter so as to block the steady bias voltage and pass the time-varying component of interest to an amplifier. The circuit is dc-powered, and the power supply must be stable and noise-free. In spite of these limitations, this circuit is exceptionally useful and easy to use, e.g., in measuring strains in machinery.
Circuit Calibration The output equations for the interface circuits are useful for basic design of the measuring system, but not for accurate calibration of the entire system for measuring strain. This calibration is best accomplished by creating a known resistance change at the input. Implementation involves placing a large shunt calibration resistor Rcal in parallel with the strain gage Rg. The equivalent calibration strain ecal is given by

ecal = (1/F)[Rg/(Rg + Rcal)]

The relationship between circuit observable output (e.g., oscilloscope trace deflection) and input strain is thus established.
Temperature Effects Strain gages exhibit a resistance change with temperature that cannot be distinguished from the effects of strain. Large errors can result if the gage temperature changes during an experiment and steps are not taken to eliminate these errors. Two approaches are used singly or together. Manufacturers of strain gages provide gages that are self-compensated for broad classes of materials (e.g., brass, steel). These gages can be


used without any other temperature compensation with good, but not perfect, results. They are especially useful for single-gage applications over short periods of time where mechanical strains are relatively large. The output voltage equation for the Wheatstone bridge can be exploited to eliminate temperature effects in two related ways. In the first, an inactive dummy gage that is the same as the active gage is mounted on an unstressed coupon of the same material. It is wired into an arm of the circuit adjacent to the measuring gage. False strains from temperature are canceled. Another approach is to design the experiment so that two active measuring gages are placed in adjacent bridge arms. The desired strain signal is multiplied, and the false thermal strain is eliminated. Similar approaches are used to eliminate temperature effects on gage lead wires, leading, e.g., to the familiar “three-wire hookup” for single-gage installations.
Transverse Sensitivity Resistance strain gages exhibit sensitivity to the strain component that is perpendicular to the gage axis. If the stress field is uniaxial, this transverse sensitivity does not cause errors because the gage factor is determined in such a field. In any other biaxial stress field, ignoring the effects of transverse sensitivity can lead to errors that range from negligible to very large, depending on the nature of the field and the strain component being measured. The transverse sensitivity KT is determined by the gage manufacturer by batch testing and is provided to the gage user. For typical foil gages, KT is less than 2 percent. The errors caused by transverse sensitivity are eliminated by using two gages in a biaxial stress field, even if only one strain component is sought. That is, for accurate determination of one strain in an unknown biaxial stress field, two gages are required. The two gages are mounted perpendicular to each other. Usually the gages are identical. The common gage factor F as given by the manufacturer is set on the strain indicator. The strain indicator gives an apparent strain for each gage, but the readings are not true strains. The readings are corrected as follows. Let the apparent indicated strain for gage A be QA and the reading for the second gage B be QB. The true or actual strain eA along the axis of gage A is

eA = [(1 − μ0·KT)/(1 − KT²)](QA − KT·QB)
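The correction above is easily scripted. In the sketch below, KT and the two apparent readings are assumed illustrative values, and the 0.285 default for μ0 is the fallback value suggested in the text; gage B is corrected by permuting the subscripts, as noted next.

```python
def true_strain(q_a: float, q_b: float, k_t: float, mu0: float = 0.285) -> float:
    """Correct the apparent reading of gage A for transverse sensitivity, using
    the perpendicular gage B reading:
    eA = [(1 - mu0*KT) / (1 - KT**2)] * (QA - KT*QB)."""
    return (1.0 - mu0 * k_t) / (1.0 - k_t ** 2) * (q_a - k_t * q_b)

if __name__ == "__main__":
    k_t = 0.02                       # transverse sensitivity, assumed (text: typically < 2 percent)
    q_a, q_b = 800e-6, -1500e-6      # apparent indicated strains, assumed values
    e_a = true_strain(q_a, q_b, k_t)  # gage A corrected
    e_b = true_strain(q_b, q_a, k_t)  # gage B corrected (subscripts permuted)
    print(f"eA = {e_a * 1e6:.1f} microstrain, eB = {e_b * 1e6:.1f} microstrain")
```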

Note that if QB ≫ QA, the error caused by ignoring KT will be large. The μ0 in the equation is the Poisson ratio for the material that was used by the manufacturer in determining the gage factor and the transverse sensitivity factor. It is not always available, but may be taken to be 0.285 with reasonable safety. The true strain for gage B is found by permuting the subscripts in the above equation.
Strain Rosettes In a general biaxial strain field, there are three unknowns—the two principal strains and the orientations of the principal strain axes. If complete knowledge of the field is sought, then three strain measurements are required. Strain gages do not respond to shear strain, so three determinations of normal strain are used. Three ordinary gages can be employed, but gage manufacturers provide standard combinations of gages, called strain rosettes, that contain two or more gage elements in a single package which can be applied just as a single gage would. The two most common configurations of gage elements in rosettes are (1) the rectangular rosette, having gage elements at 0°, 45°, and 90°, and (2) the delta (equiangular) rosette, having elements at 0°, 60°, and 120°. The analysis of rosette data involves the strain transformation equations and the transverse sensitivity correction that are described above. Consider the case in which a strain-indicating system is used, so the output is in strain units. Set the usual manufacturer’s gage factor F on the instrument. Take Qn to be the apparent strain reading from gage n. The true strain for gage n is en. Recall that the transverse sensitivity of each gage is KT. The true strains for each gage element in the rectangular rosette are then found to be e1 = Q1 − KT·Q3; e2 = (1 + KT)Q2 − KT(Q1 + Q3); e3 = Q3 − KT·Q1. If gage elements 1 and 3 establish an xy coordinate system, then the strain transformation equations yield ex = e1, ey = e3, and shear strain exy = e1 + e3 − 2e2. With strains in the xy coordinate system known,

5-65

the principal strains and the principal axes can be found from the transformation equations or Mohr’s circle. Similar data reduction equations for other types of rosettes are easily found in basic strain gage literature. Strain Gage Transducers Resistance strain gages are used extensively in constructing transducers to measure physical quantities such as force, torque, pressure, and acceleration. Simple devices of this type can be easily fabricated in the lab for specific applications, or they may be purchased. As an example, consider the situation mentioned above in which two gages are mounted on the top and bottom of a beam to measure the strain in the beam. If the output of the circuitry is calibrated in terms of the load on the beam, then the device becomes a loadmeasuring transducer. Likewise, a common wrench is instrumented with strain gages and calibrated so that the output is indicative of the torque transmitted by the wrench. Critical factors in designing such transducers involve the design of the elastic member such that the physical quantity to be measured is converted to a detectable strain, the proper mounting of the gages to detect the strain, the design of the circuit so that it responds to the quantity of interest and rejects unwanted inputs, and the calibration of the device. Because the strain in the elastic member is not of quantitative interest in a transducer, circuit calibration is not useful or appropriate. Instead, direct calibration is accomplished by applying known values of the desired physical quantity to the transducer and relating these known values to the output. For example, if a beam is used as a load transducer, then known dead weights are applied to the beam. The circuit output is then adjusted to indicate directly the values of the applied known load. GEOMETRIC MOIRÉ

The geometric moiré or mechanical moiré effect is the mechanical occlusion of light by superimposed patterns to produce another pattern that is called a moiré fringe pattern. It is not an optical interference effect. Geometric moiré provides a basis for a family of methods for measuring deformation, strain, shape, and contour. Advanced moiré methods, including moiré interferometry, are easily understood if the principles of geometric moiré are known. This section describes the basic moiré phenomenon and its implementation for measurement of displacement and strain in a deformable body. Then two techniques for using moiré to perform measurements of out-of-plane shape and deformation are outlined. In-Plane Moiré Figure 5.5.3 illustrates the basic moiré phenomenon that appears when one system of closely spaced straight lines, a grill, is laid over a similar set. The two sets have different line spacings, and one set is also rotated slightly with respect to the other. At certain locations, the lines from one set block the gaps in the other set so that no light is allowed through. If the line spacings are sufficiently small, then the areas where the light is blocked appear to join up to form another system of broad lines, called a moiré fringe pattern. Simple experimentation with


Fig. 5.5.3 Moiré effect from superimposing two line arrays.


such a system shows that the moiré fringes undergo large movement when one of the grills is shifted only a small amount relative to the other grill. Clearly, the moiré pattern is a motion magnifier that can be exploited to give sensitive measurements of relative motion. Figure 5.5.4 shows a cross-sectional view of how the moiré effect is used to measure displacement and strain. Light is projected through two grills that were originally identical but that now have slightly different


Fig. 5.5.4 Moiré effect as used for displacement measurement.

spacings because one grill has been stretched relative to the other. Consider that this grill, called the specimen grill, has been stretched so that 10 lines fill the space occupied by 12 lines on the second grill, called the master grill. The widths of the cracks between the lines of the gratings vary slowly from zero to a maximum. Recall that the grill lines are actually close together, so the eye or camera does not see the light passing through the individual cracks; rather, local averages are seen. The local average seems to oscillate twice from dark to light in the space shown, meaning 2 orders of moiré fringes are present. The extension or stretch in the specimen grill is u = Np, where N is the moiré fringe order and p is the line spacing or pitch of the master grill. Applying the strain-displacement relationship gives e = ∂u/∂x = ∂(Np)/∂x = p(∂N/∂x), which means that the strain along an axis perpendicular to the grill lines is the grill pitch times the gradient of the moiré fringe order (or fringe spacing) along the same axis. Similar relationships can be derived for strain in other directions and for the shear strain, so the complete strain map over the extent of the specimen can be determined. Fringes caused by small relative rotations of the grills are oriented perpendicular to the grating lines, so they do not affect the measurement of normal strain. Specimen and Master Gratings Grills or gratings must be attached to the surface of the specimen and superimposed with a master grating using one of several observing techniques. For measurement of strain in metals, the gratings must be very fine, having about 10 to 100 lines per millimetre. A considerable body of art and technology has developed for creating and attaching specimen gratings. Superb photographically reproduced gratings for moiré work are available from vendors. Another good approach is to obtain fine photomechanical screens of the type used by the printing industry. These screens are available as photographic negatives or as fine metallic mesh. They may be cut up and glued to the specimen using techniques similar to those used for attaching strain gages. More commonly, the purchased gratings are kept as intact masters and are copied onto the specimen and onto submaster gratings by photographic contact printing, by stencil techniques, or by vacuum deposition. Contact printing into a thin coating of the photoresist that is used for making printed circuit boards has been a method of choice for many practitioners. Observing Techniques A direct and simple method of measuring displacements and strains in flat specimens involves the use of a transparent model. A grill or grating is attached or printed onto the specimen surface. The specimen is mounted in the loading frame, and a master grill is put into direct contact with the specimen grill and initially oriented to bring the moiré fringe orders to a minimum.

The master grating is held in position with, e.g., a spring clothespin. A camera set up at some distance from the specimen-master combination is used to photograph the moiré fringes as the specimen is loaded. If a nontransparent specimen is used, the specimen grating must be made to have high visibility in reflected light. The direct superposition method can still be used, but fringe visibility will suffer. An alternative viewing method is to image the specimen grating in the back of a view camera. The master grating replaces the ground glass in the camera back, and the moiré pattern will be visible there. Direct photography of the specimen grating provides two other approaches. The first approach is to use double exposure, one exposure before loading and the other after loading. The moiré fringe pattern is then contained in the photograph. The second approach is to photograph the specimen grating for each loading state. These photographic replicas can be superimposed directly with a submaster or with each other at any later time, which is often an advantage in dealing with nonreversible processes, e.g., those involving plastic deformation. A disadvantage of any procedure that involves imaging of the specimen grating is that the optical system must be capable of resolving finely spaced lines, and very accurate focus must be established. Even so, the moiré fringes might not be of good contrast. In these cases, optical Fourier processing can be implemented to improve fringe visibility and reduce noise, while offering the possibility of enhancing strain sensitivity. Figure 5.5.5 shows a moiré fringe pattern obtained with these techniques.

Fig. 5.5.5 Portion of moiré pattern for measuring strain near a plastically deformed hole in aluminum.

Data Reduction The objective is to create a map of strain distribution from a moiré fringe pattern, so the derivative of the moiré fringe order with respect to position coordinates must be found. Conceptually, this is easily done by drawing an array of axes across the fringe pattern with the axes oriented perpendicular to the grill direction. The location of each moiré fringe relative to some fiducial origin along a chosen axis is measured and plotted as fringe order versus distance in specimen space. This graph is differentiated to obtain the fringe gradient, which is then multiplied by the grill pitch to obtain normal strain as a function of position along the axis. The process is repeated for each of the axes to complete the strain map. Fringe digitization and computer processing are useful, but manual processing serves well for small jobs, since only local fringe spacings are required.
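In computational form, the data reduction amounts to numerically differentiating the fringe-order-versus-position record and scaling by the grill pitch, e = p(∂N/∂x). The following is a minimal sketch of that step; the pitch and fringe locations are invented sample data.

```python
import numpy as np

# Pitch of the master grill, mm (hypothetical: 20 lines/mm)
p = 1.0 / 20.0

# Measured positions x (mm) of successive moiré fringes of order N = 0, 1, 2, ...
x = np.array([0.0, 2.1, 4.0, 5.7, 7.2])      # invented example data
N = np.arange(len(x), dtype=float)

# Fringe gradient dN/dx by finite differences, then strain e = p * dN/dx
dN_dx = np.gradient(N, x)
strain = p * dN_dx
print(strain)   # normal strain at each measurement station along the axis
```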

Shadow Moiré The shadow method of geometric moiré utilizes the superimposition of a master grating with its own shadow. The fringes are loci of constant out-of-plane elevation, so they form an absolute contour map of the object being examined. This method and related laser scanning techniques are used, e.g., in manufacturing and medical diagnosis. To create a shadow moiré pattern, a master grill or grating of pitch p is placed in front of the specimen object. For large objects, the master grating can be created by stretching strings across a frame. The combination is illuminated with a collimated beam at incidence angle α. Observation is at normal incidence by means of a field lens that focuses the light to a point, at which is placed a camera that is focused on the object. The incident illumination creates a shadow of the grating on the surface of the specimen. The grating shadows on the specimen are elongated or foreshortened by a factor that depends on the inclination of the surface, and they are shifted laterally by an amount that depends on the incidence angle and the distance w from the master grating to the specimen. The camera superimposes the image of the master grating with its own distorted shadow, thereby forming a moiré fringe pattern. The fringes are numbered from some origin. If N is the moiré fringe order, then the elevation is w = Np/tan α. Projection Moiré Another useful method of using moiré for out-of-plane displacement measurement, or for determining changes of contour as opposed to absolute contour, involves projecting the reference grating onto the specimen by means of, e.g., a slide projector. This method is especially useful when the shapes of two objects must be compared, a situation that arises in, e.g., examining sheet metal stampings. An advantage is that only a small transparency of the specimen grating is required. Alternatively, a projected grating can be created by oblique interference of collimated beams of coherent light. Another advantage is that changes of contour are determined directly, and subtraction of absolute contours is not required. A disadvantage is that the imaging system must be able to resolve the individual grating lines, as with double-exposure in-plane moiré. To create the fringe pattern, the grating is projected at incidence angle α onto the specimen surface in its initial state, or onto a master reference surface. The projected grating image is photographed by a camera along the normal to the surface. The specimen is then distorted, or else the second surface is inserted if contour difference is sought. This grating is photographed over the first image. The developed double-exposure image will contain moiré fringes that are numbered as usual. At any point in the image where the moiré fringe order is N, the elevation difference w between the two states of the specimen is given by w = Np/sin α.
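A hedged numerical sketch of the two elevation relations follows (shadow moiré, w = Np/tan α, for absolute contour; projection moiré, w = Np/sin α, for contour change). The pitch, incidence angle, and fringe orders are invented for the illustration.

```python
import math

def shadow_moire_elevation(N, pitch, alpha_deg):
    # w = N p / tan(alpha): absolute contour elevation at fringe order N
    return N * pitch / math.tan(math.radians(alpha_deg))

def projection_moire_elevation(N, pitch, alpha_deg):
    # w = N p / sin(alpha): elevation difference between the two recorded states
    return N * pitch / math.sin(math.radians(alpha_deg))

# Example: 0.5-mm pitch grating, 45-deg incidence, fringe orders 0 through 3
for N in range(4):
    print(N, shadow_moire_elevation(N, 0.5, 45.0), projection_moire_elevation(N, 0.5, 45.0))
```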

OPTICAL INTERFEROMETRY

Many important methods of experimental mechanics are based on the phenomenon of optical interference. Approaches in this class include, e.g., photoelasticity, hologram interferometry, speckle interferometry, and moiré interferometry. Brief study of interference, one of the cornerstones of optics along with diffraction, facilitates quick understanding and use of all such methods. Collinear Interference Light is considered to be electromagnetic waves that are characterized by an electric vector that oscillates in time and space. The magnitude of the electric vector may be taken to be a simple harmonic plane wave described as E = A cos [(2π/λ)(z − vt)], where A is the amplitude, λ is the wavelength of the light, z is the distance along the optical path, v is the wave velocity, and t is time. Consider two waves that are of equal amplitude, identically polarized, and coherent, meaning that they originate from the same source, such as a laser. These two waves travel along the same path, but one of the waves leads the other by some distance or phase difference r. The two waves will interfere with each other to produce a resultant wave whose intensity I is a function of the phase difference, I = 4A² cos²(πr/λ). The phase lag between two light waves, which is not visible to the eye or detectable by any other means, is converted, through the interference process, to an intensity difference, which can be measured. The


transformation of invisible phase difference to visible intensity difference is the basis of all interferometry methods of measurement. It allows determination of physical quantities such as motion to within a fraction of the wavelength of light, giving sensitivities on the order of a few nanometres. Generic Interferometer Figure 5.5.6 shows a conceptual model that contains all the important elements of an interferometric system for measurement. Light comes from a coherent source such as a laser and is split into two parts by a beam splitter. The two waves have identical phases as they leave this component, but they travel along different optical paths, one or both of which might contain optical elements or specimens. The two waves are out of phase as they are directed into a combiner, where they interfere to produce an intensity that depends on the phase difference. A detector such as a photocell, an eye, or a camera records the intensity, from which the phase difference can be inferred. The phase difference is indicative of the difference in optical path lengths between the two paths.


Fig. 5.5.6 The generic interferometer model.

The conceptual model seems to imply that only pointwise measurements can be made with one wave, but the process can be executed for every point in a broad field simultaneously if a beam of light is used and appropriate optical elements such as lenses, mirrors, and imaging devices are employed. The interference process produces a map of light and dark patches, called interference fringes, that are superimposed on the image of the object being studied. This whole-field capability is a great strength of optical methods in general. Interferometry can be employed in various modes. The generic model suggests that the difference between the two paths can be obtained directly with one observation. More commonly, one path is held constant, and two observations are taken for two different states of the specimen path. When they are subtracted, the change in only the specimen path is determined.
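To make the collinear-interference relation concrete, the sketch below evaluates I = 4A² cos²(πr/λ) over a range of path-length differences r, showing the dark fringes at r = λ/2, 3λ/2, and so on. The wavelength and amplitude are assumed values chosen only for illustration.

```python
import numpy as np

wavelength = 632.8e-9          # assumed He-Ne laser wavelength, m
A = 1.0                        # wave amplitude, arbitrary units
r = np.linspace(0.0, 2.0 * wavelength, 9)   # path-length differences, m

I = 4.0 * A**2 * np.cos(np.pi * r / wavelength)**2
for ri, Ii in zip(r, I):
    print(f"r = {ri:.3e} m  ->  I = {Ii:.3f}")
```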

PHOTOELASTIC STRESS ANALYSIS

Photoelastic stress analysis is a method whereby polarized light is used to obtain, through interferometry, the stress state in a loaded transparent model or in a coating that is glued to the surface of a prototype. Stress Birefringence The refractive index of a material is the speed of light in a vacuum divided by the speed of light in the material. Optical path length is the physical path length (distance) traveled times the index of refraction. Changes or differences in optical path length cause phase lags or retardations in optical waves as they traverse a material. In certain materials, including many plastics, the index of refraction at any point is closely related to the state of stress at that point, and such materials are said to exhibit stress birefringence. Experiments on these materials show that
1. The index of refraction depends on the plane of polarization of light that passes through the material; i.e., the index is direction-dependent.
2. The state of the index of refraction can be described in terms of principal values that are associated with principal directions.


3. The principal values of the refractive index are proportional to the principal stresses.
4. The principal axes of refractive index coincide with the principal stress axes.
A conclusion is that if the directions and magnitudes of the principal refractive indexes can be ascertained, then the principal stresses and principal directions can be determined, provided that the relation between stress and refractive index is known for the material. Determination of stress by interferometric observation of refractive index changes is accomplished by the method of photoelasticity. Photoelasticity Experiment and theory show that birefringent materials can transmit light only if the light is polarized in the principal directions. If a polarized wave falls on a birefringent material, it is immediately divided into two polarized components, one in each principal direction, so the beam splitter function that is necessary for interferometry is served by the surface of the birefringent material. The two waves created traverse the material at differing speeds, depending on the two principal values of the refractive index. As these two waves exit the material, one lags behind the other by an amount called the relative retardation R, which depends on the local principal stresses and a material stress-optic coefficient Cs according to the equation R = Cs(σ1 − σ2)d, where σ1 and σ2 are the principal stresses and d is the thickness of the specimen. Figure 5.5.7 illustrates this process. To determine R, the two wave components must be recombined interferometrically. This task is accomplished by a second polarizer in the system, known as the analyzer, which extracts the portion of each wave that lies parallel to the transmission axis of the analyzer to create the conditions for collinear interference. The intensity produced through interference is directly related to the relative retardation as described above.

Fig. 5.5.8 Isoclinic and isochromatic fringes in compressed disk.

If this interferometric process occurs simultaneously over the entire field and a photograph is made of the result, interference fringes will be observed. Attention for the moment is limited to the case in which the polarization axis for the light entering the specimen is perpendicular to the analyzer axis. For linear polarization, two separate families of fringes are produced, as shown in Fig. 5.5.8. These families are
1. Isochromatic fringes, which are loci of points having constant relative retardation R, meaning they are lines of constant σ1 − σ2. From the isochromatic family, the magnitudes of stress in the specimen can be found.
2. Isoclinic fringes, which are loci of points where the principal stress axes are parallel to the axes of the polarizer and analyzer. This family is used to establish the principal stress directions for the entire specimen.
Circular Polarization A problem with the photoelastic measurements described above is that an isoclinic fringe always masks some of the isochromatics. The isoclinics can be eliminated by inserting two quarter-wave plates into the system, one on either side of the specimen. A quarter-wave plate is a birefringent slab that produces a quarter-wavelength of relative retardation over its extent. The plates are inserted into the photoelastic measuring system with their principal axes oriented at ±45° to the crossed axes of the polarizer and analyzer.


Fig. 5.5.7 Effect of birefringent plate on incident polarized light.


Fig. 5.5.9 Diffused-light photoelastic polariscope.

The entering plane-polarized wave is divided into two equal orthogonal components, and upon exit from the quarter-wave plate, one component will lag behind the other by λ/4. The resultant of the two out-of-phase waves describes a circular helix in space, hence the term circularly polarized. The physical effect of using circular polarization is that the directional dependence of the interference is removed. The second λ/4 plate essentially cancels the effect of the first one, but does not restore the directional dependence. Photoelastic Polariscope A combination of optical devices that is used for photoelasticity is called a polariscope. Many polariscope configurations are used. One setup, called the diffused-light polariscope, is shown in Fig. 5.5.9. In this system, a diffuser scatters light in all directions, only some of which passes through the system parallel to the optical axis.

The polarizer, quarter-wave plates, specimen, and analyzer are arranged as described above. These elements are usually mounted in calibrated rings so that they can be rotated relative to one another and to the specimen. A field lens is used to direct the light to a point, where a camera lens is placed to create an image of the specimen on film or a CCD array. The distance from the field lens to the camera aperture must be made equal to the focal length of the field lens; otherwise, the image is made with light that has not traveled parallel to the optical axis, and errors will be introduced. For quantitative measurements, a narrowband color filter is used to create nearly monochromatic light (single wavelength), and/or a spectral lamp is used as the light source. Material Calibration Accurate determination of the stress-optic coefficient Cs for the photoelastic model material is prerequisite to good quantitative results. Calibration is accomplished by testing a model in which the stress field is known, such as a tensile specimen, a beam in pure bending, or a disk in diametral compression. The simple tension test is the most generally valid, since the stress state in tension does not depend on any assumptions of linearity. The coefficient is dependent not only on the material, but also on time and wavelength,


and these factors must be taken into account. Several different calibration procedures are valid; only a basic one is described here. A “dog bone” specimen of the model material is placed in a loading frame within the polariscope. A load is applied, and the isochromatic order in the central section is carefully estimated by direct observation at a certain time t1, say 10 min, after loading. The load is increased by an increment, and the isochromatic order again is estimated at t1 after the load increase. The process is repeated for several loads, with the fringe order being determined always at the same time after changing the load. A plot of fringe order versus stress in the tensile specimen is constructed. The slope of this plot gives the stress-optic coefficient for time t1 after loading. When the unknown specimen of the same material is tested to determine its stress state, the isochromatic orders are recorded for the same time t1 after loading that was used in the calibration. Sometimes this is not possible. To avoid repeating the calibration testing, the isochromatic orders are often observed at a sequence of times after loading so that a family of calibration results is obtained. Note that the definition of the stress-optic coefficient is not standardized. Care is required in using calibration data provided by others. Recording and Interpreting Isochromatics The specimen and polariscope are arranged as described above, first with the polarizer and analyzer axes crossed. The quarter-wave plates are also crossed, and their axes are set to produce circularly polarized light. No light transits the system, so this setup is known as the dark-field arrangement. The specimen is loaded, and after waiting the prescribed time, the isochromatics are photographed. Immediately, the analyzer is rotated by 90° so as to produce a light-field arrangement, meaning that the background around the specimen will be bright. The new light-field isochromatic pattern is photographed. The reason that both light-field and dark-field patterns are recorded is that twice as many data are obtained, which aids in interpretation. A typical dark-field pattern is shown in Fig. 5.5.10.

Fig. 5.5.10 Isochromatic pattern indicating stresses in compressed disk and arch.

The dark fringes in the dark-field picture are whole-order isochromatics along which the relative retardation is an integral multiple of the wavelength. The isochromatics are numbered in order. Assigning correct orders is not a trivial task. One good technique is to keep track of them as the load is applied. Along these whole-order fringes, the relationship between stress and fringe order is found by combining equations already presented, to give

σ1 − σ2 = 2τmax = mλ/(Cs d)

where m is the fringe order. The dark fringes in the light-field photograph are half-order isochromatics that lie between the whole orders. The isochromatic fringes give, for every point inside the specimen, only the difference between the principal stresses, which is the diameter of Mohr's circle and equals twice the maximum shear stress. Many methods to obtain the individual stresses have been devised. The problem is simplified along a boundary where there is no load and σ2 = 0.
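Because the stress-optic relation is linear, both the calibration step described above and the conversion of fringe orders to stress reduce to simple arithmetic. The sketch below fits the slope of a fringe-order-versus-stress calibration record and then applies σ1 − σ2 = mλ/(Cs d); all numerical values are invented for the illustration.

```python
import numpy as np

# Calibration record: applied tensile stress (Pa) vs. observed fringe order at time t1
stress = np.array([0.0, 2e6, 4e6, 6e6, 8e6])
order = np.array([0.0, 1.1, 2.2, 3.2, 4.3])

slope = np.polyfit(stress, order, 1)[0]      # fringe order per unit stress = Cs*d/lambda

wavelength = 546e-9                           # assumed monochromatic source, m
thickness = 0.006                             # assumed model thickness d, m
Cs = slope * wavelength / thickness           # stress-optic coefficient, m^2/N

# Convert a measured fringe order in the unknown model to a principal-stress difference
m = 3.5
print(Cs, m * wavelength / (Cs * thickness))  # sigma1 - sigma2 in Pa
```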


For practical purposes, a boundary stress plot can be constructed to show stress concentrations. Normals to the boundary are erected with their lengths proportional to the local σ1, and the tips of the normals are connected with a smooth curve. Often, interpolation between whole and half orders of isochromatics does not give sufficient precision for critical areas. One of several precise methods of determining fractional fringe orders must then be used. Recording and Interpreting Isoclinics For observation of isoclinics, a second model of a material that is optically less sensitive than that used for isochromatic study is often employed so that the isochromatic orders will be small and not mixed with the isoclinics. The model is placed in the polariscope, the quarter-wave plates are removed, and the polarizer and analyzer are set so their axes are horizontal and vertical, as are the axes of the specimen. The 0° isoclinic fringe appears, and it is recorded and labeled. Recording of isoclinics can be accomplished by direct tracing from a view camera back or a video monitor. They can also be photographed if heavy exposures are used to narrow the fringes. The polarizer and analyzer are then rotated by an increment of, say, 10° in the same direction, and the isoclinic for that setting is recorded and labeled. This process is repeated incrementally until 90° is reached, where the isoclinic will duplicate the first one. The direction of rotation of the polarizers is arbitrary, but the direction used must be noted, usually on the isoclinic tracing. The result will be a complete pattern of isoclinic fringes that show the principal directions everywhere in the specimen. Visualizing the stress direction map from the isoclinics is rather difficult, so an overlay orthogonal network of lines that are everywhere tangent to the principal directions, called the stress trajectory map, is often constructed from the isoclinic pattern. Scaling Stresses from Model to Prototype To use stresses obtained from a photoelasticity model, they must be transferred to the prototype component or structure, which likely is of a different material, of a different size, and subjected to different loads. Two aspects of the problem must be given attention: the difference of material properties and the differences in size and load. If the component being studied is elastic, simply connected (no holes), or multiply connected (with holes) with no unequilibrated forces on a single boundary, and if the body forces are negligible, and if no displacement boundary conditions apply, then the stress distribution is completely independent of material properties. If displacement conditions are specified, then the modulus of elasticity is important. For the other cases, only the Poisson ratio is a factor, and the error in ignoring this effect will not exceed about 7 percent. In other words, for most problems, material properties are not a factor in transferring measured stresses from model to prototype. The laws of dimensional similitude, developed from the Buckingham pi theorem, are used to attend to size and load differences. The resulting laws of dimensional similarity are almost intuitively obvious. Geometric similarity requires that the model be only scaled up or down relative to the prototype, meaning its shape and proportions cannot change. A similar rule applies for load similarity; the ratios of like loads between model and prototype must be constant. If displacement boundary conditions are specified, then the absolute load magnitudes must be scaled also, but this case rarely arises.
If the similarity conditions are satisfied, then the stresses in model and prototype satisfy the following relationship, which shows how the stresses are scaled:

σp = σm (Pp/Pm)(am/ap)(dm/dp)


where subscript m means model and subscript p means prototype. Here Pp/Pm is the load scale factor, am/ap is the dimension scale factor, and dm/dp is the thickness scale factor. This relationship applies only to two-dimensional problems, and it allows the thickness to be conveniently scaled differently from the lateral dimensions. Reflection Photoelasticity The strain distribution in nontransparent prototype components and structures can be determined by an extension of the photoelasticity methods described above. The method requires that a thin layer of birefringent material be attached to the surface of the component, using a cement that contains a light-scattering medium


such as aluminum powder. The polariscope is then folded in the middle. The polarized light (circular or linear) falls on the photoelastic coating and undergoes a relative retardation as it traverses the coating twice. The scattered (reflected) light travels through the second quarter-wave plate and the analyzer to the eye or a camera. As the prototype is loaded, isoclinic and isochromatic fringes will be obtained. Interpretation of isochromatic fringes differs from before in that the surface strains, not stresses, are transferred to the birefringent coating by the cement. The material is calibrated in terms of principal strains, and the isochromatics are so interpreted. If stress data are required, then the prototype material properties and the stress-strain relations are used. Reflection photoelasticity is highly developed because of its practicality, especially in the industrial setting. Three-Dimensional Photoelasticity Photoelasticity can be employed to determine stress states in three-dimensional objects by several techniques. All such methods basically slice the three-dimensional model into a stack of two-dimensional cases. Two such approaches are described briefly here. The embedded polariscope technique uses a thin slice cut from the birefringent model. The slice is then glued back into the model, with polarizer and quarter-wave sheets replacing the saw kerfs. The result is a polariscope encased within the object, with what is essentially a two-dimensional model between the polarizers. The model is loaded, and the fringe patterns for the slice are obtained by passing unpolarized light through the model. Studies have shown that cutting the model and gluing it back together with the polarizers in place does not significantly affect the stress state if care is taken to use a matching cement. This method works very well for obtaining the stresses in the plane of a single slice of the object. The stress freezing and slicing method requires that the model be brought to its second-order transition point by careful heating. A small deformation is then applied by loading, and this deformation is maintained while the model is cooled. The model is permanently deformed, and the birefringence, which is related to the strains in this case, is “locked in.” The model can be sliced into two-dimensional cases for study in an ordinary polariscope without affecting the strain state. Again, the principal stress components in the plane of each slice are obtained. Additional observations, as by oblique incidence, are needed to completely characterize the three-dimensional state of stress.

HOLOGRAPHIC INTERFEROMETRY

Holography is a method of storing and regenerating all the amplitude and phase information contained in the light that is scattered from a body that has been illuminated by a laser. Because all the information is reproduced, the reconstructed object beam is, ideally, indistinguishable from the original, so a full three-dimensional image is created. Since it is possible to record the exact shape and position of the body in two different states, it is possible to compare the two records interferometrically to obtain precise measurement of movement or deformation. This measurement technique is called holographic interferometry. The making of a hologram is by oblique interference of two beams. The setup for creating this interference follows the model of the generic interferometer described above. A laser beam is split into two parts. One part, called the reference beam, is expanded and made to fall on a high-resolution photographic plate or film. The other part, called the object beam, is expanded and made to illuminate the object being studied. The object scatters this illumination, so some of it falls on the same photographic film. The reference and object beams interfere at the film to create a very fine complex grating, which is stored in the film emulsion. It can be shown that this grating contains all information about the object shape, size, and position, although there will be no recognizable image stored. Reconstruction of the holographic image is through diffraction by the exposed and developed film. The film is placed back into its original position and illuminated by only the reference beam. Grating diffraction will divide the beam into three parts, one of which, in the usual setup, recreates the object beam. If the eye is placed to receive this beam, a perfect virtual image of the object will appear in its original state.

Holographic interferometry is implemented in three different ways. In the double-exposure method, also called the frozen-fringe technique, the initial state of the specimen is recorded as outlined above, but the film is not developed at this point. The object is then loaded or moved, and a second exposure of the object is recorded on the same film. The film is developed and replaced in the setup so as to reconstruct the images. The two superimposed images will be seen as one, and it will contain interferometric fringes that indicate the deformation or motion of the specimen. Alternative approaches are the real-time (live fringe) technique and time-average technique for vibrating objects, neither of which is treated here. Holographic interferometry is primarily used for nondestructive testing of, e.g., tires and turbine components. In this application, subsurface voids, disbonds, and damage cause local perturbations in the otherwise smooth holographic fringes. SPECKLE INTERFEROMETRY

The technique offers a way to achieve interferometric sensitivities for very rapid, whole-field, noncontacting measurements of displacement and strain. No special specimen preparation is required, and video imaging can be utilized. The basic phenomenon employed in this class of techniques is that an object viewed or photographed when illuminated with laser light will exhibit a dense pattern of bright and dark granular spots. This pattern is called a speckle pattern, and it is caused by local interference of the many waves that are scattered from every point on the illuminated object and that subsequently fall on the receiving sensor array. In order that the speckles have correct properties for measurement of deformation, two beams must be mixed at the sensor. In the first case, a reference beam is used as in holography. In the second case, two object beams are used. To utilize these speckles in electronic or digital speckle interferometry, a first digital image is captured to computer memory. The specimen is then moved or deformed, and a second image is recorded. The second digital image is subtracted from the first, pixel by pixel. This subtraction of two intensity patterns can be done in real time on a desktop computer. The resulting video image will exhibit fringes that indicate the local displacements of the surface of the specimen. If a reference beam is used, if the illuminated object is photographed at near-normal incidence, and if the object beam is nearly parallel with the reference beam at the sensor, then the fringes indicate only the out-of-plane motion of the object. If two object beams are used, then they are set up to illuminate the object at equal and opposite angles of incidence, and viewing is along the normal. In this case, the fringes correspond to in-plane motion or deformation. The measured in-plane displacements are then locally differentiated to obtain strain distribution. The speckle methods just described produce speckle correlation fringes that are most commonly used in qualitative applications such as nondestructive testing. Phase-shifting techniques extend the method for quantitative measurement. A common implementation is to incorporate into one of the optical paths a mirror that can be moved by small amounts so as to change that path length. At least three images are captured for a matching number of different path length differences. Processing yields a whole-field phase change map that indicates both the sign and the magnitude of displacement. The displacement map is locally differentiated in the computer. Thus, whole-field strain measurements with high sensitivity and small gage lengths can be accomplished in only a few seconds.

SPECKLE SHEAROGRAPHY

This approach uses the speckle effect described above in a different way but with much of the same equipment. An extra optical element, such as a Fresnel biprism or a split lens, creates two images of the object, one very slightly displaced from the other, on the camera sensor. A first image pair is captured, the specimen is loaded, and the second image pair is captured and subtracted from the first, typically in real time. The resulting image pair exhibits fringes that indicate the in-plane derivative


of the out-of-plane displacement (i.e., the change in slope) at each point of the object surface. Electronic speckle shearography may be made highly quantitative by use of phase shifting as outlined above, but it is primarily used without phase shifting as a very important nondestructive evaluation tool that is simple, quick, and easy to implement. As with holographic interferometry, the presence of flaws is indicated by local perturbations of the otherwise smooth fringe pattern.

MOIRÉ INTERFEROMETRY

To implement this extension of the moiré method, a grating having spatial frequencies on the order of several hundreds of lines per millimetre is created by oblique interference of two collimated laser beams. This grating is transferred to the surface of the specimen by one of several techniques. The specimen is then replaced in the same setup that was used to make the grating, and the specimen is loaded. Reflective diffraction of the incident beams from the specimen grating creates two slightly diverging beams, where the local divergence angle is related to the local change of spatial frequency of the specimen grating that is caused by strain. The two diffracted beams are collected by a camera lens and brought together to form two coincident images that appear as one. This image will contain interference fringes resulting from local oblique interference. Interpretation of the fringe pattern is the same as for geometric moiré. This important method overcomes the sensitivity limitation of geometric moiré. Its main disadvantage is that significant specimen preparation is required. DIGITAL IMAGE CORRELATION Digital image correlation (DIC) is a noncontacting measurement method that uses digital signal processing algorithms to track the


motion in the image plane of subsets of either a coherent or an incoherent random speckle pattern. Two-dimensional DIC (2D-DIC) uses a single camera to convert images of a nominally planar object that deforms predominantly in-plane into a full field of in-plane displacements (u, v). Three-dimensional DIC (3D-DIC) converts images obtained from cameras located at multiple viewpoints into a full field of the actual, three-dimensional displacements (u, v, w) of all points in a planar or nonplanar object. Both methods require that a calibration process be performed so that the image motions are converted to displacements with optimal accuracy. The process can be performed either before or after completion of the experiment. When photographic-quality lenses are used to acquire images, calibration in 2D-DIC is relatively simple, requiring only an image of a ruler or similar metric object. For 3D-DIC using photographic-quality lenses, pairs of images of a calibration object undergoing a set of 3D rotations are required to obtain the orientations and positions of the cameras.

THERMOELASTICITY

Thermoelasticity is a noncontacting analysis technique that utilizes the minute local changes of temperature that are caused by stress. When a body is subjected to tensile stress, the temperature of the body decreases slightly; conversely, when the stress is compressive, a slight increase in temperature occurs. In the elastic regime, this temperature change is directly proportional to the change in the sum of principal stresses on the component surface. The only surface preparation required is to ensure uniform emissivity of the component surface at the infrared wavelengths used by the detector, and this requirement can usually be met by the application of a thin layer of matte black paint. Since the detectors are measuring photon emission from a surface, the technique can also be used obliquely, with satisfactory results obtainable at up to 60° from the normal to the surface.

5.6 MECHANICS OF COMPOSITE MATERIALS by Michael W. Hyer and Scott W. Case REFERENCES: General references: “The Composite Materials Handbook MIL-17,” American Society for Testing and Materials, West Conshohocken, Penn. Hyer, “Stress Analysis of Fiber-Reinforced Composite Materials,” WCB/McGraw-Hill, New York, 1998. Jones, “Mechanics of Composite Materials,” 2d ed., Taylor and Francis, Philadelphia, 1999. Herakovich, “Mechanics of Fibrous Composites,” Wiley, New York, 1998. Daniel and Ishai, “Engineering Mechanics of Composite Materials,” Oxford University Press, New York, 1994. Gibson, “Principles of Composite Material Mechanics,” McGraw-Hill, New York, 1994. Swanson, “Introduction to Design and Analysis with Advanced Composite Materials,” Prentice-Hall, Upper Saddle River, N.J., 1997. Vinson and Sierakowski, “The Behavior of Structures Composed of Composite Materials,” 2d ed., Martinus Nijhoff Publishers, Boston, 2002. Wolff, “Introduction to the Dimensional Stability of Composite Materials,” DEStech Publications, Lancaster, Penn., 2004. References dealing primarily with damage tolerance and failure: Hinton, Soden, and Kaddour, “Failure Criteria in Fibre-Reinforced-Polymer Composites,” Elsevier Publishers, 2004. Reifsnider and Case, “Damage Tolerance and Durability of Material Systems,” Wiley, New York, 2002.

This section deals with some of the basic issues that must be addressed in applying the classic methods of mechanics of materials to predict the mechanical behavior of fiber-reinforced composite materials. The section deals exclusively with the mechanical behavior of materials that consist of stiff and strong fibers, such as glass, carbon, or aramid, embedded in a softer material, such as a thermosetting or thermoplastic polymer, known as the matrix material, or simply, the matrix. The role of the fiber is to carry the load, while the role of the matrix is to keep the fibers aligned, transfer the load from fiber to fiber, and protect the fibers from direct mechanical contact and exposure to

environmental factors. Other fibers, such as boron, have been employed, and composite materials that use metal, such as aluminum or titanium, for the matrix have been developed. Composite materials that utilize boron fibers or a metal matrix are much less common and are quite expensive. The mechanical behavior of metal matrix composite materials can differ significantly from that of polymer matrix composite materials in that metals yield before failing, whereas the polymers used in composite materials are brittle by comparison and generally fail by cracking. This difference in the two matrix materials can lead to different failure criteria and therefore different analysis techniques to predict whether a particular structural component will fail or not. Here emphasis will be on polymer matrix composite materials. However, some of the assumptions and calculations are applicable to metal matrix composite materials if the stresses in the matrix are below the yield level. The application of the classical methods of the mechanics of materials to the analysis and design of structural components such as beams, plates, curved panels, and cylinders fabricated from fiber-reinforced composite materials requires considerably more steps than the analysis and design of similar components made of conventional materials such as aluminum or steel. The primary reasons for this are (1) the layered nature of components fabricated of fiber-reinforced composite materials and (2) the fact that fibers in the layers result in stiffness and strength properties that are directionally dependent, not only for that layer, but also for the overall structural component. When material behavior is not directionally dependent, the material behavior is said to be isotropic. Except for slight rolling or other effects due to fabrication


or consolidation, which are generally ignored, conventional materials such as aluminum or steel exhibit isotropic behavior. When material properties are directionally dependent, material behavior is said to be anisotropic, or nonisotropic. There are special classes of anisotropic material behavior, and one which will be defined shortly is orthotropic material behavior. To reach a point where the only two differences between a component fabricated from a fiber-reinforced composite material and one fabricated from a conventional material are the layered nature and anisotropic behavior, several steps and assumptions have to be considered. Specifically, the fiber and matrix materials each have their own stiffness and strength properties. When these two constituents are combined, the resulting mixture, or composite, has its own stiffness and strength properties, called composite properties. Composite properties depend not only on the properties of each constituent, but also on the quantity of each constituent in the composite material. A key measure of the quantity of a constituent is the volume fraction of that constituent in a given volume of composite material. There are models for predicting the properties of a composite material based on the properties and volume fraction of each constituent, in particular, the volume fraction of fibers and the volume fraction of matrix material. These models are referred to as micromechanical models. Several micromechanical models will be introduced here for computing such properties as overall stiffness or the overall thermal expansion properties of a layer of composite material. These models are based on assumptions regarding the mechanical behavior of the individual fibers, the matrix surrounding them, and importantly, assumptions regarding the interaction between the fibers and matrix. The result is a collection of models each with its own assumptions and each representing the combined stiffness and thermal expansion effects of all the fibers and the accompanying matrix material. A layer of composite material is assumed to exhibit these combined effects, and it is assumed the combined effects are uniformly distributed throughout the volume of the layer. The assumption of having a uniform distribution of composite properties, despite the fibers and matrix having individual properties, is the concept of homogenization, or smearing, of constituent properties. Once the smeared composite properties are predicted, the individual fibers and the matrix material lose their identities and a layer is treated as a single material with a single set of properties. Each layer may have different properties, because one layer may use glass fibers and another layer may use carbon fibers, or there may be a mixture of glass and carbon fibers within one layer; but within a layer, the properties are assumed to be uniform. Homogenization is an important concept. If this concept were not accepted, it would be impossible to study the mechanical behavior of fiber-reinforced composite materials, as it would be an insurmountable task to keep track of the behavior of each fiber and the matrix material surrounding each fiber. After the effects of the fibers and matrix have been homogenized, before design and analysis can proceed, the concept of homogenization must be utilized a second time to represent the combined effects of all the layers on the overall behavior of a structural component. 
This second step is necessary because a component could be constructed of many, many layers of material. The assemblage of all the layers is referred to as a laminate. It becomes a large chore to keep track of the behavior of each layer in a laminate when interest may center on the overall behavior of the laminated component, e.g., such as its lowest natural frequency or its buckling load. The steps for homogenization at this level also depend on assumptions regarding the behavior of the individual layers and assumptions regarding how each layer interacts with the surrounding layers. The study of the individual layers in a laminate and how they interact is often referred to as macromechanics. The behavior of a laminate when loading, support, and other conditions are considered is often referred to as composite structural mechanics. Many of the tools of mechanics apply to composite materials and structures because the tools are based on principles that do not depend on material behavior. Ultimately, however, material properties must be defined, and it is at this step that the design and analysis of composite structures and of isotropic structures diverge.

MICROMECHANICS OF A LAYER

The practice of using fiber and matrix properties to estimate the homogenized response of a composite material has received lots of attention, and some treatment is given in almost all introductory composite materials texts. Practical reasons for such an investigation include the ability to perform tradeoff studies, such as varying the amount and type of fiber used, and the ability to estimate, during the initial design process, quantities that are difficult to determine experimentally. Three principal approaches have been employed in developing these models for homogenization: (1) mechanics of materials techniques, (2) elasticity solutions, and (3) empirical or semiempirical techniques. In the mechanics of materials approaches, simplified geometries and simplified material behavior models are usually employed. In general, these models do not capture the details of the stress and strain distributions within the fiber and matrix, and as a result, details of the actual fiber arrangement within the composite are unimportant. In the elasticity solutions, an attempt is made to determine the actual stresses and strains in the fiber and matrix materials, and as a result, the fiber packing arrangement within the composite becomes important. In such a case, the actual fiber packing is often represented by idealized packings, as shown in Fig. 5.6.1.


Fig. 5.6.1 Photomicrograph of actual composite cross section (left), and idealized representations of fiber packing: hexagonal (center) and square (right).

It is important to keep in mind, however, that these models are estimates of the actual material behavior, and that there is no substitute for direct experimental measurement of the desired properties. As a result, the models presented in this section are selected for the balance between ease of use and accuracy. Before we discuss the micromechanics models in detail, it is necessary to begin with a description of the thermal-mechanical response of a single layer of unidirectional composite material. In discussing this response, it is convenient to use an orthogonal coordinate system that has one axis aligned with the direction of the fiber reinforcement, a second axis that lies in the plane of the layer, and a third axis through the thickness of the layer, as illustrated in Fig. 5.6.2. Such a coordinate system is referred to as the principal material coordinate system (or often the 1-2-3 coordinate system). In this case, the 1 direction is the fiber direction, while the 2 and 3 directions are transverse directions, often


Fig. 5.6.2 Layer of composite material illustrating principal material coordinate system.


referred to as matrix directions. At this point, it is clear that the material properties will not be the same in all directions. For example, it is reasonable to expect the composite to be stronger and stiffer in the 1 direction than in either the 2 or 3 directions. It might also be expected that the 2 and 3 directions have similar, but not necessarily identical, properties because they are both directions perpendicular to the fiber direction. Accordingly, the response of an element of material at a point within the


Fig. 5.6.3 Illustration of stresses acting on the small element of composite material from Fig. 5.6.2.

layer that has different properties in the different directions is to be described. Such an element can be acted on by six stress components, as shown in Fig. 5.6.3. In Fig. 5.6.3, σ1, σ2, and σ3 are the normal stresses in the principal material directions; τ12 is the in-plane shear stress; and τ13 and τ23 are through-thickness shear stresses. The corresponding normal strains are given as ε1, ε2, and ε3; the engineering shear strains are given as γ12, γ13, and γ23. The strains can then be related to the stresses as

$$
\begin{Bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \gamma_{23} \\ \gamma_{13} \\ \gamma_{12} \end{Bmatrix}
=
\begin{bmatrix}
1/E_1 & -\nu_{21}/E_2 & -\nu_{31}/E_3 & 0 & 0 & 0 \\
-\nu_{12}/E_1 & 1/E_2 & -\nu_{32}/E_3 & 0 & 0 & 0 \\
-\nu_{13}/E_1 & -\nu_{23}/E_2 & 1/E_3 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/G_{23} & 0 & 0 \\
0 & 0 & 0 & 0 & 1/G_{13} & 0 \\
0 & 0 & 0 & 0 & 0 & 1/G_{12}
\end{bmatrix}
\begin{Bmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \tau_{23} \\ \tau_{13} \\ \tau_{12} \end{Bmatrix}
\qquad (5.6.1)
$$

In Eq. (5.6.1), E1 is the extensional modulus in the fiber direction, E2 is the extensional modulus in the transverse 2 direction, and E3 is the extensional modulus in the transverse 3 direction of the composite material. Poisson's ratios νij are defined so that −(νij/Ei)σi is the strain in the j direction due to a stress applied in the i direction. The three shear moduli are G23, G13, and G12. The extensional moduli, the shear moduli, and Poisson's ratios are collectively referred to as engineering properties. The square 6-by-6 matrix in Eq. (5.6.1) is referred to as the compliance matrix and is normally denoted by S. The stress-strain relations, in terms of S, are then given as

$$
\begin{Bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \gamma_{23} \\ \gamma_{13} \\ \gamma_{12} \end{Bmatrix}
=
\begin{bmatrix}
S_{11} & S_{12} & S_{13} & 0 & 0 & 0 \\
S_{21} & S_{22} & S_{23} & 0 & 0 & 0 \\
S_{31} & S_{32} & S_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & S_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & S_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & S_{66}
\end{bmatrix}
\begin{Bmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \tau_{23} \\ \tau_{13} \\ \tau_{12} \end{Bmatrix}
\qquad (5.6.2)
$$

A direct comparison of Eqs. (5.6.1) and (5.6.2) shows that the compliances may be calculated in terms of the engineering properties as

$$
\begin{aligned}
S_{11} &= 1/E_1 & S_{12} &= -\nu_{21}/E_2 & S_{13} &= -\nu_{31}/E_3 \\
S_{21} &= -\nu_{12}/E_1 & S_{22} &= 1/E_2 & S_{23} &= -\nu_{32}/E_3 \\
S_{31} &= -\nu_{13}/E_1 & S_{32} &= -\nu_{23}/E_2 & S_{33} &= 1/E_3 \\
S_{44} &= 1/G_{23} & S_{55} &= 1/G_{13} & S_{66} &= 1/G_{12}
\end{aligned}
\qquad (5.6.3)
$$

The inverse of the compliance matrix is called the stiffness matrix and is normally denoted by C. Once this inverse has been defined, the stress-strain relations can be written as

$$
\begin{Bmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \tau_{23} \\ \tau_{13} \\ \tau_{12} \end{Bmatrix}
=
\begin{bmatrix}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0 \\
C_{21} & C_{22} & C_{23} & 0 & 0 & 0 \\
C_{31} & C_{32} & C_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & C_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & C_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & C_{66}
\end{bmatrix}
\begin{Bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \gamma_{23} \\ \gamma_{13} \\ \gamma_{12} \end{Bmatrix}
\qquad (5.6.4)
$$

The forms of Eqs. (5.6.2) and (5.6.4) suggest that 12 compliance or 12 stiffness components are required to describe the stress-strain response of an element of composite material. However, it can be shown that the compliance matrix is symmetric, i.e., Sij = Sji. For that to be true, from Eq. (5.6.3) the following relations must hold:

$$
\frac{\nu_{12}}{E_1} = \frac{\nu_{21}}{E_2} \qquad
\frac{\nu_{13}}{E_1} = \frac{\nu_{31}}{E_3} \qquad
\frac{\nu_{23}}{E_2} = \frac{\nu_{32}}{E_3}
\qquad (5.6.5)
$$

The relations in Eq. (5.6.5) are referred to as the reciprocity relations. Thus, only nine independent engineering properties are needed to describe the elastic response of a layer of composite material. In addition to the strains due to the stresses, expansional strains due to temperature changes and moisture absorption are often present. For such a case, the strains are related to the stresses by a modification of Eq. (5.6.1) as

$$
\begin{Bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \gamma_{23} \\ \gamma_{13} \\ \gamma_{12} \end{Bmatrix}
=
\begin{bmatrix}
1/E_1 & -\nu_{21}/E_2 & -\nu_{31}/E_3 & 0 & 0 & 0 \\
-\nu_{12}/E_1 & 1/E_2 & -\nu_{32}/E_3 & 0 & 0 & 0 \\
-\nu_{13}/E_1 & -\nu_{23}/E_2 & 1/E_3 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/G_{23} & 0 & 0 \\
0 & 0 & 0 & 0 & 1/G_{13} & 0 \\
0 & 0 & 0 & 0 & 0 & 1/G_{12}
\end{bmatrix}
\begin{Bmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \tau_{23} \\ \tau_{13} \\ \tau_{12} \end{Bmatrix}
+
\begin{Bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ 0 \\ 0 \\ 0 \end{Bmatrix} T
+
\begin{Bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ 0 \\ 0 \\ 0 \end{Bmatrix} M
\qquad (5.6.6)
$$

In the above equation, α1, α2, and α3 are coefficients of thermal expansion; β1, β2, and β3 are coefficients of moisture expansion; T is the change in temperature from a reference temperature; and M is the change in moisture content within the material from a reference moisture content. (This reference moisture content is normally taken as zero.) Note that there are no thermally induced or moisture-induced shear strains in the principal material coordinate system. The form of the stress-strain relations in Eq. (5.6.6), and Eqs. (5.6.1), (5.6.2), and (5.6.4), defines orthotropic material behavior. Such behavior is characterized by (1) a decoupling of shear and normal responses, resulting in many of the off-diagonal terms being zero, and (2) different material properties in the three mutually perpendicular directions. Specifically, the 0s represent the fact that with orthotropic material behavior, normal strains and normal stresses are related; each particular shear strain is related to the corresponding shear stress, but there are no relations between normal strains and shear stresses, or between shear strains and normal stresses. Also, as noted, there are no thermal- or moisture-induced shear strains. The form of the stress-strain relations for isotropic materials is the same as the form of Eq. (5.6.6), the difference being that the material properties are the same in all directions, and only two engineering properties are needed (e.g., the extensional modulus E and Poisson's ratio ν) to fully describe the response. That is, for the engineering properties E1 = E2 = E3 = E, ν23 = ν13 = ν12 = ν, and G23 = G13 = G12 = G = E/[2(1 + ν)], while for the expansional properties α1 = α2 = α3 = α and β1 = β2 = β3 = β. Equation (5.6.6) represents the general response of a layer of composite material subjected to a three-dimensional stress state. In many practical situations involving the use of composite materials, it may be assumed that the composite is in a state of plane stress, a state in which three of the stress components are so much smaller than the other three that they may be assumed to be equal to zero for subsequent stress analysis. Some situations in which the plane-stress assumption may be employed arise when one is considering beams, plates, and cylinders in which one dimension is at least an order of magnitude greater than the other dimensions. As an example, consider a plate in which the thickness direction, which is aligned with the 3 direction, is much less than the in-plane dimensions. The out-of-plane stress components σ3, τ23, and τ13 will then be much smaller than the in-plane stress components σ1, σ2, and τ12. As a result, Eq. (5.6.6) can be simplified considerably. Before we examine the simplified stress-strain relations for the case of a plane stress state, a number of cautions are in order. First, the plane stress assumption can lead to inaccuracies. For example, delamination of laminates often initiates at free edges due to the presence of out-of-plane stresses, which are exactly the stress components ignored in a plane stress analysis. Second, locations of discontinuities in structures, such as at bonded joints, stiffeners, or ply drops, result in three-dimensional stress states. Finally, in many cases the through-thickness stresses are quite small in comparison to the in-plane stresses (particularly σ1), but may still be large enough in comparison to the corresponding strength values that failure may occur.
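Because the nine engineering constants completely define the orthotropic compliance matrix, the relations in Eqs. (5.6.1) to (5.6.4) translate directly into a short routine. The sketch below builds S from assumed engineering constants (the numerical values are invented, chosen only to be roughly representative of a carbon/epoxy layer), checks the reciprocity relations of Eq. (5.6.5), and inverts S to obtain the stiffness matrix C.

```python
import numpy as np

# Assumed engineering constants for one layer (illustrative values, moduli in GPa)
E1, E2, E3 = 155.0, 12.1, 12.1
nu23, nu13, nu12 = 0.458, 0.248, 0.248
G23, G13, G12 = 3.20, 4.40, 4.40

# Reciprocity relations, Eq. (5.6.5), give the remaining Poisson ratios
nu21 = nu12 * E2 / E1
nu31 = nu13 * E3 / E1
nu32 = nu23 * E3 / E2

# Compliance matrix S, Eqs. (5.6.1)-(5.6.3)
S = np.array([
    [ 1/E1,     -nu21/E2, -nu31/E3, 0,     0,     0    ],
    [-nu12/E1,   1/E2,    -nu32/E3, 0,     0,     0    ],
    [-nu13/E1,  -nu23/E2,  1/E3,    0,     0,     0    ],
    [ 0,         0,        0,       1/G23, 0,     0    ],
    [ 0,         0,        0,       0,     1/G13, 0    ],
    [ 0,         0,        0,       0,     0,     1/G12],
])

assert np.allclose(S, S.T)        # symmetry implied by the reciprocity relations

C = np.linalg.inv(S)              # stiffness matrix, Eq. (5.6.4)
print(C[:3, :3])                  # upper 3-by-3 block of C, GPa
```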
To see why the plane stress assumption is important, the effects of this assumption on the stress-strain relations given in Eq. (5.6.6) can be examined. For the case of plane stress, in which σ3, τ23, and τ13 are taken to be identically zero, the stress-strain relationship reduces to

$$\begin{Bmatrix}\varepsilon_1\\ \varepsilon_2\\ \gamma_{12}\end{Bmatrix}=
\begin{bmatrix}1/E_1 & -\nu_{12}/E_1 & 0\\ -\nu_{12}/E_1 & 1/E_2 & 0\\ 0 & 0 & 1/G_{12}\end{bmatrix}
\begin{Bmatrix}\sigma_1\\ \sigma_2\\ \tau_{12}\end{Bmatrix}
+\begin{Bmatrix}\alpha_1\\ \alpha_2\\ 0\end{Bmatrix}\Delta T
+\begin{Bmatrix}\beta_1\\ \beta_2\\ 0\end{Bmatrix}\Delta M
\qquad (5.6.7)$$

The 3-by-3 matrix is referred to as the reduced compliance matrix, although the expressions for the elements of the reduced compliance matrix in terms of the engineering properties are identical to those in the full 6-by-6 compliance matrix. In addition, it is important to remember that while the normal stress σ3 is equal to zero for the plane stress condition, the corresponding normal strain ε3 is nonzero and is given by

$$\varepsilon_3=-\frac{\nu_{13}}{E_1}\sigma_1-\frac{\nu_{23}}{E_2}\sigma_2+\alpha_3\,\Delta T+\beta_3\,\Delta M \qquad (5.6.8)$$

Thus, to analyze the plane stress response of a layer of composite material to mechanical, thermal, and moisture loads, it is necessary to determine four engineering properties as well as the in-plane coefficients of thermal and moisture expansion. To determine these, the approach taken is to make use of models based primarily on the mechanics of materials, and to use other models when the mechanics of materials models do not yield results of sufficient accuracy (or when the more accurate models result in expressions nearly as simple as the mechanics of materials results). The basic starting point for the mechanics of materials model is the representative volume element shown in Fig. 5.6.4. This volume element consists of enough fiber and matrix that the volume fraction of fiber within the volume is the same as the average value in the composite. A model for the fiber-direction extensional modulus E1 can be developed by assuming that when a load is applied in the 1 direction, the strains in the fiber and matrix are the same. This isostrain assumption leads to the result

$$E_1=V_f E_1^f+(1-V_f)E_1^m \qquad (5.6.9)$$

where Vf is the fiber volume fraction, E1f is the extensional modulus of the fiber along its length (the 1 direction), and E1m is the matrix extensional modulus in the 1 direction. Equation (5.6.9) is often referred to as the rule-of-mixtures (ROM) model for E1 because the result is simply the fractional amount of fiber multiplied by the fiber extensional modulus added to the fractional amount of matrix multiplied by its extensional modulus. Likewise, if an isostrain model is used to estimate Poisson's ratio of the composite material, the resulting expression is

$$\nu_{12}=V_f \nu_{12}^f+(1-V_f)\nu_{12}^m \qquad (5.6.10)$$

where ν12f is the fiber Poisson's ratio and ν12m is Poisson's ratio of the matrix. Despite representing a round fiber with a square cross section, numerical results to be presented later in this section will justify recommending the use of rule-of-mixtures estimates for E1 and ν12. An inspection of the representative volume shown in Fig. 5.6.4 reveals that the isostrain assumption is not appropriate for estimating the extensional modulus E2. Rather, a first estimate would be that the

Fig. 5.6.4 Representative volume element for micromechanics analysis: a fiber of width wf within a matrix element of total width w, so that Vf = volume of fiber/total volume = area of fiber/total area = wf /w.


stress σ2 is the same in the fiber and the matrix, and is uniform, when a transverse load is applied. This isostress assumption results in an estimate for E2 of the composite given by

$$\frac{1}{E_2}=\frac{V_f}{E_2^f}+\frac{1-V_f}{E_2^m} \qquad (5.6.11)$$

where E2f and E2m are the fiber and matrix transverse extensional modulus values, respectively. While this result based on the isostress assumption is often presented in introductory composite materials textbooks, it produces relatively inaccurate results, as will be shown later. An improved estimate may be developed by considering the representative volume element shown in Fig. 5.6.5. The analysis of this unit element is conducted by assuming that the central fiber/matrix element has an effective extensional modulus that may be calculated by using the rule of mixtures. From there, an analysis of the deformation of the total element leads to the result

$$\frac{1}{E_2}=\frac{1-\sqrt{V_f}}{E_2^m}+\frac{\sqrt{V_f}}{E_2^f\sqrt{V_f}+E_2^m\left(1-\sqrt{V_f}\right)} \qquad (5.6.12)$$


Fig. 5.6.5 Representative volume element for improved mechanics of materials model: a central fiber/matrix strip between matrix strips, loaded by the transverse stress σ.

This improved mechanics of materials model yields results that are in much better agreement with exact elasticity analyses than those in Eq. (5.6.11), and it is the recommended model for calculating E2. For the case of shear loading, the simple mechanics of materials analysis with an isostress assumption gives

$$\frac{1}{G_{12}}=\frac{V_f}{G_{12}^f}+\frac{1-V_f}{G_{12}^m} \qquad (5.6.13)$$

As with the composite extensional modulus in the 2 direction, often referred to as the transverse extensional modulus, this isostress model gives poor estimates of the composite shear modulus G12. It is possible to develop an expression for the shear modulus analogous to Eq. (5.6.12). However, there is an approximate elasticity solution for a simplified geometry (concentric cylinders of fiber and matrix) available in closed form. The resulting expression for the composite shear modulus is

$$G_{12}=G_{12}^m\,\frac{\left(G_{12}^m+G_{12}^f\right)-V_f\left(G_{12}^m-G_{12}^f\right)}{\left(G_{12}^m+G_{12}^f\right)+V_f\left(G_{12}^m-G_{12}^f\right)} \qquad (5.6.14)$$

This model gives results that are extremely accurate in comparison to exact results for hexagonally and square-packed arrays of fibers. In addition to these models that describe the response of a layer of composite materials to mechanical loading conditions, it is often useful to have micromechanics models for the expansional coefficients [such as the thermal expansion coefficients and moisture expansion coefficients that appear in Eq. (5.6.7)]. By using the mechanics of materials representative volume element of Fig. 5.6.4, it is possible to develop an expression for the coefficient of thermal expansion in the 1 direction α1 as

$$\alpha_1=\frac{\alpha_1^f E_1^f V_f+\alpha_1^m E_1^m (1-V_f)}{E_1^f V_f+E_1^m (1-V_f)} \qquad (5.6.15)$$

The expression for the coefficient of moisture expansion β1 is developed by analogy. The coefficient of thermal expansion in the 2 direction α2 is given by

$$\alpha_2=\left[\alpha_2^f-\left(\frac{E_1^m}{E_1}\right)\nu_{12}^f\left(\alpha_1^m-\alpha_1^f\right)(1-V_f)\right]V_f+\left[\alpha_2^m+\left(\frac{E_1^f}{E_1}\right)\nu_{12}^m\left(\alpha_1^m-\alpha_1^f\right)V_f\right](1-V_f) \qquad (5.6.16)$$

where E1 is the rule-of-mixtures expression for the fiber-direction extensional modulus given by Eq. (5.6.9). Once again, the coefficient of moisture expansion in the 2 direction may be developed by analogy.

In addition to the mechanics of materials, improved mechanics of materials, and elasticity models described above, semiempirical models have been developed for calculating engineering properties of a layer based upon fiber and matrix properties. The most widely used are the Halpin-Tsai equations

$$M=M^m\,\frac{1+\xi\eta V_f}{1-\eta V_f} \qquad (5.6.17)$$

in which

$$\eta=\frac{M^f/M^m-1}{M^f/M^m+\xi} \qquad (5.6.18)$$

where M = composite modulus E2 or G12, Mf = corresponding fiber modulus E2f or G12f, Mm = corresponding matrix modulus Em or Gm, and ξ is a parameter having admissible values 0 ≤ ξ < ∞, which may be different for E2 and G12. A simplified form of Eqs. (5.6.17) and (5.6.18), obtained by substitution of Eq. (5.6.18) into Eq. (5.6.17), is given by

$$M=M^m\,\frac{\left(M^f+\xi M^m\right)-\xi V_f\left(M^m-M^f\right)}{\left(M^f+\xi M^m\right)+V_f\left(M^m-M^f\right)} \qquad (5.6.19)$$

A value of ξ = 1 gives excellent results for G12, an identical result to that given by Eq. (5.6.14). A value of ξ = 2 has been found to give good results for E2. An assessment of the accuracy of each of the models presented in this section may be obtained by comparing the results from these simple models to results from exact elasticity solutions. Such a comparison requires properties for the fiber and matrix materials. Representative properties for glass fibers, graphite fibers, and polymer matrix materials are taken as the values given in Table 5.6.1. Tables 5.6.2 and 5.6.3 provide a comparison of composite properties as a function of fiber volume fraction using the various relations discussed.

Table 5.6.1 Representative Properties for Graphite Fiber, Glass Fiber, and Polymeric Matrix Materials

  Property       Graphite fiber    Glass fiber*    Polymer matrix*
  E1 (GPa)       233               73.1            4.62
  E2 (GPa)       23.1              73.1            4.62
  G12 (GPa)      8.96              30.0            1.699
  ν12            0.200             0.22            0.36
  α1 (/°C)       −0.540 × 10⁻⁶     5.04 × 10⁻⁶     41.4 × 10⁻⁶
  α2 (/°C)       10.10 × 10⁻⁶      5.04 × 10⁻⁶     41.4 × 10⁻⁶

  * Assumed to be isotropic.
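Because the micromechanics expressions above are simple algebraic formulas, they are easy to program. The following Python sketch is an illustrative aid, not part of the handbook; the function and variable names are this sketch's own. It evaluates the recommended models, Eqs. (5.6.9), (5.6.10), (5.6.12), (5.6.14), (5.6.15), and (5.6.16), for the glass-polymer system of Table 5.6.1 and, at Vf = 0.60, should reproduce (to within rounding) the corresponding column of Table 5.6.3.

```python
from math import sqrt

# Representative properties from Table 5.6.1 (glass fiber and polymer matrix, SI units)
E1f, E2f, G12f, nu12f = 73.1e9, 73.1e9, 30.0e9, 0.22      # fiber (isotropic)
a1f, a2f = 5.04e-6, 5.04e-6                                # fiber thermal expansion, /degC
E1m, E2m, G12m, nu12m = 4.62e9, 4.62e9, 1.699e9, 0.36      # matrix (isotropic)
a1m, a2m = 41.4e-6, 41.4e-6                                # matrix thermal expansion, /degC

def layer_properties(Vf):
    """Recommended micromechanics estimates for a unidirectional layer."""
    E1 = Vf*E1f + (1 - Vf)*E1m                             # Eq. (5.6.9), rule of mixtures
    nu12 = Vf*nu12f + (1 - Vf)*nu12m                       # Eq. (5.6.10)
    s = sqrt(Vf)                                           # improved model, Eq. (5.6.12)
    E2 = 1.0 / ((1 - s)/E2m + s/(E2f*s + E2m*(1 - s)))
    G12 = G12m * ((G12m + G12f) - Vf*(G12m - G12f)) / \
                 ((G12m + G12f) + Vf*(G12m - G12f))        # Eq. (5.6.14)
    a1 = (a1f*E1f*Vf + a1m*E1m*(1 - Vf)) / (E1f*Vf + E1m*(1 - Vf))   # Eq. (5.6.15)
    a2 = ((a2f - (E1m/E1)*nu12f*(a1m - a1f)*(1 - Vf))*Vf
          + (a2m + (E1f/E1)*nu12m*(a1m - a1f)*Vf)*(1 - Vf))          # Eq. (5.6.16)
    return E1, E2, G12, nu12, a1, a2

E1, E2, G12, nu12, a1, a2 = layer_properties(0.60)
print(E1/1e9, E2/1e9, G12/1e9, nu12, a1, a2)
# Expected, per Table 5.6.3: about 45.7 GPa, 16.07 GPa, 5.62 GPa, 0.276, 6.51e-6/degC, 24.4e-6/degC
```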

Table 5.6.2 Micromechanics Model Results for Graphite-Polymer Composite Material

                                            Fiber volume fraction
  Property                        0.0     0.1     0.2     0.3     0.4     0.5     0.6           0.7
  E1 (GPa), Eq. (5.6.9)           4.62    27.5    50.3    73.1    96.0    118.8   141.6         164.5
  E1 (GPa), elasticity            4.62    27.5    50.3    73.2    96.0    118.9   141.7         164.5
  ν12, Eq. (5.6.10)               0.360   0.344   0.328   0.312   0.296   0.280   0.264         0.248
  ν12, elasticity                 0.360   0.343   0.327   0.311   0.297   0.282   0.269         0.256
  E2 (GPa), Eq. (5.6.11)          4.62    5.02    5.50    6.08    6.79    7.70    8.88          10.50
  E2 (GPa), Eq. (5.6.12)          4.62    5.61    6.48    7.40    8.45    9.67    11.15         12.98
  E2 (GPa), Eq. (5.6.19)          4.62    5.46    6.41    7.49    8.73    10.16   11.85         13.86
  E2 (GPa), elasticity            4.62    5.78    6.54    7.40    8.43    9.70    11.29         13.27
  G12 (GPa), Eq. (5.6.13)         1.699   1.848   2.03    2.24    2.51    2.86    3.31          3.93
  G12 (GPa), Eq. (5.6.14)         1.699   1.947   2.23    2.57    2.97    3.45    4.05          4.80
  G12 (GPa), elasticity           1.699   1.947   2.23    2.57    2.97    3.46    4.05          4.82
  α1 (/°C) × 10⁶, Eq. (5.6.15)    41.4    5.81    2.54    1.315   0.671   0.275   7.17 × 10⁻³   −0.1866
  α1 (/°C) × 10⁶, elasticity      41.4    5.96    2.68    1.440   0.779   0.365   7.87 × 10⁻²   −0.1323
  α2 (/°C) × 10⁶, Eq. (5.6.16)    41.4    49.7    46.2    42.0    37.6    33.1    28.5          23.9
  α2 (/°C) × 10⁶, elasticity      41.4    48.9    44.9    40.3    35.7    31.2    26.7          22.4

Table 5.6.3 Micromechanics Model Results for Glass-Polymer Composite Material

                                            Fiber volume fraction
  Property                        0.0     0.1     0.2     0.3     0.4     0.5     0.6     0.7
  E1 (GPa), Eq. (5.6.9)           4.62    11.47   18.32   25.2    32.0    38.9    45.7    52.6
  E1 (GPa), elasticity            4.62    11.48   18.33   25.2    32.0    38.9    45.7    52.6
  ν12, Eq. (5.6.10)               0.360   0.346   0.332   0.318   0.304   0.290   0.276   0.262
  ν12, elasticity                 0.360   0.343   0.327   0.311   0.297   0.282   0.269   0.256
  E2 (GPa), Eq. (5.6.11)          4.62    5.10    5.69    6.43    7.39    8.69    10.55   13.42
  E2 (GPa), Eq. (5.6.12)          4.62    6.25    7.56    9.02    10.78   13.03   16.07   20.5
  E2 (GPa), Eq. (5.6.19)          4.62    5.88    7.39    9.23    11.53   14.49   18.42   23.9
  E2 (GPa), elasticity            4.62    5.77    6.84    8.14    9.82    12.16   15.56   20.7
  G12 (GPa), Eq. (5.6.13)         1.699   1.875   2.09    2.37    2.73    3.21    3.91    5.00
  G12 (GPa), Eq. (5.6.14)         1.699   2.03    2.44    2.94    3.59    4.44    5.62    7.36
  G12 (GPa), elasticity           1.699   2.03    2.44    2.94    3.58    4.44    5.64    7.47
  α1 (/°C) × 10⁶, Eq. (5.6.15)    41.4    18.22   12.38   9.71    8.19    7.20    6.51    6.00
  α1 (/°C) × 10⁶, elasticity      41.4    18.56   12.75   10.06   8.50    7.46    6.72    6.16
  α2 (/°C) × 10⁶, Eq. (5.6.16)    41.4    45.0    42.2    38.2    33.8    29.1    24.4    19.62
  α2 (/°C) × 10⁶, elasticity      41.4    43.9    40.3    35.8    31.1    26.4    21.8    17.38

EXAMPLE 1. Evaluate the engineering and thermal expansion properties of a layer of glass-fiber-reinforced polymer having a fiber volume fraction of 60 percent.

APPROACH. Make use of the properties given in Table 5.6.1 along with the micromechanics models described above.

SOLUTION. The fiber properties are obtained from Table 5.6.1 as

    E1f = E2f = 73.1 GPa    G12f = 30.0 GPa    ν12f = 0.22    α1f = α2f = 5.04 × 10⁻⁶/°C

The matrix properties are obtained from Table 5.6.1 as

    E1m = E2m = 4.62 GPa    G12m = 1.699 GPa    ν12m = 0.36    α1m = α2m = 41.4 × 10⁻⁶/°C

The fiber volume fraction is given as Vf = 0.60. For E1, using Eq. (5.6.9),

    E1 = Vf E1f + (1 − Vf)E1m = (0.60)(73.1) + (1 − 0.60)(4.62) = 45.7 GPa

This value compares favorably with the exact value of 45.7 GPa found by using an elasticity solution (see Table 5.6.3). For ν12, using Eq. (5.6.10),

    ν12 = Vf ν12f + (1 − Vf)ν12m = (0.60)(0.22) + (1 − 0.60)(0.36) = 0.276

This value compares favorably with the exact value of 0.269 found by using an elasticity solution (see Table 5.6.3). For E2, using Eq. (5.6.11),

    1/E2 = Vf /E2f + (1 − Vf)/E2m = 0.60/73.1 + (1 − 0.60)/4.62 = 0.09478 GPa⁻¹
    E2 = 10.55 GPa

This value is in relatively poor agreement with the exact value of 15.56 GPa found by using the elasticity solution. For the improved mechanics of materials model, using Eq. (5.6.12),

    1/E2 = (1 − √0.60)/4.62 + √0.60/[73.1√0.60 + 4.62(1 − √0.60)] = 0.0622 GPa⁻¹
    E2 = 16.07 GPa

This model gives better agreement with the exact value for this case (and others as well) and is the recommended model for E2. The semiempirical Halpin-Tsai expression, Eq. (5.6.19), with ξ = 2 results in

    E2 = E2m [(E2f + ξE2m) − ξVf (E2m − E2f)] / [(E2f + ξE2m) + Vf (E2m − E2f)]
       = (4.62){[73.1 + (2)(4.62)] − (2)(0.60)(4.62 − 73.1)} / {[73.1 + (2)(4.62)] + (0.60)(4.62 − 73.1)}
       = 18.42 GPa

For G12, using Eq. (5.6.13),

    1/G12 = Vf /G12f + (1 − Vf)/G12m = 0.60/30.0 + (1 − 0.60)/1.699 = 0.255 GPa⁻¹
    G12 = 3.91 GPa

This value compares poorly with the exact value of 5.64 GPa obtained from the elasticity solution. The concentric cylinders approximate elasticity model, Eq. (5.6.14), gives

    G12 = G12m [(G12m + G12f) − Vf (G12m − G12f)] / [(G12m + G12f) + Vf (G12m − G12f)]
        = 1.699[(1.699 + 30.0) − 0.60(1.699 − 30.0)] / [(1.699 + 30.0) + 0.60(1.699 − 30.0)]
        = 5.62 GPa

This model gives good agreement with that of the exact solution, and it is the recommended model for G12. For α1, using Eq. (5.6.15),

    α1 = [α1f E1f Vf + α1m E1m (1 − Vf)] / [E1f Vf + E1m (1 − Vf)]
       = [(5.04 × 10⁻⁶)(73.1)(0.60) + (41.4 × 10⁻⁶)(4.62)(1 − 0.60)] / [73.1(0.60) + 4.62(1 − 0.60)]
       = 6.51 × 10⁻⁶/°C

This value compares favorably with the exact result of 6.72 × 10⁻⁶/°C. For α2, using Eq. (5.6.16),

    α2 = [α2f − (E1m/E1) ν12f (α1m − α1f)(1 − Vf)]Vf + [α2m + (E1f/E1) ν12m (α1m − α1f)Vf](1 − Vf)
       = [5.04 × 10⁻⁶ − (4.62/45.7)(0.22)(41.4 × 10⁻⁶ − 5.04 × 10⁻⁶)(1 − 0.60)](0.60)
         + [41.4 × 10⁻⁶ + (73.1/45.7)(0.36)(41.4 × 10⁻⁶ − 5.04 × 10⁻⁶)(0.60)](1 − 0.60)
       = 24.4 × 10⁻⁶/°C

This value compares favorably with the exact result of 21.8 × 10⁻⁶/°C.

NOTE: It is important to note that in the above example and all to follow we assume that the fiber and matrix properties are quantified exactly by using three significant figures, as in Table 5.6.1. All subsequent numerical calculations are carried out by starting with these properties, and these subsequent numerical calculations will be reported rounded to three significant figures. Also note that in all numerical examples to follow, a glass-reinforced composite with 60 percent fiber volume fraction is used. Because of their simplicity and accuracy, the engineering properties are computed by employing Eqs. (5.6.9), (5.6.10), (5.6.12), (5.6.14), (5.6.15), and (5.6.16). Additional example results may be constructed by examining a different fiber volume fraction or changing the fiber type, and comparing the results with the values given in Table 5.6.2 or Table 5.6.3, as appropriate for the fiber type chosen. For the graphite fibers, care must be taken to use the appropriate extensional moduli and expansion coefficients, as these fibers are orthotropic. Although it has not been necessary to this point to describe the thickness of a layer, in the subsequent numerical examples a layer thickness h of 0.000125 m, or 0.125 mm, will be used.

DESCRIBING A LAMINATE

With details of the properties of a layer considered, we turn our attention to describing the behavior of a number of layers, bonded together, one next to the other, to form a laminate. An important part of the description of a laminate is the specification of its stacking sequence. The stacking sequence is a list that defines the through-thickness locations and fiber orientations of all the layers in a laminate. Referring to Fig. 5.6.6, which shows the exploded view of a laminate and the laminate coordinate system, the xyz coordinate system, the stacking sequence lists the layer fiber orientations relative to the x axis of the laminate coordinate system, starting with the layer at the negativemost z position. The stacking sequence [±30/0]S describes the six-layer laminate shown in Fig. 5.6.6, which has +30° layers on the outside, −30° layers inside of those, and two 0° layers in the center. The stacking sequence could have been written as [+30/−30/0/0/−30/+30]T, where T signifies that the total laminate has been described, but the shorthand notation with the subscript S is more convenient and implies that the fiber orientation arrangement is symmetric with respect to the xy plane, i.e., the z = 0 location. The formal definition of a symmetric laminate will be given shortly. Other shorthand notation can be used, such as [(±30/0)3/(90/0)2]S to describe a 26-layer symmetric laminate which has two subsequences that repeat, one 3 times, the other twice. The notation [±30]2S is interpreted to mean [(±30)2]S. Of course it is necessary to specify the material properties and thickness of each layer to completely describe a laminate.

Fig. 5.6.6 Exploded view of a [±30/0]S laminate and the laminate (xyz) coordinate system; from the negativemost z position the layer orientations are θ = +30°, −30°, 0°, 0°, −30°, +30°.
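As a small programming aid (illustrative only; the function names and the convention of listing only the half-stack are assumptions of this sketch, not handbook notation), the following Python fragment expands a symmetric stacking sequence into the full list of layer angles and computes the interface coordinates z0, z1, . . . , zN used later in the laminate analysis, for layers of equal thickness h.

```python
def expand_symmetric(half_angles):
    """[+30, -30, 0] given for [+/-30/0]S -> full symmetric stacking sequence."""
    return half_angles + half_angles[::-1]

def interface_coordinates(n_layers, h=0.000125):
    """z0, z1, ..., zN for N layers of equal thickness h (meters), with z0 = -H/2."""
    H = n_layers * h                        # total laminate thickness
    return [-H/2 + k*h for k in range(n_layers + 1)]

angles = expand_symmetric([+30, -30, 0])    # [30, -30, 0, 0, -30, 30]
zs = interface_coordinates(len(angles))     # 7 interface locations for 6 layers
print(angles)
print(zs)                                   # z0 = -0.000375 m ... z6 = +0.000375 m
```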

STRESS-STRAIN BEHAVIOR OF A LAYER: PRINCIPAL MATERIAL COORDINATE SYSTEM

Within the context of the macromechanical theory presented here to study laminate deformations and stresses, referred to as classical lamination theory, or CLT, a layer is assumed to be in a state of plane stress. Such a condition was previously described. Accordingly, from Eq. (5.6.7), the stress-strain behavior of a layer written in terms of the stresses and strains in the principal material, or 1-2-3, coordinate system, including the effects of thermal expansion, can be written as

$$\begin{Bmatrix}\sigma_1\\ \sigma_2\\ \tau_{12}\end{Bmatrix}=
\begin{bmatrix}Q_{11} & Q_{12} & 0\\ Q_{12} & Q_{22} & 0\\ 0 & 0 & Q_{66}\end{bmatrix}
\begin{Bmatrix}\varepsilon_1-\alpha_1\Delta T\\ \varepsilon_2-\alpha_2\Delta T\\ \gamma_{12}\end{Bmatrix}
\qquad (5.6.20)$$


where

$$Q_{11}=\frac{E_1}{1-\nu_{12}\nu_{21}}\qquad
Q_{12}=\frac{\nu_{12}E_2}{1-\nu_{12}\nu_{21}}\qquad
Q_{22}=\frac{E_2}{1-\nu_{12}\nu_{21}}\qquad
Q_{66}=G_{12}
\qquad (5.6.21)$$

The Q matrix is the reduced stiffness matrix. The term reduced is used to distinguish the 3-by-3 stiffness matrix of Eq. (5.6.20) from the full 6-by-6 stiffness matrix of Eq. (5.6.4) that must be used when all six components of stress and strain are considered. It is important to recognize that while the numerical values in the reduced compliance matrix are identical to those in the full 6-by-6 compliance matrix, the numerical values in the reduced stiffness matrix are different from those in the full 6-by-6 stiffness matrix. Again, the form of the reduced stiffness matrix in Eq. (5.6.20) is said to represent orthotropic material behavior. For simplicity, moisture content effects have not been included in Eq. (5.6.20). They can be included in the same manner as thermal expansion effects.

EXAMPLE 2. Compute the numerical values of the reduced stiffness matrix for a glass-fiber-reinforced composite.

APPROACH. Substitute the engineering properties directly into Eq. (5.6.21). [As noted, all examples are based on using a 60 percent volume fraction of glass fibers and, because of their simplicity and accuracy, employing Eqs. (5.6.9), (5.6.10), (5.6.12), (5.6.14), (5.6.15), and (5.6.16) to calculate the material properties, as presented in Table 5.6.3.]

SOLUTION

    Q11 = E1/(1 − ν12ν21) = 45.7 × 10⁹/[1 − (0.276)(0.0970)] = 47.0 × 10⁹ = 47.0 GPa
    Q12 = ν12E2/(1 − ν12ν21) = 0.276(16.07 × 10⁹)/[1 − (0.276)(0.0970)] = 4.56 GPa
    Q22 = E2/(1 − ν12ν21) = 16.07 × 10⁹/[1 − (0.276)(0.0970)] = 16.51 GPa
    Q66 = G12 = 5.62 GPa

so that

$$[Q]=\begin{bmatrix}Q_{11} & Q_{12} & 0\\ Q_{12} & Q_{22} & 0\\ 0 & 0 & Q_{66}\end{bmatrix}
=\begin{bmatrix}47.0 & 4.56 & 0\\ 4.56 & 16.51 & 0\\ 0 & 0 & 5.62\end{bmatrix}\times 10^{9}\ \text{N/m}^2
=\begin{bmatrix}47.0 & 4.56 & 0\\ 4.56 & 16.51 & 0\\ 0 & 0 & 5.62\end{bmatrix}\ \text{GPa}$$

The reciprocity relation of Eq. (5.6.5) was used to compute ν21. We emphasize that in nearly all cases ν12 ≠ ν21.

STRESS-STRAIN BEHAVIOR OF A LAYER: LAMINATE COORDINATE SYSTEM

Since there can be many fiber orientations within a laminate, with each orientation having its own principal material coordinate system, it is more convenient to describe laminate behavior in terms of one coordinate system, namely, the laminate, or xyz, coordinate system shown in Fig. 5.6.6. The stresses and strains in the laminate coordinate system, including thermal strains, can be expressed in terms of those in the principal material coordinate system by transformation relations, namely,

$$\begin{Bmatrix}\sigma_x\\ \sigma_y\\ \tau_{xy}\end{Bmatrix}=[T]^{-1}\begin{Bmatrix}\sigma_1\\ \sigma_2\\ \tau_{12}\end{Bmatrix}
\qquad
\begin{Bmatrix}\varepsilon_x\\ \varepsilon_y\\ \tfrac{1}{2}\gamma_{xy}\end{Bmatrix}=[T]^{-1}\begin{Bmatrix}\varepsilon_1\\ \varepsilon_2\\ \tfrac{1}{2}\gamma_{12}\end{Bmatrix}
\qquad
\begin{Bmatrix}\alpha_x\\ \alpha_y\\ \tfrac{1}{2}\alpha_{xy}\end{Bmatrix}\Delta T=[T]^{-1}\begin{Bmatrix}\alpha_1\\ \alpha_2\\ 0\end{Bmatrix}\Delta T
\qquad (5.6.22)$$

where

$$[T]^{-1}=\begin{bmatrix}T_{11} & T_{12} & T_{13}\\ T_{21} & T_{22} & T_{23}\\ T_{31} & T_{32} & T_{33}\end{bmatrix}^{-1}
=\begin{bmatrix}m^2 & n^2 & -2mn\\ n^2 & m^2 & 2mn\\ mn & -mn & m^2-n^2\end{bmatrix}
\qquad (5.6.23)$$

and where m = cos θ and n = sin θ, with θ being the angle measured from the x axis to the 1 axis of the layer. In Eq. (5.6.23), the 3-by-3 matrix, denoted as [T]⁻¹, is referred to as the inverse transformation matrix. The word inverse is used because the stresses and strains of importance within a laminate are those parallel and perpendicular to the fibers, namely, those in the principal material coordinate system. The principal material coordinate system stresses and strains are used to determine at what load level a layer will fail. The relationship in Eq. (5.6.22) transforms those important stresses and strains into stresses and strains in the xyz coordinate system. The latter quantities are neither parallel nor perpendicular to the fibers, so it is more difficult to evaluate whether the stresses and strains are high enough to cause, e.g., failure of a layer due to excess stress in the fiber direction. Note well in the above relations the factor of 1/2 associated with the shear strain. The above relations can be inverted and rewritten as

$$\begin{Bmatrix}\sigma_1\\ \sigma_2\\ \tau_{12}\end{Bmatrix}=[T]\begin{Bmatrix}\sigma_x\\ \sigma_y\\ \tau_{xy}\end{Bmatrix}
\qquad
\begin{Bmatrix}\varepsilon_1\\ \varepsilon_2\\ \tfrac{1}{2}\gamma_{12}\end{Bmatrix}=[T]\begin{Bmatrix}\varepsilon_x\\ \varepsilon_y\\ \tfrac{1}{2}\gamma_{xy}\end{Bmatrix}
\qquad
\begin{Bmatrix}\alpha_1\\ \alpha_2\\ 0\end{Bmatrix}\Delta T=[T]\begin{Bmatrix}\alpha_x\\ \alpha_y\\ \tfrac{1}{2}\alpha_{xy}\end{Bmatrix}\Delta T
\qquad (5.6.24)$$

where

$$[T]=\begin{bmatrix}T_{11} & T_{12} & T_{13}\\ T_{21} & T_{22} & T_{23}\\ T_{31} & T_{32} & T_{33}\end{bmatrix}
=\begin{bmatrix}m^2 & n^2 & 2mn\\ n^2 & m^2 & -2mn\\ -mn & mn & m^2-n^2\end{bmatrix}
\qquad (5.6.25)$$

The above 3-by-3 matrix, denoted as [T], is referred to as the transformation matrix.

As a result of the fibers being at an angle relative to the laminate coordinate system, stresses and strains measured in the laminate coordinate system are quite different from those measured in the principal material coordinate system. Example 3 demonstrates the interesting strain response of a single layer caused by a temperature change, but measured in the laminate coordinate system.
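The reduced stiffness matrix of Eq. (5.6.21) and the transformation matrices of Eqs. (5.6.23) and (5.6.25) are also easy to program. The following Python sketch is illustrative only; the function and variable names are this sketch's own. It builds [Q] from the engineering properties of Example 2 and defines [T] and [T]⁻¹ for an arbitrary fiber angle θ.

```python
import numpy as np

def reduced_stiffness(E1, E2, G12, nu12):
    """Reduced stiffness matrix [Q] of Eq. (5.6.21), in Pa."""
    nu21 = nu12 * E2 / E1                 # reciprocity relation
    den = 1.0 - nu12 * nu21
    return np.array([[E1/den,      nu12*E2/den, 0.0],
                     [nu12*E2/den, E2/den,      0.0],
                     [0.0,         0.0,         G12]])

def T_matrix(theta_deg):
    """Transformation matrix [T] of Eq. (5.6.25)."""
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[m*m,  n*n,  2*m*n],
                     [n*n,  m*m, -2*m*n],
                     [-m*n, m*n,  m*m - n*n]])

def T_inverse(theta_deg):
    """Inverse transformation matrix [T]^-1 of Eq. (5.6.23)."""
    return np.linalg.inv(T_matrix(theta_deg))

# Example 2: glass-polymer layer, Vf = 0.60
Q = reduced_stiffness(45.7e9, 16.07e9, 5.62e9, 0.276)
print(np.round(Q / 1e9, 2))   # expect about [[47.0, 4.56, 0], [4.56, 16.51, 0], [0, 0, 5.62]]
```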

EXAMPLE 3. The temperature of an element cut from a single layer of glass epoxy with its fibers at +30° is decreased by 100°C. What shear strain is measured in the laminate coordinate system due to this temperature decrease? Note that this temperature decrease is similar in magnitude to the decrease from the curing temperature of the material to room temperature.

APPROACH. The thermally induced strains in the laminate coordinate system can be computed from the known values of the thermally induced strains in the principal material coordinate system from Table 5.6.3 and the transformation relations of Eqs. (5.6.22) and (5.6.23).

SOLUTION. Since θ = 30°, m = √3/2 and n = 1/2, and

$$\begin{Bmatrix}\alpha_x\\ \alpha_y\\ \tfrac{1}{2}\alpha_{xy}\end{Bmatrix}\Delta T
=[T(30^\circ)]^{-1}\begin{Bmatrix}\alpha_1\\ \alpha_2\\ 0\end{Bmatrix}\Delta T
=\begin{bmatrix}\tfrac{3}{4} & \tfrac{1}{4} & -\tfrac{\sqrt3}{2}\\[2pt] \tfrac{1}{4} & \tfrac{3}{4} & \tfrac{\sqrt3}{2}\\[2pt] \tfrac{\sqrt3}{4} & -\tfrac{\sqrt3}{4} & \tfrac{1}{2}\end{bmatrix}
\begin{Bmatrix}6.51\times10^{-6}\\ 24.4\times10^{-6}\\ 0\end{Bmatrix}(-100)$$

or

    αxΔT = −1,099 × 10⁻⁶    αyΔT = −1,994 × 10⁻⁶    αxyΔT = 1,551 × 10⁻⁶

DISCUSSION. What is important to note is that although there is no shear strain in the principal material coordinate system, i.e., α12ΔT = 0, a shear strain as well as contraction strains in the x and y directions are measured in the laminate coordinate system. Since polymer matrix composites are cured at an elevated temperature and then cooled, when a layer is part of a laminate, the thermally induced strains in a layer caused by cooling the laminate from its curing temperature can produce residual thermal stresses. If the element is square and 0.1 m × 0.1 m in dimensions before the temperature change, after cooling the length in the x and y directions would be less. In addition, the original right angles would be changed by 1,551 microradians, denoted as 1,551 μrad, or 0.0888 deg, as shown in exaggerated fashion in Fig. 5.6.7, with the right angles of two opposite corners increasing and the right angles of the other two opposite corners decreasing. This may not seem like much of an angle change, but it can cause significant stresses if resisted. The strains in the x and y directions are −1,099 and −1,994 microstrain, denoted, respectively, as −1,099 and −1,994 με. The deformed element would have dimensions 0.0999 and 0.0998 m in the x and y directions, respectively.
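A quick check of Example 3 can be made with the transformation matrices defined above. The sketch below is illustrative only; it writes out [T]⁻¹ explicitly so it is self-contained. It transforms the layer coefficients of thermal expansion to the laminate coordinate system and evaluates the thermally induced strains for ΔT = −100°C.

```python
import numpy as np

def T_inverse(theta_deg):
    # [T]^-1 of Eq. (5.6.23), written out explicitly
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[m*m,  n*n, -2*m*n],
                     [n*n,  m*m,  2*m*n],
                     [m*n, -m*n,  m*m - n*n]])

alpha_12 = np.array([6.51e-6, 24.4e-6, 0.0])     # alpha1, alpha2, (1/2)*alpha12
dT = -100.0                                      # temperature change, deg C

ax, ay, half_axy = T_inverse(30.0) @ alpha_12
strains = np.array([ax, ay, 2*half_axy]) * dT    # eps_x, eps_y, gamma_xy (engineering shear)
print(np.round(strains * 1e6, 0))                # roughly [-1099, -1994, 1551] microstrain
```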

Fig. 5.6.7 Cooling deformations of a 0.1-m square element with fibers oriented at +30°: the element contracts to 0.0999 m in the x direction and 0.0998 m in the y direction, and the corner angles change from 90° to approximately 89.9° and 90.1°.

As it is convenient to transform the stresses and strains from each of the principal material coordinate systems of the various layers to the single laminate coordinate system, it is convenient to transform the stress-strain relation of each layer to the laminate coordinate system. This can be accomplished by starting with the stress-strain relation of Eq. (5.6.20) and applying the transformation relations of Eq. (5.6.22). As a result, the stress-strain relation expressed in the laminate coordinate system is written as

$$\begin{Bmatrix}\sigma_x\\ \sigma_y\\ \tau_{xy}\end{Bmatrix}=
\begin{bmatrix}\bar Q_{11} & \bar Q_{12} & \bar Q_{16}\\ \bar Q_{12} & \bar Q_{22} & \bar Q_{26}\\ \bar Q_{16} & \bar Q_{26} & \bar Q_{66}\end{bmatrix}
\begin{Bmatrix}\varepsilon_x-\alpha_x\Delta T\\ \varepsilon_y-\alpha_y\Delta T\\ \gamma_{xy}-\alpha_{xy}\Delta T\end{Bmatrix}
\qquad (5.6.26)$$

where

$$\begin{aligned}
\bar Q_{11}&=Q_{11}m^4+2(Q_{12}+2Q_{66})m^2n^2+Q_{22}n^4\\
\bar Q_{12}&=(Q_{11}+Q_{22}-4Q_{66})n^2m^2+Q_{12}(n^4+m^4)\\
\bar Q_{16}&=(Q_{11}-Q_{12}-2Q_{66})nm^3+(Q_{12}-Q_{22}+2Q_{66})n^3m\\
\bar Q_{22}&=Q_{11}n^4+2(Q_{12}+2Q_{66})m^2n^2+Q_{22}m^4\\
\bar Q_{26}&=(Q_{11}-Q_{12}-2Q_{66})n^3m+(Q_{12}-Q_{22}+2Q_{66})nm^3\\
\bar Q_{66}&=(Q_{11}+Q_{22}-2Q_{12}-2Q_{66})n^2m^2+Q_{66}(n^4+m^4)
\end{aligned}
\qquad (5.6.27)$$

As mentioned, m = cos θ and n = sin θ, so the Q̄ terms are functions of θ. The 3-by-3 Q̄ matrix is referred to as the transformed reduced stiffness matrix. As can be observed, because of fiber orientation, compared to conventional isotropic materials or the orthotropic behavior of Eq. (5.6.20), unusual relations between stresses and strains occur. Specifically, the shear stress is related to the two normal strains, and the two normal stresses are related to the shear strain. There are no zeros in the Q̄ matrix as there are with the Q matrix in Eq. (5.6.20). The behavior represented by Eq. (5.6.27) is referred to as anisotropic behavior. A word of caution: Note carefully the factors of 1/2 and 2 that occur in some relations but not in others. This is due to the use of engineering shear strains γ12 and γxy, as opposed to tensor shear strains, which are given by ½γ12 and ½γxy.

EXAMPLE 4. A normal stress of 50 MPa is applied in the x direction to a square element of composite material with its fibers at +30° relative to the x axis while the temperature is kept constant, i.e., ΔT = 0. No other stresses are present. What strains result from this applied stress?

APPROACH AND SOLUTION. Substituting directly into the stress-strain relation expressed in the laminate coordinate system, Eq. (5.6.26), gives

$$\begin{Bmatrix}\sigma_x\\ \sigma_y\\ \tau_{xy}\end{Bmatrix}=\begin{Bmatrix}50\times10^{6}\\ 0\\ 0\end{Bmatrix}=
\begin{bmatrix}\bar Q_{11}(30^\circ) & \bar Q_{12}(30^\circ) & \bar Q_{16}(30^\circ)\\ \bar Q_{12}(30^\circ) & \bar Q_{22}(30^\circ) & \bar Q_{26}(30^\circ)\\ \bar Q_{16}(30^\circ) & \bar Q_{26}(30^\circ) & \bar Q_{66}(30^\circ)\end{bmatrix}
\begin{Bmatrix}\varepsilon_x\\ \varepsilon_y\\ \gamma_{xy}\end{Bmatrix}$$

where the Q̄s are computed by using Eq. (5.6.27) with θ = 30°, namely,

$$\begin{bmatrix}\bar Q_{11}(30^\circ) & \bar Q_{12}(30^\circ) & \bar Q_{16}(30^\circ)\\ \bar Q_{12}(30^\circ) & \bar Q_{22}(30^\circ) & \bar Q_{26}(30^\circ)\\ \bar Q_{16}(30^\circ) & \bar Q_{26}(30^\circ) & \bar Q_{66}(30^\circ)\end{bmatrix}
=\begin{bmatrix}33.4 & 10.54 & 10.05\\ 10.54 & 18.15 & 3.14\\ 10.05 & 3.14 & 11.60\end{bmatrix}\times10^{9}\ \text{N/m}^2$$

which leads to

$$\begin{Bmatrix}\varepsilon_x\\ \varepsilon_y\\ \gamma_{xy}\end{Bmatrix}=\begin{Bmatrix}2{,}370\\ -1{,}069\\ -1{,}760\end{Bmatrix}\times10^{-6}$$

As shown in Fig. 5.6.8, the element of material stretches in the x direction and contracts in the y direction, as an element of conventional material such as aluminum or steel would, the contraction being due to Poisson effects. Unlike an element of conventional material, however, the element experiences a shear strain. This results in corner angles that are not right angles when the square element deforms.

Fig. 5.6.8 Deformations due to application of a 50-MPa normal stress in the x direction: the 0.1-m square element with fibers at +30° stretches to 0.1002 m in the x direction, contracts to 0.0999 m in the y direction, and its corner angles become approximately 89.9° and 90.1°.

To be noted is the fact that there is a distinct difference between stating that the element of material is stressed in the x direction and stating that the element of material is strained in the x direction. If the element is strained in the x direction by, e.g., 1,000 με, with the element experiencing no other strains, then the stresses required are given by again using Eq. (5.6.26), namely,

$$\begin{Bmatrix}\sigma_x\\ \sigma_y\\ \tau_{xy}\end{Bmatrix}=
\begin{bmatrix}\bar Q_{11}(30^\circ) & \bar Q_{12}(30^\circ) & \bar Q_{16}(30^\circ)\\ \bar Q_{12}(30^\circ) & \bar Q_{22}(30^\circ) & \bar Q_{26}(30^\circ)\\ \bar Q_{16}(30^\circ) & \bar Q_{26}(30^\circ) & \bar Q_{66}(30^\circ)\end{bmatrix}
\begin{Bmatrix}1{,}000\\ 0\\ 0\end{Bmatrix}\times10^{-6}$$

or

$$\begin{Bmatrix}\sigma_x\\ \sigma_y\\ \tau_{xy}\end{Bmatrix}=\begin{Bmatrix}33.4\\ 10.54\\ 10.05\end{Bmatrix}\times10^{6}\ \text{Pa}=\begin{Bmatrix}33.4\\ 10.54\\ 10.05\end{Bmatrix}\ \text{MPa}$$

It can be concluded that to produce this rather simple state of deformation, a tensile stress in the y direction and a positive shear stress, in addition to a tensile


stress in the x direction, are required. These stresses are illustrated in Fig. 5.6.9. For a conventional material, although a tensile stress in the y direction would be required, the shear stress would not be required because there is no tendency for the corner right angle to change. If the fiber angle were −30°, the same tensile stresses in the x and y directions would be required, but the shear stress required would be −10.05 MPa, a negative shear stress.
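The transformed reduced stiffness matrix of Eq. (5.6.27) is the workhorse of laminate analysis, so a programmed version is useful. The sketch below is illustrative only (the function names are this sketch's own); it evaluates Q̄(θ) from a given [Q] and reproduces the matrix used in Example 4 for θ = +30°.

```python
import numpy as np

def qbar(Q, theta_deg):
    """Transformed reduced stiffness [Qbar] of Eq. (5.6.27) for fiber angle theta."""
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    Q11, Q12, Q22, Q66 = Q[0, 0], Q[0, 1], Q[1, 1], Q[2, 2]
    Qb11 = Q11*m**4 + 2*(Q12 + 2*Q66)*m*m*n*n + Q22*n**4
    Qb12 = (Q11 + Q22 - 4*Q66)*m*m*n*n + Q12*(m**4 + n**4)
    Qb16 = (Q11 - Q12 - 2*Q66)*n*m**3 + (Q12 - Q22 + 2*Q66)*m*n**3
    Qb22 = Q11*n**4 + 2*(Q12 + 2*Q66)*m*m*n*n + Q22*m**4
    Qb26 = (Q11 - Q12 - 2*Q66)*m*n**3 + (Q12 - Q22 + 2*Q66)*n*m**3
    Qb66 = (Q11 + Q22 - 2*Q12 - 2*Q66)*m*m*n*n + Q66*(m**4 + n**4)
    return np.array([[Qb11, Qb12, Qb16],
                     [Qb12, Qb22, Qb26],
                     [Qb16, Qb26, Qb66]])

# Reduced stiffness of the glass-polymer layer from Example 2, Pa
Q = np.array([[47.0e9, 4.56e9, 0.0],
              [4.56e9, 16.51e9, 0.0],
              [0.0,    0.0,    5.62e9]])
print(np.round(qbar(Q, 30.0) / 1e9, 2))
# Expected, per Example 4: [[33.4, 10.54, 10.05], [10.54, 18.15, 3.14], [10.05, 3.14, 11.60]]
```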


Fig. 5.6.9 Stresses required to produce a strain of 1,000 με in the x direction of the +30° element: σx = 33.4 MPa, σy = 10.54 MPa, and τxy = 10.05 MPa; the 0.1-m dimension stretches to 0.1001 m.

MACROMECHANICS OF A LAMINATE

When layers are combined to form a laminate, the unusual behavior in the various layers can interact to produce results that are even more unusual than the behavior of a single layer. To study the behavior of a laminate, it is assumed that within a laminate the layers are perfectly bonded together and do not slip relative to one another. Only small deformations are considered, much the same as in studying the mechanics of conventional materials. Like a layer, a laminate occupies a volume in three-dimensional space, so the analysis and the design of laminates are truly a three-dimensional problem. However, the plane stress assumption eliminates some of the three-dimensionality of the problem, as does one other key assumption, namely, the Kirchhoff hypothesis. As a direct result of the Kirchhoff hypothesis, the strains εx, εy, and γxy vary linearly through the thickness of the laminate. If the no-slip condition were not true, the Kirchhoff hypothesis certainly would not be valid. The Kirchhoff hypothesis assumes that when the laminate deflects out of plane in the z direction, all points through the thickness of the laminate at a given x and y location deflect out of plane the same amount. That is, the out-of-plane displacement of a point within a laminate is independent of the z location of the point. The assumed deformation characteristics of a laminate are further illustrated in Fig. 5.6.10, where the deformation is viewed in the xz plane. The figure shows that straight line AA′ through the thickness of the laminate before deformation remains straight after deformation, a result that leads to a linear variation of strains through the thickness of the laminate. Specifically, the displacements in the x, y, and z directions of a generic point P at location x, y, z within the laminate are assumed to have the following forms:

$$\begin{aligned}
u(x,y,z)&=u^o(x,y)-z\,\frac{\partial w^o(x,y)}{\partial x}\\
v(x,y,z)&=v^o(x,y)-z\,\frac{\partial w^o(x,y)}{\partial y}\\
w(x,y,z)&=w^o(x,y)
\end{aligned}
\qquad (5.6.28)$$

where the superscript o denotes the displacements of corresponding point P° on the geometric midplane of the laminate, which is also referred to as the reference surface. Only small displacements and rotations are considered in the development of Eq. (5.6.28). As will be seen, if the deformation of the reference surface is known, i.e., if the strains and curvatures of the reference surface are known, then the strains, and subsequently the stresses, in every layer can be determined. The reference surface is somewhat like the neutral axis in simple beam analysis, except with composite materials the stresses are not necessarily zero at the reference surface location.

Fig. 5.6.10 Implications of the Kirchhoff hypothesis for laminate deformations: line AA′, normal to the reference surface, remains straight as the laminate deforms; the reference-surface point P° displaces u° and w°, the cross section rotates through ∂w°/∂x, and the layer interface locations through the thickness H are labeled z0, z1, . . . , zN−1, zN.

To be noted in Fig. 5.6.10 is the notation z0, z1, z2, . . . , which describes the z locations of the interfaces between layers, or alternatively, the z locations of the lower and upper surfaces of the layers. Accordingly, z0 = −H/2, with H being the total laminate thickness, and the thickness of the kth layer is hk = zk − zk−1. The strains at any location within the laminate can be written in terms of the displacements as

$$\begin{aligned}
\varepsilon_x&=\frac{\partial u}{\partial x}=\frac{\partial u^o}{\partial x}-z\,\frac{\partial^2 w^o}{\partial x^2}=\varepsilon_x^o+z\,\kappa_x^o\\
\varepsilon_y&=\frac{\partial v}{\partial y}=\frac{\partial v^o}{\partial y}-z\,\frac{\partial^2 w^o}{\partial y^2}=\varepsilon_y^o+z\,\kappa_y^o\\
\gamma_{xy}&=\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}=\frac{\partial u^o}{\partial y}+\frac{\partial v^o}{\partial x}-2z\,\frac{\partial^2 w^o}{\partial x\,\partial y}=\gamma_{xy}^o+z\,\kappa_{xy}^o
\end{aligned}
\qquad (5.6.29)$$

where

$$\begin{Bmatrix}\varepsilon_x^o\\ \varepsilon_y^o\\ \gamma_{xy}^o\end{Bmatrix}=
\begin{Bmatrix}\partial u^o/\partial x\\ \partial v^o/\partial y\\ \partial u^o/\partial y+\partial v^o/\partial x\end{Bmatrix}
\qquad
\begin{Bmatrix}\kappa_x^o\\ \kappa_y^o\\ \kappa_{xy}^o\end{Bmatrix}=
-\begin{Bmatrix}\partial^2 w^o/\partial x^2\\ \partial^2 w^o/\partial y^2\\ 2\,\partial^2 w^o/\partial x\,\partial y\end{Bmatrix}
\qquad (5.6.30)$$
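As a minimal illustration of Eq. (5.6.29) (an assumed, self-contained sketch with made-up input values), the strains at any through-thickness location z follow directly from the reference-surface strains and curvatures:

```python
import numpy as np

def strains_at_z(eps0, kappa0, z):
    """Laminate strains {eps_x, eps_y, gamma_xy} at height z, per Eq. (5.6.29)."""
    return np.asarray(eps0) + z * np.asarray(kappa0)

# e.g., reference-surface strains (dimensionless) and curvatures (1/m)
eps0 = [1000e-6, -200e-6, 0.0]
kappa0 = [0.5, 0.0, -0.2]
print(strains_at_z(eps0, kappa0, z=+0.000375))   # strains at the top surface, z = +H/2
```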

The quantities εx°, εy°, and γxy° are referred to as the midplane or reference surface strains. The quantities κx°, κy°, and κxy° are the midplane or reference surface curvatures and are the bending curvatures in the x and y directions and the twist curvature, respectively, of the reference surface. By knowing the midplane strains and curvatures, the state of strain at any z location within any layer can be computed through the use of Eq. (5.6.29). These strains, in turn, can be transformed to determine the strains in the principal material coordinate system of the layer at this z location by using the second equation of Eq. (5.6.24). The stresses in the principal material coordinate system at this z location can then be computed by using Eq. (5.6.20). As can be seen, then, knowing the midplane strains and curvatures is key to determining the stresses anywhere through the thickness of the laminate. The plane stress assumption and the Kirchhoff hypothesis are keys to the analysis simplifying to this


degree. Without these assumptions, the design and analysis of composite laminates would be intractable. Since the midplane strains and curvatures are the keys to determining the stresses within a laminate, it is important to be able to compute these midplane quantities in terms of the loads applied to a laminate. In the application of composite materials, most often loads, in the form of pressures, forces, or bending moments, are specified rather than midplane strains and curvatures, or rather than the stresses. A relationship between the specified loads and the midplane strains and curvatures is therefore necessary. So far all that has been discussed are strains and stresses. Therefore, relationships between loads and the stresses or between loads and the midplane strains and curvatures need to be developed to be able to proceed with studying the behavior of laminates in practical applications. To develop these relations, the loads are defined in terms of the stresses as follows:

$$\begin{aligned}
N_x&=\int_{-H/2}^{H/2}\sigma_x\,dz & N_y&=\int_{-H/2}^{H/2}\sigma_y\,dz & N_{xy}&=\int_{-H/2}^{H/2}\tau_{xy}\,dz\\
M_x&=\int_{-H/2}^{H/2}\sigma_x z\,dz & M_y&=\int_{-H/2}^{H/2}\sigma_y z\,dz & M_{xy}&=\int_{-H/2}^{H/2}\tau_{xy} z\,dz
\end{aligned}
\qquad (5.6.31)$$

The Ns are the force resultants and the Ms are the moment resultants. The units of the force and moment resultants are force per unit length and moment per unit length, respectively. Collectively the six quantities are the stress resultants. The introduction of the stress resultants further reduces the three-dimensional nature of the problem. The per unit length part of the definition of the stress resultants is very important. The resultant Nx is the force in the x direction per unit length of the laminate in the y direction; Ny is the force in the y direction per unit length of laminate in the x direction. Similar interpretations are associated with the moment resultants Mx and My. Since it is based on stress component τxy, the force resultant Nxy is the force in the x direction per unit length of laminate in the y direction. However, it is also the force in the y direction per unit length of laminate in the x direction, depending on which edge of the laminate is being discussed. A similar dual definition is associated with Mxy. It should be emphasized that it is impossible from equilibrium considerations to have Nxy and Mxy acting on one edge of an element of material and not have the same values of Nxy and Mxy acting on the adjacent edge. It is important to be aware of the positive sense of the stress resultants, which are illustrated in Fig. 5.6.11.

Fig. 5.6.11 Illustration of the positive sense of the force and moment resultants Nx, Ny, Nxy, Mx, My, and Mxy on an element of laminate.

If the three components of stress from Eq. (5.6.26) are substituted into the integrands of the definitions of the six stress resultants, the integration with respect to z can be carried out. Considering Nx as an example, substitution of the first equation of the stress-strain relation from Eqs. (5.6.26) into the first equation of Eqs. (5.6.31) results in

$$N_x=\int_{-H/2}^{H/2}\left[\bar Q_{11}\left(\varepsilon_x^o+z\kappa_x^o-\alpha_x\Delta T\right)+\bar Q_{12}\left(\varepsilon_y^o+z\kappa_y^o-\alpha_y\Delta T\right)+\bar Q_{16}\left(\gamma_{xy}^o+z\kappa_{xy}^o-\alpha_{xy}\Delta T\right)\right]dz \qquad (5.6.32)$$

Regrouping gives

$$N_x=\int_{-H/2}^{H/2}\left[\bar Q_{11}\left(\varepsilon_x^o+z\kappa_x^o\right)+\bar Q_{12}\left(\varepsilon_y^o+z\kappa_y^o\right)+\bar Q_{16}\left(\gamma_{xy}^o+z\kappa_{xy}^o\right)\right]dz
-\int_{-H/2}^{H/2}\left(\bar Q_{11}\alpha_x+\bar Q_{12}\alpha_y+\bar Q_{16}\alpha_{xy}\right)\Delta T\,dz \qquad (5.6.33)$$

The second integral is special. It is defined as the effective thermal force resultant in the x direction, i.e.,

$$N_x^T=\int_{-H/2}^{H/2}\left(\bar Q_{11}\alpha_x+\bar Q_{12}\alpha_y+\bar Q_{16}\alpha_{xy}\right)\Delta T\,dz \qquad (5.6.34)$$

where the superscript T identifies that the resultant is due to thermal effects. If the temperature change is not a function of z, i.e., if the temperature change is uniform through the thickness of the laminate, then ΔT can be removed from the integral to yield

$$N_x^T=\left[\int_{-H/2}^{H/2}\left(\bar Q_{11}\alpha_x+\bar Q_{12}\alpha_y+\bar Q_{16}\alpha_{xy}\right)dz\right]\Delta T \qquad (5.6.35)$$

The integral within the brackets is called the unit effective thermal force resultant. It is only a function of material properties and the laminate geometry and is defined as

$$\hat N_x^T=\int_{-H/2}^{H/2}\left(\bar Q_{11}\alpha_x+\bar Q_{12}\alpha_y+\bar Q_{16}\alpha_{xy}\right)dz \qquad (5.6.36)$$

Note that in the definition of the unit effective thermal force resultant, the coefficients of thermal expansion combine with the reduced stiffnesses. This indicates that the magnitude of the thermal effects depends not only on the thermal expansion or contraction of the individual layers due to temperature changes, but also on how resistant the layers are to deformation, i.e., how stiff they are. By using the nomenclature of Eq. (5.6.36), Eq. (5.6.33) can be written as

$$N_x=\left(\int_{-H/2}^{H/2}\bar Q_{11}\,dz\right)\varepsilon_x^o+\left(\int_{-H/2}^{H/2}\bar Q_{12}\,dz\right)\varepsilon_y^o+\left(\int_{-H/2}^{H/2}\bar Q_{16}\,dz\right)\gamma_{xy}^o
+\left(\int_{-H/2}^{H/2}\bar Q_{11}z\,dz\right)\kappa_x^o+\left(\int_{-H/2}^{H/2}\bar Q_{12}z\,dz\right)\kappa_y^o+\left(\int_{-H/2}^{H/2}\bar Q_{16}z\,dz\right)\kappa_{xy}^o-\hat N_x^T\,\Delta T \qquad (5.6.37)$$

where the reference surface strains and curvatures have been removed from the integral because they are not functions of z and are therefore not involved in the integration with respect to z. This is a very important point, and it is a direct result of the Kirchhoff hypothesis. As a final step, Eq. (5.6.37) is written as

$$N_x=A_{11}\varepsilon_x^o+A_{12}\varepsilon_y^o+A_{16}\gamma_{xy}^o+B_{11}\kappa_x^o+B_{12}\kappa_y^o+B_{16}\kappa_{xy}^o-\hat N_x^T\,\Delta T \qquad (5.6.38)$$

The As and Bs are defined as the integrals of the reduced stiffnesses through the thickness of the laminate, and hence they represent smeared, integrated, or overall stiffnesses of the laminate. The reduced stiffnesses of every layer contribute to the smeared stiffnesses of the laminate. In the same spirit, the effective thermal force resultant represents smeared, integrated, or overall effects. Note that Eq. (5.6.38) relates the three reference surface strains and three reference surface curvatures to the force resultant and to thermal effects. This relationship and the other five to follow are quite important and absolutely necessary if the stresses within the various layers are to be computed from the loads. Before proceeding, it should be noted, as was stated at the onset,


that smearing of properties is used for a second time with the definition of the As, Bs, and the effective thermal force resultant. The smearing of properties, be it at the micromechanics level or at the macromechanics, or laminate, level, is essential if progress is to be made in modeling and predicting behavior. If the other five integrals in Eq. (5.6.31) are treated in a similar fashion and the results combined with Eq. (5.6.38), the following relationship results: Nx A11 Ny A12 Nxy A f v 5 F 16 Mx B11 My B12 M xy B16

A12 A22 A26 B12 B22 B26

A16 A26 A66 B16 B26 B66

B11 B12 B16 D11 D12 D16

B12 B22 B26 D12 D22 D26

B16 exo B26 eyo o B66 gxy V f ov D16 kx D26 kyo o D66 kxy

Special Cases of Laminate Stacking Sequences For so-called symmetric laminates, all elements of the B matrix are 0, as are

Nˆ Tx Nˆ Ty 2g

Nˆ Txy ˆT M x

w T

(5.6.39)

ˆT M y ˆT M xy The 6-by-6 matrix is referred to as the laminate stiffness matrix or, more simply, the ABD matrix. This matrix relates the stress resultants, or loads, to the deformations, or strains and curvatures, of the reference surface. Since the reduced stiffnesses do not vary through the thickness of any particular layer, the integrals that define the elements of the ABD matrix reduce to summations, namely, N

Aij 5 g Q ijk szk 2 zk21d k51 N

Bij 5

1 N g Q sz 2 2 z 2k21d 2 k51 ijk k

1 g Q sz 3 2 z 3k21d 3 k51 ijk k

Dij 5

and are referred to as the bending-stretching stiffnesses. This nomenclature refers to the fact that through these terms, bending effects, nameo ly, the curvatures, kxo, kyo, and kxy , are coupled to in-plane force resultants o Nx, Ny, and Nxy, and conversely, the in-plane strains exo, eyo, and gxy are coupled to the bending moment resultants Mx, My, and Mxy. The unit effective thermal stress resultants are also functions of stiffnesses and coefficients of thermal expansion weighted by geometry. With the elements of the ABD matrix and the unit effective thermal stress resultants depending on layer thickness and location, a layer can have greater or less influence on the overall behavior of a laminate, depending on its location within the stacking sequence and its thickness. It is important to realize that once the location, material properties, and fiber orientation of each layer are known, the sums defining the ABD matrix and the unit effective thermal stress resultants can be computed and numerical values obtained.
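Equations (5.6.40) and (5.6.41) turn the laminate stiffnesses and thermal resultants into simple sums over the layers, which makes them straightforward to program. The sketch below is illustrative only (function names are this sketch's own); it assembles A, B, and D for equal-thickness layers, repeating the qbar helper from the earlier sketch so the block is self-contained, and should roughly reproduce the A matrix of Example 7 for the [±30/0]S glass-polymer laminate.

```python
import numpy as np

def qbar(Q, theta_deg):
    # Transformed reduced stiffness of Eq. (5.6.27), as in the earlier sketch
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    Q11, Q12, Q22, Q66 = Q[0, 0], Q[0, 1], Q[1, 1], Q[2, 2]
    Qb11 = Q11*m**4 + 2*(Q12 + 2*Q66)*m*m*n*n + Q22*n**4
    Qb12 = (Q11 + Q22 - 4*Q66)*m*m*n*n + Q12*(m**4 + n**4)
    Qb16 = (Q11 - Q12 - 2*Q66)*n*m**3 + (Q12 - Q22 + 2*Q66)*m*n**3
    Qb22 = Q11*n**4 + 2*(Q12 + 2*Q66)*m*m*n*n + Q22*m**4
    Qb26 = (Q11 - Q12 - 2*Q66)*m*n**3 + (Q12 - Q22 + 2*Q66)*n*m**3
    Qb66 = (Q11 + Q22 - 2*Q12 - 2*Q66)*m*m*n*n + Q66*(m**4 + n**4)
    return np.array([[Qb11, Qb12, Qb16], [Qb12, Qb22, Qb26], [Qb16, Qb26, Qb66]])

def abd_matrices(Q, angles_deg, h=0.000125):
    """A (N/m), B (N), D (N*m) from Eq. (5.6.40) for equal-thickness layers."""
    N = len(angles_deg)
    z = [-N*h/2 + k*h for k in range(N + 1)]          # interface locations z0..zN
    A = np.zeros((3, 3)); B = np.zeros((3, 3)); D = np.zeros((3, 3))
    for k, theta in enumerate(angles_deg, start=1):
        Qb = qbar(Q, theta)                           # stiffness of layer k
        A += Qb * (z[k] - z[k-1])
        B += Qb * (z[k]**2 - z[k-1]**2) / 2.0
        D += Qb * (z[k]**3 - z[k-1]**3) / 3.0
    return A, B, D

Q = np.array([[47.0e9, 4.56e9, 0.0], [4.56e9, 16.51e9, 0.0], [0.0, 0.0, 5.62e9]])
A, B, D = abd_matrices(Q, [30, -30, 0, 0, -30, 30])   # [+/-30/0]S laminate of Example 7
print(np.round(A / 1e6, 2))   # expect roughly [[28.4, 6.41, 0], [6.41, 13.20, 0], [0, 0, 7.20]] MN/m
```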

(5.6.40)

where the zk are defined in Fig. 5.6.10 and N is the number of layers in the laminate. Since the coefficients of thermal expansion also do not vary through the thickness of any particular layer, the unit effective thermal stress resultants become summations given by

the three unit effective thermal moment resultants if the temperature change T is uniform through the thickness of the laminate. A symmetric laminate is defined as follows: If, for every layer at a specific location and with a specific thickness, material properties, and fiber orientation located on one side of the geometric midplane, there is a layer with identical thickness, material properties, and fiber orientation at the mirror-image location on the other side of the geometric midplane, then the laminate is said to be symmetric. For balanced laminates, A16, A26, and Nˆ Txy are zero. A balanced laminate is defined as follows: If, for every layer with a specific thickness, material properties, and fiber orientation, there is a layer somewhere within the laminate with the same specific thickness and material properties, but with the opposite fiber orientation, then the laminate is said to be balanced. Symmetric and balanced laminates are very common, and the relation of Eq. (5.6.39) simplifies considerably to Nx A11 Ny A12 0 Nxy f v5F Mx 0 My 0 M xy 0

2f

k51 N

Nˆ Ty 5 g s Q 12kaxk 1 Q 22kayk 1 Q 26kaxykdszk 2 zk21d k51 N

Nˆ Txy 5 g s Q 16kaxk 1 Q 26kayk 1 Q 66kaxykd szk 2 zk21d k51 N

(5.6.41)

ˆ T 5 1 g s Q a 1 Q a 1 Q a d sz 2 2 z 2 d M x k k21 11k xk 12k yk 16k xyk 2 k51 N

ˆ T 5 1 g s Q a 1 Q a 1 Q a d sz 2 2 z 2 d M y k k21 12k xk 22k yk 26k xyk 2 k51 N

ˆ T 5 1 g s Q a 1 Q a 1 Q a d sz 2 2 z 2 d M xy k k21 16k xk 26k yk 66k xyk 2 k51 An inspection of the various elements of the ABD matrix reveals that not only do their definitions depend on the reduced stiffnesses of every layer, but also their definitions include the location within the laminate of every layer by virtue of zk and zk-1. The elements of the ABD matrix are thus weighted and smeared stiffness quantities, the weighting factor being the thickness and location of the various layers within the laminate. Strictly speaking, the values of the As involve only the layer thicknesses, i.e., zk  zk1 hk, the thickness of the kth layer, while the values of the Bs and Ds involve layer thicknesses and locations. Physically, the As are the in-plane stiffnesses of the laminate, the Ds are the bending stiffnesses, and the Bs are unique to composite materials

0 0 A66 0 0 0

0 0 0 D11 D12 D16

0 0 0 D12 D22 D26

0 eox 0 eoy 0 go V f xyo v D16 kx D26 koy D66 koxy

Nˆ Tx Nˆ Ty

N

Nˆ Tx 5 g sQ 11kaxk 1 Q 12kayk 1 Q 16kaxykdszk 2 zk21d

A12 A22 0 0 0 0

0 v T 0 0 0

(5.6.42)

It is important to note that D16 and D26 are not zero. It is also important to remember that if the temperature is not uniform through the thickness of the laminate, even though the laminate is symmetric, Eq. (5.6.42) does not apply. Because of the simplifications, Eq. (5.6.42) can be written as two separate equations, namely, Nx A11 • Ny ¶ 5 • A12 Nxy 0

A12 A22 0

Nˆ Tx 0 eox 0 ¶ • eoy ¶ 2 d Nˆ Ty t T A66 goxy 0

Nˆ Tx eox 5 [A] • eoy ¶ 2 d Nˆ Ty t T goxy 0 Mx D11 • M y ¶ 5 • D12 M xy D16

D12 D22 D26

D16 kxo kxo D26 ¶ • kyo ¶ 5 [D] • kyo ¶ o o D66 kxy kxy

(5.6.43)

(5.6.44)


where it is seen that for symmetric laminates, bending effects sM x, M y, M xy o o and kxo, kyo, kxy d and in-plane effects sNx, Ny, Nxy and exo, eyo, gxy d decouple. Actually, Eq. (5.6.43) can be further simplified to two other equations as A12 eox N A Nˆ T (5.6.45) e x f 5 B 11 R e o f 2 e ˆ xT f T N A A e N y

12

y

22

This six-layer laminate is also quasi-isotropic. The bending stiffnesses D16 and D26 are not zero. EXAMPLE 7 [ 6 30/0]S A11 A12 A16 28.4 6.41 C A12 A22 A26 S 5 C 6.41 13.20 0 0 A16 A26 A66

y

Nxy 5 A66goxy

(5.6.46)

D11 D12 D16 1.191 C D12 D22 D26 S 5 C 0.363 D16 D26 D66 0.1570

Cross-ply laminates are laminates constructed of layers with fiber ori-

entations of only 0 or 908. For such laminates, because by Eq. (5.6.27) Q 16 and Q 26are zero for every layer, all 16 and 26 terms and Nˆ Txy and Mˆ Txy in Eq. (5.6.39) are zero. For a symmetric cross-ply laminate, the relations of Eq. (5.6.39) simplify even more to give what can be written as four separate equations: N A A12 eox Nˆ T e x f 5 B 11 R e o f 2 e ˆ xT f T (5.6.47) N A A e N y

12

22

Nxy 5

y A66goxy

M D e x f 5 B 11 My D12

y

(5.6.48)

D12 kxo Re f D22 kyo

(5.6.50)

The existence of the four separate relations signifies a high degree of uncoupling of various stress resultants and reference surface deformations. Again as a reminder, if the temperature is not uniform through the thickness of the laminate, then Eqs. (5.6.47) to (5.6.50) do not apply. If the layers are all isotropic, or for a single isotropic layer, the form of Eq. (5.6.39) reduces to the form of Eqs. (5.6.47) to (5.6.50). To follow are numerical examples of the A, B, and D matrices for a number of common laminates. The numerical examples are based upon the properties for a glass-fiber-reinforced polymer having a fiber volume fraction of 60 percent, as given in Example 2.

EXAMPLE 5 [±45/0/90]S. The subscript S signifies that the laminate is symmetric, and because there is a −45° layer for each +45° layer, the laminate is balanced. Neither the 0° nor the 90° layers contribute to any of the 16 terms or the unit effective thermal stress resultants.

D11 C D12 D16

A12 A22 A26

A16 27.8 A26 S 5 C 8.55 A66 0

8.55 27.8 0

D12 D22 D26

D16 2.18 D26 S 5 C 0.961 D66 0.1784

0.961 1.945 0.1784

e

Nˆ Tx Nˆ Ty

f 5 e

425 425

f

0 MN 0 S m 9.60 0.1784 0.1784 S N # m 1.050

EXAMPLE 6 [ 60/0]S A11 C A12 A16

A12 A22 A26

A16 20.8 A26 S 5 C 6.41 A66 0

6.41 20.8 0

0 MN 0 S m 7.20

D11 C D12 D16

D12 D22 D26

D16 0.675 D26 S 5 C 0.363 D66 0.0491

0.363 1.151 0.1570

0.0491 0.1570 S N # m 0.400

Nˆ Tx Nˆ Ty

f 5 e

319 319

f

N/m 8C

Nˆ Ty

f 5 e

315 323

f

0.363 0.636 0.0491

0.1570 0.0491 S N # m 0.400

N/m 8C

It is interesting to note the equality of some of the terms in this laminate and the [±60/0]S laminate above.

EXAMPLE 8 [±30]2S. This eight-layer laminate is a symmetric and balanced angle-ply laminate.

A11 C A12 A16 D11 C D12 D16

A12 A22 A26 D12 D22 D26

A16 33.4 A26 S 5 C 10.54 A66 0 D16 2.78 D26 S 5 C 0.878 0.314 D66 e

Nˆ Tx Nˆ Ty

f 5 e

421 429

10.54 18.15 0 0.878 1.512 0.0981

f

0 MN 0 S m 11.60 0.314 0.0981 S N # m 0.966

N/m 8C

It is interesting to compare the properties of this laminate with the [630/0]S above, which differs in thickness and the presence of 08 layers. EXAMPLE 9 [ 60/0/90]S A11 C A12 A16

A12 A22 A26

24.9 A16 A26 S 5 C 7.55 A66 0

7.55 32.6 0

0 MN 0 S m 8.61

D11 C D12 D16

D12 D22 D26

D16 1.773 D26 S 5 C 0.816 D66 0.0736

0.816 2.65 0.235

0.0736 0.235 S N # m 0.904

e

N/m 8C

This eight-layer laminate is called a quasi-isotropic laminate because the in-plane extensional stiffnesses A11 and A22 are equal, but the in-plane shear stiffness A66 is not related to A11 (A22), as it would be for a truly isotropic material. In addition, the bending stiffness D11 and D22 are not equal; nor are D16 and D26 zero, as they would be for a truly isotropic material, but they are equal. Note also that Nˆ Tx and Nˆ Ty are equal. Herein, as stated before, the thickness of a layer is taken to be 0.000125 m (0.125 mm).

e

Nˆ Tx

e

0 MN 0 S m 7.20

(5.6.49)

M xy 5 D66kyo

A11 C A12 A16


Nˆ Tx Nˆ Ty

f 5 e

427 423

f

N/m 8C

EXAMPLE 10 [0/90]2S. This is an eight-layer symmetric cross-ply laminate. A11 A12 A16 31.7 4.56 0 MN C A12 A22 A26 S 5 C 4.56 31.7 0 S m A16 A26 A66 0 0 5.62 D11 D12 D16 3.12 C D12 D22 D26 S 5 C 0.380 D16 D26 D66 0 e

Nˆ Tx Nˆ Ty

f 5 e

425 425

f

0.380 2.17 0

0 0 S N # m 0.468

N/m 8C

This laminate could be considered quasi-isotropic also. It is interesting to note the difference in D11 and D22. The outer 08 layers provide most of the contribution to D11, while the 908 layers provide most of the contribution to D22. Since the 908 layers are closer to the reference surface than the 08 layers, the bending stiffness


in the y direction, D22, is less than the bending stiffness in the x direction, D11. On the other hand, the four 0° layers and the four 90° layers provide the same resistance to extension in the x and y directions, respectively, so A22 = A11.

EXAMPLE 11 [0₄/90₄]T.

A12 A22 A26 B12 B22 B26

A16 A26 A66 B16 B26 B66 MN

B12 B22 B26 D12 D22 D26

B16 B26 B66 V D16 D26 D66

MN

kox d11 ckoy s 5 C d12 o kxy d16

0

2 3,810 N

0

0

0

0

3,810 N

0

0

0

5.62 m

0

0

0

23,810 N

0

0

2.65 m

0.380 m

N 0.380 m

N 2.65 m

0

0

0.468 m

MN

0

3,810 N

0

0

0

0

ˆT M x

N

0

X

0 N

425 8C N/m 425 8C

Nˆ Ty h

N

0

x5h

ˆT M y ˆT M xy

N x 19.77 3 1024 8C N 219.77 3 1024 8C

0

The only nonzero terms in the B matrix are B11 and B22, which are equal in magnitude and opposite in sign, but this single coupling term can cause interesting effects. There are also thermally induced moments, which cause this laminate to curl out of plane when cooled from its elevated curing temperature. The two unit effective thermal moment resultants are equal in magnitude and opposite in sign. Note that the A matrix is the same as for the previous cross-ply laminate, Example 10.
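The bending-stretching coupling just described can be checked numerically. The sketch below (illustrative only, using the layer reduced stiffnesses from Example 2) evaluates B11 and B22 from Eq. (5.6.40) for the [0₄/90₄]T laminate and shows that they are equal in magnitude, about 3,810 N, and opposite in sign.

```python
# Layer reduced stiffnesses for the glass-polymer layer (Example 2), Pa
Q11, Q22 = 47.0e9, 16.51e9
h = 0.000125                      # layer thickness, m
N = 8                             # [0_4/90_4]T: four 0-deg layers below four 90-deg layers
H = N * h
z = [-H/2 + k*h for k in range(N + 1)]

# Qbar11 of a 0-deg layer is Q11; of a 90-deg layer it is Q22 (and vice versa for Qbar22)
Qb11 = [Q11]*4 + [Q22]*4
Qb22 = [Q22]*4 + [Q11]*4

B11 = sum(q*(z[k]**2 - z[k-1]**2)/2 for k, q in zip(range(1, N+1), Qb11))
B22 = sum(q*(z[k]**2 - z[k-1]**2)/2 for k, q in zip(range(1, N+1), Qb22))
print(round(B11, 1), round(B22, 1))   # expect about -3811 and +3811 N
```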

a16 a26 a66 b61 b62 b66

b11 b21 b61 d11 d12 d16

b12 b22 b62 d12 d22 d26

b16 Nx Nˆ Tx b26 Ny Nˆ Ty b66 N Nˆ T V ßg xy w 1 g ˆ xyT w  T∑ d16 Mx Mx ˆT d26 My M y ˆT d66 M xy M xy (5.6.51)

where a11 a12 a F 16 b11 b12 b16

a12 a22 a26 b21 b22 b26

a16 a26 a66 b61 b62 b66

b11 b21 b61 d11 d12 d16

Mx d16 M x d26 S cM y s 5 [d ] cM y s d66 M xy M xy

a12 a22 0

A11 0 0 S 5 C A12 a66 0

A12 A22 0

0 21 0 S A66

d11 C d12 d16

d12 d22 d26

d16 D11 d26 S 5 C D12 d66 D16

D12 D22 D26

D16 21 D26 S D66

b12 b22 b62 d12 d22 d26

b16 A11 b26 A12 b66 A V 5 F 16 d16 B11 d26 B12 d66 B16

A12 A22 A26 B12 B22 B26

A16 A26 A66 B16 B26 B66

B11 B12 B16 D11 D12 D16

B12 B22 B26 D12 D22 D26

B16 21 B26 B66 V D16 D26 D66 (5.6.52)

is the laminate compliance matrix or ABD inverse matrix.
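Numerically, the compliance matrices are obtained simply by inverting the corresponding stiffness matrices. The short sketch below (illustrative only, using NumPy and the A matrix of Example 7) recovers, approximately, the in-plane compliance values listed for the [±30/0]S laminate in Example 14.

```python
import numpy as np

# A matrix of the [+/-30/0]S laminate (Example 7), N/m
A = np.array([[28.4e6, 6.41e6, 0.0],
              [6.41e6, 13.20e6, 0.0],
              [0.0,    0.0,    7.20e6]])
a = np.linalg.inv(A)             # in-plane compliance matrix, m/N
print(np.round(a * 1e9, 1))      # in m/GN; roughly [[39.5, -19.2, 0], [-19.2, 85.1, 0], [0, 0, 138.9]]
```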

(5.6.55)

(5.6.56)

The inverse matrices, namely the compliance matrices, are actually more useful than the stiffness matrices for computing the reference surface strains and curvatures from the force and moment resultants. For a symmetric cross-ply laminate, the inverse relations are determined from Eqs. (5.6.47) to (5.6.50) as eo a a12 Nx 1 Nˆ Tx T r b xo r 5 B 11 Rb (5.6.57) e a a N 1 Nˆ T T y

12

goxy b

22

y

y

5 a66Nxy

kxo d r 5 B 11 kyo d12

(5.6.58)

d12 M x Rb r d22 M y

(5.6.59)

o kxy 5 d66M xy

Generally the force and moment resultants and temperature change are known for an element of laminate, and it is of interest to compute the strains and stresses in individual layers, or throughout the entire thickness of the laminate if a complete stress analysis is desired. It is convenient to invert the relations previously discussed so the reference surface strains and curvatures can be computed more easily. For a general laminate, from Eq. (5.6.39), the inverse relation is a12 a22 a26 b21 b22 b26

(5.6.54)

a11 C a12 0

INVERSE RELATIONS

eox a11 eoy a12 go a F xy V 5 F 16 kxo b11 kyo b12 o kxy b16

d12 d22 d26

(5.6.53)

where

N/m

Nˆ Tx Nˆ Txy

a12 Nx 1 Nˆ Tx T Rb r a22 Ny 1 Nˆ Ty T

goxy 5 a66 Nxy

MN 31.7 m

MN 4.56 m

eo a b xo r 5 B 11 ey a12

This is an eight-layer antisymmetric cross-ply laminate.

4.56 m

31.7 m

5H

B11 B12 B16 D11 D12 D16

For a symmetric balanced laminate, the inverses of Eqs. (5.6.44) to (5.6.46) are

(5.6.60)

where the compliance matrices are given by a11 C a12 0

a12 a22 0

0 A11 0 S 5 C A12 a66 0

A12 A22 0

0 21 0 S A66

d11 C d12 0

d12 d22 0

D11 0 0 S 5 C D12 d66 0

D12 D22 0

0 21 0 S D66

(5.6.61)

The numerical values of the compliance matrices for the laminates discussed in Examples 5 to 11 above are as follows: EXAMPLE 12 [ 45/0/90]S a11 C a12 a16

a12 a22 a26

a16 39.8 a26 S 5 C 212.26 a66 0

d11 C d12 d16

d12 d22 d26

d16 0.588 d26 S 5 C 20.286 d66 20.0514

212.26 39.8 0

0 m 0 S GN 104.1

20.286 0.662 20.0638

20.0514 1 20.0638 S N#m 0.972

where, for example, 39.8 m/GN = 39.8 × 10⁻⁹ m/N.

EXAMPLE 13 [ 60/0]S a11 C a12 a16

a12 a22 a26

a16 53.1 a26 S 5 C 216.34 a66 0

216.34 53.1 0

0 m 0 S GN 138.8

−0.562 1.095 −0.361

1.782 d16 d26 S 5 C 20.562 0.001835 d66

d12 d22 d26

d11 C d12 d16

inverse relations of the previous section, depending on the particular laminate. The strains at every z location through the thickness of the laminate can be determined from Eq. (5.6.29). These strains, in turn, can be transformed by using Eq. (5.6.24) to determine the strains in the principal material coordinate system at every z location. These strains can then be used in Eq. (5.6.20) to compute the stresses in the principal material coordinate system at every z location. The approach is very systematic, the procedures to employ at each step are straightforward and well defined, and numerical values can be determined. It goes without saying that these steps are best executed by programming a calculator or computer to deal with the many algebraic relations. Executing the many algebraic steps by hand is very error-prone. However, any program should be thoroughly checked to be sure it is computing what is intended before the results of the program are used for analysis and design. Once checked for accuracy, the program can be very useful. Below are several examples that demonstrate the use of the steps described above to compute the stresses within a laminate.
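Since the sequence of steps just described is entirely mechanical, it is well suited to a short program. The following Python sketch is illustrative only; the numerical inputs are the A matrix and unit effective thermal force resultants of Example 7 and the glass-polymer layer data used throughout, and the loading corresponds to Example 19. It computes the reference surface strains of a symmetric, balanced laminate under in-plane loads and a uniform temperature change, and then the strains and stresses of each fiber orientation in its principal material coordinate system.

```python
import numpy as np

def T_matrix(theta_deg):
    """Transformation matrix [T], Eq. (5.6.25)."""
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[m*m,  n*n,  2*m*n],
                     [n*n,  m*m, -2*m*n],
                     [-m*n, m*n,  m*m - n*n]])

# Layer properties (glass-polymer, Vf = 0.60) in the principal material system
Q = np.array([[47.0e9, 4.56e9, 0.0],             # reduced stiffness, Pa (Example 2)
              [4.56e9, 16.51e9, 0.0],
              [0.0,    0.0,    5.62e9]])
alpha12 = np.array([6.51e-6, 24.4e-6, 0.0])       # alpha1, alpha2, alpha12

# Laminate data for [+/-30/0]S (Example 7): in-plane stiffness and thermal resultants
A = np.array([[28.4e6, 6.41e6, 0.0],
              [6.41e6, 13.20e6, 0.0],
              [0.0,    0.0,    7.20e6]])           # N/m
NhatT = np.array([315.0, 323.0, 0.0])              # N/(m*degC)

# Loading of Example 19: Nx = 15,000 N / 0.25 m, uniform temperature change of -100 degC
Nload = np.array([15000.0/0.25, 0.0, 0.0])         # N/m
dT = -100.0

# Reference surface strains (curvatures are zero for this symmetric laminate)
eps0 = np.linalg.inv(A) @ (Nload + NhatT*dT)
print(np.round(eps0*1e6, 0))                       # roughly [1745, -3290, 0] microstrain

# Strains and stresses in the principal material system of each layer, Eqs. (5.6.24), (5.6.20)
R = np.diag([1.0, 1.0, 0.5])                       # converts engineering shear to half-shear
for theta in (30.0, -30.0, 0.0):
    e12 = np.linalg.inv(R) @ T_matrix(theta) @ R @ eps0   # eps1, eps2, gamma12
    s12 = Q @ (e12 - alpha12*dT)                   # sigma1, sigma2, tau12, Pa
    print(theta, np.round(e12*1e6, 0), np.round(s12/1e6, 1))
```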

0.001835 1 20.361 S N # m 2.64

EXAMPLE 14 [ 30/0]S a11 C a12 a16

a12 a22 a26

a16 39.5 a26 S 5 C 219.18 0 a66

219.18 85.1 0

d11 C d12 d16

d12 d22 d26

d16 1.062 d26 S 5 C 20.579 20.346 d66

20.579 1.903 20.00631

0 m 0 S GN 138.8 20.346 1 20.00631 S N # m 2.64

EXAMPLE 15 [ 30]2S 221.3 67.5 0

a11 C a12 a16

a12 a22 a26

a16 36.7 a26 S 5 C 221.3 a66 0

d11 C d12 d16

d12 d22 d26

d16 0.454 d26 S 5 C 20.256 d66 20.1215

0 m 0 S GN 86.2

20.256 0.810 0.000881

20.1215 1 0.000881 S N # m 1.074

EXAMPLE 19. Consider a flat six-layer [±30/0]S laminate with in-plane dimensions 0.35 m by 0.25 m subjected to a force of 15,000 N in the x direction uniformly distributed along opposite edges, as shown in Fig. 5.6.12. The temperature change from the cure condition is ΔT = −100°C. Compute the stresses in each layer.

EXAMPLE 16 [ 60/0/90]S 210.00 33.0 0

a11 C a12 a16

a12 a22 a26

a16 43.1 a26 S 5 C 210.00 a66 0

d11 C d12 d16

d12 d22 d26

d16 0.657 d26 S 5 C 20.203 d66 20.000718

5-85

0 m 0 S GN 116.2

z

y x

20.203 0.450 20.1006

20.000718 1 20.1006 S N # m 1.132

0.35 m 15,000 N

EXAMPLE 17 [0/90]2S a11 C a12 a16

a12 a22 a26

32.2 a16 a26 S 5 C 24.62 a66 0

d11 C d12 d16

d12 d22 d26

d16 0.327 d26 S 5 C 20.0573 d66 0

24.62 32.2 0

15,000 N

0 m 0 S GN 178.0

20.0573 0.471 0

0.25 m

0 1 0 S N # m 2.14

Fig. 5.6.12 Laminate loaded in x direction. APPROACH. The relations between stress resultants and deformations and the deformations and stresses, and the transformation relations, all outlined above are applied to the problem to determine the desired answers. Note that the laminate is symmetric and balanced, so Eqs. (5.6.53) to (5.6.55) apply, and only a single force is applied. This should considerably simplify the calculations.

EXAMPLE 18. [0₄/90₄]T. This is an eight-layer antisymmetric cross-ply laminate.

$$\begin{bmatrix}a_{11}&a_{12}&a_{16}&b_{11}&b_{12}&b_{16}\\a_{12}&a_{22}&a_{26}&b_{21}&b_{22}&b_{26}\\a_{16}&a_{26}&a_{66}&b_{61}&b_{62}&b_{66}\\b_{11}&b_{21}&b_{61}&d_{11}&d_{12}&d_{16}\\b_{12}&b_{22}&b_{62}&d_{12}&d_{22}&d_{26}\\b_{16}&b_{26}&b_{66}&d_{16}&d_{26}&d_{66}\end{bmatrix}=
\begin{bmatrix}39.1&-5.61&0&5.62\times10^{-5}&0&0\\-5.61&39.1&0&0&-5.62\times10^{-5}&0\\0&0&178.0&0&0&0\\5.62\times10^{-5}&0&0&0.469&-0.0673&0\\0&-5.62\times10^{-5}&0&-0.0673&0.469&0\\0&0&0&0&0&2.14\end{bmatrix}$$

where the a entries are in m/GN, the b entries in 1/N, and the d entries in 1/(N·m).

It is interesting to note that while a16 and a26 are zero for all of these balanced laminates, the sign and existence of d16 and d26 vary with the laminate. This has implications regarding the directions of the curvatures produced by applied moments, particularly the twist quantities.

LAMINATE STRESS ANALYSIS

With what has been presented, it is now possible to compute the stresses in an element of a laminate that is loaded by known applied force and moment resultants and that is subjected to a temperature change. The material properties, fiber orientation, and z coordinates of each layer lead to numerical values for the elements of the A, B, and D matrices and the unit effective thermal stress resultants. Given the magnitude and direction of the applied loads and the dimensions of the element, the force and moment resultants can be determined. The strains and curvatures of the reference surface can be computed by using the various inverse relations of the previous section, depending on the particular laminate. The strains at every z location through the thickness of the laminate can be determined from Eq. (5.6.29). These strains, in turn, can be transformed by using Eq. (5.6.24) to determine the strains in the principal material coordinate system at every z location. These strains can then be used in Eq. (5.6.20) to compute the stresses in the principal material coordinate system at every z location. The approach is very systematic, the procedures to employ at each step are straightforward and well defined, and numerical values can be determined. It goes without saying that these steps are best executed by programming a calculator or computer to deal with the many algebraic relations; executing the many algebraic steps by hand is very error-prone. However, any program should be thoroughly checked to be sure it is computing what is intended before the results of the program are used for analysis and design. Once checked for accuracy, the program can be very useful. Below are several examples that demonstrate the use of the steps described above to compute the stresses within a laminate.
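As the paragraph above suggests, these steps are naturally expressed as a short program. The sketch below (Python with NumPy; the function names and argument conventions are illustrative, not from the handbook) follows Eqs. (5.6.29), (5.6.24), and (5.6.20) in order: strain at a through-thickness location z, transformation to the principal material coordinate system, then the stress-strain relation with the free thermal strains removed.

```python
import numpy as np

def strain_transform(theta_deg):
    """Matrix carrying (eps_x, eps_y, gamma_xy/2) into (eps_1, eps_2, gamma_12/2)."""
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[ m*m, n*n,  2*m*n],
                     [ n*n, m*m, -2*m*n],
                     [-m*n, m*n,  m*m - n*n]])

def layer_stress(z, theta_deg, eps0, kappa0, Q, alpha, dT):
    """Stresses (sigma_1, sigma_2, tau_12) at height z in a layer with fiber angle theta."""
    ex, ey, gxy = np.asarray(eps0) + z * np.asarray(kappa0)                 # Eq. (5.6.29)
    e1, e2, half_g12 = strain_transform(theta_deg) @ [ex, ey, 0.5 * gxy]    # Eq. (5.6.24)
    free_thermal = np.array([alpha[0] * dT, alpha[1] * dT, 0.0])            # alpha = (a1, a2)
    return np.asarray(Q) @ (np.array([e1, e2, 2.0 * half_g12]) - free_thermal)  # Eq. (5.6.20)
```

With the reference surface strains of Example 19 below, Q taken as the matrix listed there (in Pa), α1 = 6.51 × 10⁻⁶ and α2 = 24.4 × 10⁻⁶ per °C, and ΔT = −100°C, this sketch should reproduce the ±30° and 0° layer stresses quoted in that example.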

EXAMPLE 19. Consider a flat six-layer [±30/0]S laminate with in-plane dimensions 0.35 m by 0.25 m subjected to a force of 15,000 N in the x direction uniformly distributed along opposite edges, as shown in Fig. 5.6.12. The temperature change from the cure condition is ΔT = −100°C. Compute the stresses in each layer.

Fig. 5.6.12 Laminate loaded in x direction.

APPROACH. The relations between stress resultants and deformations and between deformations and stresses, and the transformation relations, all outlined above, are applied to the problem to determine the desired answers. Note that the laminate is symmetric and balanced, so Eqs. (5.6.53) to (5.6.55) apply, and only a single force is applied. This should considerably simplify the calculations.

SOLUTION. Except for the effective thermal stress resultants, Nx is the only stress resultant, and since the force is uniformly distributed, it is given by

$$N_x=\frac{15{,}000\ \mathrm N}{0.25\ \mathrm m}=60{,}000\ \mathrm{N/m}$$

The unit effective thermal stress resultants consist of just two components given by Example 7, namely,

$$\begin{Bmatrix}\hat N_x^T\\\hat N_y^T\\\hat N_{xy}^T\end{Bmatrix}=\begin{Bmatrix}315\\323\\0\end{Bmatrix}\ \mathrm{N/(m\cdot{}^\circ C)}$$

Since there are no moment resultants, the reference surface curvatures are zero. The two reference surface strains are given by direct application of Eqs. (5.6.53) and (5.6.54), with numerical values from Example 14, to yield

$$\begin{Bmatrix}\epsilon_x^o\\\epsilon_y^o\\\gamma_{xy}^o\end{Bmatrix}=\begin{bmatrix}39.5&-19.18&0\\-19.18&85.1&0\\0&0&138.8\end{bmatrix}\times10^{-9}\begin{Bmatrix}60{,}000+315(-100)\\323(-100)\\0\end{Bmatrix}$$

$$=\begin{Bmatrix}2{,}370\\-1{,}151\\0\end{Bmatrix}\times10^{-6}+\begin{Bmatrix}-625\\-2{,}140\\0\end{Bmatrix}\times10^{-6}=\begin{Bmatrix}1{,}745\\-3{,}290\\0\end{Bmatrix}\times10^{-6}$$
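The matrix products above are easy to check numerically; a minimal check (Python with NumPy, compliances converted from m/GN to m/N) is:

```python
import numpy as np

a  = 1e-9 * np.array([[39.5, -19.18], [-19.18, 85.1]])  # Example 14 compliances, m/N
N  = np.array([60_000.0, 0.0])                          # mechanical resultants, N/m
NT = np.array([315.0, 323.0]) * (-100.0)                # thermal resultants times dT, N/m

print(a @ N)         # approx [ 2.37e-03, -1.15e-03 ]  contribution of the applied force
print(a @ NT)        # approx [-6.25e-04, -2.14e-03 ]  contribution of the temperature change
print(a @ (N + NT))  # approx [ 1.75e-03, -3.29e-03 ]  total reference surface strains
```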


The first term after the first equals sign is the strains due to the applied force, and the second term is the strains due to the decrease in temperature. Due to the applied force, the element of laminate stretches in the direction of the applied force, contracts perpendicular to that direction, and experiences no shear strain. Due to thermal effects, the element of laminate contracts in both directions, but more in the y direction. The net effect is stretching in the direction of the applied force and considerable contraction perpendicular to that direction, mostly due to thermal effects. Transforming the strains to the principal material coordinate system for each fiber orientation from Eq. (5.6.24) results in

$$\begin{Bmatrix}\epsilon_1(30^\circ)\\\epsilon_2(30^\circ)\\\tfrac12\gamma_{12}(30^\circ)\end{Bmatrix}=[T(30^\circ)]\begin{Bmatrix}\epsilon_x^o\\\epsilon_y^o\\\tfrac12\gamma_{xy}^o\end{Bmatrix}=
\begin{bmatrix}\tfrac34&\tfrac14&\tfrac{\sqrt3}{2}\\[2pt]\tfrac14&\tfrac34&-\tfrac{\sqrt3}{2}\\[2pt]-\tfrac{\sqrt3}{4}&\tfrac{\sqrt3}{4}&\tfrac12\end{bmatrix}\begin{Bmatrix}1{,}745\\-3{,}290\\0\end{Bmatrix}\times10^{-6}=\begin{Bmatrix}486\\-2{,}030\\-2{,}180\end{Bmatrix}\times10^{-6}$$

$$\begin{Bmatrix}\epsilon_1(-30^\circ)\\\epsilon_2(-30^\circ)\\\tfrac12\gamma_{12}(-30^\circ)\end{Bmatrix}=[T(-30^\circ)]\begin{Bmatrix}\epsilon_x^o\\\epsilon_y^o\\\tfrac12\gamma_{xy}^o\end{Bmatrix}=\begin{Bmatrix}486\\-2{,}030\\2{,}180\end{Bmatrix}\times10^{-6}\qquad
\begin{Bmatrix}\epsilon_1(0^\circ)\\\epsilon_2(0^\circ)\\\tfrac12\gamma_{12}(0^\circ)\end{Bmatrix}=\begin{Bmatrix}1{,}745\\-3{,}290\\0\end{Bmatrix}\times10^{-6}$$

In the principal material coordinate system there are substantial shear strains in the ±30° layers, and they are of opposite sign. The extensional strains are identical. Of course, no transformation is necessary for the 0° layers. Substituting the strains for each layer into the stress-strain relation of Eq. (5.6.20) provides the desired stresses:

$$\begin{Bmatrix}\sigma_1(30^\circ)\\\sigma_2(30^\circ)\\\tau_{12}(30^\circ)\end{Bmatrix}=\begin{bmatrix}Q_{11}&Q_{12}&0\\Q_{12}&Q_{22}&0\\0&0&Q_{66}\end{bmatrix}\begin{Bmatrix}\epsilon_1(30^\circ)-\alpha_1\,\Delta T\\\epsilon_2(30^\circ)-\alpha_2\,\Delta T\\\gamma_{12}(30^\circ)\end{Bmatrix}$$

$$=\begin{bmatrix}47.0&4.56&0\\4.56&16.51&0\\0&0&5.62\end{bmatrix}\times10^9\begin{Bmatrix}486-6.51(-100)\\-2{,}030-24.4(-100)\\-4{,}360\end{Bmatrix}\times10^{-6}=\begin{Bmatrix}55.3\\11.94\\-24.5\end{Bmatrix}\ \mathrm{MPa}$$

$$\begin{Bmatrix}\sigma_1(-30^\circ)\\\sigma_2(-30^\circ)\\\tau_{12}(-30^\circ)\end{Bmatrix}=\begin{Bmatrix}55.3\\11.94\\24.5\end{Bmatrix}\ \mathrm{MPa}\qquad
\begin{Bmatrix}\sigma_1(0^\circ)\\\sigma_2(0^\circ)\\\tau_{12}(0^\circ)\end{Bmatrix}=\begin{bmatrix}47.0&4.56&0\\4.56&16.51&0\\0&0&5.62\end{bmatrix}\times10^9\begin{Bmatrix}1{,}745-6.51(-100)\\-3{,}290-24.4(-100)\\0\end{Bmatrix}\times10^{-6}=\begin{Bmatrix}108.7\\-3.11\\0\end{Bmatrix}\ \mathrm{MPa}$$

As might be expected, the stress in the fiber direction in the 0° layers is the highest. The ±30° layers experience a shear stress. If thermal effects were ignored, ΔT would be taken to be zero, resulting in

$$\begin{Bmatrix}\sigma_1(\pm30^\circ)\\\sigma_2(\pm30^\circ)\\\tau_{12}(\pm30^\circ)\end{Bmatrix}=\begin{Bmatrix}68.7\\2.33\\\mp17.12\end{Bmatrix}\ \mathrm{MPa}\qquad
\begin{Bmatrix}\sigma_1(0^\circ)\\\sigma_2(0^\circ)\\\tau_{12}(0^\circ)\end{Bmatrix}=\begin{Bmatrix}106.1\\-8.20\\0\end{Bmatrix}\ \mathrm{MPa}$$

It can be seen that residual thermal effects increase some stress components and decrease others. Even though some stress components are small compared to others, e.g., σ2 compared to σ1, the material is much weaker perpendicular to the fibers than in the fiber direction. The small stress may be approaching levels to cause failure perpendicular to the fibers. Figure 5.6.13 shows the variation through the thickness of the three components of stress in the principal material coordinate system.

Fig. 5.6.13 Through-thickness distribution of stresses: σ1, solid; σ2, dotted; τ12, dashed.

The stresses are constant within each layer, but the overall distributions through the thickness are discontinuous, a characteristic which makes composite materials different from conventional materials such as aluminum or steel. For aluminum or steel, for example, the stress distributions through the thickness would be continuous and there would be no shear stress. Note that there is no net shear stress for the laminate, although there are shear strains, but of opposite sign, for the ±30° layers. This characteristic is due to the balanced nature of the laminate.

EXAMPLE 20. A flat eight-layer [±45/0/90]S quasi-isotropic laminate 0.1 by 0.4 m is loaded by a 0.7 N·m moment, as shown in Fig. 5.6.14. The through-thickness distribution of the stresses in the principal material coordinate system is of interest. The temperature change from the cure temperature condition is −100°C.

APPROACH. The symmetric and balanced nature of the laminate, along with the fact that only a single moment is applied, simplifies the computations considerably. Equations (5.6.53) to (5.6.55) and the material properties from Examples 5 and 12 apply.

Fig. 5.6.14 Laminate loaded by moment.

SOLUTION. If it is assumed that the applied moment is uniformly distributed along the opposite edges, the moment resultant is given by

$$M_x=\frac{-0.7\ \mathrm{N\cdot m}}{0.1\ \mathrm m}=-7\ \mathrm{N\cdot m/m}$$

The values of My and Mxy are zero. There are no unit equivalent thermal moment resultants, and the unit equivalent thermal force resultants are given in Example 5 as

$$\begin{Bmatrix}\hat N_x^T\\\hat N_y^T\\\hat N_{xy}^T\end{Bmatrix}=\begin{Bmatrix}425\\425\\0\end{Bmatrix}\ \mathrm{N/(m\cdot{}^\circ C)}$$

The reference surface strains are due only to the equivalent thermal force resultants and are computed by way of Eqs. (5.6.53) and (5.6.54), with numerical values from Example 12, as

$$\begin{Bmatrix}\epsilon_x^o\\\epsilon_y^o\\\gamma_{xy}^o\end{Bmatrix}=\begin{bmatrix}39.8&-12.26&0\\-12.26&39.8&0\\0&0&104.1\end{bmatrix}\times10^{-9}\begin{Bmatrix}425(-100)\\425(-100)\\0\end{Bmatrix}=\begin{Bmatrix}-1{,}171\\-1{,}171\\0\end{Bmatrix}\times10^{-6}$$

The reference surface curvatures are due to the applied moment and are computed by way of Eq. (5.6.55) and Example 12 as

$$\begin{Bmatrix}\kappa_x^o\\\kappa_y^o\\\kappa_{xy}^o\end{Bmatrix}=\begin{bmatrix}0.588&-0.286&-0.0514\\-0.286&0.662&-0.0638\\-0.0514&-0.0638&0.972\end{bmatrix}\begin{Bmatrix}-7\\0\\0\end{Bmatrix}=\begin{Bmatrix}-4.12\\2.00\\0.360\end{Bmatrix}\ \frac1{\mathrm m}$$

Note that in addition to producing curvature component κ°x, as expected, the single moment Mx produces curvature components κ°y and κ°xy. The former curvature would exist if this were a conventional material such as aluminum or steel, and it is called the anticlastic curvature. It is due to Poisson's ratio, which is reflected in the value of d12 and is related to Q12. The twist curvature κ°xy is due to the existence of d16, which can be related to the existence of nonzero values of Q̄16 in the ±45° layers. Twist curvature would not exist if this were a conventional material.

The strain as a function of z in the laminate coordinate system is given by Eq. (5.6.29) as

$$\begin{Bmatrix}\epsilon_x\\\epsilon_y\\\gamma_{xy}\end{Bmatrix}=\begin{Bmatrix}\epsilon_x^o\\\epsilon_y^o\\\gamma_{xy}^o\end{Bmatrix}+z\begin{Bmatrix}\kappa_x^o\\\kappa_y^o\\\kappa_{xy}^o\end{Bmatrix}=\begin{Bmatrix}-1{,}171\\-1{,}171\\0\end{Bmatrix}\times10^{-6}+z\begin{Bmatrix}-4.12\\2.00\\0.360\end{Bmatrix}$$

These relations are valid for the full range of z, −0.000500 m ≤ z ≤ +0.000500 m. Transforming these strains to the principal material coordinate system by way of Eq. (5.6.24) for the various fiber orientations gives the following.

For −0.000500 m ≤ z ≤ −0.000375 m and 0.000375 m ≤ z ≤ 0.000500 m:

$$\begin{Bmatrix}\epsilon_1(45^\circ)\\\epsilon_2(45^\circ)\\\tfrac12\gamma_{12}(45^\circ)\end{Bmatrix}=[T(45^\circ)]\begin{Bmatrix}\epsilon_x\\\epsilon_y\\\tfrac12\gamma_{xy}\end{Bmatrix}=\begin{Bmatrix}-1{,}171\times10^{-6}-0.878z\\-1{,}171\times10^{-6}-1.238z\\3.06z\end{Bmatrix}$$

For −0.000375 m ≤ z ≤ −0.000250 m and 0.000250 m ≤ z ≤ 0.000375 m:

$$\begin{Bmatrix}\epsilon_1(-45^\circ)\\\epsilon_2(-45^\circ)\\\tfrac12\gamma_{12}(-45^\circ)\end{Bmatrix}=\begin{Bmatrix}-1{,}171\times10^{-6}-1.238z\\-1{,}171\times10^{-6}-0.878z\\-3.06z\end{Bmatrix}$$

For −0.000250 m ≤ z ≤ −0.000125 m and 0.000125 m ≤ z ≤ 0.000250 m:

$$\begin{Bmatrix}\epsilon_1(0^\circ)\\\epsilon_2(0^\circ)\\\tfrac12\gamma_{12}(0^\circ)\end{Bmatrix}=\begin{Bmatrix}-1{,}171\times10^{-6}-4.12z\\-1{,}171\times10^{-6}+2.00z\\0.180z\end{Bmatrix}$$

For −0.000125 m ≤ z ≤ 0.000125 m:

$$\begin{Bmatrix}\epsilon_1(90^\circ)\\\epsilon_2(90^\circ)\\\tfrac12\gamma_{12}(90^\circ)\end{Bmatrix}=\begin{Bmatrix}-1{,}171\times10^{-6}+2.00z\\-1{,}171\times10^{-6}-4.12z\\-0.180z\end{Bmatrix}$$

Note that each equation is valid for two ranges of z, the ranges depending on the z location of the layers with the different fiber orientations. The stresses in the various layers in the principal material coordinate system can be computed directly from Eq. (5.6.20) by using the above strains. Again there is a range of z for each relation, depending on the layer. The stresses in the principal material coordinate system are as follows.

For −0.000500 m ≤ z ≤ −0.000375 m and 0.000375 m ≤ z ≤ 0.000500 m:

$$\begin{Bmatrix}\sigma_1(45^\circ)\\\sigma_2(45^\circ)\\\tau_{12}(45^\circ)\end{Bmatrix}=\begin{bmatrix}47.0&4.56&0\\4.56&16.51&0\\0&0&5.62\end{bmatrix}\times10^9\begin{Bmatrix}-1{,}171\times10^{-6}-0.878z-6.51\times10^{-6}(-100)\\-1{,}171\times10^{-6}-1.238z-24.4\times10^{-6}(-100)\\6.12z\end{Bmatrix}=\begin{Bmatrix}-18.62-46{,}900z\\18.62-24{,}400z\\34{,}400z\end{Bmatrix}\ \mathrm{MPa}$$

For −0.000375 m ≤ z ≤ −0.000250 m and 0.000250 m ≤ z ≤ 0.000375 m:

$$\begin{Bmatrix}\sigma_1(-45^\circ)\\\sigma_2(-45^\circ)\\\tau_{12}(-45^\circ)\end{Bmatrix}=\begin{Bmatrix}-18.62-62{,}100z\\18.62-20{,}100z\\-34{,}400z\end{Bmatrix}\ \mathrm{MPa}$$

For −0.000250 m ≤ z ≤ −0.000125 m and 0.000125 m ≤ z ≤ 0.000250 m:

$$\begin{Bmatrix}\sigma_1(0^\circ)\\\sigma_2(0^\circ)\\\tau_{12}(0^\circ)\end{Bmatrix}=\begin{Bmatrix}-18.62-184{,}300z\\18.62+14{,}310z\\2{,}020z\end{Bmatrix}\ \mathrm{MPa}$$


For −0.000125 m ≤ z ≤ 0.000125 m:

$$\begin{Bmatrix}\sigma_1(90^\circ)\\\sigma_2(90^\circ)\\\tau_{12}(90^\circ)\end{Bmatrix}=\begin{Bmatrix}-18.62+75{,}300z\\18.62-58{,}900z\\-2{,}020z\end{Bmatrix}\ \mathrm{MPa}$$

The first terms in the expressions for the stresses are the stresses due to thermal effects. Because of the quasi-isotropic nature of the laminate, every layer experiences identical thermally induced stresses, and the stress perpendicular to the fiber direction, σ2, is equal to but opposite in sign to the stress in the fiber direction, σ1. There are no thermally induced shear stresses. The second terms, the terms linear in z, are the stresses due to the applied moment. These stresses vary linearly with z within each layer but are discontinuous from layer to layer. The through-thickness distributions of the thermally induced and moment-induced stresses are plotted separately in Figs. 5.6.15 and 5.6.16, respectively.

Fig. 5.6.15 Through-thickness distribution of stresses due to thermal effects: σ1, solid; σ2, dotted; τ12 = 0.

Fig. 5.6.16 Through-thickness distribution of stresses due to applied moment: σ1, solid; σ2, dotted; τ12, dashed.
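Since each stress component in this example is a constant (thermal) term plus a term linear in z (moment), the distributions in Figs. 5.6.15 and 5.6.16 can be regenerated by evaluating the expressions above layer by layer. A sketch for σ1 (Python; the coefficient table simply transcribes the expressions above, with stress in MPa and z in metres):

```python
# (z_lower, z_upper, thermal part, moment slope) for sigma_1, from Example 20
sigma1_layers = [
    (-500e-6, -375e-6, -18.62,  -46_900),   # +45 deg
    (-375e-6, -250e-6, -18.62,  -62_100),   # -45 deg
    (-250e-6, -125e-6, -18.62, -184_300),   #  0 deg
    (-125e-6,  125e-6, -18.62,   75_300),   # 90 deg
    ( 125e-6,  250e-6, -18.62, -184_300),   #  0 deg
    ( 250e-6,  375e-6, -18.62,  -62_100),   # -45 deg
    ( 375e-6,  500e-6, -18.62,  -46_900),   # +45 deg
]

for z_lo, z_hi, thermal, slope in sigma1_layers:
    for z in (z_lo, z_hi):   # stress at the bottom and top of each layer
        print(f"z = {1e3 * z:+.3f} mm   sigma1 = {thermal + slope * z:8.2f} MPa")
```

The same loop with the σ2 and τ12 coefficients listed above gives the remaining two curves; plotting the thermal and moment parts separately reproduces the two figures.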

It is seen that the largest tensile and compressive stresses in the principal material coordinate system due to the applied moment occur in layers 3 and 6, respectively, not in the outer layers. This is very much unlike the stress distributions in a conventional material such as aluminum or steel, where the stresses with the largest magnitude occur at z = ±H/2. This characteristic of layered materials is often overlooked.

ENGINEERING PROPERTIES OF A LAMINATE

Often there is interest in what can be considered smeared engineering properties for a laminate. For example, it may be desirable to treat a laminate as a bar or beam made of a homogeneous material with an equivalent modulus. Since a bar is usually associated with tension, or compression, and a beam is usually associated with bending, the equivalent modulus for a laminated composite bar could be different than the equivalent modulus for a laminated composite beam. This makes sense, since the A's depend differently on the thicknesses and locations of the layers than the D's. Given a bar and a one-dimensional state of stress, the stress-strain behavior can be described by

$$\sigma_x=E_x\epsilon_x\qquad(5.6.62)$$

where Ex is the extensional modulus of the material. If a material is homogeneous, then the stress will be uniform through the thickness. However, it has been shown in the examples that, in general, for a laminated material the stress is not uniform through the thickness. Therefore, consider the stress in Eq. (5.6.62) to be the average stress σ̄x. Then, considering that under tension or compression the strain in a bar constructed of a symmetric balanced laminate would be that given by the geometric midsurface, or reference surface, strain, Eq. (5.6.62) can be rewritten as

$$\bar\sigma_x=E_x\epsilon_x^o\qquad(5.6.63)$$

The integral definition of the stress resultant Nx in Eq. (5.6.31) would reduce to

$$N_x=\bar\sigma_xH\qquad(5.6.64)$$

where, recall, H is the thickness of the laminate. For a bar, Ny and Nxy would be zero, resulting in, from Eq. (5.6.53),

$$\epsilon_x^o=a_{11}N_x\qquad(5.6.65)$$

Combining Eqs. (5.6.63) to (5.6.65) leads to

$$\epsilon_x^o=\frac{\bar\sigma_x}{E_x}=a_{11}N_x=a_{11}\bar\sigma_xH\qquad(5.6.66)$$

By equating the second and fourth parts of the relationship, the equivalent extensional modulus can thus be defined as

$$(E_x)_{\mathrm{extension}}=\frac1{a_{11}H}\qquad(5.6.67)$$

In a similar fashion, if, instead of tension, bending is considered, then for a homogeneous material the well-known moment-curvature relation for bending of a beam in the x direction can be written as

$$M_xb=-E_xI\frac{d^2w^o}{dx^2}=E_xI\kappa_x^o\qquad(5.6.68)$$

where I is the second moment of area of the rectangular cross section of the beam and is given by

$$I=\tfrac1{12}bH^3\qquad(5.6.69)$$

The quantity b is the width of the beam (dimension in the y direction in Fig. 5.6.11) that must be included with the moment, since the definition of moment Mx as used herein is moment per unit in-plane dimension in the y direction. For bending of a beam, My and Mxy would be zero, so the curvature in the x direction for a symmetric balanced laminate is given by, from Eq. (5.6.55),

$$\kappa_x^o=d_{11}M_x\qquad(5.6.70)$$

Combining Eqs. (5.6.68) to (5.6.70) results in

$$\kappa_x^o=\frac{M_xb}{E_xI}=\frac{M_xb}{E_x\left(\tfrac1{12}bH^3\right)}=d_{11}M_x\qquad(5.6.71)$$

Using the third and fourth parts of the relationship provides the definition of the bending modulus as

$$(E_x)_{\mathrm{bending}}=\frac{12}{d_{11}H^3}\qquad(5.6.72)$$

EXAMPLE 21. Compute the extensional and bending modulus in the x direction for an eight-layer [±45/0/90]S laminate.

APPROACH. Use the results of Example 12 directly in Eqs. (5.6.67) and (5.6.72).

SOLUTION

$$(E_x)_{\mathrm{extension}}=\frac1{a_{11}H}=\frac1{39.8\times10^{-9}\,(8\times0.000125)}=25.1\ \mathrm{GPa}$$

$$(E_x)_{\mathrm{bending}}=\frac{12}{d_{11}H^3}=\frac{12}{0.588\,(8\times0.000125)^3}=20.4\ \mathrm{GPa}$$

Note the bending modulus is lower than the extensional modulus because the 0° layers are near the center of the laminate and have less influence on resistance to bending than on resistance to extension.
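Equations (5.6.67) and (5.6.72) are one-line computations. A small check of the Example 21 values (Python; variable names are illustrative):

```python
a11 = 39.8e-9          # m/N, from Example 12
d11 = 0.588            # 1/(N.m), from Example 12
H   = 8 * 0.000125     # laminate thickness, m

Ex_extension = 1.0 / (a11 * H)       # Eq. (5.6.67)
Ex_bending   = 12.0 / (d11 * H**3)   # Eq. (5.6.72)
print(Ex_extension / 1e9, Ex_bending / 1e9)   # approx 25.1 and 20.4 GPa
```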

FAILURE OF COMPOSITE MATERIALS

Generally, the calculation of the stresses within a laminate is carried out so that the values of the stresses can be compared with values known to cause failure. Knowing the levels of the stresses without having a metric to compare the stresses to is generally not useful. Most often it is of interest to know what value of applied load for a particular laminate in a particular loading situation causes failure. As might be expected, failure of fiber-reinforced composite materials is a complex process, and historically no aspect of the behavior of composite materials has been studied and debated more than the issue of failure. The topic is complex because of the multiple modes of failure and because the strength of a composite material in tension is considerably different than the strength in compression. A composite can fail due to excess stress in the fiber direction, and the strength in the fiber direction in tension is greater than the strength in compression. Alternatively, a composite material can fail because of excess stress perpendicular to the fibers. The strength perpendicular to the fibers in tension is considerably less than the strength in compression. And finally, a composite can fail due to excess shear stress. The strength in shear is considerably less than the strength in tension in the fiber direction, although the strength in shear does not depend on the sign of the shear stress. These issues are further complicated by the fact that when the fibers are oriented at an angle relative to the loading direction, through the transformation relations, there is a stress component in the fiber direction, a component perpendicular to the fiber direction, and a shear stress. Which component of stress leads to failure? Or is it a combination of the components? If it is a combination, how should the stresses be combined to predict failure?

There are many issues related to failure of composite materials, and there are a number of failure theories. Some theories are quite straightforward, while others are complicated and require a considerable number of experiments to determine all the parameters necessary to implement the theory. No one theory, or criterion, works perfectly for all materials in all situations. It is better to think of failure theories as indicators of failure rather than predictors of failure. Also, failure in one layer due to excess stress perpendicular to the fibers, for example, does not necessarily mean catastrophic failure of the entire laminate. The other layers may be able to assume a greater portion of the load to compensate for the failed layer. Thus the issue of failure really becomes one of damage accumulation. With this viewpoint, failure of a laminate will occur when there is sufficient damage accumulation that not enough undamaged material remains to support the applied load. The level of damage accumulation that can be accepted depends on the application. For example, excess tensile stress perpendicular to the fibers generally causes cracking of a composite material, the cracks being parallel with the fibers. If some cracking is acceptable, then the prediction of cracking does not equate to failure of the laminate. However, the existence of any cracking at all is equivalent to the accumulation of some degree of damage. How much cracking can be tolerated depends very much on the application.

One failure theory that is easy to implement is the maximum stress failure criterion.
The maximum stress failure criterion independently treats tension failure in the fiber direction, compression failure in the fiber direction, tension failure perpendicular to the fiber direction, compression failure perpendicular to the fiber direction, and failure in shear. There are thus five possible modes of failure for a plane stress state. A known level of failure stress for each mode is required. Failure in tension in the fiber direction is a direct result of fibers fracturing or otherwise breaking. Failure in compression in the fiber direction is due to the kinking of fibers, generally due to the lack of support of the matrix material surrounding the fibers and the development of shear, or kink, bands within the fiber. Failure in tension perpendicular to the fibers is due to failure of the matrix material between the fibers, failure of the


fibers, failure of the bond between the matrix material and the fibers, or a combination of all three failures. Failure in compression perpendicular to the fibers is generally due to crushing of the matrix material. Finally, failure in shear is also due to failure of the matrix material between the fibers, failure of the fibers, failure of the bond between the matrix material and the fibers, or a combination of all three. It is important to realize there is a variability to failure strengths, so the approach to take is to employ allowable stress levels for each mode that adequately account for the variability. If these definitions for failure stresses in each of the modes are used,

σ1ᵀ = tensile failure stress in the 1 direction
σ1ᶜ = compression failure stress in the 1 direction (a negative number)
σ2ᵀ = tensile failure stress in the 2 direction
σ2ᶜ = compression failure stress in the 2 direction (a negative number)
τ12ᶠ = shear failure stress in the 1-2 plane (a positive number)

then the maximum stress failure criterion states that failure occurs at a point in a laminate if any of the following six equalities are satisfied:

$$\sigma_1=\sigma_1^T\qquad\sigma_1=\sigma_1^C\qquad\sigma_2=\sigma_2^T\qquad\sigma_2=\sigma_2^C\qquad\tau_{12}=\tau_{12}^F\qquad\tau_{12}=-\tau_{12}^F\qquad(5.6.73)$$

The last two equations can be replaced by one, namely,

$$|\tau_{12}|=\tau_{12}^F\qquad(5.6.74)$$
EXAMPLE 22. A thin-walled cylindrical pressure vessel with radius R = 0.25 m is constructed using eight layers of glass-fiber-reinforced material and a stacking sequence of [±60/0/90]S. The fiber angles are specified relative to the axial direction of the cylinder. The end caps of the vessel are suitably designed and reinforced, so the concern is with failure away from the end caps. The change in temperature due to curing is −100°C. The level of internal pressure is increased from zero. In what layer, or layers, does the first failure occur, what is the mode of failure, and what is the pressure level?

APPROACH. Since interest centers on the stresses away from the end caps, the stress resultants acting within the cylinder wall due to the internal pressure can be determined by considering the equilibrium of a section of cylinder away from the end caps, just as is done with conventional materials. These stress resultants will be a function of the cylinder radius as well as the pressure. By using the stress resultants, a stress analysis of the laminate can be conducted.

SOLUTION. As shown in Fig. 5.6.17, by considering equilibrium in the Y direction of a section of cylinder of length ΔL, and equilibrium in the X direction of the flat end cap, the axial and circumferential stress resultants acting on the cylinder wall can be computed as a function of pressure and cylinder radius.

Fig. 5.6.17 Forces acting on section of cylinder away from end caps and on end caps.


Summing forces in the X direction on the end cap and in the Y direction on the section of cylinder results in

$$\Sigma F_X=0{:}\quad p\,\pi R^2-N_x\,2\pi R=0\ \ \rightarrow\ \ N_x=\tfrac12pR$$

$$\Sigma F_Y=0{:}\quad 2pR\,\Delta L-2N_\theta\,\Delta L=0\ \ \rightarrow\ \ N_\theta=pR$$

or

$$\begin{Bmatrix}N_x\\N_\theta\\N_{x\theta}\end{Bmatrix}=\begin{Bmatrix}\tfrac12pR\\pR\\0\end{Bmatrix}$$

where, for generality, the results are left in terms of p and R. These are the only two nonzero stress resultants acting within the cylinder wall away from the end caps due to the internal pressure. There are no moment resultants. There are, of course, equivalent thermal force resultants from Example 9 given by

$$\begin{Bmatrix}\hat N_x^T\\\hat N_\theta^T\\\hat N_{x\theta}^T\end{Bmatrix}=\begin{Bmatrix}427\\423\\0\end{Bmatrix}\ \mathrm{N/(m\cdot{}^\circ C)}$$

As can be seen, the subscript y in the nomenclature has been replaced by the subscript θ for this particular problem. This problem is like Example 19 in that there are no moment resultants, and therefore no reference surface curvatures, only reference surface strains. However, the problem is unlike Example 19 in that the applied load level, in this case the internal pressure level, is not known. This is not a problem; the unknown pressure can be carried along symbolically in the calculations. Accordingly, from Eqs. (5.6.53) and (5.6.54), with numerical values from Examples 9 and 16, the reference surface strains are given by

$$\begin{Bmatrix}\epsilon_x^o\\\epsilon_\theta^o\\\gamma_{x\theta}^o\end{Bmatrix}=[a]\begin{Bmatrix}N_x+\hat N_x^T\,\Delta T\\N_\theta+\hat N_\theta^T\,\Delta T\\N_{x\theta}+\hat N_{x\theta}^T\,\Delta T\end{Bmatrix}=\begin{bmatrix}43.1&-10.00&0\\-10.00&33.0&0\\0&0&116.2\end{bmatrix}\times10^{-9}\begin{Bmatrix}\tfrac12pR+427(-100)\\pR+423(-100)\\0\end{Bmatrix}$$

For convenience, the pressure p in pascals is converted to atmospheres by using the relation p = 101,400pₐ, where pₐ is the internal pressure in atmospheres. The reference surface strains become

$$\begin{Bmatrix}\epsilon_x^o\\\epsilon_\theta^o\\\gamma_{x\theta}^o\end{Bmatrix}=\begin{Bmatrix}-1{,}418+1{,}172\,p_aR\\-970+2{,}840\,p_aR\\0\end{Bmatrix}\times10^{-6}$$

The strains in the principal material coordinate system for the layers with 60° fiber orientation are computed from the transformation relation of Eq. (5.6.24) as follows.

$$\begin{Bmatrix}\epsilon_1(60^\circ)\\\epsilon_2(60^\circ)\\\tfrac12\gamma_{12}(60^\circ)\end{Bmatrix}=[T(60^\circ)]\begin{Bmatrix}\epsilon_x^o\\\epsilon_\theta^o\\\tfrac12\gamma_{x\theta}^o\end{Bmatrix}$$

Carrying out the algebra and doing the same for the other layer orientations give

$$\begin{Bmatrix}\epsilon_1(60^\circ)\\\epsilon_2(60^\circ)\\\gamma_{12}(60^\circ)\end{Bmatrix}=\begin{Bmatrix}-1{,}082+2{,}420\,p_aR\\-1{,}306+1{,}589\,p_aR\\388+1{,}446\,p_aR\end{Bmatrix}\times10^{-6}\qquad
\begin{Bmatrix}\epsilon_1(-60^\circ)\\\epsilon_2(-60^\circ)\\\gamma_{12}(-60^\circ)\end{Bmatrix}=\begin{Bmatrix}-1{,}082+2{,}420\,p_aR\\-1{,}306+1{,}589\,p_aR\\-388-1{,}446\,p_aR\end{Bmatrix}\times10^{-6}$$

$$\begin{Bmatrix}\epsilon_1(0^\circ)\\\epsilon_2(0^\circ)\\\gamma_{12}(0^\circ)\end{Bmatrix}=\begin{Bmatrix}-1{,}418+1{,}172\,p_aR\\-970+2{,}840\,p_aR\\0\end{Bmatrix}\times10^{-6}\qquad
\begin{Bmatrix}\epsilon_1(90^\circ)\\\epsilon_2(90^\circ)\\\gamma_{12}(90^\circ)\end{Bmatrix}=\begin{Bmatrix}-970+2{,}840\,p_aR\\-1{,}418+1{,}172\,p_aR\\0\end{Bmatrix}\times10^{-6}$$

The stresses in the various layers can be computed using Eq. (5.6.20) and the above strains; for the 60° layers,

$$\begin{Bmatrix}\sigma_1(60^\circ)\\\sigma_2(60^\circ)\\\tau_{12}(60^\circ)\end{Bmatrix}=\begin{bmatrix}47.0&4.56&0\\4.56&16.51&0\\0&0&5.62\end{bmatrix}\times10^9\begin{Bmatrix}-1{,}082+2{,}420\,p_aR-6.51(-100)\\-1{,}306+1{,}589\,p_aR-24.4(-100)\\388+1{,}446\,p_aR\end{Bmatrix}\times10^{-6}$$

Carrying out the algebra, substituting R = 0.25 m, and doing similar calculations for the other three layer angles result in

$$\begin{Bmatrix}\sigma_1(\pm60^\circ)\\\sigma_2(\pm60^\circ)\\\tau_{12}(\pm60^\circ)\end{Bmatrix}=\begin{Bmatrix}-15.08+30.3\,p_a\\16.78+9.32\,p_a\\\pm(2.18+2.03\,p_a)\end{Bmatrix}\ \mathrm{MPa}\qquad
\begin{Bmatrix}\sigma_1(0^\circ)\\\sigma_2(0^\circ)\\\tau_{12}(0^\circ)\end{Bmatrix}=\begin{Bmatrix}-29.3+17.00\,p_a\\20.8+13.07\,p_a\\0\end{Bmatrix}\ \mathrm{MPa}\qquad
\begin{Bmatrix}\sigma_1(90^\circ)\\\sigma_2(90^\circ)\\\tau_{12}(90^\circ)\end{Bmatrix}=\begin{Bmatrix}-10.34+34.7\,p_a\\15.45+8.08\,p_a\\0\end{Bmatrix}\ \mathrm{MPa}$$

By using Eq. (5.6.73), the stresses in each layer are equated to the failure levels for each failure mode, and the value of pₐ is computed for each mode. Interest is in positive values of pₐ, corresponding to internal pressure as the problem is formulated here. However, for the sake of completeness, all six failure equations will be considered for the four fiber angles, resulting in 24 values of pₐ. The smallest positive value is the value of internal pressure that first causes failure as the pressure is increased from zero. The corresponding layer and the corresponding mode of failure can be determined from the particular equation that leads to this lowest positive value of pₐ. For convenience, the six equations of Eq. (5.6.73) are expressed as

$$\begin{Bmatrix}\sigma_1\\\sigma_2\\\tau_{12}\end{Bmatrix}=\begin{Bmatrix}\sigma_1^T\\\sigma_2^T\\\tau_{12}^F\end{Bmatrix}=\begin{Bmatrix}1{,}000\\30\\70\end{Bmatrix}\ \mathrm{MPa}\qquad\qquad
\begin{Bmatrix}\sigma_1\\\sigma_2\\\tau_{12}\end{Bmatrix}=\begin{Bmatrix}\sigma_1^C\\\sigma_2^C\\-\tau_{12}^F\end{Bmatrix}=\begin{Bmatrix}-600\\-120\\-70\end{Bmatrix}\ \mathrm{MPa}$$

where typical failure stress levels have been used. For the 60° layers, using megapascals, the above six failure equations are

$$\begin{Bmatrix}-15.08+30.3\,p_a\\16.78+9.32\,p_a\\2.18+2.03\,p_a\end{Bmatrix}=\begin{Bmatrix}1{,}000\\30\\70\end{Bmatrix}\qquad\qquad
\begin{Bmatrix}-15.08+30.3\,p_a\\16.78+9.32\,p_a\\2.18+2.03\,p_a\end{Bmatrix}=\begin{Bmatrix}-600\\-120\\-70\end{Bmatrix}$$

For the −60° layers, the six failure equations are

$$\begin{Bmatrix}-15.08+30.3\,p_a\\16.78+9.32\,p_a\\-2.18-2.03\,p_a\end{Bmatrix}=\begin{Bmatrix}1{,}000\\30\\70\end{Bmatrix}\qquad\qquad
\begin{Bmatrix}-15.08+30.3\,p_a\\16.78+9.32\,p_a\\-2.18-2.03\,p_a\end{Bmatrix}=\begin{Bmatrix}-600\\-120\\-70\end{Bmatrix}$$

For the 0° layers, the six failure equations are

$$\begin{Bmatrix}-29.3+17.00\,p_a\\20.8+13.07\,p_a\\0\end{Bmatrix}=\begin{Bmatrix}1{,}000\\30\\70\end{Bmatrix}\qquad\qquad
\begin{Bmatrix}-29.3+17.00\,p_a\\20.8+13.07\,p_a\\0\end{Bmatrix}=\begin{Bmatrix}-600\\-120\\-70\end{Bmatrix}$$

For the 90° layers, the six failure equations are

$$\begin{Bmatrix}-10.34+34.7\,p_a\\15.45+8.08\,p_a\\0\end{Bmatrix}=\begin{Bmatrix}1{,}000\\30\\70\end{Bmatrix}\qquad\qquad
\begin{Bmatrix}-10.34+34.7\,p_a\\15.45+8.08\,p_a\\0\end{Bmatrix}=\begin{Bmatrix}-600\\-120\\-70\end{Bmatrix}$$

From these 24 equations there are 10 positive values of pₐ, 10 negative values of pₐ, and 4 values of infinity. The infinite values come from the two shear equations for the 0° layers and the two shear equations for the 90° layers. From these multiple values of pₐ, the lowest level of internal pressure is pₐ = 0.704 atm. This value is obtained from the solution of the second equation for the positive failure stress levels for the 0° layers, i.e.,

$$20.8+13.07\,p_a=30$$


This is interpreted to mean that at pₐ = 0.704 atm cracks parallel to the fibers occur in the 0° layers due to a high tensile level of stress component σ2. If matrix cracking is to be avoided, only pressures below this level are allowed. The answer to the original question is that failure occurs when pₐ = 0.704 atm, the mode of failure is excess tensile stress perpendicular to the fibers, and it occurs in the 0° layers. To go one step beyond what was asked for, however, a negative pressure of magnitude 10.78 atm is obtained from the solution of the second equation for negative failure stress levels for the 0° layers, i.e.,

$$20.8+13.07\,p_a=-120$$

This value is the negative value of pₐ with the lowest magnitude. It is interpreted to mean that an external pressure of 10.78 atm will cause failure of the 0° layers due to a high compressive level of stress component σ2, assuming that the cylinder has not buckled due to the external pressure.
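Because each layer stress in this example is linear in pₐ, the 24 failure equations can be generated and solved mechanically. The sketch below (Python; the data layout is a convenience, not from the handbook) transcribes the stress expressions and allowables from above and recovers the governing pressure of about 0.704 atm in the 0° layers:

```python
# Each stress component as a (constant, slope) pair in MPa, per the Example 22 expressions
layers = {
    "+60": ((-15.08, 30.3), (16.78, 9.32), ( 2.18,  2.03)),
    "-60": ((-15.08, 30.3), (16.78, 9.32), (-2.18, -2.03)),
    "0":   ((-29.3, 17.00), (20.8, 13.07), ( 0.0,   0.0)),
    "90":  ((-10.34, 34.7), (15.45, 8.08), ( 0.0,   0.0)),
}
s1T, s1C, s2T, s2C, tF = 1000.0, -600.0, 30.0, -120.0, 70.0   # allowables, MPa

candidates = []
for name, (s1, s2, t12) in layers.items():
    checks = [(s1, s1T, "s1 tension"), (s1, s1C, "s1 compression"),
              (s2, s2T, "s2 tension"), (s2, s2C, "s2 compression"),
              (t12, tF, "shear"), (t12, -tF, "shear")]
    for (const, slope), limit, mode in checks:
        if slope != 0.0:                      # a zero slope corresponds to an infinite pressure
            p = (limit - const) / slope       # solve const + slope * pa = limit
            if p > 0.0:                       # keep only internal (positive) pressures
                candidates.append((p, name, mode))

print(min(candidates))   # approx (0.704, '0', 's2 tension')
```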

Section

6

Materials of Engineering BY

EUGENE A. AVALLONE Consulting Engineer; Professor of Mechanical Engineering, Emeritus,

The City College of The City University of New York HAROLD W. PAXTON United States Steel Professor Emeritus, Carnegie Mellon University JAMES D. REDMOND Principal, Technical Marketing Resources, Inc. MALCOLM BLAIR Technical and Research Director, Steel Founders Society of America ROBERT E. EPPICH Vice President, Technology, American Foundry Society L. D. KUNSMAN Late Fellow Engineer, Research Labs, Westinghouse Electric Corp. C. L. CARLSON Late Fellow Engineer, Research Labs, Westinghouse Electric Corp. J. RANDOLPH KISSEL President, The TGB Partnership LARRY F. WIESERMAN Senior Technical Supervisor, ALCOA RICHARD L. BRAZILL Technology Specialist, ALCOA FRANK E. GOODWIN Executive Vice President, ILZRO, Inc. DON GRAHAM Manager, Turning Products, Carboloy, Inc. ARTHUR COHEN Formerly Manager, Standards and Safety Engineering, Copper Development

Assn. JOHN H. TUNDERMANN Formerly Vice President, Research and Technology, INCO

International, Inc. JAMES D. SHEAROUSE, III Late Senior Development Engineer, The Dow Chemical Co. PETER K. JOHNSON Director, Marketing and Public Relations, Metal Powder Industries

Federation JOHN R. SCHLEY Manager, Technical Marketing, RMI Titanium Co. ROBERT D. BARTHOLOMEW Associate, Sheppard T. Powell Associates, LLC DAVID A. SHIFLER MERA Metallurgical Services DAVID W. GREEN Supervisory Research General Engineer, Forest Products Lab, USDA ROLAND HERNANDEZ Research Engineer, Forest Products Lab, USDA JOSEPH F. MURPHY Research General Engineer, Forest Products Lab, USDA ROBERT J. ROSS Supervisory Research General Engineer, Forest Products Lab, USDA WILLIAM T. SIMPSON Research Forest Products Technologist, Forest Products Lab, USDA ANTON TENWOLDE Supervisory Research Physicist, Forest Products Lab, USDA ROBERT H. WHITE Supervisory Wood Scientist, Forest Products Lab, USDA STAN LEBOW Research Forest Products Technologist, Forest Products Lab, USDA ALI M. SADEGH Professor of Mechanical Engineering, The City College of The City University of

New York WILLIAM L. GAMBLE Professor Emeritus of Civil and Environmental Engineering, University of

Illinois at Urbana-Champaign ARNOLD S. VERNICK Formerly Associate, Geraghty & Miller, Inc. GLENN E. ASAUSKAS Lubrication Engineer, Chevron Corp. STEPHEN R. SWANSON Professor of Mechanical Engineering, University of Utah

6.1

GENERAL PROPERTIES OF MATERIALS Revised by E. A. Avallone Chemistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3 Specific Gravities and Densities and Other Physical Data . . . . . . . . . . . . . . . 6-7 6.2 IRON AND STEEL by Harold W. Paxton Classification of Iron and Steel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12 Steel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13 Effect of Alloying Elements on the Properties of Steel. . . . . . . . . . . . . . . . . 6-18

Principles of Heat Treatment of Iron and Steel. . . . . . . . . . . . . . . . . . . . . . . 6-19 Composite Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-20 Thermomechanical Treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-21 Commercial Steels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-21 Tool Steels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-28 Spring Steel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-29 Special Alloy Steels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-29 Stainless Steels (BY JAMES D. REDMOND) . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-29 6-1


6.3 IRON AND STEEL CASTINGS by Malcolm Blair and Robert E. Eppich Classification of Castings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-34 Cast Iron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-35 Steel Castings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-40 6.4

NONFERROUS METALS AND ALLOYS; METALLIC SPECIALITIES Introduction (BY L. D. KUNSMAN AND C. L. CARLSON, Amended by Staff) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-46 Aluminum and Its Alloys (BY J. RANDOLPH KISSELL, LARRY F. WIESERMAN, AND RICHARD L. BRAZILL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-49 Bearing Metals (BY FRANK E. GOODWIN) . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-58 Cemented Carbides (BY DON GRAHAM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-58 Copper and Copper Alloys (BY ARTHUR COHEN) . . . . . . . . . . . . . . . . . . . . . 6-62 Jewelry Metals (Staff Contribution) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-71 Low-Melting-Point Metals and Alloys (BY FRANK E. GOODWIN) . . . . . . . . . 6-72 Metals and Alloys for Use at Elevated Temperatures (BY JOHN H. TUNDERMANN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-73 Metals and Alloys for Nuclear Energy Applications (BY L. D. KUNSMAN AND C. L. CARLSON; Amended by staff) . . . . . . . . . . . . . . . . . . . . . . . . . 6-79 Magnesium and Magnesium Alloys (BY JAMES D. SHEAROUSE, III) . . . . . . . 6-82 Powdered Metals (BY PETER K. JOHNSON) . . . . . . . . . . . . . . . . . . . . . . . . . . 6-83 Nickel and Nickel Alloys (BY JOHN H. TUNDERMANN) . . . . . . . . . . . . . . . . . 6-86 Titanium and Zirconium (BY JOHN R. SCHLEY). . . . . . . . . . . . . . . . . . . . . . . 6-88 Zinc and Zinc Alloys (BY FRANK E. GOODWIN) . . . . . . . . . . . . . . . . . . . . . . 6-90 6.5 CORROSION by Robert D. Bartholomew and David A. Shifler Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-92 Thermodynamics of Corrosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-93 Corrosion Kinetics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-94 Factors Influencing Corrosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-95 Forms of Corrosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-97 Corrosion Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-102 Corrosion Protection Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-103 Corrosion in Industrial and Power Plant Steam-Generating Systems . . . . . 6-105 Corrosion in Heating and Cooling Water Systems and Cooling Towers . . . 6-109 Corrosion in the Chemical Process Industry. . . . . . . . . . . . . . . . . . . . . . . . 6-110 6.6

PAINTS AND PROTECTIVE COATINGS Revised by Staff Paint Ingredients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-111 Paints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-111 Other Protective and Decorative Coatings . . . . . . . . . . . . . . . . . . . . . . . . . 6-113 Varnish . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-114 Lacquer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-115 6.7 WOOD by Staff, Forest Products Laboratory, USDA Forest Service. Prepared under the direction of David W. Green Composition, Structure, and Nomenclature (BY DAVID W. GREEN) . . . . . . . 6-115 Physical and Mechanical Properties of Clear Wood (BY DAVID W. GREEN, ROBERT WHITE, ANTON TENWOLDE, WILLIAM SIMPSON, JOSEPH MURPHY, AND ROBERT ROSS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-116 Properties of Lumber Products (BY ROLAND HERNANDEZ AND DAVID W. GREEN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-121 Properties of Structural Panel Products (BY ROLAND HERNANDEZ) . . . . . . . 6-127 Durability of Wood in Construction (BY STAN LEBOW AND ROBERT WHITE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-129 Commercial Lumber Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-131 6.8

NONMETALLIC MATERIALS by Ali M. Sadegh Abrasives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-131 Adhesives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-133

Brick, Block, and Tile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-134 Ceramics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-139 Cleansing Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-140 Cordage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-141 Electrical Insulating Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-141 Fibers and Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-143 Freezing Preventives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-144 Glass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-145 Natural Stones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-146 Paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-147 Roofing Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-148 Rubber and Rubberlike Materials (Elastomers) . . . . . . . . . . . . . . . . . . . . . 6-150 Solvents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-151 Thermal Insulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-153 Silicones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-154 Refractories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-154 Sealants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-158 6.9

CEMENT, MORTAR, AND CONCRETE by William L. Gamble Cement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-162 Lime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-163 Aggregates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-163 Water. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-164 Admixtures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-164 Mortars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-165 Concrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-166 6.10 WATER by Arnold S. Vernick and Amended by Staff Water Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-171 Measurements and Definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-172 Industrial Water . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-174 Water Pollution Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-175 Water Desalination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-176 6.11

LUBRICANTS AND LUBRICATION by Glenn E. Asauskas Lubricants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-180 Liquid Lubricants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-180 Lubrication Regimes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-181 Lubricant Testing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-181 Viscosity Tests. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-181 Other Physical and Chemical Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-182 Greases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-183 Solid Lubricants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-184 Lubrication Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-185 Lubrication of Specific Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-185 6.12 PLASTICS Staff Contribution General Overview of Plastics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-189 Raw Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-189 Primary Fabrication Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-205 Additives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-205 Adhesives, Assembly, and Finishes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-205 Recycling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-205 6.13

FIBER COMPOSITE MATERIALS by Stephen R. Swanson Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-206 Typical Advanced Composites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-206 Fibers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-206 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-206 Material Forms and Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-207 Design and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-207

6.1 GENERAL PROPERTIES OF MATERIALS Revised by E. A. Avallone REFERENCES: “International Critical Tables,” McGraw-Hill. “Smithsonian Physical Tables,” Smithsonian Institution. Landolt, “Landolt-Börnstein, Zahlenwerte und Funktionen aus Physik, Chemie, Astronomie, Geophysik und Technik,” Springer. “Handbook of Chemistry and Physics,” Chemical Rubber Co. “Book of ASTM Standards,” ASTM. “ASHRAE Refrigeration Data Book,” ASHRAE. Brady, “Materials Handbook,” McGraw-Hill. Mantell, “Engineering Materials Handbook,” McGraw-Hill. International Union of Pure and Applied Chemistry, Butterworth Scientific Publications. “U.S. Standard Atmosphere,” Government Printing Office. Tables of Thermodynamic Properties of Gases, NIST Circ. 564, ASME Steam Tables.

Thermodynamic properties of a variety of other specific materials are listed also in Secs. 4.1, 4.2, and 9.8. Sonic properties of several materials are listed in Sec. 12.6.

CHEMISTRY

Every elementary substance is made up of exceedingly small particles called atoms which are all alike and which cannot be further subdivided or broken up by chemical processes. It will be noted that this statement is

virtually a definition of the term elementary substance and a limitation of the term chemical process. There are as many different classes or families of atoms as there are chemical elements. See Table 6.1.1. Two or more atoms, either of the same kind or of different kinds, are, in the case of most elements, capable of uniting with one another to form a higher order of distinct particles called molecules. If the molecules or atoms of which any given material is composed are all exactly alike, the material is a pure substance. If they are not all alike, the material is a mixture. If the atoms which compose the molecules of any pure substances are all of the same kind, the substance is, as already stated, an elementary substance. If the atoms which compose the molecules of a pure chemical substance are not all of the same kind, the substance is a compound substance. The atoms are to be considered as the smallest particles which occur separately in the structure of molecules of either compound or elementary substances, so far as can be determined by ordinary chemical analysis. The molecule of an element consists of a definite (usually small) number of its atoms. The molecule of a compound consists of one or more atoms of each of its several elements, the numbers of the

Table 6.1.1 Chemical Elementsᵃ

Element

Symbol

Atomic no.

Actinium Aluminum Americium Antimony Argonc Arsenicd Astatine Barium Berkelium Beryllium Bismuth Borond Brominee Cadmium Calcium Californium Carbond Cerium Cesiumk Chlorine f Chromium Cobalt Columbium (see Niobium) Copper Curium Dysprosium Einsteinium Erbium Europium Fermium Fluorine g Francium Gadolinium Galliumk Germanium Gold Hafnium Heliumc Holmium Hydrogenh Indium Iodined

Ac A1 Am Sb Ar As At Ba Bk Be Bi B Br Cd Ca Cf C Ce Cs Cl Cr Co

89 13 95 51 18 33 85 56 97 4 83 5 35 48 20 98 6 58 55 17 24 27

Cu Cm Dy Es Er Eu Fm F Fr Gd Ga Ge Au Hf He Ho H In I

29 96 66 99 68 63 100 9 87 64 31 32 79 72 2 67 1 49 53

Atomic weightb 26.9815

Valence 3

121.75 39.948 74.9216

3, 5 0 3, 5

137.34

2

9.0122 208.980 10.811l 79.904m 112.40 40.08

2 3, 5 3 1, 3, 5 2 2

12.01115l 140.12 132.905 35.453m 51.996m 58.9332

2, 4 3, 4 1 1, 3, 5, 7 2, 3, 6 2, 3

63.546m

1, 2

162.50

3

167.26 151.96

3 2, 3

18.9984 157.25 69.72 72.59 196.967 178.49 4.0026 164.930 1.00797i 114.82 126.9044

1 3 2, 3 2, 4 1, 3 4 0 3 1 1, 2, 3 1, 3, 5, 7

6-4

GENERAL PROPERTIES OF MATERIALS Table 6.1.1

Chemical Elementsa

(Continued)

Element

Symbol

Atomic no.

Atomic weightb

Valence

Iridium Iron Kryptonc Lanthanum Lead Lithiumi Lutetium Magnesium Manganese Mendelevium Mercurye Molybdenum Neodymium Neonc Neptunium Nickel Niobium Nitrogenf Nobelium Osmium Oxygen f Palladium Phosphorusd Platinum Plutonium Polonium Potassium Praseodymium Promethium Protactinium Radium Radon j Rhenium Rhodium Rubidium Ruthenium Samarium Scandium Seleniumd Silicond Silver Sodium Strontium Sulfurd Tantalum Technetium Telluriumd Terbium Thallium Thorium Thulium Tin Titanium Tungsten Uranium Vanadium Xenonc Ytterbium Yttrium Zinc Zirconium

Ir Fe Kr La Pb Li Lu Mg Mn Md Hg Mo Nd Ne Np Ni Nb N No Os O Pd P Pt Pu Po K Pr Pm Pa Ra Rn Re Rh Rb Ru Sm Sc Se Si Ag Na Sr S Ta Tc Te Tb T1 Th Tm Sn Ti W U V Xe Yb Y Zn Zr

77 26 36 57 82 3 71 12 25 101 80 42 60 10 93 28 41 7 102 76 8 46 15 78 94 84 19 59 61 91 88 86 75 45 37 44 62 21 34 14 47 11 38 16 73 43 52 65 81 90 69 50 22 74 92 23 54 70 39 30 40

192.2 55.847m 83.80 138.91 207.19 6.939 174.97 24.312 54.9380

2, 3, 4, 6 2, 3 0 3 2, 4 1 3 2 2, 3, 4, 6, 7

200.59 95.94 144.24 20.183

1, 2 3, 4, 5, 6 3 0

58.71 92.906 14.0067

2, 3, 4 2, 3, 4, 5 3, 5

190.2 15.9994l 106.4 30.9738 195.09

2, 3, 4, 6, 8 2 2, 4 3, 5 2, 4

39.102 140.907

2, 4 1 3 5

186.2 102.905 85.47 101.07 150.35 44.956 78.96 28.086l 107.868m 22.9898 87.62 32.064l 180.948

2 0 1, 4, 7 3, 4 1 3, 4, 6, 8 3 3 2, 4, 6 4 1 1 2 2, 4, 6 4, 5

127.60 158.924 204.37 232.038 168.934 118.69 47.90 183.85 238.03 50.942 131.30 173.04 88.905 65.37 91.22

2, 4, 6 3 1, 3 3 3 2, 4 3, 4 3, 4, 5, 6 4, 6 1, 2, 3, 4, 5 0 2, 3 3 2 4

a All the elements for which atomic weights are listed are metals, except as otherwise indicated. No atomic weights are listed for most radioactive elements, as these elements have no fixed value. b The atomic weights are based upon nuclidic mass of C¹² = 12. c Inert gas. d Metalloid. e Liquid. f Gas. g Most active gas. h Lightest gas. i Lightest metal. j Not placed. k Liquid at 25°C. l The atomic weight varies because of natural variations in the isotopic composition of the element. The observed ranges are boron, ±0.003; carbon, ±0.00005; hydrogen, ±0.00001; oxygen, ±0.0001; silicon, ±0.001; sulfur, ±0.003. m The atomic weight is believed to have an experimental uncertainty of the following magnitude: bromine, ±0.001; chlorine, ±0.001; chromium, ±0.001; copper, ±0.001; iron, ±0.003; silver, ±0.001. For other elements, the last digit given is believed to be reliable to ±0.5.
SOURCE: Table courtesy IUPAC and Butterworth Scientific Publications.


various kinds of atoms and their arrangement being definite and fixed and determining the character of the compound. This notion of molecules and their constituent atoms is useful for interpreting the observed fact that chemical reactions—e.g., the analysis of a compound into its elements, the synthesis of a compound from the elements, or the changing of one or more compounds into one or more different compounds— take place so that the masses of the various substances concerned in a given reaction stand in definite and fixed ratios. It appears from recent researches that some substances which cannot by any available means be decomposed into simpler substances and which must, therefore, be defined as elements, are continually undergoing spontaneous changes or radioactive transformation into other substances which can be recognized as physically and chemically different from the original substance. Radium is an element by the definition given and may be considered as made up of atoms. But it is assumed that these atoms, so called because they resist all efforts to break them up and are, therefore, apparently indivisible, nevertheless split up spontaneously, at a rate which scientists have not been able to influence in any way, into other atoms, thus forming other elementary substances of totally different properties. See Table 6.1.3.

The view generally accepted at present is that the atoms of all the chemical elements, including those not yet known to be radioactive, consist of several kinds of still smaller particles, three of which are known as protons, neutrons, and electrons. The protons are bound together in the atomic nucleus with other particles, including neutrons, and are positively charged. The neutrons are particles having approximately the mass of a proton but are uncharged. The electrons are negatively charged particles, all alike, external to the nucleus, and sufficient in number to neutralize the nuclear charge in an atom. The differences between the atoms of different chemical elements are due to the different numbers of these smaller particles composing them. According to the original Bohr theory, an ordinary atom is conceived as a stable system of such electrons revolving in closed orbits about the nucleus like the planets of the solar system around the sun. In a hydrogen atom, there is 1 proton and 1 electron; in a radium atom, there are 88 electrons surrounding a nucleus 226 times as massive as the hydrogen nucleus. Only a few, in general the outermost or valence electrons of such an atom, are subject to rearrangement within, or ejection from, the atom, thereby enabling it, because of its increased energy, to combine with other atoms to form molecules of either elementary substances or compounds. The atomic number of an

Table 6.1.2 Solubility of Inorganic Substances in Water (Number of grams of the anhydrous substance soluble in 1,000 g of water. The common name of the substance is given in parentheses) Temperature, °F (°C)

Composition

32 (0)

122 (50)

Aluminum sulfate Aluminum potassium sulfate (potassium alum) Ammonium bicarbonate Ammonium chloride (sal ammoniac) Ammonium nitrate Ammonium sulfate Barium chloride Barium nitrate Calcium carbonate (calcite) Calcium chloride Calcium hydroxide (hydrated lime) Calcium nitrate Calcium sulfate (gypsum) Copper sulfate (blue vitriol) Ferrous chloride Ferrous hydroxide Ferrous sulfate (green vitriol or copperas) Ferric chloride Lead chloride Lead nitrate Lead sulfate Magnesium carbonate Magnesium chloride Magnesium hydroxide (milk of magnesia) Magnesium nitrate Magnesium sulfate (Epsom salts) Potassium carbonate (potash) Potassium chloride Potassium hydroxide (caustic potash) Potassium nitrate (saltpeter or niter) Potassium sulfate Sodium bicarbonate (baking soda) Sodium carbonate (sal soda or soda ash) Sodium chloride (common salt) Sodium hydroxide (caustic soda) Sodium nitrate (Chile saltpeter) Sodium sulfate (Glauber salts) Zinc chloride Zinc nitrate Zinc sulfate

Al2(S04)3 Al2K2(SO4)4  24H2O NH4HCO3 NH4Cl NH4NO3 (NH4)2S04 BaCl2  2H2O Ba(N03)2 CaCO3 CaCl2 Ca(OH)2 Ca(NO3)2  4H2O CaSO4  2H2O CuSO4  5H2O FeCl2  4H2O Fe(OH)2 FeSO4  7H2O FeCl3 PbCl2 Pb(N03)2 PbSO4 MgCO3 MgCl2  6H2O Mg(OH)2 Mg(NO3)2  6H2O MgSO4  7H2O K2CO3 KCl KOH KNO3 K2SO4 NaHCO3 NaCO3  10H2O NaCl NaOH NaNO3 Na2SO4  10H2O ZnCl2 Zn(NO3)2  6H2O ZnSO4  7H2O

313 30 119 297 1,183 706 317 50 0.018* 594 1.77 931 1.76 140 644§ 0.0067‡ 156 730 6.73 403 0.042† 0.13‡ 524 0.009‡ 665 269 893 284 971 131 74 69 204 357 420 733 49 2,044 947 419

521 170

* 59°F. † 68°F. ‡ In cold water. § 50°F.


504 3,440 847 436 172

3,561 2.06 334 820 482 3,160 16.7

212 (100) 891 1,540 760 8,710 1,033 587 345 0.88 1,576 0.67 3,626 1.69 753 1,060

5,369 33.3 1,255

723 903 500 1,216 435 1,414 851 165 145 475 366 1,448 1,148 466 4,702 768

710 1,562 566 1,773 2,477 241 452 392 3,388 1,755 422 6,147 807

6-6

Table 6.1.3

Periodic Table of the Elements


Table 6.1.4 Solubility of Gases in Water
(By volume at atmospheric pressure)

                      t, °F (°C)
                   32 (0)    68 (20)    212 (100)
Air                0.032     0.020      0.012
Acetylene          1.89      1.12
Ammonia            1,250     700
Carbon dioxide     1.87      0.96       0.26
Carbon monoxide    0.039     0.025      0.00
Chlorine           5.0       2.5
element is the number of excess positive charges on the nucleus of the atom. The essential feature that distinguishes one element from another is this charge of the nucleus. It also determines the position of the element in the periodic table. Modern researches have shown the existence of isotopes, that is, two or more species of atoms having the same atomic number and thus occupying the same place in the periodic system, but differing somewhat in atomic weight. These isotopes are chemically identical and are merely different species of the same chemical element. Most of the ordinary inactive elements have been shown to consist of a mixture of isotopes. This convenient atomic model should be regarded as only a working hypothesis for coordinating a number of phenomena about which much yet remains to be known.

Calculation of the Percentage Composition of Substances  Add the atomic weights of the elements in the compound to obtain its molecular weight. Multiply the atomic weight of the element to be calculated by the number of atoms present (indicated in the formula by a subscript number) and by 100, and divide by the molecular weight of the compound. For example, hematite iron ore (Fe2O3) contains 69.94 percent of iron by weight, determined as follows: Molecular weight of Fe2O3 = (55.84 × 2) + (16 × 3) = 159.68. Percentage of iron in compound = (55.84 × 2) × 100/159.68 = 69.94.

SPECIFIC GRAVITIES AND DENSITIES AND OTHER PHYSICAL DATA

Table 6.1.5 Approximate Specific Gravities and Densities (Water at 39°F and normal atmospheric pressure taken as unity) For more detailed data on any material, see the section dealing with the properties of that material. Data given are for usual room temperatures.

Substance

Specific gravity

Avg density lb/ft3

kg/m3

165 534 481 509 554 556 262 536 1,205 1,073 1,383 442 450 485 468 437 325 237 315 172 710 465 475 259 847

2,643 8,553 7,702 8,153 8,874 8,906 4,197 8,586 19,300 17,190 22,160 7,079 7,207 7,658 7,496 6,984 5,206 3,796 5,046 2,755 11,370 7,449 7,608 4,149 13,570

Metals, alloys, ores* Aluminum, cast-hammered Brass, cast-rolled Bronze, aluminum Bronze, 7.9–14% Sn Bronze, phosphor Copper, cast-rolled Copper ore, pyrites German silver Gold, cast-hammered Gold coin (U.S.) Iridium Iron, gray cast Iron, cast, pig Iron, wrought Iron, spiegeleisen Iron, ferrosilicon Iron ore, hematite Iron ore, limonite Iron ore, magnetite Iron slag Lead Lead ore, galena Manganese Manganese ore, pyrolusite Mercury

2.55–2.80 8.4–8.7 7.7 7.4–8.9 8.88 8.8–8.95 4.1–4.3 8.58 19.25–19.35 17.18–17.2 21.78–22.42 7.03–7.13 7.2 7.6–7.9 7.5 6.7–7.3 5.2 3.6–4.0 4.9–5.2 2.5–3.0 11.34 7.3–7.6 7.42 3.7–4.6 13.546

Table 6.1.4 Solubility of Gases in Water (Continued)

Gas                   32°F (0°C)   68°F (20°C)   212°F (100°C)
Hydrogen                 0.023        0.020         0.018
Hydrogen sulfide         5.0          2.8           0.87
Hydrochloric acid      560          480               –
Nitrogen                 0.026        0.017         0.0105
Oxygen                   0.053        0.034         0.185
Sulfuric acid           87           43               –

Table 6.1.5 Approximate Specific Gravities and Densities (Continued )

Substance Monel metal, rolled Nickel Platinum, cast-hammered Silver, cast-hammered Steel, cold-drawn Steel, machine Steel, tool Tin, cast-hammered Tin ore, cassiterite Tungsten Uranium Zinc, cast-rolled Zinc, ore, blende

Specific gravity 8.97 8.9 21.5 10.4–10.6 7.83 7.80 7.70–7.73 7.2–7.5 6.4–7.0 19.22 18.7 6.9–7.2 3.9–4.2

Avg density lb/ft3

kg/m3

555 537 1,330 656 489 487 481 459 418 1,200 1,170 440 253

8,688 8,602 21,300 10,510 7,832 7,800 7,703 7,352 6,695 18,820 18,740 7,049 4,052

0.41 0.62 0.73 0.77 1.2–1.5 0.9–1.3 0.22–0.26 1.47–1.50 0.90–0.97 0.40–0.50 0.70–0.80 2.40–2.80 2.45–2.72 2.90–3.00 3.2–4.7 0.32 0.86–1.02 0.70–1.15

26 39 45 48 85 69 15 93 58 28 47 162 161 184 247 20 59 58

417 625 721 769 1,360 1,104 240 1,491 925 448 753 2,595 2,580 1,950 3,960 320 945 929

0.67 0.92–0.96 1.0–2.0 0.77 2.11 1.53 1.93–2.07 1.32

44 59 94 48 132 96 125 82

705 946 1,506 769 2,115 1,539 2,001 1,315

44 34 42 44 22 27 30 29 32 25 35

705 545 973 705 352 433 481 465 513 401 561

Various solids Cereals, oats, bulk Cereals, barley, bulk Cereals, corn, rye, bulk Cereals, wheat, bulk Cordage (natural fiber) Cordage (plastic) Cork Cotton, flax, hemp Fats Flour, loose Flour, pressed Glass, common Glass, plate or crown Glass, crystal Glass, flint Hay and straw, bales Leather Paper Plastics (see Sec. 6.12) Potatoes, piled Rubber, caoutchouc Rubber goods Salt, granulated, piled Saltpeter Starch Sulfur Wool

Timber, air-dry Apple Ash, black Ash, white Birch, sweet, yellow Cedar, white, red Cherry, wild red Chestnut Cypress Fir, Douglas Fir, balsam Elm, white

0.66–0.74 0.55 0.64–0.71 0.71–0.72 0.35 0.43 0.48 0.45–0.48 0.48–0.55 0.40 0.56



Table 6.1.5 Approximate Specific Gravities and Densities (Continued )

Substance Hemlock Hickory Locust Mahogany Maple, sugar Maple, white Oak, chestnut Oak, live Oak, red, black Oak, white Pine, Oregon Pine, red Pine, white Pine, Southern Pine, Norway Poplar Redwood, California Spruce, white, red Teak, African Teak, Indian Walnut, black Willow

Table 6.1.5 Approximate Specific Gravities and Densities (Continued ) Avg density

Specific gravity

lb/ft3

kg/m3

0.45–0.50 0.74–0.80 0.67–0.77 0.56–0.85 0.68 0.53 0.74 0.87 0.64–0.71 0.77 0.51 0.48 0.43 0.61–0.67 0.55 0.43 0.42 0.45 0.99 0.66–0.88 0.59 0.42–0.50

29 48 45 44 43 33 46 54 42 48 32 30 27 38–42 34 27 26 28 62 48 37 28

465 769 722 705 689 529 737 866 673 770 513 481 433 610–673 541 433 417 449 994 769 593 449

Various liquids Alcohol, ethyl (100%) Alcohol, methyl (100%) Acid, muriatic, 40% Acid, nitric, 91% Acid, sulfuric, 87% Chloroform Ether Lye, soda, 66% Oils, vegetable Oils, mineral, lubricants Turpentine

0.789 0.796 1.20 1.50 1.80 1.500 0.736 1.70 0.91–0.94 0.88–0.94 0.861–0.867

Water, 48C, max density Water, 1008C Water, ice Water, snow, fresh fallen Water, seawater

1.0 0.9584 0.88–0.92 0.125 1.02–1.03

49 50 75 94 112 95 46 106 58 57 54

802 809 1,201 1,506 1,795 1,532 738 1,699 930 914 866

Water 62.426 59.812 56 8 64

999.97 958.10 897 128 1,025

Ashlar masonry Granite, syenite, gneiss Limestone Marble Sandstone Bluestone

2.4–2.7 2.1–2.8 2.4–2.8 2.0–2.6 2.3–2.6

159 153 162 143 153

2,549 2,450 2,597 2,290 2,451

153 147 137 147 156

2,451 2,355 2,194 2,355 2,500

130 125 110

2,082 2,001 1,762

128 112 103 112

2,051 1,794 1,650 1,794

Rubble masonry Granite, syenite, gneiss Limestone Sandstone Bluestone Marble

2.3–2.6 2.0–2.7 1.9–2.5 2.2–2.5 2.3–2.7 Dry rubble masonry

Granite, syenite, gneiss Limestone, marble Sandstone, bluestone

1.9–2.3 1.9–2.1 1.8–1.9 Brick masonry

Hard brick Medium brick Soft brick Sand-lime brick

1.8–2.3 1.6–2.0 1.4–1.9 1.4–2.2

Substance

2.2–2.4 1.9–2.3 1.5–1.7

lb/ft3

kg/m3

40–45 94 196 53–64 103 94 135 67–72 98–117 96 49–55

640–721 1,505 3,140 849–1,025 1,650 1,505 2,163 1,074–1,153 1,570–1,874 1,538 785–849

63 110 100 76 95 78 96 108 115 80–85 90 105 90–105 100–120 126

1,009 1,761 1,602 1,217 1,521 1,250 1,538 1,730 1,841 1,282–1,361 1,441 1,681 1,441–1,681 1,602–1,922 2,019

60 65 80 90 70 65

951 1,041 1,281 1,432 1,122 1,041

153 281 184 159 159 109 143 137 181 162 175 165 187 159 187 155 170 187 200 172 40 165 143 171 172 169 165

2,451 4,504 2,950 2,549 2,549 1,746 2,291 2,196 2,901 2,596 2,805 2,644 2,998 2,549 2,998 2,484 2,725 2,998 3,204 2,758 641 2,645 2,291 2,740 2,758 2,709 2,645

96 95 82

1,579 1,572 1,314

Various building materials Ashes, cinders Cement, portland, loose Portland cement Lime, gypsum, loose Mortar, lime, set

0.64–0.72 1.5 3.1–3.2 0.85–1.00 1.4–1.9

Mortar, portland cement Slags, bank slag Slags, bank screenings Slags, machine slag Slags, slag sand

2.08–2.25 1.1–1.2 1.5–1.9 1.5 0.8–0.9 Earth, etc., excavated

Clay, dry Clay, damp, plastic Clay and gravel, dry Earth, dry, loose Earth, dry, packed Earth, moist, loose Earth, moist, packed Earth, mud, flowing Earth, mud, packed Riprap, limestone Riprap, sandstone Riprap, shale Sand, gravel, dry, loose Sand, gravel, dry, packed Sand, gravel, wet

1.0 1.76 1.6 1.2 1.5 1.3 1.6 1.7 1.8 1.3–1.4 1.4 1.7 1.4–1.7 1.6–1.9 1.89–2.16 Excavations in water

Sand or gravel Sand or gravel and clay Clay River mud Soil Stone riprap

0.96 1.00 1.28 1.44 1.12 1.00

Asbestos Barytes Basalt Bauxite Bluestone Borax Chalk Clay, marl Dolomite Feldspar, orthoclase Gneiss Granite Greenstone, trap Gypsum, alabaster Hornblende Limestone Marble Magnesite Phosphate rock, apatite Porphyry Pumice, natural Quartz, flint Sandstone Serpentine Shale, slate Soapstone, talc Syenite

2.1–2.8 4.50 2.7–3.2 2.55 2.5–2.6 1.7–1.8 1.8–2.8 1.8–2.6 2.9 2.5–2.7 2.7–2.9 2.6–2.7 2.8–3.2 2.3–2.8 3.0 2.1–2.86 2.6–2.86 3.0 3.2 2.6–2.9 0.37–0.90 2.5–2.8 2.0–2.6 2.7–2.8 2.6–2.9 2.6–2.8 2.6–2.7

Minerals

Stone, quarried, piled

Concrete masonry Cement, stone, sand Cement, slag, etc. Cement, cinder, etc.

Avg density

Specific gravity

144 130 100

2,309 2,082 1,602

Basalt, granite, gneiss Limestone, marble, quartz Sandstone

1.5 1.5 1.3

Table 6.1.5 Approximate Specific Gravities and Densities (Continued)

Compressibility of Liquids Avg density

Specific gravity

lb/ft3

kg/m3

1.5 1.7

92 107

1,474 1,715

1.1–1.5 1.4–1.8 1.2–1.5 1.1–1.4 0.65–0.85 0.28–0.44 0.47–0.57 1.0–1.4 1.64–2.7 0.87–0.91 0.87 0.78–0.82

81 97 84 78 47 23 33 75 135 56 54 50

1,298 1,554 1,346 1,250 753 369 481 1,201 2,163 898 856 801

0.73–0.75 0.70–0.75 1.07–1.15 1.20

46 45 69 75

737 721 1,105 1,201

47–58 40–54 20–26 10–14 23–32

753–930 641–866 320–417 160–224 369–513

Substance Shale Greenstone, hornblend

If v1 and v2 are the volumes of the liquids at pressures of p1 and p2 atm, respectively, at any temperature, the coefficient of compressibility b is given by the equation
b = (1/v1)(v1 − v2)/(p2 − p1)

Bituminous substances Asphaltum Coal, anthracite Coal, bituminous Coal, lignite Coal, peat, turf, dry Coal, charcoal, pine Coal, charcoal, oak Coal, coke Graphite Paraffin Petroleum Petroleum, refined (kerosene) Petroleum, benzine Petroleum, gasoline Pitch Tar, bituminous

Coal and coke, piled Coal, anthracite Coal, bituminous, lignite Coal, peat, turf Coal, charcoal Coal, coke

0.75–0.93 0.64–0.87 0.32–0.42 0.16–0.23 0.37–0.51 Gases (see Sec. 4)

The value of b × 10⁶ for oils at low pressures at about 70°F varies from about 55 to 80; for mercury at 32°F, it is 3.9; for chloroform at 32°F, it is 100 and increases with the temperature to 200 at 140°F; for ethyl alcohol, it increases from about 100 at 32°F and low pressures to 125 at 104°F; for glycerin, it is about 24 at room temperature and low pressure.
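As an illustrative sketch (not part of the handbook), the coefficient of compressibility defined above can be used to estimate the volume change of a liquid under pressure. The oil and the pressure interval below are assumed values, chosen only to fall inside the range quoted in this paragraph:

```python
# Illustrative sketch: applying b = (1/v1)(v1 - v2)/(p2 - p1) from the text.
# The value b = 65e-6 per atm and the 1-to-200-atm compression are assumptions.

def volume_at_pressure(v1, p1, p2, b):
    """Relative volume at p2 implied by a constant coefficient of compressibility b."""
    return v1 * (1.0 - b * (p2 - p1))

def compressibility(v1, v2, p1, p2):
    """Coefficient of compressibility b, per atm, from two volume/pressure states."""
    return (1.0 / v1) * (v1 - v2) / (p2 - p1)

b_oil = 65e-6                                    # per atm, within the 55-80 range quoted above
v2 = volume_at_pressure(1.0, 1.0, 200.0, b_oil)  # relative volume after compression
print(f"relative volume at 200 atm: {v2:.4f}")                      # ~0.9871
print(f"recovered b: {compressibility(1.0, v2, 1.0, 200.0):.1e}")   # ~6.5e-05 per atm
```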

Table 6.1.6 Average Composition of Dry Air between Sea Level and 90-km (295,000-ft) Altitude

Element           Formula    % by Vol     % by Mass    Molecular weight
Nitrogen          N2         78.084       75.55        28.0134
Oxygen            O2         20.948       23.15        31.9988
Argon             Ar          0.934        1.325       39.948
Carbon dioxide    CO2         0.0314       0.0477      44.00995
Neon              Ne          0.00182      0.00127     20.183
Helium            He          0.00052      0.000072     4.0026
Krypton           Kr          0.000114     0.000409    83.80
Methane           CH4         0.0002       0.000111    16.043

From 0.0 to 0.00005 percent by volume of nine other gases. Average composite molecular weight of air 28.9644.
SOURCE: “U.S. Standard Atmosphere,” Government Printing Office.
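As a quick consistency check (not part of the table), the average composite molecular weight quoted above follows directly from the volume fractions and molecular weights listed in Table 6.1.6; a short Python sketch:

```python
# Verifying the average composite molecular weight of dry air from Table 6.1.6.
composition = {          # gas: (% by volume, molecular weight)
    "N2":  (78.084,   28.0134),
    "O2":  (20.948,   31.9988),
    "Ar":  ( 0.934,   39.948),
    "CO2": ( 0.0314,  44.00995),
    "Ne":  ( 0.00182, 20.183),
    "He":  ( 0.00052,  4.0026),
    "Kr":  ( 0.000114, 83.80),
    "CH4": ( 0.0002,  16.043),
}

total_vol = sum(vol for vol, _ in composition.values())
avg_mw = sum(vol * mw for vol, mw in composition.values()) / total_vol
print(f"average molecular weight of dry air: {avg_mw:.4f}")  # ~28.96 (table: 28.9644)
```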

* See also Sec. 6.4.

Table 6.1.7 Specific Gravity and Density of Water at Atmospheric Pressure* (Weights are in vacuo) Density

Density

Temp, °C

Specific gravity

lb/ft3

kg/m3

Temp, °C

Specific gravity

lb/ft3

kg/m3

0 2 4 6 8

0.99987 0.99997 1.00000 0.99997 0.99988

62.4183 62.4246 62.4266 62.4246 62.4189

999.845 999.946 999.955 999.946 999.854

40 42 44 46 48

0.99224 0.99147 0.99066 0.98982 0.98896

61.9428 61.894 61.844 61.791 61.737

992.228 991.447 990.647 989.797 988.931

10 12 14 16 18 20 22 24 26 28

0.99973 0.99952 0.99927 0.99897 0.99862 0.99823 0.99780 0.99732 0.99681 0.99626

62.4096 62.3969 62.3811 62.3623 62.3407 62.3164 62.2894 62.2598 62.2278 62.1934

999.706 999.502 999.272 998.948 998.602 998.213 997.780 997.304 996.793 996.242

50 52 54 56 58 60 62 64 66 68

0.98807 0.98715 0.98621 0.98524 0.98425 0.98324 0.98220 0.98113 0.98005 0.97894

61.682 61.624 61.566 61.505 61.443 61.380 61.315 61.249 61.181 61.112

988.050 987.121 986.192 985.215 984.222 983.213 982.172 981.113 980.025 978.920

30 32 34 36 38

0.99567 0.99505 0.99440 0.99371 0.99299

62.1568 62.1179 62.0770 62.0341 61.9893

995.656 995.033 994.378 993.691 992.973

70 72 74 76 78

0.97781 0.97666 0.97548 0.97428 0.97307

61.041 60.970 60.896 60.821 60.745

977.783 976.645 975.460 974.259 973.041

* See also Secs. 4.2 and 6.10.

Table 6.1.8 Volume of Water as a Function of Pressure and Temperature

                                          Pressure, atm
Temp, °F (°C)      0       500    1,000    2,000    3,000    4,000    5,000    6,500    8,000
32 (0)          1.0000   0.9769  0.9566   0.9223   0.8954   0.8739   0.8565   0.8361      –
68 (20)         1.0016   0.9804  0.9619   0.9312   0.9065   0.8855   0.8675   0.8444   0.8244
122 (50)        1.0128   0.9915  0.9732   0.9428   0.9183   0.8974   0.8792   0.8562   0.8369
176 (80)        1.0287   1.0071  0.9884   0.9568   0.9315   0.9097   0.8913   0.8679   0.8481

SOURCE: “International Critical Tables.”
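A small worked example (not from the handbook) that ties Table 6.1.8 to the coefficient of compressibility defined earlier in this section; the 68°F row and the 0 to 1,000-atm interval are an arbitrary choice:

```python
# Average coefficient of compressibility of water, b = (1/v1)(v1 - v2)/(p2 - p1),
# using the 68 degF (20 degC) relative volumes of Table 6.1.8.
v1, v2 = 1.0016, 0.9619   # relative volume at 0 atm and 1,000 atm
p1, p2 = 0.0, 1000.0      # atm
b_avg = (1.0 / v1) * (v1 - v2) / (p2 - p1)
print(f"average b, 0 to 1,000 atm at 68 degF: {b_avg * 1e6:.1f} x 10^-6 per atm")  # ~39.6
```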



Table 6.1.9 Basic Properties of Several Metals (Staff contribution)*

Material

Density,† g/cm3

Aluminum 2024-T3 Aluminum 6061-T6 Aluminum 7079-T6 Beryllium, QMV Copper, pure Gold, pure Lead, pure Magnesium AZ31B-H24 (sheet) Magnesium HK31A-H24 Molybdenum, wrought Nickel, pure Platinum Plutonium, alpha phase Silver, pure Steel, AISI C1020 (hot-worked) Steel, AISI 304 (sheet) Tantalum Thorium, induction melt Titanium, B 120VCA (aged) Tungsten Uranium D-38

2.77 2.70 2.74 1.85 8.90 19.32 11.34 1.77 1.79 10.3 8.9 21.45 19.0–19.7 10.5 7.85 8.03 16.6 11.6 4.85 19.3 18.97

Coefficient of linear thermal expansion,‡ 10⁻⁶ in/(in·°F)
12.6 13.5 13.7 6.4–10.2 9.2 29.3 14.5 14.0 3.0 7.2 5.0 30.0 11.0 6.3 9.9 3.6 6.95 5.2 2.5 4.0–8.0

Thermal conductivity, Btu/(h·ft·°F)

Specific heat,‡ Btu/(lb·°F)

Approx melting temp, °F

Modulus of elasticity, 10⁶ lb/in²

Poisson’s ratio

110 90 70 85 227 172 21.4 55 66 83 53 40 4.8 241 27 9.4 31 21.7 4.3 95 17

0.23 0.23 0.23 0.45 0.092 0.031 0.031 0.25 0.13 0.07 0.11 0.031 0.034 0.056 0.10 0.12 0.03 0.03 0.13 0.033 0.028

940 1,080 900 2,340 1,980 1,950 620 1,100 1,100 4,730 2,650 3,217 1,184 1,760 2,750 2,600 5,425 3,200 3,100 6,200 2,100

10.6 10.6 10.4 40–44 17.0 10.8 2.0 6.5 6.4 40.0 32.0 21.3 14.0 10–11 29–30 28 27.0 7–10 14.8 50 24

0.33 0.33 0.33 0.024–0.030 0.32 0.42 0.40–0.45 0.35 0.35 0.32 0.31§ 0.39 0.15–0.21 0.37 0.29 0.29 0.35 0.27 0.3 0.28 0.21

Room-temperature properties are given. For further information, consult the “Metals Handbook” or a manufacturer’s publication. * Compiled by Anders Lundberg, University of California, and reproduced by permission. † To obtain the preferred density units, kg/m3, multiply these values by 1,000. ‡ See also Tables 6.1.10 and 6.1.11. § At 258C.

Ultimate Yield stress, lb/in2  103 50 40 68 27–38

1.3 22 29 80

40 48 39 21 190 28

stress, lb/in2  103

Elongation, %

70 45 78 33–51 See “Metals Handbook” 18 2.6 37 37 120–200 See “Metals Handbook” 20–24 60 18 65 87 50–145 32 200 18–600 56

18 17 14 1–3.5 30 20–50 15 8 Small 35–40 Small 48 36 65 1–40 34 9 1–3 4

Table 6.1.10 Coefficient of Linear Thermal Expansion for Various Materials [Mean values between 32 and 212°F except as noted; 10⁻⁴ in/(in·°F)]
Metals
Aluminum bronze Brass, cast Brass, wire Bronze Constantan (60 Cu, 40 Ni) German silver Iron: Cast Soft forged Wire Magnalium (85 Al, 15 Mg) Phosphor bronze Solder Speculum metal Steel: Bessemer, rolled hard Bessemer, rolled soft Nickel (10% Ni) Type metal

0.094 0.104 0.107 0.100 0.095 0.102 0.059 0.063 0.080 0.133 0.094 0.134 0.107 0.056 0.063 0.073 0.108

Other Materials Bakelite, bleached 0.122 Brick 0.053 Carbon—coke 0.030 Cement, neat 0.060 Concrete 0.060 Ebonite 0.468 Glass: Thermometer 0.045 Hard 0.033 Plate and crown 0.050 Flint 0.044 Pyrex 0.018 Granite 0.04–0.05 Graphite 0.044 Gutta percha 0.875 Ice 0.283 Limestone 0.023–0.05 Marble 0.02–0.09 Masonry 0.025–050

Paraffin: 32–61°F 61–100°F 100–120°F Porcelain Quartz: Parallel to axis Perpend. to axis Quartz, fused Rubber Vulcanite Wood (parallel to fiber): Ash Chestnut and maple Oak Pine Across the fiber: Chestnut and pine Maple Oak

0.592 0.724 2.612 0.02 0.044 0.074 0.0028 0.428 0.400 0.053 0.036 0.027 0.030 0.019 0.027 0.030

Table 6.1.11 Specific Heat of Various Materials [Mean values between 32 and 2128F; Btu/(lb  8F)] Solids Alloys: Bismuth-tin Bell metal Brass, yellow Brass, red Bronze Constantan German silver Lipowits’s metal Nickel steel Rose’s metal Solders (Pb and Sn) Type metal Wood’s metal 40 Pb  60 Bi 25 Pb  75 Bi Asbestos Ashes Bakelite Basalt (lava) Borax Brick Carbon-coke Chalk Charcoal Cinders Coal Concrete Cork Corundum Dolomite Ebonite Glass: Normal Crown Flint

0.040–0.045 0.086 0.0883 0.090 0.104 0.098 0.095 0.040 0.109 0.050 0.040–0.045 0.0388 0.040 0.0317 0.030 0.20 0.20 0.3–0.4 0.20 0.229 0.22 0.203 0.215 0.20 0.18 0.3 0.156 0.485 0.198 0.222 0.33 0.199 0.16 0.12

Granite Graphite Gypsum Hornblende Humus (soil) Ice: 4°F 32°F India rubber (Para) Kaolin Limestone Marble Oxides: Alumina (Al2O3) Cu2O Lead oxide (PbO) Lodestone Magnesia Magnetite (Fe3O4) Silica Soda Zinc oxide (ZnO) Paraffin wax Porcelain Quartz Quicklime Salt, rock Sand Sandstone Serpentine Sulfur Talc Tufa Vulcanite

0.195 0.201 0.259 0.195 0.44 0.465 0.487 0.27–0.48 0.224 0.217 0.210 0.183 0.111 0.055 0.156 0.222 0.168 0.191 0.231 0.125 0.69 0.22 0.17–0.28 0.21 0.21 0.195 0.22 0.25 0.180 0.209 0.33 0.331

Wood: Fir Oak Pine Liquids Acetic acid Acetone Alcohol (absolute) Aniline Bensol Chloroform Ether Ethyl acetate Ethylene glycol Fusel oil Gasoline Glycerin Hydrochloric acid Kerosene Naphthalene Machine oil Mercury Olive oil Paraffin oil Petroleum Sulfuric acid Sea water Toluene Turpentine Molten metals: Bismuth (535–7258F) Lead (590–6808F) Sulfur (246–2978F) Tin (460–6608F)

0.65 0.57 0.67 0.51 0.544 0.58 0.49 0.40 0.23 0.54 0.478 0.602 0.56 0.50 0.58 0.60 0.50 0.31 0.40 0.033 0.40 0.52 0.50 0.336 0.94 0.40 0.42 0.036 0.041 0.235 0.058


6.2 IRON AND STEEL by Harold W. Paxton REFERENCES: “Metals Handbook,” ASM International, latest ed., ASTM Standards, pt. 1. SAE Handbook. “Steel Products Manual,” AISI. “Making, Shaping and Treating of Steel,” AISE, latest ed.

CLASSIFICATION OF IRON AND STEEL

Iron (Fe) is not a high-purity metal commercially but contains other chemical elements which have a large effect on its physical and mechanical properties. The amount and distribution of these elements are dependent upon the method of manufacture. The most important commercial forms of iron are listed below. Pig iron is the product of the blast furnace and is made by the reduction of iron ore. Cast iron is an alloy of iron containing enough carbon to have a low melting temperature and which can be cast to close to final shape. It is not generally capable of being deformed before entering service. Gray cast iron is an iron which, as cast, has combined carbon (in the form of cementite, Fe3C) not in excess of a eutectoid percentage—the balance of the carbon occurring as graphite flakes. The term “gray iron” is derived from the characteristic gray fracture of this metal. White cast iron contains carbon in the combined form. The presence of cementite or iron carbide (Fe3C) makes this metal hard and brittle, and the absence of graphite gives the fracture a white color. Malleable cast iron is an alloy in which all the combined carbon in a special white cast iron has been changed to free or temper carbon by suitable heat treatment. Nodular (ductile) cast iron is produced by adding alloys of magnesium or cerium to molten iron. These additions cause the graphite to form into small nodules, resulting in a higher-strength, ductile iron. Ingot iron, electrolytic iron (an iron-hydrogen alloy), and wrought iron are terms for low-carbon materials which are no longer serious items of commerce but do have considerable historical interest. Steel is an alloy predominantly of iron and carbon, usually containing measurable amounts of manganese, and often readily formable. Carbon steel is steel that owes its distinctive properties chiefly to the carbon it contains. Alloy steel is steel that owes its distinctive properties chiefly to some element or elements other than carbon, or jointly to such other elements and carbon. Some alloy steels necessarily contain an important percentage of carbon, even as much as 1.25 percent. There is no complete agreement about where to draw the line between the alloy steels and the carbon steels. Basic oxygen steel and electric-furnace steel are steels made by the basic oxygen furnace and electric furnace processes, irrespective of carbon content; the effective individual alloy content in engineering steels can range from 0.05 percent up to 3 percent, with a total usually less than 5 percent. Open-hearth and Bessemer steelmaking are no longer practiced in the United States. Iron ore is reduced in a blast furnace to form pig iron, which is the raw material for practically all iron and steel products. Formerly, nearly 90 percent of the iron ore used in the United States came from the Lake Superior district; the ore had the advantages of high quality and the cheapness with which it could be mined and transported by way of the Great Lakes. With the rise of global steelmaking and the availability of high-grade ores and pellets (made on a large scale from low-grade ores) from many sources, the choice of feedstock becomes an economic decision. The modern blast furnace consists of a vertical shaft up to 10 m or 40 ft in diameter and over 30 m (100 ft) high containing a descending column of iron ore, coke, and limestone and a large volume of ascending 6-12

hot gas. The gas is produced by the burning of coke in the hearth of the furnace and contains about 34 percent carbon monoxide. This gas reduces the iron ore to metallic iron, which melts and picks up considerable quantities of carbon, manganese, phosphorus, sulfur, and silicon. The gangue (mostly silica) of the iron ore and the ash in the coke combine with the limestone to form the blast-furnace slag. The pig iron and slag are drawn off at intervals from the hearth through the iron notch and cinder notch, respectively. Some of the larger blast furnaces produce around 10,000 tons of pig iron per day. The blast furnace produces a liquid product for one of three applications: (1) the huge majority passes to the steelmaking process for refining; (2) pig iron is used in foundries for making castings; and (3) ferroalloys, which contain a considerable percentage of another metallic element, are used as addition agents in steelmaking. Compositions of commercial pig irons and two ferroalloys (ferromanganese and ferrosilicon) are listed in Table 6.2.1.

Physical Constants of Unalloyed Iron  Some physical properties of iron and even its dilute alloys are sensitive to small changes in composition, grain size, or degree of cold work. The following are reasonably accurate for “pure” iron at or near room temperature; those with an asterisk are sensitive to these variables perhaps by 10 percent or more. Those with a dagger (†) depend measurably on temperature; more extended tables should be consulted. Specific gravity, 7.866; melting point, 1,536°C (2,797°F); heat of fusion 277 kJ/kg (119 Btu/lbm); thermal conductivity 80.2 W/(m·°C) [557 Btu·in/(h·ft²·°F)]*†; thermal coefficient of expansion 12 × 10⁻⁶/°C (6.7 × 10⁻⁶/°F)†; electrical resistivity 9.7 μΩ·cm*†; and temperature coefficient of electrical resistance 0.0065/°C (0.0036/°F).†

Mechanical Properties  Representative mechanical properties of annealed low-carbon steel (often similar to the former ingot iron) are as follows: yield strength 130 to 150 MPa (20 to 25 ksi); tensile strength 260 to 300 MPa (40 to 50 ksi); elongation 20 to 45 percent in 2 in; reduction in area of 60 to 75 percent; Brinell hardness 65 to 100. These figures are at best approximate and depend on composition (especially trace additives) and processing variables. For more precise data, suppliers or broader databases should be consulted. Young’s modulus for ingot iron is 202,000 MPa (29,300,000 lb/in²) in both tension and compression, and the shear modulus is 81,400 MPa (11,800,000 lb/in²). Poisson’s ratio is 0.28. The effect of cold rolling on the tensile strength, yield strength, elongation, and shape of the stress-strain curve is shown in Fig. 6.2.1, which is for Armco ingot iron but would not be substantially different for other low-carbon steels.

Uses  Low-carbon materials weld evenly and easily in all processes, can be tailored to be readily paintable and to be enameled, and with other treatments make an excellent low-cost soft magnetic material with high permeability and low coercive force for mass-produced motors and transformers. Other uses, usually after galvanizing, include culverts, flumes, roofing, siding, and housing frames; thin plates can be used in oil and water tanks, boilers, gas holders, and various nondemanding pipes; enameled sheet retains a strong market in ranges, refrigerators, and other household goods, in spite of challenges from plastics.
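A minimal sketch (not from the handbook) of how the room-temperature constants quoted above might be used; the linear temperature dependence and the 20°C reference point are assumptions made only for illustration:

```python
# Rough estimates built on the "pure" iron constants quoted in the text.
# Assumptions: linear behavior and a 20 degC reference temperature.

RESISTIVITY_20C = 9.7           # micro-ohm*cm
TEMP_COEFF_RESISTANCE = 0.0065  # per degC
ALPHA = 12e-6                   # per degC, coefficient of linear thermal expansion

def resistivity(temp_c, ref_c=20.0):
    """Estimated resistivity, micro-ohm*cm, assuming a linear temperature dependence."""
    return RESISTIVITY_20C * (1.0 + TEMP_COEFF_RESISTANCE * (temp_c - ref_c))

def expansion(length_m, temp_rise_c):
    """Change in length, m, for a uniform temperature rise, assuming constant alpha."""
    return length_m * ALPHA * temp_rise_c

print(f"resistivity at 100 degC: {resistivity(100.0):.1f} micro-ohm*cm")            # ~14.7
print(f"growth of a 2-m bar heated 80 degC: {expansion(2.0, 80.0) * 1000:.2f} mm")  # ~1.92
```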
World Production From about 1970 to 1995, the annual world production of steel was remarkably steady at some 800,000,000 tons. Reductions in some major producers (United States, Japan, some European countries, and the collapsing U.S.S.R.) were balanced by the many smaller countries which began production. The struggle for markets led to pricing that did not cover production costs and led to international political strains. A major impact began in the 1990s as China pushed its own production from a few tens of million tons toward 300,000,000 tons in 2004, pushing world production to over 1 billion tons.

Table 6.2.1


Types of Pig Iron for Steelmaking and Foundry Use Chemical composition, %*

Designation

Si

P

Mn

C†

Basic pig, northern In steps of Foundry, northern In steps of Foundry, southern In steps of Ferromanganese (3 grades) Ferrosilicon (silvery pig)

1.50 max 0.25 3.50 max 0.25 3.50 max 0.25 1.2 max 5.00–17.00

0.400 max

1.01–2.00 0.50 0.50–1.25 0.25 0.40–0.75 0.25 74–82 1.00–2.00

3.5–4.4

Basic oxygen steel

3.0–4.5

A wide variety of castings

3.0–4.5

Cast-iron pipe

7.4 max 1.5 max

Addition of manganese to steel or cast iron Addition of silicon to steel or cast iron

0.301–0.700 0.700–0.900 0.35 max 0.300 max

Principal use

* Excerpted from “The Making, Shaping and Treating of Steel,” AISE, 1984; further information in “Steel Products Manual,” AISI, and ASTM Standards, Pt. 1. † Carbon content not specified—for information only. Usually S is 0.05 max (0.06 for ferrosilicon) but S and P for basic oxygen steel are typically much lower today.

Fig. 6.2.1 Effect of cold rolling on the stress-strain relationship of Armco ingot iron. (Kenyon and Burns.)

The world raw material infrastructure (iron ore, coke, and scrap) was not able to react quickly to support this increase, and prices soared; steel prices followed. Recently (2006) long-term ore contracts reflected large price increases, so it appears that for some time steel prices will remain high by historical comparisons. The effects of material substitution are not yet clear, but as end users examine their options more closely, the marketplace will most likely settle the matter. STEEL Steel Manufacturing

Steel is produced by the removal of impurities from pig iron in a basic oxygen furnace or an electric furnace. Basic Oxygen Steel This steel is produced by blowing pure (99 percent) oxygen either vertically under high pressure (1.2 MPa or 175 lb/in2) onto the surface of molten pig iron (BOP) or through tuyeres in the base of the vessel (the Q-BOP process). Some facilities use a combination depending on local circumstances and product mix. This is an autogenous process that requires no external heat to be supplied. The furnaces are similar in shape to the former Bessemer converters but range in capacity to 275 metric tons (t) (300 net tons) or more. The barrelshaped furnace or vessel may or may not be closed on the bottom, is open at the top, and can rotate in a vertical plane about a horizontal axis for charging and for pouring the finished steel. Selected scrap is charged into the vessel first, up to 30 percent by weight of the total charge. Molten pig iron (often purified from the raw blast-furnace hot metal to give lower sulfur, phosphorus, and sometimes silicon) is poured into the vessel. In the Q-BOP, oxygen must be flowing through the bottom tuyeres at this time to prevent clogging; further flow serves to refine the charge and carries in fluxes as powders. In the BOP process, oxygen is introduced through a water-cooled lance introduced through the top of the vessel. Within seconds after the oxygen is turned on, some iron in the charge is converted to ferrous oxide, which reacts rapidly with the impurities of the charge to remove them from the metal. As soon as reaction starts, limestone is added as a flux. Blowing is continued until the desired degree of purification is attained. The reactions take place very rapidly, and blowing of a heat is completed in

about 20 min in a 200-net-ton furnace. Because of the speed of the process, a computer is used to calculate the charge required for making a given heat of steel, the rate and duration of oxygen blowing, and to regulate the quantity and timing of additions during the blow and for finishing the steel. Production rates of well over 270 t per furnace hour (300 net tons) can be attained. The comparatively low investment cost and low cost of operation have already made the basic oxygen process the largest producer of steel in the world, and along with electric furnaces, it almost completely replaces the basic open hearth as the major steelmaking process. No open hearths operate in the United States today. Electric Steel The biggest change in steelmaking over the last 20 years is the fraction of steel made by remelting scrap in an electric furnace (EF), originally to serve a relatively nondemanding local market, but increasingly moving up in quality and products to compete with mills using the blast-furnace/oxygen steelmaking route. The economic competition is fierce and has served to improve choices for customers. In the last decade the fraction of steel made in the EF in the United States has gone above 50 percent by a combination of contraction in blast-furnace (BF) production (no new ones built, retirement of some, and a shortage of suitable coke) and an increase of total production, which has been met with EFs. At one time, there was concern that some undesirable elements in scrap that are not eliminated in steelmaking, notably copper and zinc, would increase with remelting cycles. However, the development of alternative iron units (direct reduced iron, iron carbide, and even pig iron) to dilute scrap additions has at least postponed this as an issue. Early processes used three-phase alternating current, but increasingly the movement is to a single dc electrode with a conducting hearth. The high-power densities necessitate water cooling and improved basic refractory linings. Scrap is charged into the furnace, which usually contains some of the last heat to improve efficiency. Older practices often had a second slag made after the first meltdown and refining by oxygen blowing, but today, final refining takes place outside the melting unit in a ladle furnace, which allows refining, temperature control, and alloying additions to be made without interfering with the next heat. The materials are continuously cast including slabs only 50 mm (2 in) thick and casting directly to sheet in the 1- to 2-mm (0.04- to 0.08-in) thickness range is passing through the pilot stage. The degree to which electric melting can replace more conventional methods is of great interest and depends in large part on the availability of sufficiently pure scrap at an attractive price and some improvements in surface quality to be able to make the highest-value products. Advances in EF technology are countered aggressively by new developments and cost control in traditional steelmaking; it may well be a decade or more before the pattern clarifies. The induction furnace is simply a fairly small melting furnace to which the various metals are added to make the desired alloy, usually quite specialized. When steel scrap is used as a charge, it will be a highgrade scrap the composition of which is well known (see also Sec. 7). 
Ladle Metallurgy One of the biggest contributors to quality in steel products is the concept of refining liquid steel outside the first melting unit—BOP, Q-BOP, or EF, none of which is well designed to perform



the final refining function. In this separate unit, gases in solution (oxygen, hydrogen, and, to a lesser extent, nitrogen) can be reduced by vacuum treatment, carbon can be adjusted to desirable very low levels by reaction with oxygen in solution, alloy elements can be added, the temperature can be adjusted, and the liquid steel can be stirred by inert gases to float out inclusions and provide a homogeneous charge to the continuous casters which are now virtually ubiquitous. Reducing oxygen in solution means a “cleaner” steel (fewer nonmetallic inclusions) and a more efficient recovery of active alloying elements added with a purpose and which otherwise might end up as oxides. Steel Ingots With the advent of continuous casters, ingot casting is now generally reserved for the production of relatively small volumes of material such as heavy plates and forgings which are too big for current casters. Ingot casting, apart from being inefficient in that the large volume change from liquid to solid must be handled by discarding the large void space usually at the top of the ingot (the pipe), also has several other undesirable features caused by the solidification pattern in a large volume, most notably significant differences in composition throughout the piece (segregation) leading to different properties, inclusions formed during solidification, and surface flaws from poor mold surfaces, splashing and other practices, which if not properly removed lead to defects in finished products (seams, scabs, scale, etc.). Some defects can be removed or attenuated, but others cannot; in general, with the exception of some very specialized tool and bearing steels, products from ingots are no longer state-of-the-art unless they are needed for size. Continuous Casting This concept, which began with Bessemer in the 1850s, began to be a reliable production tool around 1970 and since then has replaced basically all ingot casting. Industrialized countries all continuously cast close to 100 percent of their production. Sizes cast range from 2-m (80-in)—or more—by 0.3-m (12-in) slabs down to 0.1-m (4-in) square or round billets. Multiple strands are common where production volume is important. Many heats of steel can be cast in a continuous string with changes of width possible during operation. Changes of composition are possible in succeeding ladles with a discard of the short length of mixed composition. By intensive process control, it is often possible to avoid cooling the cast slabs to room temperature for inspection, enabling energy savings since the slabs require less reheating before hot rolling. If for some reason the slabs are cooled to room temperature, any surface defects which might lead to quality problems can be removed—usually by scarfing with an oxyacetylene torch or by grinding. Since this represents a yield loss, there is a real economic incentive to avoid the formation of such defects by paying attention to casting practices. Mechanical Treatment of Steel

Cast steel, in the form of slabs, billets, or bars (these latter two differ somewhat arbitrarily in size) is treated further by various combinations of hot and cold deformation to produce a finished product for sale from the mill. Further treatments by fabricators usually occur before delivery to the final customer. These treatments have three purposes: (1) to change the shape by deformation or metal removal to desired tolerances; (2) to break up—at least partially—the segregation and large grain sizes inevitably formed during the solidification process and to redistribute the nonmetallic inclusions which are present; and (3) to change the properties. For example, these may be functional—strength or toughness—or largely aesthetic, such as reflectivity. These purposes may be separable or in many cases may be acting simultaneously. An example is hot-rolled sheet or plate in which often the rolling schedule (reductions and temperature of each pass, and the cooling rate after the last reduction) is a critical path to obtain the properties and sizes desired and is often known as “heat treatment on the mill.” The development of controls to do this has allowed much higher tonnages at attractive prices, and increasing robustness of rolls has allowed steel to be hot-rolled down to the 1 mm (0.04-in) range, where it can compete with the more expensive “cold-rolled” material. Most steels are reduced after appropriate heating (to above 1,0008C) in various multistand hot rolling mills to produce sheet, strip, plate, tubes,

shaped sections, or bars. More specialized deformation, e.g., by hammer forging, can result in working in more than one direction, with a distri-

bution of inclusions which is not extended in one direction. Rolling, e.g., more readily imparts anisotropic properties. Press forging at slow strain rates changes the worked structure to greater depths and is preferred for high-quality products. The degree of reduction required to eliminate the cast structure varies from 4:1 to 10:1; clearly smaller reductions would be desirable but are currently not usual. The slabs, blooms, and billets from the caster must be reheated in an atmosphere-controlled furnace to the working temperature, often from room temperature, but if practices permit, they may be charged hot to save energy. Coupling the hot deformation process directly to slabs at the continuous caster exit is potentially more efficient, but practical difficulties currently limit this to a small fraction of total production. The steel is oxidized during heating to some degree, and this oxidation is removed by a combination of light deformation and high-pressure water sprays before the principal deformation is applied. There are differences in detail between processes, but as a representative example, the conventional production of wide “hot-rolled sheet” [1.5 m (60 in)] will be discussed. The slab, about 0.3 m (12 in) thick at about 12008C is passed through a scale breaker and high-pressure water sprays to remove the oxide film. It then passes through a set of roughing passes (possibly with some modest width reduction) to reduce the thickness to just over 25 mm (1 in), the ends are sheared perpendicular to the length to remove irregularities, and finally they are fed into a series of up to seven roll stands each of which creates a reduction of 50 to 10 percent passing along the train. Process controls allow each mill stand to run sufficiently faster than the previous one to maintain tension and avoid pileups between stands. The temperature of the sheet is a balance between heat added by deformation and that lost by heat transfer, sometimes with interstand water sprays. Ideally the temperature should not vary between head and tail of the sheet, but this is hard to accomplish. The deformation encourages recrystallization and even some grain growth between stands; even though the time is short, temperatures are high. Emerging from the last stand between 815 and 9508C, the austenite* may or may not recrystallize, depending on the temperature. At higher temperatures, when austenite does recrystallize, the grain size is usually small (often in the 10- to 20-mm range). At lower exit temperatures austenite grains are rolled into “pancakes” with the short dimension often less than 10 mm. Since several ferrite* grains nucleate from each austenite grain during subsequent cooling, the ferrite grain size can be as low as 3 to 6 mm (ASTM 14 to 12). We shall see later that small ferrite grain sizes are a major contributor to the superior properties of today’s carbon steels, which provide good strength and superior toughness simultaneously and economically. Some of these steels also incorporate strong carbide* and nitride* formers in small amounts to provide extra strength from precipitation hardening; the degree to which these are undissolved in austenite during hot rolling affects recrystallization significantly. 
The subject is too complex to treat briefly here; the interested reader is referred to the ASM “Metals Handbook.” After the last pass, the strip may be cooled by programmed water sprays to between 510 and 7308C so that during coiling, any desired precipitation processes may take place in the coiler. The finished coil, usually 2 to 3 mm (0.080 to 0.120 in) thick and sometimes 1.3 to 1.5 mm (0.052 to 0.060 in) thick, which by now has a light oxide coating, is taken off line and either shippped directly or retained for further processing to make higher valueadded products. Depending on composition, typical values of yield strength are from 210 up to 380 MPa (30 to 55 ksi), UTS in the range of 400 to 550 MPa (58 to 80 ksi), with an elongation in 200 mm (8 in) of about 20 percent. The higher strengths correspond to low-alloy steels. About half the sheet produced is sold directly as hot-rolled sheet. The remainder is further cold-worked after scale removal by pickling and either is sold as cold-worked to various tempers or is recrystallized to form a very formable product known as cold-rolled and annealed, * See Fig. 6.2.4 and the discussion thereof.


or more usually as cold-rolled, sheet. Strengthening by cold work is common in sheet, strip, wire, or bars. It provides an inexpensive addition to strength but at the cost of a serious loss of ductility, often a better surface finish, and a finished product held to tighter tolerances. It improves springiness by increasing the yield strength, but does not change the elastic moduli. Examples of the effect of cold working on carbon-steel drawn wires are shown in Figs. 6.2.2 and 6.2.3.


To make the highest class of formable sheet is a very sophisticated operation. After pickling, the sheet is again reduced in a multistand (three, four, or five) mill with great attention paid to tolerances and surface finish. Reductions per pass range from 25 to 45 percent in early passes to 10 to 30 percent in the last pass. The considerable heat generated necessitates an oil-water mixture to cool and to provide the necessary lubrication. The finished coil is degreased prior to annealing. The purpose of annealing is to provide, for the most demanding applications, pancake-shaped grains after recrystallization of the coldworked ferrite, in a matrix with a very sharp crystal texture containing little or no carbon or nitrogen in solution. The exact metallurgy is complex but well understood. Two types of annealing are possible: slow heating, holding, and cooling of coils in a hydrogen atmosphere (box annealing) lasting several days, or continuous feeding through a furnace with a computer-controlled time-temperature cycle. The latter is much quicker but very capital-intensive and requires careful and complex process control. As requirements for formability are reduced, production controls can be relaxed. In order of increasing cost, the series is commercial quality (CQ), drawing quality (DQ), deep drawing quality (DDQ), and extra deep drawing quality (EDDQ). Even more formable steels are possible, but often are not commercially necessary. Some other deformation processes are occasionally of interest, such as wire drawing, usually done cold, and extrusion, either hot or cold. Hot extrusion for materials that are difficult to work became practical through the employment of a glass lubricant. This method allows the hot extrusion of highly alloyed steels and other exotic alloys subjected to service at high loads and/or high temperatures. Constitution and Structure of Steel

Fig. 6.2.2 Increase of tensile strength of plain carbon steel with increasing amounts of cold working by drawing through a wire-drawing die.

Fig. 6.2.3 Reduction in ductility of plain carbon steel with increasing amounts of cold working by drawing through a wire-drawing die.

As a result of the methods of production, the following elements are always present in steel: carbon, manganese, phosphorus, sulfur, silicon, and traces of oxygen, nitrogen, and aluminum. Various alloying elements are frequently deliberately added, such as nickel, chromium, copper, molybdenum, niobium (columbium), and vanadium. The most important of the above elements in steel is carbon, and it is necessary to understand the effect of carbon on the internal structure of steel to understand the heat treatment of carbon and low-alloy steels. The iron–iron carbide equilibrium diagram in Fig. 6.2.4 shows the phases that are present in steels of various carbon contents over a range of temperatures under equilibrium conditions. Pure iron when heated to 9108C (1,6708F) changes its internal crystalline structure from a bodycentered cubic arrangement of atoms, alpha iron, to a face-centered

Fig. 6.2.4 Iron–iron carbide equilibrium diagram, for carbon content up to 5 percent. (Dashed lines represent equilibrium with cementite, or iron carbide; adjacent solid lines indicate equilibrium with graphite.)



cubic structure, gamma iron. At 1,3908C (2,5358F), it changes back to the body-centered cubic structure, delta iron, and at 1,5398C (2,8028F) the iron melts. When carbon is added to iron, it is found that it has only slight solid solubility in alpha iron (much less than 0.001 percent at room temperature at equilibrium). These small amounts of carbon, however, are critically important in many high-tonnage applications where formability is required. On the other hand, gamma iron will hold up to 2.0 percent carbon in solution at 1,1308C (2,0668F). The alpha iron containing carbon or any other element in solid solution is called ferrite, and the gamma iron containing elements in solid solution is called austenite. Usually when not in solution in the iron, the carbon forms a compound Fe3C (iron carbide) which is extremely hard and brittle and is known as cementite. Other carbides of iron exist but are only of interest in rather specialized instances. The temperatures at which the phase changes occur are called critical points (or temperatures) and, in the diagram, represent equilibrium conditions. In practice there is a lag in the attainment of equilibrium, and the critical points are found at lower temperatures on cooling and at higher temperatures on heating than those given, the difference increasing with the rate of cooling or heating. The various critical points have been designated by the letter A; when obtained on cooling, they are referred to as Ar, on the heating as Ac. The subscripts r and c refer to refroidissement and chauffage, respectively, and reflect the early French contributions to heat treatment. The various critical points are distinguished from each other by numbers after the letters, being numbered in the order in which they occur as the temperature increases. Ac1 represents the beginning of transformation of ferrite to austenite on heating; Ac3 the end of transformation of ferrite to austenite on heating, and Ac4 the change from austenite to delta iron on heating. On cooling, the critical points would be referred to as Ar4, Ar3, and Ar1, respectively. The subscript 2, not mentioned here, refers to a magnetic transformation. The error came about through a misunderstanding of what rules of thermodynamics apply in phase diagrams. It must be remembered that the diagram represents the pure iron-iron carbide system at equilibrium. The varying amounts of impurities in commercial steels affect to a considerable extent the position of the curves and especially the lateral position of the eutectoid point. Carbon steel in equilibrium at room temperature will have present both ferrite and cementite. The physical properties of ferrite are approximately those of pure iron and are characteristic of the metal. Cementite is itself hard and brittle; its shape, amount, and distribution control many of the mechanical properties of steel, as discussed later. The fact that the carbides can be dissolved in austenite is the basis of the heat treatment of steel, since the steel can be heated above the A1 critical temperature to dissolve all the carbides, and then suitable cooling through the appropriate range will produce a wide and predictable range of the desired size and distribution of carbides in the ferrite. If austenite with the eutectoid composition at 0.76 percent carbon (Fig. 6.2.4) is cooled slowly through the critical temperature, ferrite and cementite are rejected simultaneously, forming alternate plates or lamellae. 
This microstructure is called pearlite, since when polished and etched it has a pearly luster. When examined under a high-power optical microscope, however, the individual plates of cementite often can be distinguished easily. If the austenite contains less carbon than the eutectoid composition (i.e., hypoeutectoid compositions), free ferrite will first be rejected on slow cooling through the critical temperature until the remaining austenite reaches eutectoid composition, when the simultaneous rejection of both ferrite and carbide will again occur, producing pearlite. A hypoeutectoid steel at room temperature will be composed of areas of free ferrite and areas of pearlite; the higher the carbon percentage, the greater the amount of pearlite present in the steel. If the austenite contains more carbon than the eutectoid composition (i.e., hypereutectoid composition) and is cooled slowly through the critical temperature, then cementite is rejected and appears at the austenitic grain boundaries, forming a continuous cementite network until the remaining austenite reaches eutectoid composition, at which time pearlite is formed. A hypereutectoid steel, when slowly cooled, will exhibit areas of pearlite surrounded by a thin network of cementite, or iron carbide.
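The relative amounts of these constituents in a slowly cooled plain carbon steel can be estimated with the lever rule applied to Fig. 6.2.4. The sketch below is an illustrative calculation, not part of the handbook text; the 0.02 percent carbon limit taken for ferrite and the 6.67 percent carbon content of cementite are standard assumed values.

```python
# Lever-rule estimate of constituents in a slowly cooled plain carbon steel,
# using the 0.76% C eutectoid composition quoted in the text. The ferrite and
# cementite carbon contents below are standard assumed values, not from the text.

EUTECTOID_C, FERRITE_C, CEMENTITE_C = 0.76, 0.02, 6.67

def slow_cooled_fractions(carbon_pct):
    """Approximate fractions of pearlite and free ferrite or free cementite."""
    if carbon_pct <= EUTECTOID_C:                       # hypoeutectoid
        pearlite = (carbon_pct - FERRITE_C) / (EUTECTOID_C - FERRITE_C)
        return {"pearlite": pearlite, "free ferrite": 1.0 - pearlite}
    pearlite = (CEMENTITE_C - carbon_pct) / (CEMENTITE_C - EUTECTOID_C)   # hypereutectoid
    return {"pearlite": pearlite, "free cementite": 1.0 - pearlite}

for c in (0.20, 0.40, 0.76, 1.00):
    parts = ", ".join(f"{name} {frac:.0%}" for name, frac in slow_cooled_fractions(c).items())
    print(f"{c:.2f}% C: {parts}")
```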

As the cooling rate is increased, the spacing between the pearlite lamellae becomes smaller; with the resulting greater dispersion of carbide preventing slip in the iron crystals, the steel becomes harder. Also, with an increase in the rate of cooling, there is less time for the separation of excess ferrite or cementite, and the equilibrium amount of these constituents will not be precipitated before the austenite transforms to pearlite. Thus with a fast rate of cooling, pearlite may contain less or more carbon than given by the eutectoid composition. When the cooling rate becomes very rapid (as obtained by quenching), the carbon does not have sufficient time to separate out in the form of carbide, and the austenite transforms to a highly elastically stressed structure supersaturated with carbon called martensite. This structure is exceedingly hard but brittle and requires tempering to increase the ductility. Tempering consists of heating martensite to some temperature below the critical temperature, causing the carbide to precipitate in the form of small spheroids, or especially in alloy steels, as needles or platelets. The higher the tempering temperature, the larger the carbide particle size, the greater the ductility of the steel, and the lower the hardness. In a carbon steel, it is possible to have a structure consisting either of parallel plates of carbide in a ferrite matrix, the distance between the plates depending upon the rate of cooling, or of carbide spheroids in a ferrite matrix, the size of the spheroids depending upon the temperature to which the hardened steel was heated. (Some spheroidization occurs when pearlite is heated, but only at high temperatures close to the critical temperature range.) Heat-Treating Operations

The following definitions of terms have been adopted by the ASTM, SAE, and ASM in substantially identical form.
Heat Treatment  An operation, or combination of operations, involving the heating and cooling of a metal or an alloy in the solid state, for the purpose of obtaining certain desirable conditions or properties.
Quenching  Rapid cooling by immersion in liquids or gases or by contact with metal.
Hardening  Heating and quenching certain iron-base alloys from a temperature either within or above the critical range for the purpose of producing a hardness superior to that obtained when the alloy is not quenched. Usually restricted to the formation of martensite.
Annealing  A heating and cooling operation implying usually a relatively slow cooling. The purpose of such a heat treatment may be (1) to remove stresses; (2) to induce softness; (3) to alter ductility, toughness, electrical, magnetic, or other physical properties; (4) to refine the crystalline structure; (5) to remove gases; or (6) to produce a definite microstructure. The temperature of the operation and the rate of cooling depend upon the material being heat-treated and the purpose of the treatment. Certain specific heat treatments coming under the comprehensive term annealing are as follows:
Full Annealing  Heating iron-base alloys above the critical temperature range, holding above that range for a proper period of time, followed by slow cooling to below that range. The annealing temperature is usually about 50°C (90°F) above the critical temperature range.


Weight of locomotive required to haul train on level track:
W = L(R + A)/(0.3 × 2,000 − A)
Weight of locomotive required to haul train up the grade:

W = L(R + G)/(0.25 × 2,000 − G)
Weight of locomotive necessary to start train on the grade:

W = L(R + G + A)/(0.30 × 2,000 − G − A)
where W is the weight in tons of locomotive required; R is the frictional resistance of the cars in pounds per ton and is taken as 20 lb for cars with antifriction bearings and 30 lb for plain-bearing cars; L is the weight of the load in tons; A is the acceleration resistance [this is 100 for 1 mi/(h·s) and is usually taken at 20 for less than 10 mi/h or at 30 from 10 to 12 mi/h, corresponding to an acceleration of 0.2 or 0.3 mi/(h·s)]; G is the grade resistance in pounds per ton or 20 lb/ton for each percent of grade (25 percent is the running adhesion of the locomotive, 30 percent is the starting adhesion using sand); 2,000 is the factor to give adhesion in pounds per ton.
Where the grade is in favor of the load:

W = L(G − R)/(0.20 × 2,000 − G)
To brake the train to a stop on grade:

W = L(G + B − R)/(0.20 × 2,000 − G − B)
where B is the braking (or decelerating) effort in pounds per ton and equals 100 lb/ton for a braking rate of 1 mi/(h·s), 20 lb/ton for a braking rate of 0.2 mi/(h·s), or 30 lb/ton for a braking rate of 0.3 mi/(h·s). The adhesion is taken from a safety standpoint as 20 percent. It is not advisable to rely on using sand to increase the adhesion, since the sandboxes may be empty when sand is needed.
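A brief numerical sketch (not from the handbook) of the weight formulas above; the trailing load, grade, and bearing type below are assumed purely for illustration:

```python
# Locomotive weight to haul and to start a 100-ton trailing load on a 3% grade,
# per the formulas above. All input values are assumed for illustration.

R = 20.0        # car frictional resistance, lb/ton (antifriction bearings)
G = 3 * 20.0    # grade resistance, lb/ton (20 lb/ton per percent of grade)
A = 20.0        # acceleration resistance, lb/ton (about 0.2 mi/h per s)
L = 100.0       # weight of the trailing load, tons

W_haul = L * (R + G) / (0.25 * 2000 - G)            # running adhesion, 25 percent
W_start = L * (R + G + A) / (0.30 * 2000 - G - A)   # starting adhesion with sand, 30 percent

print(f"weight to haul the train up the grade:  {W_haul:.1f} tons")   # ~18.2
print(f"weight to start the train on the grade: {W_start:.1f} tons")  # ~19.2
```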

Time in seconds to brake the train to stop:
s = [mi/h (start) − mi/h (finish)] / [deceleration in mi/(h·s)]

Distance in feet to brake the train to a stop:

ft = [mi/h (start) − mi/h (finish)] × s × 1.46/2

Storage-battery locomotives are used for hauling muck cars in tunnel construction where it is inconvenient to install trolley wires and bond the track as the tunnel advances. They are also used to some extent in metal mines and in mines of countries where trolley locomotives are not permitted. They are often used in coal mines in the United States for hauling supplies. Their first cost is frequently less than that for a trolley installation. They also possess many of the advantages of the trolley locomotive and eliminate the danger and obstruction of the trolley wire. Storage-battery locomotives are limited by the energy that is stored in the battery and should not be used on steep grades or where large, continuous overloads are required. Best results are obtained where light

Table 10.3.1  Haulage Capacities of Locomotives with Steel-Tired or Rolled-Steel Wheels*

                                        Weight of locomotive, tons†
Grade                                  11        15        20        27        37        50
Level  Drawbar pull, lb             5,500     7,500    10,000    13,500    18,500    25,000
       Haulage capacity, gross tons   275       375       500       675       925     1,250
1%     Drawbar pull, lb             5,280     7,200     9,600    12,960    17,760    24,000
       Haulage capacity, gross tons   132       180       240       324       444       600
2%     Drawbar pull, lb             5,260     6,900     9,200    12,420    17,020    23,000
       Haulage capacity, gross tons    88       115       153       207       284       384
3%     Drawbar pull, lb             4,840     6,600     8,800    11,880    16,280    22,000
       Haulage capacity, gross tons    67        82       110       149       204       275
4%     Drawbar pull, lb             4,620     6,300     8,400    11,340    15,540    21,000
       Haulage capacity, gross tons    46        63        84       113       155       210
5%     Drawbar pull, lb             4,400     6,000     8,000    10,800    14,800    20,000
       Haulage capacity, gross tons    31        50        67        90       123       167

* Jeffrey Mining Machinery Co.
† Haulage capacities are based on 20 lb/ton rolling friction, which is conservative for roller-bearing cars. Multiply lb by 0.45 to get kg and tons by 0.91 to get tonnes.
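As a check on the tabulated values, haulage capacity is simply the drawbar pull divided by the total train resistance of 20 lb/ton rolling friction plus 20 lb/ton per percent of grade; for the 20-ton locomotive on a 2 percent grade, for example, 9,200/(20 + 40) ≈ 153 gross tons, which agrees with the table.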



Best results are obtained where light and medium loads are to be handled intermittently over short distances with a grade of not over 3 percent against the load. The general construction and mechanical features are similar to those of the four-wheel trolley type, with battery boxes located either on top of the locomotive or between the side frames, according to the height available. The motors are rugged, with high efficiency. Storage-battery locomotives for coal mines are generally of the explosion-tested type approved by the Bureau of Mines for use in gaseous mines. The battery usually has sufficient capacity to last a single shift. For two- or three-shift operation, an extra battery box with battery is required so that one battery can be charging while the other is working on the locomotive. Motor-generator sets or rectifiers are used for charging the batteries. The overall efficiency of the battery, motor, and gearing is approximately 63 percent. The speed varies from 3 to 7 mi/h, the average being 3 1/2 to 4 1/2 mi/h. Battery locomotives are available in sizes from 2 to 50 tons. They are usually manufactured to suit individual requirements, since the sizes of motors and battery are determined by the amount of work that the locomotive has to do in an 8-h shift.

INDUSTRIAL CARS

Various types of narrow-gage industrial cars are used for handling bulk and package materials inside and outside of buildings. Those used for bulk material are usually of the dumping type, the form of the car being determined by the duty. They are either pushed by workers or drawn by mules, locomotives, or cable. The rocker side-dump car (Fig. 10.3.1) consists of a truck on which is mounted a V-shaped steel body supported on rockers so that it can be tipped to either side, discharging material. This type is mainly used on construction work. Capacities vary from 2⁄3 to 5 tons for track gages of 18, 20, 24, 30, 36, and 561⁄2 in. In the gablebottom car (Fig. 10.3.2), the side doors a are hinged at the top and controlled by levers b and c, which lock the doors when closed. Since this type of car discharges material close to the ground on both sides of

circle. This car is made with capacities from 12 to 27 ft3 to suit local requirements. The hopper-bottom car (Fig. 10.3.4) consists of a hopper on wheels, the bottom opening being controlled by door a, which is operated by chain b winding on shaft c. The shaft is provided with handwheel and ratchet and pawl. The type of door or gate controlling the bottom opening varies with different materials.

Fig. 10.3.3 Scoop dumping car.

Fig. 10.3.4 Hopper-bottom car.

The box-body dump car (Fig. 10.3.5) consists of a rectangular body pivoted on the trucks at a and held in horizontal position by chains b. The side doors of the car are attached to levers so that the door is automatically raised when the body of the car is tilted to its dumping position. The cars can be dumped to either side. On the large sizes, where rapid dumping is required, dumping is accomplished by compressed air. This type of car is primarily used in excavation and quarry work, being loaded by power shovels. The greater load is placed on the side on which the car will dump, so that dumping is automatic when the operator releases the chain or latch. The car bodies may be steel or steellined wood. Mine cars are usually of the four-wheel type, with low bodies, the doors being at one end, and pivoted at the top with latch at

Fig. 10.3.5 Box-body dump car.
Fig. 10.3.1 Rocker side-dump car.

Fig. 10.3.2 Gable-bottom car.

the track simultaneously, it is used mainly on trestles. Capacities vary from 29 to 270 ft3 for track gages of 24, 36, 40, and 50 in. The scoop dumping car (Fig. 10.3.3) consists of a scoop-shaped steel body pivoted at a on turntable b, which is carried by the truck. The latch c holds the body in a horizontal position, being released by chain d attached to handle e. Since the body is mounted on a turntable, the car is used for service where it is desirable to discharge material at any point in the


the bottom. Industrial tracks are made with rails from 12 to 45 lb/yd (6.0 to 22 kg/m) and gages from 24 in to 4 ft 8 1/2 in (0.6 to 1.44 m). Either steel or wooden ties are used. Owing to its lighter weight, the steel tie is preferred where tracks are frequently moved, the track being made up in sections. Industrial cars are frequently built with one wheel attached to the axle and the other wheel loose to enable the car to turn on short-radius tracks. Capacities vary from 4 to 50 yd3 (3.1 to 38 m3) for track gages of 36 to 56 1/2 in (0.9 to 1.44 m), with cars having weights from 6,900 to 80,300 lb (3,100 to 36,000 kg). The frictional resistance per ton (2,000 lb) (8,900 N) for different types of mine-car bearings is given in Table 10.3.2.

Table 10.3.2  Frictional Resistance of Mine Car Bearings

                                           Drawbar pull
                        Level track            2% grade              4% grade
Types of bearings     lb/short ton  N/tonne  lb/short ton  N/tonne  lb/short ton  N/tonne
Spiral roller              13         58          15          67         ..          ..
Solid roller               14         62          18          80         ..          ..
Self-oiling                22         98          31         138         46         205
Babbitted, old style       24        107          40         178         53         236
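For example (an assumed trip), twenty 3-ton mine cars, 60 tons gross, would require a drawbar pull on level track of roughly 60 × 14 = 840 lb with solid-roller bearings, against 60 × 24 = 1,440 lb with old-style babbitted bearings.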

DOZERS, DRAGLINES

The dual capability of some equipment, such as dozers and draglines, suggests that it should be mentioned as prime machinery in the area of materials handling by dragging, pulling, or pushing. Dozers are described in the discussion on earthmoving equipment since their basic frames are also used for power shovels and backhoes. In addition, dozers perform the auxiliary function of pushing carryall earthmovers to assist them in scraping up their load. Dragline equipment is discussed with below-surface handling or excavation. The same type of equipment that would drag or scrape may also have a lifting function.


The loaded car b is to one side of the center and causes the cylinder to rotate, the material rolling to the chute beneath. The band brake c, with counterweight d, is operated by lever e, putting the dumping under control of the operator. No power is required; one operator can dump two or three cars per minute.

MOVING SIDEWALKS

Moving horizontal belts with synchronized balustrading have been introduced to expedite the movement of passengers to or from railroad trains in depots or planes at airports (see belt conveyors). A necessary feature is the need to prevent the clothing of anyone (e.g., a child) sitting on the moving walk from being caught in the mechanism at the end of the walk. Use of a comblike stationary end fence protruding down into longitudinal slots in the belt is an effective preventive.

CAR-UNLOADING MACHINERY

Four types of devices are in common use for unloading material from all types of open-top cars: crossover and horn dumps, used to unload mine cars with swinging end doors; rotary car dumps, for mine cars without doors; and tipping car dumps, for unloading standard-gage cars where large unloading capacity is required. Crossover Dump Figure 10.3.6 shows a car in the act of dumping. Figure 10.3.7 shows a loaded car pushing an empty car off the dump. A section of track is carried by a platform supported on rockers a. An extension bar b carries the weight c and the brake friction bar d. A hand lever controls the brake, acting on the friction bar and placing the dumping under the control of the operator. A section of track e in front

Fig. 10.3.6 Crossover dump: car unloading.

Fig. 10.3.7 Crossover dump: empty car being pushed away.

of the dump is pivoted on a parallel motion and counterbalanced so that it is normally raised. The loaded car depresses the rails e and, through levers, pivots the horns f around the shafts g, releasing the empty car. The loaded car strikes the empty car, starting it down the inclined track. After the loaded car has passed the rails e, the springs return the horns f so that they stop the loaded car in the position to dump. Buffer springs on the shaft g absorb the shock of stopping the car. Since the center of gravity of the loaded car is forward of the rockers, the car will dump automatically under control of the brake. No power is required for this dump, and one operator can dump three or four cars per minute. Rotary Gravity Dump (Fig. 10.3.8) This consists of a steel cylinder supported by a shaft a, its three compartments carrying three tracks.

Fig. 10.3.8 Rotary gravity dump.

Rotary-power dumpers are also built to take any size of open-top railroad car and are frequently used in power plants, coke plants, ports, and ore mines to dump coal, coke, ore, bauxite, and other bulk material. They are mainly of two types: (1) single barrel, and (2) tandem. The McDowell-Wellman Engineering Co. dumper consists of a revolving cradle supporting a platen (with rails in line with the approach and runoff car tracks in the upright position), which carries the car to be dumped. A blocking on the dumping side supports the side of the car as the cradle starts rotating. Normally, the platen is movable and the blocking is fixed, but in some cases the platen is fixed and the blocking movable. Where there is no variation in the car size, the platen and blocking are both fixed. The cradle is supported on two end rings, which are bound with a rail and a gear rack. The rail makes contact with rollers mounted in sills resting on the foundation. Power through the motor rotates the cradle by means of two pinions meshing with the gear racks. The angle of rotation for a complete dump is 155° for normal operation, but occasionally a dumper is designed for 180° rotation. The clamps, supported by the cradle, start moving down as the dumper starts to rotate. These clamps are lowered, locked, released, and raised either by a gravity-powered mechanism or by hydraulic cylinders. With the advent of the unit-train system, the investment and operating costs for a dumper have been reduced considerably. The design of dumpers for unit trains has improved and results in fewer maintenance problems. The use of rotary couplers on unit trains eliminates uncoupling of cars while dumping because the center of the rotary coupler is in line with the center of rotation of the dumper. Car Shakers As alternatives to rotating or tilting the car, several types of car shakers are used to hasten the discharge of the load. Usually the shaker is a heavy yoke equipped with an unbalanced pulley rotated at 2,000 r/min by a 20-hp (15-kW) motor. The yoke rests upon the car top sides, and the load is actively vibrated and rapidly discharged. While a car shaker provides a discharge rate about half that of a rotary dumper, the smaller investment is advantageous. Car Positioner As the popularity of unit-train systems consisting of rail cars connected by rotary couplers has increased, more rotary dumping stations have been equipped with an automatic train positioner developed by McDowell-Wellman Engineering Company. This device consists of a carriage moving parallel to the railroad track, actuated by either hydraulic cylinders or wire rope driven by a winch, which carries an arm that rotates in a vertical plane to engage the coupling between the cars. The machines are available in many sizes, the largest of which are capable of indexing 200-car trains in one- or two-car increments through the rotary dumper. These machines or similar ones are also available from FMC/Materials Handling System Division, Heyl and Patterson, Inc., and Whiting Corp.

10.4 LOADING, CARRYING, AND EXCAVATING
by Ernst K. H. Marburg and Associates

CONTAINERIZATION

The proper packaging of material to assist in handling can significantly minimize the handling cost and can also have a marked influence on the type of handling equipment employed. For example, partial carload lots of liquid or granular material may be shipped in rigid or nonrigid containers equipped with proper lugs to facilitate in-transit handling. Heavy-duty rubberized containers that are inert to most cargo are available for repeated use in shipping partial carloads. The nonrigid container reduces return shipping costs, since it can be collapsed to reduce space. Disposable lightweight corrugated-cardboard shipping containers for small and mediumsized packages both protect the cargo and permit stacking to economize on space requirements. The type of container to be used should be planned or considered when the handling mechanism is selected.

(2,045 to 2,500 kg), with custom-manufactured models to 8,000 lb (3,636 kg). While available in a variety of fork sizes, by far the most common is 27 in wide × 48 in long (686 mm × 1,219 mm). This size accommodates the most common pallet sizes of 40 in × 48 in (1,016 mm × 1,219 mm) and 48 in × 48 in (1,219 mm × 1,219 mm). Pallet trucks are also available in motorized versions, equipped with dc electric motors to electrically raise and transport. The power supply for these

SURFACE HANDLING
by Colin K. Larsen, Blue Giant Equipment Co.

Lift Trucks and Palletized Loads

The basis of all efficient handling, storage, and movement of unitized goods is the cube concept. Building a cube enables a large quantity of unit goods to be handled and stored at one time. This provides greater efficiency by increasing the volume of goods movement for a given amount of work. Examples of cube-facilitating devices include pallets, skids, slip sheets, bins, drums, and crates. The most widely applied cube device is the pallet. A pallet is a low platform, usually constructed of wood, incorporating openings for the forks of a lift truck to enter. Such openings are designed to enable a lift truck to pick up and transport the pallet containing the cubed goods. Lift truck is a loose term for a family of pallet handling/transporting machines. Such machines range from manually propelled low-lift devices (Fig. 10.4.1) to internal combustion and electric powered ride-on highlift devices (Fig. 10.4.2). While some machines are substitutes in terms of function, each serves its own niche when viewed in terms of individual budgets and applications. Pallet trucks are low-lift machines designed to raise loaded pallets sufficiently off the ground to enable the truck to transport the pallet horizontally. Pallet trucks are available as manually operated and propelled models that incorporate a hydraulic pump and handle assembly (Fig. 10.4.1). This pump and handle assembly enables the operator to raise the truck forks, and push/pull the load. Standard manual pallet trucks are available in lifting capacities from 4,500 to 5,500 lb

Fig. 10.4.1 Manually operated and propelled pallet truck.

Fig. 10.4.2 Lift truck powered by an internal-combustion engine.

trucks is an on-board lead-acid traction battery that is rechargeable when the truck is not in use. Control of these trucks is through a set of lift, lower, speed, and direction controls fitted into the steering handle assembly. Powered pallet trucks are available in walk and ride models. Capacities range from 4,000 to 10,000 lb (1,818 to 4,545 kg), with forks up to 96 in (2,438 mm) long. The longer fork models are designed to allow the truck to transport two pallets, lined up end to end. Stackers, as the name implies, are high lift machines designed to raise and stack loaded pallets in addition to providing horizontal transportation.

Fig. 10.4.3 Counterbalanced electric-battery-powered pallet truck. (Blue Giant.)


Stackers are separated into two classes: straddle and counterbalanced. Straddle stackers are equipped with legs which straddle the pallet and provide load and truck stability. The use of the straddle leg system results in a very compact chassis which requires minimal aisle space for turning. This design, however, does have its trade-offs inasmuch as the straddles limit the truck's usage to smooth level floors. The limited leg underclearance inherent in these machines prohibits their use on dock levelers for loading/unloading transport trucks. Straddle stackers are available from 1,000 to 4,000 lb (455 to 1,818 kg) capacity with lift heights to 16 ft (4,877 mm). Counterbalanced stackers utilize a counterweight system in lieu of straddle legs for load and vehicle stability (Fig. 10.4.3). The absence of straddle legs results in a chassis with increased underclearance which can be used on ramps, including dock levelers. The counterbalanced chassis, however, is longer than its straddle counterpart, and this requires greater aisle space for maneuvering. For materials handling operations that require one machine to perform a multitude of tasks, and are flexible in floor layout of storage areas, the counterbalanced stacker is the recommended machine.

Off-Highway Vehicles and Earthmoving Equipment
by B. Douglas Bode, John Deere and Co.

The movement of large quantities of bulk materials, earth, gravel, and broken rock in road building, mining, construction, quarrying, and land clearing may be handled by off-highway vehicles. Such vehicles are mounted on large pneumatic tires or on crawler tracks if heavy pulling and pushing are required on poor or steep terrain. Width and weight of the rubber-tired equipment often exceed highway legal limits, and use of grouser tracks on highways is prohibited. A wide range of working tool attachments, which can be added (and removed) without modification to the basic machine, is available to enhance the efficiency and versatility of the equipment. Proper selection of size and type of equipment depends on the amount, kind, and density of the material to be moved in a specified time and on the distances, direction, and steepness of grades, footing for traction, and altitude above sea level. Time cycles and pay loads for production per hour can then be estimated from manufacturers' performance data and job experience. This production per hour, together with the corresponding owning, operating, and labor costs per hour, enables selection by favorable cost per cubic yard, ton, or other pay unit. Current rapid progress in the development of off-highway equipment will soon make any description of size, power, and productivity obsolete. However, the following brief description of major off-highway vehicles will serve as a guide to their applications. Crawler Tractors These are track-type prime movers for use with mounted bulldozers, rippers, winches, cranes, cable layers, and side booms rated by net engine horsepower in sizes from 40 to over 500 hp; maximum traveling speeds, 5 to 7 mi/h (8 to 11 km/h). Crawler tractors develop drawbar pulls up to 90 percent or more of their weight with mounted equipment. Wheel Tractors Sizes range from rubber-tired industrial tractors for small scoops, loaders, and backhoes to large, diesel-powered, two- and four-wheel drive pneumatic-tired prime movers for propelling scrapers and wagons. Large, four-wheel-drive, articulated-steering types also power bulldozers. Bulldozer—Crawler Type (Fig. 10.4.4) This is a crawler tractor with a front-mounted blade, which is lifted by hydraulic or cable power control. There are four basic types of moldboards: straight, semi-U and U (named by top-view shape), and angling. The angling type, often called bullgrader or angledozer, can be set for side casting 25° to the right or left of perpendicular to the tractor centerline, while the other blades can be tipped forward or back through about 10° for different digging conditions. All blades can be tilted for ditching, with hydraulic-power tilt available for all blades. APPLICATION. This is the best machine for pioneering access roads, for boulder and tree removal, and for short-haul earthmoving in rough terrain. It push-loads self-propelled scrapers and is often used with a rear-mounted ripper to loosen firm or hard materials, including rock, for


Fig. 10.4.4 Crawler tractor with dozer blade. (John Deere.)

scraper loading. U blades drift 15 to 20 percent more loose material than straight blades but have poor digging ability. Angling blades expedite sidehill benching and backfilling of trenches. Loose-material capacity of straight blades varies approximately as the blade height squared, multiplied by length. Average capacity of digging blades is about 1 yd3 loose measure per 30 net hp rating of the crawler tractor. Payload is 60 to 90 percent of loose measure, depending on material swell variations. Bulldozer—Wheel Type This is a four-wheel-drive, rubber-tired tractor, generally of the hydraulic articulated-steering type, with frontmounted blade that can be hydraulically raised, lowered, tipped, and tilted. Its operating weights range to 150,000 lb, with up to 700 hp, and its traveling speeds range from stall to about 20 mi/h for pushing and mobility. APPLICATION. It is excellent for push-loading self-propelled scrapers, for grading the cut, spreading and compacting the fill, and for drifting loose materials on firm or sandy ground for distances up to 500 ft. Useful tractive effort on firm earth surfaces is limited to about 60 percent of weight, as compared with 90 percent for crawler dozers. Compaction Equipment Compactors are machines, either selfpropelled or pull-type, consisting of drums, rollers, or wheels and combinations of wheels and drums. Drums can be smooth or sheepsfoot (drums with protrusions to provide a blunt point of contact with the soil), standard or vibrating. They contact the surface, causing pressure to compact the soil. Sizes range from small walk-behind units to landfill compactors weighing 100,000 lb (45.4 tonnes) or more. APPLICATION. Sheepsfoot compactors are used to compact soil in road building, dams, and site preparation. Smooth drums and rubbertired wheels are used to finish compacting granular surfaces and asphalt pavement. Compaction is needed to increase soil and surface density to support heavy loads. Loader—Crawler Type (Fig. 10.4.5) This is a track-type prime mover with front-mounted bucket that can be raised, dumped, lowered, and tipped by power control. Capacities range from 0.7 to 5.0 yd3 (0.5 to 3.8 m3), SAE rated. It is also available with grapples for pulpwood, logs, and lumber.
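By the blade-capacity rule of thumb above, a 150-net-hp crawler tractor (an assumed size, for illustration only) would carry a digging blade of roughly 150/30 = 5 yd3 loose measure; at 60 to 90 percent of loose measure, the payload per pass would be on the order of 3 to 4.5 yd3.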

Fig. 10.4.5 Crawler tractor with loader bucket. (John Deere.)



APPLICATION. It is used for digging basements, pools, ponds, and ditches; for loading trucks and hoppers; for placing, spreading, and compacting earth over garbage in sanitary fills; for stripping sod; for removing steel-mill slag; and for carrying and loading pulpwood and logs. Loader—Wheel Type (Fig. 10.4.6) This is a four-wheel, rubbertired, articulated-steer machine equipped with a front-mounted, hydraulic-powered bucket and loader linkage that loads material into the bucket through forward motion of the machine and which lifts, transports, and discharges the material. The machine is commonly referred to as a four-wheel-drive loader tractor. Bucket sizes range from 1⁄2 yd3 (0.4 m3) to more than 20 yd3 (15 m3), SAE rated capacity. The addition

and commercial construction, bridge and road repair, and agriculture. They are used to lift, transport, load materials with buckets or forks, break concrete with hammers, plane high spots, and grade and smooth a landscape. They are a true multipurpose utility unit. Backhoe Loader (Fig. 10.4.8) This is a self-propelled, highly mobile machine with a main frame to support and accommodate both the rear-mounted backhoe and front-mounted loader. The machine was designed with the intention that the backhoe will normally remain in place when the machine is being used as a loader and vice versa. The backhoe digs, lifts, swings, and discharges material while the machine is stationary. When used in the loader mode, the machine loads material into the bucket through forward motion of the machine and lifts, transports, and discharges the material. Backhoe loaders are categorized according to digging depth of the backhoe. Backhoe loader types include variations of front/rear/articulated and all-wheel steer and rear/four-wheel drive.

Fig. 10.4.6 Four-wheel-drive loader. (John Deere.)

of a quick coupler to the loader linkage permits convenient interchange of buckets and other working tool attachments, adding versatility to the loader. Rigid-frame machines with variations and combinations of front/rear/skid steer, front/rear drive, and front/rear engine are also used in various applications. APPLICATION. Four-wheel-drive loaders are used primarily in construction, aggregate, and utility industries. Typical operations include truck loading, filling hoppers, trenching and backfilling, land clearing, and snow removal. Loader—Skid Steer (Fig. 10.4.7) This is a four-wheel or rubber-tracked skid-steer unit equipped with a hydraulically operated loader linkage and bucket that loads material into the bucket by forward movement of the unit and which lifts, transports loads, and dumps material. Bucket sizes range from 1/2 yd3 (0.38 m3) to 1 3/4 yd3 (1.34 m3), with the weight of these units ranging from 4,000 lb (1.8 tonnes) to 10,000 lb (4.5 tonnes). These units use a quick coupler on the front to accommodate a large variety of other attachments such as forks, blades, rakes, and a host of hydraulically driven attachments such as brooms, hammers, planers, augers, and trenchers. The small size and the versatility of the unit contribute to its increased popularity. APPLICATION. Skid-steer loaders are used in all types of job sites where small power units are required, such as landscaping and nurseries, home

Fig. 10.4.7 Skid-steer loader. (John Deere.)

Fig. 10.4.8 Backhoe loader. (John Deere.) APPLICATION. Backhoe loaders are used primarily for trenching and backfilling operations in the construction and utility industries. Quick couplers for the loader and backhoe are available which quickly interchange the working tool attachments, thus expanding machine capabilities. Backhoe loader mobility allows the unit to be driven to nearby job sites, thus minimizing the need to load and haul the machine. Scrapers (Fig. 10.4.9) This is a self-propelled machine, having a cutting edge positioned between the front and rear axles which loads, transports, discharges, and spreads material. Tractor scrapers include open-bowl and self-loading types, with multiple steer and drive axle variations. Scraper rear wheels may also be driven by a separate rearmounted engine which minimizes the need for a push tractor. Scraper ratings are provided in cubic yard struck/heaped capacities. Payload capacities depend on loadability and swell of materials but approximate the struck capacity. Crawler tractor-drawn, four-wheel rubber-tired scrapers have traditionally been used in a similar manner—normally in situations with shorter haul distances or under tractive and terrain conditions that are unsuitable for faster, self-propelled scrapers. APPLICATION. Scrapers are used for high-speed earth moving, primarily in road building and other construction work where there is a need to move larger volumes of material relatively short distances. The convenient

Fig. 10.4.9 Two-axle articulated self-propelled elevating scraper. (John Deere.)


control of the cutting edge height allows for accurate control of the grade in either a cut or fill mode. The loaded weight of the scraper can contribute to compaction of fill material. All-wheel-drive units can also load each other through a push-pull type of attachment. Two-axle, fourwheel types have the best maneuverability; however, the three-axle type is sometimes preferred for operator comfort on longer, higher-speed hauls. Motor Grader (Fig. 10.4.10) This is a six-wheel, articulated-frame self-propelled machine characterized by a long wheelbase and midmounted blade. The blade can be hydraulically positioned by rotation about a vertical axis—pitching fore/aft, shifting laterally, and independently raising each end—in the work process of cutting, moving, and spreading material, to rough- or finish-grade requirements. Motor graders range in size to 60,000 lb (27,000 kg) and 275 hp (205 kW) with typical transport speeds in the 25-mi/h (40-km/h) range. Rigidframe machines with various combinations of four/six wheels, two/ four-wheel drive, front/rear-wheel steer are used as dictated by the operating requirements.


generally downward) and the shovel type (bucket cuts toward the unit and generally upward). Weights of the machines range from mini [2,000 lb (0.9 tonnes)] to giant [1,378,000 lb (626 tonnes)] with power ratings from 8 to 3,644 hp (5.9 to 2,717 kW). APPLICATION. The typical attachment for the unit is the bucket, which is used for trenching for the placement of pipe and other underground utilities, digging basements and footings for building foundations, loading trucks in mass excavation sites, and maintaining and grading steep slopes of retention ponds. Other specialized attachments include hydraulic hammers and compactors, thumbs, clamshells, grapples, and long-reach fronts that expand the capabilities of the excavator. Dump Trucks (Fig. 10.4.12) A dump truck is a self-propelled machine, having an open body or bin, that is designed to transport and dump or spread material. Loading is accomplished by means external to the dumper with another machine. Types are generally categorized into rigid-frame or articulated-steer. Most rigid-frame trucks are rear dump with two axles and front axle steer, while articulated trucks most commonly are rear dump with three axles and front frame steer. Some bottom dumps are configured as a tractor trailer with up to five axles.

Fig. 10.4.10 Six-wheel articulated-frame motor grader. (John Deere.)

APPLICATION. Motor graders are the machine of choice for building paved and unpaved roads. The long wheelbase in conjunction with the midmounted blade and precise hydraulic controls allows the unit to finish-grade road beds within 0.25 in (6 mm) prior to paving. The weight, power, and blade maneuverability enable the unit to perform all the necessary work, including creating the initial road shape, cutting the ditches, finishing the bank slopes, and spreading the gravel. The motor grader is also a cost-effective and vital part of any road maintenance fleet. Excavators (Fig. 10.4.11) This is a mobile machine that is propelled by either a crawler-track or rubber-tired undercarriage, with the unique feature being an upper structure that is capable of continuous rotation and a wide working range. The unit digs, lifts, swings, and dumps material by action of the boom, arm, or telescoping boom and bucket. Excavators include the hoe type (bucket cuts toward the unit and

Fig. 10.4.12 Articulated dump truck. (John Deere.)

APPLICATION. Dump trucks are used for hauling and dumping blasted materials in mines and quarries and earth, sand and gravel for building roads and dams, and materials for site development. The units are capable of 30 to 40 mi/h (50 to 65 km/h) when loaded, depending on terrain and slope. Owning and Operating Costs These include depreciation; interest, insurance, taxes; parts, labor, repairs, and tires; fuel, lubricant, filters, hydraulic-system oil, and other operating supplies. This is reduced to cost per hour over a service life of 4 to 6 years of 2,000 h each—average 5 years, 10,000 h. Owning and operating costs of diesel-powered bulldozers and scrapers, excluding operator’s wages, average 3 to 4 times the delivered price in 10,000 h.
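On these averages, for example, a machine delivered at an assumed price of $300,000 would accumulate roughly 3 to 4 × $300,000 = $900,000 to $1,200,000 in owning and operating costs over its 10,000-h life, that is, on the order of $90 to $120 per hour exclusive of the operator's wages.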

ABOVE-SURFACE HANDLING

Monorails
by David T. Holmes, Lift-Tech International

Fig. 10.4.11 Excavator with tracked undercarriage. (John Deere.)

Materials can be moved from point to point on an overhead fixed track by wheeled carriers or trolleys that roll on the top surface of the track lower flange. Track members supporting the trolleys can be structural I beams, wide-flange or H beams, and specially fabricated rails having an I shape and special lower flat flanges for improved rolling characteristics. Track sections may be straight, curved, or a combination thereof. The wheel tread diameter and surface finish determine the ease with which it rolls on a track. Figure 10.4.13 illustrates a typical trolley that will negotiate both straight and curved track sections. Typical dimensions are given in Table 10.4.1. The trolley may be plain hand push, geared hand wheel with hand chain, or motor driven. To raise and


Table 10.4.1  Typical Monorail Trolley Dimensions

Capacity,    I-beam range   Wheel-tread   Net weight,   B,*       C,        H,        M,         N,         Min beam
short tons   (depth), in    diam, in      lb            in        in        in        in         in         radius,* in
1/2          5–10           3 1/2         32            3 1/4     4 1/8     9 7/8     2 3/8      6 1/4      21
1            5–10           3 1/2         32            3 1/4     4 1/8     9 7/8     2 3/8      6 1/4      21
1 1/2        6–10           4             52            3 7/8     4 5/8     11 3/8    2 15/16    7 7/16     30
2            6–10           4             52            3 7/8     4 5/8     11 3/8    2 15/16    7 7/16     30
3            8–15           5             88            4 7/16    5 3/8     13 1/2    2 13/16    7 15/16    42
4            8–15           5             88            4 7/16    5 3/8     13 1/2    2 13/16    7 15/16    42
5            10–18          6             137           5 3/16    6 3/16    15 3/8    3 5/16     10 1/8     48
6            10–18          6             137           5 3/16    6 3/16    15 3/8    3 5/16     10 1/8     48
8            12–24          8             279           5 1/2     7 7/16    21 3/8    4 3/16     13 3/4     60
10           12–24          8             279           5 1/2     7 7/16    21 3/8    4 3/16     13 3/4     60

Metric values: multiply tons by 907 for kg, inches by 25.4 for mm, and lb by 0.45 for kg.
* These dimensions are given for minimum beam.
SOURCE: CM Hoist Division, Columbus McKinnon.
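As a rough illustration of how Table 10.4.1 might be consulted, the Python sketch below picks the lightest listed trolley whose rated capacity and minimum beam radius suit an assumed load and track curve; the application values are hypothetical, and final selection should of course follow the manufacturer's data.

# Minimal selection sketch based on Table 10.4.1 (capacity, min beam radius, net weight).
# The load and curve radius below are assumed application values.

# capacity (short tons): (min beam radius, in; net weight, lb)
TROLLEYS = {
    0.5: (21, 32),  1: (21, 32),   1.5: (30, 52), 2: (30, 52),
    3:   (42, 88),  4: (42, 88),   5: (48, 137),  6: (48, 137),
    8:   (60, 279), 10: (60, 279),
}

def pick_trolley(load_tons, curve_radius_in):
    """Return the smallest listed capacity whose rating covers the load
    and whose minimum beam radius does not exceed the available curve."""
    for cap in sorted(TROLLEYS):
        min_radius, weight = TROLLEYS[cap]
        if cap >= load_tons and min_radius <= curve_radius_in:
            return cap, weight
    return None

print(pick_trolley(2.5, 48))   # -> (3, 88): a 3-ton trolley, 88 lb, needs 42-in curves or larger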

Fig. 10.4.13 Monorail trolley. (CM Hoist Div., Columbus McKinnon.)

lower a load, a hoist must be attached to the trolley. For low headroom, the trolley can be built into the hoist and is known as a monorail trolley hoist.

Overhead Traveling Cranes
by David T. Holmes, Lift-Tech International

An overhead crane is a mechanism used to lift, transport, and lower loads. It consists of a traveling bridge that supports a movable trolley or a stationary hoist mechanism. The bridge structure has single or multiple horizontal girders supported at each end by a structural end truck assembly. This assembly incorporates wheels, axles, and bearings and rides on an overhead fixed runway structure installed at right angles to the bridge girders. The movable trolley travels the full length of the bridge girders and incorporates a hoist mechanism for lifting and lowering loads. When the bridge is equipped with a stationary hoist it may be mounted at any position in the length of the bridge girders. Overhead cranes are provided as top-running or under-running. This refers to the structural section on which the wheel of the structural end truck assembly travels. Top-running cranes utilize single- or doubleflange wheels that ride on the top surface of a railroad-type rail or square bar. Under-running cranes utilize single-flange wheels that ride on the top surface of the lower flange of a structural I beam, wide-flange beam, or H beam like the trolley on the monorail illustrated in Figure 10.4.13. Overhead cranes provide three directions of motion at right angles to each other. This permits access to any point in the cube of space over which the crane operates. Light-capacity bridge and trolley horizontal traverse motions may be propelled by hand or geared wheel with hand chain, while electric or pneumatic motors may drive all capacities. The hoist motion, which is entirely independent of the trolley motion, may be propelled by a geared wheel with hand chain or a motor like the bridge or trolley. Overhead cranes are most commonly powered by electricity. In hazardous atmospheres, as identified in the National Electrical Code, cranes are frequently powered by compressed air and are provided with spark-resistant features. Occasionally overhead cranes are powered by hydraulics.

Specifications 70 and 74 of the Crane Manufacturers Association of America (CMAA) have established crane service classes for overhead cranes so that the most economical crane can be obtained for the installation. Service classifications A through F are defined as follows: Class A (Standby or Infrequent Service) This service class covers cranes that may be used in installations such as powerhouses, public utilities, turbine rooms, and transformer stations where precise handling of equipment at slow speeds with long idle periods between lifts is required. Capacity loads may be handled for initial installation of equipment and for infrequent maintenance. Class B (Light Service) This service covers cranes that may be used in repair shops, light assembly operations, service buildings, light warehousing, etc., where service requirements are light and the speed is slow. Loads may vary from no load to occasional full-rated loads with 2 to 5 lifts per hour, averaging 10 feet per lift. Class C (Moderate Service) This service covers cranes that may be used in machine shops or papermill machine rooms, etc., where service requirements are moderate. In this type of service the crane will handle loads that average 50 percent of the rated capacity with 5 to 10 lifts per hour, averaging 15 feet, not over 50 percent of the lift at rated capacity. Class D (Heavy Service) This service covers cranes that may be used in heavy machine shops, foundries, fabricating plants, steel warehouses, container yards, lumber mills, etc., and standard-duty bucket and magnet operations where heavy-duty production is required. In this type of service, loads approaching 50 percent of the rated capacity will be handled constantly during the working period. High speeds are desirable for this type of service, with 10 to 20 lifts per hour averaging 15 feet, not over 65 percent of the lifts at rated capacity. Class E (Severe Service) This type of service requires a crane capable of handling loads approaching rated capacity throughout its life. Applications may include magnet, bucket, magnet/bucket combination cranes for scrap yards, cement mills, lumber mills, fertilizer plants, container handling, etc., with 20 or more lifts per hour at or near the rated capacity. Class F (Continuous Severe Service) This type of service requires a crane capable of handling loads approaching rated capacity continuously under severe service conditions throughout its life. Applications may include custom-designed specialty cranes essential to performing the critical work tasks affecting the total production facility. These cranes must provide the highest reliability with special attention to ease of maintenance features. Single-girder cranes have capacities from 1⁄2 to 30 tons with spans up to 60 feet in length. Typical speeds for motions of motor-driven singlegirder cranes as suggested by the Crane Manufactures Association of America are given in Table 10.4.2. Single-girder cranes are provided for service classes A through D. A basic top-running single-girder crane is illustrated in Figure 10.4.14. It consists of an I-beam girder a supported at each end by a twowheeled structural end truck assembly b. The trolley c traverses the length of the girder on the top surface of the lower flange of the I beam.

Table 10.4.2  Single-girder crane speeds

                        Suggested operating speeds, ft/min
                      Hoist                  Trolley                Bridge
Capacity, tons   Slow  Medium  Fast     Slow  Medium  Fast     Slow  Medium  Fast
3                 14     35     45       50     80    125       50    115    175
5                 14     27     40       50     80    125       50    115    175
7.5               13     27     38       50     80    125       50    115    175
10                13     21     35       50     80    125       50    115    175
15                13     19     31       50     80    125       50    115    175
20                10     17     30       50     80    125       50    115    175
25                 8     14     29       50     80    125       50    115    175
30                 7     14     28       50     80    125       50    115    150
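To give a feel for what the suggested speeds in Table 10.4.2 imply, the Python sketch below estimates one-way motion times for an assumed move handled by a 10-ton single-girder crane at the medium speeds; the travel distances are assumptions, and acceleration, deceleration, and load spotting are ignored.

# Rough one-way travel-time estimate using Table 10.4.2 medium speeds for a 10-ton crane.
# Distances are assumed; acceleration, deceleration, and spotting time are ignored.

hoist_speed, trolley_speed, bridge_speed = 21.0, 80.0, 115.0   # ft/min (medium column)

lift_ft, traverse_ft, travel_ft = 20.0, 40.0, 100.0            # assumed move

t_hoist   = lift_ft / hoist_speed        # ~0.95 min
t_trolley = traverse_ft / trolley_speed  # 0.5 min
t_bridge  = travel_ft / bridge_speed     # ~0.87 min

print(round(t_hoist + t_trolley + t_bridge, 2), "min (motions run in sequence)")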

A hoist is attached to the trolley to provide the lifting mechanism. The trolley and hoist may be an integral unit. The bridge travels on the runway by pulling on the hand chain d, turning sprocket wheel that is attached to shaft f. The pinions at each end of the crane attached to shaft f mesh with the gears g which are attached to the wheel axles. An underrunning crane is similar except for having two wheels at each wheel location that travel on the top surface of the lower flange of an I beam, wide-flange beam, or H beam. In lieu of the hand-geared drive, the bridge may be driven by a motor through a gear reduction to the shaft f. When the bridge is motor-driven, a pendant push-button station suspended from the crane or hoist operates the motions.

Fig. 10.4.14 Hand-powered crane.


Double-girder cranes have capacities up to 200 tons, with spans of 20 to 200 feet. Larger capacities and spans are built for special applications. Typical speeds for motions of motor-driven double-girder cranes as suggested by the Crane Manufacturers Association of America are given in Table 10.4.3 for floor-controlled cranes and in Table 10.4.4 for cab-controlled cranes. Double-girder cranes are provided for service classes A through F. A basic top-running double-girder crane is illustrated in Figure 10.4.15. Two bridge girders a are supported at each end by structural end trucks having two or more wheels depending upon the capacity of the crane. The bridge travels on the runway by motor c driving shaft d through a gear reduction. Shaft d is coupled to the wheel axle directly or through a gear mesh. The top-running integral trolley hoist b is set on the rails mounted to the top of the girders a traversing their length. Both trolley and hoist are motor-driven. An operator's cab e may be suspended at any position along the length of the girders. It contains the motor controls, master switches or push-button station, hydraulic brake, and warning device. The two girders are standard structural shapes for low-capacity cranes up to 60 feet in span. High-capacity and long-span cranes will use box girders fabricated of plates or plates and channels to provide strength and stiffness. Provisions are made in the structural end truck to prevent it from dropping more than 1 inch if there is an axle failure. A hydraulic brake mounted to the bridge motor shaft is used on cab-operated cranes to stop the crane. Floor-operated cranes use a spring-set, electrically released brake. The top-running trolley hoist has the following basic parts: structural frame, hoist mechanism, and trolley traverse drive. The structural frame supports the hoist mechanism and imparts the load to the trolley drive.

Table 10.4.3  Double-girder crane speeds (floor-controlled)

                        Suggested operating speeds, ft/min
                      Hoist                  Trolley                Bridge
Capacity, tons   Slow  Medium  Fast     Slow  Medium  Fast     Slow  Medium  Fast
3                 14     35     45       50     80    125       50    115    175
5                 14     27     40       50     80    125       50    115    175
7.5               13     27     38       50     80    125       50    115    175
10                13     21     35       50     80    125       50    115    175
15                13     19     31       50     80    125       50    115    175
20                10     17     30       50     80    125       50    115    175
25                 8     14     29       50     80    125       50    115    175
30                 7     14     28       50     80    125       50    115    150
35                 7     12     25       50     80    125       50    115    150
40                 7     12     25       40     70    100       40    100    150
50                 5     11     20       40     70    100       40    100    150
60                 5      9     18       40     70    100       40     75    125
75                 4      9     15       40     70    100       30     75    125
100               4      8     13       30     60     80       25     50    100
150               3      6     11       25     60     80       25     50    100

NOTE: Consideration must be given to length of runway for the bridge speed, span of bridge for the trolley speed, average travel distance, and spotting characteristics required.


Table 10.4.4  Double-girder crane speeds (cab-controlled)

                        Suggested operating speeds, ft/min
                      Hoist                  Trolley                Bridge
Capacity, tons   Slow  Medium  Fast     Slow  Medium  Fast     Slow  Medium  Fast
3                 14     35     45      125    150    200      200    300    400
5                 14     27     40      125    150    200      200    300    400
7.5               13     27     38      125    150    200      200    300    400
10                13     21     35      125    150    200      200    300    400
15                13     19     31      125    150    200      200    300    400
20                10     17     30      125    150    200      200    300    400
25                 8     14     29      100    150    175      200    300    400
30                 7     14     28      100    125    175      150    250    350
35                 7     12     25      100    125    150      150    250    350
40                 7     12     25      100    125    150      150    250    350
50                 5     11     20       75    125    150      100    200    300
60                 5      9     18       75    100    150      100    200    300
75                 4      9     15       50    100    125       75    150    200
100               4      8     13       50    100    125       50    100    150
150               3      6     11       30     75    100       50     75    100

NOTE: Consideration must be given to length of runway for the bridge speed, span of bridge for the trolley speed, average travel distance, and spotting characteristics required.

current motors are generally of the single- or two-speed squirrel-cage or wound-rotor type designed for operation on a 230- or 460-V, three-phase, 60-Hz power source. Standard crane and hoist motor control uses contactors, relays, and resistors. Variable-frequency drives provide precise load control with a wide range of speeds when used with squirrel-cage motors. Direct-current motors can be either series-wound or shunt-wound type, designed for operation on a 250-V power source.

Gantry Cranes

A gantry crane is an adaptation of an overhead crane. Most gantry crane installations are outdoors where an elevated runway is long and not cost-effective to erect. Figure 10.4.16 illustrates a double leg gantry crane. A structural leg a is attached to each end of the bridge and is supported by structural end trucks having two or more wheels, depending upon the capacity of the crane. The crane rides on railroad-type rails mounted on the ground and is driven by motor b through a gear reduction

Fig. 10.4.15 Electric traveling crane.

The hoist mechanism includes the motor, motor brake, load brake, gearing, helically grooved rope drum, upper sheave assembly, lower block with hook, and wire rope. The wire rope is reeved from the drum over the sheaves of the upper sheave assembly and the lower block. To prevent over wrapping of the wire rope on the drum and the engagement of the frame by the lower block, limit switches are provided to stop the motors at the upper limit of travel. Frequently, limit switches are also provided to limit the lower travel of the lower block to prevent the rope from totally unwinding and reverse winding onto the rope drum. The trolley traverse drive includes the motor, motor brake, gearing, axles, and wheels. On large-capacity trolleys, an auxiliary hoist of lower capacity and higher lifting speed is provided. For low headroom applications, an under-running trolley is used, placing the hoisting mechanism up between the girders. Table 10.4.5 provides capacities and dimensions for standard industrial cranes. Electrically powered cranes use either alternating current or direct current power sources. Alternating current is predominately used. Power is delivered to the crane bridge by rolling or sliding collectors in contact with the electrical conductors mounted on the runway or by festooned multiconductor cables. In a similar manner, power is delivered to the trolley hoist with the conductor system being supported from the bridge girders. The motors and controls are designed for crane and hoist service to operate specifically on one of the power sources. Alternating

Fig. 10.4.16 Gantry crane.

Table 10.4.5  Dimensions, Loads, and Speeds for Industrial-Type Cranes (a, d, e)

H

Max load per wheel, lbc

Runway rail, lb/yd

X, in

No. of bridge wheels

317⁄8 s 317⁄8 s 317⁄8 s 317⁄8 s

8r0s 9r6s 11r6s 14r6s

9,470 12,410 15,440 19,590

25 25 25 40

12 12 12 12

4 4 4 4

297⁄8 s 297⁄8 s 297⁄8 s 297⁄8 s

317⁄8 s 317⁄8 s 317⁄8 s 317⁄8 s

8r0s 9r6s 11r6s 14r6s

15,280 18,080 20,540 24,660

25 40 40 60

12 12 12 12

4 4 4 4

4r0s 4r0s 4r0s 4r0s

281⁄8 s 281⁄8 s 281⁄8 s 281⁄8 s

381⁄2 s 381⁄2 s 381⁄2 s 381⁄2 s

8r0s 9r6s 11r6s 14r6s

20,950 23,680 26,140 30,560

60 60 60 80

12 12 12 12

4 4 4 4

3r9s 3r9s 3r11s 3r11s

4r0s 4r0s 4r0s 4r0s

405⁄8 s 405⁄8 s 401⁄8 s 401⁄8 s

285⁄8 s 285⁄8 s 281⁄8 s 281⁄8 s

2r2s 2r2s 2r2s 2r2s

8r0s 9r6s 11r6s 14r6s

26,350 30,000 33,370 38,070

60 80 60 80

18 18 18 18

4 4 4 4

63⁄8 s 63⁄8 s 77⁄8 s 77⁄8 s

3r11s 3r11s 3r11s 3r11s

4r6s 4r6s 4r6s 4r6s

417⁄8 s 417⁄8 s 403⁄8 s 403⁄8 s

287⁄8 s 287⁄8 s 273⁄8 s 273⁄8 s

1r111⁄2 s 1r111⁄2 s 1r111⁄2 s 1r111⁄2 s

9r6s 9r6s 11r6s 14r6s

37,300 40,050 44,680 51,000

80 135 80 135

24 24 24 24

4 4 4 4

25 25 40 54

77⁄8 s 77⁄8 s 77⁄8 s 77⁄8 s

5r5s 5r5s 5r5s 5r5s

6r6s 6r6s 6r6s 6r6s

467⁄8 s 467⁄8 s 467⁄8 s 467⁄8 s

175⁄8 s 175⁄8 s 175⁄8 s 175⁄8 s

2r63⁄4 s 2r63⁄4 s 2r63⁄4 s 2r63⁄4 s

9r6s 9r6s 11r6s 14r6s

48,630 52,860 59,040 65,200

135 135 135 135

24 24 24 24

4 4 4 4

40 60 80 100

25 25 32 46

77⁄8 s 77⁄8 s 9s 9s

6r10s 6r7s 6r7s 6r7s

7r6s 7r6s 7r6s 7r6s

563⁄8 s 563⁄8 s 551⁄4 s 551⁄4 s

291⁄4 s 291⁄4 s 281⁄8 s 281⁄8 s

3r51⁄4 s 3r51⁄4 s 3r51⁄4 s 3r51⁄4 s

10r6s 10r6s 11r6s 14r6s

55,810 62,050 68,790 76,160

135 135 135 176

24 24 24 24

4 4 4 4

40 60 80 100

25 25 32 46

9s 9s 9s 9s

6r8s 6r8s 6r8s 6r10s

7r6s 7r6s 7r6s 7r6s

551⁄4 s 551⁄4 s 551⁄4 s 551⁄4 s

281⁄8 s 281⁄8 s 281⁄8 s 281⁄8 s

3r51⁄4 s 3r51⁄4 s 3r51⁄4 s 3r51⁄4 s

10r6s 10r6s 11r6s 14r6s

65,970 72,330 79,230 89,250

135 135 175 175

24 24 24 24

4 4 4 4

Span, ft

Std. lift, main hoist, ftb

A

B min

C

D

Eb

6

40 60 80 100

36 53 86 118

57⁄8 s 57⁄8 s 57⁄8 s 57⁄8 s

3r9s 3r9s 3r9s 3r9s

3r1⁄4 s 3r1⁄4 s 3r1⁄4 s 3r1⁄4 s

297⁄8 s 297⁄8 s 297⁄8 s 297⁄8 s

10

40 60 80 100

36 53 86 118

57⁄8 s 57⁄8 s 57⁄8 s 57⁄8 s

3r9s 3r9s 3r9s 3r9s

3r1⁄4 s 3r1⁄4 s 3r1⁄4 s 3r1⁄4 s

16

40 60 80 100

28 40 62 84

57⁄8 s 57⁄8 s 57⁄8 s 57⁄8 s

3r9s 3r9s 3r9s 3r10s

20 5 ton aux

40 60 80 100

22 30 46 63

57⁄8 s 57⁄8 s 63⁄8 s 63⁄8 s

30 5 ton aux

40 60 80 100

22 22 34 48

40 5 ton aux

40 60 80 100

50 10 ton aux

60 10 ton aux

Capacity main hoist, tons, 2,000 lb

a Dimensions, refer to Fig. 10.4.13.
b For each 10 ft extra lift, increase H by X.
c Direct loads, no impact.
d These figures should be used for preliminary work only as the data varies among manufacturers.
e Multiply ft by 0.30 for m, ft/min by 0.0051 for m/s, in by 25.4 for mm, lb by 0.45 for kg, lb/yd by 0.50 for kg/m.
SOURCE: Lift-Tech International, Inc.

F




to shaft c, which drives the vertical shafts d through a bevel gear set. Shaft d is coupled to the wheel axles by bevel and spur gear reductions. Gantry cranes are also driven with separate motors, brakes, and gear reductions mounted adjacent to the drive wheels of the structural end truck in lieu of the bridge-mounted cross-shaft drive. Gantry cranes are available with similar capacities and spans as overhead cranes. Special-Purpose Overhead Traveling Cranes

Overhead and gantry cranes can be specially designed to meet special application requirements: furnace charging cranes that charge scrap metal into electric furnaces, wall traveling cranes that use a runway system on one wall of a building, lumber handling cranes that travel at extremely high speeds and are equipped with special reeving systems to provide stability of the loads, polar cranes that operate on a single rail circular runway, and stacker cranes that move materials in and out of racks are examples of these applications. Jib Cranes

Jib cranes find common use in manufacturing. Figure 10.4.17 shows a column jib crane, consisting of pivoted post a and carrying boom b, on which travels either an electric or a hand hoist c. The post a is attached to building column d so that it can swing through approximately 270°. Cranes of this type have been replaced by such other methods of handling materials as the mobile lift truck and other types of cranes. Column jib cranes are built with radii up to 20 ft (6 m) and for loads up

Fig. 10.4.17 Column jib crane.

to 5 tons (4.5 tonnes). Yard jib cranes are generally designed to meet special conditions.

Derricks
by Ronald M. Kohner, Landmark Engineering Services, Ltd.

Derricks are stationary lifting devices used to raise and reposition heavy loads. A diagonal latticed strut, or boom, from which loads are suspended is typically pinned near the base of a second vertical latticed strut, or mast. The object to be lifted is raised or lowered vertically by means of a pulley and rope system suspended from the tip of the boom. When one end of the rope is spooled onto a drum, the object is raised vertically toward the boom tip. When the rope is unspooled from the drum, the object is lowered. The bottom end of the boom is pinned near the base of the mast to allow it to be pivoted through a vertical arc usually slightly less than 90°. A second rope and pulley system connects the top of the boom to the top of the vertical mast. Spooling or unspooling this second rope from another drum moves the boom through its vertical arc. As the boom pivots in the vertical plane, any load that is suspended from the tip of the boom will be moved horizontally—either closer to or farther away from the base of the mast and boom. The top of the derrick mast is typically held in place either by two rigid tension legs fastened to anchor sills, as in a stiff-leg derrick (Fig. 10.4.18), or by a system of guy lines anchored some distance from the derrick, as in a guy derrick (Fig. 10.4.19). Both the boom and mast can be rotated horizontally about the mast's vertical axis to provide lateral movement of any load suspended from the boom tip. A derrick can therefore be used to move a load up or down (hoisting), closer or farther away from the derrick base (luffing), and side to side (swinging). The term derrick derives originally from the name of a London hangman who achieved such notoriety that his name became synonymous with the tall gallows on which he plied his gruesome trade. In early days derricks consisted of booms and masts that were simply large timbers fitted with steel end caps to which the base pins or pulley systems could be attached. Drum power was supplied by manually operated cranks. Steel soon replaced timber as the primary material for booms and masts. Power was provided first by steam boiler, then by electric motors or internal combustion engines. Lifting capacities grew from a few hundred pounds for the earliest manually powered derricks to over 600 tons. Some guy derricks have even been adapted to mount on mobile

Fig. 10.4.18 Steel stiff-leg derrick. (HydraLift AmClyde.)



boom. Booms suspended by ropes in this fashion are typically lattice-type structures built in sections that can be fastened together to lengthen or shorten the boom (Fig. 10.4.21). Boom systems of this type reach lengths of over 500 ft.

Fig. 10.4.19 Self-slewing steel guy derrick. (HydraLift AmClyde.)

crawler- and truck-type cranes to increase the inherent lifting capacity of these machines (Fig. 10.4.20). Due to their fixed location and the effort required to assemble/disassemble them, the use of derricks in recent years has been limited to a relatively few special-purpose applications. They have largely been overshadowed by mobile-type cranes in the lifting industry.

Mobile Cranes

Mobile cranes are characterized by their ability to move from lifting job to lifting job with relative ease. This is accomplished by mounting the crane structure and hoist machinery on some sort of mobile base. Another distinguishing feature of this type of crane is the strut, or boom, that protrudes diagonally upward from the crane body to provide a lifting point from which heavy loads can be suspended. A rope and pulley system is hung from the tip of this boom to provide a means for raising and lowering loads. Internal combustion engines provide power to spool the pulley rope onto drums to raise heavy loads. The diagonal position of the boom is controlled by one of two basic systems. With the first arrangement, the tip of the boom is tied back to the rear of the crane with a second rope and pulley system. Spooling this rope on or off a drum raises or lowers the tip of the boom, thereby changing its angle in a vertical arc about hinge pins at the bottom of the

Fig. 10.4.20 Guy derrick.

Fig. 10.4.21 Lattice boom crane. (Construction Safety Association of Ontario.)

The second type of boom system uses hydraulic cylinders pushing up on the bottom of the boom to support it at the desired angle. Extending or retracting the hydraulic boom support cylinder raises or lowers the tip of the boom in a vertical arc about its bottom hinge pins. Booms supported in this fashion are typically composed of several nested rectangular box sections that can be telescoped in or out with another internal hydraulic cylinder to change the length of the boom (Fig. 10.4.22). Boom systems of this type reach lengths of several hundred feet. With either type of boom system, sideways movement of a suspended load is achieved by rotating the boom horizontally on a swing roller system. The first mobile cranes were associated with the railroads and moved from location to location on their tracks. As time progressed the need to travel beyond the confines of the rail bed gave rise to crane mountings that could move over virtually any reasonably solid and level terrain.
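The luffing motion described here, for derricks as well as mobile-crane booms, follows from simple trigonometry: as the boom is raised, the load radius shortens while the tip height grows. The Python sketch below illustrates this for an assumed 100-ft boom pivoted at grade; boom-foot offset and structural deflection are ignored, so the numbers are purely illustrative.

import math

# Geometry sketch only: load radius and boom-tip height for an assumed boom.
# The boom foot offset from the center of rotation is ignored for simplicity.

def boom_tip(boom_length_ft, boom_angle_deg):
    """Return (horizontal radius, vertical height) of the boom tip, ft."""
    a = math.radians(boom_angle_deg)
    return boom_length_ft * math.cos(a), boom_length_ft * math.sin(a)

for angle in (30, 45, 60, 75):
    r, h = boom_tip(100.0, angle)          # assumed 100-ft boom
    print(f"{angle:2d} deg: radius {r:5.1f} ft, tip height {h:5.1f} ft")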

Fig. 10.4.22 Telescoping boom crane. (Crane Institute of America, Inc.)



Today, mobile cranes are grouped into one of two general categories based on the travel system on which the crane is mounted. One category of cranes is mounted on crawler tracks much like those on a military tank (Fig. 10.4.23). The second category is mounted on various types of rubber-tired vehicles. Crawler mountings offer the advantage of a more rigid base for the boom system and allow the entire crane to move forward or backward while a load remains suspended from the boom. The disadvantage of crawlers is their very slow travel speed, which imposes a practical limit on the distance that can be traveled. Crawler-mounted cranes are typically loaded on trucks or railcars for extended travel distances between jobs. Crawler cranes range in capacity from a few tons to over 1,000 tons. Fig. 10.4.25 Carrier-mounted lattice boom crane. (Crane Institute of America, Inc.)

Fig. 10.4.23 Crawler-mounted lattice boom crane. (Crane Institute of America, Inc.)

The second category of mobile cranes is mounted on rubber-tired vehicles and can be further subdivided by the configuration of that vehicle. Boom-and-hoist systems that are mounted on relatively standard, commercially available flatbed trucks are commonly termed boom trucks (Fig. 10.4.24). They are generally shorter in boom length and lower in lifting capacity than other rubber-tired cranes. Booming and hoisting functions are powered by the truck’s own engine. These cranes are equipped with hydraulic telescopic booms and are light enough to travel easily over most roads for rapid deployment. They range in capacity from a few tons to near 35 tons.

originally geared for travel only on improved roads. Today, with both off-road and highway gearing, truck-mounted cranes are available in an all-terrain (AT) configuration. They are available with either rope-supported lattice booms or hydraulic telescopic-type booms. They range in capacity from a few tons to over 1,000 tons. A third category of rubber-tire-mounted mobile crane is the rough terrain (RT) crane (Fig. 10.4.26). It is distinguished by higher ground clearance and much larger tires than other wheel-mounted cranes. Unlike conventional truck cranes, the steering and travel controls are typically located at the same operator's station as the hoist controls, thereby placing all crane movements under the control of a single individual. Like most other wheel-mounted cranes, an integral outrigger system provides the necessary stability for lifting heavy loads but confines the crane to a fixed location when on outriggers. Rough terrain cranes are typically configured only with hydraulic telescopic-type booms. These cranes in the 30- to 60-ton capacity range have become the general-purpose workhorses for most job sites where cranes are required. Capacities range from a few tons to over 100 tons.

Fig. 10.4.26 Rough terrain crane. (Construction Safety Association of Ontario.)

Fig. 10.4.24 Boom truck. (Construction Safety Association of Ontario.)

Larger rubber-tired cranes are mounted on truck carriers specially built to support the swing roller system on which bigger cranes must be mounted (Fig. 10.4.25). These truck carriers are built stronger than conventional trucks to withstand the heavier loads imposed by crane service. They are driven from a second control location remote from the position where the operator of the crane is seated. Like all wheel-mounted cranes, they are equipped with self-contained outriggers to increase the crane's stability beyond that provided by the relatively narrow tire system. Deployment of the outriggers for maximum lifting stability generally raises the crane off its wheels, thereby fixing the crane's location when loads are being lifted. Truck-mounted cranes were

Numerous attachments other than simple booms have been developed for rope-supported lattice-boom-type mobile cranes to address specific problems associated with lifting work. Several of these attachments are as follows:
1. Jib. A smaller, lighter strut mounted on top of a standard crane boom. It is typically offset at some slight angle forward from the main boom. It allows lighter loads to be lifted to higher elevations than is possible on a standard boom alone (Fig. 10.4.21). Jibs are also used on top of hydraulic telescopic booms.
2. Luffing jib. A smaller, lighter strut mounted on top of a standard crane boom. With a luffing jib, the angle that the jib is offset forward of the main boom can be varied with a rope-and-pulley system. This offers a wider selection of boom angle/jib angle combinations than are possible with a fixed jib. Luffing-type jibs are available on either lattice or hydraulic telescopic-type booms (Fig. 10.4.27).


5. Sky Horse attachment. The first of several similar attachments offered by different crane manufacturers that place an auxiliary counterweight on a wheeled trailer system directly behind the crane. A second strut, or mast, is mounted behind the Sky Horse boom and protrudes up diagonally to the rear (Fig. 10.4.30). The purpose of this attachment is to increase the capacity of a standard crawler-mounted crane by adding extra counterweight while maintaining mobility with a suspended load as the auxiliary counterweight follows behind on its own wheels. The mast provides improved geometry for the boom suspension system. The patented Sky Horse attachment approximately doubles the capacity of a standard crawler crane. Other similar attachments are known by names such as Maxer and Superlift (Fig. 10.4.31).

Fig. 10.4.27 Telescoping boom crane with luffing jib. (Crane Institute of America, Inc.)

3. Tower attachment. The normal diagonally oriented working boom is mounted on the top of a second strut, or tower, that rises vertically from the crane body. Protruding up and out from the vertical tower allows the working boom to maintain sufficient clearance to reach up over the corner of tall structures and handle loads while the crane base stands close to the front of the structure. Tower attachments are typically available only on some lattice boom cranes (Fig. 10.4.28).

Fig. 10.4.30 Sky Horse crane.

Fig. 10.4.31 Superlift crane. (Construction Safety Association of Ontario.)

Mobile cranes have grown from capacities of 20 or 30 tons several decades ago to well over 1,000 tons today. Boom lengths can reach elevations over 500 ft. Sizes are largely limited by the weight and external dimensions of crane components when moving the machines between job sites. Current equipment can move rapidly between work sites and deploy for lifting in a matter of minutes. They have become one of the most popular and necessary tools in the industry's toolbox.

Fig. 10.4.28 Mobile tower crane. (Construction Safety Association of Ontario.)

Cableways

4. Ringer attachment. A rail-like structural ring is placed completely around the tracks of a conventional crawler-mounted crane. A large supplemental counterweight is placed on top of a roller bed on the rear of the ring to increase the crane's stability. The crane's boom is mounted on a similar roller bed on top of the ring in front of the crane. A second strut, or mast, is mounted behind the ringer boom and protrudes up diagonally to the rear. The tip of the boom is attached by a rope and pulley system to the tip of the mast. The purpose of this attachment is to increase the capacity of a standard crawler-mounted crane by adding extra counterweight to the crane along with a mast to provide more advantageous geometry for the boom suspension system. When equipped with a Ringer attachment, the crane can no longer move with a load suspended. The patented Ringer attachment increases a standard crawler crane's capacity from 300 tons to over 1,000 tons (Fig. 10.4.29).

Fig. 10.4.29 Ringer crane. (Construction Safety Association of Ontario.)

Cableways are aerial hoisting and conveying devices using suspended steel cable for their tracks, the loads being suspended from carriages and moved by gravity or power. The most common uses are transporting material from open pits and quarries to the surface; handling construction material in the building of dams, docks, and other structures where the construction of tracks across rivers or valleys would be uneconomical; and loading logs on cars. The maximum clear span is 2,000 to 3,000 ft (610 to 914 m); the usual spans, 300 to 1,500 ft (91 to 457 m). The gravity type is limited to conditions where a grade of at least 20 percent is obtainable on the track cable. Transporting cableways move the load from one point to another. Hoisting transporting cableways hoist the load as well as transport it. A transporting cableway may have one or two fixed track cables, inclined or horizontal, on which the carriage operates by gravity or power. The gravity transporting type (Fig. 10.4.32-I) will either raise or lower material. It consists of one track cable a on which travels the wheeled carriage b carrying the bucket. The traction rope c attached to the carriage is made fast to power drum d. The inclination must be sufficient for the carriage to coast down and pull the traction rope after it. The carriage is hauled up by traction rope c. Drum d is provided with a brake to control the lowering speed, and material may be either raised or lowered. When it is not possible to obtain sufficient fall to operate the load by gravity, traction rope c (Fig. 10.4.32-II) is made endless so that carriage b is drawn in either direction by power drum d. Another type of inclined cableway, shown in Fig. 10.4.32-III, consists of two track cables aa, with an endless traction rope c, driven and controlled by drum d. When material is being lowered, the loaded bucket b raises the empty carriage bb, the speed being controlled by the brake on the drum.


When material is being raised, the drum is driven by power, the descending empty carriage assisting the engine in raising the loaded carriage. This type has twice the capacity of that shown in Fig. 10.4.32-I. A hoisting and conveying cableway (Fig. 10.4.32-IV) hoists the material at any point under the track cable and transports it to any other point. It consists of a track cable a and carriage b, moved by the endless traction rope c and by power drum d. The hoisting of the load is accomplished by power drum e through fall rope f, which raises the fall block g suspended from the carriage. The fall-rope carriers h support the fall rope; otherwise, the weight of this sagging rope would prevent fall block g from lowering when without load. Where it is possible to obtain a minimum inclination of 20° on the track cable, the traction-rope drum d is provided with a brake and is not power-driven. The carriage then

Fig. 10.4.32 Cableways.

descends by gravity, pulling the fall and traction ropes to the desired point. Brakes are applied to drum d, stopping the carrier. The fall block is lowered, loaded, and raised. If the load is to be carried up the incline, the carriage is hauled up by the fall rope. With this type, the friction of the carriage must be greater than that of the fall block or the load will run down. A novel development is the use of self-filling grab buckets operated from the carriages of cableways, which are lowered, automatically filled, hoisted, carried to dumping position, and discharged. The carriage speed is 300 to 1,400 ft/min (1.5 to 7.1 m/s) [in special cases, up to 1,800 ft/min (9.1 m/s)]; average hoisting speed is 100 to 700 ft/min. The average loads for coal and earth are 1 to 5 tons (0.9 to 4.5 tonnes); for rock from quarries, 5 to 20 tons; for concrete, to 12 yd3 (9.1 m3) at 50 tons. The deflection of track cables with their maximum gross loads at midspan is usually taken as 5 1/2 to 6 percent of the span. Let S = span between supports, ft; L = one-half the span, ft; w = weight of rope, lb/ft; P = total concentrated load on rope, lb; h = deflection, ft; H = horizontal tension in rope, lb. Then h = (wL + P)L/2H and P = (2hH - wL²)/L = (8hH - wS²)/2S. For track cables, a factor of safety of at least 4 is advised, though this may be as low as 3 for locked smooth-coil strands that use outer wires of high ultimate strength. For traction and fall ropes, the sum of the load and bending stress should be well within the elastic limit of the rope or, for general hoisting, about two-thirds the elastic limit (which is taken at 65 percent of the breaking strength). Let P = load on the rope, lb; A = area of metal in rope section, in²; E = 29,500,000; R = radius of curvature of hoisting drum or sheave, whichever is smaller, in; d = diameter of individual wires in rope, in (for six-strand 19-wire rope, d = 1/15 rope diam; for six-strand 7-wire rope, d = 1/9 rope diam). Then load stress per in² is T1 = P/A, and bending stress per in² is Tb = Ed/2R. The radius of curvature of saddles, sheaves, and driving drums is thus important to the fatigue life of the cable. In determining the horsepower required, the load on the traction ropes or on the fall ropes will govern, depending upon the degree of inclination.
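These relations are easy to evaluate numerically. The short Python sketch below is illustrative only; the span, rope weight, tension, rope area, and drum radius are assumed values, not handbook data.

# Cableway rope relations from the text (illustrative numbers only).
# Track cable:  h = (w*L + P)*L/(2*H)   ->   P = (2*h*H - w*L**2)/L
# Traction/fall rope:  load stress T1 = P/A;  bending stress Tb = E*d/(2*R)

# --- track cable ---
S = 1000.0            # span between supports, ft (assumed)
L = S / 2.0           # one-half the span, ft
w = 2.0               # weight of track cable, lb/ft (assumed)
H = 40_000.0          # horizontal tension, lb (assumed)
h = 0.055 * S         # deflection taken as 5.5 percent of the span, ft
P = (2.0 * h * H - w * L**2) / L       # concentrated load at midspan, lb
print(f"track cable: h = {h:.0f} ft, P = {P:,.0f} lb")

# --- traction rope bent over a drum ---
E = 29_500_000.0      # modulus used in the bending-stress formula, lb/in^2
A = 0.40              # metal area of rope section, in^2 (assumed)
rope_dia = 1.0        # rope diameter, in (assumed)
d = rope_dia / 15.0   # wire diameter for six-strand 19-wire rope, in
R = 30.0              # radius of drum or sheave, in (assumed)
P_rope = 8_000.0      # load on the traction rope, lb (assumed)
T1 = P_rope / A       # load stress, lb/in^2
Tb = E * d / (2.0 * R)   # bending stress, lb/in^2
print(f"traction rope: T1 = {T1:,.0f} lb/in^2, Tb = {Tb:,.0f} lb/in^2")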

Cable Tramways

Cable tramways are aerial conveying devices using suspended cables, carriages, and buckets for transporting material over level or mountainous country or across rivers, valleys, or hills (they transport but do not hoist). They are used for handling small quantities over long distances, and their construction cost is insignificant compared with the construction costs of railroads and bridges. Five types are in use:

Monocable, or Single-Rope, Saddle-Clip Tramway  Operates on grades to 50 percent with gravity grip, or on higher grades with spring grip; has capacity of 250 tons/h (63 kg/s) in each direction and speeds to 500 ft/min. Single-section lengths to 16 miles without intermediate stations or tension points. Can operate in multiple sections without transshipment to any desired length [monocables to 170 miles (274 km) over jungle terrain are practical]. Loads automatically leave the carrying moving rope and travel by overhead rail at angle stations and transfer points between sections with no detaching or attaching device required. Main rope constantly passes through stations for inspection and oiling. Cars are light and safe for passenger transportation.

Single-Rope Fixed-Clip Tramway  Endless rope traveling at low speed, having buckets or carriers fixed to the rope at intervals. Rope passes around horizontal sheaves at each terminal and is provided with a driving gear and constant-tension device.

Bicable, or Double-Rope, Tramway  Standing track cable and a moving endless hauling or traction rope traveling up to 500 ft/min (2.5 m/s). Used on excessively steep grades. A detacher and attacher is required to open and close the car grip on the traction rope at stations. Track cable is usually in sections of 6,000 to 7,000 ft (1,830 to 2,130 m) and counterweighted because of friction of the stiff cable over tower saddles.

Jigback, or Two-Bucket, Reversing Tramway  Usually applied to hillside operations for mine workings so that on steep slopes the loaded bucket will pull the unloaded one up as the loaded one descends under control of a brake. Loads to 10 tons (9 tonnes) are carried using a pair of track cables and an endless traction rope fixed to the buckets.

To-and-Fro, or Single-Bucket, Reversing Tramway  A single track rope and a single traction rope operated on a winding or hoist drum. Suitable for light loads to 3 tons (2.7 tonnes) for intermittent working on a hillside, similar to a hoisting and conveying cableway without the hoisting feature.

The monocable tramway (Fig. 10.4.33) consists of an endless cable a passing over horizontal sheaves d and e at the ends and supported at intervals by towers. This cable is moved continuously, and it both supports and propels carriages b and c. The carriages either are attached permanently to the cable (as in the single-rope fixed-clip tramway), in

Fig. 10.4.33 Single-rope cable tramway.

which case they must be loaded and dumped while in motion, or are attached by friction grips so that they may be connected automatically or by hand at the loading and dumping points. When the tramway is lowering material from a higher to a lower level, the grade is frequently sufficient for the loaded buckets b to raise the empty buckets c, operating the tramway by gravity, the speed being controlled by a brake on grip wheel d. The bicable tramway (Fig. 10.4.34) consists of two stationary track cables a, on which the wheeled carriages c and d travel. The endless traction rope b propels the carriages, being attached by friction grips. Figure 10.4.35 shows the arrangement of the overhead type. The track cable a is supported at intervals by towers b, which carry the saddles c

Fig. 10.4.34 Double-rope cable tramway.


in which the track cable rests. Each tower also carries the sheave d for supporting traction rope e. The self-dumping bucket f is suspended from carriage g. The grip h, which attaches the carriage to traction rope e, is controlled by lever k. In the underhung type, shown in Fig. 10.4.36, track cable a is carried above traction rope e. Saddle c on top of the tower supports the track cable, and sheave d supports the traction cable. The sheave is provided with a rope guard m. The lever h, with a roller on the end, automatically attaches and detaches the grip by coming in contact with guides at the loading and dumping points. The carriages move in


traction cable f, and pass on the track cable a. Traction cable f passes around and is driven by drum g. When the carriages are permanently attached to the traction cable, they are loaded by a moving hopper, which is automatically picked up by the carriage and carried with it a short distance while the bucket is being filled. Figure 10.4.38 shows a discharge terminal. The carriage rolls off from the track cable a to the fixed track c, being automatically ungripped. It is pushed around the 180° bend of track c, discharging into the bin underneath and continuing on track c

Fig. 10.4.38 Cable-tramway discharge terminal.
Fig. 10.4.35 Overhead-type double-rope cable tramway.

only one direction on each track. On steep downgrades, special hydraulic speed controllers are used to fix the speed of the carriages. The track cables are of the special locked-joint smooth-coil, or tramway, type. Nearly all wire rope is made of plow steel, with the old cast-steel type no longer being in use. The track cable is usually provided with a smooth outer surface of Z-shaped wires for full lock type

Fig. 10.4.36 Underhung-type double-rope cable tramway.

or with a surface with half the wires H-shaped and the rest round. Special tramway couplers are attached in the shops with zinc or are attached in the field by driving little wedges into the strand end after inserting the end into the coupler. The second type of coupling is known as a dry socket and, though convenient for field installation, is not held in as high regard for developing full cable strength. The usual spans for level ground are 200 to 300 ft (61 to 91 m). One end of the track cable is anchored; the other end is counterweighted to one-quarter the breaking strength of the rope so that the horizontal tension is a known quantity. The traction ropes are made six-strand 7-wire or six-strand 19-wire, of cast or plow steel on hemp core. The maximum diameter is 1 in, which limits the length of the sections. The traction rope is endless and is driven by a drum at one end, passing over a counterweighted sheave at the other end. Figure 10.4.37 shows a loading terminal. The track cables a are anchored at b. The carriage runs off the cable to the fixed track c, which makes a 180 bend at d. The empty buckets are loaded by chute e from the loading bin, continue around track c, are automatically gripped to

until it is automatically gripped to traction cable f. The counterweights h are attached to track cables a, and the counterweight k is attached to the carriage of the traction-rope sheave m. The supporting towers are A frames of steel or wood. At abrupt vertical angles the supports are placed close together and steel tracks installed in place of the cable. Spacing of towers will depend upon the capacity of the track cables and sheaves and upon the terrain as well as the bucket spacing.
Stress In Ropes (Roebling)  The deflection for track cables of tramways is taken as one-fortieth to one-fiftieth of the span to reduce the grade at the towers. Let S = span between supports, ft; h = deflection, ft; P = gross weight of buckets and carriages, lb; Z = distance between buckets, ft; W1 = total load per ft of rope, lb; H = horizontal tension of rope, lb. The formulas given for cableways then apply. When several buckets come in the span at the same time, special treatment is required for each span. For large capacities, the buckets are spaced close together, the load may be assumed to be uniformly distributed, and the live load per linear foot of span = P/Z. Then H = W1S²/8h, where W1 = (weight of rope per ft) + (P/Z). When the buckets are not spaced closely, the equilibrium curve can be plotted with known horizontal tension and vertical reactions at points of support. For figuring the traction rope, t0 = tension on counterweight rope, lb; t1, t2, t3, t4 = tensions, lb, at points shown in Fig. 10.4.39; n = number of carriers in motion; a = angle subtended between the line connecting the tower supports and the horizontal; W1 = weight of each loaded carrier, lb; W2 = weight of each empty carrier, lb; w = weight of traction

Fig. 10.4.39 Diagram showing traction rope tensions.

rope, lb/ft; L = length of tramway of each grade a, ft; D = diameter of end sheave, ft; d = diameter of shaft of sheave, ft; f1 = 0.015 = coefficient of friction of shaft; f2 = 0.025 = rolling friction of carriage wheels. Then, if the loads descend, the maximum stress on the loaded side of the traction rope is

t2 = t1 + Σ(Lw sin a + 1/2 nW1 sin a) - f2 Σ(Lw cos a + 1/2 nW1 cos a)

where t1 = 1/2 t0[1 + f1(d/D)]. If the load ascends, there are two cases: (1) driving power located at the lower terminal, (2) driving power at the upper terminal.
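For the descending-load case, the summations can be carried out segment by segment, as in the Python sketch below; the grade profile, carrier weight, and rope data are assumed values for illustration only.

import math

# Maximum tension on the loaded side of a tramway traction rope, loads descending:
#   t2 = t1 + sum(L*w*sin a + 0.5*n*W1*sin a) - f2 * sum(L*w*cos a + 0.5*n*W1*cos a)
#   t1 = 0.5 * t0 * (1 + f1 * d / D)
# All values below are assumed, for illustration only.

t0 = 6000.0                # tension on counterweight rope, lb
f1, f2 = 0.015, 0.025      # shaft friction and rolling friction coefficients
d, D = 0.5, 6.0            # shaft and end-sheave diameters, ft
w = 1.5                    # weight of traction rope, lb/ft
W1 = 1800.0                # weight of each loaded carrier, lb

# (length ft, grade angle deg, carriers in motion) for each grade segment
segments = [(3000.0, 8.0, 12), (2000.0, 5.0, 8)]

t1 = 0.5 * t0 * (1.0 + f1 * d / D)
grade_term = sum(L * w * math.sin(math.radians(a)) + 0.5 * n * W1 * math.sin(math.radians(a))
                 for L, a, n in segments)
friction_term = sum(L * w * math.cos(math.radians(a)) + 0.5 * n * W1 * math.cos(math.radians(a))
                    for L, a, n in segments)
t2 = t1 + grade_term - f2 * friction_term
print(f"t1 = {t1:,.0f} lb, t2 = {t2:,.0f} lb")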

Fig. 10.4.37 Cable-tramway loading terminal.


If the line has no reverse grades, it will operate by gravity at a 10 percent incline to 10 tons/h capacity and at a 4 percent grade for 80 tons/h. The preceding formula will determine whether it will operate by gravity. The power required or developed by tramways is as follows: Let V = velocity of traction rope, ft/min; P = gross weight of loaded carriage, lb; p = weight of empty carriage, lb; N = number of carriages on one track cable; P/50 = friction of loaded carriage; p/50 = friction of empty carriage; W = weight of moving parts, lb; E = length of tramway divided by difference in levels between terminals, ft. Then, power required is

hp = (NV/33,000)[(P - p)/E ± (P + p)/50] ± 0.0000001WV

where the upper signs apply when power is required and the lower signs when power is developed.
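A numerical check of this expression is sketched below in Python. All input values are assumed for illustration, and the sign convention (upper signs with 50 for power required, lower signs with 80 for power developed) follows the interpretation noted above and the sentence that follows.

# Tramway drive power, per the formula above (illustrative values only).
V = 500.0        # velocity of traction rope, ft/min
P = 2400.0       # gross weight of loaded carriage, lb
p = 600.0        # weight of empty carriage, lb
N = 20           # carriages on one track cable
W = 60_000.0     # weight of moving parts, lb
E = 10.0         # tramway length / difference in levels (about a 10 percent grade)

# Power required (loads ascending): upper signs, friction divisor 50.
hp_required = (N * V / 33_000.0) * ((P - p) / E + (P + p) / 50.0) + 1e-7 * W * V
# Power developed (loads descending): lower signs, friction divisor 80.
hp_developed = (N * V / 33_000.0) * ((P - p) / E - (P + p) / 80.0) - 1e-7 * W * V
print(f"required  : {hp_required:.1f} hp")
print(f"developed : {hp_developed:.1f} hp")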

Where power is developed by tramways, use 80 instead of 50 under P + p.

BELOW-SURFACE HANDLING (EXCAVATION)

Power Shovels

Power shovels stand upon the bottom of the pit being dug and dig above

this level. Small machines are used for road grading, basement excavation, clay mining, and trench digging; larger sizes are used in quarries, mines, and heavy construction; and the largest are used for removing overburden in opencut mining of coal and ore. The uses for these machines may be divided into two groups: (1) loading, where sturdy machines with comparatively short working ranges are used to excavate material and load it for transportation; (2) stripping, where a machine of very great dumping and digging reaches is used to both excavate the material and transport it to the dump or waste pile. The full-revolving shovel, which is the only type built at present (having entirely displaced the old railroad shovel), is usually composed of a crawler-mounted truck frame with a center pintle and roller track upon which the revolving frame can rotate. The revolving frame carries the swing and hoisting machinery and supports, by means of a socket at the lower end and cable guys at the upper end, a boom carrying guides for the dipper handles and machinery to thrust the dipper into the material being dug. Figure 10.4.40 shows a full-revolving shovel. The dipper a, of cast or plate steel, is provided with special wear-resisting teeth. It is pulled through the material by a steel cable b wrapped on a main drum c. Gasoline engines are used almost exclusively in the small sizes, and diesel, diesel-electric, or electric power units, with Ward Leonard control, in the large machines. The commonly used sizes are from 1/2 to 5 yd3 (0.4 to 3.8 m3) capacity, but special machines for coal-mine stripping are built with buckets holding up to 33 yd3 (25 m3) or even more. The very large machines are not suited for quarry or heavy rock work. Sizes up to 5 yd3 (3.8 m3) are known as quarry machines. Stripping shovels are crawler-mounted, with double-tread crawlers under each of the four corners and with power means for keeping the turntable level when traveling over uneven ground. The crowd motion consists of a chain

Fig. 10.4.40 Revolving power shovel.

which, through the rack-and-pinion mechanism, forces the dipper into the material as the dipper is hoisted and withdraws it on its downward swing. On the larger sizes, a separate engine or motor is mounted on the boom for crowding. A separate engine working through a pinion and horizontal gear g swings the entire frame and machinery to bring the dipper into position for dumping and to return it to a new digging position. Dumping is accomplished by releasing the hinged dipper bottom, which drops upon the pulling of a latch. With gasoline-engine or diesel-engine drives, there is only one prime mover, the power for all operations being taken off by means of clutches. Practically all power shovels are readily converted for operation as dragline excavators or cranes. The changes necessary are very simple in the case of the small machines; in the case of the larger machines, the installation of extra drums, shafts, and gears is required, in addition to the boom and bucket change. The telescoping-boom, hydraulically operated excavator shown in Fig. 10.4.41 is a versatile machine that can be quickly converted from the rotating-boom power shovel shown in Fig. 10.4.41a to one with a crane boom (Fig. 10.4.41b) or backhoe shovel boom (Fig. 10.4.41c). It can dig ditches reaching to 22 ft (6.7 m) horizontally and 9 ft 6 in (2.9 m) below grade; it can cut slopes, rip, scrape, dig to a depth of 12 ft 6 in (3.8 m), and load to a height of 11 ft 2 in (3.4 m). It is completely hydraulic in all powerized functions.

Dredges

Placer dredges are used for the mining of gold, platinum, and tin from

placer deposits. The usual maximum digging depth of most existing dredges is 65 to 70 ft (20 to 21 m), but one dredge is digging to 125 ft (38 m). The dredge usually works with a bank above the water of 8 to 20 ft (2.4 to 6 m). Sometimes hydraulic jets are employed to break down these banks ahead of the dredge. The excavated material is deposited astern, and as the dredge advances, the pond in which the dredge floats is carried along with it. The digging element consists of a chain of closely connected buckets passing over an idler tumbler and an upper or driving tumbler. The chain is mounted on a structural-steel ladder which carries a series of

Fig. 10.4.41 Hydraulically operated excavator. (Link-Belt.)


rollers to provide a bearing track for the chain of buckets. The upper tumbler is placed 10 to 40 ft (3 to 13 m) above the deck, depending upon the size of the dredge. Its fore-and-aft location is about 65 percent of the length of the ladder from the bow of the dredge. The ladder operates through a well in the hull, which extends from the bow practically to the upper-tumbler center. The material excavated by the buckets is dumped by the inversion of the buckets at the upper tumbler into a hopper, which feeds it to a revolving screen. Placer dredges are made with buckets ranging in capacity from 2 to 20 ft3 (0.06 to 0.6 m3). The usual speed of operation is 15 to 30 buckets per min, in the inverse order of size. The digging reaction is taken by stern spuds, which act as pivots upon which the dredge, while digging, is swung from side to side of the cut by swinging lines which lead off the dredge near the bow and are anchored ashore or pass over shore sheaves and are dead-ended near the lower tumbler on the digging ladder. By using each spud alternately as a pivot, the dredge is fed forward into the bank.
Elevator dredges, of which placer dredges are a special classification, are used principally for the excavation of sand and gravel beds from rivers, lakes, or ocean deposits. Since this type of dredge is not as a rule required to cut its own flotation, the bow corners of the hull may be made square and the digging ladder need not extend beyond the bow. The bucket chain may be of the close-connected placer-dredge type or of the open-connected type with one or more links between the buckets. The dredge is more of an elevator than a digging type, and for this reason the buckets may be flatter across the front and much lighter than the placer-dredge bucket. The excavated material is usually fed to one or more revolving screens for classification and grading to the various commercial sizes of sand and gravel. Sometimes it is delivered to sumps or settling tanks in the hull, where the silt or mud is washed off by an overflow. Secondary elevators raise the material to a sufficient height to spout it by gravity or to load it by belt conveyors to the scows.
Hydraulic dredges are used most extensively in river and harbor work, where extremely heavy digging is not encountered and spoil areas are available within a reasonable radius of the dredge. The radius may vary from a few hundred feet to a mile or more, and with the aid of booster pumps in the pipeline, hydraulic dredges have pumped material through distances in excess of 2 mi (3.2 km), at the same time elevating it more than 100 ft (30 m). This type of dredge is also used for sand-and-gravel plant operations and for land-reclamation work. Levees and dams can be built with hydraulic dredges. The usual maximum digging depth is about 50 ft (16 m). Hydraulic dredges are reclaiming copper stamp-mill tailings from a depth of 115 ft (35 m) below the water, and a depth of 165 ft (50 m) has been reached in a land-reclamation job. The usual type of hydraulic dredge has a digging ladder suspended from the bow at an angle of 45° for the maximum digging depth. This ladder carries the suction pipe and cutter, with its driving machinery, and the swinging-line sheaves. The cutter head may have applied to it 25 to 1,000 hp (3.7 to 746 kW). The 20-in (0.5-m) dredge, which is the standard, general-purpose machine, has a cutter drive of about 300 hp. The usual operating speed of the cutter is 5 to 20 r/min.
The material excavated by the cutter enters the mouth of the suction pipe, which is located within and at the lower side of the cutter head. The material is sucked up by a centrifugal pump, which discharges it to the dump through a pipeline. The shore discharge pipe is usually of the telescopic type, made of No. 10 to 3⁄10-in (3- to 7.5-mm) plates in lengths of 16 ft (5 m) so that it can be readily handled by the shore crew. Floating pipelines are usually made of plates from 1⁄4 to 1⁄2 in (6 to 13 mm) thick and in lengths of 40 to 100 ft (12 to 30 m), which are floated on pontoons and connected together through rubber sleeves or, preferably, ball joints. The floating discharge line is flexibly connected to the hull in order to permit the dredge to swing back and forth across the cut while working without disturbing the pipeline. Pump efficiency is usually sacrificed to make an economical unit for the handling of material, which may run from 2 to 25 percent of the total volume of the mixture pumped. Most designs have generous clearances and will permit the passage of stone which is 70 percent of the pipeline


diameter. The pump efficiencies vary widely but in general may run from 50 to 70 percent. Commercial dredges vary in size and discharge-pipe diameters from 12 to 30 in (0.3 to 0.8 m). Smaller or larger dredges are usually special-purpose machines. A number of 36-in (0.9-m) dredges are used to maintain the channel of the Mississippi River. The power applied to pumps varies from 100 to 3,000 hp (75 to 2,200 kW). The modern 20-in (0.5-m) commercial dredge has about 1,350 bhp (1,007 kW) applied to the pump. Diesel dredges are built for direct-connected or electric drives, and modern steam dredges have direct-turbine or turboelectric drives. The steam turbine and the dc electric motor have the advantage that they are capable of developing full rating at reduced speeds. Within its scope, the hydraulic dredge can work more economically than any other excavating machine or combination of machines.

Dragline Excavators

Dragline excavators are typically used for digging open cuts, drainage

ditches, canals, and sand and gravel pits, where the material is to be moved 20 to 1,000 ft (6 to 305 m) before dumping. They cannot handle rock unless the rock is blasted. Since they are provided with long booms and mounted on turntables, permitting them to swing through a full circle, these excavators can deposit material directly on the spoil bank farther from the point of excavation than any other type of machine. Whereas a shovel stands below the level of the material it is digging, a dragline excavator stands above and can be used to excavate material under water. Figure 10.4.42 shows a self-contained dragline mounted on crawler treads. The drive is almost exclusively gasoline in the small sizes and diesel, diesel-electric, or electric, frequently with Ward Leonard control, in the large sizes. The boom a is pivoted at its lower end to the turntable, the outer end being supported by cables b, so that it can be raised or lowered to the desired angle. The scraper bucket c is supported

Fig. 10.4.42 Dragline excavator.

by cable d, which is attached to a bail on the bucket, passes over a sheave at the head of the boom, and is made fast to the engine. A second cable e is attached to the front of the bucket and made fast to the second drum of the engine. The bucket is dropped and dragged along the surface of the material by cable e until filled. It is then hoisted by cable d, drawn back to its dumping position, e being kept tight until the dumping point is reached, when e is slacked, allowing the bucket to dump by gravity. After the bucket is filled, the boom is swung to the dumping position while the bucket is being hauled out. A good operator can throw the bucket 10 to 40 ft (3 to 12 m) beyond the end of the boom, depending on the size of machine and the working conditions. The depth of the cut varies from 12 to 75 ft (3.7 to 23 m), again depending on the size of machine and the working conditions. With the smaller machines and under favorable conditions, two or even three trips per minute are possible; but with the largest machines, even one trip per minute may not be attained. The more common sizes are for handling 3/4- to 4-yd3 (0.6- to 3-m3) buckets with boom lengths up to 100 or 125 ft (30 to 38 m), but machines have been built to handle an 8-yd3 (6-m3) bucket with a boom length of 200 ft (60 m). The same machine can handle a 12-yd3 (9-m3) bucket with the boom shortened to 165 ft (50 m).


Slackline Cableways

Used widely in sand-and-gravel plants, the slackline cableway employs an open-ended dragline bucket suspended from a carrier (Fig. 10.4.43) which runs upon a track cable. It will dig, elevate, and convey materials in one continuous operation.

tower g supports the lower end of the track cable. The bucket is raised and lowered by tensioning or slacking off the track cable. The bucket is loaded, after lowering, by pulling on the load cable. The loaded bucket, after raising, is conveyed at high speed to the dumping point and is

Fig. 10.4.43 Slackline-cable bucket and trolley.

Figure 10.4.44 shows a typical slackline-cableway operation. The bucket and carrier is a; b is the track cable, inclined to return the bucket and carrier by gravity; c is a tension cable for raising or lowering the track cable; d is the load cable; and e is a power unit with two friction drums having variable speeds. A mast or tower f is used to support guide and tension blocks at the high end of the track cable; a movable tail

Fig. 10.4.44 Slackline-cable plant. (Sauerman.)

returned at a still higher speed by gravity to the digging point. The cableway can be operated in radial lines from a mast or in parallel lines between two moving towers. It will not dig rock unless the rock is blasted. The depth of digging may vary from 5 to 100 ft.

10.5 CONVEYOR MOVING AND HANDLING
by Vincent M. Altamuro

OVERHEAD CONVEYORS
by Ivan L. Ross, Acco Chain Conveyor Division

Conveyors are primarily horizontal-movement, fixed-path, constant-speed material handling systems. However, they often contain inclined sections to change the elevation of the material as it is moving, switches to permit alternate paths, and "power-and-free" capabilities to allow the temporary slowing, stopping, or accumulating of material. Conveyors are used not only for transporting material but also for in-process storage. To the extent possible, material should be processed while being moved, so that value is added to it as it is continuously transported. They may be straight, curved, closed-loop, irreversible, or reversible. Some types of conveyors are: air blower, apron, belt, bucket, car-on-track, carousel, chain, flight, hydraulic, magnetic, monorail, pneumatic tube, power-and-free, roller, screw, skate wheel, slat, spiral, towline, and trolley.

Tables 10.5.1 and 10.5.2 describe and compare some of these. Conveyors are often used as integral components of assembly systems. They bring the correct material, at the required rate, to each worker and then to the next operator in the assembly sequence. Figure 10.5.1 shows ways conveyors can be used in assembly. Overhead conveyor systems are defined in two general classifications: the basic trolley conveyor and the power-and-free conveyor, each of which serves a definite purpose. Trolley conveyors, often referred to as overhead power conveyors, consist of a series of trolleys or wheels supported from or within an

overhead track and connected by an endless propelling means, such as chain, cable, or other linkages. Individual loads are usually suspended from the trolleys or wheels (Fig. 10.5.2). Trolley conveyors are utilized for transportation or storage of loads suspended from one conveyor which follows a single fixed path. They are normally used in applications where a balanced, continuous production is required. Track sections range from lightweight "tee" members or tubular sections to medium- and heavy-duty I-beam sections. The combinations and sizes of trolley-propelling means and track sections are numerous. Normally this type of conveyor is continually in motion at a selected speed to suit its function. Power-and-free conveyor systems consist of at least one power conveyor, but usually more, where the individual loads are suspended from one or more free trolleys (not permanently connected to the propelling means) which are conveyor-propelled through all or part of the system. Additional portions of the system may have manual or gravity means of propelling the trolleys. Worldwide industrial, institutional, and warehousing requirements of in-process and finished products have affected the considerable growth and development of power-and-free conveyors. Endless varieties of size, style, and color, in all imaginable product combinations, have extended the use of power-and-free conveyors. The power-and-free system combines the advantages of continuously driven chains with the versatile traffic system exemplified by traditional monorail unpowered systems. Thus, high-density load-transportation capabilities are coupled with complex traffic patterns and in-process or workstation requirements to enable production requirements to be met with a minimum of manual handling or transferring. Automatic dispatch systems for coding and programming of the load routing are generally used.

Trolley Conveyors

The load-carrying member of a trolley conveyor is the trolley or series of wheels. The wheels are sized and spaced as a function of the imposed

Table 10.5.1  Types of Conveyors Used in Factories

Overhead conveyors
  Power-and-free. Carriers hold parts and move on overhead track; with inverted designs, the track is attached to the factory floor. Carriers can be transferred from powered to "free" track. Features/limitations: accumulation; flexible routing; live storage; can hold parts while production steps are performed; can travel around corners and up inclines.
  Trolley. Similar to power-and-free, but carriers cannot move off the powered track. Features/limitations: same as power-and-free, but flexible routing is not possible without additional equipment.

Above-floor conveyors
  Roller, powered. Load-carrying rollers are driven by a chain or belt. Features/limitations: handles only flat-bottomed packages, boxes, pallets, or parts; can accumulate loads; can also be suspended from the ceiling for overhead handling.
  Roller, gravity. Free-turning rollers; loads are moved either by gravity or manual force. Features/limitations: when inclined, loads advance automatically.
  Skate wheel. Free-turning wheels spaced on parallel shafts. Features/limitations: for lightweight packages and boxes; less expensive than gravity rollers.
  Spiral tower. Helix-shaped track which supports parts or small "pallets" that move down the track. Features/limitations: buffer storage; provides a surge of parts to machine tools when needed.
  Magnetic. Metal belt conveyor with magnetized bed. Features/limitations: handles ferromagnetic parts, or separates ferromagnetic parts from nonferrous scrap.
  Pneumatic. Air pressure propels cylindrical containers through metal tubes. Features/limitations: moves loads quickly; can be used overhead to free floor space.
  Car-on-track. Platforms powered by a rotating shaft move along a track. Features/limitations: good for flexible manufacturing systems; precise positioning of platforms; flexible routing.

In-floor conveyors
  Towline. Carts are advanced by a chain in the floor. Features/limitations: high throughput; flexible routing possible by incorporating spurs in the towline; tow carts can be manually removed from the track; carts can travel around corners.

SOURCE: Modern Materials Handling, Aug. 5, 1983, p. 55.

Table 10.5.2  Transportation Equipment Features
(Entries give: most practical travel distance; automatic load or unload; typical load; throughput rate; travel path; typical application.)

Conveyors
  Belt: short to medium; no; cases, small parts; high; fixed; take-away from picking, sorting.
  Chain: short to medium; yes; unit; high; fixed; delivery to and from automatic storage and retrieval systems.
  Roller: short to long; yes; unit, case; high; fixed; unit loads, same as chain; cases, sorting and delivery between pick stations.
  Towline: medium to long; yes; carts; high; fixed; delivery to and from shipping or receiving.

Wire-guided vehicles
  Cart: short to long; yes; unit; low; fixed; delivery between two drop points.
  Pallet truck: short to long; yes; unit; low; fixed; delivery between two drop points.
  Tractor train: short to long; yes; unit; medium; fixed; delivery to multiple drop points.

Operator-guided vehicles
  Walkie pallet: short; no; unit; low; flexible; short hauls at shipping and receiving.
  Rider pallet: medium; no; unit; low; flexible; dock-to-warehouse deliveries.
  Tractor train: short to long; no; unit; medium; flexible; delivery to multiple drop points.

SOURCE: Modern Materials Handling, Feb. 22, 1980, p. 106.


load, the propelling means, and the track capability. The load hanger (carrier) is attached to the conveyor and generally remains attached unless manually removed. However, in a few installations, the load hanger is transferred to and from the conveyor automatically.

Max drive effort, N = (A + B + C) × 9.81
Net drive effort, N = (A + B + C - D) × 9.81
where A = fw [where w = total weight of chain, carriers, and live load, lb (kg), and f = coefficient of friction]; B = wS [where w = average carrier load per ft (m), lb/ft (kg/m), and S = total vertical rise, ft (m)]; C = 0.017f(A + B)N (where N = sum of all horizontal and vertical curves, deg); and D = ws [where w = average carrier load per ft (m), lb/ft (kg/m), and s = total vertical drop, ft (m)]. For conveyors with antifriction wheels, under clean operating conditions, the coefficient of friction f may be 0.13.
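The approximate drive-effort calculation can be scripted directly, as in the Python sketch below; the weights, rise, drop, and total curvature are invented illustrative numbers, not recommended design values.

# Approximate trolley-conveyor drive effort (illustrative values only).
f = 0.13            # coefficient of friction, antifriction wheels, clean conditions
w_total = 12_000.0  # total weight of chain, carriers, and live load, lb
w_per_ft = 15.0     # average carrier load per ft, lb/ft
rise = 30.0         # total vertical rise, ft
drop = 10.0         # total vertical drop, ft
curves_deg = 360.0  # sum of all horizontal and vertical curves, deg

A = f * w_total
B = w_per_ft * rise
C = 0.017 * f * (A + B) * curves_deg
D = w_per_ft * drop

max_effort = A + B + C          # lb
net_effort = A + B + C - D      # lb
print(f"A={A:.0f}  B={B:.0f}  C={C:.0f}  D={D:.0f} lb")
print(f"max drive effort = {max_effort:.0f} lb, net = {net_effort:.0f} lb")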

Fig. 10.5.2 Typical conveyor chains or cable.

Fig. 10.5.1 Conveyors used in assembly operations. (a) Belt conveyor, with diverters at each station; (b) carousel circulates assembly materials to workers; (c) multiple-path conveyors allow several products to be built at once; (d) gravity roller conveyors; (e) towline conveyor, moving assemblies from one group of assemblers to another. (Modern Materials Handling, Nov. 1979, page 116.)

The trolley conveyor can employ any chain length consistent with allowable propelling means and drive(s) capability. The track layout always involves horizontal turns and commonly has vertical inclines and declines. When a dimensional layout, load spacing, weights of moving loads, function, and load and unload points are determined, the chain or cable pull can be calculated. Manufacturer's data should be used for frictional values. A classical point-to-point analysis should be made, using the most unfavorable loading condition. In the absence of precise data, the following formulas can be used to find the approximate drive effort:
Max drive effort, lb = A + B + C
Net drive effort, lb = A + B + C - D

Where drive calculations indicate that the allowable chain or cable tension may be exceeded, multiple-drive units are used. When multiple drives are used for constant speed, high-slip motors or fluid couplings are commonly used. If variable speed is required with multiple drives, it is common to use direct-current motors with direct-current supply and controls. For both constant and variable speed, drives are balanced to share drive effort. Other than those mentioned, various methods of balancing are available. Complexities of overhead conveyors, particularly with varying loads on inclines and declines, as well as other influencing factors (e.g., conveyor length, environment, lubrication, and each manufacturer's design recommendations), usually require detailed engineering analysis to select the proper number of drives and their location. Particular care is also required to locate the take-up properly.
The following components or devices are used on trolley-conveyor applications:
Trolley Assembly  Wheels and their attachment portion to the propelling means (chain or cable) are adapted to particular applications, depending upon loading, duty cycle, environment, and manufacturer's design.
Carrier Attachments  These are made in three main styles: (1) enclosed tubular type, where the wheels and propelling means are carried inside; (2) semienclosed tubular type, where the wheels are enclosed and the propelling means is external; and (3) open-tee or I-beam type, where the wheels and propelling means are carried externally.
Sprocket or Traction Wheel Turn  Any arc of horizontal turn is available. Standards usually vary in increments of 15° from 15° to 180°.
Roller Turns  Any arc of horizontal turn is available. Standards usually vary in increments of 15° from 15° to 180°.
Track Turns  These are horizontal track bends without sprockets, traction wheels, or rollers; they are normally used on enclosed-track conveyors where the propelling means is fitted with horizontal guide wheels.

Track Hangers, Brackets, and Bracing  These conform to track size and shape, spaced at intervals consistent with allowable track stress and deflection applied by loading and chain or cable tensions.
Track Expansion Joints  For use in variable ambient conditions, such as ovens, these are also applied in many instances where conveyor track crosses building expansion joints.
Chain Take-Up Unit  Required to compensate for chain wear and/or variable ambient conditions, this unit may be traction-wheel, sprocket, roller, or track-turn type. Adjustment is maintained by screw, screw spring, counterweight, or air cylinder.
Incline and Decline Safety Devices  An "anti-back-up" device will ratchet into a trolley or the propelling means in case of unexpected reversal of a conveyor on an incline. An "anti-runaway" device will sense abnormal conveyor velocity on a decline and engage a ratchet into a trolley or the propelling means. Either device will arrest the uncontrolled movement of the conveyor. The anti-runaway is commonly connected electrically to cut the power to the drive unit.
Drive Unit  Usually sprocket or caterpillar type, these units are available for constant-speed or manual variable-speed control. Common speed variation is 1:3; e.g., 5 to 15 ft/min (1.5 to 4.5 m/min). Drive motors commonly range from fractional to 15 hp (11 kW).
Drive Unit Overload  Overload is detected by electrical, torque, or pull detection by any one of many available means. Usually overload will disconnect power to the drive, stopping the conveyor. When the reason for overload is determined and corrected, the conveyor may be restarted.
Equipment Guards  Often it is desirable or necessary to guard the conveyor from hostile environments and contaminants. Also, employees must be protected from accidental engagement with the conveyor components.
Transfer Devices  Usually unique to each application, automatic part or carrier loading, unloading, and transfer devices are available. With growth in the use of power-and-free, carrier transfer devices have become rare.

Power-and-Free Conveyor

The power-and-free conveyor has the highest potential application wherever there is a requirement for other than a single fixed-path flow (trolley conveyor). Power-and-free conveyors may have any number of automatic or manual switch points. A system will permit scheduled transit and delivery of work to the next assigned station automatically. Accumulation (storage) areas are designed to accommodate in-process inventory between operations. The components and chain-pull calculation discussed for powered overhead conveyors are basically applicable to the power-and-free conveyor. Addition of a secondary free track surface is provided for the work carrier to traverse. This free track is usually disposed directly below the power rail but is sometimes found alongside the power rail. (This arrangement is often referred to as a “side pusher” or “drop finger.”) The power-and-free rails are joined by brackets for rail (free-track)


continuity. The power chain is fitted with pushers to engage the work-carrier trolley. Track sections are available in numerous configurations for both the power portion and the free portion. Sections may be enclosed, semienclosed, or open in any combination. Two of the most common types are shown in Fig. 10.5.3. As an example of one configuration of power-and-free, Fig. 10.5.4 shows the ACCO Chain Conveyor Division enclosed-track power-and-free rail. In the cutaway portion, the pushers are shown engaged with the work-carrier trolley. The pushers are pivoted on an axis parallel to the chain path and swing aside to engage the pusher trolley. The pusher trolley remains engaged on level and sloped sections. At automatic or manual switching points, the leading dispatch trolley head which is not engaged with the

Fig. 10.5.4 Power-and-free conveyor rail and trolley heads. (ACCO Chain Conveyor Division.)

chain is propelled through the switch to the branch line. As the chain passes the switching point, the pusher trolley departs to the right or left from pusher engagement and arrives on a free line, where it is subject to manual or controlled gravity flow. The distance between pushers on the chain for power-and-free use is established in accordance with conventional practice, except that the minimum allowable pusher spacing must take into account the wheelbase of the trolley, the bumper length, the load size, the chain velocity, and the action of the carrier at automatic switching and reentry points. A switching headway must be allowed between work carriers. An approximation of the minimum allowable pusher spacing is that the pusher spacing will equal twice the work-carrier bumper length. Therefore, a 4-ft (1.2-m) work carrier would indicate a minimum pusher spacing of 8 ft (2.4 m). The load-transmission capabilities are a function of velocity and pusher spacing. At the 8-ft (2.4-m) pusher spacing and a velocity of 40 ft/min (0.2 m/s), five pushers per minute are made available. The load-transmission capability is five loads per minute, or 300 loads per hour.
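The pusher-spacing and load-transmission arithmetic above can be reproduced in a few lines; the Python sketch below simply restates the 4-ft carrier and 40-ft/min example from the text.

# Minimum pusher spacing and load-transmission capability of a power-and-free line.
bumper_length_ft = 4.0                 # work-carrier bumper length, ft
chain_speed_fpm = 40.0                 # chain velocity, ft/min

pusher_spacing_ft = 2.0 * bumper_length_ft            # approximate minimum spacing, ft
pushers_per_min = chain_speed_fpm / pusher_spacing_ft
loads_per_hour = pushers_per_min * 60.0                # one load per pusher

print(f"pusher spacing : {pusher_spacing_ft:.0f} ft")
print(f"capability     : {pushers_per_min:.0f} loads/min = {loads_per_hour:.0f} loads/h")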

Fig. 10.5.3 Conveyor tracks.

Method of Automatic Switching from a Powered Line to a Free Line

Power-and-free work carriers are usually switched automatically. To do this, it is necessary to have a code device on the work carrier and a decoding (reading) device along the track in advance of the track switch. Figure 10.5.5 shows the equipment relationship. On each carrier, the free trolley carries the code selection, manually or automatically introduced, which identifies it for a particular destination or routing. As the free trolley passes the reading station, the trolley intelligence is decoded and compared with a preset station code and its current knowledge of the switch position and branch-line condition; a decision is then reached which results in correct positioning of the rail switch. In Fig. 10.5.5, the equipment illustrated includes a transistorized readout station a, which supplies 12 V direct current at stainless-steel code brushes b. The code brushes are "matched" by contacts on the encoded trolley head c. When a trolley is in register and matches the code-brush positioning, it will be allowed to enter branch line d if the line is not full. If carrier e is to be entered, the input signal is rectified and


amplified so as to drive a power relay at junction box f, which in turn actuates solenoid g to operate track switch h to the branch-line position. A memory circuit is established in the station, indicating that a full-line condition exists. This condition is maintained until the pusher pin of the switched carrier clears reset switch j.
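The routing decision made at a reading station can be summarized as simple logic. The Python sketch below is a hypothetical rendering with invented function and signal names; it is not the circuitry of any particular controller, but it covers the matching, full-line, and nonmatching cases described below.

# Hypothetical decoding-station logic for a power-and-free track switch.
# Names and behavior are illustrative only; real controllers are relay- or PLC-based.

def position_switch(carrier_code: str, station_code: str,
                    branch_spaces_free: int, stop_on_full: bool = False) -> str:
    """Return the action taken as a carrier passes the reading station."""
    if carrier_code != station_code:
        return "bypass"                     # nonmatching code: keep switch on the main line
    if branch_spaces_free > 0:
        return "switch_to_branch"           # matching code and space available
    if stop_on_full:
        return "stop_conveyor_and_signal"   # hold until the full-line condition is cleared
    return "recirculate"                    # retest the destination on the next pass

# Example: a carrier coded for station "B2" arriving at station "B2".
print(position_switch("B2", "B2", branch_spaces_free=0))   # -> recirculate
print(position_switch("B2", "B2", branch_spaces_free=3))   # -> switch_to_branch
print(position_switch("A7", "B2", branch_spaces_free=3))   # -> bypass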

space provision for arriving and departing carriers. (Considerable knowledge of work-rate standards and production-schedule requirements is needed for accurate sizing.) (4) As manned or automatic storage lines, especially for handling production imbalance for later consumption. Nonmanned automatic free lines require that the carrier be controlled in conformance with desired conveyor function and with regard to the commodity being handled. The two principal ways to control carriers in nonmanned free rails are (1) to slope the free rail so that all carriers will start from rest and use incremental spot retarders to check velocity, and (2) to install horizontal or sloped rails and use auxiliary power conveyor(s) to accumulate carriers arriving in the line. In manned free lines, it is usual to have slope at the automatic arrival and automatic departure sections only. These sections are designed for automatic accumulation of a finite number of carriers, and retarding or feeding devices may be used. Throughout the remainder of the manned station, the carrier is propelled by hand.

Method of Automatic Switching from a Free Line to a Powered Line

Power-and-free work carriers can be reentered into the powered

Fig. 10.5.5 Switch mechanism exiting trolley from power rail to free rail. (ACCO Chain Conveyor Division.)

The situations which can be handled by the decoding stations are as follows: (1) A trolley with a matching code is in register; there is space in the branch line. The carrier is allowed to enter the branch line. (2) A trolley with a matching code is in register; there is no space in the branch line. The carrier is automatically recirculated on the powered system and will continue to test its assigned destination until it can be accommodated. (3) A trolley with a matching code is in register; there is no space in the branch line. The powered conveyor is automatically stopped, and visual and/or audible signals are started. The conveyor can be automatically restarted when the full-line condition is cleared, and the waiting carrier will be allowed to enter. (4) A trolley with a nonmatching code is in register; in this case, the decoding station always returns the track switch to the main-line position, if necessary, and bypasses the carrier.

Use and Control of Carriers on Free and Gravity Lines

Free and gravity lines are used as follows: (1) To connect multiple power-and-free conveyors, thus making systems easily extensible and permitting different conveyor designs for particular use. (2) To connect two auxiliary devices such as vertical conveyors and drop-lift stations. (3) As manned or automatic workstations. In this case, the size of the station depends on the number of carriers processed at one time, with additional

lines either manually or automatically. The carrier must be integrated with traffic already on the powered line and must be entered so that it will engage with a pusher on the powered chain. Figure 10.5.6 illustrates a typical method of automatic reentry. The carrier a is held at rest on a slope in the demand position by the electric trolley stop b. The demand to enter enables the sensing switches c and d mounted on the powered rail to test for the availability of a pusher. When a pusher that is not propelling a load is sensed, all conditions are met and the carrier is released by the electric trolley stop such that it arrives in the pickup position in advance of the pusher. A retarding or choke device can be used to keep the entering carrier from overrunning the next switch position or another carrier in transit. The chain pusher engages the pusher trolley and departs with the carrier. The track switch f can remain in the branch-line position until a carrier on the main line g would cause the track switch to reset. Automatic reentry control ensures that no opportunity to use a pusher is overlooked and does not require the time of an operator.

Power-and-Free Conveyor Components

All components used on trolley conveyors are applied to power-and-free conveyors. Listed below are a few of the various components unique to power-and-free systems.
TRACK SWITCH. This is used for diverting work carriers either automatically or manually from one line or path to another. Any one system may have both. Switching may be either to the right or to the left. Automatic switches are usually operated pneumatically or electromechanically. Track switches are also used to merge two lines into one.

Fig. 10.5.6 Switch mechanism reentering trolley from free rail to power rail. (ACCO Chain Conveyor Division.)

TROLLEY STOPS. Used to stop work carriers, these operate either automatically or manually on a free track section or on a powered section. Automatically, stops are usually operated pneumatically or electromechanically.
STORAGE. Portions or spurs of power-and-free conveyors are usually dedicated to the storage (accumulation) of work carriers. Unique to the design, type, or application, storage may be accomplished on (1) level hand-pushed lines, (2) gravity sloped lines (usually with overspeed-control retarders), (3) power lines with spring-loaded pusher dogs, or (4) powered lines with automatic accumulating free trolleys.
INCLINES AND DECLINES. As in the case of trolley conveyors, vertical inclines and declines are common to power-and-free. In addition to safety devices used on trolley conveyors, similar devices may be applied to free trolleys.
LOAD BARS AND CARRIERS. The design of load bars, bumpers, swivel devices, index devices, hooks, or carriers is developed at the time of the initial power-and-free investigation. The system will see only the carrier, and all details of system design are a function of its design. How the commodity is being handled on or in the carrier is carefully considered to facilitate its use throughout the system, manually, automatically, or both.
VERTICAL CONVEYOR SECTIONS. Vertical conveyor sections are often used as an accessory to power-and-free. For practical purposes, the vertical conveyor can be divided into two classes of devices:
DROP (LIFT) SECTION. This device is used to drop (or lift) the work carrier vertically to a predetermined level in lieu of vertical inclines or declines. One common reason for its use is to conserve space. The unit may be powered by a cylinder or hoist, depending on the travel distance, cycle time, and load. One example of the use of a drop (lift) section is to receive a carrier on a high level and lower it to an operations level. The lower level may be a load-unload station or processing station. Automatic safety stops are used to close open rail ends.
INTERFLOOR VERTICAL CONVEYOR. When used for interfloor service and long lifts, the vertical conveyor may be powered by high-speed hoists or elevating machines. In any case, the carriers are automatically transferred to and from the lift, and the dispatch control on the carrier can instruct the machine as to the destination of the carrier. Machines can be equipped with a variety of speeds and operating characteristics. Multiple carriers may be handled, and priority-call control systems can be fitted to suit individual requirements.


otherwise controlled, as from a tandem elevator or conveyor. As the feeder is interlocked, either mechanically or electrically, the feed stops if the conveyor stops. Flight conveyors may be classified as scraper type (Fig. 10.5.8), in which the element (chain and flights) rests on the trough; suspended-flight type (Fig. 10.5.9), in which the flights are carried clear of the trough by shoes resting on guides; and suspended-chain type (Fig. 10.5.10), in which the chain rests on guides, again carrying the flights clear of the trough. These types are further differentiated as single-strand (Figs. 10.5.8 and 10.5.9) and double-strand (Fig. 10.5.10). For lumpy material, the latter has the advantage since the lumps will enter the trough without interference. For heavy duty also, the double strand has the advantage, in that the pull is divided between

Fig. 10.5.9 Single-strand suspended-flight conveyor.

Fig. 10.5.10 Double-strand roller-chain flight conveyor.

two chains. A special type for simultaneous handling of several materials may have the trough divided by longitudinal partitions. The material having the greatest coefficient of friction is then carried, if possible, in the central zone to equalize chain wear and stretch. Improvements in the welding and carburizing of welded-link chain have made possible its use in flight conveyors, offering several significant advantages, including economy and flexibility in all directions. Figure 10.5.11 shows a typical scraper-type flight cast from malleable iron incorporated onto a slotted conveyor bed. The small amount of fines that fall through the slot are returned to the top of the bed by the returning flights. Figure 10.5.12 shows a double-chain scraper conveyor in which the ends of the flights ride in a restrictive channel. These types of flight conveyors are driven by pocket wheels.

NONCARRYING CONVEYORS

Flight Conveyors

Flight conveyors are used for moving granular, lumpy, or pulverized materials along a horizontal path or on an incline seldom greater than about 40°. Their principal application is in handling coal. The flight conveyor of usual construction should not be specified for a material that is actively abrasive, such as damp sand and ashes. The drag-chain conveyor (Fig. 10.5.7) has an open-link chain, which serves, instead of flights, to push the material along. With a hard-faced concrete or cast-iron trough, it serves well for handling ashes. The return run is, if possible, above the carrying run, so that the dribble will be back into the loaded run. A feeder must be provided unless the feed is

Fig. 10.5.11 Scraper flight with welded chain.

Fig. 10.5.7 Drag chain.

Fig. 10.5.8 Single-strand scraper-flight conveyor.

Flight conveyors of small capacity operate usually at 100 to 150 ft/min (0.51 to 0.76 m/s). Large-capacity conveyors operate at 100 ft/min (0.51 m/s) or slower; their long-pitch chains hammer heavily against the drive-sprocket teeth or pocket wheels at higher speeds.


A conveyor steeply inclined should have closely spaced flights so that the material will not avalanche over the tops of the flights. The capacity of a given conveyor diminishes as the angle of slope increases. For the heaviest duty, hardened-face rollers at the articulations are essential.

Fig. 10.5.12 Scraper flight using parallel welded chains.
Cautions in Flight-Conveyor Selections With abrasive material, the trough design should provide for renewal of the bottom plates without disturbing the side plates. If the conveyor is inclined and will reverse when halted under load, a solenoid brake or other automatic backstop should be provided. Chains may not wear or stretch equally. In a double-strand conveyor, it may be necessary to shift sections of chain from one side to the other to even up the lengths. Intermediate slide gates should be set to open in the opposite direction to the movement of material in the conveyor.
The continuous-flow conveyor serves as a conveyor, as an elevator, or as a combination of the two. It is a slow-speed machine in which the material moves as a continuous core within a duct. Except with the Redler conveyor, the element is formed by a single strand of chain with closely spaced impellers, somewhat resembling the flights of a flight conveyor. The Bulk-flo (Fig. 10.5.13) has peaked flights designed to facilitate the outflow of the load at the point of discharge. The load, moved by a positive push of the flights, tends to provide self-clearing action at the end of a run, leaving only a slight residue. The Redler (Fig. 10.5.14) has skeletonized or U-shaped impellers which move the material in which they are submerged because the resistance to slip through the element is greater than the drag against the walls of the duct.
Fig. 10.5.13 Bulk-flo continuous-flow elevator.
Materials for which the continuous-flow conveyor is suited are listed below in groups of increasing difficulty. The constant C is used in the power equations below, and in Fig. 10.5.16.
C = 1: clean coal, flaxseed, graphite, soybeans, copra, soap flakes
C = 1.2: beans, slack coal, sawdust, wheat, wood chips (dry), flour
C = 1.5: salt, wood chips (wet), starch
C = 2: clays, fly ash, lime (pebble), sugar (granular), soda ash, zinc oxide
C = 2.5: alum, borax, cork (ground), limestone (pulverized)
Among the materials for which special construction is advised are bauxite, brown sugar, hog fuel, wet coal, shelled corn, foundry dust, cement, bug dust, and hot brewers' grains. The machine should not be specified for ashes, bagasse, carbon-black pellets, sand and gravel,

sewage sludge, and crushed stone. Fabrication from corrosion-resistant materials such as brass, monel, or stainless steel may be necessary for use with some corrosive materials. Where a single runaround conveyor is required with multiple feed points and some recirculation of excess load, the Redler serves. The U-frame flights do not squeeze the loads, as they resume parallelism after separating when rounding the terminal wheels. As an elevator, this machine will also handle sluggish materials that do not flow out readily. A pusher plate opposite the discharge chute can be employed to enter

Fig. 10.5.14 Redler U-type continuous-flow conveyor.

between the legs of the U flights to push out such material. When horizontal or inclined, continuous-flow elevators such as the Redler are normally self-cleaning. When vertical, the Redler type can be made self-cleaning (except with sticky materials) by use of special flights. Continuous-flow conveyors and elevators do not require a feeder (Fig. 10.5.15). They are self-loading to capacity and will not overload, even though there are several open- or uncontrolled-feed openings, since the duct fills at the first opening and automatically prevents the entrance of additional material at subsequent openings. Some special care may be required with free-flowing material.

Fig. 10.5.15 Shallow-track hopper for continuous-flow conveyor with feed to return run.


The duct is easily insulated by sheets of asbestos cement or similar material to reduce cooling in transit. As the duct is completely sealed, there is no updraft where the lift is high. The material is protected from exposure and contamination or contact with lubricants. The handling capacity for horizontal or inclined lengths (nearly to the angle of repose of the material) approximates 100 percent of the volume swept through by the movement of the element. For steeper inclines or elevators, it is between 50 and 90 percent. If the material is somewhat abrasive, as with wet bituminous coal, the duct should be of corrosion-resistant steel, of extra thickness, and the chain pins should be both extremely hard and of corrosion-resistant material. A long horizontal run followed by an upturn is inadvisable because of radial thrust. Lumpy material is difficult to feed from a track hopper. An automatic brake is unnecessary, as an elevator will reverse only a few inches when released. The motor horsepower P required by continuous-flow conveyors for the five arrangements shown in Fig. 10.5.16 is given in the accompanying formulas in terms of the capacity T, in tons per h; the horizontal run H, ft; the vertical lift V, ft; and the constant C, values for which are given above. If loading from a track hopper, add 10 percent.


Screw Conveyors

The screw, or spiral, conveyor is used quite widely for pulverized or granular, noncorrosive, nonabrasive materials when the required capacity is moderate, when the distance is not more than about 200 ft (61 m), and when the path is not too steep. It usually costs substantially less than any other type of conveyor and is readily made dusttight by a simple cover plate. The conveyor will handle lumpy material if the lumps are not large in proportion to the diameter of the helix. If the length exceeds that advisable for a single conveyor, separate or tandem units are readily arranged. Screw conveyors may be inclined. A standard-pitch helix will handle material on inclines up to 35°. The reduction in capacity as compared with the capacity when horizontal is indicated in the following table:

Inclination, deg                  10    15    20    25    30    35
Reduction in capacity, percent    10    26    45    58    70    78
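For quick estimates, the tabulated reduction can be wrapped in a small lookup; the sketch below (Python) interpolates linearly between the listed inclinations, which is an assumption of convenience rather than something the table states, and the function name is mine.

```python
# Capacity reduction for inclined standard-pitch screw conveyors, from the table
# above.  Values between the listed inclinations are linearly interpolated
# (an assumption; the table gives only the discrete points).
REDUCTION = [(0, 0), (10, 10), (15, 26), (20, 45), (25, 58), (30, 70), (35, 78)]

def inclined_capacity(horizontal_capacity, incline_deg):
    """Capacity of an inclined screw conveyor, given its horizontal capacity."""
    for (d1, r1), (d2, r2) in zip(REDUCTION, REDUCTION[1:]):
        if d1 <= incline_deg <= d2:
            r = r1 + (r2 - r1) * (incline_deg - d1) / (d2 - d1)
            return horizontal_capacity * (1 - r / 100.0)
    raise ValueError("inclination outside the 0-35 deg range of the table")

print(inclined_capacity(1000, 20))   # 1,000 ft3/h horizontal -> 550 ft3/h at 20 deg
```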

Abrasive or corrosive materials can be handled with suitable construction of the helix and trough. The standard screw-conveyor helix (Fig. 10.5.17) has a pitch approximately equal to its outside diameter. Other forms are used for special cases.

Fig. 10.5.17 Spiral conveyor.

Short-pitch screws are advisable for inclines above 29°. Variable-pitch screws, with short pitch at the feed end, automatically

control the flow to the conveyor so that the load is correctly proportioned for the length beyond the feed point. With a short section either of shorter pitch or of smaller diameter, the conveyor is self-loading to capacity and does not require a feeder.

Fig. 10.5.18 Cut-flight conveyor.

Cut flights (Fig. 10.5.18) are used for conveying and mixing cereals, grains, and other light materials. Ribbon screws (Fig. 10.5.19) are used for wet and sticky materials, such as molasses, hot tar, and asphalt, which might otherwise build up on the spindle. Paddle screws are used primarily for mixing such materials as mortar and bitulithic paving mixtures. One typical application is to churn ashes and water to eliminate dust.

Fig. 10.5.19 Ribbon conveyor.

Fig. 10.5.16 Continuous-flow-conveyor arrangements.

Standard constructions have a plain or galvanized-steel helix and trough. For abrasives and corrosives such as wet ashes, both helix and trough may be of hard-faced cast iron. For simple abrasives, the outer edge of the helix may be faced with a renewable strip of Stellite or


Table 10.5.3 Capacities and Speed of Spiral Conveyors

Group   Max percent of cross section    Max density of material,   Max r/min for 6-in     Max r/min for 20-in
        occupied by the material        lb/ft3 (kg/m3)             (152-mm) diameter      (508-mm) diameter
1       45                              50 (800)                   170                    110
2       38                              50 (800)                   120                    75
3       31                              75 (1,200)                 90                     60
4       25                              100 (1,600)                70                     50
5       12 1/2                                                     30                     25

Group 1 includes light materials such as barley, beans, brewers grains (dry), coal (pulv.), corn meal, cottonseed meal, flaxseed, flour, malt, oats, rice, wheat. The value of the factor F is 0.5. Group 2 includes fines and granular materials. The values of F are alum (pulv.), 0.6; coal (slack or fines), 0.9; coffee beans, 0.4; sawdust, 0.7; soda ash (light), 0.7; soybeans, 0.5; fly ash, 0.4. Group 3 includes materials with small lumps mixed with fines. Values of F are alum, 1.4; ashes (dry), 4.0; borax, 0.7; brewers grains (wet), 0.6; cottonseed, 0.9; salt, coarse or fine, 1.2; soda ash (heavy), 0.7. Group 4 includes semiabrasive materials, fines, granular and small lumps. Values of F are acid phosphate (dry), 1.4; bauxite (dry), 1.8; cement (dry), 1.4; clay, 2.0; fuller's earth, 2.0; lead salts, 1.0; limestone screenings, 2.0; sugar (raw), 1.0; white lead, 1.0; sulfur (lumpy), 0.8; zinc oxide, 1.0. Group 5 includes abrasive lumpy materials which must be kept from contact with hanger bearings. Values of F are wet ashes, 5.0; flue dirt, 4.0; quartz (pulv.), 2.5; silica sand, 2.0; sewage sludge (wet and sandy), 6.0.

similar extremely hard material. For food products, aluminum, bronze, monel metal, or stainless steel is suitable but expensive.
Table 10.5.3 gives the capacities, allowable speeds, percentages of helix loading for five groups of materials, and the factor F used in estimating the power requirement. Table 10.5.4 gives the handling capacities for standard-pitch screw conveyors in each of the five groups of materials when the conveyors are operating at the maximum advised speeds and in the horizontal position. The capacity at any lower speed is in the ratio of the speeds.
Power Requirements The power requirements for horizontal screw conveyors of standard design and pitch are determined by the Link-Belt Co. by the formula that follows. Additional allowances should be made for inclined conveyors, for starting under load, and for materials that tend to stick or pack in the trough, as with cement.

H = hp at conveyor head shaft = (ALN + CWLF) × 10⁻⁶

where A = factor for size of conveyor (see Table 10.5.5); C = quantity of material, ft3/h; L = length of conveyor, ft; F = factor for material (see Table 10.5.3); N = r/min of conveyor; W = density of material, lb/ft3. The motor size depends on the efficiency E of the drive (usually close to 90 percent); a further allowance G, depending on the horsepower, is made:

H    Up to 1    1–2    2–4    4–5    Over 5
G    2          1.5    1.25   1.1    1.0

Motor hp = HG/E

When the material is distributed into a bunker, the conveyor has an open-bottom trough to discharge progressively over the crest of the pile so formed. This trough reduces the capacity and increases the required power, since the material drags over the material instead of over a polished trough. If the material contains unbreakable lumps, the helix should clear the trough by at least the diameter of the average lump. For a given capacity, a conveyor of larger size and slower speed is preferable to a conveyor of minimum size and maximum speed. For large capacities

and lengths, the alternatives (a flight conveyor or a belt conveyor) should receive consideration.

EXAMPLES. 1. Slack coal, 50 lb/ft3 (800 kg/m3); desired capacity, 50 tons/h (2,000 ft3/h) (45 tonnes/h); conveyor length, 60 ft (18 m); 14-in (0.36-m) conveyor at 80 r/min. F for slack coal = 0.9 (group 2).

H = (255 × 60 × 80 + 2,000 × 50 × 60 × 0.9)/1,000,000 = 6.6
Motor hp = (6.6 × 1.0)/0.90 = 7.3 (5.4 kW)

Use 7 1/2-hp motor.

2. Limestone screenings, 90 lb/ft3; desired capacity, 10 tons/h (222 ft3/h); conveyor length, 50 ft; 9-in conveyor at 50 r/min. F for limestone screenings = 2.0 (group 4).

H = (96 × 50 × 50 + 222 × 90 × 50 × 2.0)/1,000,000 = 2.24
Motor hp = (2.24 × 1.25)/0.90 = 3.1

Use 3-hp motor.

Chutes

Bulk Material If the material is fragile and cannot be sent through a simple vertical chute, a retarding chute may be specified. Figure 10.5.20 shows a ladder chute in which the material trickles over shelves instead of falling freely. If it is necessary to minimize breakage when material is fed from a bin, a vertical box chute with flap doors opening inward, as shown in Fig. 10.5.21, permits the material to flow downward only from the top surface and eliminates the degradation that results from a converging flow from the bottom of the mass. Straight inclined chutes for coal should have a slope of 40 to 45°. If it is found that the coal accelerates objectionably, the chute may be provided with cross angles over which the material cascades at reduced speed (Fig. 10.5.22). Lumpy material such as coke and large coal, difficult to control when flowing from a bin, can be handled by a chain-controlled feeder chute with a screen of heavy endless chains hung on a sprocket shaft (Fig. 10.5.23). The weight of the chain curtain holds the material in the chute. When a feed is desired, the sprocket shaft is revolved slowly, either manually or by a motorized reducer.
Unit Loads Mechanical handling of unit loads, such as boxes, barrels, packages, castings, crates, and palletized loads, calls for methods and mechanisms entirely different from those adapted to the movement of bulk materials.
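Returning to the screw-conveyor drive calculation: the Link-Belt formula, the allowance table, and the two worked examples above reduce to a few lines of arithmetic. The sketch below (Python) restates them; the factor-A values are those of Table 10.5.5, the material factor F is passed in from Table 10.5.3, the 0.90 drive efficiency follows the text, and the function names are mine.

```python
# Screw-conveyor drive power per the Link-Belt formula given above:
#   H = (A*L*N + C*W*L*F) * 1e-6     horsepower at the conveyor head shaft
#   motor hp = H * G / E             E = drive efficiency (~0.90), G from the allowance table
A_FACTOR = {6: 54, 9: 96, 10: 114, 12: 171, 14: 255, 16: 336, 18: 414, 20: 510, 24: 690}

def allowance_G(H):
    """Allowance G as tabulated above (boundaries read as 'up to')."""
    if H <= 1:  return 2.0
    if H <= 2:  return 1.5
    if H <= 4:  return 1.25
    if H <= 5:  return 1.1
    return 1.0

def screw_conveyor_motor_hp(size_in, C_ft3_per_h, W_lb_per_ft3, L_ft, N_rpm, F, E=0.90):
    """Return (head-shaft hp, required motor hp)."""
    H = (A_FACTOR[size_in] * L_ft * N_rpm + C_ft3_per_h * W_lb_per_ft3 * L_ft * F) * 1e-6
    return H, H * allowance_G(H) / E

# Example 1 above: slack coal, 2,000 ft3/h at 50 lb/ft3, 60-ft, 14-in conveyor at 80 r/min, F = 0.9
print(screw_conveyor_motor_hp(14, 2000, 50, 60, 80, 0.9))   # about (6.6, 7.3) -> use a 7 1/2-hp motor
```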

Table 10.5.4 Screw-Conveyor Capacities (ft3/h)

          Conveyor size, in*
Group     6      9       10      12      14      16      18      20
1         350    1,100   1,600   2,500   4,000   5,500   7,600   10,000
2         220    700     950     1,600   2,400   3,400   4,500   6,000
3         150    460     620     1,100   1,600   2,200   3,200   4,000
4         90     300     400     650     1,000   1,500   2,000   2,600
5         20     68      90      160     240     350     500     650

* Multiply by 25.4 to obtain mm.


Table 10.5.5 Factor A
(Self-lubricating bronze bearings assumed)

Diam of conveyor, in    6     9     10    12    14    16    18    20    24
                  mm    152   229   254   305   356   406   457   508   610
Factor A                54    96    114   171   255   336   414   510   690

Fig. 10.5.20 Ladder chute.

maximum size of package that can be handled. These chutes may have receive and discharge points at any desired floors. There are certain kinds of commodities, such as those made of metal or bound with wire or metal bands, that cannot be handled satisfactorily unless the spiral chute is designed to handle only that particular commodity. Sheet-metal spirals can be built with double or triple blades, all mounted on the same standpipe. Another form of sheet-metal spiral is the open-core type, which is especially adaptable for handling long and narrow articles or bulky classes of merchandise or for use where the spiral must wind around an existing column or pass through floors in locations limited by beams or girders that cannot be conveniently cut or moved. For handling bread or other food products, it is customary to have the spiral tread made from monel metal or aluminum.

Fig. 10.5.21 Box chute with flap doors. Chute is always full up to discharging point.

Spiral chutes are adapted for the direct lowering of unit loads of various shapes, sizes, and weights, so long as their slide characteristics do not vary widely. If they do vary, care must be exercised to see that items accelerating on the selected helix pitch do not crush or damage those ahead.

Fig. 10.5.22 Inclined chute with cross angles.

Fig. 10.5.23 Chain-controlled feeder chute.

A spiral chute may extend through several floors, e.g., for lowering parcels in department stores to a basement shipping department. The opening at each floor must be provided with automatic closure doors, and the design must be approved by the Board of Fire Underwriters. At the discharge end, it is usual to extend the chute plate horizontally to a length in which the loads can come to rest. A tandem gravity roll conveyor may be advisable for distribution of the loads. The sheet-metal spiral (Fig. 10.5.24) has a fixed blade and can be furnished in varying diameters and pitches, both of which determine the

Fig. 10.5.24 Metal spiral chute.

CARRYING CONVEYORS

Apron Conveyors

Apron conveyors are specified for granular or lumpy materials. Since the

load is carried and not dragged, less power is required than for screw or scraper conveyors. Apron conveyors may have stationary skirt or side plates to permit increased depth of material on the apron, e.g., when used as a feeder for taking material from a track hopper (Fig. 10.5.25) with

Fig. 10.5.25 Track hopper and apron feeder supplying a gravity-discharge bucket-elevator boot.


controlled rate of feed. They are not often specified if the length is great, since other types of conveyor are substantially lower in cost. Sizes of lumps are limited by the width of the pans and the ability of the conveyor to withstand the impact of loading. Only end discharge is possible. The apron conveyor (Fig. 10.5.26) consists of two strands of roller chain separated by overlapping apron plates, which form the carrying surface, with sides 2 to 6 in (51 to 152 mm) high. The chains are driven by sprockets at one end, take-ups being provided at the other end. The conveyors always pull the material toward the driving end. For light duty, flangeless rollers on flat rails are used; for heavy duty, single-flanged rollers and T rails are used. Apron conveyors may be run without feeders, provided that the opening of the feeding hopper is made sufficiently narrow to prevent material from spilling over the sides of the conveyor after passing from the opening. When used as a conveyor, the speed should not exceed 60 ft/min (0.30 m/s); when used as a feeder, 30 ft/min (0.15 m/s). Table 10.5.6 gives the capacities of apron feeders with material weighing 50 lb/ft3 (800 kg/m3) at a speed of 10 ft/min (0.05 m/s).

Fig. 10.5.28 V-bucket carrier.

Chain pull for horizontal-apron conveyor:

2LF(W + W1)

Chain pull for inclined-apron conveyor:

L(W + W1)(F cos θ + sin θ) + WL(F cos θ - sin θ)

where L = conveyor length, ft; W = weight of chain and pans per ft, lb; W1 = weight of material per ft of conveyor, lb; θ = angle of inclination, deg; F = coefficient of rolling friction, usually 0.1 for plain roller bearings or 0.05 for antifriction bearings.
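A short sketch of the chain-pull relations above (Python); the symbols follow the text, and the numbers in the usage line are invented purely to show the call.

```python
from math import cos, sin, radians

def apron_chain_pull(L_ft, W_lb_per_ft, W1_lb_per_ft, F=0.1, incline_deg=0.0):
    """Chain pull, lb, for an apron conveyor per the formulas above.
    L = conveyor length, W = weight of chain and pans per ft,
    W1 = weight of material per ft, F = rolling-friction coefficient
    (0.1 for plain roller bearings, 0.05 for antifriction bearings)."""
    if incline_deg == 0.0:
        return 2.0 * L_ft * F * (W_lb_per_ft + W1_lb_per_ft)
    th = radians(incline_deg)
    return (L_ft * (W_lb_per_ft + W1_lb_per_ft) * (F * cos(th) + sin(th))
            + W_lb_per_ft * L_ft * (F * cos(th) - sin(th)))

# Hypothetical numbers, only to illustrate the call:
print(apron_chain_pull(L_ft=50, W_lb_per_ft=40, W1_lb_per_ft=60, F=0.1, incline_deg=15))
```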

Fig. 10.5.27 Open-top carrier.

Fig. 10.5.26 Apron conveyor.

Bucket Conveyors and Elevators

Open-top bucket carriers (Fig. 10.5.27) are similar to apron conveyors,

except that dished or bucket-shaped receptacles take the place of the flat or corrugated apron plates used on the apron conveyor. The carriers will operate on steeper inclines than apron conveyors (up to 70°), as the buckets prevent material from sliding back. Neither sides extending above the tops of buckets nor skirtboards are necessary. Speed, when loaded by a feeder, should not exceed 60 ft/min (0.30 m/s); when dragging the load from a hopper or bin, 30 ft/min. The capacity should be calculated on the basis of the buckets being three-fourths full, the angle of inclination of the conveyor determining the loading condition of the bucket.
V-bucket carriers are used for elevating and conveying nonabrasive materials, principally coal when it must be elevated and conveyed with one piece of apparatus. The length and height lifted are limited by the strength of the chains and seldom exceed 75 ft (22.9 m). These carriers can operate on any incline and can discharge at any point on the horizontal run. The size of lumps carried is limited by the size and spacing of the buckets. The carrier consists of two strands of roller chain separated by V-shaped steel buckets. Figure 10.5.28 shows the most common form, where material is received on the lower horizontal run, elevated, and discharged through openings in the bottom of the trough of the upper horizontal run. The material is scraped along the horizontal trough of the conveyor, as in a flight conveyor. The steel guard plates a at the right prevent spillage at the bends. Figure 10.5.29 shows a different form, where material is dug by the elevator from a boot, elevated vertically, scraped along the horizontal run, and discharged through gates in the bottom of the trough. Figure 10.5.30 shows a variation of the type shown in Fig. 10.5.29, requiring one less bend in the conveyor. The troughs are of steel or steel-lined wood. When feeding material to the horizontal run, it is advisable to use an automatic feeder driven by

Fig. 10.5.29 and 10.5.30 V-bucket carriers.

power from one of the bend shafts to prevent overloading. Should the buckets of this type of conveyor be overloaded, they will spill on the vertical section. The drive is located at b, with take-up at c. The speed should not exceed 100 ft/min (0.51 m/s) when large material is being handled, but when material is small, speed may be increased to 125 ft/min (0.64 m/s). The best results are obtained when speeds are kept low. Table 10.5.7 gives the capacities and weights based on an even and continuous feed. Pivoted-bucket carriers are used primarily where the path is a runaround in a vertical plane. Their chief application has been for the dual

Table 10.5.6 Capacities of Apron Conveyors

                          Capacity, 50 lb/ft3 (800 kg/m3) material at 10 ft/min (0.05 m/s) speed
Width between                               Depth of load, in (mm)
skirt plates
in       mm               12 (305)       16 (406)        20 (508)        24 (610)
24       610              22 (559)       30 (762)
30       762              26 (660)       37 (940)        47 (1,194)      56 (1,422)
36       914              34 (864)       45 (1,143)      56 (1,422)      67 (1,702)
42       1,067            39 (991)       52 (1,321)      65 (1,651)      79 (2,007)

Table 10.5.7 Capacities and Weights of V-Bucket Carriers*

Buckets                                                  Capacity, tons of coal     Weight per ft of chains
Length, in   Width, in   Depth, in   Spacing, in         per hour at 100 ft/min     and buckets, lb
12           12          6           18                  29                         36
16           12          6           18                  32                         40
20           15          8           24                  43                         55
24           20          10          24                  100                        65
30           20          10          24                  126                        70
36           24          12          30                  172                        94
42           24          12          30                  200                        105
48           24          12          36                  192                        150

* Multiply in by 25.4 for mm, ton/h by 0.25 for kg/s or by 0.91 for tonnes/h.

duty of handling coal and ashes in boiler plants. They require less power than V-bucket carriers, as the material is carried and not dragged on the horizontal run. The length and height lifted are limited by the strength of the chains. The length seldom exceeds 500 ft (152 m) and the height lifted 100 ft (30 m). They can be operated on any incline and can discharge at any point on the horizontal run. The size of lumps is limited by the size of buckets. The maintenance cost is extremely low. Many carrier installations are still in operation after 40 years of service. Other applications are for hot clinker, granulated and pulverized chemicals, cement, and stone. The carrier consists of two strands of roller chain, with flanged rollers, between which are pivoted buckets, usually of malleable iron. The drive (Fig. 10.5.31) is located at a or a, the take-up at b. The material is fed to the buckets by a feeder at any point along the lower horizontal run, is elevated, and is discharged on the upper horizontal run.

the turns, the buckets swing in a larger-radius curve, automatically unlatch, and then lap correctly as they enter the straight run. The pivoted-bucket carrier requires little attention beyond periodic lubrication and adjustment of take-ups. For the dual service of coal and ash handling, its only competitor is the skip hoist.

Fig. 10.5.32 Link-Belt Peck carrier buckets.

Fig. 10.5.31 Pivoted-bucket carrier.

The tripper c, mounted on wheels so that it can be moved to the desired dumping position, engages the cams on the buckets and tips them until the material runs out. The buckets always remain vertical except when tripped. The chain rollers run on T rails on the horizontal sections and between guides on the vertical runs. Speeds range from 30 to 60 ft/min (0.15 to 0.30 m/s). After dumping, the overlapping bucket lips are in the wrong position to round the far corner; after rounding the take-up wheels, the lap is wrong for making the upturn. The Link-Belt Peck carrier eliminates this by suspending the buckets from trunnions attached to rearward cantilever extensions of the inner links (Fig. 10.5.32). As the chain rounds

Table 10.5.8 shows the capacities of pivoted-bucket carriers with materials weighing 50 lb/ft3 (800 kg/m3), with carriers operating at 40 to 50 ft/min (0.20 to 0.25 m/s), and with buckets loaded to 80 percent capacity. Bucket elevators are of two types: (1) chain-and-bucket, where the buckets are attached to one or two chains; and (2) belt-and-bucket, where the buckets are attached to canvas or rubber belts. Either type may be vertical or inclined and may have continuous or noncontinuous buckets. Bucket elevators are used to elevate any bulk material that will not adhere to the bucket. Belt-and-bucket elevators are particularly well adapted to handling abrasive materials which would produce excessive wear on chains. Chain-and-bucket elevators are frequently used with perforated buckets when handling wet material, to drain off surplus water. The length of elevators is limited by the strength of the chains or belts. They may be built up to 100 ft (30 m) long, but they average 25 to 75 ft (7.6 to 23 m). Inclined-belt elevators operate best on an angle of about 30° to the vertical. At greater angles, the sag of the return belt is excessive, as it cannot be supported by rollers between the head and foot pulleys. This applies also to single-strand chain elevators. Double-strand chain elevators, however, if provided with roller chain, can run on an angle, as both the upper and return chains are supported by rails.

Table 10.5.8 Capacities of Pivoted-Bucket Carriers with Coal or Similar Materials Weighing 50 lb/ft3 (800 kg/m3) at Speeds Noted

Bucket pitch × width          Capacity of coal              Speed
in           mm               short ton/h    tonne/h        ft/min     m/s
24 × 18      610 × 457        35–45          32–41          40–50      0.20–0.25
24 × 24      610 × 610        50–60          45–54          40–50      0.20–0.25
24 × 30      610 × 762        60–75          54–68          40–50      0.20–0.25
24 × 36      610 × 914        70–90          63–82          40–50      0.20–0.25


The size of lumps is limited by the size and spacing of the buckets and by the speed of the elevator. Continuous-bucket elevators (Fig. 10.5.33 and Table 10.5.9) usually operate at 100 ft/min (0.51 m/s) or less and are single- or double-strand. The contents of each bucket discharge over the back of the preceding bucket. For maximum capacity and a large proportion of lumps, the buckets extend rearward behind the chain runs. The elevator is then called a supercapacity elevator (Fig. 10.5.34 and Table 10.5.10).

Fig. 10.5.33 Continuous bucket.

Gravity-discharge elevators operate at 100 ft/min (0.51 m/s) or less and are double-strand, with spaced V buckets. The path may be an L, an inverted L, or a runaround in a vertical plane (Fig. 10.5.28). Along the horizontal run, the buckets function as pushers within a trough. An elevator with a tandem flight conveyor costs less. For a runaround path, the pivoted-bucket carrier requires less power and has lower maintenance costs.

Table 10.5.9 Continuous Bucket Elevators*

Bucket size, in      Max lump size, in                Capacity with 50 lb material
                     All lumps       10% lumps        at 100 ft/min, tons/h
10 × 5 × 8           3/4             2 1/2            17
10 × 7 × 12          1               3                21
12 × 7 × 12          1               3                25
14 × 7 × 12          1               3                30
14 × 8 × 12          1 1/4           4                36
16 × 8 × 12          1 1/2           4 1/2            42
18 × 8 × 12          1 1/2           4 1/2            46

* Multiply in by 25.4 for mm, lb by 0.45 for kg, tons by 0.91 for tonnes.

Table 10.5.10 Supercapacity Elevators* (Link-Belt Co.)

Bucket, in,                 Max lump size, large lumps      Capacity with 50 lb
length × width × depth      not more than 20%, in           material, tons/h
16 × 12 × 18                8                               115
20 × 12 × 18                8                               145
24 × 12 × 18                8                               175
30 × 12 × 18                8                               215
24 × 17 × 24                10                              230
36 × 17 × 24                10                              345

* Multiply in by 25.4 for mm, lb by 0.45 for kg, tons by 0.91 for tonnes.

As bucket elevators have no feed control, an interlocked feeder is desirable for a gravity flow. Some types scoop up the load as the buckets round the foot end and can take care of momentary surges by spilling the excess back into the boot. The continuous-bucket elevator, however, must be loaded after the buckets line up for the lift, i.e., when the gaps between buckets have closed. Belt-and-bucket elevators are advantageous for grain, cereals, glass batch, clay, coke breeze, sand, and other abrasives if the temperature is not high enough to scorch the belt [below 250°F (121°C) for natural rubber]. Elevator casings usually are sectional and dusttight, either of 3/16-in (4.8-mm) sheet steel or, better, of aluminum. If the elevator has considerable height, its cross section must be sufficiently large to prevent sway contact between buckets and casing. Chain guides extending the length of both runs may be provided to control sway and to prevent piling up of the element, at the boot, should the chain break. Caution: Indoor high elevators may develop considerable updraft tending to sweep up light, pulverized material. Provision to neutralize the pressure differential at the top and bottom may be essential.

Fig. 10.5.34 Supercapacity bucket.

Figure 10.5.35 shows the cast-iron boot used with centrifugal-discharge and V-bucket chain elevators and belt elevators. Figure 10.5.36 shows the general form of a belt-and-bucket elevator with structural-steel boot and casing. Elevators of this type must be run at sufficient speed to throw the discharging material clear of the bucket ahead.
Capacity Elevators are rated for capacity with the buckets 75 percent loaded. The buckets must be large enough to accommodate the lumps, even though the capacity is small.

Fig. 10.5.35 Cast-iron boot.

Power Requirements The motor horsepower for the continuous-bucket and supercapacity elevators can be approximated as

Motor hp = (2 × tons/h × lift, ft)/1,000

The motor horsepower of gravity-discharge elevators can be approximated by using the same formula for the lift and adding for the horsepower of the horizontal run the power as estimated for a flight conveyor. For a vertical runaround path, add a similar allowance for the lower horizontal run.
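A minimal sketch of the elevator power approximation above (Python); the gravity-discharge case would add a flight-conveyor allowance for the horizontal run, which is not included here, and the example numbers are illustrative only.

```python
def elevator_motor_hp(tons_per_h, lift_ft):
    """Approximate motor hp for continuous-bucket and supercapacity elevators,
    per the relation above: hp = 2 * (tons/h) * lift / 1,000."""
    return 2.0 * tons_per_h * lift_ft / 1000.0

print(elevator_motor_hp(115, 60))   # e.g. 115 tons/h lifted 60 ft -> about 13.8 hp
```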

Fig. 10.5.36 Structural-steel boot and casing.

Belt Conveyors

The belt conveyor is a heavy-duty conveyor available for transporting large tonnages over paths beyond the range of any other type of mechanical conveyor. The capacity may be several thousand tons per hour, and the distance several miles. It may be horizontal or inclined upward or downward, or it may be a combination of these, as outlined in Fig. 10.5.37. The limit of incline is reached when the material tends to slip on the belt surface. There are special belts with molded designs to assist in keeping material from slipping on inclines. They will handle pulverized, granular, or lumpy material. Special compounds are available if material is hot or oily. In its simplest form, the conveyor consists of a head or drive pulley, a take-up pulley, an endless belt, and carrying and return idlers. The spacing of the carrying idlers varies with the width and loading of the belt


and usually is 5 ft (1.5 m) or less. Return idlers are spaced on 10-ft (3.0-m) centers or slightly less with wide belts. Sealed antifriction idler bearings are used almost exclusively, with pressure-lubrication fittings requiring attention about once a year.


Fig. 10.5.39 Pneumatic-tired idlers applied to belt feeder at loading point of belt conveyor.

Fig. 10.5.37 Typical arrangements of belt conveyors.

Belts Belt width is governed by the desired conveyor capacity and maximum lump size. The standard rubber belt construction (Fig. 10.5.38) has several plies of square woven cotton duck or synthetic fabric such as rayon, nylon, or polyester cemented together with a rubber compound called "friction" and covered both top and bottom with rubber to resist abrasion and keep out moisture. Top cover thickness is

Fig. 10.5.38 Rubber-covered conveyor belt.

determined by the severity of the job and varies from 1/16 to 3/4 in (1.6 to 19 mm). The bottom cover is usually 1/16 in (1.6 mm). By placing a layer of loosely woven fabric, called the breaker strip, between the cover and outside fabric ply, it is often possible to double the adhesion of the cover to the carcass. The belt is rated according to the tension to which it may safely be subjected, and this is a function of the length and lift of the conveyor. The standard Rubber Manufacturers Association (RMA) multiple-ply ratings in lb/in (kg/mm) of width per ply are as follows:

RMA multiple ply No.                         MP35         MP43         MP50         MP60         MP70
Permissible pull, lb/in (kg/mm) of belt
  width per ply, vulcanized splice           35 (0.63)    43 (0.77)    50 (0.89)    60 (1.07)    70 (1.25)
Permissible pull, lb/in (kg/mm) of belt
  width per ply, mechanical splice           27 (0.48)    33 (0.60)    40 (0.71)    45 (0.80)    55 (0.98)
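The ply-selection arithmetic that the example below walks through can be checked directly against these ratings; a small sketch (Python), in which the dictionaries simply restate the table and the function name is mine.

```python
# Per-ply working pull implied by a given belt pull, width, and ply count,
# checked against the RMA ratings tabulated above (lb/in of width per ply).
RMA_VULCANIZED = {"MP35": 35, "MP43": 43, "MP50": 50, "MP60": 60, "MP70": 70}
RMA_MECHANICAL = {"MP35": 27, "MP43": 33, "MP50": 40, "MP60": 45, "MP70": 55}

def per_ply_pull(total_pull_lb, belt_width_in, plies):
    return total_pull_lb / (belt_width_in * plies)

# The 4,200-lb pull, 24-in width, five-ply case discussed in the text:
need = per_ply_pull(4200, 24, 5)                  # 35 lb/in per ply
print(need <= RMA_VULCANIZED["MP35"])             # True  -> five-ply MP35, vulcanized splice
print(need <= RMA_MECHANICAL["MP35"])             # False -> MP35 will not do with a mechanical splice
print(need <= RMA_MECHANICAL["MP50"])             # True  -> five-ply MP50, mechanical splice
```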

Thus, for a pull of 4,200 lb (1,905 kg) and 24-in (0.61-m) belt width, a five-ply MP35 could be used with a vulcanized splice or a five-ply MP50 could be used with a mechanical splice.
High-Strength Belts For belt conveyors of extremely great length, a greater strength per inch of belt width is available now through the use of improved weaving techniques that provide straight-warp synthetic fabric to support the tensile forces (Uniflex by Uniroyal, Inc., or Flexseal by B. F. Goodrich). Strengths go to 1,500 lb/in width tension rating. They are available in most cover and bottom combinations and have good bonding to carcass. The number of plies is reduced to two instead of as many as eight so as to give excellent flexibility. Widths to 60 in are available. Other conventional high-strength fabric belts are available in somewhat lower tensile ratings of 90 (1.61), 120 (2.14), 155 (2.77), 195 (3.48), and 240 (4.29) lb/in (kg/mm) per ply. The B. F. Goodrich Company has developed a steel-cable-reinforced belt rated 700 to 4,400 lb/in (12.5 to 78.6 kg/mm) of belt width. The belt has parallel brass-plated 7 by 19 steel airplane cables ranging from 5/32 to 3/8 in (4.0 to 9.5 mm) diameter placed on 1/2- to 3/4-in (12.7- to 19.0-mm) centers. These belts are used for long single-length conveyors and for high-lift, extremely heavy duty service, e.g., for taking ore from deep open pits, thus providing an alternative to a spiraling railway or truck route. Synthetic rubber is in use for belts. Combinations of synthetic and natural rubbers have been found satisfactory. Synthetics are superior under special circumstances, e.g., neoprene for flame resistance and resistance to petroleum-based oils, Buna N for resistance to vegetable, animal, and petroleum oils, and butyl for resistance to heat (per RMA).
Belt Life With lumpy material, the impact at the loading point may be destructive. Heavy lumps, such as ore and rock, cut through the protective cover and expose the carcass. The impact shock is reduced by making the belt supports flexible. This can be done by the use of idlers with cushion or pneumatic tires (Fig. 10.5.39) or by supporting the idlers on rubber mountings. Chuting the load vertically against the belt should be avoided. Where possible, the load should be given a movement in the direction of belt travel. When the material is a mixture of lumps and fines, the fines should be screened through to form a cushion for the lumps. Other destructive factors are belt overstressing, belts

running out of line and rubbing against supports, broken idlers, and failure to clean the belt surface thoroughly before it comes in contact with the snub and tripper pulleys. Introduction of a 180° twist in the return belt (B. F. Goodrich Co.) at both head and tail ends can be used to keep the clean side of the belt against the return idlers and to prevent buildup. Using one 180° twist causes both sides of the belt to wear evenly. For each twist, 1 ft of length/in of belt width is required.
Idler Pulleys Troughing idlers are usually of the three-pulley type (Fig. 10.5.40), with the troughing pulleys at 20°. There is a growing tendency toward the use of 35° and 45° idlers to increase the volume capacity of a belt; 35° idlers will increase the volume capacity of a given belt 25 to 35 percent over 20° idlers, and 45° idlers, 35 to 40 percent.

Fig. 10.5.40 Standard assembly of three-pulley troughing idler and return idler.

The bearings, either roller or ball type, are protected by felt or labyrinth grease seals against the infiltration of abrasive dust. A belt running out of line may be brought into alignment by shifting slightly forward one end or the other with a few idler sets. Self-aligning idlers (Fig. 10.5.41) should be spaced not more than 75 ft (23 m) apart. These call attention to the necessity of lining up the belt and should not serve as continuing correctives.


Drive Belt slip on the conveyor-drive pulley is destructive. There is little difference in tendency to slip between a bare pulley and a rubber-lagged pulley when the belt is clean and dry. A wet belt will adhere to a lagged pulley much better, especially if the lagging is grooved. Heavy-duty conveyors exposed to the possibility of wetting the belt are

downward (retarding conveyor), when the take-up is located at the downhill end. Trippers The load may be removed from the belt by a diagonal or V plow, but a tripper that snubs the belt backward is standard equipment. Trippers may be (1) stationary, (2) manually propelled by crank, or (3) propelled by power from one of the snubbing pulleys or by an independent motor. The discharge may be chuted to either side or back to the belt by a deflector plate. When the tripper must move back to the load-receiving end of the conveyor, it is usual to incline the belt for about 15 ft (4.6 m) to match the slope up to the tripper top pulley. As the lower tripper snub pulleys are in contact with the dirty side of the belt, a cleaner must be provided between the pulleys. A scraper in light contact with the face of the pulley may be advisable. Belt Slope The slopes (in degrees) given in Table 10.5.11 are the maximum permissible angles for various materials.

Fig. 10.5.41 Self-aligning idler.

generally driven by a head pulley lagged with a 1/2-in (12.7-mm) rubber belt and with 1/4- by 1/4-in (6.4- by 6.4-mm) grooves spaced 1/2 in (12.7 mm) apart and, preferably, diagonally as a herringbone gear. A snub pulley can be employed to increase the arc of contact on the head pulley, and since the pulley is in contact with the dirty side of the belt, a belt cleaner is essential. The belt cleaner may be a high-speed bristle brush, a spiral rubber wiper (resembling an elongated worm pinion), circular disks mounted slantwise on a shaft to give a wiping effect when rotated, or a scraper. Damp deposits such as clay or semifrozen coal dirt are best removed by multiple diagonal scrapers of stainless steel. A belt conveyor should be emptied after each run to avoid a heavy starting strain. The motor should have high starting torque, moderate starting-current inrush, and good characteristics when operating under full load. The double-squirrel-cage ac motor fulfills these requirements.
Heavy-Duty Belt-Conveyor Drives For extremely heavy duty, it is essential that the drive torque be built up slowly, or serious belt damage will occur. The hydraulic clutch, derived from the hydraulic automobile clutch, serves nicely. The best drive developed to date is the dynamatic clutch (Fig. 10.5.42). This has a magnetized rotor on the extended motor shaft, revolving within an iron ring keyed to the reduction gearing of the conveyor. The energizing current is automatically built up over a period that may extend to 2 min, and the increasing magnetic pull on the ring builds up the belt speed.

Table 10.5.11 Maximum Belt Slopes for Various Materials, Degrees

Coal: anthracite, sized; mined, 50 mesh and under; or mined and sized    16
Coal, bituminous, mined, run of mine                                     18
Coal: bituminous, stripping, not cleaned; or lignite                     22
Earth, as excavated, dry                                                 20
Earth, wet, containing clay                                              23
Gravel, bank run                                                         20
Gravel, dry, sharp                                                       15–17
Gravel, pebbles                                                          12
Grain, distillery, spent, dry                                            15
Sand, bank, damp                                                         20–22
Sand, bank, dry                                                          16–18
Wood chips                                                               27

SOURCE: Uniroyal, Inc.

Determination of Motor Horsepower The power required to drive a belt conveyor is the sum of the powers required (1) to move the empty belt, (2) to move the load horizontally, and (3) to lift the load if the conveyor is inclined upward. If (3) is larger than the other two, an automatic brake must be provided to hold the conveyor if the current fails. A solenoid brake is usual. The power required to move the empty belt is given by Fig. 10.5.43. The power to move 100 tons/h horizontally is given by the formula hp = 0.4 + 0.00325L, where L is the distance between centers, ft. For other capacities the horsepower is proportional.
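A sketch of that three-part sum (Python). The empty-belt horsepower must still be read from Fig. 10.5.43 and is passed in as a number; the lift term is written here as the ordinary work-rate conversion (weight flow times lift), which the text implies but does not state as a formula, so treat that term and the example numbers as assumptions.

```python
def belt_conveyor_hp(empty_belt_hp, tons_per_h, centers_ft, lift_ft=0.0):
    """Drive hp = (empty belt, from Fig. 10.5.43) + (horizontal load) + (lift).
    Horizontal term: hp = 0.4 + 0.00325*L per 100 tons/h, scaled in proportion
    to capacity (from the text).  Lift term: tons/h * 2,000 lb / 60 min * lift
    / 33,000 ft*lb/min -- an assumption, not a formula stated in the text."""
    horizontal = (0.4 + 0.00325 * centers_ft) * tons_per_h / 100.0
    lift = tons_per_h * 2000.0 / 60.0 * lift_ft / 33000.0
    return empty_belt_hp + horizontal + lift

# e.g. 300-ft centers, 250 tons/h, 25-ft rise, 3.0 hp read from the chart (illustrative):
print(belt_conveyor_hp(3.0, 250, 300, 25))
```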

Fig. 10.5.42 Operating principle of the dynamic clutch. Take-ups For short conveyors, a screw take-up is satisfactory. For long conveyors, the expansion and contraction of the belt with temperature changes and the necessity of occasional cutting and resplicing make a weighted gravity take-up preferable, especially if a vulcanized splice is used. The take-up should, if possible, be located where the slack first occurs, usually just back of the drive except in a conveyor inclined

Fig. 10.5.43 Horsepower required to move belt conveyor empty at 100 ft/min (0.51 m/s).

Table 10.5.12 Troughed Conveyor-Belt Capacities*
Tons per hour (TPH) with belt speed of 100 ft/min (0.51 m/s)

Belt shape: long-center, 3-roll idlers

Belt        20° idler angle, SCA†       35° idler angle, SCA†       45° idler angle, SCA†        CED‡
width, in   0°     10°    30°           0°     10°    30°           0°      10°     30°          constant
12          10     14     24            16     20     28            19      29      35           0.770
24          52     74     120           83     102    143           98      115     150          1.050
30          86     121    195           137    167    232           161     188     244          1.095
42          177    249    400           282    345    476           332     386     500          1.130
60          375    526    843           598    729    1,003         702     815     1,053        1.187
72          548    768    1,232         874    1,064  1,464         1,026   1,190   1,535        1.205

Belt shape: equal-length, 3-roll idlers

Belt        35° idler angle, SCA†       45° idler angle, SCA†       CED‡
width, in   0°     10°    30°           0°     10°    30°           constant
24          82     101    141           96     113    149           1.05
30          111    144    212           133    163    225           1.12
42          179    248    394           216    281    417           1.22
60          266    417    734           324    468    772           1.35
72          291    516    987           356    573    1,030         1.44

* Tons per hour (TPH) = value from table × (actual material wt., lb/ft3)/100 × (actual belt speed, ft/min)/100.
† Surcharge angle (see Fig. 10.5.44).
‡ Capacity calculated for standard distance of load from belt edge: (0.055b + 0.9), where b = belt width, inches. For constant 2-in edge distance (CED) multiply by the CED constant as given in this table. For slumping materials (very free flowing), use capacities based upon 2-in CED. This includes dry silica sand, dry aerated portland cement, wet concrete, etc., with surcharge angle 5° or less. For metric units multiply in by 25.4 for mm; tons per hour by 0.91 for tonne per hour; ft/min by 0.0051 for m/s.
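The footnote formula simply scales a table entry to the actual material weight and belt speed; a one-line helper (Python), with the numbers in the usage line invented for illustration.

```python
def belt_capacity_tph(table_value, material_wt_lb_ft3, belt_speed_fpm):
    """Scale a Table 10.5.12 entry (based on 100-lb/ft3 material at 100 ft/min)
    to the actual material weight and belt speed, per the footnote above."""
    return table_value * (material_wt_lb_ft3 / 100.0) * (belt_speed_fpm / 100.0)

# e.g. a table entry of 195, coal at 50 lb/ft3, belt running at 400 ft/min:
print(belt_capacity_tph(195, 50, 400))   # 390 tons/h
```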


Table 10.5.13 Minimum Belt Width for Lumps

Belt width            Sized material        Unsized material
in       mm           in       mm           in       mm
12       305          2        51           4        102
18       457          4        102          6        152
24       610          5        127          8        203
30       762          6        152          10       254
42       1,067        8        203          14       356
60       1,524        12       305          20       508
72       1,829        14       356          24       610

SOURCE: Uniroyal, Inc.

Table 10.5.14 Maximum Belt Speeds, ft/min, for Various Materials*

Width of       Light or free-flowing        Moderately free-flowing      Lump coal, coarse       Heavy sharp lumpy
belt, in       materials, grains,           sand, gravel,                stone, crushed ore      materials, heavy
               dry sand, etc.               fine stone, etc.                                     ores, lump coke
12–14          400                          250
16–18          500                          300                          250
20–24          600                          400                          350                     250
30–36          750                          500                          400                     300
42–60          850                          550                          450                     350

* Multiply in by 25.4 for mm, ft/min by 0.005 for m/s.

The capacity in tons per hour for materials of various weights per cubic foot is given by Table 10.5.12. Table 10.5.13 gives minimum belt widths for lumps of various sizes. Table 10.5.14 gives advisable maximum belt speeds for various belt widths and materials. The speed should ensure full-capacity loading so that the entire belt width is utilized.
Drive Calculations From the standpoint of the application of power to the belt, a conveyor is identical with a power belt. The determining factors are the coefficient of friction between the drive pulley and the belt, the tension in the belt, and the arc of contact between the pulley and the belt. The arc of contact is increased up to about 240° by using a


Fig. 10.5.44 (a) Troughed belt; (b) flat belt. (Uniroyal, Inc.)

snub pulley and up to 410° by using two pulleys geared together or driven by separate motors and having the belt wrapped around them in the form of a letter S. The resistance to be overcome is the sum of all the frictional resistances throughout the length of the conveyor plus, in the case of a rising conveyor, the resistance due to lifting the load. The sum of the conveyor and load resistances determines the working pull that has to be transmitted to the belt at the drive pulley. The total pull is increased by the slack-side tension necessary to keep the belt from slipping on the pulley. Other factors adding to the belt pull are the component of the weight of the belt if the conveyor is inclined and a take-up pull to keep the belt from sagging between the idlers at the loading point. These, however, do not add to the working pull. The maximum belt pull determines the length of conveyor that can be used. If part of the conveyor runs downgrade, the load on it will reduce the working pull. In moderate-length conveyors, stresses due to acceleration or deceleration are safely carried by the factor of safety used for belt-life calculations. The total or maximum tension Tmax must be known to specify a suitable belt. The effective tension Te is the difference between tight-side tension and slack-side tension, or Te = T1 - T2. The coefficient of friction between rubber and steel is 0.25; with a lagged pulley, between rubber and rubber, it is 0.55 for ideal conditions but should be taken as 0.35 to allow for loss due to dirty conditions. Except for extremely heavy belt pulls, the tandem drive is seldom used since it is costly; the lagged-and-grooved drive pulley is used for most industrial installations.

For a belt with 6,000-lb (26,700-N) max tension running on a bare pulley drive with 180° wrap (Table 10.5.15), Tmax = 1.85Te = 6,000 lb; Te = 3,200 lb (14,200 N). Such a belt, 30 in wide, might be a five-ply MP50, a reduced-ply belt rated at 200 lb/in, or a steel-cable belt with 5/32-in (4-mm) cables spaced on 0.650-in (16.5-mm) centers. The last is the most costly.

Table 10.5.15 Ratio T1 to Te for Various Arcs of Contact with Bare Pulleys and Lagged Pulleys
(Coefficients of friction 0.25 and 0.35, respectively)

Belt wrap, deg     Bare pulley     Lagged pulley
180                1.85            1.50
200                1.72            1.42
210                1.67            1.38
215                1.64            1.36
220                1.62            1.35
240                1.54            1.30
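The tabulated ratios follow from the ordinary belt-friction (capstan) relation T1/T2 = e^(μθ) rewritten in terms of the effective pull Te = T1 - T2; a short check (Python) closely reproduces Table 10.5.15 from the stated coefficients (0.25 bare, 0.35 lagged). The derivation is standard belt theory rather than something stated explicitly in the text.

```python
from math import exp, radians

def tension_ratio_T1_over_Te(wrap_deg, mu):
    """T1/Te for a belt drive: with T1/T2 = exp(mu*theta) and Te = T1 - T2,
    T1/Te = exp(mu*theta) / (exp(mu*theta) - 1)."""
    k = exp(mu * radians(wrap_deg))
    return k / (k - 1.0)

for wrap in (180, 200, 210, 215, 220, 240):
    print(wrap,
          round(tension_ratio_T1_over_Te(wrap, 0.25), 2),   # bare pulley
          round(tension_ratio_T1_over_Te(wrap, 0.35), 2))   # lagged pulley
```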

In an inclined belt with single pulley drive, the Tmax is lowest if the drive is at the head end and increases as the drive shifts toward the foot end. Allowance for Tripper The belt lifts about 5 ft to pass over the top snub pulley of the tripper. Allowance should be made for this lift in determining the power requirement of the conveyor. If the tripper is propelled by the belt, an allowance of 1 hp (0.75 kW) for a 16-in (406-mm) belt, 3 hp (2.2 kW) for a 36-in (914-mm) belt, or 7 hp (5.2 kW) for a 60-in (1,524-mm) belt is ample. If a rotary cleaning brush is driven from one of the snub shafts, an allowance should be made which is approximately the same as that for the propulsion of the tripper. Magnetic pulleys are frequently used as head pulleys on belt conveyors to remove tramp iron, such as stray nuts or bolts, before crushing; to concentrate magnetic ores, such as magnetic or nickeliferous pyrrhotite, from nonmagnetic material; and to reclaim iron from foundry refuse. A chute or hopper automatically receives the extracted material as it is drawn down through the other nonmagnetic material, drawn around the end pulley on the belt, and finally released as the belt leaves the pulley. Light-duty permanent-magnet types [for pulleys 12 to 24 in (305 to 610 mm) in diameter] will separate material through a 2in (51-mm) layer on the belt. Heavy-duty permanent-magnet units (12 to 24 in in diameter) will separate material if the belt carries over 2 in of material or if the magnetic content is very fine. Even larger units are available for special applications. So effective and powerful are the permanent-magnet types that electromagnetic pulleys are available only in the larger sizes, from 18 to 48 in in diameter. The permanent-magnet type requires no slip rings, external power, or upkeep. Overhead magnetic separators (Dings), both electromagnetic and Ceramox permanent-magnet types, for suspension above a belt conveyor are also available for all commercial belt widths to pull magnetic material from burden as thick as 40 in and at belt speeds to 750 ft/min. These may or may not be equipped with a separately encompassing belt to be


self-cleaning. Wattages vary from 1,600 to 17,000. The permanent-magnet type requires no electric power and has nonvarying magnet strength. An alternate type of belt protection is to use a Ferro Guard Detector (Dings) to stop belt motion if iron is detected. Trippers of the fixed or movable type are used for discharging material between the ends of a belt conveyor. A self-propelling tripper consists of two pulleys, over which the belt passes, the material being discharged into the chute as the belt bends around the upper pulley. The pulleys are mounted on a frame carried by four wheels and power-driven. A lever on the frame and stops alongside the rails enable the tripper, taking power from the conveyor belt, to move automatically between the stops, thus distributing the material. Rail clamps are provided to hold the tripper in a fixed position when discharging. Motor-driven trippers are used when it is desirable to move the tripper independently of the conveyor belt. Fixed trippers have their pulleys mounted on the conveyor framework instead of on a movable carriage. Shuttle conveyors are frequently used in place of trippers for distributing materials. They consist of a reversible belt conveyor mounted upon a movable frame and discharging over either end. Belt-Conveyor Arrangements Figure 10.5.37 shows typical arrangements of belt conveyors. a is a level conveyor receiving material at one end and discharging at the other. b shows a level conveyor with traveling tripper. The receiving end of the conveyor is depressed so that the belt will not be lifted against the chute when the tripper is at its extreme loading end. c is a level conveyor with fixed trippers. d shows an inclined end combined with a level section. e is a combination of level conveyor, vertical curve, and horizontal section. The radius of the vertical curve depends upon the weight of the belt and the tension in the belt at the point of tangency. This must be figured in each case and is found by the formula: min radius, ft  belt tension at lowest point of curve divided by weight per ft of belt. The belt weight should be for the worn belt with not over 1⁄16-in (1.6-mm) top cover. f is a combination of level conveyor receiving material from a bin, a fixed dump, and inclined section, and a series of fixed trippers. Portable conveyors are widely used around retail coal yards, small power plants, and at coal mines for storing coal and reclaiming it for loading into trucks or cars. They are also used for handling other bulk materials. They consist of a short section of chain or belt conveyors mounted on large wheels or crawler treads and powered with a gasoline engine or electric motor. They vary in length from 20 to 90 ft and can handle up to 250 tons/h (63 kg/s) of coal. For capacities greater than what two people can shovel onto a belt, some form of power loader is necessary. Sectional-belt conveyors have come into wide use in coal mines for bringing the coal from the face and loading it into cars in the entry. They consist of short sections (6 ft or more) of light frame of special low-type construction. The sections are designed for ease of connecting and disconnecting for transfer from one part of the mine to another. They are built in lengths up to 1,000 ft (305 m) or more under favorable conditions and can handle 125 tons/h (32 kg/s) of coal. Sliding-belt conveyors use belts sliding on decks instead of troughed belts carried on rollers. 
Sliding belts are used in the shipping rooms of department stores for handling miscellaneous parcels, in post offices for handling mail bags, in chemical plants for miscellaneous light waste, etc. The decking preferably is of maple strips. If of steel, the deck should be perforated at intervals to relieve the vacuum effect between the bottom of the belt and the deck. Cotton or balata belts are best. The speed should be low. The return run may be carried on 4-in straight-face idlers. The power requirement is greater than with idler rollers. The oscillating conveyor is a horizontal trough varying in width from 12 to 48 in (305 to 1,219 mm), mounted on rearward-inclined cantilever supports, and driven from an eccentric at 400 to 500 r/min. The effect is to “bounce” the material along at about 50 ft/min (0.25 m/s) with minimum wear on the trough. The conveyor is adapted to abrasive or hot fragmentary materials, such as scrap metals, castings, or metal chips. The trough bottom may be a screen plate to cull fine material, as when cleaning sand from castings, or the trough may have louvers and a ventilating hood to cool the moving material. These oscillating
conveyors may have unit lengths up to 100 ft (30 m). Capacities range from a few tons to 100 tons/h (25 kg/s) with high efficiency and low maintenance.

Feeders

When material is drawn from a hopper or bin to a conveyor, an automatic feeder should be used (unless the material is dry and free-running, e.g., grain). The satisfactory operation of any conveyor depends on the material being fed to it in an even and continuous stream. The automatic feeder not only ensures a constant and controlled feed, irrespective of the size of material, but saves the expense of an operator who would otherwise be required at the feeding point. Figure 10.5.45 shows a reciprocating-plate feeder, consisting of a plate mounted on four wheels and forming the bottom of the hopper. When the plate is moved forward, it carries the material with it; when moved back, the plate is withdrawn from under the material, allowing it to fall into the chute. The plate is moved by connecting rods from cranks or eccentrics. The capacity of this feeder is determined by the length and number of strokes, width of plate, and location of the adjustable gate. The number of strokes should not exceed 60 to 70 per min. When used under a track hopper, the material remaining on the plate may freeze in winter, as this type of feeder is not self-clearing.
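The relation between these variables can be illustrated with a rough volumetric estimate. The short Python sketch below is not a handbook formula; the 85 percent fill factor, the function name, and the example figures are assumptions chosen only to show how stroke, speed, plate width, gate opening, and bulk density combine.

def plate_feeder_capacity(stroke_in, strokes_per_min, plate_width_in,
                          gate_opening_in, bulk_density_lb_per_ft3,
                          fill_factor=0.85):
    # Volume swept per stroke, ft^3 (1,728 in^3 = 1 ft^3); fill_factor is an
    # assumed allowance for incomplete filling, not a handbook value.
    swept_ft3 = stroke_in * plate_width_in * gate_opening_in / 1728.0
    ft3_per_h = swept_ft3 * strokes_per_min * 60.0 * fill_factor
    tons_per_h = ft3_per_h * bulk_density_lb_per_ft3 / 2000.0
    return ft3_per_h, tons_per_h

# Example (hypothetical): 6-in stroke, 60 strokes/min, 30-in plate, 4-in gate,
# material at 50 lb/ft^3 -> about 1,275 ft^3/h, or roughly 32 tons/h
print(plate_feeder_capacity(6, 60, 30, 4, 50))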

Fig. 10.5.45 Reciprocating-plate feeder.

Vibrating Feeders

The vibrating feeder consists of a plate inclined downward slightly and vibrated (1) by a high-speed unbalanced pulley, (2) by electromagnetic vibrations from one or more solenoids, as in the Jeffrey Manufacturing Co. feeder, or (3) by the slower pulsations secured by mounting the plate on rearward-inclined leaf springs. The electric vibrating feeder (Fig. 10.5.46) operates magnetically with a large number of short strokes (7,200 per min from an alternating current in the small sizes and 3,600 from a pulsating direct current in the larger sizes). It is built to feed from a few pounds per minute to 1,250 tons/h (315 kg/s) and will handle any material that does not adhere to the pan. It is self-cleaning, instantaneously adjustable for capacity, and controllable from any point near or remote. It is usually supported from above with spring shock absorbers a in each hanger, but it may be supported from below with similar springs in the supports. A modified form can be set to feed a weighed constant amount hourly for process control.

Fig. 10.5.46 Electric vibrating feeder.

Roller Conveyors

The principle involved in gravity roller conveyors is the control of motion due to gravity by interposing an antifriction trackage set at a definite grade. Roller conveyors are used in the movement of all sorts of package goods with smooth surfaces which are sufficiently rigid to prevent sagging between rollers—in warehouses, brickyards, building-supply
yards, department stores, post offices, and the manufacturing and shipping departments of industrial manufacturers. The rollers vary in diameter and strength from 1 in, with a capacity of 5 lb (2.3 kg) per roller, up to 4 in (102 mm), with a capacity of 1,800 lb (816 kg) per roller. The heavier rollers are generally used in foundries and steel mills for moving large molds, castings, or stacks of sheet steel. The small roller is used for handling small, light objects. The spacing of the rollers in the frames varies with the size and weight of the objects to be moved. Three rollers should be in contact with the package to prevent hobbling. The grade of fall required to move the object varies from 1 1⁄2 to 7 percent, depending on the weight and character of the material in contact with the rollers. Figure 10.5.47 shows a typical cross section of a roller conveyor. Curved sections are similar in construction to straight sections, except that in the majority of cases multiple rollers (Fig. 10.5.48) are used to keep the package properly lined up and in the center of the conveyor.
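Two of the rules just stated (keep at least three rollers under the package, and set the grade within the range given) lend themselves to a quick check. The Python sketch below is illustrative only; the function names and the 24-in carton are assumed for the example.

def max_roller_pitch(package_length_in, rollers_in_contact=3):
    # Largest roller spacing that keeps the stated number of rollers
    # under the package at all times.
    return package_length_in / rollers_in_contact

def fall_per_run(grade_percent, run_ft=10):
    # Drop, in inches, over a given run of gravity roller conveyor.
    return run_ft * 12 * grade_percent / 100.0

# Example (hypothetical): a 24-in carton needs rollers no more than 8 in apart;
# a 4 percent grade gives 4.8 in of fall in each 10 ft of conveyor.
print(max_roller_pitch(24), fall_per_run(4))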

Fig. 10.5.47 Gravity roller conveyor.

Fig. 10.5.48 Multiple-roller conveyor.

Figure 10.5.49 illustrates a wheel conveyor, used for handling bundles of shingles, fruit boxes, bundles of fiber cartons, and large, light cases. The wheels are of ball-bearing type, bolted to flat-bar or angle-frame rails.

Fig. 10.5.49 Wheel-type conveyor.

When an installation involves a trunk line with several tributary runs, a simple two-arm deflector at each junction point holds back the item on one run until the item on the other has cleared. Power-operated roller conveyors permit handling up an incline. Usually the rolls are driven by sprockets on the spindle ends. An alternative of a smooth deck and pusher flights should be considered as costing less and permitting steeper inclines. Platform conveyors are single- or double-strand conveyors (Fig. 10.5.50) with plates of steel or hardwood forming a continuous platform on which the loads are placed. They are adapted to handling heavy drums or barrels and miscellaneous freight.

Fig. 10.5.50 Double-strand platform conveyor.

Pneumatic Conveyors

The pneumatic conveyor transports dry, free-flowing, granular material in suspension within a pipe or duct by means of a high-velocity airstream or by the energy of expanding compressed air within a comparatively dense column of fluidized or aerated material. Principal uses are (1) dust collection; (2) conveying soft materials, such as grain, dry foodstuff (flour and feeds), chemicals (soda ash, lime, salt cake), wood chips, carbon black, and sawdust; (3) conveying hard materials, such as fly ash, cement, silica metallic ores, and phosphate. The need in processing of bulk-transporting plastic pellets, powders, and flour under
contamination-free conditions has increased the use of pneumatic conveying. Dust Collection All pipes should be as straight and short as possible, and bends, if necessary, should have a radius of at least three diameters of the pipe. Pipes should be proportioned to keep down friction losses and yet maintain the air velocities that will prevent settling of the material. Frequent cleanout openings must be provided. Branch connections should go into the sides of the main and deliver the incoming stream as nearly as possible in the direction of flow of the main stream. Sudden changes in diameter should be avoided to prevent eddy losses. When vertical runs are short in proportion to the horizontal runs, the size of the riser is locally restricted, thereby increasing the air velocity and producing sufficient lifting power to elevate the material. If the vertical pipes are comparatively long, they are not restricted, but the necessary lifting power is secured by increased velocity and suction throughout the entire system. The area of the main at any point should be 20 to 25 percent in excess of the sums of the branches entering it between the point in question and the dead end of the main. Floor sweepers, if equipped with efficient blast gates, need not be included in computing the main area. The diameter of the connecting pipe from machine to main and the section required at each hood are determined by experience. The sum of the volumes of each branch gives the total volume to be handled by the fan. Fan Suction The maintained resistance at the fan is composed of (1) suctions of the various hoods, which must be chosen from experience, (2) collector loss, and (3) loss due to pipe friction. The pipe loss for any machine is the sum of the losses in the corresponding branch and in the main from that branch to the fan. For each elbow, add a length equal to 10 diameters of straight pipe. The total loss in the system, or static pressure required at the fan, is equal to the sum of (1), (2), and (3). For conveying soft materials, a fan is used to create a suction. The suspended material is collected at the terminal point by a separator upstream from the fan. The material may be moved from one location to another or may be unloaded from barge or rail car. Required conveying velocity ranges from 2,000 ft/min (10.2 m/s) for light materials, such as sawdust, to 3,000 to 4,000 ft/min (15.2 to 20.3 m/s) for mediumweight materials, such as grain. Since abrasion is no problem, steel pipe or galvanized-metal ducts are satisfactory. Unnecessary bends and fittings should be avoided to minimize power consumption. For conveying hard materials, a water-jet exhauster or steam exhauster is used on suction systems, and a positive-displacement blower on pressure systems. A mechanical exhauster may also be used on suction systems if there is a bag filter or air washer ahead of the exhauster. The largest tonnage of hard material handled is fly ash. A single coal-fired, steam-electric plant may collect more than 1,000 tons (907 tonnes) of fly ash per day. Fly ash can be conveyed several miles pneumatically at 30 tons (27 tonnes) or more per hour using a pressure conveyor. Another high-tonnage material conveyed pneumatically is cement. Individual transfer conveyors may handle several hundred tons per hour. Hard materials are usually also heavy and abrasive. Required conveying velocities vary from 4,000 to 5,000 ft/min (20.3 to 25.4 m/s). Heavy cast-iron or alloy pipe and fittings are required to prevent excessive wear. 
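The static-pressure bookkeeping described above, hood suction plus collector loss plus pipe friction, with each elbow counted as 10 diameters of straight pipe, can be sketched as follows. This is a minimal Python illustration; every numerical value in the example is hypothetical, and actual friction rates would be taken from duct-friction data for the velocity and pipe size in use.

def equivalent_length_ft(straight_ft, elbows, pipe_diameter_in):
    # Equivalent straight-pipe length, counting each elbow as 10 diameters.
    return straight_ft + elbows * 10 * pipe_diameter_in / 12.0

def fan_static_pressure(hood_suction_inwg, collector_loss_inwg,
                        friction_inwg_per_100ft, straight_ft, elbows,
                        pipe_diameter_in):
    # Sum of (1) hood suction, (2) collector loss, and (3) pipe friction.
    length = equivalent_length_ft(straight_ft, elbows, pipe_diameter_in)
    friction = friction_inwg_per_100ft * length / 100.0
    return hood_suction_inwg + collector_loss_inwg + friction

# Hypothetical branch: 1.0 inWG hood suction, 2.0 inWG collector loss,
# 2.5 inWG friction per 100 ft, 80 ft of 6-in pipe with 4 elbows -> 5.5 inWG
print(fan_static_pressure(1.0, 2.0, 2.5, 80, 4, 6))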
Vacuum pneumatic ash-handling systems have the airflow induced by steam- or water-jet exhausters, or by mechanical blowers. Cyclone-type Nuveyor receivers collect the ash for storage in a dry silo. A typical Nuveyor system is shown in Fig. 10.5.51. The conveying velocity is dependent upon material handled in the system. Fly ash is handled at approximately 3,800 ft/min (19.3 m/s) and capacity up to 60 tons/h (15.1 kg/s). Positive-pressure pneumatic ash systems are becoming more common because of higher capacities required. These systems can convey fly ash up to 1 1⁄2 mi (2.4 km) and capacities of 100 tons/h (25.2 kg/s) for shorter systems. The power requirement for pneumatic conveyors is much greater than for a mechanical conveyor of equal capacity, but the duct can be led along practically any path. There are no moving parts and no risk of injury to the attendant. The vacuum-cleaner action provides dustless
Fig. 10.5.51 Nuveyor pneumatic ash-handling system. (United Conveyor Corp.)

operation, sometimes important when pulverized material is unloaded from boxcars through flexible hose and nozzle. A few materials build up a static-electric charge which may introduce an explosion risk. Sulfur is an outstanding example. Sticky materials tend to pack at the elbows and are unsuitable for pneumatic handling. The performance of a pneumatic conveyor cannot be predicted with the accuracy usual with the various types of mechanical conveyors and elevators. It is necessary to rely on the advice of experienced engineers. The Fuller-Kinyon system for transporting dry pulverized material consists of a motor- or engine-driven pump, a source of compressed air for fluidizing the material, a conduit or pipeline, distributing valves (operated manually, electropneumatically, or by motor), and electric bin-level indicators (high-level, low-level, or both). The impeller is a specially designed differential-pitch screw normally turning at 1,200 r/min. The material enters the feed hopper and is compressed in the decreasing pitch of the screw flights. At the discharge end of the screw, the mass is introduced through a check valve to a mixing chamber, where it is aerated by the introduction of compressed air. The fluidized material is conveyed in the transport line by the continuing action of the impeller screw and the energy of expanding air. Practical distance of transportation by the system depends upon the material to be handled. Cement has been handled in this manner for distances up to a mile. The most important field of application is the handling of portland cement. For this material, the Fuller-Kinyon pump is used for such operations as moving both raw material and finished cement within the cement-manufacturing process; loading out; unloading ships, barges, and railway cars; and transferring from storage to mixer plant on large construction jobs. The Airslide (registered trademark of the Fuller Company) conveyor system is an air-activated gravity-type conveyor using low-pressure air to aerate or fluidize pulverized material to a degree which will permit it to flow on a slight incline by the force of gravity. The conveyor comprises an enclosed trough, the bottom of which has an inclined air-permeable surface. Beneath this surface is a plenum chamber through which air is introduced at low pressure, depending upon the application. Various control devices for controlling and diverting material flow and for controlling air supply may be provided as part of complete systems. For normal conveying applications, the air is supplied by an appropriate fan; for operation under a head of material (as in a storage bin), the air is supplied by a rotary positive blower. The Airslide conveyor is widely used for horizontal conveying, discharge of storage bins, and special railway-car and truck-trailer transport, as well as in stationary blow-tank-type conveying systems. An important feature of this conveyor is low power requirement.

Hydraulic Conveyors

Hydraulic conveyors are used for handling boiler-plant ash or slag from an ash hopper or slag tank located under the furnace. The material is flushed from the hopper to a grinder, which discharges to a jet pump or a mechanical pump for conveying to a disposal area or a dewatering bin (Fig. 10.5.52). Water requirements average 1 gal/lb of ash.
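Using the average figure of 1 gal of water per pound of ash, the sluicing-water demand follows directly from the ash rate. The short Python calculation below is only an arithmetic illustration of that rule of thumb.

def sluice_water_gpm(ash_tons_per_h, gal_per_lb=1.0):
    # Sluicing-water flow, gal/min, at the stated gal-per-lb ratio.
    return ash_tons_per_h * 2000.0 * gal_per_lb / 60.0

# Example: 12 tons/h of ash at 1 gal/lb requires about 400 gal/min
print(sluice_water_gpm(12))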

Fig. 10.5.52 Ash-sluicing system with jet pump. (United Conveyor Corp.)

CHANGING DIRECTION OF MATERIALS ON A CONVEYOR

Some material in transit can be made to change direction by being bent around a curve. Metal being rolled, wire, strip, material on webs, and the like, can be guided to new directions by channels, rollers, wheels, pulleys, etc. Some conveyors can be given curvatures, such as in overhead monorails, tow car grooves, tracks, pneumatic tube bends, etc. Other, straight sections of conveyors can be joined by curved sections to accomplish directional changes. Figures 10.5.53 to 10.5.56 show some examples of these. In some cases, the simple intersection, or overlapping, at an angle of two conveyors can result in a direction change. See Figs. 10.5.57 and 10.5.58. When not all of the material on the first section of conveyor is to be diverted, sweeps or switches may be used. Figure 10.5.59 shows how fixed diverters set at decreasing heights can direct boxes of certain
sizes onto other conveyors. Switches allow the material to be sent in one direction at one time and in another at other times. In Fig. 10.5.60 the diverter can swing to one side or the other. In Fig. 10.5.61, the switching section has a dual set of wheels, with whichever set is raised at the time carrying the material in its angled direction. Pushers, as shown in Fig. 10.5.62, can selectively divert material, as can air blasts for lightweight items (Fig. 10.5.63) and tipping sections (Fig. 10.5.64).

Fig. 10.5.59 Fixed diverters. Diverters do not move. They are preset at heights to catch and divert items of given heights.

Fig. 10.5.53 Fixed curve turn section. Joins two conveyors and changes direction and level of material flow.

Fig. 10.5.60 Moving diverter. Material on incoming conveyor section can be sent to either outgoing section by pivoting the diverter to one side or the other.

Fig. 10.5.54 Fixed curve turn section. Joins two conveyors and changes direction of material flow. Turn section has no power. Incoming items push those ahead of them around curve. Wheels, balls, rollers, etc. may be added to turn section to reduce friction of dead plate section shown.

Fig. 10.5.61 Wheel switch. Fixed wheels carry material on incoming conveyor to outgoing conveyor until movable set of wheels is raised above the fixed set. Then material is carried in the other direction.

Fig. 10.5.55 Fixed curve turn section. Uses tapered rollers, skate wheels, balls, or belt. May be level or inclined, “dead” or powered.

Fig. 10.5.56 Fixed curve turn section. Disk receives item from incoming conveyor section, rotates it, and directs it onto outgoing conveyor section.

Fig. 10.5.57 Simple intersection. Incoming conveyor section overlaps outgoing section and merely dumps material onto lower conveyor.

Fig. 10.5.58 Simple intersection. Both incoming and outgoing sections are powered. Rotating post serves as a guide.

Fig. 10.5.62 Pusher diverter. When triggered, the pusher advances and moves item onto another conveyor. A movable sweep can also be used.

Fig. 10.5.63 Air diverter. When activated, a jet of air pushes a lightweight item onto another conveyor.

Fig. 10.5.64 Tilting section. Sections of main conveyor move along, level, until activated to tilt at chosen location to dump material onto another conveyor.

10.6 AUTOMATIC GUIDED VEHICLES AND ROBOTS
by Vincent M. Altamuro

AUTOMATIC GUIDED VEHICLES

Driverless towing tractors guided by wires embedded into or affixed onto the floor have been available since the early 1950s. Currently, the addition of computer controls, sensors that can monitor remote conditions, real-time feedback, switching capabilities, and a whole new family of vehicles have created automatic guided vehicle systems (AGVs) that compete with industrial trucks and conveyors as material handling devices. Most AGVs have only horizontal motion capabilities. Any vertical motion is limited. Fork lift trucks usually have more vertical motion capabilities than do standard AGVs. Automatic storage and retrieval systems (AS/RS) usually have very high rise vertical capabilities, with horizontal motions limited only to moving to and down the proper aisle. Power for the automatic guided vehicle is usually a battery, like that of the electric industrial truck. Guidance may be provided several ways. In electrical or magnetic guidance, a wire network is usually embedded in a narrow, shallow slot cut in the floor along the possible paths. The electromagnetic field emitted by the conductor wire is sensed by devices on-board the vehicle which activate a steering mechanism, causing the vehicle to follow the wire. An optical guidance system has a similar steering mechanism, but the path is detected by a photosensor picking out the bright path (paint, tape, or fluorescent chemicals) previously laid down. The embedded wire system seems more popular in factories, as it is in a protective groove, whereas the optical tape or painted line can get dirty and/or damaged. In offices, where the AGV may be used to pick up and deliver mail, the optical system may be preferred, as it is less expensive to install and less likely to deteriorate in such an environment. However, in carpeted areas, a wire can easily be laid under the carpet and operate invisibly. That wire can be relocated if the desired path of the AGV changes. In a laser beam guidance system, a laser scanner on the vehicle transmits and receives back an infrared laser beam that reads passive retroreflective targets that are placed at strategic points on x and y coordinates in the facility. The vehicle’s on-board computers take the locations and distances of the targets and calculate the vehicle’s position by triangulation. The locations of the loads to be picked up, the destinations at which they are to be dropped off, and the paths the vehicle is to travel are transmitted from the system’s base station. Instructions are converted to inputs to the vehicle’s steering and driving motors. A variation of the system is for the laser scanner to read bar codes or radio-frequency identification (RFID) targets to get more information regarding their mission than merely their current location. Other, less used, guidance methods are infrared, whereby line-of-sight signals are sent to the vehicle, and dead reckoning, whereby a vehicle is programmed to traverse a certain path and then turned loose. The directions that an AGV can travel may be classified as unidirectional (one way), bidirectional (forward or backward along its path), or omnidirectional (all directions). Omnidirectional AGVs with five or more on-board microprocessors and a multitude of sensors are sometimes called self-guided vehicles (SGVs). Automatic guided vehicles require smooth and level floors in order to operate properly. They can be weatherized so as to be able to run outdoors between buildings but there, too, the surface they travel on must be smooth and level. 
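The triangulation step can be illustrated with a small least-squares position fix from bearings to known retroreflective targets. This is a simplified Python sketch, not any vendor's algorithm; it assumes the bearings are absolute (the vehicle heading is known), and the target coordinates, bearings, and function names are invented for the example.

import math

def triangulate(targets, bearings_deg):
    # Least-squares fix from absolute bearings (deg, from the +x axis) to
    # known target coordinates (x, y).  Each bearing b to target (xt, yt)
    # constrains the vehicle to the line  sin(b)*x - cos(b)*y = xt*sin(b) - yt*cos(b).
    a11 = a12 = a22 = r1 = r2 = 0.0
    for (xt, yt), b in zip(targets, bearings_deg):
        s, c = math.sin(math.radians(b)), math.cos(math.radians(b))
        rhs = xt * s - yt * c
        # accumulate the 2 x 2 normal equations
        a11 += s * s
        a12 += -s * c
        a22 += c * c
        r1 += s * rhs
        r2 += -c * rhs
    det = a11 * a22 - a12 * a12
    return ((a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det)

# Example: three wall targets and the bearings a vehicle at (4, 3) would measure
targets = [(0.0, 10.0), (10.0, 10.0), (10.0, 0.0)]
bearings = [math.degrees(math.atan2(ty - 3, tx - 4)) for tx, ty in targets]
print(triangulate(targets, bearings))   # approximately (4.0, 3.0)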
AGV equipment can be categorized as:
1. Driverless tractors
2. Guided pallet trucks
3. Unit load transporters and platform carriers
4. Assembly or tool bed robot transporters

Driverless tractors can be used to tow a series of powerless material handling carts, like a locomotive pulls a train. They can be routed, stopped, coupled, uncoupled, and restarted either manually, by a programmed sequence, or from a central control computer. They are suited to low-volume, heavy or irregularly shaped loads which have to be moved over longer distances than would be economical for a conveyor. Guided pallet trucks, like the conventional manually operated trucks they replace, are available in a wide range of sizes and configurations. In operation, they usually are loaded under the control of a person, who then drives them over the guide wire, programs in their desired destination, then switches them to automatic. At their destination, they can turn off the main guidepath onto a siding, automatically drop off their load, then continue back onto the main guidepath. The use of guided pallet trucks reduces the need for conventional manually operated trucks and their operators. Unit load transporters and platform carriers are designed so as to carry their loads directly on their flat or specially contoured surface, rather than on forks or on carts towed behind. They can either carry material or work-in-process from workstation to workstation or they can be workstations themselves and process the material while they transport it. The assembly or tool bed type of AGV is used to carry either work-inprocess or tooling to machines. It may also be used to carry equipment for an entire process step—a machine plus its tooling—to large, heavy, or immovable products. Robot transporters are used to make robots mobile. A robot can be fitted atop the transporter and carried to the work. Further, the robot can process the work as the transporter carries it along to the next station, thereby combining productive work with material handling and transportation. Most AGVs and SGVs have several safety devices, including flashing in-motion lights, infrared scanners to slow them down when approaching an obstacle, sound warnings and alarms, stop-on-impact bumpers, speed regulators, and the like.

ROBOTS

A robot is a machine constructed as an assemblage of links joined so that they can be articulated into desired positions by a reprogrammable controller and precision actuators to perform a variety of tasks. Robots range from simple devices to very complex and “intelligent” systems by virtue of added sensors, computers, and special features. See Figure 10.6.1 for the possible components of a robotic system. Robots, being programmable multijointed machines, fall between humans and fixed-purpose machines in their utility. In general, they are most useful in replacing humans in jobs that are dangerous, dirty, dull, or demeaning, and for doing things beyond human capabilities. They are usually better than conventional “hard” automation in short-run jobs and those requiring different tasks on a variety of products. They are ideal for operations calling for high consistency, cycle after cycle, yet are changeable when necessary. In contrast to “fixed” machines, they are “flexible,” meaning that they are reprogrammable. There are several hundred types and models of robots. They are available in a wide range of shapes, sizes, speeds, load capacities, and other characteristics. Care must be taken to select a robot to match the requirements of the tasks to be done. One way to classify them is by their intended application. In general, there are industrial, commercial, laboratory, mobile, military, security, medical, service, hobby, home, and personal robots. While they are programmable to do a wide variety of
tasks, most are limited to one or a few categories of capabilities, based on their construction and specifications. Thus, within the classification of industrial robots, there are those that can paint but not assemble products, and those that can assemble products; some specialize in assembling very small electronic components, while others make automobiles.

Rather than have a robot perform only one task, it is sometimes possible to select a model and its tooling so that it can be positioned to do several jobs. Figure 10.6.2 shows a robot set in a work cell so that it can service the incoming material, part feeders, several machines, inspection gages, reject bins, and outgoing product conveyors arranged in an economical cluster around it. Robots are also used in chemical mixing and measuring; bomb disassembly; agriculture; logging; fisheries; department stores; amusement parks; schools; prisons; hospitals; nursing homes; nuclear power plants; warfare; space; underwater; surveillance; police, fire, and sanitation departments; inside the body; and an increasing number of innovative places.

Robot Anatomy

Fig. 10.6.1 Robotic system schematic. (Robotics Research Division, VMA, Inc.)

Some of the common uses of industrial robots include: loading and unloading machines, transferring work from one machine to another, assembling, cutting, drilling, grinding, deburring, welding, gluing, painting, inspecting, testing, packing, palletizing, and many others.

Fig. 10.6.2 Robot work cell. Robot is positioned to service a number of machines clustered about it.

All robots have certain basic sections, and some have additional parts that give them added capabilities. All robots must have a power source, either mechanical, electrical, electromechanical, pneumatic, hydraulic, nuclear, solar, or a combination of these. They all must also have a means of converting the source of power to usable power and transmitting it: motors, pumps, and other actuators, and gears, cams, drives, bars, belts, chains, and the like. To do useful work, most robots have an assemblage of links arranged in a configuration best-suited to the tasks it is to do and the space within which it is to do them. In some robots this is called the manipulator, and the links and the joints connecting them are sometimes referred to as the robot’s body, shoulder, arm, and wrist. A robot may have no, one, or multiple arms. Multiarmed robots must have control coordination protocols to prevent collisions, as must robots that work very close to other robots. Some robots have an immovable base that connects the manipulator to the floor, wall, column, or ceiling. In others the base is attached to, or part of, an additional section that gives it mobility by virtue of components such as wheels, tracks, rollers, “legs and feet,” or other devices. All robots need an intelligence and control section that can range from a very simple and relatively limited type up to a very complex and sophisticated system that has the capability of continuously interacting with varying conditions and changes in its environment. Such advanced control systems would include computers of various types, a multitude of microprocessors, memory media, many input/output ports, absolute and/or incremental encoders, and whatever type and number of sensors may be required to make it able to accomplish its mission. The robot may be required to sense the positions of its several links, how well it is doing its mission, the state of its environment, and other events and conditions. In some cases, the sensors are built into the robots; in others they are handled as an adjunct capability—such as machine vision, voice recognition, and sonic ranging—attached and integrated into the overall robotic system. Other items that may be built into the robots or handled as attachments are the tooling—the things that interface the robot’s arm or wrist and the workpiece or object to be manipulated and which accomplish the intended task. These are called the robot’s end effectors or end-of-arm tooling (EOAT) and may be very simple or so complex that they become a major part of the total cost of the robotic system. A gripper may be more than a crude open or closed clamp (see Fig. 10.6.3), jaws equipped with sensors (see Figure 10.6.4), or a multifingered hand complete with several types of miniature sensors and capable of articulations that even the human hand cannot do. They may be binary or servoed. They may be purchased from stock or custom-designed to do specific tasks. They may be powered by the same type of energy as the basic robot or they may have a different source of power. A hydraulically powered robot may have pneumatically powered grippers and tooling, for example, making it a hybrid system. Most end effectors and EOAT are easily changeable so as to enable the robot to do more tasks. Another part of the anatomy of many robots is a safety feature. Many robots are fast, heavy, and powerful, and therefore a source of danger. 
Safety devices can be both built into the basic robot and added as an adjunct to the installation so as to reduce the chances that the robot will do harm to people, other equipment, the tooling, the product, and itself. The American National Standards Institute, New York City, issued
ANSI/RIA R 15.06-1986, which is supported as a robot safety standard by the Robotic Industries Association, Dearborn, MI. Finally, ways of programming robots, communicating with them, and monitoring their performance are needed. Some of the devices used for these purposes are teach boxes on pendants, keyboards, panels, voice recognition equipment, and speech synthesis modules.


said to be of the revolute or jointed-arm configuration (Fig. 10.6.8). The selective compliance assembly robot arm (SCARA) is a special type of revolute robot in which the joint axes lie in the vertical plane rather than in the horizontal (Fig. 10.6.9).

Fig. 10.6.3 A simple nonservo, no-sensor robot gripper.

Fig. 10.6.5 Cartesian- or rectangular-coordinate robot configuration. (a) Movements; (b) work envelope.

Fig. 10.6.6 Cylindrical-coordinate robot configuration. (a) Movements; (b) work envelope.

Fig. 10.6.4 A complex servoed, multisensor robot gripper.

Robot Specifications

While most robots are made “to stock” and sold from inventory, many are made “to order” to the specifications required by the intended application. Configuration The several links of a robot’s manipulator may be joined to move in various combinations of ways relative to one another. A joint may turn about its axis (rotational axis) or it may translate along its axis (linear axis). Each particular arrangement permits the robot’s control point (usually located at the center of its wrist flange, at the center of its gripper jaws, or at the tip of its tool) to move in a different way and to reach points in a different area. When a robot’s three movements are along translating joints, the configuration is called cartesian or rectangular (Fig. 10.6.5). When one of the three joints is made a rotating joint, the configuration is called cylindrical (Fig. 10.6.6). When two of the three joints are rotational, the configuration is called polar or spherical (Fig. 10.6.7). And if all three joints are rotational, the robot is

Articulations A robot may be described by the number of articulations, or jointed movements, it is capable of doing. The joints described above that establish its configuration give the typical robot three degrees of freedom (DOF) and at least three more may be obtained from the motions of its wrist—these being called roll, pitch, and yaw (Fig. 10.6.10). The mere opening and closing of a gripper is usually not regarded as a degree of freedom, but the many positions assumable by a servoed gripper may be called a DOF and a multifingered mechanical hand has several additional DOFs. The added abilities of mobile robots to traverse land, water, air, or outer space are, of course, additional degrees of freedom. Figure 10.6.11 shows a revolute robot with a threeDOF wrist and a seventh DOF, the ability to move in a track along the floor. Size The physical size of a robot influences its capacity and its capabilities. There are robots as large as gantry cranes and as small as grains of salt—the latter being made by micromachining, which is the same process used to make integrated circuits and computer chips. “Gnat” robots, or “microbots,” can be made using nanotechnology and can be small enough to travel inside a person’s body to detect, report out, and possibly treat a problem. Some robots intended for light assembly work are designed to be approximately the size of a human so as to make the installation of the robot into the space vacated by the replaced human as easy and least disruptive as possible.


Fig. 10.6.9 SCARA coordinate robot configuration. (a) Movements; (b) work envelope.

Fig. 10.6.7 Polar- or spherical-coordinate robot configuration. (a) Movements; (b) work envelope.

Fig. 10.6.10 A robot wrist able to move in three ways: roll, pitch, and yaw.

Fig. 10.6.8 Revolute– or jointed-arm–coordinate robot configuration. (a) Movements; (b) work envelope.

Fig. 10.6.11 A typical industrial robot with three basic degrees of freedom, plus three more in its wrist, and a seventh in its ability to move back and forth along the floor.

Workspace The extent of a robot's reach in each direction depends on its configuration, articulations, and the sizes of its component links and other members. Figure 10.6.12 shows the side and top views of the points that the industrial robot in Fig. 10.6.11 can reach. The solid geometric space created by subtracting the inner (fully contracted) from the outer (fully extended) possible positions of a defined point (e.g., its control point, the center of its gripper, or the tip of its tool) is called the robot's workspace or work envelope. For the rectangular-coordinate

Fig. 10.6.12 Side and top views of the points that the robot in Fig. 10.6.11 can reach. The three-dimensional figure generated by these accessible end points is shown in Fig. 10.6.8b.

configuration robot, this space is a rectangular parallelepiped; for the cylindrical-configuration robot, it is a cylinder with an inaccessible space hollowed out of its center; for the polar and revolute robots, it consists of spheres with inaccessible spots at their centers; and for the SCARA configuration, it is the unique solid shown in the illustration. See Figs. 10.6.5 to 10.6.9 for illustrations of the work envelope of robots of the various configurations. For a given application, one would select a robot with a work envelope slightly larger than the space required for the job to be done. Selecting a robot too much larger than is needed could increase the costs, control problems, and safety concerns. Many robots have what is called a sweet spot: that smaller space where the performance of most of the specifications (payload, speed, accuracy, resolution, etc.) peaks. Payload The payload is the weight that the robot is designed to lift, hold, and position repeatedly with the accuracy advertised. People reading robot performance claims should be aware of the conditions under which they are promised, e.g., whether the gripper is empty or at maximum payload, or the manipulator is fully extended or fully retracted. Payload specifications usually include the weight of the gripper and other end-of-arm tooling, therefore the heavier those devices become as they are made more complex with added sensors and actuators, the less workpiece payload the robot can lift. Speed A robot’s speed and cycle time are related, but different, specifications. There are two components of its speed: its acceleration and deceleration rate and its slew rate. The acceleration/deceleration rate is the time it takes to go from rest to full speed and the time it takes to go from full speed to a complete stop. The slew rate is the velocity once the robot is at full speed. Clearly, if the distance to go is short, the robot may not reach full speed before it must begin to slow to stop, thus making its slew rate an unused capability. The cycle time is the time it takes for the robot to complete one cycle of picking a given object up at given height, moving it a given distance, lowering it, releasing it, and returning to the starting point. This is an important measure for those concerned with productivity, or how many parts a robot can process in an hour. Speed, too, is a specification that depends on the conditions under which it is measured—payload, extension, etc. Accuracy The accuracy of a robot is the difference between where its control point goes and where it is instructed or programmed to go. Repeatability The repeatability or precision of a robot is the variance in successive positions each time its control point returns to a taught position.


Resolution The resolution of a robot is the smallest incremental change in position that it can make or its control system can measure.
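Given a set of positions recorded as the control point repeatedly returns to a commanded point, the specifications above can be separated numerically. In the Python sketch below, accuracy is taken as the offset of the mean attained point from the commanded point and repeatability as the scatter about that mean; resolution, being the smallest commandable increment, cannot be derived from such data. The function name and sample coordinates are illustrative assumptions.

import math

def accuracy_and_repeatability(commanded, reached):
    # commanded: (x, y, z) target; reached: list of measured (x, y, z) points.
    n = len(reached)
    mean = tuple(sum(p[i] for p in reached) / n for i in range(3))
    # Accuracy: distance from the commanded point to the mean attained point
    accuracy = math.dist(commanded, mean)
    # Repeatability: RMS scatter of the attained points about their mean
    repeatability = math.sqrt(sum(math.dist(p, mean) ** 2 for p in reached) / n)
    return accuracy, repeatability

target = (500.0, 200.0, 300.0)                      # mm, hypothetical taught point
hits = [(500.4, 200.1, 299.8), (500.5, 199.9, 299.9), (500.3, 200.2, 300.0)]
print(accuracy_and_repeatability(target, hits))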
Robot Motion Control
While a robot’s number of degrees of freedom and the dimensions of its work envelope are established by its configuration and size, the paths along which it moves and the points to which it goes are established by its control system and the instructions it is given. The motion paths generated by a robot’s controller are designated as point-to-point or continuous. In point-to-point motion, the controller moves the robot from starting point A to end point B without regard for the path it takes or of any points in between. In a continuous-path robot controller, the path in going from A to B is controlled by the establishment of via or way points through which the control point passes on its way to each end point. If there is a conditional branch in the program at an end point such that the next action will be based on the value of some variable at that point, then the point is called a reference point. Some very simple robots move only until their links hit preset stops. These are called pick-and-place robots and are usually powered by pneumatics with no servomechanism or feedback capabilities. The more sophisticated robots are usually powered by hydraulics or electricity. The hydraulic systems require pumps, reservoirs, and cylinders and are good for lifting large loads and holding them steady. The electric robots use stepper motors or servomotors and are suited for quick movements and high-precision work. Both types of robots can use harmonic drives (Fig. 10.6.13), gears, or other mechanisms to reduce the speed of the actuators. The harmonic drive is based on a principle patented by the Harmonic Drive unit of the Emhart Division of the Black & Decker Corp. It permits dramatic speed reductions, facilitating the use of high-speed actuators in robots. It is composed of three concentric components: an

Fig. 10.6.13 A harmonic drive, based on a principle patented by the Harmonic Drive unit of the Emhart Division of the Black & Decker Corp. As the elliptical center wave generator revolves, it deforms the inner toothed Flexspline into contact with the fixed outer circular spline which has two more teeth. This results in a high speed-reduction ratio.

elliptical center (called the wave generator), a rigid outer ring (called the circular spline), and a compliant inner ring (called the Flexspline) between the two. A speed-reduction ratio equal to one-half of the number of teeth on the Flexspline is achieved by virtue of the wave generator rotating and pushing the teeth of the Flexspline into contact with the stationary circular spline, which has two more teeth than the Flexspline. The result is that the Flexspline rotates (in the opposite direction) the distance of two of its teeth (its circular pitch × 2) for every full rotation of the wave generator. Robots also use various devices to tell them where their links are. A robot cannot very well be expected to go to a new position if it does not know where it is now. Some such devices are potentiometers, resolvers, and encoders attached to the joints and in communication with the controller. The operation of a rotary-joint resolver is shown in Fig. 10.6.14. With one component on each part of the joint, an excitation input in one part induces sine and cosine output signals in coils (set at right angles in the other part) whose differences are functions of the amount of angular offset between the two. Those angular displacement waveforms are sent to an analog-to-digital converter for output to the robot's motion controller.
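The harmonic-drive tooth-count relationship reduces to a one-line calculation. The Python sketch below assumes the usual arrangement described above (circular spline fixed, two-tooth difference); the negative sign is simply a convention for the reversed direction of rotation.

def harmonic_drive_ratio(flexspline_teeth, circular_spline_teeth):
    # Reduction ratio with the circular spline held fixed; the output
    # (Flexspline) turns opposite to the wave generator, hence the minus sign.
    tooth_difference = circular_spline_teeth - flexspline_teeth   # normally 2
    return -flexspline_teeth / tooth_difference

# Example: 200-tooth Flexspline, 202-tooth circular spline -> 100:1 reduction
print(harmonic_drive_ratio(200, 202))   # -100.0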


Encoders may be of the contact or noncontact type. The contact type has a metal brush for each band of data, and the noncontact type either reflects light to a photodetector or lets the light pass or not pass to a photodetector on its opposite side, depending on its pattern of light/dark

more complex and more expensive. They have a light source, a photodetector, and two electrical wires for each track or concentric ring of marks. Each mark or clear spot in each ring represents a bit of data, with the whole word being made up by the number of rings on the disk. Figure 10.6.16 shows 4-bit-word and 8-bit-word disks. Each position of the joint is identified by a specific unique word. All of the tracks in a segment of the disk are read together to input a complete coded word (address) for that position. The codes mostly used are natural binary, binary-coded decimal (BCD), and Gray. Gray code has the advantage that the state (ON/OFF or high/low) of only one track changes for each position change of the joint, making error detection easier.
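The single-bit-change property of Gray code is easy to verify in a few lines. The Python sketch below uses the standard binary-reflected Gray code; it illustrates the property and is not the track layout of any particular encoder disk.

def binary_to_gray(n):
    # Binary-reflected Gray code of integer n.
    return n ^ (n >> 1)

def gray_to_binary(g):
    # Inverse conversion.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent positions differ in exactly one bit of their Gray code.
for position in range(8):
    print(position, format(binary_to_gray(position), "03b"))
# 0 000, 1 001, 2 011, 3 010, 4 110, 5 111, 6 101, 7 100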

Fig. 10.6.14 Rotary-joint resolver. Resolver components on the robot’s joint send analog angular displacement signals to a digital converter for interpretation and output to the robot’s controller.

or clear/solid marks. Both types of encoders may be linear or rotary—that is, shaped like a bar or a disk—to match the type of joint to which they are attached. The four basic parts of a rotary optical encoder (the light sources, the coding disk, the photodetectors, and the signal processing electronics) are all contained in a small, compact, sealed package. Encoders are also classified by whether they are incremental or absolute. Figure 10.6.15 shows two types of incremental optical encoder disks. They have equally spaced divisions so that the output of their photodetectors is a series of square waves that can be counted to tell the amount of rotational movement of the joint to which they are attached. Their resolution depends on how many distinct marks can be painted or etched on a disk of a given size. Disks with over 2,500 segments are available, such that the counter recording 25 impulses would know that the joint rotated 1/100 of a circle, reported as either degrees or radians, depending on the type of controller. It can also know the speed of that movement if it times the intervals between incoming pulses. It will not be able to tell the direction of the rotation if it uses a single-track, or tachometer-type, disk, as shown in Fig. 10.6.15a. A quadrature device, as shown in Fig. 10.6.15b, has two output channels, A and B, which are offset 90°, thus allowing the direction of movement to be determined by seeing whether the signals from A lead or lag those from B.
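Direction sensing from the two quadrature channels reduces to watching the order in which the A and B transitions occur. The Python sketch below uses a common state-table approach to decode a stream of (A, B) samples; it is illustrative and not tied to any particular encoder or counter chip.

# Count up or down from successive (A, B) samples of a quadrature encoder.
# Valid transitions step the 2-bit state through the sequence 00-01-11-10.
STEP = {  # (previous state, new state) -> count increment
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    # samples: iterable of (A, B) logic levels; returns the net count.
    count = 0
    prev = None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None and state != prev:
            count += STEP.get((prev, state), 0)   # ignore illegal double steps
        prev = state
    return count

# One full quadrature cycle in one direction gives +4 counts (4x decoding);
# the reverse order of the same samples gives -4.
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))   # 4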

Fig. 10.6.15 Two types of incremental optical encoder disks. Type a, a single track incremental disk, has a spoke pattern that shutters the light onto a photodetector. The resulting triangular waveform is electronically converted to a squarewave output, and the pulses are counted to determine the amount of angular movement of the joint to which the encoder components are attached. Type b, a quadrature incremental disk, uses two light sources and photodetectors to determine the direction of movement.

The major limitation of incremental encoders is that they do not tell the controller where they are to begin with. Therefore, robots using them must be sent “home,” to a known or initial position of each of their links, as the first action of their programs and after each switch-off or interruption of power. Absolute encoders solve this problem, but are

Fig. 10.6.16 Two types of absolute optical encoder disks. Type a is for 4-bit-word systems, and type b is for 8-bit-word systems. Absolute encoders require a light source and photodetector set for each bit of word length.

All robot controllers require a number of input and output channels for communication and control. The number of I/O ports a robotic system has is one measure of its capabilities. Another measure of a robot's sophistication is the number and type of microprocessors and microcomputers in its system. Some have none; some have a microprocessor at each joint, as well as one dedicated to mathematical computations, one to safety, several to vision and other sensory subsystems, several to internal mapping and navigation (if they are mobile robots), and a master controller micro- or minicomputer.

Programming Robots

The methods of programming robots range from very elementary to advanced techniques. The method used depends on the robot, its controller, and the task to be performed. At the most basic level, some robots can be programmed by setting end stops, switches, pegs in holes, cams, wires, and so forth, on rotating drums, patch boards, control panels, or the robot's links themselves. Such robots are usually the pick-and-place type, where the robot can stop only at preset end points and nowhere in between, the number of accessible points being 2^N, where N is the robot's number of degrees of freedom. The more advanced robots can be programmed to go to any point within their workspace and to execute other commands by using a teach box on a tether (called a teach pendant) or a remote control unit. Such devices have buttons, switches, and sometimes a joystick to instruct the robot to do the desired things in the desired sequence. Errors can be corrected by simply overwriting the mistake in the controller's memory. The program can be tested at a slow speed, then, when perfected, can be run at full speed by merely turning a switch on the teach box. This is called the walk-through method of programming. Another way some robots can be programmed is to physically take hold of it (or a remote representation of it) and lead it through the desired sequence of motions. The controller records the motions when in teach mode and then repeats them when switched to run mode. This is called the lead-through method of programming. The foregoing are all teach-and-repeat programming methods and are done on-line, that is, on a specific robot as it is in place in an installation but not doing other work while it is being programmed. More advanced programming involves the use of a keyboard to type in textual language instructions and data. This method can be done on
line or off line (away from and detached from the robot, such that it can be working on another task while the program is being created). There are many such programming languages available, but most are limited to use on one or only a few robots. Most of these languages are explicit, meaning that each of their instructions gives the robot a specific command to do a small step (e.g., open gripper) as part of a larger task. The more advanced higher-level implicit languages, however, give instructions at the task level (e.g., assemble 200 of product XYZ), and the robot’s computer knows what sequence of basic steps it must execute to do that. These are also called object-level or world-modeling languages and model-based systems. Other ways of instructing a robot include spoken commands and feedback from a wide array of sensors, including
vision systems. Telecherics, teleoperators, and other mechanisms under the continuous control of a remotely located operator are not true robots, as they are not autonomous.

Using Robots

The successful utilization of robots involves more than selecting and installing the right robot and programming it correctly. For an assembly robot, for example, care must be taken to design the product so as to facilitate assembly by a robot, presenting the component piece parts and other material to it in the best way, laying out the workplace (work cell) efficiently, installing all required safety measures, and training programming, operating, and maintenance employees properly.

10.7 MATERIAL STORAGE AND WAREHOUSING
by Vincent M. Altamuro

The warehouse handling of material is often more expensive than in-process handling, as it frequently requires large amounts of space, expensive equipment, labor, and computers for control. Warehousing activities, facilities, equipment, and personnel are needed at both ends of the process—at the beginning as receiving and raw materials and purchased parts storage, and at the end as finished goods storage, picking, packing, and shipping. These functions are aided by various subsystems and equipment—some simple and inexpensive, some elaborate and expensive. The rapid and accurate identification of materials is essential. This can be done using only human senses, humans aided by devices, or entirely automated. Bar codes have become an accepted and reliable means of identifying material and inputting that data into an information and control system. Material can be held, stacked, and transported in simple devices, such as shelves, racks, bins, boxes, baskets, tote pans, pallets and skids, or in complex and expensive computer-controlled systems, such as automated storage and retrieval systems.

IDENTIFICATION AND CONTROL OF MATERIALS

Materials must be identified, either by humans or by automated sensing devices, so as to:
1. Measure presence or movement
2. Qualify and quantify characteristics of interest
3. Monitor ongoing conditions so as to feed back corrective actions
4. Trigger proper marking devices
5. Actuate sortation or classification mechanisms
6. Input computing and control systems, update data bases, and prepare analyses and summary reports
To accomplish the above, the material must have, or be given, a unique code symbol, mark, or special feature that can be sensed and identified. If such a coded symbol is to be added to the material, it must be:
1. Easily and economically produced
2. Easily and economically read
3. Able to have many unique permutations—flexible
4. Compact—sized to the package or product
5. Error resistant—low chance of misreading, reliable
6. Durable
7. Digital—compatible with computer data formats
The codes can be read by contact or noncontact means, by moving or stationary sensors, and by humans or devices. There are several technologies used in the identification and control of materials. The most important of these are:
1. Bar codes
2. Radio-frequency identification (RFID)
3. Smart cards
4. Machine vision
5. Voice recognition
6. Magnetic stripes
7. Optical character recognition (OCR)
All can be used for automatic data collection (ADC) purposes and as part of a larger electronic data interchange (EDI), which is the paperless, computer-to-computer, intercompany, and international communications and information exchange using a common data language. The dominant data format standard in the world is Edifact (Electronic Data Interchange for Administration, Commerce, and Transportation).
Bar codes are one simple and effective means of identifying and controlling material. Bar codes are machine-readable patterns of alternating, parallel, rectangular bars and spaces of various widths whose combinations represent numbers, letters, or other information as determined by the particular symbology, or language, employed. Several types of bar codes are available. While multicolor bar codes are possible, simple black-and-white codes dominate because a great number of permutations are available by altering their widths, presence, and sequences. Some codes are limited to numeric information, but others can encode complete alphanumeric plus special symbols character sets. Most codes are binary digital codes, some with an extra parity bit to catch errors. In each bar code, there is a unique sequence set of bars and spaces to represent each number, letter, or symbol. One-dimensional, or linear, codes print all of the bars and spaces in one row. Two-dimensional stacked codes arrange sections above one another in several rows so as to condense more data into a smaller space. To read stacked codes, the scanner must sweep back and forth so as to read all of the rows. The most recent development is the two-dimensional matrix codes that contain much more information and are read with special scanners or video cameras. Many codes are of the n of m type, i.e., 2 of 5, 3 of 9, etc. For example, in the 2-of-5 code, a narrow bar may represent a 0 bit or OFF and a wide bar a 1 or ON bit. For each set of 5 bits, 2 must be ON—hence, 2 of 5. See Table 10.7.1 for the key to this code. A variation to the basic 2 of 5 is the Interleaved 2-of-5, in which the spaces between the bars also can be either narrow or wide, permitting the reading of a set of bars, a set of spaces, a set of bars, a set of spaces, etc., and yielding more information in less space. See Fig. 10.7.1. Both the 2-of-5 and the Interleaved 2-of-5 codes have ten numbers in their character sets. Code 39, another symbology, permits alphanumeric information to be encoded by allowing each symbol to have 9 modules (locations) along the bar, 3 of which must be ON. Each character comprises five bars and four spaces. Code 39 has 43 characters in its set (0–9, A–Z, six symbols, and one start/stop signal). Each bar or space in a bar code is called an element. In Code 39, each element is one of two widths, referred to as wide and narrow. As in other bar codes, the narrowest element is referred to as the X dimension and the ratio of the widest to narrowest element widths is referred to as the N of the code. For each code, X and


Table 10.7.1  Bar Code 2-of-5 and I 2/5 Key

Character    1    2    4    7    P
    0        0    0    1    1    0
    1        1    0    0    0    1
    2        0    1    0    0    1
    3        1    1    0    0    0
    4        0    0    1    0    1
    5        1    0    1    0    0
    6        0    1    1    0    0
    7        0    0    0    1    1
    8        1    0    0    1    0
    9        0    1    0    1    0

Wide bars and spaces = 1; narrow bars and spaces = 0.

Table 10.7.2  Uniform Symbology Specification Code 39 Symbol Character Set
SOURCE: Uniform Symbology Specification Code 39, Table 2: Code 39 Symbol Character Set, © Copyright 1993 Automatic Identification Manufacturers, Inc.
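As an illustration of how Table 10.7.1 is applied, the Python sketch below maps each digit to its five-bar wide/narrow pattern (wide = 1, narrow = 0). Start/stop patterns, spaces, and printing tolerances are omitted; the dictionary simply restates the table.

TWO_OF_FIVE = {   # digit -> bar pattern from Table 10.7.1, in 1-2-4-7-P order
    "0": "00110", "1": "10001", "2": "01001", "3": "11000", "4": "00101",
    "5": "10100", "6": "01100", "7": "00011", "8": "10010", "9": "01010",
}
assert all(p.count("1") == 2 for p in TWO_OF_FIVE.values())   # 2 of the 5 bits are ON

def encode_2_of_5(digits):
    # Wide (1) / narrow (0) bar patterns for a digit string, data bars only.
    return [TWO_OF_FIVE[d] for d in digits]

print(encode_2_of_5("407"))   # ['00101', '00110', '00011']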

N are constants. These values are used to calculate length of the bar code labels. The Y dimension of a bar code is the length (or height) of its bars and spaces. It influences the permissible angle at which a label may be scanned without missing any of the pattern. See Table 10.7.2 for the Code 39 key. The patterns for the Code 39 bars and spaces are designed in such a way that changing or misreading a single bit in any of them

Fig. 10.7.1 A sample of a Uniform Symbology Specification Interleaved 2-of-5 bar code (also called I 2/5 or ITF).

results in an illegal code word. The bars have only an even number of wide elements and the spaces have only an odd number of wide elements. The code is called self-checking because that design provides for the immediate detection of single printing or reading errors. See Fig. 10.7.2. Code 39 was developed by Intermec (Everett, WA), as was Code 93, which requires less space by permitting more sizes of bars and spaces. Code 93 has 128 characters in its set (the full ASCII set) and permits a maximum of about 14 characters per inch, versus Code 39’s maximum of about 9. Code 39 is discrete, meaning that each character is printed independently of the other characters and is separated from the characters on both sides of it by a space (called an intercharacter gap) that is not part of the data. Code 93 is a continuous code, meaning that there are no intercharacter gaps in it and all spaces are parts of the data symbols. Code 128, a high-density alphanumeric symbology, has 128 characters and a maximum density of about 18 characters per inch (cpi) for numeric data and about 9 cpi for alphanumeric data. A Code 128 character comprises six elements: three bars and three spaces, each of four possible widths, or modules. Each character consists of eleven 1X wide modules, where X is the width of the narrowest bar or space. The sum of the number of bar modules in any character is always even (even parity), while the sum of the space modules is always odd (odd parity), thus permitting self-checking of the characters. See Table 10.7.3 for the

SOURCE: Uniform Symbology Specification Code 39, Table 2: Code 39 Symbol Character Set, © Copyright 1993 Automatic Identification Manufacturers, Inc.

Code 128 key. Code 128 is a continuous, variable-length, bidirectional code. The Universal Product Code (U.P.C.) is a 12-digit bar code adopted by the U.S. grocery industry in 1973. See Figure 10.7.3 for a sample of the U.P.C. standard symbol. Five of the six digits in front of the two long bars in the middle [called the center guard pattern (CGP)] identify the item’s manufacturer, and five of the six digits after the CGP identify the specific product. There are several variations of the U.P.C.; all those used in the United States are controlled by the Uniform Code Council, located in Dayton, OH. One variation is an 8-digit code that identifies both the manufacturer and the specific product with one number. The U.P.C. uses prefix and suffix digits separated from the main number. The prefix digit is the number system character. See Table 10.7.4 for the key to these prefix numbers. The suffix is a “modulo-10” check digit. It serves to confirm that the prior 11 digits were read correctly. See Figure 10.7.4 for the method by

Table 10.7.3

Uniform Symbology Specification Code 128 Symbol Character Set

SOURCE: Uniform Symbology Specification Code 128, Table 2: Code 128 Symbol Character Set, © Copyright 1993 Automatic Identification Manufacturers, Inc.


Table 10.7.4  U.P.C. Prefix Number System Character Key
The human-readable character identifying the encoded number system will be shown in the left-hand margin of the symbol as per Fig. 10.7.3.

Number system character    Specified use
0                          Regular U.P.C. codes (Versions A and E)
2                          Random-weight items, such as meat and produce, symbol-marked at store level (Version A)
3                          National Drug Code and National Health Related Items Code in current 10-digit code length (Version A). Note that the symbol is not affected by the various internal structures possible with the NDC or HRI codes.
4                          For use without code format restriction with check digit protection for in-store marking of nonfood items (Version A)
5                          For use on coupons (Version A)
6, 7                       Regular U.P.C. codes (Version A)
1, 8, 9                    Reserved for uses unidentified at this time
SOURCE: U.P.C. Symbol Specification Manual, January 1986, © Copyright 1986 Uniform Code Council, Inc. The Uniform Code Council prohibits the commercial use of their copyrighted material without prior written permission.

Fig. 10.7.2 A sample of Uniform Symbology Specification Code 39 bar code.

which the check digit is calculated. Each U.P.C. character comprises two bars and two spaces, each of which may be one, two, three, or four modules wide, such that the entire character consumes a total of seven modules of space. See Table 10.7.5 for the key to the code. It will be noted that the codes to the left of the CGP all start with a zero (a space)

The following example will illustrate the calculation of the check character for the symbol shown in Fig. 10.7.3. Note that the code shown in Fig. 10.7.3 is in number system 0.
Step 1: Starting at the left, sum all the characters in the odd positions (that is, first from the left, third from the left, and so on), starting with the number system character. (For the example, 0 + 2 + 4 + 6 + 8 + 0 = 20.)
Step 2: Multiply the sum obtained in Step 1 by 3. (The product for the example is 60.)
Step 3: Again starting at the left, sum all the characters in the even positions. (For the example, 1 + 3 + 5 + 7 + 9 = 25.)
Step 4: Add the product of step 2 to the sum of step 3. (For the example, the sum is 85.)
Step 5: The modulo-10 check character value is the smallest number which, when added to the sum of step 4, produces a multiple of 10. (In the example, the check character value is 5.)
The human-readable character identifying the encoded check character will be shown in the right-hand margin of the symbol as in Fig. 10.7.3.
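The five steps above translate directly into a short calculation. The following Python sketch is illustrative only; the function name and the digit list are assumptions, not part of the U.P.C. specification.

```python
# Modulo-10 check character per the five-step procedure of Fig. 10.7.4,
# applied to the 11 digits (number system character plus 10 data digits).
def upc_check_digit(digits):
    odd_sum = sum(digits[0::2])                 # step 1: odd positions from the left
    total = 3 * odd_sum + sum(digits[1::2])     # steps 2 through 4
    return (10 - total % 10) % 10               # step 5: reach the next multiple of 10

# Worked example above: odd positions sum to 20, times 3 is 60; even positions
# sum to 25; total 85; the check character that brings 85 to 90 is 5.
print(upc_check_digit([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]))   # -> 5
```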

Fig. 10.7.3 A sample of the Universal Product Code Standard Symbol. (U.P.C. Symbol Specification Manual, January 1986, Copyright © 1986 Uniform Code Council, Inc. The Uniform Code Council prohibits the commercial use of their copyrighted material without prior written permission.)

and end with a one (a bar) while those to the right of the CGP all start with a one and end with a zero. This permits the code to be scanned from either left to right or right to left (that is, it is omnidirectional) and allows the computer to determine the direction and whether it needs to be reversed before use by the system. The U.P.C. label also has the number printed in human-readable form along the bottom. When integrated into a complete system, the U.P.C. can serve as the input data point for supermarket pricings, checkouts, inventory control, reordering stock, employee productivity checks, cash control, special promotions, purchasing habits and history of customers holding store “club” cards, sales analyses, and management reports. Its use increases checkout speed and reduces errors. In some stores, it is combined with

Fig. 10.7.4 U.P.C. check character calculation method. (U.P.C. Symbol Specification Manual, January 1986, Copyright © 1986 Uniform Code Council, Inc. The Uniform Code Council prohibits the commercial use of their copyrighted material without prior written permission.)

a speech synthesizer that voices the description and price of each item scanned. It is a vital component in a fully automated store. As with many new technologies, different users use different protocols. New users must choose to use an existing protocol or create one of their own better suited to their particular needs. Soon several camps of conflicting “standards” arise, although one universally accepted standard is essential for the full application of the technology in all products, systems, companies, and countries. The longer users of each different design go separate ways, the more difficult it is for them to change, as their investment in hardware and software increases. Such was the case with bar codes. Hence, on January 1, 2005, in order to make their bar codes compatible with the rest of the world, and as directed by the Uniform Code Council, the United States and Canada added a digit to their 12-digit U.P.C. to make a 13-digit pattern that would match the European Article Numbering system pattern and become the global standard.
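As a hedged illustration of that change, the sketch below applies the standard EAN/GTIN-13 check-digit weighting (1, 3, 1, 3, … from the left, which is not spelled out in the text above) to show that a 12-digit U.P.C. prefixed with a zero keeps its original check digit. The function name and sample digits are assumptions.

```python
# EAN-13 check digit: weight 1 on odd positions, 3 on even positions (from the left).
def ean13_check_digit(data12):
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(data12))
    return (10 - total % 10) % 10

upc12 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 5]        # 11 U.P.C. digits plus check digit 5
# Prefixing a zero produces a 13-digit pattern whose check digit is unchanged.
assert ean13_check_digit([0] + upc12[:11]) == upc12[11]
```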

Table 10.7.5  U.P.C. Encodation Key
Encodation for U.P.C. characters, number system character, and modulo check character.

Decimal value    Left characters (Odd parity—O)    Right characters (Even parity—E)
0                0001101                           1110010
1                0011001                           1100110
2                0010011                           1101100
3                0111101                           1000010
4                0100011                           1011100
5                0110001                           1001110
6                0101111                           1010000
7                0111011                           1000100
8                0110111                           1001000
9                0001011                           1110100

The encodation for the left and right halves of the regular symbol, including U.P.C. characters, number system character, and modulo check character, is given in the preceding chart, which is applicable to Version A. Note that the left-hand characters always use an odd number (3 or 5) of modules to make up the dark bars, whereas the right-hand characters always use an even number (2 or 4). This provides an “odd” and “even” parity encodation for each character and is important in creating, scanning, and decoding a symbol.
SOURCE: U.P.C. Symbol Specification Manual, January 1986, © Copyright 1986 Uniform Code Council, Inc. The Uniform Code Council prohibits the commercial use of their copyrighted material without prior written permission.

Also, the Uniform Code Council and EAN International combined to form a global code-controlling organization, called GS1, headquartered in Brussels. Over the years, nearly 50 different one-dimensional bar code symbologies have been introduced. Around 20 found their way into common use at one time or another, with six or seven being most important. These are the Interleaved 2-of-5, Code 39, Code 93, Code 128, Codabar, Code 11, and the U.P.C. Table 10.7.6 compares some of the characteristics of these codes. The choice depends on the use by others in the same industry, the need for intercompany uniformity, the type and amount of data to be coded, whether merely numeric data or alphanumeric data plus special symbols will be needed, the space available on the item, and intracompany compatibility requirements. Some bar codes are limited to a fixed amount of data. Others can be extended to accommodate additional data, and the length of their labels is variable. Figure 10.7.5 shows a sample calculation of the length of a Code 39 label.


The foregoing one-dimensional, linear bar codes are sometimes called license plates because all they can do is contain enough information to identify an item, thereby permitting locating more information stored in a host computer. Two-dimensional stacked codes contain more information. The stacked codes, such as Code 49, developed by Dr. David Allais at Intermec, Code 16K, invented by Ted Williams, of Laserlight Systems, Inc., Dedham, MA, and others are really one-dimensional codes in which miniaturized linear codes are printed in multiple rows, or tiers. They are useful on small products, such as electronic components, single-dose medicine packages, and jewelry, where there is not enough space for a one-dimensional code. To read them, a scanner must traverse each row in sequence. Code 49 (see Fig. 10.7.6) can arrange all 128 ASCII encoded characters on up to eight stacked rows. Figure 10.7.7 shows a comparison of the space required for the same amount (30 characters) of alphanumeric information in three bar code symbologies: Code 39, Code 93, and Code 49. Code 16K (Fig. 10.7.8), which stands for 128 squared, shrinks and stacks linear codes in from 2

Fig. 10.7.6 Code 49. (Uniform Symbology Specification Code 49, Copyright © 1993 Automatic Identification Manufacturers, Inc.)

to 16 rows. It offers high-data-density encoding of the full 128-character ASCII set and double-density encoding of numerical data strings. It can fit 40 characters of data in the same space as eight would take in Code 128, and it can be printed and read with standard printing and scanning equipment.

Example used:
Symbology: Code 39
System: Each bar or space = one element
Each character = 9 elements (5 bars and 4 spaces)
Each character = 3 wide and 6 narrow elements (that is, 3 of 9 elements must be wide)
Width of narrow element = X dimension = 0.015 in (0.0381 cm)
Wide-to-narrow ratio = N value = 3:1 = 3
Length segments per data character = 3(3) + 6 = 15
Number of data characters in example = 6
Length of data section = 6 × 15 = 90.0 segments of length
Start character = 15.0
Stop character = 15.0
Intercharacter gaps (at 1 segment per gap): No. of gaps = (data characters - 1) + start + stop = (6 - 1) + 1 + 1 = 7, giving 7.0
Quiet zones (2, each at least 10 segments) = 20.0
Total segments = 147.0
Times X, the length of each segment (0.015 in): 147.0 × 0.015 = 2.205 in (5.6007 cm)

Fig. 10.7.5 Calculation of a bar code label length.
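The arithmetic of Fig. 10.7.5 generalizes readily. The following Python sketch reproduces it with the X dimension, wide-to-narrow ratio N, and quiet-zone width as adjustable inputs; the function name and default values are illustrative assumptions, not part of the Code 39 specification.

```python
# Label-length estimate for Code 39, following the method of Fig. 10.7.5.
def code39_label_length(data_chars, x=0.015, n=3, quiet_zone=10):
    char_len = 3 * n + 6                  # 3 wide + 6 narrow elements per character
    chars = data_chars + 2                # data characters plus start and stop
    gaps = chars - 1                      # one intercharacter gap between characters
    segments = chars * char_len + gaps + 2 * quiet_zone
    return segments * x                   # label length in the units of x

print(round(code39_label_length(6), 3))   # 2.205 (in), matching Fig. 10.7.5
```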

Fig. 10.7.7 A comparison of the space required by three bar codes. Each bar code contains the same 30 alphanumeric characters.

Two-dimensional codes allow extra information regarding an item’s description, date and place of manufacture, expiration date, lot number, package size, contents, tax code, price, etc. to be encoded, rather than merely its identity. Two-dimensional matrix codes permit the encoding of very large amounts of information. Read by video cameras or special laser scanners and decoders, they can contain a complete data file about

Fig. 10.7.8 Code 16K with human-readable interpretation. (Uniform Symbology Specification Code 16K, Copyright © 1993 Automatic Identification Manufacturers, Inc.)



Table 10.7.6  Comparison of Bar Code Symbologies

Characteristic                              Interleaved 2 of 5      Code 39                 Codabar        Code 11    U.P.C./EAN   Code 128            Code 93
Date of inception                           1972                    1974                    1972           1977       1973         1981                1982
Industry-standard specification (e)         AIM, ANSI, UPCC, AIAG   AIM, ANSI, AIAG, HIBC   CCBBA, ANSI    AIM        UPCC, IAN    AIM                 AIM
Government support                          -                       DOD                     -              -          -            -                   -
Corporate sponsors                          Computer Identics       Intermec                Welch Allyn    Intermec   -            Computer Identics   Intermec
Most-prominent application area             Industry                Industry                Medical        Industry   Retail       New                 New
Variable length                             No (a)                  Yes                     Yes            Yes        No           Yes                 Yes
Alphanumeric                                No                      Yes                     No             No         No           Yes                 Yes
Discrete                                    No                      Yes                     Yes            Yes        No           No                  No
Self-checking                               Yes                     Yes                     Yes            No         Yes          Yes                 No
Constant character width                    Yes                     Yes                     Yes (b)        Yes        Yes          Yes                 Yes
Simple structure (two element widths)       Yes                     Yes                     No             No         No           No                  No
Number of data characters in set            10                      43/128                  16             11         10           103/128             47/128
Density (c):
  Units per character                       7–9                     13–16                   12             8–10       7            11                  9
  Smaller nominal bar, in                   0.0075                  0.0075                  0.0065         0.0075     0.0104       0.010               0.008
  Maximum characters/inch                   17.8                    9.4                     10             15         13.7         9.1                 13.9
Specified print tolerance at maximum density, in:
  Bar width                                 0.0018                  0.0017                  0.0015         0.0017     0.0014       0.0010              0.0022
  Edge to edge                              -                       -                       -              -          0.0015       0.0014              0.0013
  Pitch                                     -                       -                       -              -          0.0030       0.0029              0.0013
Does print tolerance leave more than half
  of the total tolerance for the scanner?   Yes                     Yes                     No             Yes        No           No                  Yes
Data security (d)                           High                    High                    High           High       Moderate     High                High

(a) Interleaved 2 of 5 is fundamentally a fixed-length code.
(b) Using the standard dimensions, Codabar has constant character width. With a variant set of dimensions, width is not constant for all characters.
(c) Density calculations for Interleaved 2 of 5, Code 39, and Code 11 are based on a wide-to-narrow ratio of 2.25:1. Units per character for these symbols are shown as a range corresponding to wide/narrow ratios from 2:1 to 3:1. A unit in Codabar is taken to be the average of narrow bars and narrow spaces, giving about 12 units per character.
(d) High data security corresponds to less than 1 substitution error per several million characters scanned using reasonably good-quality printed symbols. Moderate data security corresponds to 1 substitution error per few hundred thousand characters scanned. These values assume no external check digits other than those specified as part of the symbology and no file-lookup protection or other system safeguards.
(e) AIM = Automatic Identification Mfrs Inc (Pittsburgh); ANSI = American National Standards Institute (New York); AIAG = Automotive Industry Action Group (Southfield, MI); IAN = Intl Article Numbering Assn (Belgium); DOD = Dept of Defense; CCBBA = Committee for Commonality in Blood Banking (Arlington, VA); UPCC = Uniform Product Code Council Inc (Dayton, OH).
SOURCE: Intermec—A Litton Company, as published in the March 1987 issue of American Machinist magazine, Penton Publishing Co., Inc.

the product to which they are affixed. In many cases, this eliminates the need to access the memory of a host computer. That the data file travels with (on) the item is also an advantage. Their complex patterns and interpretation algorithms permit the reading of complete files even when part of the pattern is damaged or missing. Code PDF417, developed by Symbol Technologies Inc., Bohemia, NY (and put in the public domain, as are most codes), is a two-dimensional bar code symbology that can record large amounts of data, as well as digitized images such as fingerprints and photographs. To read the high-density PDF417 code, Symbol Technologies Inc. has developed a rastering laser scanner. This scanner is available in both hand-held and fixed-mount versions. By design, PDF417 is also scannable using a wide range of technologies, as long as specialized decoding algorithms are used. See Fig. 10.7.9 for an illustration of PDF417’s ability to compress Abraham Lincoln’s Gettysburg Address into a small space. The minimal unit that can represent data in PDF417 is called a codeword. It comprises 17 equal-length modules. These are grouped to form 4 bars and 4 spaces, each from one to six modules wide, so that they total 17 modules. Each unique codeword pattern is given one of 929 values. Further, to utilize different data compression algorithms based on the data type (i.e., ASCII versus binary), the system is multimodal, so that a codeword value may have different meanings depending on which of the various modes is being used. The modes are switched automatically

Fig. 10.7.9 A sample of PDF417 Encoding: Abraham Lincoln’s Gettysburg Address. (Symbol Technologies, Inc.)

during the encoding process to create the PDF417 code and then again during the decoding process. The mode-switching instructions are carried in special codewords as part of the total pattern. Thus, the value of a scanned codeword is first determined (by low-level decoding), then its meaning is determined by virtue of which mode is in effect (by high-level decoding). Code PDF417 is able to detect and correct errors because of its sophisticated error-correcting algorithms. Scanning at



angles is made possible by giving successive rows of the stacked code different identifiers, thereby permitting the electronic “stitching” of the individual codewords scanned. In addition, representing the data by using codewords with numerical values that must be interpreted by a decoder permits the mathematical correction of reading errors. At its simplest level, if two successive codewords have values of, say, 20 and 52, another codeword could be made their sum, a 72. If the first codeword were read, but the second were missed or damaged, the system would subtract 20 from 72 and get the value of the second codeword. More complex high-order simultaneous polynomial equations and algorithms are also used. The PDF417 pattern serves as a portable data file that provides local access to large amounts of information about an item (or person) when access to a host computer is not possible or economically feasible. PDF417 provides a low-cost paper-based method of transmitting machine-readable data between systems. It also has use in WORM (written once and read many times) applications. The size of the label and its aspect ratio (the shape, or relationship of height to width) are variable so as to suit the user’s needs. Data Matrix, a code developed by International Data Matrix Inc., Clearwater, FL, is a true matrix code. It may be square or rectangular, and can be enlarged or shrunk to whatever size [from a 0.001-in (0.0254-mm), or smaller, to a 14-in (36-cm), or larger, square] suits the user and application. Figure 10.7.10 shows the code in three sizes, but with the same contents. Its structure is uniform—square cells all the same size for a given pattern. To fit more user data into a matrix of a specific size, all the internal square cells are reduced to a size that will accommodate the

Fig. 10.7.11 Five Data Matrix patterns, all the same size but containing different amounts of information. (International Data Matrix, Inc.)
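Returning to the PDF417 error-correction discussion above, the sum-redundancy idea can be shown in a few lines of Python. This is only a toy illustration of the principle described in the text; actual PDF417 relies on the higher-order polynomial methods mentioned above, and the function names here are assumptions.

```python
def protect(a, b):
    """Append a redundant codeword carrying the sum of two data codewords."""
    return [a, b, a + b]

def recover(known, check_sum):
    """Rebuild a missing codeword from the surviving one and the sum codeword."""
    return check_sum - known

codewords = protect(20, 52)        # [20, 52, 72]
print(recover(20, codewords[2]))   # -> 52, the damaged codeword restored
```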

Fig. 10.7.10 Three sizes of Data Matrix patterns, all with the same contents. (International Data Matrix, Inc.)

amount of data to be encoded. Figure 10.7.11 shows five patterns, all the same size, containing different amounts of data. As a user selects additional data, the density of the matrix naturally increases. A pattern can contain from 1 to 2,000 characters of data. Two adjacent sides are always solid, while the remaining two sides are always composed of

alternating light and dark cells. This signature serves to have the video [charge-coupled device (CCD)] camera locate and identify the symbol. It also provides for the determination of its angular orientation. That capability permits a robot to rotate the part to which the code is affixed to the proper orientation for assembly, test, or packaging. Further, the information it contains can be program instructions to the robot. The labels can be printed in any combination of colors, as long as their difference provides a 20 percent or more contrast ratio. For aesthetic or security reasons, it can be printed in invisible ink, visible only when illuminated with ultraviolet light. The labels can be created by any printing method (laser etch, chemical etch, dot matrix, dot peen, thermal transfer, ink jet, pad printer, photolithography, and others) and on any type of computer printer through the use of a software interface driver provided by the company. The pattern’s binary code symbology is read with standard CCD video cameras attached to a company-made controller linked to the user’s computer system with standard interfaces and a Data Matrix Command Set. The algorithms for error correction



that it uses create data values that are cumulative and predictable so that a missing or damaged character can be deduced or verified from the data values immediately preceding and following it. Code One (see Fig. 10.7.12), invented by Ted Williams, who also created Code 128 and Code 16K, is also a two-dimensional checkerboard or matrix-type code read by a two-dimensional imaging device, such as a CCD video camera, rather than a conventional scanner. It can be read at any angle. It can encode the full ASCII 256-character set in addition to four function characters and a pad character. It can encode from 6 to over 2,000 characters of data in a very small space. If more data is needed, additional patterns can be linked together.

Fig. 10.7.12 Code One. (Laserlight Systems, Inc.)

Figure 10.7.13 shows a comparison of the sizes of four codes—Code 128, Code 16K, PDF-417, and Code One—to illustrate how much smaller Code One is than the others. Code One can fit 1,000 characters into a 1-in (2.54-cm) square of its checkerboard-like pattern. All Code One patterns have a finder design at their center. These patterns serve several purposes. They permit the video camera to find the label and determine its position and angular orientation. They eliminate the need for space-consuming quiet zones. And, because the internal patterns are different for each of the versions (sizes, capacities, and configurations) of the code, they tell the reader what version it is seeing. There are 10 different versions and 14 sizes of the symbol. The smallest contains 40 checkerboard-square type bits and the largest has 16,320. Code 1H, the

Fig. 10.7.13 Comparative sizes of four symbols encoding the same data with the same x dimension. (Laserlight Systems, Inc.)

largest version, can encode 3,550 numeric or 2,218 alphanumeric characters, while correcting up to 2,240 bit errors. The data is encoded in rectangular tile groups, each composed of eight one-bit squares arranged in a four-wide by two-high array. Within each tile, each square encodes one bit of data, with a white square representing a zero and a black square representing a one. The squares are used to create the 8-bit bytes that are the symbol’s characters. The most significant bit in the character pattern is in the top left corner of the rectangle, proceeding left to right through the upper row and then the lower row, to the least significant bit in the bottom right corner of the pattern. The symbol characters are ordered left to right and then top to bottom, so that the first character is in the top left corner of the symbol and the last character is in the bottom right corner. The fact that the patterns for its characters are arrays of spots whose images are captured by a camera, rather than the row of bars that one-dimensional and stacked two-dimensional codes have, frees it from the Y-dimension scanning angle restraint of those codes and makes it less susceptible to misreads and no-reads. The size of each square data bit is controlled only by the smallest optically resolvable or printable spot. Bar codes may be printed directly on the item or its packaging or on a label that is affixed thereon. Some are scribed on with a laser. In many cases, they are printed on the product or package at the same time other printing or production operations are being performed, making the cost of adding them negligible. Labels may be produced in house or ordered from an outside supplier. A recent application (made compulsory in 2006) of bar codes is on hospital medications to ensure that patients get the correct dose of the correct drug at the correct time, in order to reduce the thousands of deaths among hospitalized patients and nursing home residents attributed annually to drug errors. Matching the bar code on the patient’s wristband with the drugs prescribed can also catch the administration of drugs which might cause an allergic reaction, drugs in conflict with other drugs, and drugs that should be given in liquid rather than pill form and vice versa. In addition to the bar code symbology, an automatic identification system also requires a scanner or a video camera. A scanner is an electrooptical sensor which comprises a light source [typically a light-emitting diode (LED), laser diode, or helium-neon-laser tube], a light detector [typically a photodiode, photo-integrated circuit (photo-IC), phototransistor, or charge-coupled device (CCD)], an analog signal amplifier, and (for digital scanners) an analog-to-digital converter. The system also needs a microprocessor-based decoder, a data communication link to a computer, and one or more output or actuation devices. See Fig. 10.7.14. The scanner may be hand-held or mounted in a fixed position, but there must be relative motion between the scanner and the code: either the entire scanner moves, the scanner’s beam of light oscillates, or the package or item containing the bar code moves. For moving targets, strobe lights are sometimes used to freeze the image while it is being read. The four basic categories of scanners are: hand-held fixed beam, hand-held moving beam, stationary fixed beam, and stationary moving beam. Some scanners cast crosshatched patterns on the object so as to catch the bar code regardless of its position.
Others create holographic laser patterns that wrap a field of light beams around irregularly shaped articles. Some systems require contact between the scanner and the bar code, but most are of the noncontact type. A read-and-beep system emits a sound when a successful reading has been made. A specialized type of scanner, a slot scanner, reads bar codes on badges, cards, envelopes, documents, and the like as they are inserted in the slot and swiped. In operation, the spot or aperture of the scanner beam is directed by optics onto the bar code and reflected back onto a photodetector. The relative motion of the scanner beam across the bar code results in an analog response pattern by the photodetector as a function of the relative reflectivities of the dark bars and light spaces. A sufficient contrast between the bars, spaces, and background color is required. Some scanners see red bars as white. Others see aluminum as black, requiring the spaces against such backgrounds to be made white. The system anticipates an incoming legitimate signal by virtue of first receiving a zero input from the “quiet zone”—a space of a certain



Fig. 10.7.14 Components of a bar code scanner.

width (generally at least 10 times the code’s X dimension) that is clear of all marks and which precedes the start character and follows the stop character. The first pattern of bars and spaces (a character) that the scanner sees is the start/stop code. These not only tell the system that data codes are coming next and then that the data section has ended, but in bidirectional systems (those that can be scanned from front to back or from back to front) they also indicate that direction. In such bidirectional systems, the data is read in and then reversed electronically as necessary before use. The analog signals pass through the scanner’s signal conditioning circuitry, where they are amplified, converted into digital form, tested to see whether they are acceptable, and, if so, passed to the decoder section microprocessor for algorithmic interpretation. The decoder determines which elements are bars and which are spaces, determines the width (number of modules) of each element, determines whether the code has been read frontward or backward and reverses the signal if needed, makes a parity check, verifies other criteria, converts the signal into a computer language character—the most common of which is asynchronous American Standard Code for Information Interchange (ASCII)—and sends it, via a data communication link, to the computer. Virtually all automatic identification systems are computer-based. The bar codes read become input data to the computer, which then updates its stored information and/or triggers an output. Other types of automatic identification systems include radio-frequency identification, smart cards, machine vision, voice recognition, magnetic stripes, and optical character recognition. Radio-frequency identification has many advantages over the other identification, control, and data-collection methods. It is immune to factory noise, heat, cold, and harsh environments. Its tags can be covered with dirt, grease, paint, etc. and still operate. It does not require line of sight from the reader, as do bar codes, machine vision, and OCR. It can be read from a distance, typically 15 ft (4.6 m). Its tags can both receive and send signals. Many types have higher data capacities than one-dimensional bar codes, magnetic stripes, and OCR. Some tags can read/write and process data, much like smart cards. However, RFID also has some disadvantages, compared to other methods. The tags are not human-readable and their cost for a given amount of data capacity is higher than that of some other methods. The RFID codes are currently closed—that is, a user’s code is not readable by suppliers or customers—and there are no uniform data identifiers (specific numeric or alphanumeric codes that precede the data code and that identify the data’s industry, category, or use) like the U.P.C. has. Since 2004, the U.S. military, faced with frequent separations of wounded personnel and their medical records, has recorded medical data

on passive RFID chips sewn into wristbands, embedded in dog tags, and otherwise attached to the person, so that the records and the wounded travel together. The attached RFID chips also serve to identify wounded or killed personnel. The chips, which can hold 2 kbits of information, are also used by the military to monitor and warn individuals going into and out of chemical, biological, or radioactive “hot” areas and also to keep track of the movements, levels, and ages of munitions and supplies. An RFID system consists of a transceiver and a tag, with a host computer and peripheral devices possible. The transceiver has a coil or antenna, a microprocessor, and electronic circuitry, and is called a reader or scanner. The tag can also have an antenna coil, a capacitor, a nonvolatile type of memory, transponder electronic circuitry, and control logic—all encapsulated for protection. The system’s operations are very simple. The reader transmits an RF signal to the tag, which is usually on a product, tote bin, person, or other convenient carrier and can be made small and concealed. This signal tells the tag that a reader is present and wants a response. The activated tag responds by broadcasting a coded signal on a different frequency. The reader receives that signal and processes it. Its output may be as simple as emitting a sound for the unauthorized removal of an item containing the tag (retail theft, library books, and the like), flashing a green light and/or opening a door (employee ID badges), displaying information on a human-readable display, directing a robot or automatic guided vehicle (AGV), sending a signal to a host computer, etc. The signals from the reader to the tags can also provide energy to power the tag’s circuitry and supply the digital clock pulses for the tag’s logic section. The tags may be passive or active. Passive tags use power contained in the signal from the reader. The normal range for these is 18 in (0.46 m) or less. Active tags contain a battery for independent power and usually operate in the 3- to 15-ft (0.9- to 4.6-m) range. Tags programmed when made are said to be factory programmed and are read-only devices. Tags programmed by plugging them into, or placing them very near, a programmer are called field-programmed devices. Tags whose program or data can be changed while they are in their operating locations are called in-use programmed devices. The simplest tags contain only one bit of code. They are typically used in retail stores to detect shoplifting and are called electronic article surveillance (EAS), presence sensing, or Level I tags. The next level up are the tags that, like bar codes, identify an object and serve as an input code to a database in a host computer. These “electronic license plates” are called Level II tags and typically have a capacity of between 8 and 128 bits. The third level devices are called Level III, or transaction/routing tags. They hold up to 512 bits so that the item to which they are attached may be



described as well as identified. The next level tags, Level IV, are called portable databases, and, like matrix bar codes, carry large amounts of information in text form, coded in ASCII. The highest current level tags incorporate microprocessors, making them capable of data processing and decision making, and are the RF equivalent of smart cards. Smart cards, also called chipcards, or memory cards, are the size of credit cards but twice as thick. They are composed of self-contained circuit boards, stacked memory chips, a microprocessor, a battery, and input/output means, all laminated between two sheets of plastic which usually contain identifiers and instructions. They are usually read by inserting them into slots and onto pins of readers. The readers can be free-standing devices or components of production and inventory control systems, computer systems, instruments, machines, products, maintenance records, calibration instruments, store checkout registers, vending machines, telephones, automatic teller machines, road toll booths, doors and gates, hospital and patient records, etc. One smart card can hold the complete data file of a person or thing and that data can be used as read and/or updated. The cards may be described as being in one of four classifications: contactless cards, those which contain a tiny antenna, such that energy as well as data are exchanged via electromagnetic fields rather than direct physical contact and are used mostly for general access, such as transportation; intelligent memory cards, those having an EEPROM (electrically erasable programmable read-only memory), a logic circuit, and security feature and which are used mainly as telephone, shopping, and health cards; microcontroller cards, which have an EEPROM, RAM, ROM, I/O, and a CPU, making them a complete computer system with a security feature and are typically used as network access cards and banking cards; and cryptocards, which contain microcontrollers plus a mathematics coprocessor for cryptographic applications and high-security situations. Cards with up to 64 Mbytes of flash memory are available. They are made by tape-automated bonding of 20 or more blocks of two-layered memory chips onto both sides of a printed-circuit board. Machine vision can be used to:
1. Detect the presence or absence of an entire object or a feature of it, and sort, count, and measure.
2. Find, identify, and determine the orientation of an object so that a robot can go to it and pick it up.
3. Assure that the robot or any other machine performed the task it was supposed to do, and inspect the object.
4. Track the path of a seam so that a robot can weld it.
5. Capture the image of a matrix-type bar code for data input or a scene for robotic navigation (obstacle avoidance and path finding).
In operation, the machine-vision system’s processor digitizes the image that the camera sees into an array of small square cells (called picture elements, or pixels). It classifies each pixel as light or dark or a relative value between those two extremes. The system’s processor then employs methods such as pixel counting, edge detection, and connectivity analysis to get a pattern it can match (a process called windowing and feature analysis) to a library of images in its memory, triggering a specific output for each particular match.
Rather than match the entire image to a complete stored pattern, some systems use only common feature tests, such as total area, number of holes, perimeter length, minimum and maximum radii, and centroid position. The resolution of a machine vision system is its field of view (the area of the image it captures) divided by its number of pixels (e.g., 256 × 256 = 65,536 pixels; 600 × 800 = 480,000 pixels). The system can evaluate each pixel on either a binary or a gray scale. Under the binary method, a threshold of darkness is set such that pixels lighter than it are assigned a value of 1 and those darker than it are given a value of 0 for computer processing. Under the gray scale method, several intensity levels are established, the number depending on the size of the computer’s words. Four bits per pixel permit 16 levels of gray, 6 bits allow 64 classifications, and 8-bit systems can classify each pixel into any of 256 values, but at the cost of more computer memory, processing time, and expense. As the cost of vision systems drops, their use in automatic identification increases. Their value in reading matrix-type bar codes and in robotics can justify their cost.
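A minimal Python sketch of the binary and gray-scale pixel classification just described, using a tiny hypothetical 3 × 3 block of 8-bit intensities; the variable names and sample values are illustrative, not taken from any particular vision system.

```python
# 0 = black, 255 = white; a real 256 x 256 sensor gives 65,536 such pixels.
image = [[40, 200, 130], [90, 15, 250], [128, 127, 60]]

# Binary method: pixels lighter than the darkness threshold become 1, darker 0.
threshold = 128
binary = [[1 if p >= threshold else 0 for p in row] for row in image]

# Gray-scale method: quantize each pixel into 16 levels (4 bits per pixel).
gray16 = [[p // 16 for p in row] for row in image]

print(binary)   # [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
print(gray16)   # [[2, 12, 8], [5, 0, 15], [8, 7, 3]]
```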

Voice recognition systems process patterns received and match them to a computer’s library of templates much like machine vision does. Memory maps of sound pattern templates are created either by the system’s manufacturer or user, or both. A speaker-dependent system recognizes the voice of only one or a few people. It is taught to recognize their words by having the people speak them several times while the system is in the teach mode. The system creates a speech template for each word and stores it in memory. The system’s other mode is the recognition mode, during which it matches new input templates to those in its memory. Speaker-independent systems can recognize almost anyone’s voice, but can handle many fewer words than speaker-dependent systems because they must be able to recognize many template variations for each word. The number of words that each system can recognize is constantly being increased. In operation, a person speaks into a microphone. The sound is amplified and filtered and fed into an analyzer, thence to a digitizer. Then a synchronizer encodes the sound by separating its pattern into equal slices of time and sends its frequency components to a classifier where it is compared to the speech templates stored in memory. If there is a match, the computer produces an output to whatever device is attached to the system. Some systems can respond with machine-generated speech. The combination of voice recognition and machine speech is called speech processing or voice technology. Products that can “hear” and “talk” are called conversant products. Actual systems are more complicated and sophisticated than the foregoing simplified description. The templates, for example, instead of having a series of single data points define them, have numerical fields based on statistical probabilities. This gives the system more tolerance for template-to-template variations. Further, and to speed response, neural networks, fuzzy logic, and associative memory techniques are used. Some software uses expert systems that apply grammatical and context rules to the inputs to anticipate that a word is a verb, noun, number, etc. so as to limit memory search and thereby save time. Some systems are adaptive, in that they constantly retrain themselves by modifying the templates in memory to conform to permanent differences in the speakers’ templates. Other systems permit the insertion by each user of an individual module which loads the memory with that person’s particular speech templates. This increases the capacity, speed, and accuracy of the system and adds a new dimension: security against use by an unauthorized person. Like fingerprints, a person’s voice prints are unique, making voice recognition a security gate to entry to a computer, facility, or other entity. Some systems can recognize thousands of words with almost perfect accuracy. Some new products, such as the VCP 200 chip, of Voice Control Products, Inc., New York City, are priced low enough, by eliminating the digital signal processor (DSP) and limiting them to the recognition of only 8 or 10 words (commands), to permit their economic inclusion in products such as toys, cameras, and appliances. A major use of voice recognition equipment is in industry where it permits assemblers, inspectors, testers, sorters, and packers to work with both hands while simultaneously inputting data into a computer directly from the source. They are particularly useful where keyboards are not suitable data-entry devices.
They can eliminate paperwork, duplication, and copying errors. They can be made to cause the printing of a bar code, shipping label, or the like. They can also be used to instruct robots—by surgeons describing and directing an operation, for example—and in many other applications. Magnetic stripes encode data on magnetic material in the form of a strip or stripe on a card, label, or the item itself. An advantage to the method is that the encoded data can be changed as required. Optical character recognition uses a video camera to read numbers, letters, words, signs, and symbols on packages, labels, or the item itself. It is simpler and less expensive than machine vision but more limited. The information collected from the foregoing can be used at the location where it is obtained, observed by a human, stored in the sensing device for later input to a computerized system, or transmitted in real time to a central location via either fixed wires or by a wireless means. Users should be aware that wireless signals travel through the air,


making them more vulnerable than wired connections to reception by unauthorized persons, unless security features are included. Devices equipped with Bluetooth components for shorter-range transmissions or WiFi (wireless fidelity) technology for longer ranges can communicate wirelessly with other devices so equipped. WiFi requires the use of locations called access points or “hot spots” to connect to the Internet, networks of portable computers, or other WLANs (wireless local area networks). The industry’s trade group is the Automatic Identification Manufacturers, Inc. (AIM), located in Pittsburgh.


removed. Figure 10.7.19 shows mobile racks. Here the racks are stored next to one another. When material is required, the racks are separated and become accessible. Figure 10.7.20 shows racks where the picker passes between the racks. The loads are supported by arms attached to

STORAGE EQUIPMENT

The functions of storage and handling devices are to permit:
The greatest use of the space available, stacking high, using the “cube” of the room, rather than just floor area
Multiple layers of stacked items, regardless of their sizes, shapes, and fragility
“Unit load” handling (the movement of many items of material each time one container is moved)
Protection and control of the material
Shelves and Racks  Multilevel, compartmentalized storage is possible by using shelves, racks, and related equipment. These can be either prefabricated standard size and shape designs or modular units or components that can be assembled to suit the needs and space available. Variations to conventional equipment include units that slide on floor rails (for denser storage until needed), units mounted on carousels to give access to only the material wanted, and units with inclined shelves in which the material rolls or slides forward to where it is needed.

Fig. 10.7.17 Cantilever racks. (Modern Materials Handling, Feb. 22, 1980, p. 85.)

uprights. Figure 10.7.21 shows block storage requiring no rack. Density is very high. The product must be self-supporting or in stacking frames. The product should be stored only a short time unless it has a very long shelf life.

Fig. 10.7.18 Flow-through racks. (Modern Materials Handling, Feb. 22, 1980, p. 85.)

Fig. 10.7.15 Pallet racks. (Modern Materials Handling, Feb. 22, 1980, p. 85.)


Bins, Boxes, Baskets, and Totes Small, bulk, odd-shaped, or fragile material is often placed in containers to facilitate unit handling. These bins, boxes, baskets, and tote pans are available in many sizes, strength grades, and configurations. They may be in one piece or with hinged flaps for ease of loading and unloading. They may sit flat on the

Figure 10.7.15 shows the top view of a typical arrangement of pallet racks providing access to every load in storage. Storage density is low because of the large amount of aisle space. Figure 10.7.16 shows multiple-depth pallet racks with proportionally smaller aisle requirement. Figure 10.7.17 shows cantilever racks for holding long items. Figure 10.7.18 shows flow-through racks. The racks are loaded on one side and emptied from the other. The lanes are pitched so that the loads advance as they are

Fig. 10.7.19 Mobile racks and shelving. (Modern Materials Handling, Feb. 22, 1980, p. 85.)

Fig. 10.7.16 Multiple-depth pallet racks. (Modern Materials Handling, Feb. 22, 1980, p. 85.)

Fig. 10.7.20 Drive-in drive-through racks. (Modern Materials Handling, Feb. 22, 1980, p. 85.)

floor or be raised to permit a forklift or pallet truck to get under them. Most are designed to be stackable in a stable manner, such that they nest or interlock with those stacked above and below them. Many are sized to fit (in combination) securely and with little wasted space in trucks, ships, between building columns, etc.



Skids typically stand higher off the ground than do pallets and stand on legs or stringers. They can have metal legs and framing for extra strength and life (Fig. 10.7.27), or be all steel with boxes, stacking alignment tabs, hoisting eyelets, etc. (Fig. 10.7.28).

Fig. 10.7.21 Block storage. (Modern Materials Handling, Feb. 22, 1980, p. 85.)

Fig. 10.7.22 Single-faced pallet.

Pallets and Skids  Pallets are flat, horizontal structures, usually made of wood, used as platforms on which material is placed so that it is unitized, off the ground, stackable, and ready to be picked up and moved by a forklift truck or the like. They can be single-faced (Fig. 10.7.22); double-faced, nonreversible (Fig. 10.7.23); double-faced, reversible (Fig. 10.7.24); or solid (slip pallets). Pallets with overhanging stringers are called wing pallets, single or double (Figs. 10.7.25 and 10.7.26).

Fig. 10.7.27 Wood skid with metal legs.

Fig. 10.7.28 Steel skid box.

AUTOMATED STORAGE/RETRIEVAL SYSTEMS

An automated storage/retrieval system (AS/RS) is a high-rise, high-density material handling system for storing, transferring, and controlling inventory in raw materials, work-in-process, and finished goods stages. It comprises:
1. Structure
2. Aisle stacker cranes or storage-retrieval machines and their associated transfer devices
3. Controls

Fig. 10.7.23 Double-faced pallet.

Fig. 10.7.24 Double-faced reversible pallet.

Fig. 10.7.29 Schematic diagram of an automated storage and retrieval system (AS/RS) structure. (Harnischfeger Corp.)

Fig. 10.7.25 Single-wing pallet.

Fig. 10.7.26 Double-wing pallet.

The AS/RS structure (see Fig. 10.7.29) is a network of steel members assembled to form an array of storage spaces, arranged in bays, rows, and aisles. These structures typically can be 60 to 100 ft (18 to 30 m) high, 40 to 60 ft (12 to 18 m) wide and 300 to 400 ft (91 to 121 m) long. Usually, they are erected before their protective building, which might be merely a “skin” and roof wrapped around them. The AS/RS may contain only one aisle stacker crane which is transferred between aisles or it may have an individual crane for each aisle. The function of the stacker crane storage-retrieval machine is to move down the proper aisle to the proper bay, then elevate to the proper storage space and then, via its shuttle table, move laterally into the space to deposit or fetch a load of material.



AS/RS controls are usually computer-based. Loads to be stored need not be given a storage address. The computer can find a suitable location, direct the S/R machine to it, and remember it for retrieving the load from storage.
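As a hedged sketch of that control idea (all class and method names below are hypothetical, not drawn from any actual AS/RS product), the computer's bookkeeping can be as simple as a list of free storage spaces plus an index that remembers where each load was put.

```python
# Minimal sketch: assign an open storage address to an incoming load and
# remember it so the S/R machine can be sent back to retrieve the load later.
class AsRsController:
    def __init__(self, aisles, bays, tiers):
        self.free = [(a, b, t) for a in range(aisles)
                               for b in range(bays)
                               for t in range(tiers)]
        self.index = {}                      # load ID -> (aisle, bay, tier)

    def store(self, load_id):
        address = self.free.pop(0)           # any suitable open space
        self.index[load_id] = address
        return address                       # destination for the S/R machine

    def retrieve(self, load_id):
        address = self.index.pop(load_id)    # recall where the load was put
        self.free.append(address)
        return address

crane = AsRsController(aisles=2, bays=10, tiers=6)
slot = crane.store("PALLET-001")
assert crane.retrieve("PALLET-001") == slot
```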

items should be placed in the most readily accessible positions. Gravity racks, with their shelves sloped forward so that as one item is picked, those behind it slide or roll forward into position for ease of the next picking, are used when possible.

ORDER PICKING

LOADING DOCK DESIGN

Material is often processed or manufactured in large lot sizes for more economical production. It is then stored until needed. If the entire batch is not needed out of inventory at one time, order picking is required. Usually, to fill a complete order, a few items or cartons of several different materials must be located, picked, packed in one container, address labeled, loaded, and delivered intact and on time. In picking material, a person can go to the material, the material can be brought to the person doing the packing, or the process can be automated, such that it is released and routed to a central point without a human order picker. When the order picking is to be done by a human, it is important to have the material stored so that it can be located and picked rapidly, safely, accurately, and with the least effort. The most frequently picked

In addition to major systems and equipment such as automatic guided vehicles and automated storage/retrieval systems, a material handling capability also requires many auxiliary or support devices, such as the various types of forklift truck attachments, slings, hooks, shipping cartons, and packing equipment and supplies. Dock boards to bridge the gap between the building and railroad cars or trucks may be one-piece portable plates of steel or mechanically or hydraulically operated leveling mechanisms. Such devices are required because the trucks that deliver and take away material come in a wide variety of sizes. The shipping-receiving area of the building must be designed and equipped to handle these truck size variations. The docks may be flush, recessed, open, closed, side loading, sawtooth staggered, straight-in, or turn-around.

Section 11
Transportation
BY
CHARLES A. AMANN Principal Engineer, KAB Engineering
V. TERREY HAWTHORNE late Senior Engineer, LTK Engineering Services
KEITH L. HAWTHORNE Vice President–Technology, Transportation Technology Center, Inc.
MICHAEL C. TRACY Rear Admiral, U.S. Navy
MICHAEL W. M. JENKINS Professor, Aerospace Design, Georgia Institute of Technology
SANFORD FLEETER McAllister Distinguished Professor, School of Mechanical Engineering, Purdue University
WILLIAM C. SCHNEIDER Retired Assistant Director Engineering/Senior Engineer, NASA Johnson Space Center and Visiting Professor, Texas A&M University
G. DAVID BOUNDS Senior Engineer, Duke Energy Corp.

11.1 AUTOMOTIVE ENGINEERING by Charles A. Amann Automotive Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-3 Tractive Force . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-3 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-4 Fuel Consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-4 Transmissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-6 Driveline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-9 Suspensions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-11 Steering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-12 Braking Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-13 Wheels and Tires . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15 Heating, Ventilating, and Air Conditioning (HVAC). . . . . . . . . . . . . . . . . . 11-16 Automobile construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-17 11.2 RAILWAY ENGINEERING by V. Terrey Hawthorne and Keith L. Hawthorne (in collaboration with E. Thomas Harley, Charles M. Smith, Robert B. Watson, and C. A. Woodbury) Diesel-Electric Locomotives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-18 Electric Locomotives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-24 Freight Cars. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-25 Passenger Equipment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-32 Track . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-36 Vehicle-Track Interaction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-38 11.3 MARINE ENGINEERING by Michael C. Tracy The Marine Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-40 Marine Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-40 Seaworthiness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-40 Engineering Constraints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-46 Propulsion Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-47 Main Propulsion Plants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-48 Propulsors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-52 Propulsion Transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-55 High-Performance Ship Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-56 Cargo Ships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-58

11.4 AERONAUTICS by M. W. M. Jenkins Definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-59 Standard Atmosphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-59 Upper Atmosphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-60 Subsonic Aerodynamic Forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-60 Airfoils . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-61 Control, Stability, and Flying Qualities . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-70 Helicopters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-71 Ground-Effect Machines (GEM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-72 Supersonic and Hypersonic Aerodynamics. . . . . . . . . . . . . . . . . . . . . . . . . 11-72 Linearized Small-Disturbance Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-78

11.5 JET PROPULSION AND AIRCRAFT PROPELLERS by Sanford Fleeter Essential Features of Airbreathing or Thermal-Jet Engines. . . . . . . . . . . . . 11-84 Essential Features of Rocket Engines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-87 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-89 Thrust Equations for Jet-Propulsion Engines . . . . . . . . . . . . . . . . . . . . . . . 11-91 Power and Efficiency Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-91 Performance Characteristics of Airbreathing Jet Engines . . . . . . . . . . . . . . 11-92 Criteria of Rocket-Motor Performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-97 Aircraft Propellers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-98

11.6 ASTRONAUTICS by William C. Schneider Space Flight (BY AARON COHEN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-104 Shuttle Thermal Protection System Tiles . . . . . . . . . . . . . . . . . . . . . . . . . 11-104 Dynamic Environments (BY MICHAEL B. DUKE) . . . . . . . . . . . . . . . . . . . . 11-108 Space-Vehicle Trajectories, Flight Mechanics, and Performance (BY O. ELNAN, W. R. PERRY, J. W. RUSSELL, A. G. KROMIS, AND D. W. FELLENZ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-109 Orbital Mechanics (BY O. ELNAN AND W. R. PERRY) . . . . . . . . . . . . . . . . 11-110 Lunar- and Interplanetary-Flight Mechanics (BY J. W. RUSSELL). . . . . . . . 11-111 Atmospheric Entry (BY D. W. FELLENZ) . . . . . . . . . . . . . . . . . . . . . . . . . . 11-112 11-1


Attitude Dynamics, Stabilization, and Control of Spacecraft (BY M. R. M. CRESPO DA SILVA). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-114 Metallic Materials for Aerospace Applications (BY STEPHEN D. ANTOLOVICH; REVISED BY ROBERT L. JOHNSTON) . . . . . 11-116 Structural Composites (BY TERRY S. CREASY) . . . . . . . . . . . . . . . . . . . . . 11-117 Materials for Use in High-Pressure Oxygen Systems (BY ROBERT L. JOHNSTON) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-118 Meteoroid/Orbital Debris Shielding (BY ERIC L. CHRISTIANSEN) . . . . . . . . 11-119 Space-Vehicle Structures (BY THOMAS L. MOSER AND ORVIS E. PIGG) . . . 11-125 Inflatable Space Structures for Human Habitation (BY WILLIAM C. SCHNEIDER) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-126 TransHab (BY HORACIO DE LA FUENTE). . . . . . . . . . . . . . . . . . . . . . . . . 11-127 Portable Hyperbaric Chamber (BY CHRISTOPHER P. HANSEN AND JAMES P. LOCKE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-129 Vibration of Structures (BY LAWRENCE H. SOBEL). . . . . . . . . . . . . . . . . . . 11-130 Space Propulsion (BY HENRY O. POHL) . . . . . . . . . . . . . . . . . . . . . . . . . . 11-131 Spacecraft Life Support and Thermal Management (BY WALTER W. GUY). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-133

Docking of Two Free-Flying Spacecraft (BY SIAMAK GHOFRANIAN AND MATTHEW S. SCHMIDT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-138 11.7 PIPELINE TRANSMISSION by G. David Bounds Natural Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-139 Crude Oil and Oil Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-145 Solids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-147 11.8 CONTAINERIZATION (Staff Contribution) Container Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-149 Road Weight Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-150 Container Fleets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-150 Container Terminals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-150 Other Uses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-150

11.1 AUTOMOTIVE ENGINEERING by Charles A. Amann REFERENCES: Sovran and Blazer, “A Contribution to Understanding Automotive Fuel Economy and Its Limits,” SAE Paper 2003–01–2070. Toboldt, Johnson, and Gauthier, “Automotive Encyclopedia,” Goodheart-Willcox Co., Inc. “Bosch Automotive Handbook,” Society of Automotive Engineers. Gillespie, “Fundamentals of Vehicle Dynamics,” Society of Automotive Engineers. Hellman and Heavenrich, “Light-Duty Automotive Technology Trends: 1975 Through 2003,” Environmental Protection Agency. Davis et al., “Transportation Energy Book,” U.S. Department of Energy. Various publications of the Society of Automotive Engineers, Inc. (SAE International). AUTOMOTIVE VEHICLES

The dominant mode of personal transportation in the United States is the light-duty automotive vehicle. This category is composed not only of passenger cars, but of light-duty trucks as well. Light-duty trucks include passenger vans, SUVs (sport utility vehicles), and pickup trucks. Among light-duty trucks, the distinction between personal and commercial vehicles is blurred because they may be used for either purpose. Beyond the light-duty truck, medium- and heavy-duty trucks are normally engaged solely in commercial activities. In 2003, there were 135.7 million passenger cars registered in the United States, and 87.0 million light trucks. Purchasing preferences of the citizenry have caused a gradual shift from passenger cars to light trucks in recent decades. In 1970, light trucks comprised a 14.8 percent share of light-vehicle sales. By 2003, that share had swollen to nearly half of new vehicle sales. Among heavier vehicles, there were 7.9 million heavy trucks (more than two axles or more than four tires) registered in 2003, of which 26 percent were trailer-towing truck tractors. In addition, there were 0.7 million buses in use, 90 percent of which were employed in transporting school students. U.S. highway vehicles traveled a total of 2.89 trillion miles in 2003. Of this mileage, 57 percent was accumulated by passenger cars, 34 percent by light trucks, and 8 percent by heavy trucks and buses. In 2000, 34 percent of U.S. households operated one vehicle, 39 percent had two, and 18 percent had three or more, leaving only 9 percent of households without a vehicle. The 2001 National Household Travel Survey found that the most common use of privately owned vehicles was in trips to or from work, which accounted for 27 percent of vehicle miles. Other major uses were for shopping (15 percent), other family or personal business (19 percent), visiting friends and relatives (9 percent), and other social or recreational outlets (13 percent). According to a 2002 survey by the U.S. Department of Transportation, slightly over 11 billion tons of freight were transported in the United States. Of this, 67 percent was carried by truck, 16 percent by rail, 4 percent over water, and 6 percent by pipeline. Much of the balance was transported multimodally, with truck transport often being one of those modes. Trucks are often grouped by gross vehicle weight (GVW) rating into light-duty (0 to 14,000 lb), medium-duty (14,001 to 33,000 lb), and heavy-duty (over 33,000 lb) categories. They are further subdivided into classes 1 through 8, as outlined in Table 11.1.1. Examples of Classes 6 through 8 are illustrated by the silhouettes of Fig. 11.1.1.

Table 11.1.1 Gross Vehicle Weight Classification

GVW group              Class    Weight, kg (max)
6000 lb or less          1        2722
6001–10,000 lb           2        4536
10,001–14,000 lb         3        6350
14,001–16,000 lb         4        7258
16,001–19,500 lb         5        8845
19,501–26,000 lb         6        11,794
26,001–33,000 lb         7        14,469
33,001 lb and over       8        14,970

Fig. 11.1.1 Medium and heavy-duty truck classification by gross weight and vehicle usage. (Reprinted from SAE, SP-868, © 1991, Society of Automotive Engineers, Inc.)

CR is normally considered constant with speed, although it does rise slowly at very high speeds. In the days of bias-ply tires, it had a typical value of 0.015, but with modern radial tires, coefficients as low as 0.006 have been reported. The rolling resistance of cold tires is greater than these warmed-up values, however. On a 70°F day, it may take 20 minutes of driving for a tire to reach its equilibrium running temperature. Under-inflated tires also have higher rolling resistance. Figure 11.1.2

TRACTIVE FORCE

The tractive force (FTR) required to propel an automobile, measured at the tire-road interface, is composed of four components—the rolling resistance (FR), the aerodynamic drag (FD), the inertia force (FI), and the grade requirement (FG). That is, FTR = FR + FD + FI + FG. The force of rolling resistance is CRW, where CR is the rolling resistance coefficient of the tires and W is vehicle weight. For a given tire,

Fig. 11.1.2 Dependence of rolling resistance on inflation pressure for FR78-14 tire, 1280-lb load and 60-mi/h speed. (“Tire Rolling Losses,” SAE Proceedings P-74.)


illustrates how the rolling resistance of a particular tire falls off with increasing inflation pressure. Aerodynamic drag force is given by CD A ρ V²/2g, where CD is the aerodynamic drag coefficient, A is the projected vehicle frontal area, ρ is the ambient air density, V is the vehicle velocity, and g is the gravitational constant. The frontal area of most cars falls within the range of 20 to 30 ft² (1.9 to 2.8 m²). The frontal areas of light trucks generally range from 27 to 38 ft² (2.5 to 3.5 m²). For heavy-duty trucks and truck-trailer combinations, frontal area can range from 50 to 100 ft² (4.6 to 9.3 m²). The drag coefficient of a car is highly dependent on its shape, as illustrated in Fig. 11.1.3. The power required to overcome aerodynamic drag is the product of the aerodynamic component of tractive force and vehicle velocity. That power is listed at various speeds in Figure 11.1.3 for a car with a frontal area of 2 m². For most contemporary cars, CD falls between 0.30 and 0.35, although values around 0.20 have been reported for certain advanced concepts. For light trucks, CD generally falls between 0.40 and 0.50. Drag coefficients for heavy-duty trucks and truck-trailers range from 0.6 to over 1.0, with 0.7 being a typical value for a Class 8 tractor-trailer.

Fig. 11.1.3 Drag coefficient and aerodynamic power requirements for various body shapes. (Bosch, “Automotive Handbook,” SAE.)

The tractive force at the tire patch that is associated with changes in vehicle speed is given by FI = (W/g)a, where a is vehicle acceleration. If the vehicle is decelerating, then a (hence FI) is negative. If the vehicle is operated on a grade, an additional tractive force must be supplied. Within the limits of grades normally encountered, that force is FG = W tan θ, where θ is the angle of the grade, measured from the horizontal. For constant-speed driving over a horizontal roadbed, FI and FG are both zero, and the tractive force is simply FTR = FR + FD. This is termed the road-load force. In Fig. 11.1.4, curve T shows this road-load force requirement for a car with a test weight of 4000 lb (1814 kg). Curves A and R represent the aerodynamic and rolling resistance components of the total resistance, respectively, on a level road with no wind. The curves paralleling curve T show its displacement for operation on 5, 10, and 15 percent grades. Curve E shows the tractive force delivered from the engine at full throttle. The differences between the E curve and the T curves represent the tractive force available for acceleration. The intersections of the E curve with the T curves indicate maximum speed capabilities of 98, 87, and 70 mi/h on grades of 0, 5, and 10 percent. Operation on a 15 percent grade is seen to exceed the capability of the engine in this transmission gear ratio and therefore requires a transmission downshift. Tractive forces on the ordinate of Fig. 11.1.4 can be converted to power by multiplying by the corresponding vehicle velocities on the abscissa.
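For a quick numerical feel for these relations, the following minimal sketch evaluates the steady-speed (road-load) tractive force and power. It works in SI units with air mass density, so the 1/g factor of the weight-density form above drops out; the rolling-resistance coefficient, drag coefficient, frontal area, and 1500-kg mass are illustrative assumptions, not values taken from this section.

```python
def road_load_force(speed_mps, mass_kg, c_r=0.009, c_d=0.32,
                    frontal_area_m2=2.2, air_density_kg_m3=1.2, grade=0.0):
    """Steady-speed tractive force, N: rolling + aerodynamic + grade terms.
    The inertia term (W/g)*a is zero at constant speed; grade is tan(theta)."""
    g = 9.81
    f_rolling = c_r * mass_kg * g                                    # F_R = C_R * W
    f_aero = 0.5 * air_density_kg_m3 * c_d * frontal_area_m2 * speed_mps ** 2
    f_grade = mass_kg * g * grade                                    # F_G = W * tan(theta)
    return f_rolling + f_aero + f_grade

v = 60 * 0.44704                        # 60 mi/h expressed in m/s
f = road_load_force(v, 1500.0)          # assumed 1500-kg car on a level road
print(f"road load {f:.0f} N, power {f * v / 1000:.1f} kW")
```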

Fig. 11.1.4 Traction available and traction required for a typical large automobile.

PERFORMANCE

In the United States, a commonly used performance metric is acceleration time from a standstill to 60 mi/h. A zero to 60 mi/h acceleration at wide-open throttle is seldom executed. However, a vehicle with inferior acceleration to 60 mi/h is likely to do poorly in more important performance characteristics, such as the ability to accelerate uphill at highway speeds, to carry a heavy load or pull a trailer, to pass a slow-moving truck, or to merge into freeway traffic. The most important determinant of zero-to-60 mi/h acceleration time is vehicle weight-to-power ratio. To calculate this acceleration time rigorously, the appropriate power is the power available from the engine at each instant, diminished for the inefficiencies of the transmission and driveline. Furthermore, the actual weight of the vehicle is replaced by its effective weight (Weff), which includes the rotational inertia of the wheels, driveline, transmission, and engine. In the highest (drive) gear, Weff may exceed W by about 10 percent. In lower gears (higher input/output speed ratios), the engine and parts of the transmission rotate at higher speeds for a given vehicle velocity. Because rotational energy varies as the square of angular velocity, operation in lower gears increases the effective mass of the vehicle more significantly. The maximum acceleration capability calculated in this manner may be limited by loss of traction between the driving wheels and the road surface. This is dependent upon the coefficient of friction between tire and road. For a new tire, this coefficient can range from 0.85 on dry concrete to 0.5 if water is puddled on the road. Under the same conditions, the coefficient for a moderately worn tire can range from 1.0 down to 0.25. Using special rubber compounds, coefficients as high as 1.8 have been attained with special racing tires. To avoid all these variables when estimating the acceleration performance of a typical light-duty vehicle under normal driving conditions on a level road, various empirical correlations have been developed by fitting test data from large fleets of production cars. Such correlations for the time from zero to 60 mi/h (t60) have generally taken the form t60 = C(P/W)^n. In one simple correlation, t60 is approximated in seconds when P is the rated engine horsepower, W is the test weight in pounds, C = 0.07, and n = 1.0. According to the U.S. Environmental Protection Agency, the fleet-average t60 for new passenger cars has decreased rather consistently from 14.2 s in 1975 to 10.3 s in 2001. Over that same time span, it has decreased from 13.6 to 10.6 s for light-duty trucks. FUEL CONSUMPTION

Automotive fuels are refined from petroleum. In 1950, less than 10 percent of U.S. petroleum consumed was imported, but that fraction has grown to 52.8 percent in 2002. Much of this imported petroleum comes from politically unstable parts of the world. Therefore, it is in the national interest to reduce automotive fuel consumption, or conversely, to increase automotive fuel economy. In the United States, passenger-car fuel-economy standards were promulgated by Congress in 1975 and took effect in 1978. Standards for light-duty trucks took effect in 1982. The fuel economies of new


vehicles are monitored by the U.S. Environmental Protection Agency (EPA). Each vehicle model is evaluated by running that model over prescribed transient driving schedules of velocity versus time on a chassis dynamometer. Those driving schedules are plotted in Figs. 11.1.5 and 11.1.6. The urban driving schedule (Fig. 11.1.5) simulates 7.5 mi of driving at an average speed of 19.5 mi/h. It is the same schedule used in measuring emissions compliance. It requires a 12-h soak of the vehicle at room temperature, with driving initiated 20 s after the cold start. The 18 start-stop cycles of the schedule are then followed by engine shutdown and a 10-min hot soak, after which the vehicle is restarted and the first five cycles repeated. The highway driving schedule (Fig. 11.1.6) is free of intermediate stops, covering 10.2 mi at an average speed of 48.2 mi/h. The combined fuel economy of each vehicle model is calculated by assuming that 55 percent of the distance driven is accumulated in the urban setting and 45 percent on the highway. Consequently, the combined value is sometimes known as the 55/45 fuel economy. It is derived from MPG55/45 = 1/[(0.55/MPGU) + (0.45/MPGH)], where MPG represents fuel economy in mi/gal, and subscripts U and H signify the urban and highway schedules, respectively. Each manufacturer’s CAFE (corporate average fuel economy) is the sales-weighted average of the 55/45 fuel economies for its vehicle sales

in a given year. CAFE standards are set by the U.S. Congress. From 1990 to 2004, the CAFE standard for passenger cars was 27.5 mi/gal. For light-duty trucks, it was 20.7 mi/gal from 1996 to 2004. Manufacturers failing to meet the CAFE standard in any given year are fined for their shortfall. The 55/45 fuel economy of a vehicle depends on its tractive-force requirements, the power consumption of its accessories, the losses in its transmission and driveline, the vehicle-to-engine speed ratio, and the fuel-consumption characteristics of the engine itself. For a given level of technology, of all these variables, the most significant factor is the EPA test weight. This is the curb weight of the vehicle plus 300 lb. In a given model year, test weight correlates with vehicle size, which is loosely related for passenger cars to interior volume. The various EPA classifications of vehicles are listed in Table 11.1.2, along with their fleet-average test weights and fuel economies, for the 2003 model year. After the CAFE regulation had been put in place, the driving public complained of its inability to achieve the fuel economy on the road that had been measured in the laboratory for regulatory purposes. Over-the-road fuel consumption is affected by ambient temperature, wind, hills, and driver behavior. Jack-rabbit starts, prolonged engine idling, short trips, excessive highway speed, carrying extra loads, not anticipating stops at traffic lights and stop signs, tire underinflation and inadequate

Fig. 11.1.5 Urban driving schedule (vehicle speed, mi/h, versus time, s).

Fig. 11.1.6 Highway driving schedule (vehicle speed, mi/h, versus time, s).
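The 55/45 combination described above weights the fuel consumed, not the mileage, which is why the urban and highway figures are combined harmonically. A minimal sketch follows; the 22- and 30-mi/gal inputs are assumed values for illustration.

```python
def combined_55_45_mpg(mpg_urban, mpg_highway):
    """EPA combined fuel economy: 55/45 harmonic weighting of urban and highway mi/gal."""
    return 1.0 / (0.55 / mpg_urban + 0.45 / mpg_highway)

# Assumed laboratory results of 22 mi/gal urban and 30 mi/gal highway:
print(round(combined_55_45_mpg(22.0, 30.0), 1))   # -> 25.0 mi/gal
```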

Table 11.1.2 Characteristics of U.S. New-Car Fleet (2003 model year)

Passenger cars

Class             Sales fraction   Volume, ft³   Test weight, lb   Adj. 55/45 mpg
Two-seater            0.010            —              3444             21.5
Minicompact           0.006           81.5            3527             22.7
Subcompact            0.033           95.5            3379             23.5
Compact               0.198          103.9            3088             27.6
Midsize               0.156          114.4            3582             23.8
Large                 0.081          125.1            3857             22.1
Station wagons
  Small               0.025          116.0            3296             25.0
  Midsize             0.015          133.2            3687             28.2
  Large               0.001          170.2            4500             19.9

Light trucks (gross vehicle weight rating < 8500 lb)

Class             Sales fraction   Wheel base, in   Test weight, lb   Adj. 55/45 mpg
Vans
  Small               0.000           < 105              —                —
  Midsize             0.073          105–115            4345             20.3
  Large               0.008           > 115             5527             15.4
SUVs
  Small               0.017           < 109             3508             21.6
  Midsize             0.128          109–124            4140             19.0
  Large               0.089           > 124             5383             15.8
Pickups
  Small               0.013           < 100             3750             20.1
  Midsize             0.028          100–110            3966             18.7
  Large               0.119           > 110             4978             16.1

SOURCE: “Light-Duty Automotive Technology and Fuel Economy Trends, 1975 through 2003,” U.S. EPA, April 2003.

maintenance all contribute to higher fuel consumption. To compensate for the difference between the EPA 55/45 fuel economy and the over-the-road fuel economy achieved by the average driver, EPA tested a fleet of cars in typical driving. It determined that on average, the public should expect to achieve 90 percent of MPGU in city driving and 78 percent of MPGH on the highway. Applying these factors to the urban and highway economies measured in the laboratory results in adjusted fuel economy for each of the two driving schedules. These can be combined to derive an adjusted 55/45 fuel economy, as listed for 2003 in Table 11.1.2. Typically, the adjusted 55/45 fuel economy is about 15 percent less than the measured laboratory fuel economy on which CAFE regulation is based. The historical trend in adjusted 55/45 fuel economy for new cars, light-duty trucks, and the combined fleet of both is represented for each year in Fig. 11.1.7. The decline in fuel economy for the combined fleet since 1988 reflects the influence of an increasing consumer preference for less fuel-efficient vans, SUVs, and pickups instead of passenger cars. TRANSMISSIONS

The torque developed by the engine is fed into a transmission. One function of the transmission is to control the ratio of engine speed to vehicle velocity, enabling improved fuel economy and/or performance. This may be done either with a manual transmission, which requires the driver to select that ratio with a gear-shift lever, or with an automatic transmission, in which the transmission controls select that ratio unless overridden by the driver. Another function of the transmission is to provide a means for operating the vehicle in reverse. Because an internal-combustion engine cannot sustain operation at zero rotational speed, a means must be provided that allows the engine to run at its idle speed while the driving wheels of the vehicle are stationary. With the manual transmission, this is accomplished with a dry friction clutch (Fig. 11.1.8). During normal driving, the pressure plate is held tightly against the engine flywheel by compression springs, forcing the flywheel and plate to rotate as one. At idle, depression of the clutch pedal by the driver counters the force of the springs, freeing the engine to idle while

the pressure plate and driving wheels remain stationary. The transmission gearbox on the output side of the clutch changes its output/input speed ratio, typically in from three to six discrete gear steps. As the driver shifts the transmission from one gear ratio to the next, he disengages the clutch and then slips it back into engagement as the shift is completed. With an automatic transmission, such a dry clutch is unnecessary at idle because a fluid element between the engine and the gearbox accommodates the required speed disparity between the engine and the driving wheels. However, the wet clutch, cooled by transmission fluid, is used in the gearbox to ease the transition from one gear set to another during shifting. Typically, the hydraulically operated multiple-disk clutch is used (Fig. 11.1.9). The band clutch is also used in automatic transmissions. The fluid element used between the engine and gearbox in early automatic transmissions was the fluid coupling (Fig. 11.1.10). It assumes the shape of a split torus, filled with transmission fluid. Each semitorus is lined with radial blades. As the engine rotates the driving member, fluid is forced outward by centrifugal force. The angular momentum of this swirling flow leaving the periphery of the driving member is then removed by the slower rotating blades of the driven member as the fluid flows inward to reenter the driving member at its inner diameter. This transmits the torque on the driving shaft to the driven shaft with an input/output torque ratio of unity, independent of input/output speed ratio. That gives the fluid coupling the torque delivery characteristic of a slipping clutch. Transmitted torque being proportional to the square of input speed, very little output torque is developed when the engine is idling. This minimizes vehicle creep at stoplights. Because torque is transmitted from the fastest to the slowest member in either direction, the engine may be used as a brake during vehicle deceleration. This characteristic also permits starting the engine by pushing the car. In modern automatic transmissions, the usual fluid element is the three-component torque converter, equipped with a one-way clutch (Fig. 11.1.11a). As with the fluid coupling, transmission fluid is centrifuged outward by the engine-driven pump and returned inward through a turbine connected to the output shaft delivering torque to the gearbox. However, a stator is interposed between the turbine exit and the pump inlet. The shaft supporting the stator is equipped with a sprag clutch.


Fig. 11.1.7 Historical trends in average adjusted 55/45 fuel economy, mi/gal, versus model year for new cars, light-duty trucks, and the combined fleet. (U.S. Environmental Protection Agency, April 2003.)

Fig. 11.1.8 Single-plate dry-disk friction clutch.

The blading in the torque converter is nonradial, being oriented to provide torque multiplication. When the output shaft is stalled, the sprag locks the stator in place, multiplying the output torque to between 2.0 and 2.7 times the input torque (Fig. 11.1.11b). As the vehicle accelerates and the output/input speed ratio rises, the torque ratio falls toward 1.0. Beyond that speed ratio, known as the coupling point, the direction of the force imposed on the stator by the fluid reverses, the sprag allows the stator to freewheel, and the torque converter assumes the characteristic of a fluid coupling.

Fig. 11.1.9 Schematic of two hydraulically operated multiple-disk clutches in an automatic transmission. (Ford Motor Co.)

It is clear from Fig. 11.1.11b that multiplication of torque by the torque converter comes at the expense of efficiency. Therefore, most modern torque-converter transmissions incorporate a hydraulically operated lockup clutch capable of eliminating the speed differential between pump and turbine above a preset cruising speed, say 40 mi/h in high gear. In the lower gears, lockup is less likely to be used. When a significant increase in engine power is requested, the clutch is also disengaged to capitalize on the torque multiplication of the torque


Fig. 11.1.10 Fluid coupling.

Fig. 11.1.11 Torque converter coupling: (a) section; (b) characteristics.

converter. During its engagement, the efficiency-enhancing elimination of torque-converter slip comes with a loss of the torsional damping quality of the fluid connection. Sometimes the loss of damping is countered by allowing modest slippage in the lockup clutch. A cross section through the gearbox of a three-speed manual transmission is shown in Fig. 11.1.12. The input and output shafts share a common axis. Below, a parallel countershaft carries several gears of differing diameter, one of which is permanently meshed with a gear on the input shaft. Additional gears of differing diameter are splined to the output shaft. Driver operation of the gear-shift lever positions these splined gears along the output shaft, determining which pair of gears on the output shaft and the countershaft is engaged. In direct drive (high gear), the output shaft is coupled directly to the input shaft. Reverse is secured by interposing an idler gear between gears on the output shaft and the countershaft. Helical gears are favored to minimize noise production. During gear changes, a synchromesh device, acting as a friction clutch, brings the gears to be meshed approximately to the correct speed just before their engagement. Typically, transmission gear ratios follow an approximately geometric pattern. For example, in a four-speed gearbox, the input/output speed ratios might be 2.67 in first, 1.93 in second, 1.45 in third, and 1.0 in fourth, or direct, drive. The gearbox for a typical automatic transmission uses planetary gear sets, any one set of which can supply one or two gear-reduction ratios, and reverse, by actuating multiple-disk or band clutches that lock various elements of the planetary system in Fig. 11.1.13. These clutches are actuated hydraulically according to a built-in control schedule that utilizes such input signals as vehicle velocity and engine throttle position.
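The consequence of these ratios for engine speed follows from simple kinematics: engine r/min equals wheel r/min multiplied by the transmission ratio and the final-drive (axle) ratio. The sketch below uses the four-speed ratios quoted above; the 3.6 axle ratio and 0.95-ft tire rolling radius are assumed for illustration.

```python
import math

def engine_rpm(speed_mph, gear_ratio, axle_ratio=3.6, rolling_radius_ft=0.95):
    """Engine speed (r/min) = wheel speed * transmission ratio * axle ratio."""
    speed_fps = speed_mph * 5280.0 / 3600.0
    wheel_rpm = 60.0 * speed_fps / (2.0 * math.pi * rolling_radius_ft)
    return wheel_rpm * gear_ratio * axle_ratio

# Engine speed at 60 mi/h in direct (fourth) gear and at 20 mi/h in first gear:
print(round(engine_rpm(60.0, 1.00)), round(engine_rpm(20.0, 2.67)))
```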

Fig. 11.1.12 Three-speed synchromesh transmission. (Buick.)

Fig. 11.1.13 Planetary gear action: (a) Large speed reduction: ratio = 1 + (internal gear diam.)/(sun gear diam.) = 3.33 for example shown. (b) Small speed reduction: ratio = 1 + (sun gear diam.)/(internal gear diam.) = 1.428. (c) Reverse gear: ratio = (internal gear diam.)/(sun gear diam.) = 2.33.


Fig. 11.1.14 Three-element torque converter and planetary gear. (General Motors Corp.)

A schematic of such an automatic transmission, combining a torque converter and a compound planetary gearbox, is shown in Fig. 11.1.14. Increasingly, electronic transmission control is taking over from totally hydraulic control. Although hydraulic actuation is retained, electronic modules control gear selection and modulate hydraulic pressure in accordance with torque flow, providing smoother shifts. The influence of the transmission on the interaction between a vehicle and its roadbed is illustrated by Fig. 11.1.15, where the tractive force required to propel the vehicle is plotted against its road speed. The hyperbolas of constant horsepower superimposed on these coordinates are independent of engine characteristics. The family of curves rising from the left axis at increasing rates traces the traction requirements of the vehicle as it travels at constant speed on grades of increasing steepness. Two curves representing the engine operating at full throttle in high gear are overlaid on this basic representation of vehicle characteristics. One is for a manual transmission followed by a differential with an input/output speed ratio of 3.6. The other is for the same engine with a torque-converter transmission using a differential with a speed ratio of 3.2. With either transmission, partial closing of the engine throttle allows operation below these two curves, but operation above them is not possible. The difference in tractive force between the engine curve and the road-load curve indicates the performance potential—the excess power available for acceleration or for hill climbing on a level road. The high-speed intersection of these two curves indicates that with either transmission, the vehicle can sustain a speed of 87 mi/h on a grade of nearly 5 percent. From 87 mi/h down to 37 mi/h, the manual transmission shows slightly greater performance potential. However, at 49 mi/h the torque converter reaches its coupling point, torque

multiplication begins to occur, and below 37 mi/h its performance potential in high gear exceeds that of the manual transmission by an increasing margin. This performance disadvantage of the manual transmission may be overcome by downshifting. This transmission comparison illustrates why the driver is usually satisfied with an automatic transmission that has one fewer forward gear ratio than an equivalent manual transmission. An evolving transmission development is the continuously variable transmission (CVT), which changes the engine-to-vehicle speed ratio without employing finite steps. This greater flexibility promises improved fuel economy. Its most successful implementation in passenger cars to date varies transmission input/output speed ratio by using a steel V-belt composed of individual links held together by steel bands. This belt runs between variable-geometry sheaves mounted on parallel input and output shafts. Each of these two sheaves is split in the plane of rotation such that moving its two halves apart decreases its effective diameter, and vice versa. The control system uses hydraulic pressure to move the driving sheaves apart while concurrently moving the driven sheaves together in such a way that the constant length of the steel belt is accommodated. Thus the effective input/output diameter ratio, hence speed ratio, of the sheaves can be varied over a range of about 6:1. To accommodate the infinite speed ratio required when the vehicle is stopped and the engine is idling, and to manage vehicle acceleration from a standstill, this type of CVT typically includes a torque converter or a wet multidisk clutch. DRIVELINE

The driveline delivers power from the transmission output to the driving wheels of the vehicle, whether they be the rear wheels or the front. An important element of the driveline is the differential, which in its basic form delivers equal torque to both left and right driving wheels while allowing one wheel to rotate faster than the other, as is essential during turning. Fig. 11.1.16 is a cross section through the differential

Fig. 11.1.15 Comparative traction available in the performance of a fluid torque-converter coupling and a friction clutch.

Fig. 11.1.16 Rear-axle hypoid gearing.


Fig. 11.1.17 Rear axle. (Oldsmobile.)

of a rear-wheel driven vehicle as viewed in the longitudinal plane. It employs hypoid gearing, which allows the axis of the driving pinion to lie below the centerline of the differential gear. This facilitates lowering the intruding hump in the floor of a passenger car, which is needed to accommodate the longitudinal drive shaft. Figure 11.1.17 is a cross-sectional view of the differential from above, showing its input connection to the drive shaft, and its output connection to one of the driving wheels. The driving pinion delivers transmission-output torque to a ring gear, which is integral with a differential carrier and rotates around a transverse axis. A pair of beveled pinion gears is mounted on the pinion pin, which serves as a pinion shaft and is fixed to the carrier. The pinion gears mesh with a pair of beveled side gears splined to the two rear axles. In straight-ahead driving, the pinion gears do not rotate on the pinion gear shaft. In a turn, they rotate in opposite directions, thus accommodating the difference in speeds between the two rear wheels. If one of the driving wheels slips on ice, the torque transmitted to that wheel falls essentially to zero. In the basic differential, the torque on the opposite wheel then falls to the same level and the tractive force is insufficient to move the vehicle. Various limited-slip differentials have been devised to counter this problem. In one type, disk clutches are introduced into the differential. When one wheel begins to lose traction, energizing the clutches transmits power to the wheel with the best traction. Figure 11.1.17 illustrates a semifloating rear axle. This axle is supported in bearings at each of its ends. The vehicle weight supported by the rear wheel is imposed on the axle shaft and transmitted to the axle housing through the outboard wheel bearing between the shaft and that housing. In contrast, most commercial vehicles use a full-floating rear axle. With this arrangement the load on the rear wheel is transmitted directly to the axle housing through a bearing between it and the wheel itself. Thus freed of vehicle support, the full-floating axle can be withdrawn from the axle housing without disturbing the wheel. In both types of rear axle, of course, the axle shaft transmits the driving torque to the wheels. With rear-wheel drive, a driveshaft (or propeller shaft) connects the transmission output to the differential input. Misalignment between the latter two shafts is accommodated through a pair of universal joints (U joints) of the Cardan (or Hooke) type, one located at either end of the driveshaft. With this type of universal joint, the driving and the driven shaft each terminate in a slingshot-like fork. The planes of the forks are oriented perpendicular to one another. Each fork is pinned to the perpendicular arms of a cruciform connector in such a way that rotary motion can be transmitted from the driving to the driven shaft, even when they are misaligned. Vehicles with a long wheel base may use a two-piece driveshaft. Then a third U joint is required, and an additional bearing is placed near that third U joint to support the driveshaft.
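A useful property of the bevel-gear differential just described is that the carrier (ring-gear) speed is always the mean of the two axle speeds, so in a turn one wheel runs as far above carrier speed as the other runs below it. The sketch below illustrates this; the 500-r/min carrier speed, 30-ft turn radius, and 5-ft track are assumed values.

```python
def axle_speeds_in_turn(carrier_rpm, turn_radius_ft, track_ft=5.0):
    """Inner/outer axle speeds for a basic differential in a turn (no tire slip).
    Each wheel speed is proportional to the radius of its own path, and the
    carrier (ring-gear) speed is the mean of the two axle speeds."""
    r_inner = turn_radius_ft - track_ft / 2.0
    r_outer = turn_radius_ft + track_ft / 2.0
    inner = carrier_rpm * r_inner / turn_radius_ft
    outer = carrier_rpm * r_outer / turn_radius_ft
    return inner, outer

inner, outer = axle_speeds_in_turn(500.0, 30.0)                  # assumed values
print(round(inner), round(outer), round((inner + outer) / 2))    # mean recovers 500
```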

With front-wheel drive, the traditional driveshaft becomes unnecessary as the engine is coupled directly to a transaxle, which combines the transmission and the differential in a single unit that is typically housed in a single die-cast aluminum housing. A cutaway view of a transaxle with an automatic transmission appears in Fig. 11.1.18. Output from a transversely mounted engine is delivered to the torque converter, which shares a common centerline with the gearbox and its various gears and clutches. The gearbox output is delivered to the transfer-shaft gear, which rotates about a parallel axis. Its shaft contains the parking sprag. The gear at the opposite end of this shaft feeds torque into the differential, which operates around a third rotational axis, as seen at the bottom of Fig. 11.1.19. The axle shafts driving the front wheels may be of unequal length. A constant-velocity (CV) joint is located at each end of each front-drive axle. During driving, the front axles may be required to flex through angles up to 20° while transmitting substantial torque—a duty too severe for the type of U joint commonly used on the driveshaft of a rear-wheel-drive vehicle. Two types of CV joints are in common use. In the Rzeppa joint, the input and output shafts are connected through a balls-in-grooves arrangement. In the tripod joint, the connection is through a rollers-in-grooves arrangement. To enhance operation on roads with a low-friction-coefficient surface, various means for delivering torque to all four wheels have been devised. In the four-wheel drive system, two-wheel drive is converted to four-wheel drive by making manual adjustments to the driveline,

Fig. 11.1.18 Cross-section view of a typical transaxle. (Chrysler Corp.)


Fig. 11.1.19 Automatic transaxle used on a front-wheel-drive car. (Chrysler Corp.)

normally with the vehicle at rest. In all-wheel drive systems, torque is always delivered to all four wheels, by a system capable of distributing power to the front and rear wheels at varying optimal ratios according to driving conditions and the critical limits of vehicle dynamics. In vehicles capable of driving through all four wheels, a transfer case receives the transmission output torque and distributes it through drive shafts and differentials to the front and rear axles.

Fig. 11.1.20 Front-wheel suspension for rear-wheel-drive car.

SUSPENSIONS

The suspension system improves ride quality and handling in driving on irregular roadbeds and during maneuvers like turning and braking. The vehicle body is supported on the front and rear springs, which are important elements in the suspension system. Three types used are the coil spring, the leaf spring, and the torsion bar. Also important are the shock absorbers. These piston-in-tube devices dampen spring oscillation, road shocks, and vibration. As the hydraulic shock-absorber strut telescopes, it forces fluid through a restrictive orifice, thus damping its response to deflections imposed by the springs. Especially in the front suspension, most vehicles incorporate an antiroll bar (sway bar, stabilizer). This is a U-shaped bar mounted transversely across the vehicle, each end attached to an axle. During high-speed turns, torsion developed in the bar reduces the tendency of the vehicle to lean to one side, thus improving handling. A wide variety of suspension systems have seen service. Front Suspension All passenger cars use independent front-wheel suspension, which allows one wheel to move vertically without disturbing the opposite wheel. Rear-wheel drive cars typically use the short-and-long-arm (SLA) system illustrated in Fig. 11.1.20. The steering knuckle is held between the outboard ends of the upper and lower control arms (wishbones) by spherical joints. The upper control arm is always shorter than the lower. The load on these control arms is generally taken by coil springs acting on the lower arm, or by torsion-bar springs mounted longitudinally. Figure 11.1.21 shows a representative application of the spring strut (MacPherson) system for the front suspension of front-wheel drive cars. The lower end of the shock absorber (strut) is mounted within the coil spring and connected to the steering knuckle. The upper end is anchored to the body structure. Rear Suspensions The trailing-arm rear suspension illustrated in Fig. 11.1.22 is used on many passenger cars. Each longitudinal trailing arm pivots at its forward end. Motion of its opposite end is constrained by a coil spring. One or more links may be added to enhance stability. Using leaf springs in place of coil springs is common practice on trucks. The Hotchkiss drive, used with a solid rear axle, employs longitudinal leaf springs. They are connected to the chassis at their front and rear ends, with the rear axle attached near their midpoints.
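Per wheel, the spring and shock absorber described above behave like the classic spring-mass-damper, so an undamped ride frequency and a fraction of critical damping can be estimated directly. The quarter-car sketch below is illustrative only; the 400-kg corner mass, 20-kN/m spring rate, and 1500-N·s/m damping coefficient are assumptions, not values from this section.

```python
import math

def ride_frequency_hz(sprung_mass_kg, spring_rate_n_per_m):
    """Undamped natural frequency of the sprung mass on its suspension spring."""
    return math.sqrt(spring_rate_n_per_m / sprung_mass_kg) / (2.0 * math.pi)

def damping_ratio(damping_n_s_per_m, sprung_mass_kg, spring_rate_n_per_m):
    """Fraction of critical damping supplied by the shock absorber."""
    return damping_n_s_per_m / (2.0 * math.sqrt(spring_rate_n_per_m * sprung_mass_kg))

m, k, c = 400.0, 20000.0, 1500.0        # assumed corner mass, spring rate, damping
print(round(ride_frequency_hz(m, k), 2), round(damping_ratio(c, m, k), 2))  # ~1.13 Hz, ~0.27
```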

Fig. 11.1.21 Typical MacPherson strut, coil spring front suspension for frontwheel-drive car. (Chrysler Corp.)

Fig. 11.1.22 Trailing-arm type rear suspension with coil springs, used on some front-wheel-drive cars. (Chrysler Corp.)

Active Suspension A variety of improvements to the traditional passive suspension system have been introduced. Generally they require some form of input energy. The air suspension widely used in heavy-duty trucks, for example, requires an air compressor. To an increasing degree, advanced systems employ sensors and electronic controls. Some of the improvements available with active suspension are improved ride quality (adaptation to road roughness), control of vehicle height (low for reduced aerodynamic drag on highways; high for


off-road driving), and vehicle leveling (to correct change in pitch when the trunk is heavily loaded; avoid pitching during braking and heavy accelerations). STEERING

When turning a corner, each wheel traces an arc of different radius, as illustrated in Fig. 11.1.23. To avoid tire scrubbing when turning a corner of radius R, the point of intersection M for the projected front-wheel axes must fall in a vertical plane through the center of the rear axle. This requires each front wheel to turn through a different angle. For small turning angles of the vehicle, α and β can be shown to be simple functions of W, L, and R. A steering geometry conforming to Fig. 11.1.23 is known as Ackerman geometry, and L/R defines the Ackerman angle. The difference in front-wheel angles, β − α, varies with turning radius R. Having α and β correspond to Ackerman geometry improves handling in low-speed driving, as in parking lots. At higher speeds, dynamic effects become significant. In a turn, the vehicle may encounter either understeer or oversteer. With understeer, the vehicle fails to turn as sharply as the position of the steering wheel suggests it should. With oversteer, the vehicle turns more sharply than intended, and in extreme cases may spin around. These tendencies are amplified by increased speed and are affected by many variables, e.g., tire pressure, weight distribution, design of the suspension system, and the application of brakes.
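For the low-speed geometry of Fig. 11.1.23, the inner and outer front-wheel angles follow directly from the wheelbase, the track, and the turn radius, since the projected wheel axes must meet at point M on the rear-axle line. The sketch below is a minimal illustration; the 9-ft wheelbase, 5-ft track, and 30-ft turn radius (measured to the vehicle centerline at the rear axle) are assumed values.

```python
import math

def ackerman_angles_deg(wheelbase_ft, track_ft, turn_radius_ft):
    """Outer (alpha) and inner (beta) front-wheel steer angles for low-speed turning.
    The projected front-wheel axes meet at point M on the rear-axle line."""
    beta = math.degrees(math.atan(wheelbase_ft / (turn_radius_ft - track_ft / 2.0)))
    alpha = math.degrees(math.atan(wheelbase_ft / (turn_radius_ft + track_ft / 2.0)))
    return alpha, beta

alpha, beta = ackerman_angles_deg(9.0, 5.0, 30.0)                # assumed geometry
print(round(alpha, 1), round(beta, 1), round(beta - alpha, 1))   # outer, inner, difference
```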

A common motion transformer is the rack and pinion, in which a pinion gear attached to the steering column engages a geared bar, or rack, causing it to slide to the left or right as the steering wheel is turned. The ends of the rack are attached to the inner ends of the tie rods. In the variation of the rack-and-pinion illustrated in Fig. 11.1.25a, the rotation of the pinion on the end of the steering shaft causes a rotatable worm gear to control the tie-rod positions as it moves left and right like the rack. With the rotatable ball system of Fig. 11.1.25b, a worm is mounted on the end of the steering column. Its spiral grooves contain ball bearings. The outer surfaces of the balls rotate in spiral grooves lining the inside of a ball nut. The outer surface of the ball nut contains gear teeth that mesh with a gear sector on the pitman arm. The end of the pitman arm is connected to the tie rods through a series of links.

Fig. 11.1.23 Ackerman steering geometry (front-wheel steer angles α and β, wheelbase L, track W, turn radius R, and intersection point M on the rear-axle line).

Fig. 11.1.25 Two commonly used types of steering gear. (a) Variation of rack-and-pinion using rotating worm in place of translating rack; (b) recirculating ball.

The physical implementation of a front-wheel steering system is shown schematically from above in Fig. 11.1.24. The front wheels are steered around the axes of their respective kingpins. For improved handling, the axes of the two kingpins are not vertical. In the transverse plane they lean inward, and in the longitudinal plane they lean rearward. The steering arms are the lever arms for turning the front wheels. Each tie rod is pinned at one end to a steering arm and at its opposite end to some type of motion transformer, the purpose of which is to convert the rotational input of the steering column into translational motion at the inner ends of the tie rods.

With variable-ratio steering, the response of the front wheels to steering-wheel rotation is increased as the steering wheel approaches the extremes of its travel. This is especially helpful in curbside parallel parking. Variable-ratio steering can be achieved by nonuniformly spacing the worm grooves in Fig. 11.1.25a, or by varying the lengths of the teeth on the sector of the pitman arm in Fig. 11.1.25b. Most modern U.S. cars have power-assisted steering. A pressurized hydraulic fluid is used to augment the steering effort input by the driver, but if the engine is off or if there is a failure in the power-steering system, the driver retains steering control, albeit with greater effort. The power-steering circuit contains an engine-driven hydraulic pump, a fluid reservoir, a hose assembly, return lines, and a control valve. One of a variety of pumps that has been used is the rotary-vane pump of Fig. 11.1.26. As the rotor is turned by a belt drive from the engine, the radially slidable vanes are held against the inner surface of the eccentric case by centrifugal force, springs, and/or hydraulic pressure. Not shown are the inlet and exit ports, and a pressure-relief valve. A balanced-spool control valve is represented in Fig. 11.1.27. In straight-ahead driving, centering springs hold the spool in the neutral position. In the position illustrated, however, the driver has turned the steering wheel to position the spool valve at the right extremity of its travel.

Fig. 11.1.24 Schematic of front-wheel steering system (steering column, motion transformer, tie rods, steering arms, and kingpin axes).

Fig. 11.1.26 Rotary power-steering pump. (General Motors, Saginaw.)


Fig. 11.1.27 Control valve positioned for full-turn power steering assistance using maximum pump pressure.

That directs all of the pump discharge flow to the left side of a piston, moving the piston rod to the right. The piston illustrated can, for example, be incorporated into the rack of a rack-and-pinion steering system to add to the turning effort put into the steering wheel by the driver. Some cars now use electric power steering. Both the need for hydraulic fluid and the equipment pictured in Fig. 11.1.27 are eliminated. They are replaced by a variable-speed electric motor, and electronic sensors that match the appropriate degree of assistance to driver demand and road resistance. Compared to hydraulic power steering, electric power steering effects a small improvement in fuel economy by reducing the parasitic load on the engine. BRAKING SYSTEMS Brakes The moving automobile is stopped with hydraulic service brakes on all four wheels. The service brakes are engaged by depress-

ing and holding down a self-returning foot pedal. The mechanical parking brake, used to prevent the vehicle from rolling when parked, acts on the rear wheels. It is engaged with an auxiliary foot pedal or a hand pull and is held by a ratchet until released. Drum brakes, once used extensively on all four wheels, are now used primarily on rear wheels. Typically, disk brakes are used on the front wheels, but sometimes on the rear wheels. A Bendix dual-servo drum brake is illustrated in Fig. 11.1.28c. It is self-energizing because the rotating drum tries to drag the brake lining and shoe counter-clockwise, along with it. Because the anchor pin prevents this for the secondary shoe, and the adjusting screw prevents it for the primary shoe, both are called leading shoes. This causes both shoes to push more strongly against the drum, increasing the friction force. This self-energization is lost when the wheel turns in the reverse direction. With this brake configuration, typically a linkage turns a notched

Fig. 11.1.28 Three types of internal-expanding brakes.


wheel on the adjusting screw when the brake is applied with the car moving in reverse. This automatically adjusts for lining wear. In the arrangement of Fig. 11.1.28a, the anchor pin has been moved 180°. The leading shoe continues to be self-energized, but that quality is lost on the trailing (following) shoe. In this arrangement the leading shoe provides most of the braking torque and experiences most of the wear when traveling forward. In reverse, however, the trailing shoe in forward travel becomes the leading shoe and provides most of the braking torque. This arrangement is used on the rear wheels of some front-wheel drive cars. The caliper disk brake is illustrated in Fig. 11.1.29. The disk rotates with the wheel. The fixed caliper straddles the disk. Within the caliper, the pistons in opposing hydraulic cylinders press the attached pads against opposite sides of the disk when actuated.
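For the caliper just described, braking torque follows from elementary friction: the hydraulic clamp force (line pressure times piston area) acts through the pad friction coefficient on both faces of the disk at an effective rubbing radius. A minimal sketch, with the line pressure, piston bore, friction coefficient, and effective radius all assumed for illustration:

```python
import math

def caliper_brake_torque(line_pressure_psi, piston_dia_in, mu=0.40, effective_radius_in=4.5):
    """Braking torque (lbf*in) from one caliper: clamp force p*A acting on two pad faces."""
    clamp_force_lbf = line_pressure_psi * math.pi * piston_dia_in ** 2 / 4.0
    return 2.0 * mu * clamp_force_lbf * effective_radius_in

# Assumed 800-psi line pressure and 2-in piston bore:
print(round(caliper_brake_torque(800.0, 2.0)))   # lbf*in per disk
```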

Fig. 11.1.29 Schematic of caliper disk brake.

As friction heats the brake lining and the surface on which it rubs during stopping, the coefficient of friction often falls. This undesirable characteristic causes what is known as brake fade. Because brakes are normally applied only intermittently, the time available for cool down between successive brake applications keeps brake fade at a tolerable level. However, closely spaced panic stops from high speeds can cause objectionable fade. The “disk” illustrated in Fig. 11.1.29 often consists of two parallel disks that are joined together by radial vanes. The ability of air to access the nonfriction side of each of these disks, and the air pumped between the disks by the vanes when the wheel rotates, help to avoid brake overheating. Brake fade is more probable with drum brakes because the more enclosed nature of their construction makes brake cooling more difficult. Many modern disk brakes use a single-cylinder caliper. One cylinder and its actuating piston and attached pad are retained on the inboard side. However, the outboard cylinder is eliminated, the pad on that side instead being fastened directly to the inner surface of the caliper. Instead of being fixed, the caliper is allowed to slide inboard on mounting bolts when the single piston is actuated. Transverse movement of the caliper on its mounting bolts then causes each pad to rub on its adjacent disk surface. Because disk brakes are not self-energizing, and the rubbing area of the pads is less than the rubbing area of the shoes on an equivalent drum


brake, a higher hydraulic pressure is required with disk brakes than with drums. If disk brakes are used on the front wheels and drums on the rear, then provisions are made in the upstream hydraulic system to provide that difference. Cast iron is the preferred material for the rubbing surface on both rotors and drums. The brake lining on pad or shoe may contain high-temperature resins, metal powder, solid lubricant, abrasives, and fibers of metal, plastic, and ceramic. Friction coefficients from 0.35 to 0.50 are typical with these combinations. The asbestos once common in brake linings has been abandoned because of the adverse health effects of asbestos wear particles. Hydraulics When the brake pedal is depressed, it moves pistons in the master cylinder, as illustrated in Fig. 11.1.30. This forces pressurized brake fluid through lines to the wheel cylinders for brake application. The magnitude of the fluid pressure is proportional to the magnitude of the force applied to the brake pedal. As is typical, this dual master cylinder contains two pistons in tandem (Fig. 11.1.31), providing a split system. One operates the rear brakes and the other the front brakes, providing the ability to brake the vehicle even if one of the two systems should fail. Late model cars often use a diagonal-split arrangement, where each piston in the dual master cylinder serves diagonally opposite wheels.

The conventional cast-iron master cylinder with integral reservoir has been largely replaced by the aluminum master cylinder with translucent plastic reservoirs for visual inspection of brake-fluid level. The specially formulated brake fluid must be free of petroleum-based oil, which would cause swelling of rubber parts in the braking system. Double-walled steel lines are used to distribute the brake fluid. Near the wheels, these rigid lines are replaced by flexible hydraulic hoses to accommodate wheel movement. Power Brakes Power brakes relieve the driver of much physical effort in retarding or stopping a car. Several types have been used, but on passenger cars, the vacuum-suspended type illustrated in Fig. 11.1.32 is the most common. The power section contains a large diaphragm. In the unapplied condition, the chambers on each side of the diaphragm are exposed to intake-manifold vacuum. When the brake pedal is depressed, first the vacuum passage is closed and then, with further depression, air is admitted to one side of the diaphragm. The pressure difference across the diaphragm, acting over its area, creates an axial force that is applied to the piston in the master cylinder. This augments the force provided directly by the driver’s foot. Without requiring an additional reservoir, the vacuum captured allows at least one stop after engine operation has terminated.
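The booster assist is simply the pressure difference across the diaphragm acting on its area, added to the leveraged pedal force at the master-cylinder piston. The sketch below is illustrative; the 8-in diaphragm, 9-psi depression, 4:1 pedal ratio, 1-in bore, and 40-lbf pedal force are assumed values.

```python
import math

def booster_assist_lbf(diaphragm_dia_in=8.0, delta_p_psi=9.0):
    """Axial assist force: pressure difference across the diaphragm times its area."""
    return delta_p_psi * math.pi * diaphragm_dia_in ** 2 / 4.0

def line_pressure_psi(pedal_force_lbf, pedal_ratio=4.0, bore_in=1.0, assist_lbf=0.0):
    """Brake-line pressure: (leveraged pedal force + assist) over master-cylinder area."""
    return (pedal_force_lbf * pedal_ratio + assist_lbf) / (math.pi * bore_in ** 2 / 4.0)

assist = booster_assist_lbf()                          # assumed 8-in diaphragm, 9 psi
print(round(assist), round(line_pressure_psi(40.0, assist_lbf=assist)))
```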

Fig. 11.1.32 Power-assisted brake installation.

Fig. 11.1.30 “Split” hydraulic brake system.

Fig. 11.1.31 Typical design of a dual master cylinder for a split brake system.

When the brake pedal is released, a spring in the master cylinder returns the pistons to their original positions. This uncovers ports in the master cylinder, allowing communication with the brake-fluid reservoirs on the master cylinder and relieving the pressure in the wheel cylinders. A check valve maintains a minimum pressure in the system to prevent the entrance of air. Air in the system makes the brakes feel spongy, and, if it gains entrance, must be bled from the system. A combination valve is usually included in vehicles using disks in the front and drums in the rear. Among its functions, the proportioning valve regulates pressure individually to the front and rear wheels to compensate for the different pressure requirements of disks and drums, and the pressure differential valve checks to ensure that both sides of a diagonally split system experience the same pressures.

Another type of power brake uses a hydraulic booster. The accumulator, a reservoir of pressurized fluid provided by the power-steering pump, provides the force that moves the piston in the master cylinder. Heavy-duty tractor-trailer combinations require braking forces far in excess of what can be supplied from a simple foot pedal. Air brakes are common in such applications. Air supplied from an engine-driven air compressor is stored in a high-pressure tank, from which the air is withdrawn for brake augmentation as needed. Retarders Using traditional wheel brakes to control the road speed of a heavily laden tractor-trailer rig on long downhill runs promotes brake fade and accelerates brake-lining wear. Several types of retarders are available that decelerate the vehicle independent of wheel brakes. With the compression-release braking system (often called the Jake brake), fuel injection in the diesel engine is terminated and engine valve timing is temporarily changed so that the exhaust valve opens near the end of the compression stroke. This releases the air compressed during compression directly into the exhaust system and imposes the negative work of compression on the crankshaft. A disadvantage of this system is that the sudden intermittent release of high-pressure air from the cylinder creates considerable noise. The hydrodynamic retarder consists essentially of a centrifugal fluid pump that is run by the driveshaft. The high tangential velocity of the discharge fluid is removed by stator vanes as the fluid is returned to the pump inlet. Because such a pump consumes little torque at low rotational speeds, a booster retarder that is geared to rotate at a higher speed may be added in parallel. Since the braking energy generated appears as heat in the recirculating fluid, adequate cooling must be provided for it. The electrodynamic retarder resembles an eddy-current dynamometer. Voltage applied from the battery or the alternator to the field coils controls


braking torque. The electrodynamic retarder offers greater braking torque at low speeds than the hydrodynamic retarder. However, its braking ability falls off significantly as it is heated by the energy absorbed, and heat rejection is problematic. Antilock Brakes At a steady forward speed, vehicle velocity equals the product of the rolling radius of the tire and its angular velocity. When the brakes are applied, the vehicle velocity exceeds that product by a difference known as slip. As slip in any wheel increases, its braking force rises to a peak, then slowly decreases until the wheel locks up and the tire merely slides over the road surface. At lockup, the brakes have not only lost much of their effectiveness, but steering control is also sacrificed. The antilock braking system (ABS) incorporates wheel-speed sensors, an electronic control unit (ECU), and solenoids to release and reapply pressure to each brake. When the ECU determines from a wheel-speed signal that lockup is imminent, it signals the corresponding solenoid to release hydraulic pressure in that brake line. Once that wheel has accelerated again, the solenoid reapplies its brake-line pressure. This sequence is repeated in each wheel at a high frequency, thus avoiding lockup and generally shortening stopping distance during severe braking or on slippery roads. Traction Control The traction control system (TCS) helps maintain traction and directional stability, especially on slippery roads. It uses many of the same components as ABS. If one driven wheel spins, the ECU applies the brake on that wheel until traction is regained. If both driven wheels spin, the ECU temporarily overrides the command signal from the accelerator pedal to reduce engine output torque. Electronic Stability Control The step beyond ABS and TCS is electronic stability control, which helps the driver to maintain steering during severe vehicle maneuvers. By adding sensor measurements of steering-wheel angle and vehicle yaw rate to the ECU input, it is possible to reduce the incidence of inadvertent departure of the vehicle from the road, or of vehicle rollover. This is accomplished by controlling the wheel brakes as with TCS, and in addition, by temporarily reducing engine torque. Stopping Distance The service brakes are the most important element in arresting a moving vehicle, but they derive some help from other sources. Aerodynamic drag provides a retarding force. Because it varies with the square of speed, it is most influential at highway speeds. The engine, when it is driven by the inertia of the vehicle during braking, and transmission and driveline losses, all contribute resistance to forward motion. On the other hand, during forward motion the inertia of these components, as well as that of the wheels, all oppose the action of the brakes. The braking force from the motoring engine can be especially significant if it is joined to the driven wheels by a manual transmission. The force can be amplified by downshifting that transmission to increase the speed of the crankshaft. With the conventional automatic transmission, however, engine braking is minimal because the torque converter is relatively ineffective in transmitting torque in the reverse direction. A small retarding force is also derived from the rolling resistance of the tires.
In braking, the sequence of events involves, successively, (1) the driver reaction time tr, during which the driver responds to an external signal by engaging the brake pedal, (2) a response time from pedal engagement to full pedal depression, (3) a buildup time, during which brake-line pressure increases to develop a substantial braking force, and (4) a prolonged active braking time. In estimating minimum stopping distance from initial vehicle velocity V, the vehicle may be assumed to move at its original speed during the reaction time, the response time, and half of the buildup time. The sum of the latter two is the delay time td. During the second half of the buildup time and the active braking time, the braking force depends on μ, the average coefficient of friction between tire and roadbed. Then the stopping distance is

S = V(tr + td) + V²/(2μg)

where g is the acceleration of gravity. The reaction time typically consumes about 0.7 s. For brakes in good condition, the delay time may range from 0.35 s for passenger cars to 0.55 s for large vehicles. The static friction coefficient depends on speed and the condition of tires and road. For new tires on dry paved roads, it is in the range of 0.75 to 0.85, increasing about 20 percent with wear. On moderately wet pavement the coefficient for new tires may fall to the 0.55 to 0.65 range. On wet pavement, the coefficient falls even more, especially at high speeds. The sliding friction coefficient, applicable when the wheels are locked and skid over the roadbed, is less than the static coefficient.
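As a brief numerical illustration of the stopping-distance relation above, the Python sketch below evaluates S for an assumed 60-mi/h initial speed on dry pavement; the reaction time, delay time, and friction coefficient are taken from the representative ranges quoted in the text and are illustrative only.

    # Stopping distance S = V*(tr + td) + V**2 / (2*mu*g)
    g = 32.2                 # acceleration of gravity, ft/s^2
    V = 60 * 5280 / 3600     # 60 mi/h expressed as 88 ft/s
    tr, td = 0.7, 0.35       # assumed reaction and delay times, s
    mu = 0.8                 # assumed tire-road friction coefficient, dry pavement

    S = V * (tr + td) + V**2 / (2 * mu * g)
    print(round(S, 1), "ft")   # roughly 92 ft traveled before braking + 150 ft braking = 243 ft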

WHEELS AND TIRES
The total weight of a vehicle is the sum of the sprung weight, supported by the tires and suspension system, and the unsprung weight. Wheels and tires are primary contributors to unsprung weight. For a given vehicle, the lower the unsprung weight, the better the acceleration performance and the ride and handling qualities on rough roads. Because of cost considerations, most automotive wheels are fabricated from sheet steel. Special lower-weight wheels have been forged or cast in aluminum, however, and even made of magnesium.
Maintenance of proper front-wheel alignment is important for optimum vehicle performance and handling. Normally, proper camber, caster, and toe-in are specified by the manufacturer. Camber is the tilt angle of the wheel plane with respect to the longitudinal plane of the vehicle. When the plane of the wheel is perpendicular to the ground, the camber angle is 0°. During operation, that angle is affected by the changing geometry of the suspension system as the vehicle is loaded or traverses a bump in the road, or as the body rolls during cornering. Taking such factors into account, the manufacturer specifies the proper camber angle for each vehicle design when the vehicle is at rest. With positive camber, the distance between wheels is greater at their tops than at their bottoms, where they contact the roadbed. Normally, a zero to slightly negative camber is specified.
The caster angle is the angle between the kingpin axis (the axis around which the wheel is steered) and a radial line from the center of the wheel to the ground, when both are projected onto a longitudinal plane. When the caster is positive, that radial line intersects the ground behind the intersection of the kingpin axis with the ground. Increasing the caster angle increases the tendency of a turned wheel to restore itself to the straight-ahead position as the vehicle moves forward. This self-restoring tendency can also be influenced somewhat by the inclination of the kingpin in the transverse plane.
When viewing an unloaded vehicle at rest from above, the front wheels may be set either to toe in or to toe out slightly to compensate for the effects of driving forces and kingpin inclination. This adjustment influences the tendency of the wheel to shimmy. Toe-in is typical for rear-wheel-drive cars, while the front wheels of front-wheel-drive cars may be set to toe out.
The automotive pneumatic tire performs four main functions: supporting a moving load; generating steering forces; transmitting driving and braking forces; and providing isolation from road irregularities by acting as a spring in the total suspension system. A tire is made up of two basic parts: the tread, or road-contacting part, which must provide traction and resist wear and abrasion; and the body, consisting of rubberized fabric that gives the tire strength and flexibility. The three principal types of automobile and truck tires are the cross-ply or bias-ply, the radial-ply, and the bias-belted (Fig. 11.1.33). In the bias-ply design, the cords in each layer of fabric run at an angle from

Fig. 11.1.33 Three types of tire construction.



one bead (rim edge) to the opposite bead. To balance the tire strength symmetrically across the tread center, another layer is added at an opposing angle of 30 to 38°, producing a two-ply tire. The addition of two criss-crossed plies makes a four-ply tire. In the radial tire (used on most automobiles), the cords run straight across from bead to bead. The second layer of cords runs the same way; the plies do not cross over one another. Reinforcing belts of two or more layers are laid circumferentially around the tire between the radial body plies and the tread. The bias-belted tire construction is similar to that of the conventional bias-ply tire, but it has circumferential belts like those in the radial tire.
Of the three types, the radial-ply offers the longest tread life, the best traction, the coolest running, the highest gasoline mileage, and the greatest resistance to road hazards. The bias-ply provides a softer, quieter ride and is least expensive. The bias-belted design is intermediate between the good-quality bias-ply and the radial tire. It has a longer tread life, and it gives a smoother low-speed ride than the radial tire.
Tire specifications are molded into the sidewall. Consider, for example, the specification "P225/60TR16." The initial P designates "passenger car" (LT = light truck, ST = trailer). Next, 225 is the tire width in mm. Following the slash, 60 is the aspect ratio, i.e., the ratio of sidewall height, from bead to tread, to tire width, expressed as a percentage. T indicates a maximum speed rating of 190 km/h. Speed designators progress from S = 180 km/h through T (190 km/h), U (200 km/h), H (210 km/h), and V (240 km/h) to Z (over 240 km/h). On some tires the speed rating is omitted. R indicates radial construction (D = bias-ply; B = bias-belted). Finally, 16 indicates a wheel with a 16-in rim diameter. Alternatively, such a tire could be labeled "P225/60R16 89T," where the speed rating comes last and is preceded by a numerical load index indicating carrying capacity.
For passenger cars, the recommended tire pressure is posted on the rear edge of the driver's door. Underinflation causes excessive tire flexing and heat generation in the tire, more rapid wear, poor handling, and poor fuel economy. Overinflation causes a rough ride, unusual wear, poor handling, and increased susceptibility to tire damage from road hazards. As a safety measure, a dashboard signal of tire underinflation can be provided for the driver. Early versions of such a system made use of the wheel-speed signal necessary for traction control. Any wheel that consistently rotates at a higher speed than the rest suggests a reduced running radius, as would occur with an underinflated tire. Later versions measure the pressure of individual tires with wheel-mounted sensors, electronically transmitting their signals to an on-board receiver for dashboard display.
An out-of-balance wheel causes uneven tire wear and adversely affects ride quality. Wheel-tire assemblies are balanced individually by attaching balance weights to the rim of the wheel. For static balance, a proper weight on the outside rim of the wheel suffices. For the dynamic balance required at high speeds, weights are necessary on both the outside and inside rims. Dynamic balancing necessitates spinning the wheel to determine correct placement of the balance weights.
The trunk space occupied by a fifth wheel on which a spare tire is mounted to replace a flat tire has long bothered consumers and manufacturers alike.
To regain part of that space, some cars have used a collapsible tire that must be inflated when pressed into service, using either a compressed-air storage reservoir or an onboard air compressor. Alternatively, some cars store a smaller-than-standard compact spare tire that has been preinflated to a high pressure. The run-flat tire is another option. It may rely upon an extra-stiff sidewall to carry the load if the tire is punctured, or upon a plastic donut built into the tire. Run-flat tires eliminate not only the storage-space requirement for carrying any spare, but also the need to carry a jack. When used, all of these alternatives to carrying a fifth wheel/tire combination entail some sacrifice in ride quality and a limitation on allowable travel distance and speed.
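The sidewall markings described earlier in this section lend themselves to simple decoding. The Python sketch below parses a size designation and estimates the overall tire diameter from the aspect-ratio definition given above; the regular expression, function name, and derived figures are illustrative assumptions, not an industry API.

    # Minimal sketch: decode a marking such as "P225/60R16" per the convention above.
    import re

    def decode_tire(spec):
        # service type, width (mm), aspect ratio (%), optional speed letter,
        # construction (R, D, or B), rim diameter (in)
        m = re.match(r"([PLS]T?)(\d{3})/(\d{2})([A-Z]?)([RDB])(\d{2})$", spec)
        service, width, aspect, speed, construction, rim = m.groups()
        width, aspect, rim = int(width), int(aspect), int(rim)
        sidewall_mm = width * aspect / 100             # bead-to-tread sidewall height
        overall_dia_in = rim + 2 * sidewall_mm / 25.4  # rim plus two sidewalls
        return service, width, aspect, construction, rim, round(overall_dia_in, 1)

    print(decode_tire("P225/60R16"))    # ('P', 225, 60, 'R', 16, 26.6)
    print(decode_tire("P225/60TR16"))   # same tire, speed letter included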

HEATING, VENTILATING, AND AIR CONDITIONING (HVAC)

The heating, ventilation, and air conditioning (HVAC) system is installed to provide a comfortable environment within the passenger compartment. Fresh air for HVAC usually enters through an opening in front of the windshield and into a plenum that separates rain from the ambient air. The airflow rate is augmented by a variable-speed electrically driven blower. In cold weather, heater air is passed through a finned heat-exchanger core, where it is heated by the engine coolant. Compartment temperature is controlled by (1) mixing ambient air with heated air, (2) mixing heated air with recirculated air, or (3) changing blower speed. Provision is made to direct heated air against the interior surface of the windshield to prevent the formation of ice or fog. Figure 11.1.34 schematically illustrates a system in which a multispeed blower drives fresh air through (1) a radiator core or (2) a bypass. The flow rate and direction of the air entering the passenger compartment are controlled with manually operated doors and vanes in the discharge ducts.

Fig. 11.1.34 Heater airflow diagram.

For cooling in hot weather, an electric clutch is engaged to operate the air-conditioning compressor, which is belt-driven from the engine (Fig. 11.1.35). The pressurized, hot gaseous refrigerant exiting the

Fig. 11.1.35 Schematic of air conditioning system using HFC-134a refrigerant. (Reprinted with permission from SAE, SP-916, © 1992, Society of Automotive Engineers, Inc.)


compressor enters the condenser, a heat exchanger typically located in front of the engine radiator. Ambient air passed through this heat exchanger by a fan cools the refrigerant, liquefying it. The high-pressure liquid then passes through a liquid reservoir and on to an expansion valve that drops the refrigerant pressure as it sprays the refrigerant into the evaporator. The evaporator is a heat exchanger normally mounted on the firewall. Compartment air circulated through the evaporator rejects its heat to the colder refrigerant, cooling the air as the refrigerant vaporizes in preparation for its reentry into the compressor. Moisture condensed from the ambient air during cooling drains onto the roadbed.
Figure 11.1.36 shows schematically a combined air-heating and air-cooling system. Various dampers control the proportions of fresh and recirculated air to the heater and evaporator cores. Compartment-air temperature is controlled electronically by a thermostat that modulates the mixing of cooled and heated air, and cycles the compressor on and off through a magnetic clutch. During defrosting, mixing cooled air in with heated air helps to remove humidity from the air directed to the interior of the windshield.


Fig. 11.1.37 Separate body and frame design.

Fig. 11.1.36 Combined heater and air conditioner. (Chevrolet.)
Fig. 11.1.38 Unit-construction design for automobiles.

For many years the refrigerant in automotive air conditioners was CFC-12, a chlorinated fluorocarbon also known as F-12. About 1990 the depletion of stratospheric ozone, which shields humankind from harmful solar ultraviolet radiation, became an environmental concern. The chlorine in CFC-12 having been identified as a contributor to the problem, the use of CFC-12 was banned worldwide. About the same time, concern mounted about the threat of global warming from the anthropogenic production of greenhouse gases. Carbon dioxide from the combustion of fossil fuels is a major contributor. It was estimated that on a mass basis, CFC-12 was about 10,000 times as serious a greenhouse gas as carbon dioxide. In the United States, an immediate response to the ozone depletion concern was to require mechanics to capture spent CFC-12 for recycling, rather than venting it to the atmosphere, every time an air conditioner was serviced. Beyond that, within a few years, CFC-12 was replaced by chlorine-free HFC-134a, which has a global warming effect about one-eighth that of CFC-12. Because HFC-134a is still about 1,300 times as serious a greenhouse gas as carbon dioxide, a replacement for HFC-134a is being sought.

AUTOMOBILE CONSTRUCTION

Light-duty vehicles include passenger cars and light-duty trucks. Light-duty trucks include passenger vans, sports utility vehicles (SUVs), and pickup trucks. The layout of light-duty vehicles generally falls into one of four categories: (1) front engine, rear-wheel drive— used mostly in large cars and light-duty trucks, (2) front engine, frontwheel drive—used mostly in small cars and passenger vans, (3) rear or midengine, rear-wheel drive—sometimes used in sports cars, and (4) four-wheel drive—used mostly in light-duty trucks and premium large cars. The two most common basic body constructions, illustrated in Fig. 11.1.37 and Fig. 11.1.38, are body-and-frame and unit construction. The body-and-frame design uses a body that is bolted to a

separate frame. Most of the suspension, bumper, and brake loads are transmitted to the frame. Body-and-frame construction dominates in pickups and SUVs. With unit construction, most common in passenger cars, the frame is eliminated. Its function is assumed by a framework consisting of stamped sheet-metal sections joined together by welding. Outer panels are then fastened to this framework, primarily by bolting. A third construction option that has been borrowed from aircraft practice and used to a limited extent is the space frame and monocoque combination. The space frame is a skeleton to which the body panels are attached, but the body assembly is designed in a manner that causes it to contribute significantly to the strength of the vehicle. An important aspect of auto design is passenger protection. Crumple zones are designed into the structure, front and rear, to absorb collision impacts. Beams may be built into doors to protect against side impact. The 1990s drive for improved passenger-car fuel economy, hence smaller and lighter cars, and for improved corrosion resistance has led to a gradual shift in material usage. This trend is evident in Table 11.1.3. The weight percentage of ferrous content has declined consistently, but remains over half the total weight of the average car. The shift from conventional to high-strength steel is evident. In areas likely to experience rust, the steel may be galvanized. The percentage use of lighter aluminum has doubled. The use of plastics and composites, while still less than 10 percent of the car weight, has increased by 50 percent on a mass basis. The interest in recycling materials from scrapped vehicles has also increased. In the mid-1990s, 94 percent of scrapped vehicles were processed for recycling. Approximately 75 percent of vehicle content was recycled. Table 11.1.4 indicates the disposition of materials from recycled vehicles.


Table 11.1.3 Average Materials Consumption for a Domestic Automobile

                                   1978                1985                2001
Material                      Pounds   Percent    Pounds   Percent    Pounds   Percent
Ferrous materials
  Conventional steel          1880.0    53.8%     1481.5    46.5%     1349.0    40.8%
  High-strength steel          127.5     3.6       217.5     6.8       351.5    10.6
  Stainless steel               25.0     0.7        29.0     0.9        54.5     1.6
  Other steels                  56.0     1.6        54.5     1.7        25.5     0.8
  Iron                         503.0    14.4       468.0    14.7       345.0    10.4
  Total ferrous               2591.5    74.1%     2250.5    70.6%     2125.5    64.2%
Nonferrous materials
  Powdered metal¹               16.0     0.5        19.0     0.6        37.5     1.1
  Aluminum                     112.0     3.2       138.0     4.3       256.5     7.8
  Copper                        39.5     1.1        44.0     1.4        46.0     1.4
  Zinc die castings             28.0     0.8        18.0     0.5        11.0     0.3
  Plastics/composites          176.0     5.0       211.5     6.6       253.0     7.6
  Rubber                       141.5     4.1       136.0     4.3       145.5     4.4
  Glass                         88.0     2.5        85.0     2.7        98.5     3.0
  Fluids & lubricants          189.0     5.4       184.0     5.8       196.0     5.9
  Other materials²             112.5     3.2       101.5     3.2       139.5     4.2
  Total nonferrous             902.5    25.8%      937.0    29.4%     1183.5    35.7%
Total                         3494.0 lb           3187.5 lb           3309.0 lb

¹ Powdered metals may include ferrous materials.
² Other materials include sound deadeners, paint and anti-corrosion dip, ceramics, cotton, cardboard, battery lead, magnesium castings, etc.
SOURCE: Reproduced with permission from American Metal Market. Copyright 2004 American Metal Market LLC, a division of Metal Bulletin plc. All rights reserved.

Table 11.1.4 Materials Disposition from Recycled Vehicles

Recycled materials
  Ferrous metals:      steel, iron
  Nonferrous metals:   aluminum, copper, lead, zinc
  Fluids:              engine oil, coolant, refrigerant
Shredder residue
  Plastic, rubber, glass, fluids, other

11.2 RAILWAY ENGINEERING by V. Terrey Hawthorne and Keith L. Hawthorne (in collaboration with E. Thomas Harley, Charles M. Smith, Robert B. Watson, and C. A. Woodbury) REFERENCES: Proceedings, Association of American Railroads (AAR), Mechanical Division. Proceedings, American Railway Engineering Association (AREA). Proceedings, American Society of Mechanical Engineers (ASME). “The Car and Locomotive Cyclopedia,” Simmons-Boardman. J. K. Tuthill, “High Speed Freight Train Resistance,” University of Illinois Engr. Bull., 376, 1948. A. I. Totten, “Resistance of Light Weight Passenger Trains,” Railway Age, 103, Jul. 17, 1937. J. H. Armstrong, “The Railroad—What It Is, What It Does,” SimmonsBoardman, 1978. Publications of the International Government-Industry Research Program on Track Train Dynamics, Chicago. Publications of AAR Research and Test Department, Chicago. Max Ephraim, Jr., “The AEM-7—A New High Speed, Light Weight Electric Passenger Locomotive,” ASME RT Division, 82-RT-7, 1982. W. J. Davis, Jr., General Electric Review, October 1926. R. A. Allen, Conference on the Economics and Performance of Freight Car Trucks, October 1983. “Engineering and Design of Railway Brake Systems,” The Air Brake Association, Sept. 1975. Code of Federal Regulations, Title 40, Protection of Environment. Code of Federal Regulations, Title 49, Transportation. Standards and Recommended Practices and Proceedings, Association of American Railroads (AAR) Mechanical Division, www.aar.com. “Standard for the Design and Construction of Passenger Railroad Rolling Stock,” American Public Transportation Association (APTA), www.apta.com. “Manual for Railway

Engineering," American Railway Engineering and Maintenance-of-Way Association (AREMA), www.arema.com. Proceedings, American Society of Mechanical Engineers (ASME), www.asme.org. "The Car and Locomotive Cyclopedia of American Practices," Simmons-Boardman Books, Inc., www.transalert.com. Publications of AAR Research and Test Department, www.aar.com. Code of Federal Regulations, Title 40, Protection of Environment, and Title 49, Transportation, published by the Office of the Federal Register, National Archives and Records Administration. "The Official Railway Equipment Register—Freight Connections and Freight Cars Operated by Railroads and Private Car Companies of North America," R.E.R. Publishing Corp., East Windsor, NJ, www.cbizmedia.com.

DIESEL-ELECTRIC LOCOMOTIVES

Diesel-electric locomotives and electric locomotives are classified by wheel arrangement; letters represent the number of adjacent driving axles in a rigid truck (A for one axle, B for two axles, C for three axles, etc.). Idler axles between drivers are designated by numerals. A plus sign indicates articulated trucks or motive power units. A minus sign


indicates separate nonarticulated trucks. This nomenclature is fully explained in RP-5523, issued by the Association of American Railroads (AAR). Virtually all modern locomotives are of either B-B or C-C configuration. The high efficiency of the diesel engine is an important factor in its selection as a prime mover for traction. This efficiency at full or partial load makes it ideally suited to the variable service requirements of routine railroad operations. The diesel engine is a constant-torque machine that cannot be started under load and hence requires a variably coupled transmission arrangement. The electric transmission system allows it to make use of its full rated power output at low track speeds for starting as well as for efficient hauling of heavy trains at all speeds. Examples of the most common diesel-electric locomotive types in service are shown in Table 11.2.1. A typical diesel-electric locomotive is shown in Fig. 11.2.1. Diesel-electric locomotives have a dc generator or rectified alternator coupled directly to the diesel engine crankshaft. The generator/alternator is electrically connected to dc series traction motors having nose suspension mountings. Many recent locomotives utilize gate turn-off inverters and ac traction motors to obtain the benefits of increased adhesion and higher tractive effort. The gear ratio for the axle-mounted bull gears to the motor pinions that they engage is determined by the locomotive speed range, which is related to the type of service. A high ratio is used for freight service where high tractive effort and low speeds are common, whereas high-speed passenger locomotives have a lower ratio. Diesel Engines Most new diesel-electric locomotives are equipped with either V-type, two strokes per cycle, or V-type, four strokes per cycle, engines. Engines range from 8 to 20 cylinders each. Output power ranges from 1,000 hp to over 6,000 hp for a single engine application. These medium-speed diesel engines have displacements from 560 to over 1,000 in3 per cylinder. Two-cycle engines are aspirated by either a gear-driven blower or a turbocharger. Because these engines lack an intake stroke for natural


aspiration, the turbocharger is gear-driven at low engine speeds. At higher engine speeds, when the exhaust gases contain enough energy to drive the turbocharger, an overriding clutch disengages the gear train. Free-running turbochargers are used on four-cycle engines, as at lower speeds the engines are aspirated by the intake stroke. The engine control governor is an electrohydraulic or electronic device used to regulate the speed and power of the diesel engine. Electrohydraulic governors are self-contained units mounted on the engine and driven from one of the engine camshafts. They have integral oil supplies and pressure pumps. They utilize four solenoids that are actuated individually or in combination from the 74-V auxiliary generator/ battery supply by a series of switches actuated by the engineer’s throttle. There are eight power positions of the throttle, each corresponding to a specific value of engine speed and horsepower. The governor maintains the predetermined engine speed through a mechanical linkage to the engine fuel racks, which control the amount of fuel metered to the cylinders. Computer engine control systems utilize electronic sensors to monitor the engine’s vital functions, providing both electronic engine speed control and fuel management. The locomotive throttle is interfaced with an on-board computer that sends corresponding commands to the engine speed control system. These commands are compared to input from timing and engine speed sensors. The pulse width and timing of the engine’s fuel injectors are adjusted to attain the desired engine speed and to optimize engine performance. One or more centrifugal pumps, gear driven from the engine crankshaft, force water through passages in the cylinder heads and liners to provide cooling for the engine. The water temperature is automatically controlled by regulating shutter and fan operation, which in turn controls the passage of air through the cooling radiators. The fans are driven by electric motors. The lubricating oil system supplies clean oil at the proper temperature and pressure to the various bearing surfaces of the engine, such as the crankshaft, camshaft, wrist pins, and cylinder walls. It also provides

Table 11.2.1 Locomotives in Service in North America

Builder      Model*    Service           Arrangement   Weight,        Starting tractive     Tractive effort, lb   Number of   Horsepower
                                                       min./max.      effort,† min./max.    (at continuous        cylinders   rating (r/min)
                                                       (1,000 lb)     weight (1,000 lb)     speed,‡ mi/h)
Bombardier   HR-412    General purpose   B-B           240/280        60/70                 60,400 (10.5)         12          2,700 (1,050)
Bombardier   HR-616    General purpose   C-C           380/420        95/105                90,600 (10.0)         16          3,450 (1,000)
Bombardier   LRC       Passenger         B-B           252 nominal    63                    19,200 (42.5)         16          3,725 (1,050)
EMD          SW-1001   Switching         B-B           230/240        58/60                 41,700 (6.7)           8          1,000 (900)
EMD          MP15      Multipurpose      B-B           248/278        62/69                 46,822 (9.3)          12          1,500 (900)
EMD          GP40-2    General purpose   B-B           256/278        64/69                 54,700 (11.1)         16          3,000 (900)
EMD          SD40-2    General purpose   C-C           368/420        92/105                82,100 (11.0)         16          3,000 (900)
EMD          GP50      General purpose   B-B           260/278        65/69                 64,200 (9.8)          16          3,500 (950)
EMD          SD50      General purpose   C-C           368/420        92/105                96,300 (9.8)          16          3,500 (950)
EMD          F40PH-2   Passenger         B-B           260 nominal    65                    38,240 (16.1)         16          3,000 (900)
GE           B18-7     General purpose   B-B           231/268        58/67                 61,000 (8.4)           8          1,800 (1,050)
GE           B30-7A    General purpose   B-B           253/280        63/70                 64,600 (12.0)         12          3,000 (1,050)
GE           C30-7A    General purpose   C-C           359/420        90/105                96,900 (8.8)          12          3,000 (1,050)
GE           B36-7     General purpose   B-B           260/280        65/70                 64,600 (12.0)         16          3,600 (1,050)
GE           C36-7     General purpose   C-C           367/420        92/105                96,900 (11.0)         16          3,600 (1,050)
EMD          AEM-7     Passenger         B-B           201 nominal    50                    33,500 (10.0)         NA¶         7,000§
EMD          GM6C      Freight           C-C           365 nominal    91                    88,000 (11.0)         NA          6,000§
EMD          GM10B     Freight           B-B-B         390 nominal    97                    100,000 (5.0)         NA          10,000§
GE           E60C      General purpose   C-C           364 nominal    91                    82,000 (22.0)         NA          6,000§
GE           E60CP     Passenger         C-C           366 nominal    91                    34,000 (55.0)         NA          6,000§
GE           E25B      Freight           B-B           280 nominal    70                    55,000 (15.0)         NA          2,500§

* Engines: Bombardier—model 251, 4-cycle, V-type, 9 × 10 1/2 in cylinders. EMD—model 645E, 2-cycle, V-type, 9 1/16 × 10 in cylinders. GE—model 7FDL, 4-cycle, V-type, 9 × 10 1/2 in cylinders.
† Starting tractive effort at 25% adhesion.
‡ Continuous tractive effort for smallest pinion (maximum).
§ Electric locomotive horsepower expressed as diesel-electric equivalent (input to generator).
¶ Not applicable.
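The starting-tractive-effort column in Table 11.2.1 follows directly from the 25 percent adhesion noted in the table footnote. A one-line Python check against the first row (HR-412, 240/280 in units of 1,000 lb) illustrates the relation; nothing beyond the tabulated figures is assumed.

    # Starting tractive effort at 25% adhesion (Table 11.2.1 footnote)
    adhesion = 0.25
    for weight_klb in (240, 280):          # HR-412 minimum/maximum weight on drivers, 1,000 lb
        te_klb = adhesion * weight_klb
        print(f"{weight_klb},000 lb on drivers -> {te_klb:.0f},000 lb starting tractive effort")
    # Matches the tabulated 60/70 (1,000 lb) for the HR-412.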


Fig. 11.2.1 A typical diesel-electric locomotive. (Electro-Motive Division, General Motors Corp.)

Fig. 11.2.2 Axle-hung traction motors for diesel-electric and electric locomotives.

oil internally to the heads of the pistons to remove excess heat. One or more gear-type pumps driven from the crankshaft are used to move the oil from the crankcase through filters and strainers to the bearings and the piston cooling passages, after which the oil flows by gravity back to the crankcase. A heat exchanger is part of the system as all of the oil passes through it at some time during a complete cycle. The oil is cooled by engine cooling water on the other side of the exchanger. Paper-element cartridge filters in series with the oil flow remove fine impurities in the oil before it enters the engine. Electric Transmission Equipment A dc main generator or threephase ac alternator is directly coupled to the diesel engine crankshaft. Alternator output is rectified through a full-wave bridge to keep ripple to a level acceptable for operation of series field dc motors or for input to a solid-state inverter system for ac motors. Generator power output is controlled by (1) varying engine speed through movement of the engineer’s controller and (2) controlling the flow of current in its battery field or in the field of a separate exciter generator. Shunt and differential fields (if used) are designed to maintain constant generator power output for a given engine speed as the load and voltage vary. The fields do not completely accomplish this, thus the battery field or separately excited field must be controlled by a load regulator to provide the final adjustment in excitation to load the engine properly. This field can also be automatically

deenergized to reduce or remove the load to prevent damage to the power plant or other traction equipment when certain undesirable conditions occur. The engine main generator or alternator power plant functions at any throttle setting as a constant-horsepower source of energy. Therefore, the main generator voltage must be controlled to provide constant power output for each specific throttle position under the varying conditions of train speed, train resistance, atmospheric pressure, and quality of fuel. The load regulator, which is an integral part of the governor, accomplishes this within the maximum safe values of main generator voltage and current. For example, when the locomotive experiences an increase in track gradient with a consequent reduction in speed, traction motor counter emf decreases, causing a change in traction motor and main generator current. Because this alters the load demand on the engine, the speed of the engine tends to change to compensate. As the speed changes, the governor begins to reposition the fuel racks, but at the same time a pilot valve in the governor directs hydraulic pressure into a load regulator vane motor, which changes the resistance value of the load regulator rheostat in series with the main generator excitation circuit. This alters the main generator excitation current and consequently, main generator voltage and returns the value of main generator power output to normal. Engine fuel racks return to normal, consistent with constant values of engine speed. The load regulator is effective within maximum and minimum limit values of its rheostat. Beyond these limits the power output of the engine is reduced. However, protective devices in the main generator excitation circuit limit its voltage to ensure that values of current and voltage in the traction motor circuits are within safe limits. Auxiliary Generating Apparatus DC power for battery charging, lighting, control, and cab heaters is provided by a separate generator, geared to the main engine. Voltage output is regulated within 1 percent of 74 V over the full range of engine speeds. Auxiliary alternators with full-wave bridge rectifiers are also utilized for this application. Modern locomotives with on-board computer systems also have power supplies


to provide "clean" dc for sensors and computer systems. These are typically 24-V systems.
Traction motor blowers are mounted above the locomotive underframe. Air from the centrifugal blower housings is carried through the underframe and into the motor housings through flexible ducts. Other designs have been developed to vary the air output with cooling requirements to conserve parasitic energy demands. The main generator/alternators are cooled in a similar manner.
Traction motors are nose-suspended from the truck frame and bearing-suspended from the axle (Fig. 11.2.2). The traction motors employ series exciting (main) and commutating field poles. The current in the series field is reversed to change locomotive direction and may be partially shunted through resistors to reduce counter emf as locomotive speed is increased. Newer locomotives dispense with field shunting to improve commutation. Early locomotive designs also required motor connection changes (series, series/parallel, parallel), referred to as transition, to maintain motor current as speed increased. AC traction motors employ a variable-frequency supply derived from a computer-controlled solid-state inverter system, fed from the rectified alternator output. Locomotive direction is controlled by reversing the sequence of the three-phase supply. The development of the modern traction alternator with its high output current has resulted in a trend toward dc traction motors that are permanently connected in parallel. DC motor armature shafts are equipped with grease-lubricated roller bearings, while ac traction motor shafts have grease-lubricated roller bearings at the free end and oil-lubricated bearings at the pinion drive end. Traction motor support bearings are usually of the plain sleeve type with lubricant wells and spring-loaded felt wicks, which maintain constant contact with the axle surface; however, many new passenger locomotives have roller support bearings.
Electrical Controls In the conventional dc propulsion system, electropneumatic or electromagnetic contactors are employed to make and break the circuits between the traction motors and the main generator. They are equipped with interlocks for various control circuit functions. Similar contactors are used for other power and excitation circuits of lower power (current). An electropneumatic or electric motor-operated cam switch, consisting of a two-position drum with copper segments moving between spring-loaded fingers, is generally used to reverse traction motor field current (the "reverser") or to set up the circuits for dynamic braking. This switch is not designed to operate under load. On some dc locomotives these functions have been accomplished with a system of contactors. In ac propulsion systems, the power control contactors are eliminated entirely, since their function is performed by the solid-state switching devices in the inverters. Locomotives with ac traction motors utilize inverters to provide phase-controlled electrical power to their traction motors. Two basic arrangements of inverter control for ac traction motors have emerged: one uses an individual inverter for each axle, the other one inverter per truck. Inverters are typically GTO (gate turn-off thyristor) or IGBT (insulated-gate bipolar transistor) devices.
Cabs In order to promote uniformity and safety, the AAR has issued standards for many locomotive cab features (RP-5104). Locomotive cab noise standards are prescribed by 49 CFR § 229.121.
Propulsion control circuits transmit the engineer's movements of the throttle lever, reverse lever, and transition or dynamic brake control lever in the controlling unit to the power-producing equipment of each unit operating in multiple in the locomotive consist. Before power is applied, all reversers must move to provide the proper motor connections for the direction of movement desired. Power contactors complete the circuits between generators and traction motors. For dc propulsion systems, excitation circuits then function to provide the proper main generator field current while the engine speed increases to correspond to the engineer's throttle position. In ac propulsion systems, all power circuits are controlled by computerized switching of the inverter. To provide for multiple-unit operation, the control circuits of each locomotive unit are connected by jumper cables. The AAR has issued Standard S-512 covering standard dimensions and contact identification for 27-point control jumpers used between diesel-electric locomotive units.


Wheel slip is detected by sensing equipment connected either electrically to the motor circuits or mechanically to the axles. When slipping occurs on some units, relays automatically reduce main generator excitation until slipping ceases, whereupon power is gradually reapplied. On newer units, an electronic system senses small changes in motor current and reduces motor current before a slip occurs. An advanced system recently introduced adjusts wheel creep to maximize wheel-torail adhesion. Wheel speed is compared to ground speed, which is accurately measured by radar. A warning light and/or buzzer in the operating cab alerts the engineer, who must notch back on the throttle if the slip condition persists. With ac traction the frequency selected prevents wheel slip if the wheel tries to exceed a speed dictated by the input frequency. Batteries Lead-acid storage batteries of 280 or 420 ampere-hour capacity are usually used for starting the diesel engine. Thirty-two cells on each locomotive unit are used to provide 64 V to the system. (See also Sec. 15.) The batteries are charged from the 74-V power supply. Air Brake System The “independent” brake valve handle at the engineer’s position controls air pressure supplied from the locomotive reservoirs to the brake cylinders on only the locomotive itself. The “automatic” brake valve handle controls the air pressure in the brake pipe to the train (Fig. 11.2.3). On more recent locomotives, purely pneumatic braking systems have been supplanted by electropneumatic systems. These systems allow for more uniform brake applications, enhancing brake pipe pressure control and enabling better train handling. The AAR has issued Standard S-5529, “Multiple Unit Pneumatic Brake Equipment for Locomotives.”

Fig. 11.2.3 Automatic-brake valve-handle positions for 26-L brake equipment.

Compressed air for braking and for various pneumatic controls on the locomotive is usually supplied by a two-stage, three-cylinder compressor connected directly or through a clutch to the engine crankshaft, or with electric motor drive. A compressor control switch is activated to maintain a pressure of approximately 130 to 140 lb/in2 (896 to 965 kPa) in the main reservoirs. Dynamic Braking On most locomotives, dynamic brakes supplement the air brake system. The traction motors are used as generators to convert the kinetic energy of the locomotive and train into electrical energy, which is dissipated through resistance grids located near the locomotive roof. Motor-driven blowers, designed to utilize some of this braking energy, force cooler outside air over the grids and out through roof hatches. By directing a generous and evenly distributed air stream over the grids, their physical size is reduced, in keeping with the relatively small space available in the locomotive. On some locomotives, resistor grid cooling is accomplished by an engine-driven radiator/braking fan, but energy conservation is causing this arrangement to be replaced by motor-driven fans that can be energized in response to need, using the parasitic power generated by dynamic braking itself. By means of a cam-switch reverser, the traction motors are connected to the resistance grids. The motor fields are usually connected in series across the main generator to supply the necessary high excitation current. The magnitude of the braking force is set by controlling the traction motor excitation and the resistance of the grids. Conventional


dynamic braking is not usually effective below 10 mi/h (16 km/h), but it is very useful at 20 to 30 mi/h (32 to 48 km/h). Some locomotives are equipped with “extended range” dynamic braking which enables dynamic braking to be used at speeds as low as 3 mi/h (5 km/h) by shunting out grid resistance (both conventional and extended range are shown on Fig. 11.2.4). Dynamic braking is now controlled according to the “tapered” system, although the “flat” system has been used in the past. Dynamic braking control requirements are specified by AAR Standard S-5018. Dynamic braking is especially advantageous on long grades where accelerated brake shoe wear and the potential for thermal damage to wheels could otherwise be problems. The other advantages are smoother control of train speed and less concern for keeping the pneumatic trainline charged. Dynamic brake grids can also be used for a self-contained load-test feature, which permits a standing locomotive to be tested for power output. On locomotives equipped with ac traction motors, a constant dynamic braking (flat-top) force can be achieved from the horsepower limit down to 2 mi/h (3 km/h).

Fig. 11.2.4 Dynamic-braking effort versus speed. (Electro-Motive Division, General Motors Corp.)

Performance
Engine Indicated Horsepower The power delivered at the diesel locomotive drawbar is the end result of a series of subtractions from the original indicated horsepower of the engine, which takes into account the efficiency of transmission equipment and the losses due to the power requirements of various auxiliaries. The formula for the engine's indicated horsepower is

ihp = PLAN/33,000    (11.2.1)

where P = mean effective pressure in the cylinder, lb/in²; L = length of piston stroke, ft; A = piston area, in²; and N = total number of cycles completed per min. Factor P is governed by the overall condition of the engine, quality of fuel, rate of fuel injection, completeness of combustion, compression ratio, etc. Factors L and A are fixed with the design of the engine. Factor N is a function of engine speed, number of working chambers, and strokes needed to complete a cycle.
Engine Brake Horsepower In order to calculate the horsepower delivered by the crankshaft coupling to the main generator, frictional losses in bearings and gears must be subtracted from the indicated horsepower (ihp). Some power is also used to drive lubricating oil pumps, governor, water pumps, scavenging blower, and other auxiliary devices. The resultant horsepower at the coupling is brake horsepower (bhp).
Rail Horsepower A portion of the engine bhp is transmitted mechanically via couplings or gears to operate the traction motor blowers, air compressor, auxiliary generator, and radiator cooling fan generator or alternator. Part of the auxiliary generator electrical output is used to run some of the auxiliaries. The remainder of the engine bhp transmitted to the main generator or main alternator for traction purposes must be multiplied by generator efficiency (usually about 91 percent), and the result again multiplied by the efficiency of the traction motors (including power circuits) and gearing to develop rail horsepower. Power output of the main generator for traction may be expressed as

Watts_traction = Eg × Im    (11.2.2)

where Eg is the main-generator voltage and Im is the traction motor current in amperes, multiplied by the number of parallel paths, or the dc link current in the case of an ac traction system. Rail horsepower may be expressed as

hp_rail = V × TE/375    (11.2.3)

where V is the velocity, mi/h, and TE is tractive effort at the rail, lb.
Thermal Efficiency The thermal efficiency of the diesel engine at the crankshaft, or the ratio of bhp output to the rate at which energy of the fuel is delivered to the engine, is about 33 percent. Thermal efficiency at the rail is about 26 percent.
Drawbar Horsepower The drawbar horsepower represents power available at the rear of the locomotive to move the cars and may be expressed as

hp_drawbar = hp_rail − locomotive running resistance × V/375    (11.2.4)

where V is the speed in mi/h. Train resistance calculations are discussed later under "Vehicle/Track Interaction." Theoretically, therefore, drawbar horsepower available is the power output of the diesel engine less the parasitic loads and losses described above.
Speed-Tractive Effort At full throttle the losses vary somewhat at different values of speed and tractive effort, but a curve of tractive effort plotted against speed is nearly hyperbolic. Figure 11.2.5 is a typical speed-tractive effort curve for a 3,500-hp (2,600-kW) freight locomotive. The diesel-electric locomotive has full horsepower available over the entire speed range (within the limits of adhesion described below). The reduction in power as continuous speed is approached is known as power matching. This allows multiple operation of locomotives of different ratings at the same continuous speed.

Fig. 11.2.5 Tractive effort versus speed.

Adhesion In Fig. 11.2.6 the maximum value of tractive effort represents the level usually achievable just before the wheels slip under average rail conditions. Adhesion is usually expressed as a percentage of vehicle weight on drivers, with the nominal level being 25 percent. This means that a force equal to 25 percent of the total locomotive weight on drivers is available as tractive effort. Actually, adhesion will vary widely with rail conditions, from as low as 5 percent to as high as 35 percent or more.
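A short numerical sketch of Eqs. (11.2.3) and (11.2.4) in Python follows. The tractive effort and speed are the SD40-2 continuous rating from Table 11.2.1; the locomotive running resistance is an assumed figure inserted only to show how the drawbar value is obtained.

    # hp_rail = V * TE / 375                                 (Eq. 11.2.3)
    # hp_drawbar = hp_rail - running_resistance * V / 375    (Eq. 11.2.4)
    V = 11.0          # speed, mi/h (SD40-2 continuous speed, Table 11.2.1)
    TE = 82_100       # tractive effort at the rail, lb (SD40-2 continuous rating)
    R_loco = 3_000    # assumed locomotive running resistance, lb

    hp_rail = V * TE / 375
    hp_drawbar = hp_rail - R_loco * V / 375
    print(round(hp_rail), round(hp_drawbar))   # about 2408 and 2320 hp for a 3,000-hp unit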


Adhesion is severely reduced by lubricants, which spread as thin films in the presence of moisture on running surfaces. Adhesion can be increased with sand applied to the rails from the locomotive sanding system. More recent wheel-slip systems permit wheel creep (very slow controlled slip) to achieve greater levels of tractive effort. Even higher adhesion levels are available from ac traction motors; for example, 45 percent at start-up and low speed, with a nominal value of 35 percent.

Fig. 11.2.6 Typical tractive effort versus speed characteristics. (Electro-Motive Division, General Motors Corp.)

Traction Motor Characteristics Motor torque is a function of armature current and field flux (which is a function of field current). Since the traction motors are series-connected, armature and field current are the same (except when field shunting circuits are introduced), and therefore tractive effort is solely a function of motor current. Figure 11.2.7 presents a group of traction motor characteristic curves with tractive effort, speed, and efficiency plotted against motor current for full field (FF) and at 35 (FS1) and 55 (FS2) percent field shunting. Wheel diameter and gear ratio must be specified when plotting torque in terms of tractive effort. (See also Sec. 15.)

Fig. 11.2.7 Traction motor characteristics. (Electro-Motive Division, General Motors Corp.)

Traction motors are usually rated in terms of their maximum continuous current. This represents the current at which the heating due to electrical losses in the armature and field windings is sufficient to raise the temperature of the motor to its maximum safe limit when cooling air at maximum expected ambient temperature is forced through it at the prescribed rate by the blowers. Continuous operation at this current level ideally allows the motor to operate at its maximum safe power level, with waste heat generated equal to heat dissipated. The tractive effort corresponding to this current is usually somewhat lower than that allowed by adhesion at very low speeds. Higher current values may be permitted for short periods of time (as when starting). These ratings are specified in intervals of time (minutes) and are posted on or near the load meter (ammeter) in the cab.
Maximum Speed Traction motors are also rated in terms of their maximum safe speed in r/min, which in turn limits locomotive speed. The gear ratio and wheel diameter are directly related to speed as well as to the maximum tractive effort and the minimum speed at which full horsepower can be developed at the continuous rating of the motors. Maximum locomotive speed may be expressed as follows:

(mi/h)_max = wheel diameter (in) × maximum motor r/min / (gear ratio × 336)    (11.2.5)

where the gear ratio is the number of teeth on the gear mounted on the axle divided by the number of teeth on the pinion mounted on the armature shaft.
Locomotive Compatibility The AAR has developed two standards in an effort to improve compatibility between locomotives of different model, manufacture, and ownership: a standard 27-point control system (Standard S-512, Table 11.2.2) and a standard control stand (RP-5132). The control stand has been supplanted by a control console in many road locomotives (Fig. 11.2.8).

Table 11.2.2 Standard Dimensions and Contact Identification of 27-Point Control Plug and Receptacle for Diesel-Electric Locomotives

Receptacle point   Function                                  Code     Wire size, AWG
 1                 Power reduction setup, if used            (PRS)    14
 2                 Alarm signal                              SG       14
 3                 Engine speed                              DV       14
 4*                Negative                                  N        14 or 10
 5                 Emergency sanding                         ES       14
 6                 Generator field                           GF       12
 7                 Engine speed                              CV       14
 8                 Forward                                   FO       12
 9                 Reverse                                   RE       12
10                 Wheel slip                                WS       14
11                 Spare                                              14
12                 Engine speed                              BV       14
13                 Positive control                          PC       12
14                 Spare                                              14
15                 Engine speed                              AV       14
16                 Engine run                                ER       14
17                 Dynamic brake                             B        14
18                 Unit selector circuit                     US       12
19                 2d negative, if used                      (NN)     12
20                 Brake warning light                       BW       14
21                 Dynamic brake                             BG       14
22                 Compressor                                CC       14
23                 Sanding                                   SA       14
24                 Brake control/power reduction control     BC/PRC   14
25                 Headlight                                 HL       12
26                 Separator blowdown/remote reset           SV/RR    14
27                 Boiler shutdown                           BS       14

* Receptacle point 4—AWG wire size 12 is "standard" and AWG wire size 10 is "alternate standard" at customer's request. A dab of white paint in the cover latch cavity must be added for ready identification of a no. 10 wire present in a no. 4 cavity.
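Eq. (11.2.5) above lends itself to a quick numerical check. The Python sketch below uses an assumed wheel diameter, gear ratio, and maximum motor speed chosen only for illustration; they are not figures from this handbook.

    # (mi/h)_max = wheel_diameter_in * max_motor_rpm / (gear_ratio * 336)   (Eq. 11.2.5)
    wheel_diameter_in = 40.0     # assumed wheel diameter, in
    max_motor_rpm = 2400.0       # assumed maximum safe motor speed, r/min
    gear_ratio = 62 / 15         # assumed axle-gear teeth / pinion teeth

    v_max = wheel_diameter_in * max_motor_rpm / (gear_ratio * 336)
    print(round(v_max, 1), "mi/h")   # about 69 mi/h for these assumed values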

Energy Conservation Efforts to improve efficiency and fuel economy have resulted in major changes in the prime movers, including more efficient turbocharging, fuel injection, and combustion. The auxiliary (parasitic) power demands have also been reduced by improvements, which include fans and blowers that move only the air required by the immediate demand, air compressors that declutch when unloaded, and a selective low-speed engine idle. Fuel-saver switches permit dropping trailing locomotive units off the line when less than maximum power is required, while allowing the remaining units to operate at maximum efficiency.

Fig. 11.2.8 Locomotive control console. (Electro-Motive Division, General Motors Corp.)

Emissions The U.S. Environmental Protection Agency has promulgated regulations aimed at reducing diesel locomotive emissions, especially oxides of nitrogen (NOx). These standards also include emissions reductions for hydrocarbons (HC), carbon monoxide (CO), particulate matter (PM), and smoke. The standards are executed in three "tiers": Tier 0 went into effect in 2000, Tier 1 in 2002, and Tier 2 in 2005. EPA locomotive emissions standards are published under 40 CFR § 85, § 89, and § 92.

ELECTRIC LOCOMOTIVES

Electric locomotives are presently in very limited use in North America. Freight locomotives are in dedicated service primarily for hauling coal or mineral products. Electric passenger locomotives are used in highdensity service in the northeastern United States. Electric locomotives draw power from overhead catenary or third-rail systems. While earlier systems used either direct current up to 3,000 V or single-phase alternating current at 11,000 V, 25 Hz, the newer systems in North America use 25,000 or 50,000 V at 60 Hz. The higher voltage levels can be used only where clearances permit. A three-phase power supply was tried briefly in this country and overseas many years ago, but was abandoned because of the complexity of the required double catenary. While the older dc locomotives used resistance control, ac locomotives have used a variety of systems including Scott-connected transformers; series ac motors; motor generators and dc motors; ignitrons and dc motors; silicon thyristors and dc motors; and, more recently, chopper control and inverter with ac motors. Examples of the various electric locomotives in service are shown in Table 11.2.1. An electric locomotive used in high-speed passenger service is shown in Fig. 11.2.9. High short-time ratings (Figs. 11.2.10 and 11.2.11)

Fig. 11.2.9 Modern electric high-speed locomotive. (Courtesy of Bombardier Transportation.)


render electric locomotives suitable for passenger service where high acceleration rates and high speeds are combined to meet demanding schedules.

Fig. 11.2.10 Speed-horsepower characteristics (half-worn wheels). [Gear ratio = 85/36; wheel diameter = 50 in (1,270 mm); ambient temperature = 60°F (15.5°C).] (Courtesy of M. Ephraim, ASME, Rail Transportation Division.)

Fig. 11.2.11 Tractive effort versus speed (half-worn wheels). [Voltage = 11.0 kV at 25 Hz; gear ratio = 85/36; wheel diameter = 50 in (1,270 mm).] (Courtesy of M. Ephraim, ASME, Rail Transportation Division.)

The modern electric locomotive in Fig. 11.2.9 obtains power for the main circuit and motor control from the catenary through a pantograph. A motor-operated switch provides for the transformer change from series to parallel connection of the primary windings to give a constant secondary voltage with either 25-kV or 12.5-kV primary supply. The converters for armature current consist of two asymmetric-type bridges for each motor. The traction-motor fields are each separately fed from a one-way connected field converter. Identical control modules separate the control of motors on each truck. Motor sets are therefore connected to the same transformer winding.
Wheel slip correction is also modularized, utilizing one module for each two-motor truck set. This correction is made with a complementary wheel-slip detection and correction system. A magnetic pickup speed signal is used for the basic wheel slip. Correction is enhanced by a magnetoelastic transducer used to measure force swings in the traction motor reaction rods. This system provides the final limit correction. To optimize the utilization of available adhesion, the wheel-slip control modules operate independently to allow the motor modules to receive different current references, depending on their respective adhesion conditions.
All auxiliary machines—air compressor, traction motor blower, cooling fans, etc.—are driven by three-phase, 400-V, 60-Hz induction motors powered by a static inverter that has a rating of 175 kVA at a 0.8 power factor. When cooling requirements are reduced, the control system automatically reduces the voltage and frequency supplied to the blower motors to the required level. As a backup, the system can be powered by the static converter used for the head-end power requirements of the passenger cars. This converter has a 500-kW, 480-V, three-phase, 60-Hz output capacity and has a built-in overload capacity of 10 percent for half of any 1-hour period.
Dynamic brake resistors are roof-mounted and are cooled by ambient airflow induced by locomotive motion. The dynamic brake capacity relative to train speed is shown in Fig. 11.2.12. Regenerative braking can be utilized with electric locomotives by returning braking energy to the distribution system.

Fig. 11.2.12 Dynamic-brake performance. (Courtesy of M. Ephraim, ASME, Rail Transportation Division.)

Many electric locomotives utilize traction motors identical to those used on diesel-electric locomotives. Some, however, use frame-suspended motors with quill-drive systems (Fig. 11.2.13). In these systems, torque is transmitted from the traction motor by a splined coupling to a quill shaft and rubber coupling to the gear unit. This transmission allows for greater relative movement between the traction motor (truck frame) and gear (wheel axle) and reduces unsprung weight.

Fig. 11.2.13 Frame-suspended locomotive transmission. (Courtesy of M. Ephraim, ASME, Rail Transportation Division.)

FREIGHT CARS

Freight Car Types

Freight cars are designed and constructed for either general service or specific ladings. The Association of American Railroads (AAR) has established design and maintenance criteria to assure the interchangeability of freight cars on all North American railroads. Freight cars that do not conform to AAR standards have been placed in service by


special agreement with the railroads over which the cars operate. The AAR “Manual of Standards and Recommended Practices” specifies dimensional limits, weights, and other design criteria for cars, which may be freely interchanged between North American railroads. This manual is revised annually by the AAR. Many of the standards are reproduced in the “Car and Locomotive Cyclopedia,” which is revised periodically. Safety appliances (such as end ladders, sill steps, and uncoupling levers), braking equipment, and certain car design features and maintenance practices must comply with the Safety Appliances Act, the Power Brake Law, and the Freight Car Safety Standards covered by the Federal Railroad Administration’s (FRA) Code of Federal Regulations (Title 49 CFR). Maintenance practices are set forth in the “Field Manual of the AAR Interchange Rules” and repair pricing information is provided in the companion “AAR Office Manual.” The AAR identifies most cars by nominal capacity and type. In some cases there are restrictions as to type of load (i.e., food, automobile parts, coil steel, etc.). Most modern cars have nominal capacities of either 70 or 100 tons.* A 100-ton car is limited to a maximum gross rail load of 263,000 lb (119.3 t*) with 61⁄2 by 12 in (165 by 305 mm) axle journals, four pairs of 36-in (915-mm) wheels and a 70-in (1,778-mm) rigid truck wheel base. Some 100-ton cars are rated at 286,000 lb and are handled by individual railroads on specific routes or by mutual agreement between the handing railroads. A 70-ton car is limited to a maximum gross rail load of 220,000 lb (99.8 t) with 6- by 11-in (152- by 280-mm) axle journals, four pairs of 33-in (838-mm) wheels with trucks having a 66-in (1,676-mm) rigid wheel base. On some special cars where height limitations are critical, 28-in (711-mm) wheels are used with 70-ton axles and bearings. In these cases wheel loads are restricted to 22,400 lb (10.2 t). Some special cars are equipped with two 125-ton two-axle trucks with 38-in (965-mm) wheels in four-wheel trucks having 7 by 14 in (178 by 356 mm) axle journals and 72-in (1,829-mm) truck wheel base. This application is prevalent in articulated double-stack (two-high) container cars. Interchange of these very heavy cars is by mutual agreement between the operating railroads involved. The following are the most common car types in service in North America. Dimensions given are for typical cars; actual measurements may vary. Box cars (Fig. 11.2.14a) come in six popular types: 1. Standard box cars may have either sliding or plug doors. Plug doors provide a tight seal from weather and a smooth interior. Unequipped box cars are usually of 70-ton capacity. These cars have tongue-and-groove or plywood lining on the interior sides and ends with nailable floors (either wood or steel with special grooves for locking nails). The cars carry typical general merchandise lading: packaged, canned, or bottled foodstuffs; finished lumber; bagged or boxed bulk commodities; or, in the past when equipped with temporary door fillers, bulk commodities such as grain [70-ton: L  50 ft 6 in (15.4 m), H  11 ft 0 in (3.4 m), W  9 ft 6 in (2.9 m), truck centers  40 ft 10 in (12.4 m)]. 2. Specially equipped box cars usually have the same dimensions as standard cars but include special interior devices to protect lading from impacts and over-the-road vibrations. Specially equipped cars may have hydraulic cushion units to dampen longitudinal shock at the couplers. 3. 
Insulated box cars have plug doors and special insulation. These cars can carry foodstuffs such as unpasteurized beer, produce, and dairy products. These cars may be precooled by the shipper and maintain a heat loss rate equivalent to 18F (0.558C) per day. They also can protect loads from freezing when operating at low ambient temperatures. 4. Refrigerated box cars are used where transit times are longer. These cars are equipped with diesel-powered refrigeration units and are primarily used to carry fresh produce and meat. They are often 100-ton cars [L  52 ft 6 in (16.0 m), H  10 ft 6 in (3.2 m), truck centers  42 ft 11 in (13.1 m)]. 5. “All door” box cars have doors which open the full length of the car for loading package lumber products such as plywood and gypsum board.

*

1 ton  1 short ton  2,000 lb; 1 t  1 metric ton  1,000 kg  2205 lb.

6. High cubic capacity box cars with an inside volume of 10,000 ft3 (283 m3) have been designed for light density lading, such as some automobile parts and low-density paper products. Box car door widths vary from 6 to 10 ft for single-door cars and 16 to 20 ft (4.9 to 6.1 m) for double-door cars. “All door” cars have clear doorway openings in excess of 25 ft (7.6 m). The floor height above rail for an empty uninsulated box car is approximately 44 in (1,120 mm) and for an empty insulated box car approximately 48 in (1,220 mm). The floor height of a loaded car can be as much as 3 in (76 mm) lower than the empty car. Covered hopper cars (Fig. 11.2.14b) are used to haul bulk commodities that must be protected from the environment. Modern covered hopper cars are typically 100-ton cars with roof hatches for loading and from two to six bottom outlets for discharge. Cars used for dense commodities, such as fertilizer or cement, have two bottom outlets, round roof hatches, and volumes of 3,000 to 4,000 ft3 (84.9 to 113.2 m3) [100-ton: L  39 ft 3 in (12.0 m). H  14 ft 10 in (4.5 m), truck centers  26 ft 2 in (8.0 m)]. Cars used for grain service (corn, wheat, rye, etc.) have three or four bottom outlets, longitudinal trough roof hatches, and volumes of from 4,000 to 5,000 ft3 (113 to 142 m3). Cars used for hauling plastic pellets have four to six bottom outlets (for pneumatic unloading with a vacuum system), round roof hatches, and volumes of from 5,000 to 6,000 ft3 (142 to 170 m3) [100-ton: L  65 ft 7 in (20.0 m), H  15 ft 5 in (4.7 m), truck centers  54 ft 0 in (16.5 m)]. Open-top hopper cars (Fig. 11.2.14c) are used for hauling bulk commodities such as coal, ore, or wood chips. A typical 100-ton coal hopper car will vary in volume depending on the light weight of the car and density of the coal to be hauled. Volumes range from 3,900 to 4,800 ft3 (110 to 136 m3). Cars may have three or four manually operated hopper doors or a group of automatically operated doors. Some cars are equipped with rotating couplers on one end of allow rotary dumping without uncoupling [100-ton: L  53 ft 1⁄2 in (16.2 m), H  12 ft 81⁄2 in (3.9 m), truck centers  40 ft 6 in (12.3 m)]. Cars intended for aggregate or ore service have smaller volumes for the more dense commodity. These cars typically have two manual bottom outlets [100-ton: L  40 ft 8 in (12.4 m), H  11 ft 10 in (3.6 m), truck centers  29 ft 9 in (9.1 m)]. Hopper cars used for wood chip service are configured for low-density loads. Volumes range from 6,500 to 7,500 ft3 (184 to 212 m3). High-side gondola cars (Fig. 11.2.14d) are open-top cars typically used to haul coal or wood chips. These cars are similar to open-top hopper cars in volume but require a rotary coupler on one end for rotary dumping to discharge lading since they do not have bottom outlets. Rotarydump coal gondolas are usually used in dedicated, unit-train service between a coal mine and an electric power plant. The length over coupler pulling faces is approximately 53 ft 1 in (16.2 m) to suit the standard coal dumper [100-ton: L  50 ft 51⁄2 in (15.4 m), H  11 ft 9 in (3.6 m), W  10 ft 5 in (3.2 m), truck centers  50 ft 4 in (15.3 m)]. Wood chip cars are used to haul chips from sawmills to paper mills or particle board manufacturers. These high-volume cars are either rotary-dumped or, when equipped with end doors, end-dumped [100-ton: L  62 ft 3 in (19.0 m), H  12 ft 7 in (3.8 m), W  10 ft 5 in (3.2 m), truck centers  50 ft 4 in (15.3 m)]. 
Rotary dump aggregate or ore cars, called “ore jennies,” have smaller volumes for the high-density load. Bulkhead flat cars (Fig. 11.2.14e) are used for hauling such commodities as packaged finished lumber, pipe, or, with special inward canted floors, pulpwood. Both 70- and 100-ton bulkhead flats are used. Typical deck heights are approximately 45 to 50 in (1,143 to 1,270 mm) [100-ton: L  66 ft 2 in (20.2 m), W  10 ft 5 in (3,175 m), H  14 ft 2 9⁄16 in (4.3 m), truck centers  48 ft 2 in (14.7 m)]. Special “center beam” bulkhead flat cars designed for pulpwood and lumber service have a full-height, longitudinal beam from bulkhead to bulkhead. Tank cars (Fig 11.2.14f ) are used for liquids, compressed gases, and other ladings, such as sulfur, which can be loaded and unloaded in a molten state. Nonhazardous liquids such as corn syrup, crude oil, and mineral spring water are carried in nonpressure cars. Cars used to haul hazardous substances such as liquefied petroleum gas (LPG), vinyl chloride, and anhydrous ammonia are regulated by the U.S. Department of Transportation. Newer and earlier-built retrofitted cars equipped for hazardous commodities have safety features including safety valves;


Fig. 11.2.14 Typical freight cars.

especially designed top and bottom “shelf” couplers, which increase the interlocking effect between couplers and decrease the danger of disengagement due to derailment; head shields on the ends of the tank to prevent puncturing; bottom outlet protection if bottom outlets are used; and thermal insulation and jackets to reduce the risk of rupturing in a fire. These features resulted from industry-government studies in the RPIAAR Tank Car Safety Research and Test Program. Cars for asphalt, sulfur, and other viscous liquid service have heating coils on the shell so that steam may be used to liquefy the lading for discharge [pressure cars, 100-ton: volume  20,000 gal (75.7 m3), L  59 ft 113⁄4 in (18.3 m), truck centers  49 ft 1⁄4 in (14.9 m)] [nonpressure, 100-ton: volume  21,000 gal (79.5 m3), L  51 ft 31⁄4 in (15.6 m), truck centers  38 ft 111⁄4 in (11.9 m)]. Conventional intermodal cars are 89 ft 4 in (27.2-m) intermodal flat cars equipped to haul one 45-ft (13.7-m) and one 40-ft (12.2-m) trailer with or without end-mounted refrigeration units, two 45-ft (13.7-m) trailers (dry vans), or combinations of containers from 20 to 40 ft (6.1 to 12.2 m) in length. Hitches to support trailer fifth wheels may be fixed for trailer-only cars or retractable for conversion to haul containers or to facilitate driving

trailers onto the cars in the rare event where “circus” loading is still required. In some cars a drawbar connects two units so three 57-ft trailers can be loaded, with one trailer over the drawbar. Trailer hauling service (Fig. 11.2.14g) is called TOFC (trailer on flat car); container service (Fig. 11.2.14h) is called COFC (container on flat car) [70-ton: L  89 ft 4 in (27.1 m), W  10 ft 3 in (3.1 m), truck centers  64 ft 0 in (19.5 m)]. Introduction of larger trailers in highway service has led to the development of alternative TOFC and COFC cars. Other techniques involve articulation of well-type car bodies or skeletonized spine car bodies, assembled into multiunit cars for lift-on loading and lift-off unloading (stand-alone well car, Fig. 11.2.14n and articulated well car, Fig. 11.2.14o). These cars are typically composed of from three to ten units. Well cars consist of a center well for double-stacked containers [10-unit spine car; L  46 ft 6 in (14.2 m) per end unit, L  465 ft 31⁄2 in (141.8 m)]. In the 1980s another approach was to haul larger trailers on two axle spine cars. These cars, used either singly or in multiple combinations, can haul a single trailer from 40 to 48 ft long (12.2 to 14.6 m) with endmounted refrigeration unit and 36 or 42-in (914 to 1070 mm) spacing between the kingpin and the front of the trailer.


Bilevel and trilevel auto rack cars (Fig. 11.2.14i) are used to haul finished automobiles and other vehicles. Most recent designs of these cars feature fully enclosed racks to provide security against theft and vandalism [70-ton: L = 89 ft 4 in (27.2 m), H = 18 ft 11 in (5.8 m), W = 10 ft 7 in (3.2 m), truck centers = 64 ft 0 in (19.5 m)]. Because of increased clearances provided by some railroads for double-stack cars, auto rack cars with a height of 20 ft 2 in (6.1 m) are now being produced. Mill gondolas (Fig. 11.2.14j) are 70- or 100-ton open-top cars principally used to haul pipe, structural steel, scrap metal, and, when specially equipped, coils of aluminum or tinplate and other steel materials [100-ton: L = 52 ft 6 in (16.0 m), W = 9 ft 6 in (2.9 m), H = 4 ft 6 in (1.4 m), truck centers = 43 ft 6 in (13.3 m)]. General-purpose or machinery flat cars (Fig. 11.2.14k) are 70- or 100-ton cars used to haul machinery such as farm equipment and highway tractors. These cars usually have wood decks for nailing lading restraint dunnage. Some heavy-duty six-axle cars are used for hauling off-highway vehicles such as army tanks and mining machinery [100-ton, four-axle: L = 60 ft 0 in (18.3 m), H = 3 ft 9 in (1.1 m), truck centers = 42 ft 6 in (13.0 m)] [200-ton, 8-axle: L = 44 ft 4 in (13.5 m), H = 4 ft 0 in (1.2 m), truck centers = 33 ft 9 in (10.3 m)]. Depressed-center flat cars (Fig. 11.2.14l) are used for hauling transformers and other heavy, large materials that require special clearance considerations. Depressed-center flat cars may have four- or six-axle or dual four-axle trucks with span bolsters, depending on weight requirements. Schnabel cars (Fig. 11.2.14m) are special cars for transformers and power plant components. With these cars the load itself provides the

center section of the car structure during shipment. Some Schnabel cars are equipped with hydraulic controls to lower the load for height restrictions and shift the load laterally for wayside restrictions [472-ton: L  22 ft 10 in to 37 ft 10 in (7.0 to 11.5 m), truck centers  55 ft 6 in to 70 ft 6 in (16.9 to 21.5 m)]. Schnabel cars must be operated in special trains. Freight Car Design

The AAR provides specifications to cover minimum requirements for design and construction of new freight cars. Experience has demonstrated that the AAR specifications alone do not ensure an adequate design for all service conditions. The designer must be familiar with the specific service and increase the design criteria for the particular car above the minimum criteria provided by the AAR. The AAR requirements include stress calculations for the load-carrying members of the car and physical tests that may be required at the option of the AAR Equipment Engineering Committee (EEC) approving the car design. In the application for approval of a new or untried type of car, the EEC may require either additional calculations or tests to assess the design’s ability to meet the AAR minimum requirements. These tests might consist of a static compression test of 1,000,000 lb (4.4 MN), a static vertical test applied at the coupler, and impact tests simulating yard impact conditions. In some cases, it is advisable to operate an instrumented prototype car in service to detect problems which might result from unexpected track or train-handling input forces. The car design must comply with width and height restrictions shown in AAR clearance plates furnished in the specifications, Fig. 11.2.15. In addition, there are limitations on

Fig. 11.2.15 AAR Equipment diagrams. (a) Plate B unrestricted interchange service; (b) Plate C limited interchange service; (c) Plate H controlled interchange service.


the height of the center of gravity of the loaded car and on the vertical and horizontal curving capability allowed by the clearance provided at the coupler. The AAR provides a method of calculating the minimum radius curve that the car design can negotiate when coupled to another car of the same type or to a standard AAR “base” car. In the case of horizontal curves, the requirements are based on the length of the car over the pulling faces of the couplers.
Freight cars are designed to withstand single-end impact or coupling loads on the basis of the type of energy absorption, cushioning, provided in the car design. Conventional friction, elastomer, or combination draft gears or short-travel hydraulic cushion units that provide less than 6 in (152 mm) of travel require a structure capable of withstanding a 1,250,000-lb (5.56-MN) impact load. For cars with hydraulic units that provide greater than 14 in (356 mm) of travel, the required design impact load is 600,000 lb (2.7 MN). In all cases, the structural connections to the car must be capable of withstanding a static compressive (squeeze) end load of 1,000,000 lb (4.44 MN) or a dynamic (impact) compressive load of 1,250,000 lb (5.56 MN).
The AAR has adopted requirements for unit trains of high-utilization cars to be designed for 3,000,000 mi (4.8 Gm) of service based upon fatigue life estimates. General-interchange cars that accumulate less mileage in their life should be designed for 1,000,000 mi (1.6 Gm) of service. Road environment spectra for various locations within the car are being developed for different car designs for use in this analysis. The fatigue strengths of various welded connections are provided in the AAR “Manual of Standards and Recommended Practices,” Section C, Part II.
Many of the design equations and procedures are available from the AAR. Important information on car design and approval testing is contained in the AAR “Manual of Standards and Recommended Practices,” Section C-II M-1001, Chapter XI.

Freight-Car Suspension

Most freight cars are equipped with standard three-piece trucks (Fig. 11.2.16) consisting of two side-frame castings and one bolster casting. Side-frame and bolster designs are subjected to both static and fatigue test requirements specified by the AAR. The bolster casting is equipped with a female centerplate bowl upon which the car body rests and with side bearings generally located 25 in (635 mm) on each side of the centerline. In most cases, the side bearings have clearance to the car body and are equipped with either flat sliding plates or rollers. In some cases, constant-contact side bearings provide a resilient material between the car body and the truck bolster.

Fig. 11.2.16 Three-piece freight car truck. (Courtesy AAR Research and Test Department.)

The centerplate arrangement may have a variety of styles of wear plates or friction materials and a vertical loose or locked pin between the truck centerplate and the car body. Truck springs nested into the bottom of the side-frame opening support the end of the truck bolster. Requirements for spring designs and the grouping of springs are generally specified by the AAR. Historically, the damping provided within the spring group has utilized a combination of springs and friction wedges. In addition to friction wedges, in more recent years, some cars have been equipped with hydraulic damping devices that parallel the spring group. A few trucks have a “steering” feature that includes an interconnection between axles to increase the lateral interaxle stiffness and decrease the interaxle yaw stiffness. Increased lateral stiffness improves the lateral stability and decreased yaw stiffness improves the curving characteristics.


Freight-Car Wheel-Set Design

A freight-car wheel set consists of two wheels, one axle, and two bearings. Cast- and wrought-steel wheels are used on freight cars in North America (AAR “Manual of Standards and Recommended Practice,” Sec. G). Freight-car wheels are subjected to thermal loads from braking, as well as mechanical loads at the wheel-rail interface. Experience with thermal damage to wheels has led to the requirement for “low stress” or curved-plate wheels (Fig. 11.2.17). These wheels are less susceptible to the development of circumferential residual tensile stresses which render the wheel vulnerable to sudden failure if a flange or rim crack occurs. New wheel designs introduced for interchange service must be evaluated by using a finite-element technique employing both thermal and mechanical loads (AAR S-660).

Fig. 11.2.17 Wheel-plate designs. (a) Flat plate; (b) parabolic plate; (c) S-plate curved wheel.

Freight-car wheels range in diameter from 28 to 38 in (711 to 965 mm), depending on car weight (Table 11.2.3). The old AAR standard tread profile (Fig. 11.2.18a) has been replaced with the AAR-1B (Fig. 11.2.18c) profile, which represents a “worn” profile to minimize early tread loss due to wear and provide a more stable profile over the life of the tread. Several variant tread profiles, including the AAR-1B, were developed from the basic Heumann design, Fig. 11.2.18b. One of these, for application in Canada, provided increasing conicity into the throat of the flange, similar to the Heumann profile. This reduces curving resistance and extends wheel life.
Wheels are also specified by chemistry and heat treatment. Low-stress wheel designs of Classes B and C are required for freight cars. Class B wheels have a carbon content of 0.57 to 0.67 percent and are rim-quenched. Class C wheels have a carbon content of 0.67 to 0.77 percent and are also rim-quenched. Rim quenching provides a hardened running surface for a long wear life. Lower carbon levels than those in Class B may be used where thermal cracking is experienced, but freight-car equipment generally does not require their use.
Axles used in interchange service are solid steel forgings with raised wheel seats. Axles are specified by journal size for different car capacities, Table 11.2.3. Most freight car journal bearings are grease-lubricated, tapered-roller bearings (see Sec. 8). Current bearing designs eliminate the need for periodic field lubrication. Wheels are mounted and secured on axles with an interference fit. Bearings are mounted with an interference fit and retained by an end cap bolted to the end of the axle. Wheels and bearings for cars in interchange service must be mounted by an AAR-inspected and -approved facility.

Table 11.2.3 Wheel and Journal Sizes of Eight-Wheel Cars

Nominal car capacity, ton | Maximum gross weight, lb | Journal (bearing) size, in | Bearing AAR class | Wheel diameter, in
50   | 177,000 | 5½ × 10 | D | 33
70*  | 179,200 | 6 × 11  | E | 28
70   | 220,000 | 6 × 11  | E | 33
100  | 263,000 | 6½ × 12 | F | 36
110† | 286,000 | 6½ × 12 | F | 36
110† | 286,000 | 6½ × 9  | K | 36
125† | 315,000 | 7 × 12  | G | 38

* Load limited by wheel rating.
† Not approved for free interchange.

Special Features

Many components are available to enhance the usefulness of freight cars. In most cases, the design or performance of the component is specified by the AAR. Coupler Cushioning Switching of cars in a classification yard can result in relatively high coupler forces at the time of the impact between the moving and standing cars. Nominal coupling speeds of 4 mi/h (6.4 km/h) or less are sometimes exceeded, with lading damage a possible result. Conventional cars are equipped with an AARapproved draft gear, usually a friction-spring energy-absorbing device, mounted between the coupler and the car body. The rated capacity of draft gears ranges between 20,000 ft-lb (27.1 kJ) for earlier units to over 65,000 ft  lb (88.1 kJ) for later designs. Impact forces of 1,250,000 lb (5.56 MN) can be expected when a moving 100-ton car strikes standing cars at 8 to 10 mi/h (12.8 to 16 km/h). Hydraulic cushioning devices are available to reduce the impact force to 500,000 lb (2.22 MN) at coupling speeds of 12 to 14 mi/h (19 to 22 km/h). These devices may be mounted either at each end of the car (end-of-car devices) or in a long beam which extends from coupler to coupler (sliding centersill devices). Lading Restraint Many forms of lading restraint are available, from tie-down chains for automobiles on rack cars to movable bulkheads for boxcars. Most load-restraining devices are specified by the AAR “Manual of Standards and Recommended Practices” and approved car loading arrangements are specified in AAR “Loading Rules,” a multivolume publication for enclosed, open-top, and TOFC and COFC cars. Covered Hopper Car Discharge Gates The majority of covered hopper cars are equipped with rack-and-pinion–operated sliding gates, which allow the lading to discharge by gravity between the rails. These gates can be operated manually, with a simple bar or a torque-multiplying wrench, or mechanically with an impact or hydraulic wrench. Many special covered hopper cars have discharge gates with nozzles and metering devices for vacuum or pneumatic unloading. Coupling Systems The majority of freight cars are connected with AAR standard couplers. A specification has been developed to permit the use of alternative coupling systems such as articulated connectors, drawbars, and rotary-dump couplers; see AAR M-215.
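To put the coupling-speed and draft-gear figures above in perspective, the short sketch below estimates the kinetic energy a loaded 100-ton car brings to a coupling at several speeds using the elementary relation KE = ½mv². The 263,000-lb gross weight and the 65,000 ft·lb draft-gear rating are the values quoted above; everything else is an illustrative assumption, and only a portion of this energy must be absorbed by any one draft gear, since much of it goes into moving the struck cars.

```python
# Illustrative estimate of coupling impact energy for a loaded 100-ton car.
# The 263,000-lb gross weight and the draft-gear rating are quoted in the text;
# the speeds and the single-gear comparison are assumptions for illustration.

G_FTPS2 = 32.17       # acceleration of gravity, ft/s^2
MPH_TO_FTPS = 1.467   # 1 mi/h = 1.467 ft/s

def coupling_kinetic_energy(weight_lb, speed_mph):
    """Kinetic energy (ft-lb) of a car of the given weight moving at speed_mph."""
    mass_slugs = weight_lb / G_FTPS2
    v = speed_mph * MPH_TO_FTPS
    return 0.5 * mass_slugs * v**2

if __name__ == "__main__":
    gross_weight = 263_000        # lb, loaded 100-ton car
    draft_gear_capacity = 65_000  # ft-lb, upper rating quoted in the text
    for speed in (4, 6, 8, 10, 12):
        ke = coupling_kinetic_energy(gross_weight, speed)
        print(f"{speed:>2} mi/h: impact energy ~{ke:,.0f} ft-lb "
              f"({ke / draft_gear_capacity:.1f}x one draft gear's rated capacity)")
```

Even at the nominal 4-mi/h coupling speed the car's kinetic energy exceeds a single draft gear's rated capacity, which is why exceeding that speed quickly leads to lading damage and why long-travel hydraulic cushioning is attractive.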

Fig. 11.2.18 Wheel-tread designs. (a) Obsolete standard AAR; (b) Heumann; (c) new AAR-1B.

Freight-Train Braking

The retarding forces acting on a railway train are rolling and mechanical resistance, aerodynamic drag, curvature, and grade, plus that force resulting from friction of brake shoes rubbing the wheel treads. On locomotives so equipped, dynamic or rheostatic brake using the traction motors as generators can provide all or a portion of the retarding force to control train speed. Quick-action automatic air brakes of the type specified by the AAR are the common standard in North America. With the automatic air brake system, the brake pipe extends through every vehicle in the train, connected by hoses between each locomotive unit and car. The front and rear end brake pipe angle cocks are closed. Air pressure is provided by compressors on the locomotive units to the main reservoirs, usually at 130 to 150 lb/in2 (900 to 965 kPa). (Pressure values are gage pressures.) The engineer’s automatic brake valve, in “release” position, provides air to the brake pipe on freight trains at reduced pressure, usually at 75, 80, 85, or 90 lb/in2 (520, 550, 585, or 620 kPa), depending on the type of service, train weight, grades, and speeds at which a train will operate. In passenger service, brake pipe pressure is usually 90 or 110 lb/in2 (620 to 836 kPa). When brake pipe pressure is increased, the control valve allows the reservoir capacity on each car and locomotive to be charged and at the same time connects the brake cylinders to exhaust. Brake pipe pressure is reduced when the engineer’s brake valve is placed in a “service” position, and the control valve cuts off the charging function and allows the reservoir air on each car to flow into the brake cylinder. This moves the piston and, through a system of levers and rods, pushes the brake shoes against the wheel treads. When the engineer’s automatic brake valve is placed in the emergency position, the brake pipe pressure (BP) is reduced very rapidly. The control valves on each car move to the emergency-application position and rapidly open a large vent valve, exhausting brake pipe pressure to atmosphere. This will serially propagate the emergency application through the train at from 900 to 950 ft/s (280 to 290 m/s.) With the control valve in the emergency position both auxiliary- and emergencyreservoir volumes (pressures) equalize with the brake cylinder, and higher brake cylinder pressure (BCP) results, building up at a faster rate than in service applications. The foregoing briefly describes the functions of the fundamental automatic air brake based on the functions of the control valve. AARapproved brake equipment is required on all freight cars used in interchange service. The functions of the control valve have been refined to permit the handling of longer trains by more uniform brake performance. Important improvements in this design have been (1) reduction of the time required to apply the brakes on the last car of a train,


(2) more uniform and faster release of the brakes, and (3) availability of emergency application with brake pipe pressure greater than 40 lb/in2 (275 kPa). Faster brake response has been achieved by a number of railroads using “distributed power,” where radio-controlled locomotives are spaced within the train or placed at both ends of the train.
The braking ratio of a car is defined as the ratio of brake shoe (normal) force to the car’s rated gross weight. Two types of brake shoes, high-friction composition and high-phosphorus cast iron, are used in interchange service. As these shoes have very different friction characteristics, different braking ratios are required to assure uniform train braking performance (Table 11.2.4). Actual or net shoe forces are measured with calibrated devices. The calculated braking ratio R (nominal) is determined from the equation

R = PLANE × 100/W

(11.2.6)

where P = brake cylinder pressure, 50 lb/in2 gage; L = mechanical ratio of brake levers; A = brake cylinder area, in2; N = number of brake cylinders; E = brake rigging efficiency = Er × Eb × Ec; and W = car weight, lb. To estimate rigging efficiency, consider each pinned joint and horizontal sliding joint as a 0.01 loss of efficiency; i.e., in a system with 20 pinned and horizontal sliding joints, Er = 0.80. For unit-type (hangerless) brake beams Eb = 0.90 and for the brake cylinder Ec = 0.95, giving an overall efficiency of 0.684 or 68.4 percent. The total retarding force in pounds per ton may be taken as:

F = (PLef/W) + FgG

(11.2.7)

where P = total brake-cylinder piston force, lbf; L = multiplying ratio of the leverage between cylinder pistons and wheel treads; ef = product of the coefficient of brake shoe friction and brake rigging efficiency; W = loaded weight of vehicle, tons; Fg = force of gravity, 20 lb/ton per percent of grade; G = ascending grade, percent. Stopping distance can be found by adding the distance covered during the time the brakes are fully applied to the distance covered during the equivalent instantaneous application time.

S5 B

WnBn s pa /pndef R R 1 ¢ ≤ 6 sGd Wa 2000 1 1.467 t1 B V1 2 ¢

R 1 2000G t1 ≤ R 91.1 2

With 50 lb/in2 brake cylinder pressure Percent of gross rail load Type of brake rigging and shoes

(11.2.8)

where S  stopping distance, ft; V1  initial speed when brake applied, mi/h; V2  speed, mi/h, at time t1; Wn  weight on which

Braking Ratios, AAR Standard S-401

Conventional body-mounted brake rigging or truck-mounted brake rigging using levers to transmit brake cylinder force to the brake shoes Cars equipped with cast iron brake shoes Cars equipped with high-friction composition brake shoes Direct-acting brake cylinders not using levers to transmit brake cylinder force to the brake shoes Cars equipped with cast iron brake shoes Cars equipped with high-friction composition brake shoes Cabooses† Cabooses equipped with cast iron brake shoes Cabooses equipped with high-friction composition brake shoes

11-31

Hand brake* Minimum percent of gross rail load

Min.

Max.

Maximum percent of light weight

13 6.5

20 10

53 30

13 11

6.5

10

33

11

35–45 18–23

* Hand brake force applied at the horizontal hand brake chain with AAR certified or AAR approved hand brake. † Effective for cabooses ordered new after July 1, 1982, hand brake ratios for cabooses to the same as lightweight ratios for cabooses. NOTE: Above braking ratios also apply to cars equipped with empty and load brake equipment.

11-32

RAILWAY ENGINEERING

braking ratio Bn is based, lb (see Table 11.2.5 for values of Wn for freight cars; for passenger cars and locomotives: Wn is based on

Table 11.2.5 Capacity versus Load Limit for Railroad Freight Cars Capacity, ton

Wn, 1,000 lb

50 70 100 110 125

177 220 263 286 315

empty or ready-to-run weight); Bn  braking ratio (total brake shoe force at stated brake cylinder, lb/in2, divided by Wn); Pn  brake cylinder pressure on which Bn is based, usually 50 lb/in2; Pa  full brake cylinder pressure; e  overall rigging and cylinder efficiency, decimal; f  typical friction of brake shoes, see below; R  total resistance, mechanical plus aerodynamic and curve resistance, lb/ton; G  grade in decimal,  upgrade,  downgrade; t1  equivalent instantaneous application time, s. Equivalent instantaneous application time is that time on a curve of average brake cylinder buildup versus time for a train or car where the area above the buildup curve is equal to the area below the curve. A straight-line buildup curve starting at zero time would have a t1 of half the total buildup time. The friction coefficient f varies with the speed; it is usually lower at high speed. To a lesser extent, it varies with brake shoe force and with the material of the wheel and shoe. For stops below 60 mi/h (97 km/h), a conservative figure for a high-friction composition brake shoe on steel wheels is approximately ef  0.30

(11.2.9)

In the case of high-phosphorus iron shoes, this figure must be reduced by approximately 50 percent. Pn is based on 50 lb/in2 (345 kPa) air pressure in the cylinder; 80 lb/in2 is a typical value for the brake pipe pressure of a fully charged freight train. This will give a 50 lb/in2 (345 kPa) brake cylinder pressure during a full service application on AB equipment, and a 60 lb/in2 (415 kPa) brake cylinder pressure with an emergency application.
To prevent wheel sliding, FR ≤ fW, where FR = retarding force at wheel rims resisting rotation of any pair of connected wheels, lb; f = coefficient of wheel-rail adhesion or friction (a decimal); and W = weight upon a pair of wheels, lb. Actual or adhesive weight on wheels when the vehicle is in motion is affected by weight transfer (force transmitted to the trucks and axles by the inertia of the car body through the truck center plates), center of gravity, and vertical oscillation of body weight upon truck springs. The value of f varies with speed as shown in Fig. 11.2.19. The relationship between the required coefficient f of wheel-rail adhesion to prevent wheel sliding and the rate of retardation A in miles per hour per second may be expressed by A = 21.95f. There has been some encouraging work to develop an electric brake that may eventually render the present pneumatic brake systems obsolete.
Test Devices Special devices have been developed for testing brake components and cars on a repair track.
End-of-Train Devices To eliminate the requirement for a caboose (crew car) at the end of the train, special electronic devices have been developed to transmit the end-of-train brake-pipe pressure to the locomotive operator by telemetry.
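Before leaving braking, a worked illustration of Eqs. (11.2.6) to (11.2.9) may be useful. The sketch below follows the equation forms given above, including the reconstructed form of Eq. (11.2.8); the lever ratio, cylinder size, joint count, braking ratio, resistance, grade, and application time are illustrative assumptions, not AAR requirements.

```python
# Illustrative brake calculations for a hypothetical loaded 100-ton car.
# Equation forms follow Eqs. (11.2.6)-(11.2.9) as given above; every input
# value below is an assumption chosen only to show the arithmetic.

import math

def rigging_efficiency(n_joints=20, e_beam=0.90, e_cyl=0.95):
    # Each pinned or horizontal sliding joint is taken as a 0.01 efficiency loss.
    e_rigging = 1.0 - 0.01 * n_joints
    return e_rigging * e_beam * e_cyl            # = 0.684 for the stated values

def braking_ratio_pct(P, L, A, N, E, W):
    # Eq. (11.2.6): R = P*L*A*N*E*100/W, percent of gross rail load.
    return P * L * A * N * E * 100.0 / W

def stopping_distance_ft(V1, t1, Wn, Bn, pa, pn, e, f, Wa, R, G):
    # Eq. (11.2.8) as reconstructed above; G > 0 for an upgrade.
    decel_application = (R + 2000.0 * G) / 91.1          # mi/h per s before brakes act
    V2 = V1 - decel_application * t1                     # speed when brakes fully applied
    retard_per_lb = (Wn * Bn * (pa / pn) * e * f) / Wa + R / 2000.0 + G
    s_braking = 0.0334 * V2**2 / retard_per_lb
    s_application = 1.467 * t1 * (V1 - (R + 2000.0 * G) * t1 / (2 * 91.1))
    return s_braking + s_application

if __name__ == "__main__":
    E = rigging_efficiency()                    # 0.684, as in the text's example
    W = 263_000                                 # lb, gross rail load of a 100-ton car
    A = math.pi / 4 * 10.0**2                   # in^2, assumed 10-in brake cylinder
    R_pct = braking_ratio_pct(P=50, L=9.8, A=A, N=1, E=E, W=W)  # assumed lever ratio
    print(f"rigging/cylinder efficiency = {E:.3f}")
    print(f"nominal braking ratio       = {R_pct:.1f} % of gross rail load")

    S = stopping_distance_ft(V1=40.0, t1=3.0,      # 40 mi/h, assumed 3-s application time
                             Wn=263_000, Bn=0.10,  # Table 11.2.5 weight, composition-shoe ratio
                             pa=50.0, pn=50.0,     # full service on the 50-lb/in2 basis
                             e=E, f=0.30 / E,      # chosen so that e*f = 0.30 per Eq. (11.2.9)
                             Wa=263_000, R=5.0,    # assumed 5 lb/ton total resistance
                             G=0.0)                # level track
    print(f"full-service stop from 40 mi/h ~ {S:,.0f} ft")
```

For these assumed inputs the nominal braking ratio works out to about 10 percent of gross rail load and the full-service stop from 40 mi/h to roughly 1,800 ft, which is consistent in magnitude with ordinary freight practice.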

PASSENGER EQUIPMENT

During the past two decades most mainline or long-haul passenger service in North America has become a function of government agencies, i.e., Amtrak in the United States and Via Rail in Canada. Equipment for

Fig. 11.2.19 Typical wheel-rail adhesion. Track has jointed rails. (Air Brake Association.)

intraurban service is divided into three major categories: commuter rail, heavy rail rapid transit, and light rail transit, depending upon the characteristics of the service. Commuter rail equipment operates on conventional railroad rights-of-way, usually intermixed with other long-haul passenger and freight traffic. Heavy-rail rapid transit (HRT), often referred to as “metro,” operates on a dedicated right-of-way, which is commonly in subways or on elevated structures. Light-rail transit (LRT) utilizing light rail vehicles (LRV) has evolved from the older trolley or streetcar concepts and may operate in any combination of surface, subway, and elevated dedicated rights-of-way, semireserved surface rightsof-way with grade crossing, and shared rights-of-way with other traffic on surface streets. In a few cases LRT shares the trackage with freight operations, which are time-separated to comply with FRA regulations. Since mainline and commuter rail equipment operates over conventional railroad rights-of-way, the structural design is heavy to provide the FRA-required “crashworthiness” of vehicles in the event of collisions with other trains or at grade crossings with automotive vehicles. Although HRT vehicles are designed to stringent structural criteria, the requirements are somewhat different, since the operation is separate from freight equipment and there are usually no highway grade crossings. Minimum weight is particularly important for transit vehicles so that demanding schedules over lines with close station spacing can be met with minimum energy consumption. Main-Line Passenger Equipment There are four primary passenger train sets in operation across the world: diesel locomotive hauled, diesel multiple unit (DMU), electric locomotive hauled, and electric MU (EMU). In recent years the design of mainline passenger equipment has been controlled by specifications provided by APTA and the operating authority. Most of the newer cars provided for Amtrak have had stainless steel structural components. These cars have been designed to be locomotive-hauled and to use a separate 480-V threephase power supply for heating, ventilation, air conditioning, food car services, and other control and auxiliary power requirements. Figure 11.2.20a shows an Amtrak coach car of the Superliner class; Fig. 11.2.20b shows an intercity high-speed coach. Trucks for passenger equipment are designed to provide a superior ride to freight-car trucks. As a result, passenger trucks include a form of “primary” suspension to isolate the wheel set from the frame of the truck. A softer “secondary” suspension is provided to isolate the truck from the car body. In most cases, the primary suspension uses either coil springs, elliptical springs, or elastomeric components. The secondary suspension generally utilizes either large coil ring springs or pneumatic springs with special leveling valves to control the height of the car body. Hydraulic dampers are also applied to improve the vertical and lateral ride quality.


Fig. 11.2.20 Mainline passenger cars. (a) Double-deck coach; (b) Single-deck coach. (Courtesy of Bombardier Transportation.)


Fig. 11.2.21 Commuter rail car. (Courtesy of Bombardier Transportation.) Commuter Rail Passenger Equipment Commuter rail equipment can be either locomotive-hauled or self-propelled (Fig. 11.2.21). Some locomotive-hauled equipment is arranged for “push-pull” service. This configuration permits the train to be operated with the locomotive either pushing or pulling the train. For push-pull service, some of the passenger cars must be equipped with cabs to allow the engineer to operate the train from the end opposite the locomotive during the push operation. All cars must have control trainlines to connect the lead (cab) car to the trailing locomotive. Locomotive-hauled commuter rail cars use

AAR-type H couplers. Most self-propelled (single- or multiple-unit) cars use other automatic coupler designs, which can be mechanically coupled to an AAR coupler with an adaptor. Heavy Rail Rapid Transit Equipment This equipment is used on traditional subway/elevated properties in such cities as Boston, New York, Philadelphia (Fig. 11.2.22), and Chicago in a semiautomatic mode of operation over dedicated rights-of-way which are constrained by limiting civil features. State-of-the-art subway-elevated properties include such cities as Washington, Atlanta, Miami, and San Francisco,

Fig. 11.2.22 Heavy rail transit car. (Courtesy of Bombardier Transportation.)


where the equipment provides highly automated modes of operation on rights-of-way with generous civil alignments. The cars can operate bidirectionally in multiple with as many as 12 or more cars controlled from the leading cab. They are electrically propelled, usually from a dc third rail, which makes contact with a shoe insulated from and supported by the frame of the truck. Occasionally, roof-mounted pantographs are used. Voltages range from 600 to 1,500 V dc. The cars range from 48 to 75 ft (14.6 to 22.9 m) over the anticlimbers, the longer cars being used on the newer properties. Passenger seating varies from 40 to 80 seats per car, depending upon length and the local policy (or preference) regarding seated to standee ratio. Older properties require negotiation of curves as sharp as 50 ft (15.2 m) minimum radius with speeds up to only 50 mi/h (80 km/h) on tangent track, while newer properties usually have no less than 125 ft (38.1 m) minimum radius curves with speeds up to 75 mi/h (120 km/h) on tangent track. All North American properties operate on standard-gage track with the exception of the 5 ft 6 in (1.7 m) San Francisco Bay Area Rapid Transit (BART), the 4 ft 10 in (1.5 m) Toronto Transit Subways, and the 5 ft 21⁄2 in (1.6 m) Philadelphia Southeastern Pennsylvania Transportation Authority (SEPTA) Market-Frankford line. Grades seldom exceed 3 percent, and 1.5 to 2.0 percent is the desired maximum. Typically, newer properties require maximum acceleration rates of between 2.5 and 3.0 mi/(h  s) [4.0 to 4.8 km/(h  s)] as nearly independent of passenger loads as possible from 0 to approximately 20 mi/h (32 km/h). Depending upon the selection of motors and gearing, this rate falls off as speed is increased, but rates can generally be controlled at a variety of levels between zero and maximum. Deceleration is typically accomplished by a blended dynamic and electropneumatic friction tread or disk brake, although a few properties use an electrically controlled hydraulic friction brake. In some cases energy is regenerated and sent back into the power system. All of these systems usually provide a maximum braking rate of between 3.0 and 3.5 mi/(h  s) [4.8 and 5.6 km/(h  s)] and are made as independent of passenger loads as possible by a load-weighing system that adjusts braking effort to suit passenger loads. Some employ regenerative braking to supplement the other brake systems. Dynamic braking is generally used as the primary stopping mode and is effective from maximum speed down to 10 mi/h (16 km/h) with friction braking supplementation as the characteristic dynamic fade occurs. The friction brakes provide the final stopping forces. Emergency braking rates depend upon line constraints, car subsystems, and other factors, but generally rely on the maximum retardation force that can be


provided by the dynamic and friction brakes within the limits of available wheel-to-rail adhesion. Acceleration and braking on modern properties are usually controlled by a single master controller handle that has power positions in one direction and a coasting or neutral center position and braking positions in the opposite direction. A few properties use foot-pedal control with a “deadman” pedal operated by the left foot, a brake pedal operated by the right foot, and an accelerator (power) pedal also operated by the right foot. In either case, the number of positions depends upon property policy and control subsystems on the car. Control elements include motor-current sensors and some or all of the following: speed sensors, rate sensors, and load-weighing sensors. Signals from these sensors are processed by an electronic control unit (ECU), which provides control functions to the propulsion and braking systems. The propulsion systems currently include pilot-motor-operated cams which actuate switches or electronically controlled unit switches to control resistance steps, or chopper or inverter systems that electronically provide the desired voltages and currents at the motors. In some applications, dynamic braking utilizes the traction motors as generators to dissipate energy through the onboard resistors also used in acceleration. It is expected that regenerative braking will become more common since energy can be returned to the line for use by other cars. Theoretically, 35 to 50 percent of the energy can be returned, but the present state-of-the-art is limited to a practical 20 percent on properties with large numbers of cars on close headways. Car bodies are made of welded stainless steel or low-alloy hightensile (LAHT) steel of a design that carries structural loads. Earlier problems with aluminum, primarily electrolytic action among dissimilar metals and welding techniques, have been resolved and aluminum is also used on a significant number of new cars. Trucks may be cast or fabricated steel with frames and journal bearings either inside or outside the wheels. Axles are carried in roller-bearing journals connected to the frames so as to be able to move vertically against a variety of types of primary spring restraint. Metal springs or air bags are used as the secondary suspension between the trucks and the car body. Most wheels are solid cast or wrought steel. Resilient wheels have been tested but are not in general service on heavy rail transit equipment. All heavy rail rapid transit systems use high-level loading platforms to speed passenger flow. This adds to the civil construction costs, but is necessary to achieve the required level of service. Light Rail Transit Equipment The cars are called light rail vehicles (Fig. 11.2.23 shows typical LRVs) and are used on a few remaining

Fig. 11.2.23 Light-rail vehicle (LRV). (a) Articulated; (b) nonarticulated


streetcar systems such as those in Boston, Philadelphia, and Toronto on city streets and on state-of-the-art subway, surface, and elevated systems such as those in Calgary, Edmonton, Los Angeles, San Diego, and Portland (oregon) and in semiautomated modes over partially or wholly reserved rights-of-way. As a practical matter, the LRV’s track, signal systems, and power systems utilize the same subsystems as heavy rail rapid transit and are not “lighter” in a physical sense. Most LRVs are designed to operate bidirectionally in multiple, with up to four cars controlled from the leading cab. The LRVs are electrically propelled from an overhead contact wire, often of catenary design, that makes contact with a pantograph or pole on the roof. For reasons of wayside safety, third-rail pickup is not used. Voltages range from 550 to 750 V dc. The cars range from 60 to 65 ft (18.3 to 19.8 m) over the anticlimbers for single cars and from 70 to 90 ft (21.3 to 27.4 m) for articulated cars. The choice is determined by passenger volumes and civil constraints. Articulated cars have been used in railroad and transit applications since the 1920s, but have found favor in light rail applications because of their ability to increase passenger loads in a longer car that can negotiate relatively tight-radius curves. These cars require the additional mechanical complexity of the articulated connection and a third truck between two car-body sections. Passenger seating varies from 50 to 80 or more seats per car, depending upon length, placement of seats in the articulation, and the policy regarding seated/standee ratio. Older systems require negotiation of curves down to 30 ft (9.1 m) minimum radius with speeds up to only 40 mi/h (65 km/h) on tangent track, while newer systems usually have no less than 75 ft (22.9 m) minimum radius curves with speeds up to 65 mi/h (104 km/h) on tangent track. The newer properties all use standard-gage track; however, existing older systems include 4 ft 10 in (1,495 mm) and 5 ft 21⁄2 in (1,587 mm) gages. Grades have reached 12 percent but 6 percent is now considered the maximum and 5 percent is preferred. Typically, newer properties require maximum acceleration rates of between 3.0 to 3.5 mi/(h  s) [4.8 to 5.6 km/(h  s)] as nearly independent of passenger load as possible from 0 to approximately 20 mi/h (32 km/h). Depending upon the selection of motors and gearing, this rate falls off as speed is increased, but the rates can generally be controlled at a variety of levels. Unlike most heavy rail rapid transit cars, LRVs incorporate three braking modes: dynamic, friction, and track brake, which typically provide maximum service braking at between 3.0 and 3.5 mi/(h  s) [4.8 to 5.6 km/(h  s)] and 6.0 mi/(h  s) [9.6 km/(h  s)] maximum emergency braking rates. The dynamic and friction brakes are usually blended, but a variety of techniques exist. The track brake is intended to be used primarily for emergency conditions and may or may not be controlled with the other braking systems. The friction brakes are almost exclusively disk brakes, since LRVs use resilient wheels that can be damaged by tread-brake heat buildup. No single consistent pattern exists for the actuation mechanism. Allelectric, all-pneumatic, electropneumatic, electrohydraulic, and electropneumatic over hydraulic are in common use. 
Dynamic braking is generally used as the primary braking mode and is effective from maximum speed down to about 5 mi/h (8 km/h) with friction braking supplementation as the characteristic dynamic fade occurs. As with heavy rail rapid transit, the emergency braking rates depend upon line constraints, car-control subsystems selected, and other factors, but the use of track brakes means that higher braking rates can be achieved because the wheel-to-rail adhesion is not the limiting factor. Acceleration and braking on modern properties are usually controlled by a single master controller handle that has power positions in one direction and a coasting or neutral center position and braking positions in the opposite direction. A few properties use foot-pedal control with a “deadman” pedal operated by the left foot, a brake pedal operated by the right foot, and an accelerator (power) pedal also operated by the right foot. In either case, the number of positions depends upon property policy and control subsystems on the car.

Control elements include motor-current sensors and some or all of the following: speed sensors, rate sensors, and load-weighing sensors. Signals from these sensors are processed by an electronic control unit (ECU), which provides control functions to the propulsion and braking systems. The propulsion systems currently include pilot-motor-operated cams that actuate switches or electronically controlled unit switches to control resistance steps. Chopper and inverter systems that electronically provide the desired voltages and currents at the motors are also used. Most modern LRVs are equipped with two powered trucks. In twosection articulated designs (Fig. 11.2.23a), the third (center) truck may be left unpowered but usually has friction and track brake capability. Some European designs use three powered trucks but the additional cost and complexity have not been found necessary in North America. Unlike heavy rail rapid transit, there are three major dc-motor configurations in use: the traditional series-wound motors used in bimotor trucks, the European-derived monomotor, and a hybrid monomotor with a separately excited field—the last in chopper-control version only. The bimotor designs are rated between 100 and 125 shaft hp per motor at between 300 and 750 V dc, depending upon line voltage and series or series-parallel control schemes (electronic or electromechanical control). The monomotor designs are rated between 225 and 250 shaft hp per motor at between 300 and 750 V dc (electronic or electromechanical control). The motors, gear units (right angle or parallel), and axles are joined variously through flexible couplings. In the case of the monomotor, it is supported in the center of the truck, and right-angle gearboxes are mounted on either end of the motor. Commonly, the axle goes through the gearbox and connection is made with a flexible coupling arrangement. Electronic inverter control drives with ac motors have been applied in recent conversions and new equipment. Dynamic braking is achieved in the same manner as in heavy rail rapid transit. Unlike heavy rail rapid transit, LRV bodies are usually made only of welded LAHT steel and are of a load-bearing design. Because of the semireserved right-of-way, the risk of collision damage with automotive vehicles is greater than with heavy rail rapid transit and the LAHT steel has been found to be easier to repair than stainless steel or aluminum. Although LAHT steel requires painting, this can be an asset since the painting can be performed in a highly decorative manner pleasing to the public and appropriate to themes desired by the cities. Trucks may be cast or fabricated steel with either inside or outside frames. Axles are carried in roller-bearing journals, which are usually resiliently coupled to the frames with elastomeric springs as a primary suspension. Both vertical and a limited amount of horizontal movement occur. Since tight curve radii are common, the frames are usually connected to concentric circular ball-bearing rings, which, in turn, are connected to the car body. Air bags, solid elastomeric springs, or metal springs are used as a secondary suspension. Resilient wheels are used on virtually all LRVs. Newer LRVs have low-level loading doors and steps, which minimize station platform costs. TRACK Gage The gage of railway track is the distance between the inner sides of the rail heads measured 0.626 in (15.9 mm) below the top running surface of the rail. 
North American railways are laid with a nominal gage of 4 ft 8½ in (1,435 mm), which is known as standard gage. Rail wear causes an increase in gage. On sharp curves [over 8°, see Eq. (11.2.10)], it has been the practice to widen the gage to reduce rail wear.
Track Structure The basic track structure is composed of six major elements: rail, tie plates, fasteners, cross ties, ballast, and subgrade (Fig. 11.2.24).
Rail The American Railway Engineering and Maintenance-of-Way Association has defined standards for the cross section of rails of


Fig. 11.2.24 Track structure.

various weights, which are identified by their weight (in pounds) per yard. The AREMA standards provide for rail weights varying from 90 to 141 lb/yd. However, recommendations for new rail purchases are limited to weights of 115 lb/yd and above. Many railroads historically used special sections of their own design; however, the freight industry is converging on the use of the 115-, 136-, and 141-lb/yd sections. The standard length in use is 80 ft (31.5 m), although some mills are now considering making longer lengths available. The prevailing weight of rails used in mainline track is from 112 to 141 lb/yd. Secondary and branch lines are generally laid with 90- to 115-lb/yd or heavier rail that has been previously used and partly worn in mainline service. In a very limited application, rail weights as low as 60 lb/yd exist in some tracks having very light usage. In the past, rail sections were connected by using bolted joint bars. Current practice is the use of continuous welded rail (CWR). In general, rails are first welded into lengths averaging 1,440 feet (440 m) or more. After these strings, or “ribbons,” are installed, they are later field welded to eliminate conventional joints. At one time the gas-fusion welding process was common, but today almost all rail is welded by the electric-flash or aluminothermic processes. CWR requires the use of more rail anchors than jointed rail in order to clamp the base of the rail and position it on the cross ties to prevent rail movement. This is necessary to restrain longitudinal forces that result from thermal expansion and contraction with ambient temperature changes. Failure to adequately restrain the rail can result in track buckling in extremely hot weather or in rail “pull aparts” in extremely cold weather. Care must be exercised to install the rail at temperatures that do not approach high or low extremes. A variety of heat-treating and alloying processes have been used to produce rail that is more resistant to wear and less susceptible to fatigue failure. Comprehensive studies indicate that both gage face and top-ofrail lubrication can dramatically reduce the wear rate of rail in curves (see publications of AAR Research and Test Department). Friction modification using top-of-rail lubrication must be carefully managed so that it does not interfere with traction required to move and brake trains. Reduced wear and higher permissible axle loads have led to fatigue becoming the dominant cause for rail replacement at extended life. Grinding the running surface of rail has been greatly increased to reduce the damage caused by fatigue. Tie Plates Rail is supported on tie plates that restrain the rail laterally, transmit the vehicle loads to the cross tie, and position (cant) the rail for optimum vehicle performance. Tie plates cant the rail toward the gage side. Cants vary from 1:14 to 1:40, depending on service conditions and operating speeds, with 1:40 being most common. Fasteners Rail and tie plates are fixed to the cross ties by fasteners. Conventional construction with wood cross ties utilizes steel spikes driven into the tie through holes in the tie plate. In some cases, screw-type fasteners are used to provide more resistance to vertical forces. Elastic, clip-type fasteners are used on concrete and, at times, on wooden cross ties to provide a uniform, resilient attachment and longitudinal restraint in lieu of anchors. This type of fastening system also reduces the likelihood of rail overturns. 
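To give a feel for the longitudinal forces that rail anchors or elastic fasteners must restrain in CWR, the sketch below applies the standard fully restrained thermal-force relation P = A·E·α·ΔT. The rail cross-sectional area, elastic modulus, and expansion coefficient are typical steel-rail values assumed here for illustration; they are not taken from the text.

```python
# Illustrative thermal force in fully restrained continuous welded rail (CWR).
# P = A * E * alpha * dT (classic restrained-bar relation); all numbers below
# are typical assumed values, not values quoted in the text.

AREA_136RE_IN2 = 13.35   # in^2, approximate cross section of 136-lb/yd rail (assumed)
E_STEEL_PSI = 30.0e6     # lb/in^2, elastic modulus of rail steel (assumed)
ALPHA_PER_F = 6.5e-6     # 1/deg F, coefficient of thermal expansion (assumed)

def restrained_rail_force(delta_t_f, area_in2=AREA_136RE_IN2):
    """Axial force (lbf) developed in fully restrained rail for a temperature change."""
    return area_in2 * E_STEEL_PSI * ALPHA_PER_F * delta_t_f

if __name__ == "__main__":
    for dt in (25, 50, 75):  # deg F away from the neutral (installation) temperature
        force = restrained_rail_force(dt)
        print(f"dT = {dt:>2} F: ~{force:,.0f} lbf per rail "
              "(compressive when hotter, tensile when colder than laid)")
```

A 50°F departure from the neutral temperature produces on the order of 130,000 lbf per rail under these assumptions, which illustrates why inadequate anchorage can lead to buckling in hot weather or pull-aparts in cold weather.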
Cross Ties To retain gage and provide a further distribution of vehicle loads to the ballast, lateral cross ties are used. The majority of cross ties used in North America are wood. However, concrete cross ties are being used more frequently in areas where track surface and lateral


stability are difficult to maintain and in high-speed passenger corridors. Slab track designs are now being given scrutiny for high-utilization areas. Ballast Ballast serves to further distribute vehicle loads to the subgrade, restrain vertical and lateral displacement of the track structure, and provide for drainage. Typical mainline ballast materials are limestone, trap rock, granite, or slag. Lightly built secondary lines have used gravel, cinders, or sand in the past. Subgrade The subgrade serves as the interface between the ballast and the native soil. Subgrade material is typically compacted or stabilized soil. Instability of the subgrade due to groundwater permeation is a concern in some areas. Subgrade instability must be analyzed and treated on a case-by-case basis. Track Geometry Railroad maintenance practices, state standards, AREMA standards, and, more recently, FRA safety standards (49 CFR § 213) specify limitations on geometric deviations for track structure. Among the characteristics considered are:
Gage
Alignment (the lateral position of the rails)
Profile (the elevation of the rail)
Curvature
Easement (the rate of change from tangent to curve)
Superelevation (the elevation of the outer or “high” rail in a curve compared to the inner or “low” rail)
Runoff (the rate of change in superelevation)
Cross level (the relative height of opposite rails)
Twist (the change in cross level over a specified distance)
FRA standards specify maximum operating speeds by class of track ranging from Class 1 (lowest acceptable) to Class 9. Track geometry is a significant consideration in vehicle suspension design. For operating speeds greater than 110 mi/h, FRA has promulgated regulations (track Classes 6 through 9) that require qualification of locomotives and cars through a test program over each operating route. These tests call for comprehensive measurement of wheel/rail forces and vehicle accelerations (49 CFR § 213). Curvature The curvature of track is designated in terms of “degrees of curvature.” The degree of curve is the number of degrees of central angle subtended by a chord of 100-ft length (measured on the track centerline). Equation (11.2.10) gives the approximate radius R in feet for ordinary railway curves.

R = 5,730/(degrees per 100-ft chord)   (11.2.10)
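For quick estimates, Eq. (11.2.10) is easily evaluated in code. The short sketch below is illustrative only; the function names and sample values are assumptions, not part of the handbook.

```python
# Illustrative sketch of Eq. (11.2.10): R ~= 5,730 / D, where D is the degree of
# curvature (central angle, in degrees, subtended by a 100-ft chord).
def radius_from_degree(degree_of_curve: float) -> float:
    """Approximate curve radius, ft, for ordinary railway curves."""
    return 5730.0 / degree_of_curve

def degree_from_radius(radius_ft: float) -> float:
    """Approximate degree of curvature for a given radius, ft."""
    return 5730.0 / radius_ft

if __name__ == "__main__":
    # A 2-degree curve has R ~= 2,865 ft.
    print(radius_from_degree(2.0))   # 2865.0
    # An 8-degree curve, about the sharpest on important main lines, has R ~= 716 ft.
    print(radius_from_degree(8.0))   # 716.25
```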

On important main lines where trains are operated at relatively high speed, the curves are ordinarily not sharper than 6 to 8°. In mountainous territory, and in other rare instances, curves as sharp as 18° occur. Most diesel and electric locomotives are designed to traverse curves up to 21°. Most uncoupled cars will pass considerably sharper curves, the limiting factor in this case being the clearance of the truck components to car body structural members or equipment or the flexibility of the brake connections to the trucks (AAR “Manual of Standards and Recommended Practices,” Sec. C). Clearances For new construction, the AREMA standard clearance diagrams provide for a clear height of 23 ft (7.0 m) above the tops of the railheads and for a width of 18 ft (5.5 m). Where conditions require, as in tunnels and bridges, these dimensions are reduced at the top and bottom of the diagram. For tracks entering buildings, an opening 16 ft (4.9 m) wide and 18 ft (5.5 m) high is recommended and will ordinarily suffice to pass the largest locomotives and most freight cars. A standard car clearance diagram for railway bridges (Fig. 11.2.25) represents the AAR recommendation for new construction. Minimum clearances are dictated by state authorities, and are summarized in Chapter 28, Table 3.3, of the AREMA Manual. Individual railroads have their own standards for existing lines and new construction, which are generally more stringent than the legal requirements. The actual clearance limitations on a particular railroad can be found in the official publication “Railway Line Clearances,” Railway Equipment and Publishing Co.


Fig. 11.2.25 Wayside clearance diagram. (AREA “Manual for Railway Engineering—Fixed Properties.”)

Track Spacing The distance between centers of mainline track commonly varies between 12 and 15 ft (3.6 and 4.6 m). Twelve-foot spacing exists, however, only on older construction. Most states require a minimum of 14 ft 0 in (4.3 m) between main tracks; however, many railroads have adopted 15 ft 0 in (4.6 m) as their standard. In a few cases, where space permits, 20 ft (6.1 m) is being used. Turnouts Railway turnouts are defined by frog number. The frog number is the cotangent of the frog angle, which is the ratio of the lateral displacement of the diverging track per unit of longitudinal distance along the nondiverging track. Typical turnouts in yard and industry trackage range from number 7 to number 10. For mainline service, typical turnouts range from number 10 to number 20. The other important part of a turnout is the switch. The switch has movable points that direct the wheel down the straight or diverging side of the turnout. The higher the turnout number, the longer the switch, and the higher the allowable speed for trains traversing the diverging track. Allowable speeds are governed by the curvature of the diverging track and the angle of the switch. As a general rule, the allowable speed (in mi/h) is slightly less than twice the turnout number. In recent track construction for high-speed service, special turnouts with movable-point frogs have been applied. Noise Allowable noise levels from railroad operation are prescribed by 40 CFR § 201. VEHICLE-TRACK INTERACTION Train Resistance The resistance to a train in motion along the track is of prime interest, as it is reflected directly in locomotive energy requirement. This resistance is expressed in terms of pounds per ton of train weight. Gross train resistance is that force that must be overcome by the locomotives at the driving-wheel–rail interface. Trailing train resistance must be overcome at the rear drawbar of the locomotive. There are two classes of resistance that must be overcome: inherent and incidental. Inherent resistance includes the rolling resistance of bearings and wheels and aerodynamic resistance due to motion through still air. It may be considered equal to the force necessary to maintain motion at constant speed on level tangent track in still air. Incidental resistance includes resistance due to grade, curvature, wind, and vehicle dynamics. Inherent Resistance Of the elements of inherent resistance, at low speeds rolling resistance is dominant, but at high speeds aerodynamic resistance is the predominant factor. Attempts to differentiate and evaluate the various elements through the speed range are a continuing part of industry research programs to reduce train resistance. At very high speeds, the effect of air resistance can be approximated as an aid to studies in its reduction by means of cowling and fairing. The resistance of a car moving in still air on straight, level track increases parabolically with speed. Because the aerodynamic resistance is independent of car weight, the resistance in pounds per ton decreases as the weight of the car increases. The total resistance, in pounds, of a 100-ton car is much less than twice as great as that of a 50-ton car under similar conditions. With known conditions of speed and car weight, inherent resistance can be predicted with reasonable accuracy. Knowledge of track conditions will permit further refining of the estimate, but for very rough track or extremely cold ambient temperatures, generous allowances must be made. Under such conditions normal resistance may be doubled. A formula proposed by Davis (General Electric Review, Oct. 1926) and revised by Tuthill (“High Speed Freight Train Resistance,” Univ. of Illinois Engr. Bull., 376, 1948) has been used extensively for inherent freight-train resistances at speeds up to 40 mi/h:

R = 1.3W + 29n + 0.045WV + 0.0005AV²   (11.2.11)

where R = train resistance, lb/car; W = weight per car, tons; V = speed, mi/h; n = total number of axles; A = cross-sectional area, ft². With freight train speeds of 50 to 70 mi/h (80 to 112 km/h), it has been found that actual resistance values fall considerably below calculations based on the above formula. Several modifications of the Davis equation have been developed for more specific applications. All of these equations apply to cars trailing locomotives. 1. Davis equation as modified by I. K. Tuthill (University of Illinois Engr. Bull., 376, 1948):

R = 1.3W + 29n + 0.045WV + 0.045V²   (11.2.12)

Note: In the Tuthill modification, the equation is augmented by a matrix of coefficients when the velocity exceeds 40 mi/h. 2. Davis equation as modified by the Canadian National Railway:

R = 0.6W + 20n + 0.01WV + 0.07V²   (11.2.13)

3. Davis equation as modified by the Canadian National Railway and former Erie-Lackawanna Railroad for trailers and containers on flat cars:

R = 0.6W + 20n + 0.01WV + 0.2V²   (11.2.14)

Other modifications of the Davis equation have been developed for passenger cars by Totten (“Resistance of Light Weight Passenger Trains,” Railway Age, 103, July 17, 1937). These formulas are for passenger cars pulled by a locomotive and do not include head-end air resistance. 1. Davis equation as modified by Totten for streamlined passenger cars:

R = 1.3W + 29n + 0.045WV + [0.00005 + 0.060725(L/100)^0.88]V²   (11.2.15)

2. Davis equation as modified by Totten for nonstreamlined passenger cars:

R = 1.3W + 29n + 0.045WV + [0.00005 + 0.1085(L/100)^0.7]V²   (11.2.16)

where L = car length in feet. Aerodynamic and Wind Resistance Wind-tunnel testing has indicated a significant effect on freight train resistance resulting from vehicle spacing, open tops of hopper and gondola cars, open boxcar doors, vertical side reinforcements on railway cars and intermodal trailers, and protruding appurtenances on cars. These effects can cause significant increases in train resistance at higher speeds. For example, the spacing of intermodal trailers or containers greater than approximately 6 ft can result in a new frontal area to be considered in determining train resistance. Frontal or cornering ambient wind conditions can also have an adverse effect on train resistance, which is increased with discontinuities along the length of the train. Curve Resistance Train resistance due to track curvature varies with speed and degree of curvature. The behavior of rail vehicles in curve negotiation is the subject of several ongoing AAR studies.
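The freight-car resistance formulas, Eqs. (11.2.11) through (11.2.14), lend themselves to direct evaluation. The sketch below is a minimal illustration; the car data in the example are assumed values, not handbook data.

```python
# Inherent resistance per car, lb, from the Davis formula and its modifications
# (Eqs. 11.2.11 to 11.2.14). W = weight per car (tons), n = number of axles,
# V = speed (mi/h), A = cross-sectional area (ft^2).
def davis_1926(W, n, V, A):
    return 1.3*W + 29*n + 0.045*W*V + 0.0005*A*V**2   # Eq. (11.2.11)

def davis_tuthill(W, n, V):
    return 1.3*W + 29*n + 0.045*W*V + 0.045*V**2      # Eq. (11.2.12)

def davis_cn(W, n, V):
    return 0.6*W + 20*n + 0.01*W*V + 0.07*V**2        # Eq. (11.2.13)

def davis_cn_tofc(W, n, V):
    return 0.6*W + 20*n + 0.01*W*V + 0.2*V**2         # Eq. (11.2.14), trailers/containers on flats

if __name__ == "__main__":
    # Assumed example: a loaded 100-ton, 4-axle car with A = 120 ft^2 at 40 mi/h.
    print(davis_1926(100, 4, 40, 120))   # 130 + 116 + 180 + 96 = 522 lb/car
    print(davis_cn(100, 4, 40))          #  60 +  80 +  40 + 112 = 292 lb/car
```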


Lubrication of the rail gage face or wheel flanges has become common practice for reducing friction and the resulting wheel and rail wear. Recent studies indicate that flange and/or gage face lubrication can significantly reduce train resistance on tangent track as well (Allen, Conference on the Economics and Performance of Freight Car Trucks, October 1983). In addition, a variety of special trucks (wheel assemblies) that reduce curve resistance by allowing axles to steer toward a radial position in curves have been developed. For general estimates of car resistance and locomotive hauling capacity on dry (unlubricated) rail with conventional trucks, speed and gage relief may be ignored and a figure of 0.8 lb/ton per degree of curvature used. Grade resistance depends only on the angle of ascent or descent and relates only to the gravitational forces acting on the vehicle. It equates to 20 lb/ton for each percent of grade, or 0.379 lb/ton for each foot per mile rise. Acceleration The force (tractive effort) required to accelerate the train is the sum of the forces required for linear acceleration and that required for rotational acceleration of the wheels about their axle centers. A linear acceleration of 1 mi/h·s (1.6 km/h·s) is produced by a force of 91.1 lb/ton. The rotary acceleration requirement adds 6 to 12 percent, so that the total is nearly 100 lb/ton (the figure commonly used) for each mile per hour per second. If greater accuracy is required, the following expression is used:

Ra = A(91.05W + 36.36n)   (11.2.17)

where Ra = the total accelerating force, lb; A = acceleration, mi/h·s; W = weight of train, tons; n = number of axles. Acceleration and Distance If in a distance of S ft the speed of a car or train changes from V1 to V2 mi/h, the force required to produce acceleration (or deceleration if the speed is reduced) is

Ra = 74(V2² - V1²)/S   (11.2.18)
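Equations (11.2.17) and (11.2.18), together with the grade allowance (20 lb/ton per percent) and the curve allowance (about 0.8 lb/ton per degree) given above, can be combined into a rough estimate of the tractive effort needed for a given maneuver. The following sketch is illustrative only; the train data in the example are assumed.

```python
# Rough tractive-effort estimate combining Eq. (11.2.17) with the grade and curve
# allowances quoted in this section. Train data in the example are assumed.
def accel_force_lb(A, W, n):
    """Eq. (11.2.17): total force, lb, to accelerate at A mi/(h*s); W = train weight, tons; n = axles."""
    return A * (91.05 * W + 36.36 * n)

def speed_change_force(V1, V2, S):
    """Eq. (11.2.18): force per ton, lb/ton, for a speed change V1 -> V2 mi/h over S ft
    (the coefficient 74 corresponds to the 100 lb/ton per mi/h*s figure)."""
    return 74.0 * (V2**2 - V1**2) / S

def grade_resistance_lb(W, percent_grade):
    """Grade resistance: 20 lb/ton per percent of grade."""
    return 20.0 * percent_grade * W

def curve_resistance_lb(W, degrees_of_curve):
    """Curve resistance: about 0.8 lb/ton per degree (dry rail, conventional trucks)."""
    return 0.8 * degrees_of_curve * W

if __name__ == "__main__":
    W, n = 5000.0, 400                       # assumed: 5,000-ton train, 400 axles
    TE = (accel_force_lb(0.5, W, n)          # accelerate at 0.5 mi/(h*s)
          + grade_resistance_lb(W, 0.5)      # on a 0.5 percent grade
          + curve_resistance_lb(W, 2.0))     # in a 2-degree curve
    print(round(TE))                         # ~292,900 lb, excluding Davis inherent resistance
```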

The coefficient, 74, corresponds to the use of 100 lb/ton. This formula is useful in the calculation of the energy required to climb a grade with the assistance of stored energy. In any train-resistance calculation or analysis, assumptions with regard to acceleration will generally submerge all other variables; e.g., an acceleration of 0.1 mi/h·s (0.16 km/h·s) requires more tractive force than that required to overcome inherent resistance for any car at moderate speeds. Starting Resistance Most railway cars are equipped with roller bearings requiring a starting force of 5 or 6 lb/ton. Vehicle Suspension Design The primary consideration in the design of the vehicle suspension system is to isolate track input forces from the vehicle car body and lading. In addition, there are a few specific areas of instability that railway suspension systems must address; see AAR “Manual of Standards and Recommended Practice,” Sec. CII-M-1001, Chap. XI. Harmonic roll is the tendency of a freight car with a high center of gravity to rotate about its longitudinal axis (parallel to the track). This instability is excited by passing over staggered low rail joints at a speed that causes the frequency of the input for each joint to match the natural roll frequency of the car. Unfortunately, in many car designs, this occurs for loaded cars at 12 to 18 mi/h (19.2 to 28.8 km/h), a common speed for trains moving in yards or on branch lines where tracks are not well maintained. Many freight operations avoid continuous operation in this speed range. This adverse behavior is more noticeable in cars with truck centers approximately the same as the rail length. The effect of harmonic roll can be mitigated by improved track surface and by damping in the truck suspension. Pitch and bounce are the tendencies of the vehicle to either translate vertically up and down (bounce), or rotate (pitch) about a horizontal axis perpendicular to the centerline of track. This response is also excited by low track joints and can be relieved by increased truck damping.


Yaw is the tendency of the car to rotate about its vertical axis. Yaw responses are usually related to truck hunting. Truck hunting is an instability inherent in the design of the truck and dependent on the stiffness parameters of the truck and on wheel conicity (tread profile). The instability is observed as “parallelogramming” of the truck components at a frequency of 2 to 3 Hz, causing the car body to yaw or translate laterally. This response is excited by the effect of the natural frequency of the gravitational stiffness of the wheel set when the speed of the vehicle approaches the kinematic velocity of the wheel set. This problem is discussed in analytic work available from the AAR Research and Test Department. Superelevation As a train passes around a curve, there is a tendency for the cars to roll toward the outside of the curve in response to centrifugal force acting on the center of gravity of the car body (Fig. 11.2.26a). To compensate for this effect, the outside rail is superelevated, or raised, relative to the inside rail (Fig. 11.2.26b). The amount of superelevation for a particular curve is based upon the radius of the curve and the operating speed of the train. The “balance” or equilibrium speed for a given curve is that speed at which the weight of the vehicle is equally distributed on each rail. The FRA allows a railroad to operate with 3 in of unbalance, or at the speed at which equilibrium would exist if the superelevation were 3 in greater than that which exists. The maximum superelevation is usually 6 in but may be lower if freight operation is used exclusively. Longitudinal Train Action Longitudinal train (slack) action is associated with the dynamic action between individual cars in a train. An example would be the effect of starting a long train in which the couplers between each car had been compressed (i.e., bunched up). As the locomotive begins to pull the train, the slack between the locomotive and the first car must be traversed before the first car begins to move. Next the slack between the first and second car must be traversed before the second car begins to move, and so on. Before the last car in a long train begins to move, the locomotive and the moving cars may be traveling at a rate of several miles per hour. This effect can result in coupler forces sufficient to cause the train to break in two. Longitudinal train (slack) action is also induced by serial braking, undulating grades, or by braking on varying grades. The AAR Track Train Dynamics Program has published guidelines titled “Track Train Dynamics to Improve Freight Train Performance” that explain the causes of undesirable train action and how to minimize its effects. Analysis of the forces developed by longitudinal train action requires the application of the Davis equation to represent the resistance of each vehicle based upon its velocity and location on a grade or curve. Also, the longitudinal stiffness of each car and the tractive effort of the locomotive must be considered in equations that model the kinematic response of each vehicle in the train. Computer programs are available from the AAR to assist in the analysis of longitudinal train action.

Fig. 11.2.26 Effect of superelevation on center of gravity of car body.
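The balance condition described above can be estimated from first principles. The sketch below uses the relation e = Gv²/(gR) with an assumed effective gauge of 60 in between rail centerlines (equivalent to the common approximation e ≈ 0.0007DV², in inches); that relation is not given in this section and is included only as an illustration, together with the 3-in unbalance allowance mentioned in the text.

```python
import math

# Illustrative sketch (not from the handbook text): equilibrium superelevation for a
# curve, from e = G*v^2/(g*R), with an assumed effective gauge G of 60 in between
# rail centerlines. With R ~= 5,730/D ft this reduces to e ~= 0.0007*D*V^2 (inches).
G_IN = 60.0   # assumed effective gauge, in
g = 32.17     # ft/s^2

def equilibrium_superelevation(degree_of_curve: float, speed_mph: float) -> float:
    """Superelevation, in, at which the wheel loads are balanced on both rails."""
    radius_ft = 5730.0 / degree_of_curve
    v = speed_mph * 5280.0 / 3600.0          # mi/h -> ft/s
    return G_IN * v**2 / (g * radius_ft)

def max_speed(degree_of_curve: float, actual_e_in: float, unbalance_in: float = 3.0) -> float:
    """Speed, mi/h, at which the curve runs with the stated unbalance (FRA allows 3 in)."""
    radius_ft = 5730.0 / degree_of_curve
    v = math.sqrt((actual_e_in + unbalance_in) * g * radius_ft / G_IN)
    return v * 3600.0 / 5280.0

if __name__ == "__main__":
    print(round(equilibrium_superelevation(2.0, 60.0), 2))  # ~5.0 in for a 2-degree curve at 60 mi/h
    print(round(max_speed(2.0, actual_e_in=4.0), 1))        # ~70 mi/h with 4 in actual + 3 in unbalance
```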

11.3 MARINE ENGINEERING by Michael C. Tracy REFERENCES: Harrington, “Marine Engineering,” SNAME, 1992. Lewis, “Principles of Naval Architecture,” SNAME, 1988/89. Myers, “Handbook of Ocean and Underwater Engineering,” McGraw-Hill, 1969. “Rules for Building and Classing Steel Vessels,” American Bureau of Shipping. Rawson and Tupper, “Basic Ship Theory,” Oxford: Butterworth-Heinemann, 2001. Gillmer and Johnson, “Introduction to Naval Architecture,” Naval Institute, 1982. Barnaby, “Basic Naval Architecture,” Hutchinson, London, 1967. Jour. Inst. Environ, Sci. Taggert, “Ship Design and Construction,” SNAME, 1980. Trans. Soc. Naval Architects and Marine Engrs., SNAME. Naval Engrs. Jour., Am. Soc. of Naval Engrs., ASNE. Trans. Royal Inst. of Naval Architects, RINA. Figures and examples herein credited to SNAME have been included by permission of The Society of Naval Architects and Marine Engineers. Marine engineering is an integration of many engineering disciplines directed to the development and design of systems of transport, warfare, exploration, and natural-resource retrieval which have one thing in common: operation in or on a body of water. Marine engineers are responsible for the engineering systems required to propel, work, or fight ships. They are responsible for the main propulsion plant; the powering and mechanization aspects of ship functions such as steering, anchoring, cargo handling, heating, ventilation, air conditioning, etc.; and other related requirements. They usually have joint responsibility with naval architects in areas of propulsor design; hull vibration excited by the propeller or main propulsion plant; noise reduction and shock hardening, in fact, dynamic response of structures or machinery in general; and environmental control and habitability. Marine engineering is a distinct multidiscipline and characteristically a dynamic, continuously advancing technology.

Displacement Hull Forms

Displacement hull forms are the familiar monohull, the catamaran, and the submarine. The moderate-to-full-displacement monohull form provides the best possible combinations of high-payload-carrying ability, economical powering characteristics, and good seakeeping qualities. A more slender hull form achieves a significant reduction in wave-making resistance, hence increased speed; however, it is limited in its ability to carry topside weight because of the low transverse stability of its narrow beam. Multihull ships provide a solution to the problem of low transverse stability. They are increasingly popular in sailing yachts, high-speed passenger ferries, and research and small support ships. Sailing catamarans, with their superior transverse stability permitting large sail-plane area, gain a speed advantage over monohull craft of comparable size. A powered catamaran has the advantage of increased deck space and relatively low roll angles over a monohull ship. The submarine, operating at depths which preclude the formation of surface waves, experiences significant reductions in resistance compared to a well-designed surface ship of equal displacement. Planing Hull Forms

The planing hull form, although most commonly used for yachts and racing craft, is used increasingly in small, fast commercial craft and in coastal patrol craft. The weight of the planing hull ship is partially borne by the dynamic lift of the water streaming against a relatively flat or V-shaped bottom. The effective displacement is hence reduced below that of a ship supported statically, with significant reduction in wavemaking resistance at higher speeds. High-Performance Ships

THE MARINE ENVIRONMENT

Marine engineers must be familiar with their environment so that they may understand fuel and power requirements, vibration effects, and propulsionplant strength considerations. The outstanding characteristic of the open ocean is its irregularity in storm winds as well as under relatively calm conditions. The irregular sea can be described by statistical mathematics based on the superposition of a large number of regular waves having different lengths, directions, and amplitudes. The characteristics of idealized regular waves are fundamental for the description and understanding of realistic, irregular seas. Actual sea states consist of a combination of many sizes of waves often running in different directions, and sometimes momentarily superimposing into an exceptionally large wave. Air temperature and wind also affect ship and machinery operation. The effects of the marine environment also vary with water depth and temperature. As a ship passes from deep to shallow water, there is an appreciable change in the potential flow around the hull and a change in the wave pattern produced. Additionally, silt, sea life, and bottom growth may affect seawater systems or foul heat exchangers.

MARINE VEHICLES

The platform is additionally a part of marine engineers’ environment. Ships are supported by a buoyant force equal to the weight of the volume of water displaced (Archimedes’ principle). For surface ships, this weight is equal to the total weight of the structure, machinery, outfit, fuel, stores, crew, and useful load. The principal sources of resistance to propulsion are skin friction and the energy lost to surface waves generated by moving in the interface between air and water. Minimization of one or both of these sources of resistance has generally been a primary objective in the design of marine vehicles.

In a search for high performance and higher speeds in rougher seas, several advanced concepts to minimize wave-making resistance have been developed, such as hydrofoil craft, surface-effect vehicles, and small-waterplane-area twin hull (SWATH) forms (Fig. 11.3.1). The hydrofoil craft has a planing hull that is raised clear of the water through dynamic lift generated by an underwater foil system. The surface-effect vehicles ride on a cushion of compressed air generated and maintained in the space between the vehicle and the surface over which it hovers or moves. The most practical vehicles employ a peripheral-jet principle, with flexible skirts for obstacle or wave clearance. A rigid sidewall craft, achieving some lift from hydrodynamic effects, is more adaptable to marine construction techniques. The SWATH gains the advantages of the catamaran, twin displacement hulls, with the further advantage of minimized wave-making resistance and wave-induced motions achieved by submarine-shaped hulls beneath the surface and the small water-plane area of the supporting struts at the air-water interface. SEAWORTHINESS

Seaworthiness is the quality of a marine vehicle being fit to accomplish its intended mission. In meeting their responsibilities to produce seaworthy vehicles, marine engineers must have a basic understanding of the effects of the marine environment with regard to the vehicle’s (1) structure, (2) stability, (3) seakeeping, and (4) resistance and powering requirements. Units and Definitions

The introduction of the International System of Units (SI) to the marine engineering field was something of a revolutionary change. Reference will generally be made to both USCS and SI systems in subsequent examples.


Fig. 11.3.1 Family of high-performance ships.

The displacement Δ is the weight of the water displaced by the immersed part of the vehicle. It is equal (1) to the buoyant force exerted on the vehicle and (2) to the weight of the vehicle (in equilibrium) and everything on board. Displacement is expressed in long tons (1.01605 metric tons). Since the specific weight of seawater averages about 64 lb/ft³, displacement in saltwater is measured by the displaced volume ∇ in ft³ divided by 35. In freshwater, divide by 35.9. The standard maritime industry metric weight unit is the metric ton (tonne) of 1,000 kilograms. Under the SI system, the mass displacement of a ship is commonly represented in metric tons, which, under gravity, leads to a force displacement represented in metric tonne-force or newtons. Two measurements of a merchant ship’s earning capacity that are of significant importance to its design and operation are deadweight and tonnage. The deadweight of a ship is the weight of cargo, stores, fuel, water, personnel, and effects that the ship can carry when loaded to a specific load draft. Deadweight is the difference between the load displacement, at the minimum permitted freeboard, and the light displacement, which comprises hull weight and machinery. Deadweight is expressed in long tons (2,240 lb each) or MN. The volume of a ship is expressed in tons of 100 ft³ (2.83 m³) each and is referred to as its tonnage. Charges for berthing, docking, passage through canals and locks, and for many other facilities are based on tonnage. Gross tonnage was based on cubic capacity below the tonnage deck, generally the uppermost complete deck, plus allowances for certain compartments above, which are used for cargo, passengers, crew, and navigating equipment. Deduction of spaces for propulsion machinery, crew quarters, and other prescribed volumes from the gross tonnage leaves the net tonnage. The dimensions of a ship may refer to the molded body (or form defined by the outside of the frames), to general outside or overall dimensions, or to dimensions on which the determination of tonnage or of classification is based. There are thus (1) molded dimensions, (2) overall dimensions, (3) tonnage dimensions, and (4) classification dimensions. The published rules and regulations of the classification societies and the U.S. Coast Guard should be consulted for detailed information. The designed load waterline (DWL) is the waterline at which a ship would float freely, at rest in still water, in its normally loaded or designed condition. The keel line of most ships is parallel to DWL. Some keel lines are designed to slope downward toward the stern, termed designed drag. A vertical line through the intersection of DWL and the foreside of the stem is called the forward perpendicular, FP. A vertical line through the intersection of DWL with the afterside of the straight portion of the rudder post, with the afterside of the stern contour, or with the centerline of the rudder stock (depending upon stern configuration), is called the after perpendicular, AP. The length on the designed load waterline, LWL, is the length measured at the DWL, which, because of the stern configuration, may be

equal to the length between perpendiculars, Lpp. This is generally the same as classification-society-defined length, with certain stipulations. The extreme length of the ship is the length overall, LOA. The molded beam B is the extreme breadth of the molded form. The extreme or maximum breadth is generally used, referring to the extreme transverse dimension taken to the outside of the plating. The draft T (molded) is the distance from the top of the keel plate or bar keel to the load waterline. It may refer to draft amidships, forward, or aft. Trim is the longitudinal inclination of the ship, usually expressed as the difference between the draft forward, TF, and the draft aft, TA. Coefficients of Form Assume the following notation: L = length on waterline; B = beam; T = draft; ∇ = volume of displacement; AWP = area of water plane; AM = area of midship section, up to draft T; v = speed in ft/s (m/s); and V = speed in knots. Block coefficient, CB = ∇/LBT, may vary from about 0.38, for high-powered yachts and destroyers, to greater than 0.90 for slow-speed seagoing cargo ships and is a measure of the fullness of the underwater body. Midship section coefficient, CM = AM/BT, varies from about 0.75 for tugs or trawlers to about 0.99 for cargo ships and is a measure of the fullness of the maximum section. Prismatic coefficient, CP = ∇/(L·AM) = CB/CM, is a measure of the fullness of the ends of the hull, and is an important parameter in powering estimates. Water-plane coefficient, CWP = AWP/LB, ranges from about 0.67 to 0.95, is a measure of the fullness of the water plane, and may be estimated by CWP ≈ 2⁄3CB + 1⁄3. Displacement/length ratio, ΔL = Δ/(L/100)³, is a measure of the slenderness of the hull, and is used in calculating the power of ships and in recording the resistance data of models. A similar coefficient is the volumetric coefficient, CV = ∇/(L/10)³, which is commonly used as this measure. Table 11.3.1 presents typical values of the coefficients with representative values for Froude number (v/√(gL)), discussed under “Resistance and Powering.” Structure

The structure of a ship is a complex assembly of small pieces of material. Common hull structural materials for small boats are wood, aluminum, and fiberglass-reinforced plastic. Large ships are nearly always constructed of steel, with common application of alloyed aluminum in superstructures of some ship types, particularly for KG-critical designs. The methods for design and analysis of a ship’s structure have undergone significant improvement over the years, with research from the ocean environment, shipboard instrumentation, model testing, computer programming and probabilistic approaches. Early methods consisted of


Table 11.3.1  Coefficients of Form

Type of vessel              v/√(gL)        CB          CM           CP          CWP         Δ/(L/100)³
Great Lakes ore ships       0.116–0.128    0.85–0.87   0.99–0.995   0.86–0.88   0.89–0.92    70–95
Slow ocean freighters       0.134–0.149    0.77–0.82   0.99–0.995   0.78–0.83   0.85–0.88   180–200
Moderate-speed freighters   0.164–0.223    0.67–0.76   0.98–0.99    0.68–0.78   0.78–0.84   165–195
Fast passenger liners       0.208–0.313    0.56–0.65   0.94–0.985   0.59–0.67   0.71–0.76    75–105
Battleships                 0.173–0.380    0.59–0.62   0.99–0.996   0.60–0.62   0.69–0.71    86–144
Destroyers, cruisers        0.536–0.744    0.44–0.53   0.72–0.83    0.62–0.71   0.67–0.73    40–65
Tugs                        0.268–0.357    0.45–0.53   0.71–0.83    0.61–0.66   0.71–0.77   200–420
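The form coefficients defined above are simple ratios and are easily computed once the principal dimensions are known. The sketch below uses assumed particulars for a moderate-speed cargo ship; the values are illustrative, not handbook data, and may be compared with the ranges in Table 11.3.1.

```python
import math

# Form coefficients for an assumed moderate-speed cargo ship (illustrative values only).
L    = 500.0      # length on waterline, ft
B    = 70.0       # beam, ft
T    = 28.0       # draft, ft
VOL  = 722_000.0  # volume of displacement, ft^3
A_M  = 1_930.0    # midship section area, ft^2
A_WP = 28_500.0   # waterplane area, ft^2
v    = 25.3       # speed, ft/s (about 15 knots)

CB   = VOL / (L * B * T)          # block coefficient             ~0.74
CM   = A_M / (B * T)              # midship section coefficient   ~0.98
CP   = VOL / (L * A_M)            # prismatic coefficient (CB/CM) ~0.75
CWP  = A_WP / (L * B)             # waterplane coefficient        ~0.81
disp = VOL / 35.0                 # salt water: ft^3 / 35 = long tons
DLR  = disp / (L / 100.0) ** 3    # displacement/length ratio     ~165
Fr   = v / math.sqrt(32.17 * L)   # Froude number                 ~0.20

print(round(CB, 3), round(CM, 3), round(CP, 3), round(CWP, 3), round(disp), round(DLR), round(Fr, 3))
```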

standard strength calculations involving still water and quasi-static analysis along with conventions and experience from proven designs. These methods were extremely useful and remain the basis, as well as a starting tool, for comparative analysis with the newer applications and rules in use today. For detailed discussion of these methods, see the SNAME publications referenced earlier. Weight, buoyancy, and load curves (Figs. 11.3.2 and 11.3.3) are developed for the ship for the determination of shear force and bending moment. Several extreme conditions of loading may be analyzed. The weight curves include the weights of the hull, superstructure, rudder, and castings, forgings, masts, booms, all machinery and accessories, solid ballast, anchors, chains, cargo, fuel, supplies, passengers, and baggage. Each individual weight is distributed over a length equal to the distance between frames at the location of that particular weight. The traditional method of calculating the maximum design bending moment involved the determination of weight and buoyancy distributions with the ship poised on a wave of trochoidal form whose length Lw is equal to the length of the ship, Lpp, and whose height Hw = Lw/20. To more accurately model waves whose lengths exceed 300 ft, Hw = 1.1√L has been widely used. With the wave crest amidships, the ship is in a hogging condition (Fig. 11.3.4a); the deck is in tension and the bottom shell in compression. With the trough amidships, the ship is in a sagging condition (Fig. 11.3.4b); the deck is in compression and the

Fig. 11.3.2 Representative buoyancy and weight curves for a ship in still water.

bottom in tension. For cargo ships with the machinery amidships, the hogging condition produces the highest bending moment, whereas for tankers and ore carriers with the machinery aft, the sagging condition gives the highest bending moment.

Fig. 11.3.3 Representative moment and net-load curves for a ship in still water.

Fig. 11.3.4 Hogging (a) and sagging (b) conditions of a vessel. (From “Principles of Naval Architecture,” SNAME, 1967.)

Table 11.3.2  Constants for Bending Moment Approximations

Type                         Δ, tons (MN)                 L, ft (m)           K*
Tankers                      35,000–150,000 (349–1,495)   600–900 (183–274)   35–41
High-speed cargo passenger   20,000–40,000 (199–399)      500–700 (152–213)   29–36
Great Lakes ore carrier      28,000–32,000 (279–319)      500–600 (152–183)   54–67
Trawlers                     180–1,600 (1.79–16)          100–200 (30–61)     12–18
Crew boats                   65–275 (0.65–2.74)           80–130 (24–40)      10–16

*Based on allowable stress of σall = 1.19 L^(1/3) tons/in² (27.31 L^(1/3) MN/m²) with L in ft (m).

The limitations of the above approach should be apparent; nevertheless, it has been an extremely useful one for many decades, and a great deal of ship data have been accumulated upon which to base refinements. Many advances have been made in the theoretical prediction of the actual loads a ship is likely to experience in a realistic, confused seaway over its life span. Today it is possible to predict reliably the bending moments and shear forces a ship will experience over a short term in irregular seas. Long-term prediction techniques use a probabilistic approach which associates load severity and expected periodicity to establish the design load. The maximum longitudinal bending stress normally occurs in the deck or bottom plating in the vicinity of the midship section. In determining the design section modulus Z which the continuous longitudinal material in the midship section must meet, an allowable bending stress σall must be introduced into the bending stress equation. Based on past experience, an appropriate choice of such an allowable stress is 1.19 L^(1/3) tons/in² (27.31 L^(1/3) MN/m²) with L in ft (m). For ships under 195 ft (90 m) in length, strength requirements are dictated more on the basis of locally induced stresses than longitudinal bending stresses. This may be accounted for by reducing the allowable stress based on the above equation or by reducing the K values indicated by the trend in Table 11.3.2 for the calculation of the ship’s bending moment. For aluminum construction, Z must be twice that obtained for steel construction. Minimum statutory values of section modulus are published by the U.S. Coast Guard in “Load Line Regulations.” Shipbuilding classification societies such as the American Bureau of Shipping have section modulus standards somewhat greater. Rules for section modulus have a permissible (allowable) bending stress based on the hull material. The maximum bending stress at each section can be computed by the equation σ = M/Z, where σ = maximum bending stress, lb/in² (N/m²) or tons/in² (MN/m²); M = maximum bending moment, ft·lb (N·m) or ft·tons; Z = section modulus, in²·ft (m³); Z = I/y, where I = minimum vertical moment

of inertia of section, in²·ft² (m⁴), and y = maximum distance from neutral axis to bottom and strength deck, ft (m). The maximum shear stress can be computed by the equation τ = Vac/It, where τ = horizontal shear stress, lb/in² (N/m²); V = shear force, lb (N); ac = moment of area above the shear plane under consideration taken about the neutral axis, ft³ (m³); I = vertical moment of inertia of section, in²·ft² (m⁴); t = thickness of material at shear plane, ft (m). For an approximation, the bending moment of a ship may be computed by the equation M = ΔL/K ft·tons (MN·m), where Δ = displacement, tons (MN); L = length, ft (m); and K is as listed in Table 11.3.2. The above formulas have been successfully applied in the past as a convenient tool for the initial assessment of strength requirements for various designs. At present, computer programs are routinely used in ship structural design. Empirical formulas for the individual calculation of both still water and wave-induced bending moments have been derived by the classification societies. They are based on ship length, breadth, block coefficient, and effective wave height (for the wave-induced bending moment). Similarly, permissible bending stresses are calculated from formulas as a function of ship length and its service environment. Finite element techniques in the form of commercially available computer software packages combined with statistical reliability methods are being applied with increasing frequency. For ships, the maximum longitudinal bending stresses occur in the vicinity of the midship section at the deck edge and in the bottom plating. Maximum shear stresses occur in the shell plating in the vicinity of the quarter points at the neutral axis. For long, slender girders, such as ships, the maximum shear stress is small compared with the maximum bending stress. The structure of a ship consists of a grillage of stiffened plating supported by longitudinals, longitudinal girders, transverse beams, transverse frames, and web frames. Since the primary stress system in the hull arises from longitudinal bending, it follows that the longitudinally continuous structural elements are the most effective in carrying and distributing this stress system. The strength deck, particularly at the side, and the keel and turn of bilge are highly stressed regions. The shell plating, particularly deck and bottom plating, carry the major part of the stress. Other key longitudinal elements, as shown in Fig. 11.3.5, are longitudinal deck girders, main deck stringer plate, gunwale angle, sheer strake, bilge strake, inner bottom margin plate, garboard strake, flat plate keel, center vertical keel, and rider plate. Transverse elements include deck beams and transverse frames and web frames which serve to support and transmit vertical and transverse loads and resist hydrostatic pressure. Good structural design minimizes the structural weight while providing adequate strength, minimizes interference with ship function, includes access for maintenance and inspection, provides for effective continuity of the structure, facilitates stress flow around deck openings and other stress obstacles, and avoids square corner discontinuities in the plating and other stress concentration “hot spots.” A longitudinally framed ship is one which has closely spaced longitudinal structural elements and widely spaced transverse elements. A transversely framed ship has closely spaced transverse elements and widely spaced longitudinal elements.
A longitudinal framing system is generally more efficient structurally, but because of the deep web frames supporting the longitudinals, it is less efficient in the use of internal space than the transverse framing system. Where interruptions of open internal spaces are unimportant, as in tankers and bulk carriers, longitudinal framing is universally used. However, modern practice tends increasingly toward longitudinal framing in other types of ships also.
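As a first-pass illustration of the approximate bending-moment and section-modulus relations above, the following sketch evaluates M = ΔL/K and Z = M/σall for an assumed ship; the particulars and the choice of K are assumptions, not handbook data.

```python
# Rough first-pass strength estimate using M = disp*L/K (Table 11.3.2) and the
# allowable stress 1.19*L^(1/3) tons/in^2. Ship values below are assumed.
disp_tons = 25_000.0   # displacement, long tons
L_ft      = 550.0      # length, ft
K         = 30.0       # assumed constant for a high-speed cargo/passenger ship (Table 11.3.2: 29-36)

M = disp_tons * L_ft / K                   # approximate bending moment, ft*tons
sigma_allow = 1.19 * L_ft ** (1.0 / 3.0)   # allowable bending stress, tons/in^2
Z_required = M / sigma_allow               # required section modulus, in^2*ft

print(round(M), round(sigma_allow, 2), round(Z_required))
# ~458,333 ft*tons, ~9.75 tons/in^2, ~47,000 in^2*ft
```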

Fig. 11.3.5 Key structural elements.

Stability

Fig. 11.3.6 Ship stability.

Fig. 11.3.7 Static stability curve.

A ship afloat is in vertical equilibrium when the force of gravity, acting at the ship’s center of gravity G, is equal, opposite, and in the same vertical line as the force of buoyancy, acting upward at the center of buoyancy B. Figure 11.3.6 shows that an upsetting, transverse couple acting on the ship causes it to rotate about a longitudinal axis, taking a list φ. G does not change; however, B moves to B1, the centroid of the new underwater volume. The resulting couple, created by the transverse separation of the two forces (GZ), opposes the upsetting couple, thereby righting the ship. A static stability curve (Fig. 11.3.7), consisting of


values for the righting arm GZ plotted against angles of inclination (φ), gives a graphic representation of the static stability of the ship. The primary indicator for the safety of a ship is the maximum righting arm it develops and the angle of heel at which this righting arm occurs. For small angles of inclination (φ ≤ 10°), centers of buoyancy follow a locus whose instantaneous center of curvature M is known as the transverse metacenter. When M lies above G (GM > 0), the resulting gravity-buoyancy couple will right the ship; the ship has positive stability. When G and M are coincident, the ship has neutral stability. When G is above M (GM < 0), negative stability results. Hence GM, known as the transverse metacentric height, is an indication of initial stability of a ship. The transverse metacentric radius BM and the vertical location of the center of buoyancy are determined by the design of the ship and can be calculated. Once the vertical location of the ship’s center of gravity is known, then GM can be found. The vertical center of gravity of practically all ships varies with the condition of loading and must be determined either by a careful calculation or by an inclining experiment. Minimum values of GM ranging from 1.5 to 3.5 ft (0.46 to 1.07 m), corresponding to small and large seagoing ships, respectively, have been accepted in the past. The GM of passenger ships should be under 6 percent of the beam to ensure a reasonable (comfortable) rolling period. This relatively low value is offset by the generally large range of stability due to the high freeboard of passenger ships. The question of longitudinal stability affects the trim of a ship. As in the transverse case, a longitudinal metacenter exists, and a longitudinal metacentric height GML can be determined. The moment to alter trim 1 in, MT1, is computed by MT1 = ΔGML/(12L), ft·tons/in, and the moment to alter trim 1 m is MT1 = 10⁶ ΔGML/L, N·m/m. Displacement Δ is in tons and MN, respectively. The location of a ship’s center of gravity changes as small weights are shifted within the system. The vertical, transverse, or longitudinal component of movement of the center of gravity is computed by GG1 = wd/Δ, where w is the small weight, d is the distance the weight is shifted in a component direction, and Δ is the displacement of the ship, which includes w. Vertical changes in the location of G caused by weight addition or removal can be calculated by

KG1 = (ΔKG ± wKg)/Δ1

where Δ1 = Δ ± w and Kg is the height of the center of gravity of w above the keel. The free surface of the liquid in fuel oil, lubricating oil, and water storage and service tanks is deleterious to ship stability. The weight

shift of the liquid as the ship heels can be represented as a virtual rise in G, hence a reduction in GM and the ship’s initial stability. This virtual rise, called the free surface effect, is calculated by the expression GGv = γi i/(γw ∇), where γi, γw = specific gravities of liquid in tank and sea, respectively; i = moment of inertia of free surface area about its longitudinal centroidal axis; and ∇ = volume of displacement of ship. The ship designer can minimize this effect by designing long, narrow, deep tanks or by using baffles. Seakeeping

Seakeeping is how a ship moves in response to the ocean’s wave motion and determines how successfully the ship will function in a seaway. These motions in turn directly affect the practices of marine engineers. The motion of a floating object has six degrees of freedom. Figure 11.3.8 shows the conventional ship coordinate axes and ship motions. Oscillatory movement along the x axis is called surge; along the y axis, sway; and along the z axis, heave. Rotation about x is called roll; about y, pitch; and about z, yaw. Rolling has a major effect on crew comfort and on the structural and bearing requirements for machinery and its foundations. The natural period of roll of a ship is T = 2πK/√(g·GM), where K is the radius of gyration of virtual ship mass about a longitudinal axis through G. K varies from 0.4 to 0.5 of the beam, depending on the ship depth and transverse distribution of weights. Angular acceleration of roll, if large, has a very distressing effect on crew, passengers, machinery, and structure. This can be minimized by increasing the period or by decreasing the roll amplitude. Maximum angular acceleration of roll is d²φ/dt² = -4π²(φmax/T²), where φmax is the maximum roll amplitude. The period of the roll can be increased effectively by decreasing GM; hence the lowest value of GM compatible with all stability criteria should be sought. Pitching is in many respects analogous to rolling. The natural period of pitching, bow up to bow down, can be found by using the same expression used for the rolling period, with the longitudinal radius of gyration KL substituted for K. A good approximation is KL ≈ L/4, where L is the length of the ship. The natural period of pitching is usually between one-third and one-half the natural period of roll. A by-product of pitching is slamming, the reentry of bow sections into the sea with heavy force. Heaving and yawing are the other two principal rigid-body motions caused by the sea. The amplitude of heave associated with head seas, which generate pitching, may be as much as 15 ft (4.57 m); that arising

Fig. 11.3.8 Conventional ship coordinate axes and ship motions.


from beam seas, which induce rolling, can be even greater. Yawing is started by unequal forces acting on a ship as it quarters into a sea. Once a ship begins to yaw, it behaves as it would at the start of a rudder-actuated turn. As the ship travels in a direction oblique to its plane of symmetry, forces are generated which force it into heel angles independent of sea-induced rolling. Analysis of ship movements in a seaway is performed by using probabilistic and statistical techniques. The seaway is defined by a mathematically modeled wave energy density spectrum based on data gathered by oceanographers. This wave spectrum is statistically calculated for various sea routes and weather conditions. One way of determining a ship response amplitude operator or transfer function is by linearly superimposing a ship’s responses to varying regular waves, both experimentally and theoretically. The wave spectrum multiplied by the ship response transfer function yields the ship response spectrum. The ship response spectrum is an energy density spectrum from which the statistical character of ship motions can be predicted. (See Edward V. Lewis, Motions in Waves, “Principles of Naval Architecture,” SNAME, 1989.) Resistance and Powering

Resistance to ship motion through the water is the aggregate of (1) wave-making, (2) frictional, (3) pressure or form, and (4) air resistances. Wave-making resistance is primarily a function of Froude number, Fr = v/√(gL), where v = ship speed, ft/s (m/s); g = acceleration of gravity, ft/s² (m/s²); and L = ship length, ft (m). In many instances, the dimensional speed-length ratio V/√L is used for convenience, where V is given in knots. A ship makes at least two distinct wave patterns, one from the bow and the other at the stern. There also may be other patterns caused by abrupt changes in section. These patterns combine to form the total wave system for the ship. At various speeds there is mutual cancellation and reinforcement of these patterns. Thus, a plot of total resistance of the ship versus Fr or V/√L is not smooth but shows humps and hollows corresponding to the wave cancellation or reinforcement. Normal procedure is to design the operating speed of a ship to fall at one of the low points in the resistance curve. Frictional resistance is a function of Reynolds number (see Sec. 3). Because of the size of a ship, the Reynolds number is large and the flow is always turbulent. Pressure or form resistance is a viscosity effect but is different from frictional resistance. The principal observed effects are boundary-layer separation and eddying near the stern. It is usual practice to combine the wave-making and pressure resistances into one term, called the residuary resistance, assumed to be a function of Froude number. The combination, although not strictly legitimate, is practical because the pressure resistance is usually only 2 to 3 percent of the total resistance. The frictional resistance is then the only term which is considered to be a function of Reynolds number and can be calculated. Based on an analysis of the water resistance of flat, smooth planes, Schoenherr gives the formula


Rf = 0.5ρSv²Cf

where Rf = frictional resistance, lb (N); ρ = mass density, lb·s²/ft⁴ (kg/m³); S = wetted surface area, ft² (m²); v = velocity, ft/s (m/s); and Cf is the frictional coefficient computed from the ITTC formula

Cf = 0.075/(log10 Re - 2)²

and Re = Reynolds number = vL/ν. Through towing tests of ship models at a series of speeds for which Froude numbers are equal between the model and the ship, total model resistance (Rtm) is determined. Residuary resistance (Rrm) for the model is obtained by subtracting the frictional resistance (Rfm). By Froude’s law of comparison, the residuary resistance of the ship (Rrs) is equal to Rrm multiplied by the ratio of ship displacement to model displacement. Total ship resistance (Rts) then is equal to the sum of Rrs, the calculated ship frictional resistance (Rfs), and a correlation allowance that allows for the roughness of the ship’s hull as opposed to the smooth hull of a model.

Rtm = measured resistance
Rrm = Rtm - Rfm
Rrs = Rrm(Δs/Δm)
Rts = Rrs + Rfs + Ra

A nominal value of 0.0004 is generally used as the correlation allowance coefficient Ca; Ra = 0.5ρSv²Ca. From Rts, the total effective power PE required to propel the ship can be determined:

PE = Rts·v/550  ehp   (PE = Rts·v/1,000  kWE)

where Rts = total ship resistance, lb (N); v = velocity, ft/s (m/s); and PE = Ps × P.C., where Ps is the shaft power (see Propulsion Systems) and P.C. is the propulsive coefficient, a factor which takes into consideration mechanical losses, propeller efficiency, and the flow interaction with the hull. P.C. = 0.45 to 0.53 for high-speed craft; 0.50 to 0.60 for tugs and trawlers; 0.55 to 0.65 for destroyers; and 0.63 to 0.72 for merchant ships. Figure 11.3.9 illustrates the specific effective power for various displacement hull forms and planing craft over their appropriate speed regimes. Figure 11.3.10 shows the general trend of specific resistance versus Froude number and may be used for coarse powering estimates.

Fig. 11.3.9 Specific effective power for various speed-length regimes. (From “Handbook of Ocean and Underwater Engineering,” McGraw-Hill, 1969.)

Fig. 11.3.10 General trends of specific resistance versus Froude number.
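The model-to-ship extrapolation above reduces to a few lines of arithmetic. The sketch below evaluates the ITTC friction line and the effective-power relation for an assumed set of ship particulars; the wetted surface, residuary resistance, and propulsive coefficient used here are illustrative assumptions only.

```python
import math

# Illustrative sketch of the frictional-resistance and effective-power relations above.
# Ship particulars are assumed; Ca = 0.0004 is the nominal correlation allowance quoted in the text.
rho = 1.9905        # mass density of salt water, lb*s^2/ft^4
nu  = 1.2791e-5     # kinematic viscosity of salt water, ft^2/s (assumed, about 59 deg F)
L   = 500.0         # waterline length, ft
S   = 48_000.0      # wetted surface, ft^2 (assumed)
V_knots = 16.0
v   = V_knots * 1.6878                    # knots -> ft/s

Re  = v * L / nu                          # Reynolds number            ~1.06e9
Cf  = 0.075 / (math.log10(Re) - 2.0)**2   # ITTC friction line         ~0.00152
Ca  = 0.0004                              # correlation allowance
Rf  = 0.5 * rho * S * v**2 * (Cf + Ca)    # friction + allowance, lb   ~66,900

Rrs = 35_000.0                            # residuary resistance scaled from a model test (assumed), lb
Rts = Rf + Rrs                            # total ship resistance, lb
PE  = Rts * v / 550.0                     # effective power, ehp       ~5,000
Ps  = PE / 0.65                           # shaft power with an assumed P.C. of 0.65

print(round(Cf, 5), round(Rf), round(PE), round(Ps))
```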


In early design stages, the hull form is not yet defined, and model testing is not feasible. Reasonably accurate power calculations are made by using preplotted series model test data of similar hull form, such as from Taylor’s Standard Series and the Series 60. The Standard Series data were originally compiled by Admiral David W. Taylor and were based on model tests of a series of uniformly varied twin screw, cruiser hull forms of similar geometry. Revised Taylor’s Series data are available in “A Reanalysis of the Original Test Data for the Taylor Standard Series,” Taylor Model Basin, Rept. no. 806. Air Resistance The air resistance of ships in a windless sea is only a few percent of the water resistance. However, the effect of wind can have a significant impact on drag. The wind resistance parallel to the ship’s axis is roughly 30 percent greater when the wind direction is about 30° (π/6 rad) off the bow vice dead ahead, since the projected above-water area is greater. Wind resistance is approximated by RA = 0.002B²VR² lb (0.36B²VR² N), where B = ship’s beam, ft (m), and VR = ship speed relative to the wind, in knots (m/s). Powering of Small Craft The American Boat and Yacht Council, Inc. (ABYC) provides guidance for determining the maximum power for propulsion of outboard boats, to evaluate the suitability of power installed in inboard boats, and to determine maneuvering speed. Figures 11.3.11 and 11.3.12 demonstrate such data. The latest standard review dates are available at www.abycinc.org. The development of high-power lightweight engines facilitated the evolution of planing hulls, which develop dynamic lift. Small high-speed craft primarily use gasoline engines for powering because of weight and cost considerations. However, small high-speed diesel engines are commonly used in pleasure boats.
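The wind-resistance approximation above is easily evaluated; the example values in the sketch below are assumed, not from the handbook.

```python
def wind_resistance_lb(beam_ft: float, rel_wind_knots: float) -> float:
    """RA = 0.002*B^2*VR^2, lb, with B in ft and VR = ship speed relative to the wind, knots."""
    return 0.002 * beam_ft**2 * rel_wind_knots**2

# Assumed example: 90-ft beam, 15-knot ship speed into a 25-knot headwind (VR = 40 knots).
print(round(wind_resistance_lb(90.0, 40.0)))   # ~25,900 lb
```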

ENGINEERING CONSTRAINTS

The constraints affecting marine engineering design are too numerous, and some too obvious, to include in this section. Three significant categories, however, are discussed. The geometry of the hull forms immediately suggests physical constraints. The interaction of the vehicle with the marine environment suggests dynamic constraints, particularly vibration, noise, and shock. The broad topic of environmental protection is one of the foremost engineering constraints of today, having a very pronounced effect on the operating systems of a marine vehicle. Physical Constraints

Formerly, tonnage laws in effect made it economically desirable for the propulsion machinery spaces of a merchant ship to exceed 13 percent of the gross tonnage of the ship so that 32 percent of the gross tonnage could be deducted in computing net tonnage. In most design configurations, however, a great effort is made to minimize the space required for the propulsion plant in order to maximize that available to the mission or the money-making aspects of the ship. Specifically, space is of extreme importance as each component of support equipment is selected. Each component must fit into the master compact arrangement scheme to provide the most efficient operation and maintenance by engineering personnel. Weight constraints for a main propulsion plant vary with the application. In a tanker where cargo capacity is limited by draft restrictions, the weight of machinery represents lost cargo. Cargo ships, on the other hand, rarely operate at full load draft. Additionally, the low weight of propulsion machinery somewhat improves inherent cargo ship stability deficiencies. The weight of each component of equipment is constrained by structural support and shock resistance considerations. Naval shipboard equipment, in general, is carefully analyzed to effect weight reduction. Dynamic Constraints

Fig. 11.3.11 Boat brake power capacity for length-width factor under 52. (From “Safety Standards for Small Craft,” ABYC, 1974.)

Fig. 11.3.12 Boat brake power capacity for length-width factor over 52. (From “Safety Standards for Small Craft,” ABYC, 1974.)

Dynamic effects, principally mechanical vibration, noise, shock, and ship motions, are considered in determining the dynamic characteristics of a ship and the dynamic requirements for equipment. Vibration (See Sec. 9.) Vibration analyses are especially important in the design of the propulsion shafting system and its relation to the excitation forces resulting from the propeller. The main propulsion shafting can vibrate in longitudinal, torsional, and lateral modes. Modes of hull vibration may be vertical, horizontal, torsional, or longitudinal; may occur separately or coupled; may be excited by synchronization with periodic harmonics of the propeller forces acting either through the shafting, by the propeller force field interacting with the hull afterbody, or both; and may also be set up by unbalanced harmonic forces from the main machinery, or, in some cases, by impact excitation from slamming or periodic-wave encounter. It is most important to reduce the excitation forces at the source. Very objectionable and serious vibrations may occur when the frequency of the exciting force coincides with one of the hull or shafting-system natural frequencies. Vibratory forces generated by the propeller are (1) alternating pressure forces on the hull due to the alternating hydrodynamic pressure fields of the propeller blades; (2) alternating propeller-shaft bearing forces, primarily caused by wake irregularities; and (3) alternating forces transmitted throughout the shafting system, also primarily caused by wake irregularities. The most effective means to ensure a satisfactory level of vibration is to provide adequate clearance between the propeller and the hull surface and to select the propeller revolutions or number of blades to avoid synchronism. Replacement of a four-blade propeller, for instance, by a three- or five-blade, or vice versa, may bring about a reduction in vibration. Singing propellers are due to the vibration of the propeller blade edge about the blade body, producing a disagreeable noise or hum. Chamfering the edge is sometimes helpful. Vibration due to variations in engine torque reaction is hard to overcome. In ships with large diesel engines, the torque reaction tends to


produce an athwartship motion of the upper ends of the engines. The motion is increased if the engine foundation strength is inadequate. Foundations must be designed to take both the thrust and torque of the shaft and be sufficiently rigid to maintain alignment when the ship’s hull is working in heavy seas. Engines designed for foundations with three or four points of support are relatively insensitive to minor working of foundations. Consideration should be given to using flexible couplings in cases when alignment cannot be assured. Torsional vibration frequency of the prime mover–shafting-propeller system should be carefully computed, and necessary steps taken to ensure that its natural frequency is clear of the frequency of the main-unit or propeller heavy-torque variations; serious failures have occurred. Problems due to resonance of engine torsional vibration (unbalanced forces) and foundations or hull structure are usually found after ship trials, and correction consists of local stiffening of the hull structure. If possible, engines should be located at the nodes of hull vibrations. Although more rare, a longitudinal vibration of the propulsion shafting has occurred when the natural frequency of the shafting agreed with that of a pulsating axial force, such as propeller thrust variation. Other forces inducing vibrations may be vertical inertia forces due to the acceleration of reciprocating parts of an unbalanced reciprocating engine or pump; longitudinal inertia forces due to reciprocating parts or an unbalanced crankshaft creating unbalanced rocking moments; and horizontal and vertical components of centrifugal forces developed by unbalanced rotating parts. Rotating parts can be balanced. Noise The noise characteristics of shipboard systems are increasingly important, particularly in naval combatant submarines where remaining undetected is essential, and also from a human-factors point of view on all ships. Achieving significant reduction in machinery noise level can be costly. Therefore desired noise levels should be analyzed. Each operating system and each piece of rotating or reciprocating machinery installed aboard a submarine are subjected to intensive airborne (noise) and structureborne (mechanical) vibration analyses and tests. Depending on the noise attenuation required, similar tests and analyses are also conducted for all surface ships, military and merchant. Shock In naval combatant ships, shock loading due to noncontact underwater explosions is a major design parameter. Methods of qualifying equipment as “shock-resistant” might include “static” shock analysis, “dynamic” shock analysis, physical shock tests, or a combination. Motions Marine lubricating systems are specifically distinguished by the necessity of including list, trim, roll, and pitch as design criteria. The American Bureau of Shipping requires satisfactory functioning of lubricating systems when the vessel is permanently inclined to certain angles athwartship and fore and aft. In addition, electric generator bearings must not spill oil under momentary roll specifications. Military specifications are similar, but generally add greater requirements for roll and pitch. Bearing loads are also affected by ship roll and pitch accelerations, and thus rotating equipment often must be arranged longitudinally.
Environmental Constraints

The Refuse Act of 1899 (33 U.S.C. 407) prohibits discharge of any refuse material from marine vehicles into navigable waters of the United States or into their tributaries or onto their banks if the refuse is likely to wash into navigable waters. The term refuse includes nearly any substance, but the law contains provisions for sewage. The Environmental Protection Agency (EPA) may grant permits for the discharge of refuse into navigable waters.

The Oil Pollution Act of 1961 (33 U.S.C. 1001-1015) prohibits the discharge of oil or oily mixtures (over 100 mg/L) from vehicles generally within 50 mi of land; some exclusions, however, are granted. (The Oil Pollution Act of 1924 was repealed in 1970 because of supersession by subsequent legislation.)

The Port and Waterways Safety Act of 1972 (PL92-340) grants the U.S. Coast Guard authority to establish and operate mandatory vehicle traffic control systems. The control system must consist of a VHF radio for ship-to-shore communications, as a minimum. The Act, in effect, also extends the provisions of the Tank Vessel Act, which protects against hazards to life and property, to include protection of the marine environment. Regulations stemming from this Act govern standards of tanker design, construction, alteration, repair, maintenance, and operation.

The most substantive marine environmental protection legislation is the Federal Water Pollution Control Act (FWPCA), 1948, as amended (33 U.S.C. 466 et seq.). The 1972 and 1978 Amendments contain provisions which greatly expand federal authority to deal with marine pollution, providing authority for control of pollution by oil and hazardous substances other than oil, and for the assessment of penalties. Navigable waters are now defined as “. . . the waters of the United States, including territorial seas.” “Hazardous substances other than oil” and harmful quantities of these substances are defined by the EPA in regulations. The Coast Guard and EPA must ensure removal of such discharges, with the costs borne by the responsible vehicle owner or operator if liable. Penalties now may be assessed by the Coast Guard for any discharge. The person in charge of a vehicle is to notify the appropriate U.S. government agency of any discharge upon knowledge of it. The Coast Guard regulations establish removal procedures in coastal areas, contain regulations to prevent discharges from vehicles and transfer facilities, and include regulations governing the inspection of cargo vessels to prevent hazardous discharges. The EPA regulations govern inland areas and non-transportation-related facilities.

Section 312 of the FWPCA as amended in 1972 deals directly with discharge of sewage. The EPA must issue standards for marine sanitation devices, and the Coast Guard must issue regulations for implementing those standards. In June 1972, EPA published standards that now prohibit discharge of any sewage from vessels, whether it is treated or not. Federal law now prohibits, on all inland waters, operation with marine sanitation devices which have not been securely sealed or otherwise made visually inoperative so as to prevent overboard discharge.

The Marine Plastic Pollution Control Act of 1987 (PL100-220) prohibits the disposal of plastics at sea within U.S. waters. The Oil Pollution Act of 1990 (PL101-380) provides extensive legislation on liability and includes measures that impact the future use of single-hull tankers.

PROPULSION SYSTEMS
(See Sec. 9 for component details.)

The basic operating requirement for the main propulsion system is to propel the vehicle at the required sustained speed for the range or endurance required and to provide suitable maneuvering capabilities. In meeting this basic requirement, the marine propulsion system integrates the power generator/prime mover, the transmission system, the propulsor, and other shipboard systems with the ship's hull or vehicle platform. Figure 11.3.13 shows propulsion system alternatives with the most popular drives for fixed-pitch and controllable-pitch propellers.

Definitions for Propulsion Systems Brake power PB, bhp (kWB), is the power delivered by the output coupling of a prime mover before passing through speed-reducing and transmission devices and with all continuously operating engine-driven auxiliaries in use. Shaft power Ps, shp (kWs), is the net power supplied to the propeller shafting after passing through all reduction gears or other transmission devices and after power for all attached auxiliaries has been taken off. Shaft power is measured in the shafting within the ship by a torsionmeter as close to the propeller or stern tube as possible. Delivered power PD, dhp (kWD), is the power actually delivered to the propeller, somewhat less than Ps because of the power losses in the stern tube and other bearings between the point of measurement and the propeller. Sometimes called propeller power, it is also equal to the effective power PE, ehp (kWE), plus the power losses in the propeller and the losses in the interaction between the propeller and the ship.
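The chain from brake power to delivered power can be illustrated with a short calculation. The sketch below is illustrative only; the 1.5 percent gear loss is an assumed figure, while the 2 percent shafting loss reflects the PD = 0.98PS approximation used later in this section. Actual losses must come from the equipment and shafting data for the installation.

# Illustrative power chain only; the gear efficiency is an assumed value.
PB = 25_000          # brake power at the prime-mover output coupling, bhp
PS = 0.985 * PB      # shaft power after the reduction gear (assumed 1.5 percent gear loss)
PD = 0.98 * PS       # delivered power at the propeller (stern-tube and bearing losses)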
Fig. 11.3.13 Alternatives in the selection of a main propulsion plant. (From “Marine Engineering,” SNAME, 1992.)

Normal shaft power or normal power is the power at which a marine vehicle is designed to run for most of its service life. Maximum shaft power is the highest power for which propulsion machinery is designed to operate continuously. Service speed is the actual speed maintained by a vehicle at load draft on its normal route and under average weather and sea conditions typical of that route and with average fouling of bottom. Designed service speed is the speed expected on trials in fair weather at load draft, with clean bottom, when machinery is developing a specified fraction of maximum shaft power. This fraction may vary but is of the order of 0.8.

MAIN PROPULSION PLANTS

Steam Plants

The basic steam propulsion plant contains main boilers, steam turbines, a condensate system, a feedwater system, and numerous auxiliary components necessary for the plant to function. A heat balance calculation, the basic tool for determining the effect of various configurations on plant thermal efficiency, is demonstrated for the basic steam cycle. Both fossil-fuel and nuclear energy sources are successfully employed for marine applications.

Main Boilers (See Sec. 9.) The pressures and temperatures achieved in steam-generating equipment have increased steadily over recent years, permitting either a higher-power installation for a given space or a reduction in the size and weight of a given propulsion plant. The trend in steam pressures and temperatures has been an increase from 600 psig (4.14 MN/m2) and 850°F (454°C) during World War II to 1,200 psig (8.27 MN/m2) and 950°F (510°C) for naval combatants in and since the postwar era. For merchant ships, the progression has been from 400 psig (2.76 MN/m2) and 750°F (399°C) gradually up to 850 psig (5.86 MN/m2) and 850°F (454°C) in the 1960s, with some boilers at 1,500 psig (10.34 MN/m2) and 1,000°F (538°C) in the 1970s. The quantity of steam produced by a marine boiler ranges from approximately 1,500 lb/h (680 kg/h) in small auxiliary boilers to over 400,000 lb/h (181,500 kg/h) in large main propulsion boilers. Outputs of 750,000 lb/h (340,200 kg/h) or more per boiler are practical for high-power installations.

Most marine boilers are oil-fired. Compared with other fuels, oil is easily loaded aboard ship, stored, and introduced into the furnace, and does not require the ash-handling facilities required for coal firing. Gas-fired boilers are used primarily on power or drill barges which are fixed in location and can be supplied from shore (normally classed in the Ocean Engineering category). At sea, tankers designed to carry liquefied natural gas (LNG) may use the natural boil-off from their cargo gas tanks as a supplemental fuel (dual fuel system). The cargo gas boil-off is collected and pumped to the boilers where it is burned in conjunction with oil. The quantity of boil-off available is a function of the ambient sea and air temperatures, the ship's motion, and the cargo loading; thus, it may vary from day to day.

For economy of space, weight, and cost and for ease of operation, the trend in boiler installations is for fewer boilers of high capacity rather than a large number of boilers of lower capacity. The minimum installation is usually two boilers, to ensure propulsion if one boiler is lost; one boiler per shaft for twin-screw ships. Some large ocean-going ships operate on single boilers, requiring exceptional reliability in boiler design and operation.

Combustion systems include forced-draft fans or blowers, the fuel oil service system, burners, and combustion controls. Operation and maintenance of the combustion system are extremely important to the efficiency and reliability of the plant. The best combustion with the least possible excess air should be attained.

Main Turbines (See Sec. 9.) Single-expansion (i.e., single casing) marine steam turbines are fairly common at lower powers, usually not exceeding 4,000 to 6,000 shp (2,983 to 4,474 kWS). Above that power range, the turbines are usually double-expansion (cross-compound) with high- and low-pressure turbines each driving the main reduction gear through its own pinion. The low-pressure turbine normally contains the reversing turbine in its exhaust end. The main condenser is either underslung and supported from the turbine, or the condenser is carried on the foundations, with the low-pressure turbine supported on the condenser. The inherent advantages of the steam turbine have favored its use over the reciprocating steam engine for all large, modern marine steam propulsion plants. Turbines are not size-limited, and their high temperatures and pressures are accommodated safely.
Rotary motion is simpler than reciprocating motion; hence unbalanced forces can be eliminated in the turbine. The turbine can efficiently utilize a low exhaust pressure; it is lightweight and requires minimum space and low maintenance. The reheat cycle is the best and most economical means available to improve turbine efficiencies and fuel rates in marine steam propulsion plants. In the reheat cycle, steam is withdrawn from the turbine after partial expansion and is passed through a heat exchanger where its temperature is raised; it is then readmitted to the turbine for expansion to condenser pressure. Marine reheat plants have more modest steam conditions than land-based applications because of lower power ratings and a greater reliability requirement for ship safety. The reheat cycle is applied mostly in high-powered units above 25,000 shp (18,642 kWs) and offers the maximum economical thermal
efficiency that can be provided by a steam plant. Reheat cycles are not used in naval vessels because the improvement in efficiency does not warrant the additional complexity; hence a trade-off is made. Turbine foundations must have adequate rigidity to avoid vibration conditions. This is particularly important with respect to periodic variations in propeller thrust which may excite longitudinal vibrations in the propulsion system.

Condensate System (See Sec. 9.) The condensate system provides the means by which feedwater for the boilers is recovered and returned to the feedwater system. The major components of the condensate system of a marine propulsion plant are the main condenser, the condensate pumps, and the deaerating feed tank or heater. Both single-pass and two-pass condenser designs are used; the single-pass design, however, allows somewhat simpler construction and lower water velocities. The single-pass condenser is also adaptable to scoop circulation, as opposed to pump circulation, which is practical for higher-speed ships. The deaerating feedwater heater is supplied from the condensate pumps, which take suction from the condenser hot well, together with condensate drains from steam piping and various heaters. The deaerating feed heater is normally maintained at about 35 psig (0.241 MN/m2) and 280°F (138°C) by auxiliary exhaust and turbine extraction steam. The condensate is sprayed into the steam atmosphere at the top of the heater, and the heated feedwater is pumped from the bottom by the feed booster or main feed pumps. In addition to removing oxygen or air, the heater also acts as a surge tank to meet varying demands during maneuvering conditions. Since the feed pumps take suction where the water is almost saturated, the heater must be located 30 to 50 ft (9.14 to 15.24 m) above the pumps in order to provide enough positive suction head to prevent flashing from pressure fluctuations during sudden plant load changes.

Feedwater System The feedwater system comprises the pumps, piping, and controls necessary to transport feedwater to the boiler or steam generator, to raise water pressure above boiler pressure, and to control flow of feedwater to the boiler. Main feed pumps are so vital that they are usually installed in duplicate, providing a standby pump capable of feeding the boilers at full load. Auxiliary steam-turbine-driven centrifugal pumps are usually selected. A typical naval installation consists of three main feed booster pumps and three main feed pumps for each propulsion plant. Two of the booster pumps are turbine-driven and one is electric. Additionally, an electric-motor-driven emergency booster pump is usually provided. The total capacity of the main feed pumps must be 150 percent of the boiler requirement at full power plus the required recirculation capacity. Reliable feedwater regulators are important.

Steam Plant—Nuclear (See Sec. 9.) The compact nature of the energy source and the fact that no air is required for combustion are the most significant characteristics of nuclear power for marine applications. The fission of one gram of uranium per day produces about one megawatt of power. (One pound produces about 600,000 horsepower.) In other terms, the fission of 1 lb of uranium is equivalent to the combustion of about 86 tons (87,380 kg) of 18,500 Btu/lb (43.03 MJ/kg) fuel oil. This characteristic permits utilization of large power plants on board ship without the necessity for frequent refueling or large bunker storage.
Economic studies, however, show that nuclear power, as presently developed, is best suited for military purposes, where the advantages of high power and endurance override the pure economic considerations. As the physical size of nuclear propulsion plants is reduced, their economic attractiveness for commercial marine application will increase. Major differences between nuclear and fossil-fuel plants are: (1) the safety aspect of the nuclear reactor system—operating personnel must be shielded from fission product radiation, hence the size and weight of the reactor are increased and maintenance and reliability become more complicated; (2) the steam produced by a pressurized-water reactor plant is saturated and, because of the high moisture content in the turbine steam path, the turbine design requires more careful attention; (3) the steam pressure provided by a pressurized-water reactor plant varies with output, the maximum pressure occurring at no load
and decreasing approximately linearly with load to a minimum at full power; hence, blade stresses in a nuclear turbine increase more rapidly with a decrease in power than in a conventional turbine. Special attention must be given to the design of the control stage of a nuclear turbine. The N.S. Savannah was designed for 700 psig (4.83 MN/m2) at no load and 400 psig (2.76 MN/m2) at full power. Steam Plant—Heat Balance The heat-balance calculation is the basic analysis tool for determining the effect of various steam cycles on the thermal efficiency of the plant and for determining the quantities of steam and feedwater flow. The thermal arrangement of a simple steam cycle, illustrated in Fig. 11.3.14, and the following simplified analysis are taken from “Marine Engineering,” SNAME, 1992.

Fig. 11.3.14 Simple steam cycle. (From “Marine Engineering,” SNAME, 1992.)

The unit is assumed to develop 30,000 shp (22,371 kWs). The steam rate of the main propulsion turbines is 5.46 lb/(shp·h) [3.32 kg/(kWS·h)] with throttle conditions of 850 psig (5.86 MN/m2) and 950°F (510°C) and with the turbine exhausting to the condenser at 1.5 inHg abs (5,065 N/m2). To develop 30,000 shp (22,371 kWS), the throttle flow must be 163,800 lb/h (74,298 kg/h). The generator load is estimated to be 1,200 kW, and the turbogenerator is thus rated at 1,500 kW with a steam flow of 10,920 lb/h (4,953 kg/h) for steam conditions of 850 psig (5.86 MN/m2) and 950°F (510°C) and a 1.5 inHg abs (5,065 N/m2) back pressure. The total steam flow is therefore 174,720 lb/h (79,251 kg/h). Tracing the steam and water flow through the cycle, the flow exhausting from the main turbine is 163,800 − 250 = 163,550 lb/h (74,298 − 113 = 74,185 kg/h), with 250 lb/h (113 kg/h) going to the gland condenser, and from the auxiliary turbine is 10,870 lb/h (4,930 kg/h), with 50 lb/h (23 kg/h) to the gland condenser. The two gland leak-off flows return from the gland leak-off condenser to the main condenser. The condensate flow leaving the main condenser totals 174,720 lb/h (79,251 kg/h). It is customary to allow a 1°F (0.556°C) hot-well depression in the condensate temperature. Thus, at 1.5 inHg abs (5,065 N/m2) the condenser saturation temperature is 91.7°F (33.2°C) and the condensate is 90.7°F (32.6°C). Entering the gland condenser there is a total energy flow of 174,720 × 59.7 = 10,430,784 Btu/h (11,004,894,349 J/h). The gland condenser receives gland steam at 1,281 Btu/lb (2,979,561 J/kg) and drains at a 10°F (5.6°C) terminal difference, or 101.7°F (38.6°C). This adds a total of 300 × (1,281 − 69.7) or 363,390 Btu/h (383,390,985 J/h) to the condensate, making a total of 10,794,174 Btu/h (11,388,285,334 J/h) entering the surge tank. The feed leaves the surge tank at the same enthalpy with which it enters.
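The flow and energy bookkeeping just described can be summarized in a few lines of arithmetic. The Python sketch below simply repeats the text's figures (flows in lb/h, enthalpies in Btu/lb); it is not part of the handbook's procedure.

shp, steam_rate = 30_000, 5.46             # shaft power and main-turbine steam rate
main_flow = shp * steam_rate               # 163,800 lb/h throttle flow
tg_flow = 10_920                           # turbogenerator steam flow
total_flow = main_flow + tg_flow           # 174,720 lb/h total
gland = 250 + 50                           # gland leak-off from main and auxiliary turbines
h_condensate = 59.7                        # Btu/lb at 90.7 deg F (value from the text)
to_gland_condenser = total_flow * h_condensate      # 10,430,784 Btu/h entering the gland condenser
gland_heat = gland * (1_281 - 69.7)                 # 363,390 Btu/h added by the gland steam
to_surge_tank = to_gland_condenser + gland_heat     # 10,794,174 Btu/h entering the surge tank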
The feed pump puts an amount of heat into the feedwater equal to the total power of the pump, less any friction in the drive system. This friction work can be neglected, but the heat from the power input is a significant quantity. The power input is the total pump head in feet of feedwater, times the quantity pumped in pounds per hour, divided by the mechanical equivalent of heat and the efficiency. Thus,

Heat equivalent of feed pump work = 144 P vf Q/(778 E)  Btu/h   [= P vf Q/E  J/h]

where P = pressure change, lb/in2 (N/m2); vf = specific volume of fluid, ft3/lb (m3/kg); Q = mass rate of flow, lb/h (kg/h); and E = efficiency of pump. Assuming the feed pump raises the pressure from 15 to 1,015 lb/in2 abs (103,421 to 6,998,178 N/m2), the specific volume of the water is 0.0161 ft3/lb (0.001005 m3/kg), and the pump efficiency is 50 percent, the heat equivalent of the feed pump work is 1,041,313 Btu/h (1,098,626,868 J/h). This addition of heat gives a total of 11,835,487 Btu/h (12,486,912,202 J/h) entering the boiler. Assuming no leakage of steam, the steam leaves the boiler at 1,481.2 Btu/lb (3,445,219 J/kg), with a total thermal energy flow of 1,481.2 × 174,720 = 258,795,264 Btu/h (273,039,355,300 J/h). The difference between this total heat and that entering [258,795,264 − 11,835,487 = 246,959,777 Btu/h (260,552,443,098 J/h)] is the net heat added in the boiler by the fuel. With a boiler efficiency of 88 percent and a fuel having a higher heating value of 18,500 Btu/lb (43,030,353 J/kg), the quantity of fuel burned is

Fuel flow rate = 246,959,777/(18,500)(0.88) = 15,170 lb/h
or 260,552,443,098/(43,030,353)(0.88) = 6,881 kg/h
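As a check on these figures, the feed-pump heat and the fuel flow can be computed directly. The function below is only a sketch of the two relations just given, using the example's numbers; the names are not handbook notation.

def feed_pump_heat(dp_psi, vf, q_lb_h, eff):
    # 144 P vf Q / (778 E), Btu/h
    return 144 * dp_psi * vf * q_lb_h / (778 * eff)

q = 174_720                                               # feed flow, lb/h
pump_heat = feed_pump_heat(1_015 - 15, 0.0161, q, 0.50)   # ~1,041,000 Btu/h
into_boiler = 10_794_174 + pump_heat                      # ~11,835,000 Btu/h entering the boiler
leaving_boiler = 1_481.2 * q                              # 258,795,264 Btu/h leaving the boiler
net_heat = leaving_boiler - into_boiler                   # ~246,960,000 Btu/h added by the fuel
fuel_flow = net_heat / (18_500 * 0.88)                    # ~15,170 lb/h of fuel oil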

The specific fuel rate is the fuel flow rate divided by the net shaft power [15,170/30,000 = 0.506 lb/(shp·h) or 6,881/22,371 = 0.308 kg/(kWS·h)]. The heat rate is the quantity of heat expended to produce one horsepower-hour (one kilowatt-hour) and is calculated by dividing the net heat added to the plant, per hour, by the power produced:

Heat rate = (15,170 lb/h)(18,500 Btu/lb)/30,000 shp = 9,355 Btu/(shp·h)
or 296,091,858,933/22,371 = 13,235,522 J/(kWS·h)

This simple cycle omits many details that must necessarily be included in an actual steam plant. A continuation of the example, developing the details of a complete analysis, is given in “Marine Engineering,” SNAME, 1992. A heat balance is usually carried through several iterations until the desired level of accuracy is achieved. The first heat balance may be done from approximate data given in the SNAME Technical and Research Publication No. 3-11, “Recommended Practices for Preparing Marine Steam Power Plant Heat Balances,” and then updated as equipment data are known.
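Continuing with the same figures, the specific fuel rate and heat rate reduce to three lines of arithmetic (a sketch, not handbook notation):

specific_fuel_rate = 15_170 / 30_000           # ~0.506 lb/(shp*h)
heat_rate = 15_170 * 18_500 / 30_000           # ~9,355 Btu/(shp*h)
heat_rate_si = 6_881 * 43_030_353 / 22_371     # ~13.2 million J/(kWS*h)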
Diesel Engines (See Sec. 9.)
While steam plants are custom-designed, diesel engines and gas turbines are selected from commercial sources at discrete powers. Diesel engines are referred to as being high, medium, or low speed. Low-speed diesels are generally categorized as those with engine speeds less than about 300 r/min, and high-speed diesels as those in excess of 1,200 r/min. Low-speed marine diesel engines are directly coupled to the propeller shaft. Unlike steam turbines and gas turbines, which require special reversing provisions, most direct-drive diesel engines are readily reversible. Slow-speed diesel engines are well-suited to marine propulsion. Although larger, heavier, and initially more expensive than higher-speed engines, they generally have lower fuel, operating, and maintenance costs. Slow-speed engine parts take longer to wear to the same percentage of their original dimension than high-speed engine parts. Large-bore, low-speed diesel engines have inherently better combustion performance with lower-grade diesel fuels. However, a well-designed high-speed engine which is not overloaded can give equally good service as a slow-speed engine.

Medium- and High-Speed Diesels The number of medium- and high-speed diesel engines used in marine applications is relatively small compared to the total number of such engines produced. The mediumand high-speed marine engine of today is almost universally an adaptation of engines built in quantities for service in automotive and stationary applications. The automotive field contributes high-speed engine applications in the 400-hp (298-kW) range, with engine speeds commonly varying from 1,800 to 3,000 r/min, depending on whether use is continuous or intermittent. Diesels in the 1,200- to 1,800-r/min range are typical of off-highway equipment engines at powers from 500 to upward of 1,200 hp (373 to 895 kW). Medium-speed diesel engines in units from 6 to 20 cylinders are available at ratings up to and exceeding 8,000 hp (5,966 kW) from both locomotive and stationary engine manufacturers. These applications are not all-inclusive, but only examples of the wide variety of diesel ratings commercially available today. Some of the engines were designed with marine applications in mind, and others require some modification in external engine hardware. The changes are those needed to suit the engine to the marine environment, meaning salt-laden air, high humidity, use of corrosive seawater for cooling, and operating from a pitching and rolling platform. It also may mean an installation made in confined spaces. In order to adapt to this environment, the prime requisite of the marine diesel engine is the ability to resist corrosion. Because marine engines may be installed with their crankshafts at an angle to the horizontal and because they are subjected to more motion than in many other applications, changes are also necessary in the lubricating oil system. The air intake to a marine engine may not be dust-free and dirt-free when operating in harbors, inland waters, or close offshore; therefore, it is as important to provide a good air cleaner as in any automotive or stationary installation. Diesel engines are utilized in all types of marine vehicles, both in the merchant marine and in the navies of the world. The power range in which diesel engines have been used has increased directly with the availability of higher-power engines. The line of demarcation in horsepower between what is normally assigned to diesel and to steam has continually moved upward, as has the power installed in ships in general. Small boats in use in the Navy are usually powered by diesel engines, although the gas turbine is being used in special boats where high speed for short periods of time is the prime requirement. Going a little higher in power, diesels are used in many kinds of workboats such as fishing boats, tugs, ferries, dredges, river towboats and pushers, and smaller types of cargo ships and tankers. They are used in the naval counterparts of these ships and, in addition, for military craft such as minesweepers, landing ships, patrol and escort ships, amphibious vehicles, tenders, submarines, and special ships such as salvage and rescue ships and icebreakers. In the nuclear-powered submarine, the diesel is relegated to emergency generator-set use; however, it is still the best way to power a nonnuclear submarine when not operating on the batteries. This will likely change, giving way to the new technology of air-independent propulsion systems for nonnuclear submarines. Diesel engines are used either singly or in multiple to drive propeller shafts. 
For all but high-speed boats, the rpm of the modern diesel is too fast to drive the propeller directly with efficiency, and some means of speed reduction, either mechanical or electrical, is necessary. If a single engine of the power required for a given application is available, then a decision must be made whether it or several smaller engines should be used. The decision may be dictated by the available space. The diesel power plant is flexible in adapting to specific space requirements. When more than one engine is geared to the propeller shaft, the gear serves as both a speed reducer and combining gear. The same series of engines could be used in an electric-drive propulsion system, with even greater flexibility. Each engine drives its own generator and may be located independently of other engines and the propeller shaft. Low-Speed, Direct-Coupled Diesels Of the more usual prime mover selections, only low-speed diesel engines are directly coupled to the propeller shaft. This is due to the low rpm required for efficient propeller operation and the high rpm inherent with other types of prime movers. A rigid hull foundation, with a high resistance to vertical, athwartship, and fore-and-aft deflections, is required for the low-speed
diesel. The engine room must be designed with sufficient overhaul space and with access for large and heavy replacement parts to be lifted by cranes. Because of the low-frequency noise generated by low-speed engines, the operating platform often can be located at the engine itself. Special control rooms are often preferred as the noise level in the control room can be significantly less than that in the engine room. Electric power may be produced by a generator mounted directly on the line shafting. Operation of the entire plant may be automatic and remotely controlled from the bridge. The engine room is often completely unattended for 16 h a day. Gas Turbines (See Sec. 9.)

The gas turbine has developed since World War II to join the steam turbine and the diesel engine as alternative prime movers for various shipboard applications. In gas turbines, the efficiency of the components is extremely important since the compressor power is very high compared with its counterpart in competitive thermodynamic cycles. For example, a typical marine propulsion gas turbine rated at 20,000 bhp (14,914 kWB) might require a 30,000 hp (22,371 kW) compressor and, therefore, 50,000 hp (37,285 kW) in turbine power to balance the cycle. The basic advantages of the gas turbine for marine applications are its simplicity, small size, and light weight. As an internal-combustion engine, it is a self-contained power plant in one package with a minimum number of large supporting auxiliaries. It has the ability to start and go on line very quickly. Having no large masses that require slow heating, the time required for a gas turbine to reach full speed and accept the load is limited almost entirely by the rate at which energy can be supplied to accelerate the rotating components to speed. A further feature of the gas turbine is its low personnel requirement and ready adaptability to automation. The relative simplicity of the gas turbine has enabled it to attain outstanding records of reliability and maintainability when used for aircraft propulsion and in industrial service. The same level of reliability and maintainability is being achieved in marine service if the unit is properly applied and installed. Marine units derived from aircraft engines usually have the gas generator section, comprising the compressor and its turbine, arranged to be removed and replaced as a unit. Maintenance on the power turbine, which usually has the smallest part of the total maintenance requirements, is performed aboard ship. Because of their light weight, small gas turbines used for auxiliary power or the propulsion of small boats, can also be readily removed for maintenance. Units designed specifically for marine use and those derived from industrial gas turbines are usually maintained and overhauled in place. Since they are somewhat larger and heavier than the aviation-type units, removal and replacement are not as readily accomplished. For this reason, they usually have split casings and other provisions for easy access and maintenance. The work can be performed by the usual ship repair forces. Both single-shaft and split-shaft gas turbines can be used in marine service. Single-shaft units are most commonly used for generator drives. When used for main propulsion, where the propeller must operate over a very wide speed range, the single-shaft unit must have a controllablepitch propeller or some equivalent variable-speed transmission, such as an electric drive, because of its limited speed range and poor acceleration characteristics. A multishaft unit is normally used for main propulsion, with the usual arrangement being a split-shaft unit with an independent variable-speed power turbine; the power turbine and propeller can be stopped, if necessary, and the gas producer kept in operation for rapid load pickup. The use of variable-area nozzles on the power turbine increases flexibility by enabling the compressor to be maintained at or near full speed and the airflow at low-power turbine speeds. Nearly full power is available by adding fuel, without waiting for the compressor to accelerate and increase the airflow. Where low-load economy is of importance, the controls can be arranged to
reduce the compressor speed at low loads and maintain the maximum turbine inlet and/or exhaust temperature for best efficiency. Since a gas turbine inherently has a poor part-load fuel rate performance, this variable-area nozzle feature can be very advantageous. The physical arrangement of the various components, i.e., compressors, combustion systems, and turbines, that make up the gas turbine is influenced by the thermodynamic factors (i.e., the turbine connected to a compressor must develop enough power to drive it), by mechanical considerations (i.e., shafts must have adequate bearings, seals, etc.), and also by the necessity to conduct the very high air and gas flows to and from the various components with minimum pressure losses. In marine applications, the gas turbines usually cannot be mounted rigidly to the ship’s structure. Normal movement and distortions of the hull when under way would cause distortions and misalignment in the turbine and cause internal rubs or bearing and/or structural failure. The turbine components can be mounted on a subbase built up of structural sections of sufficient rigidity to maintain the gas-turbine alignment when properly supported by the ship’s hull. Since the gas turbine is a high-speed machine with output shaft speeds ranging from about 3,600 r/min for large machines up to 100,000 r/min for very small machines (approximately 25,000 r/min is an upper limit for units suitable for the propulsion of small boats), a reduction gear is necessary to reduce the speed to the range suitable for a propeller. Smaller units suitable for boats or driving auxiliary units, such as generators in larger vessels, frequently have a reduction gear integral with the unit. Larger units normally require a separate reduction gear, usually of the double- or triple-reduction type. A gas turbine, in common with all turbine machinery, is not inherently reversible and must be reversed by external means. Electric drives offer ready reversing but are usually ruled out on the basis of weight, cost, and to some extent efficiency. From a practical standpoint there are two alternatives, a reversing gear or a controllable reversible-pitch (CRP) propeller. Both have been used successfully in gas-turbinedriven ships, with the CRP favored. Combined Propulsion Plants

In some shipboard applications, diesel engines, gas turbines, and steam turbines can be employed effectively in various combinations. The prime movers may be combined either mechanically, thermodynamically, or both. The leveling out of specific fuel consumption over the operating speed range is the goal of most combined systems. The gas turbine is a very flexible power plant and consequently figures in most possible combinations which include combined diesel and gas-turbine plants (CODAG); combined gas- and steam-turbine plants (COGAS); and combined gas-turbine and gas-turbine plants (COGAG). In these cycles, gas turbines and other engines or gas turbines of two different sizes or types are combined in one plant to give optimum performance over a very wide range of power and speed. In addition, combinations of diesel or gas-turbine plants (CODOG), or gas-turbine or gas-turbine plants (COGOG), where one plant is a diesel or small gas turbine for use at low or cruising powers and the other a large gas turbine which operates alone at high powers, are also possibilities. Even the combination of a small nuclear plant and a gas turbine plant (CONAG) has undergone feasibility studies. COGAS Gas and steam turbines are connected to a common reduction gear, but thermodynamically are either independent or combined. Both gas- and steam-turbine drives are required to develop full power. In a thermodynamically independent plant, the boost power is furnished by a lightweight gas turbine. This combination produces a significant reduction in machinery weight. In the thermodynamically combined cycle, commonly called STAG (steam and gas turbine) or COGES (used with electric-drive systems), energy is recovered from the exhaust of the gas turbine and is used to augment the main propulsion system through a steam turbine. The principal advantage gained by the thermodynamic interconnection is the potential for improved overall efficiency and resulting fuel savings. In this arrangement, the gas turbine discharges to a heat-recovery boiler where a large quantity of heat in the exhaust gases is used to generate
steam. The boiler supplies the steam turbine that is geared to the propeller. The steam turbine may be coupled to provide part of the power for the gas-turbine compressor. The gas turbine may be used to provide additional propulsion power, or its exhaust may be recovered to supply heat for various ship's services (Fig. 11.3.15).

Fig. 11.3.15 COGAS system schematic.

Combined propulsion plants have been used in several applications in the past. CODOG plants have serviced some Navy patrol gunboats and the Coast Guard's Hamilton class cutters. Both COGOG and COGAS plants have been used in foreign navies, primarily for destroyer-type vessels.

PROPULSORS

The force to propel a marine vehicle arises from the rate of change of momentum induced in either the water or air. Since the force produced is directly proportional to the mass density of the fluid used, the reasonable choice is to induce the momentum change in water. If air were used, either the cross-sectional area of the jet must be large or the velocity must be high. A variety of propulsors are used to generate this stream of water aft relative to the vehicle: screw propellers, controllable-pitch propellers, water jets, vertical-axis propellers, and other thrust devices. Figure 11.3.16 indicates the type of propulsor which provides the best efficiency for a given vehicle type.

Fig. 11.3.16 Comparison of optimum efficiency values for different types of propulsors. (From “Marine Engineering,” SNAME, 1992.)

Screw Propellers

The screw propeller may be regarded as part of a helicoidal surface which, as it rotates, appears to “screw” its way through the water, driving water aft and the vehicle forward. A propeller is termed “right-handed” if it turns clockwise (viewed from aft) when producing ahead thrust; if counterclockwise, “left-handed.” In a twin-screw installation, the starboard propeller is normally right-handed and the port propeller left-handed. The surface of the propeller blade facing aft, which experiences the increase in pressure, producing thrust, is the face of the blade; the forward surface is the back. The face is commonly constructed as a true helical surface of constant pitch; the back is not a helical surface. A true helical surface is generated by a line rotated about an axis normal to itself and advancing in the direction of this axis at constant speed. The distance the line advances in one revolution is the pitch. For simple propellers, the pitch is constant on the face; but in practice, it is common for large propellers to have a reduced pitch toward the hub and less usually toward the tip. The pitch at 0.7 times the maximum radius is usually a representative mean pitch; maximum lift is generated at that approximate point. Pitch may be expressed as a dimensionless ratio, P/D. The shapes of blade outlines and sections vary greatly according to the type of ship for which the propeller is intended and to the designer's ideas. Figure 11.3.17 shows a typical design and defines many of the terms in common use. The projected area is the area of the projection of the propeller contour on a plane normal to the shaft, and the developed area is the total face area of all the blades. If the variation of helical chord length is known, then the true blade area, called expanded area, can be obtained graphically or analytically by integration.

Fig. 11.3.17 Typical propeller drawing. (From “Principles of Naval Architecture,” SNAME, 1988.)
Diameter = D
Pitch ratio = P/D
Pitch = P
Blade thickness ratio = t/D
No. of blades = 4
Pitch angle = φ
Disk area = area of tip circle = πD^2/4 = AO
Developed area of blades, outside hub = AD
Developed area ratio = DAR = AD/AO
Projected area of blades (on transverse plane) outside hub = AP
Projected area ratio = PAR = AP/AO
Blade width ratio = BWR = (max. blade width)/D
Mean width ratio = MWR = [AD/length of blades (outside hub)]/D

Consider a section of the propeller blade at a radius r with a pitch angle φ and pitch P working in an unyielding medium; in one revolution of the propeller it will advance a distance P. Turning n revolutions in unit time it will advance a distance P × n in that time. In a real fluid, there will be a certain amount of yielding when the propeller is developing thrust and the screw will not advance P × n, but some smaller distance. The difference between P × n and that smaller distance is called the slip velocity. Real slip ratio is defined in Fig. 11.3.18. A wake or a frictional belt of water accompanies every hull in motion; its velocity varies with the ship's speed, hull shape, the distance from the ship's side and from the bow, and the condition of the hull surface. For ordinary propeller design the wake velocity is a fraction w of the ship's speed: wake velocity = wV. The wake fraction w for a ship may be obtained from Fig. 11.3.19. The velocity of the ship relative to the ship's wake at the stern is VA = (1 − w)V. The apparent slip ratio SA is given by SA = (Pn − V)/Pn = 1 − V/Pn.

Fig. 11.3.18 Definition of slip. (From “Principles of Naval Architecture,” SNAME, 1988.) tan φ = Pn/(2πnr) = P/(2πr); real slip ratio = SR = MS/ML = (Pn − VA)/Pn = 1 − VA/Pn

Fig. 11.3.19 Wake fractions for single- and twin-screw models. (From “Principles of Naval Architecture,” SNAME, 1988.)

Although real slip ratio, which requires knowledge of the wake fraction, is a real guide to ship performance, the apparent slip ratio requires only ship speed, revolutions, and pitch to calculate and is therefore often recorded in ships' logs. For P in ft (m), n in r/min, and V in knots, SA = (Pn − 101.3V)/Pn [SA = (Pn − 30.9V)/Pn].

Propeller Design The design of a marine propeller is usually carried out by one of two methods. In the first, the design is based upon charts giving the results of open-water tests on a series of model propellers. These cover variations in a number of the design parameters such as pitch ratio, blade area, number of blades, and section shapes. A propeller which conforms with the characteristics of any particular series can be rapidly designed and drawn to suit the required ship conditions. The second method is used in cases where a propeller is heavily loaded and liable to cavitation or has to work in a very uneven wake pattern; it is based on purely theoretical calculations. Basically this involves finding the chord width, section shape, pitch, and efficiency at a number of radii to suit the average circumferential wake values and give optimum efficiency and protection from cavitation. By integration of the resulting thrust and torque-loading curves over the blades, the thrust, torque, and efficiency for the whole propeller can be found. Using the first method and Taylor's propeller and advance coefficients BP and d, a convenient practical design and an initial estimate of propeller size can be obtained:

BP = n(PD)^0.5/(VA)^2.5    [BP = 1.158n(PD)^0.5/(VA)^2.5]
d = nD/VA    [d = 3.281nD/VA]
hO = TVA/(325.7PDhR)    [hO = TVA/(1,942.5PDhR)]

where BP = Taylor's propeller coefficient; d = Taylor's advance coefficient; n = r/min; PD = delivered power, dhp (kWD); VA = speed of advance, knots; D = propeller diameter, ft (m); T = thrust, lb (N); hO = open-water propeller efficiency; hR = relative rotative efficiency (0.95 ≤ hR ≤ 1.0, twin-screw; 1.0 ≤ hR ≤ 1.05, single-screw), a factor which corrects hO to the efficiency in the actual flow conditions behind the ship. Assume a reasonable value for n and, using known values for PD and VA (for a useful approximation, PD = 0.98PS), calculate BP. Then enter Fig. 11.3.20, or an appropriate series of Taylor or Troost propeller charts, to determine d, hO, and P/D for optimum efficiency. The charts and parameters should be varied, and the results plotted to recognize the most suitable propeller.

Fig. 11.3.20 Typical Taylor propeller characteristic curves. (From “Handbook of Ocean and Underwater Engineering,” McGraw-Hill, 1969.)
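The chart-based sizing step can be sketched in a few lines of code. In the fragment below the function and variable names are illustrative only; the values of d and P/D must still be read from Fig. 11.3.20 or an equivalent Taylor or Troost chart (the 139 and 0.95 used here anticipate the worked example that follows).

def taylor_bp(n_rpm, pd_dhp, va_knots):
    # BP = n (PD)^0.5 / (VA)^2.5, English units
    return n_rpm * pd_dhp ** 0.5 / va_knots ** 2.5

def diameter_ft(delta, va_knots, n_rpm):
    # From d = nD/VA, so D = d VA/n
    return delta * va_knots / n_rpm

pd = 0.98 * 2_950                   # delivered power approximated from shaft power, dhp
bp = taylor_bp(160, pd, 14.19)      # ~11.3
D = diameter_ft(139, 14.19, 160)    # ~12.3 ft, with d = 139 read from the chart
P = 0.95 * D                        # ~11.7 ft, with P/D = 0.95 read from the chart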

Propeller cavitation, when severe, may result in marked increase in rpm, slip, and shaft power with little increase in ship speed or effective power. As cavitation develops, noise, vibration, and erosion of the propeller blades, struts, and rudders are experienced. It may occur either on the face or on the back of the propeller. Although cavitation of the face has little effect on thrust and torque, extensive cavitation of the back can materially affect thrust and, in general, requires either an increase in blade area or a decrease in propeller rpm to avoid. The erosion of the backs is caused by the collapse of cavitation bubbles as they move into higher pressure regions toward the trailing edge. Avoidance of cavitation is an important requirement in propeller design and selection. The Maritime Research Institute Netherlands (MARIN) suggests the following criterion for the minimum blade area required to avoid cavitation:

Ap^2 = T^2/[1,360(p0 − pv)^1.5 VA]    [Ap^2 = T^2/[5.44(p0 − pv)^1.5 VA]]

where Ap = projected area of blades, ft2 (m2); T = thrust, lb (N); p0 = pressure at screw centerline, due to water head plus atmosphere, lb/in2 (N/m2); pv = water-vapor pressure, lb/in2 (N/m2); and VA = speed of advance, knots (1 knot = 0.515 m/s); and

p0 − pv = 14.45 + 0.45h lb/in2    (p0 − pv = 99,629 + 10,179h N/m2)

where h = head of water at screw centerline, ft (m).
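A direct transcription of this criterion (English units) is sketched below; the function name is illustrative, and the thrust, head, and speed of advance must come from the design at hand. The call uses the values of the worked example later in this subsection.

def min_projected_area_ft2(thrust_lb, head_ft, va_knots):
    # Ap^2 = T^2 / [1,360 (p0 - pv)^1.5 VA], with p0 - pv = 14.45 + 0.45 h
    dp = 14.45 + 0.45 * head_ft
    return (thrust_lb ** 2 / (1_360 * dp ** 1.5 * va_knots)) ** 0.5

ap_min = min_projected_area_ft2(45_786, 9.0, 14.19)   # ~36.95 ft^2 minimum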

A four-blade propeller of 0.97 times the diameter of the three-blade, the same pitch ratio, and four-thirds the area will absorb the same power at the same rpm as the three-blade propeller. Similarly, a two-blade propeller of 5 percent greater diameter is approximately equivalent to a three-blade unit. Figure 11.3.20 shows that propeller efficiency increases with decrease in value of the propeller coefficient Bp. A low value of Bp in a slow-speed ship calls for a large-diameter (optimum) propeller. It is frequently necessary to limit the diameter of the propeller and accept the accompanying loss in efficiency. The number of propeller blades is usually three or four. Four blades are commonly used with single-screw merchant ships. Recently five, and
even seven, blades have been used to reduce vibration. Alternatively, highly skewed blades reduce vibration by creating a smoother interaction between the propeller and the ship's wake. Highly loaded propellers of fast ships and naval vessels call for large blade area, preferably three to five blades, to reduce blade interference and vibration. Propeller fore-and-aft clearance from the stern frame of large single-screw ships at 70 percent of propeller radius should be greater than 18 in. The stern frame should be streamlined. The clearance from the propeller tips to the shell plating of twin-screw vessels ranges from about 20 in for low-powered ships to 4 or 5 ft for high-powered ships. U.S. Navy practice generally calls for a propeller tip clearance of 0.25D. Many high-speed motorboats have a tip clearance of only several inches. The immersion of the propeller tips should be sufficient to prevent the drawing in of air.

EXAMPLE. Consider a three-blade propeller for a single-screw installation with Ps = 2,950 shp (2,200 kWs), V = 17 knots, and CB = 0.52. Find the propeller diameter D and efficiency hO for n = 160 r/min. From Fig. 11.3.19, the wake fraction w = 0.165. VA = V(1 − 0.165) = 14.19 knots. PD = 0.98Ps = 2,891 dhp (2,156 kWD). At 160 r/min, BP = 160(2,891)^0.5/(14.19)^2.5 = 11.34 [1.158 × 160(2,156)^0.5/(14.19)^2.5 = 11.34]. From Fig. 11.3.20, the following is obtained or calculated:

r/min    Bp       P/D     hO      d      D, ft (m)       P, ft (m)
160      11.34    0.95    0.69    139    12.33 (3.76)    11.71 (3.57)

For a screw centerline submergence of 9 ft (2.74 m) and a selected projected area ratio PAR = 0.3, investigate the cavitation criterion. Assume hR = 1.0.

Minimum projected area:
T = 325.7hOhRPD/VA = 325.7 × 0.69 × 1.0 × 2,891/14.19 = 45,786 lb
(T = 1,942.5 × 0.69 × 2,156/14.19 = 203,646 N)
p0 − pv = 14.45 + 0.45(9) = 18.5 lb/in2
[p0 − pv = 99,629 + 10,179(2.74) = 127,519 N/m2]
Ap^2 = (45,786)^2/[1,360(18.5)^1.5(14.19)] = 1,365, Ap = 36.95 ft2 minimum
[Ap^2 = (203,646)^2/[5.44(127,519)^1.5(14.19)] = 11.79, Ap = 3.43 m2 minimum]

Actual projected area: PAR = Ap/A0 = 0.3, so Ap = 0.3A0 = 0.3 × π(12.33)^2/4 = 35.82 ft2 (3.33 m2). Since the selected PAR does not meet the minimum cavitation criterion, either the blade width can be increased or a four-blade propeller can be adopted. To absorb the same power at the same rpm, a four-blade propeller of about 0.97(12.33) = 11.96 ft (3.65 m) diameter with the same blade shape would have an Ap = 35.82 × 4/3 × (11.96/12.33)^2 = 44.94 ft2 (4.17 m2), or about a 25 percent increase. The pitch = 0.95(11.96) = 11.36 ft (3.46 m). The real and apparent slip ratios for the four-blade propeller are:

SR = 1 − (101.3)(14.19)/[(11.36)(160)] = 0.209
SA = 1 − (30.9)(17.0)/[(3.46)(160)] = 0.051

that is, 20.9 and 5.1 percent (SA calculated using SI values). Since the projected area in the three-blade propeller of this example is only slightly under the minimum, a three-blade propeller with increased blade width would be the more appropriate choice.
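The closing slip ratios can be checked in two lines; the 101.3 and 30.9 factors simply convert knots to ft/min and m/min so that the speed terms are consistent with Pn. This is only a sketch of the arithmetic above.

SR = 1 - 101.3 * 14.19 / (11.36 * 160)    # real slip ratio, ~0.209
SA = 1 - 30.9 * 17.0 / (3.46 * 160)       # apparent slip ratio (SI pitch), ~0.051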
Controllable-Pitch Propellers
Controllable-pitch propellers are screw propellers in which the blades are separately mounted on the hub, each on an axis, and in which the pitch of the blades can be changed, and even reversed, while the propeller is turning. The pitch is changed by means of an internal mechanism consisting essentially of hydraulic pistons in the hub acting on crossheads. Controllable-pitch propellers are most suitable for vehicles which must meet different operating conditions, such as tugs, trawlers, ferries, minesweepers, and landing craft. As the propeller pitch is changed, the engine can still run at its most efficient speed. Maneuvering is more rapid since the pitch can be changed more rapidly than the shaft revolutions. With controllable-pitch propellers, neither reversing mechanisms in reciprocating engines nor astern turbines in turbine-powered vehicles are necessary, which is especially important in gas-turbine installations. Except for the larger hub needed

to house the pitch-changing mechanism, the controllable-pitch propeller can be made almost as efficient as the solid, fixed-blade propeller. Gas turbines and controllable reversible-pitch propellers (CRPs) are generally used in all recent U.S. Navy surface ship designs (except carriers). Water Jet

This method usually consists of an impeller or pump inside the hull, which draws water from outside, accelerates it, and discharges it astern as a jet at a higher velocity. It is a reaction device like the propeller but in which the moving parts are contained inside the hull, desirable for shallow-water operating conditions and maneuverability. The overall efficiency is lower than that of the screw propeller of diameter equal to the jet orifice diameter, principally because of inlet and ducting losses. Other disadvantages include the loss of volume to the ducting and impeller and the danger of fouling of the impeller. Water jets have been used in several of the U.S. Navy’s hydrofoil and surface-effect vehicles. Vertical-Axis Propellers

There are two types of vertical-axis propulsor systems consisting of one or two vertical-axis rotors located underwater at the stern. Rotor disks are flush with the shell plating and have five to eight streamline, spadelike, vertical impeller blades fitted near the periphery of the disks. The blades feather during rotation of the disk to produce a maximum thrust effect in any direction desired. In the Kirsten-Boeing system the blades are interlocked by gears so that each blade makes a half revolution about its axis for each revolution of the disk. The blades of the Voith-Schneider system make a complete revolution about their own axis for each revolution of the disk. A bevel gear must be used to transmit power from the conventional horizontal drive shaft to the horizontal disk; therefore, limitations exist on the maximum power that can be transmitted. Although the propulsor is 30 to 40 percent less efficient than the screw propeller, it has obvious maneuverability advantages. Propulsors of this type have also been used at the bow to assist in maneuvering.

Other Thrust Devices

Pump Jet In a pump-jet arrangement, the rotating impeller is external to the hull with fixed guide vanes ahead and/or astern of it; the whole unit is enclosed in a duct or long shroud ring. The duct diameter increases from the entrance to the impeller so that the velocity falls and the pressure increases. Thus the impeller diameter is larger, thrust loading less, and the efficiency higher; the incidence of cavitation and noise is delayed. A penalty is paid, however, for the resistance of the duct.

Kort Nozzles In the Kort nozzle system, the screw propeller operates in a ring or nozzle attached to the hull at the top. The longitudinal sections are of airfoil shape, and the length of the nozzle is generally about one-half its diameter. Unlike the pump-jet shroud ring, the Kort nozzle entrance is much bigger than the propeller, drawing in more water than the open propeller and achieving greater thrust. Because of the acceleration of the water into the nozzle, the pressure inside is less; hence a forward thrust is exerted on the nozzle and the hull. The greatest advantage is in a tug, pulling at rest. The free-running speed is usually less with the nozzle than without. In some tugs and rivercraft, the whole nozzle is pivoted and becomes a very efficient steering mechanism.

Tandem and Contrarotating Propellers Two or more propellers arranged on the same shaft are used to divide the increased loading factor when the diameter of a propeller is restricted. Propellers turning in the same direction are termed tandem, and in opposite directions, contrarotating. In tandem, the rotational energy in the race from the forward propeller is augmented by the after one. Contrarotating propellers work on coaxial, contrary-turning shafts so that the after propeller may regain the rotational energy from the forward one. The after propeller is of smaller diameter to fit the contracting race and has a pitch designed for proper power absorption. Such propellers have been used for years on torpedoes to prevent the torpedo body from rotating. Hydrodynamically, the advantages of contrarotating propellers are increased propulsive efficiency, improved vibration characteristics, and higher blade frequency. Disadvantages are the complicated gearing, coaxial shafting, and sealing problems.

Supercavitating Propellers When cavitation covers the entire back of a propeller blade, an increase of rpm cannot further reduce the pressure at the back but the pressure on the face continues to increase as does the total thrust, though at a slower rate than before cavitation began. Advantages of such fully cavitating propellers are an absence of back erosion and less vibration. Although the characteristics of such propellers were determined by trial and error, they have long been used on high-speed racing motor boats. The blade section design must ensure clean separation of flow at the leading and trailing edges and provide good lift-drag ratios for high efficiency. Introducing air to the back of the blades (ventilated propeller) will ensure full cavitation and also enable use at lower speeds.

Partially Submerged Propellers The appendage drag presented by a propeller supported below high-speed craft, such as planing boats, hydrofoil craft, and surface-effect ships, led to the development of partially submerged propellers. Although vibration and strength problems, arising from the cyclic loading and unloading of the blades as they enter and emerge from the air-water interface, remain to be solved, it has been demonstrated that efficiencies in partially submerged operation, comparable to fully submerged noncavitating operation, can be achieved. The performance of these propellers must be considered over a wide range of submergences.

Outboard Gasoline Engine Outboard gasoline engines of 1 to 300 bhp (0.75 to 225 kWB), combining steering and propulsion, are popular for small pleasure craft. Maximum speeds of 3,000 to 6,000 r/min are typical, as driven by the propeller horsepower characteristics (see Sec. 9).

PROPULSION TRANSMISSION

In modern ships, only large-bore, slow-speed diesel engines are directly connected to the propeller shaft. Transmission devices such as mechanical speed-reducing gears or electric drives (generator/motor transmissions) are required to convert the relatively high rpm of a compact, economical prime mover to the relatively low propeller rpm necessary for a high propulsive efficiency. In the case of steam turbines, medium- and high-speed diesel engines, and gas turbines, speed reduction gears are used. Gear ratios vary from relatively low values for medium-speed diesels up to approximately 50 : 1 for a compact turbine design. Where the prime mover is unidirectional, the drive mechanism must also include a reversing mechanism, unless a CRP is used. Reduction Gears

Speed reduction is usually obtained with reduction gears. The simplest arrangement of a marine reduction gear is the single-reduction, single-input type, i.e., one pinion meshing with a gear (Fig. 11.3.21). This arrangement is used for connecting a propeller to a diesel engine or to an electric motor but is not used for propelling equipment with a turbine drive.

Fig. 11.3.22 Double-reduction, double-input, articulated gear. (From “Marine Engineering,” SNAME, 1992.)


Fig. 11.3.23 Double-reduction, double-input, locked-train gear. (From “Marine Engineering,” SNAME, 1992.)

Electric Drive

Electric propulsion drive is the principal alternative to direct- or geareddrive systems. The prime mover drives a generator or alternator which in turn drives a propulsion motor which is either coupled directly to the propeller or drives the propeller through a low-ratio reduction gear. Among its advantages are the ease and convenience by which propeller speed and direction are controllable; the freedom of installation arrangement offered by the electrical connection between the generator and the propulsion motor; the flexibility of power use when not used for propulsion; the convenience of coupling several prime movers to the propeller without mechanical clutches or couplings; the relative simplicity of controls required to provide reverse propeller rotation when the prime mover is unidirectional; and the speed reduction that can be provided between the generator and the motor, hence between the prime mover and the propeller, without mechanical speed-reducing means. The disadvantages are the inherently higher first cost, increased weight and space, and the higher transmission losses of the system. The advent of superconductive electrical machinery suitable for ship propulsion, however, has indicated order-of-magnitude savings in weight, greatly reduced volume, and the distinct possibility of lower costs. Superconductivity, a phenomenon occurring in some materials at low temperatures, is characterized by the almost complete disappearance of electrical resistance. Research and development studies have shown that size and weight reductions by factors of 5 or more are possible. With successful development of superconductive electrical machinery and resolution of associated engineering problems, monohulled craft, such as destroyers, can benefit greatly from the location flexibility and maneuverability capabilities of superconductive electric propulsion, but the advantages of such a system will be realized to the greatest extent in the new high-performance ships where mechanical-drive arrangement is extremely complex. In general, electric propulsion drives are employed in marine vehicles requiring a high degree of maneuverability such as ferries, icebreakers, and tugs; in those requiring large amounts of special-purpose power such as fireboats, and large tankers; in those utilizing nonreversing, high-speed, and multiple prime movers; and in deep-submergence vehicles. Diesel-electric drive is ideally adapted to bridge control. Reversing

Fig. 11.3.21 Single-reduction, single-input gear. (From “Marine Engineering,” SNAME, 1992.)

The usual arrangement for turbine-driven ships is the double-reduction, double-input, articulated type of reduction gear (Fig. 11.3.22). The two input pinions are driven by the two elements of a cross-compound turbine. The term articulated applies because a flexible coupling is generally provided between the first reduction or primary gear wheel and the second reduction or secondary pinion. The locked-train type of double-reduction gear has become standard for high-powered naval ships and, because it minimizes the total weight and size of the assembly, is gaining increased popularity for higherpowered merchant ships (Fig. 11.3.23).

Reversing may be accomplished by stopping and reversing a reversible engine, as in the case of many reciprocating engines, or by adding reversing elements in the prime mover, as in the case of steam turbines. It is impracticable to provide reversing elements in gas turbines; hence a reversing capability must be provided in the transmission system or in the propulsor itself. Reversing reduction gears for such transmissions are available up to quite substantial powers, and CRP propellers are also used with diesel or gas-turbine drives. Electric drives provide reversing by dynamic braking and by energizing the electric motor in the reverse direction. Waterjets reverse by inserting reversing thrust buckets into the discharge stream.

Marine Shafting

A main propulsion shafting system consists of the equipment necessary to transmit the rotative power from the main propulsion engines to the


Fig. 11.3.24 Shafting arrangement with strut bearings. (From “Marine Engineering,” SNAME, 1992.)

Fig. 11.3.25 Shafting arrangement without strut bearings. (From “Marine Engineering,” SNAME, 1992.)

propulsor; support the propulsor; transmit the thrust developed by the propulsor to the ship’s hull; safely withstand transient operating loads (e.g., high-speed maneuvers, quick reversals); be free of deleterious modes of vibration; and provide reliable operation. Figure 11.3.24 is a shafting arrangement typical of multishaft ships and single-shaft ships with transom sterns. The shafting must extend outboard a sufficient distance for adequate clearance between the propeller and the hull. Figure 11.3.25 is typical of single-screw merchant ships. The shafting located inside the ship is line shafting. The outboard section to which the propeller is secured is the propeller shaft or tail shaft. The section passing through the stern tube is the stern tube shaft unless the propeller is supported by it. If there is a section of shafting between the propeller and stern tube shafts, it is an intermediate shaft. The principal dimensions, kind of material, and material tests are specified by the classification societies for marine propulsion shafting. The American Bureau of Shipping Rules for Steel Vessels specifies the minimum diameter of propulsion shafting, tail shaft bearing lengths, bearing liner and flange coupling thicknesses, and minimum fitted bolt diameters. Tail or propeller shafts are of larger diameter than the line or tunnel shafting because of the propeller-weight-induced bending moment; also, inspection during service is possible only when the vessel is drydocked. Propellers are fitted to shafts over 6 in (0.1524 m) diameter by a taper fit. Aft of the propeller, the shaft is reduced to 0.58 to 0.68 of its specified diameter under the liner and is fitted with a relatively fine thread, nut, and keeper. Propeller torque is taken by a long flat key, the width and total depth of which are, respectively, about 0.21 and 0.11 times the shaft diameter under the liner. The forward end of the key slot in the shaft should be tapered off in depth and terminate well aft of the large end of the taper to avoid a high concentrated stress at the end of the keyway. Every effort is made to keep saltwater out of contact with the steel tail shaft so as to prevent failure from corrosion fatigue. A cast-iron, cast-steel, or wrought-steel-weldment stern tube is rigidly fastened to the hull. In single-screw ships it supports the propeller shaft and is normally provided with a packing gland at the forward end as seawater lubricates the stern-tube bearings. The aft stern-tube bearing is made four diameters in length; the forward stern-tube bearing is

much shorter. A bronze liner, 3⁄4 to 1 in (0.0191 to 0.0254 m) thick, is shrunk on the propeller shaft to protect it from corrosion and to give a good journal surface. Lignum-vitae inserts, epoxy applications, or rubber linings have been used for stern-tube bearings; the average bearing pressure is about 30 lb/in2 (206,842 N/m2). Rubber and phenolic compound bearings, having longitudinal grooves, are also extensively used. White-metal oil-lubricated bearings and, in Germany, roller-type stern bearings have been fitted in a few installations. The line or tunnel shafting is laid out, in long equal lengths, so that, in the case of a single-screw ship, withdrawal of the tail shaft inboard for inspection every several years will require only the removal of the next inboard length of shafting. For large outboard or water-exposed shafts of twin-screw ships, a bearing spacing up to 30 shaft diameters has been used. A greater ratio prevails for steel shafts of power boats. Bearings inside the hull are spaced closer—normally under 15 diameters if the shaft is more than 6 in (0.1524 m) diameter. Usually, only the bottom half of the bearing is completely white-metaled; oil-wick lubrication is common, but oil-ring lubrication is used in high-class installations. Bearings are used to support the shafting in essentially a straight line between the main propulsion engine and the desired location of the propeller. Bearings inside the ship are most popularly known as lineshaft bearings, steady bearings, or spring bearings. Bearings which support outboard sections of shafting are stern-tube bearings if they are located in the stern tube and strut bearings when located in struts. The propeller thrust is transmitted to the hull by means of a main thrust bearing. The main thrust bearing may be located either forward or aft of the slow-speed gear. HIGH-PERFORMANCE SHIP SYSTEMS

High-performance ships have spawned new hull forms and propulsion systems which were relatively unknown just a generation ago. Because of their unique characteristics and requirements, conventional propulsors are rarely adequate. The prime thrust devices range from those using water, to water-air mixtures, to large air propellers. The total propulsion power required by these vehicles is presented in general terms in the Gabrielli and Von Karman plot of Fig. 11.3.26. Basically, there are those vehicles obtaining lift by displacement and those obtaining lift from a wing or foil moving through the fluid. In


Fig. 11.3.27 Some foil arrangements and sections.

Fig. 11.3.26 Lift-drag ratio versus calm weather speed for various vehicles.

order for marine vehicles to maintain a reasonable lift-to-drag ratio over a moderate range of speed, the propulsion system must provide increased thrust at lower speeds to overcome low-speed wave drag. The weight of any vehicle is converted into total drag and, therefore, total required propulsion and lift power. Lightweight, compact, and efficient propulsion systems to satisfy extreme weight sensitivity are achieved over a wide range of power by the marinized aircraft gas-turbine engine. Marinization involves addition of a power turbine unit, incorporation of air inlet filtration to provide salt-free air, addition of silencing equipment on both inlet and exhaust ducts, and modification of some components for resistance to salt corrosion. The shaft power delivered is then distributed to the propulsion and lift devices by lightweight shaft and gear power transmission systems. A comparison of high-performance ship propulsion system characteristics is given in Table 11.3.3.

ventilation of foils have given rise to foil system configurations defined in Fig. 11.3.27. The hydrofoil craft has three modes of operation: hullborne, takeoff, and foilborne. The thrust and drag forces involved in hydrofoil operation are shown in Fig. 11.3.28. Design must be a compromise between the varied requirements of these operating conditions. This may be difficult, as with propeller design, where maximum thrust is often required at takeoff speed (normally about one-half the operating speed).

Hydrofoil Craft

Under foilborne conditions, the support of the hydrofoil craft is derived from the dynamic lift generated by the foil system, and the craft hull is lifted clear of the water. Special problems relating to operation and directly related to the conditions presented by waves, high fluid density, surface piercing of foils and struts, stability, control, cavitation, and

Fig. 11.3.28 Thrust and drag forces for a typical hydrofoil.

Table 11.3.3  U.S. Navy High-Performance Ship Propulsion System Characteristics

Ship: Hydrofoil, PHM, 215 tons (2.1 MN), 50 knots
  Engine: Foilborne: one LM-2500 GT, 22,000 bhp (15,405 kWB), 3,600 r/min
  Propulsor/lift: Foilborne: one 2-stage axial-flow waterjet, twin aft strut inlets

Ship: Surface-effect ship, SES-200, 195 tons (1.94 MN), 30+ knots
  Engine: Propulsion: two 16V-149TI diesels, 1,600 bhp (1,193 kWB); Lift: two 8V-92TI diesels, 800 bhp (597 kWB)
  Propulsor/lift: Two waterjet units

Ship: Air-cushion vehicle, LCAC, 150 tons (1.49 MN), 50 knots
  Engine: Propulsion and lift: four TF-40B GT, 3,350 bhp (2,498 kWB), 15,400 r/min
  Propulsor/lift: Two ducted-air propellers, 12 ft (3.66 m) dia., 1,250 r/min; four double-entry centrifugal lift fans, 1,900 r/min

Ship: SWATH, SSP, 190 tons (1.89 MN), 25 knots
  Engine: Propulsion: two T-64 GT, 3,200 bhp (2,386 kWB), 1,000 r/min at output of attached gearbox
  Propulsor/lift: Two subcavitating controllable-pitch propellers, 450 r/min; two double-inlet centrifugal lift fans


A propeller designed for maximum efficiency at takeoff will have a smaller efficiency at design speeds, and vice versa. This is complicated by the need for supercavitating propellers at operating speeds over 50 knots. Two sets of propellers may be needed in such a case, one for hullborne operation and the other for foilborne operation. The selection of a surface-piercing or a fully submerged foil design is determined from specified operating conditions and the permissible degree of complexity. A surface-piercing system is inherently stable, as a deviation from the equilibrium position of the craft results in corrective lift forces. Fully submerged foil designs, on the other hand, provide the best lift-drag ratio but are not stable by themselves and require a depth-control system for satisfactory operation. Such a control system, however, permits operation of the submerged-foil design in higher waves. Surface-Effect Vehicles (SEV)

The air-cushion-supported surface-effect ship (SES) and air-cushion vehicle (ACV) show great potential for high-speed and amphibious operation. Basically, the SEV rides on a cushion of air generated and maintained in the space between the vehicle and the surface over which it moves or hovers. In the SES, the cushion is captured by rigid sidewalls and flexible skirts fore and aft. The ACV has a flexible seal or skirt all around. Figure 11.3.29 illustrates two configurations of the surface-effect principle.

Fig. 11.3.30 Comparison of power requirements: SES versus ACV.

strut by a group of three wide-belt chain drives with speed reduction to the propellers. By contrast, the T-AGOS 19 series SWATH vessels employ diesel-electric drive, with both motors and propulsors in the lower hull. CARGO SHIPS

The development of containerization in the 1950s sparked a growth in ship size. More efficient cargo-handling methods and the desire to reduce transit time led to the emergence of the high-speed container vessel.

Fig. 11.3.29 Two configurations of the surface-effect principle.

The Container Vessel

Since the ACV must rise above either land or water, the propulsors use only air for thrust, requiring a greater portion of the total power allocated to lift than in the SES design as shown in Fig. 11.3.30. While SES technology has greatly improved the capability of amphibious landing vehicles, it is rapidly being developed for highspeed large cargo vessels. Small Water-Plane Area Twin Hull

The SWATH, configured to be relatively insensitive to seaway motions, gains its buoyant lift from two parallel submerged, submarinelike hulls which support the main hull platform through relatively small vertical struts. The small hull area at the waterline lowers the response of the hull to waves. The same small waterplane area makes the SWATH hull form sensitive to changes in weight and load distribution. In this configuration, a propulsion arrangement choice must be with regard to the location of the main engines. In the U.S. Navy’s SSP, power from two independently operated gas turbines is transmitted down each aft

Because of low fuel costs in the 1950s and 1960s, early container vessel design focused on higher speed and container capacity. Rising fuel costs shifted emphasis toward ship characteristics which enhanced reduced operating costs, particularly as affected by fuel usage and trade route capacity. By the early 1980s, ships were generally sized for TEU (twenty-foot-equivalent-unit) capacity less than 2,500 and speeds of about 20 knots, lower if fuel was significant in the overall operating costs. Largely a function of the scale of trade route capacity, container ships carrying in excess of 5,000 TEUs are common today. The larger the trade capacity, the larger the containerships operating the route. However, many trade route scales do not support even moderately sized containership service. The on deck and under deck container capacity varies with the type of vessel, whether a lift on/lift off vessel, a bulk carrier equipped for containers, or a roll on/roll off vessel. Extensive studies have been conducted to optimize the hull form and improve the operating efficiency for the modern containership.

11.4 AERONAUTICS by M. W. M. Jenkins REFERENCES: National Advisory Committee for Aeronautics, Technical Reports (designated NACA-TR with number), Technical Notes (NACA-TN with number), and Technical Memoranda (NACA-TM with number). British Aeronautical Research Committee Reports and Memoranda (designated Br. ARC-R & M with number). Ergebnisse der Aerodynamischen Versuchsanstalt zu Göttingen. Diehl, “Engineering Aerodynamics,” Ronald. Reid, “Applied Wing Theory,” McGraw-Hill.

Durand, “Aerodynamic Theory,” Springer. Prandtl-Tietjens, “Fundamentals of Hydro- and Aeromechanics,” and “Applied Hydro- and Aeromechanics,” McGraw-Hill. Goldstein, “Modern Developments in Fluid Dynamics,” Oxford. Millikan, “Aerodynamics of the Airplane,” Wiley. Von Mises, “Theory of Flight,” McGraw-Hill. Hoerner, “Aerodynamic Drag,” Hoerner. Glauert, “The Elements of Airfoil and Airscrew Theory,” Cambridge. Milne-Thompson, “Theoretical

Hydrodynamics,” Macmillan. Munk, “Fluid Dynamics for Aircraft Designers,” and “Principles of Aerodynamics,” Ronald. Abraham, “Structural Design of Missiles and Spacecraft,” McGraw-Hill. “U.S. Standard Atmosphere, 1976,” U.S. Government Printing Office.


the fluid. The velocity range below the local speed of sound is called the subsonic regime. Where the velocity is above the local speed of sound, the flow is said to be supersonic. The term transonic refers to flows in which both subsonic and supersonic regions are present. The hypersonic regime

is that speed range usually in excess of five times the speed of sound.

DEFINITIONS Aeronautics is the science and art of flight and includes the operation of heavier-than-air aircraft. An aircraft is any weight-carrying device

designed to be supported by the air, either by buoyancy or by dynamic action. An airplane is a mechanically driven fixed-wing aircraft, heavier than air, which is supported by the dynamic reaction of the air against its wings. An aircraft supported by buoyancy is called an airship, which can be either a balloon, a hot-air type, or of semirigid or rigid construction. These airships can be free floating or, if of semirigid or rigid design, propelled by fixed or vectoring propeller or jet propulsion. Ascent or descent is achieved primarily through helium gas or hot air, venting, and by the use of disposable ballast. A helicopter is a kind of aircraft lifted and moved by a large propeller mounted horizontally above the fuselage. It differs from the autogiro in that this propeller is turned by motor power and there is no auxiliary propeller for forward motion. A ground-effect machine (GEM) is a heavier-than-air surface vehicle which operates in close proximity to the earth’s surface (over land or water), never touching except at rest, being separated from the surface by a cushion or film of air, however thin, and depending entirely upon aerodynamic forces for propulsion and control. Aerodynamics is the branch of aeronautics that treats of the motion of air and other gaseous fluids and of the forces acting on solids in motion relative to such fluids. Aerodynamics falls into velocity ranges, depending upon whether the velocity is below or above the local speed of sound in

STANDARD ATMOSPHERE

The U.S. Standard Atmosphere, 1976 of Table 11.4.1a is the work of the U.S. Committee on Extension to the Standard Atmosphere (COESA), established in 1953, which led to the 1958, 1962, 1966, and 1976 versions of this standard. It is an idealized, steady-state representation of the atmosphere and extends to approximately 621.4 miles (1,000 kilometres) above the surface. It is assumed to rotate with the earth. It is the same as COESA’s “U.S. Standard Atmosphere—1962” below approximately 32 miles (51 km) but replaces this standard above this altitude. Figure 11.4.1 shows the variation of base temperature K with altitude Z up to approximately 76 miles (122 km). Above 281,952 feet, four distinct altitude zones are identified (Table 11.4.1b) in which

Table 11.4.1a  U.S. Standard Atmosphere, 1976

Altitude h, ft*   Temp T, °F†   Pressure ratio, p/p0   Density ratio, ρ/ρ0   (ρ0/ρ)^0.5   Speed of sound Vs, ft/s‡
0                  59.00        1.0000                 1.0000                1.000        1,116
5,000              41.17        0.8320                 0.8617                1.077        1,097
10,000             23.34        0.6877                 0.7385                1.164        1,077
15,000              5.51        0.5643                 0.6292                1.261        1,057
20,000            -12.62        0.4595                 0.5328                1.370        1,036
25,000            -30.15        0.3711                 0.4481                1.494        1,015
30,000            -47.99        0.2970                 0.3741                1.635          995
35,000            -65.82        0.2353                 0.3099                1.796          973
36,089            -69.70        0.2234                 0.2971                1.835          968
40,000            -69.70        0.1851                 0.2462                2.016          968
45,000            -69.70        0.1455                 0.1936                2.273          968
50,000            -69.70        0.1145                 0.1522                2.563          968
55,000            -69.70        0.09001                0.1197                2.890          968
60,000            -69.70        0.07078                0.09414               3.259          968
65,000            -69.70        0.05566                0.07403               3.675          968
65,800            -69.70        0.05356                0.07123               3.747          968
70,000            -67.30        0.04380                0.05789               4.156          971
75,000            -64.55        0.03452                0.04532               4.697          974
80,000            -61.81        0.02725                0.03553               5.305          977
85,000            -59.07        0.02155                0.02790               5.986          981
90,000            -56.32        0.01707                0.02195               6.970          984
95,000            -53.58        0.01354                0.01730               7.600          988
100,000           -50.84        0.01076                0.01365               8.559          991

* × 0.3048 = metres.   † (°F − 32)/1.8 = °C.   ‡ × 0.3048 = m/s.

Fig. 11.4.1  Temperature as a function of altitude. (COESA-1976.)

Table 11.4.1b  Reference Levels and Function Designations for Each of Four Zones of the Upper Atmosphere

Base altitude Z, km   Base altitude, ft   Base temperature, K   Base temperature, °F   Form of function relating K to Z   Base value ρ/ρ0   Base value p/p0
86                    281,952             186.9                 -123.3                 Linear-constant                    5.680 × 10⁻⁶      3.685 × 10⁻⁶
91                    298,320             186.9                 -123.3                 Elliptical                         2.335 × 10⁻⁶      1.518 × 10⁻⁶
110                   361,152             240.0                 -27.7                  Linear-increasing                  0.0793 × 10⁻⁶     0.0701 × 10⁻⁶
120                   393,888             360.0                 188.3                  Exponential                        0.0181 × 10⁻⁶     0.0251 × 10⁻⁶
1000                  3,280,992           1000.0                1340.3                 —                                  2.907 × 10⁻¹⁵     7.416 × 10⁻¹⁴


temperature-altitude relationships are defined relative to the base altitude of each zone. These four zones constitute the current standard atmosphere in terms of base temperature K from 53.4 miles to approximately 621.4 miles (1,000 km). For further definition of the standard atmosphere in this four-zone region the reader is referred to U.S. Department of Commerce document “U.S. Standard Atmosphere, 1976,” ADA035728. The assumed sea-level conditions are: pressure, p0 = 29.92 in Hg (760 mm Hg) = 2,116.22 lb/ft²; mass density, ρ0 = 0.002378 slug/ft³ (0.001225 g/cm³); T0 = 59°F (15°C).
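As a rough check on Table 11.4.1a, the two lowest layers can be computed from the hydrostatic and perfect-gas relations with the sea-level values quoted above. The following Python sketch (the constants and function name are illustrative conveniences, not part of the standard) uses a linear lapse rate to the 36,089-ft tropopause and an isothermal layer above it; it is a simplified two-layer model, not the full 1976 standard.

```python
# Minimal sketch, assuming a linear-lapse troposphere and an isothermal layer above.
import math

G = 32.174          # ft/s^2
R_AIR = 1716.5      # ft*lb/(slug*degR)
T0_R = 518.67       # 59 degF expressed in degR
LAPSE = 0.00356616  # degR per ft in the troposphere
H_TROP = 36089.0    # tropopause altitude, ft

def std_atmosphere(h_ft):
    """Return (T in degF, p/p0, rho/rho0) for altitudes up to about 65,800 ft."""
    if h_ft <= H_TROP:
        T = T0_R - LAPSE * h_ft
        p_ratio = (T / T0_R) ** (G / (LAPSE * R_AIR))
    else:
        T = T0_R - LAPSE * H_TROP                      # isothermal layer, -69.7 degF
        p_trop = (T / T0_R) ** (G / (LAPSE * R_AIR))
        p_ratio = p_trop * math.exp(-G * (h_ft - H_TROP) / (R_AIR * T))
    rho_ratio = p_ratio * T0_R / T                     # perfect-gas relation
    return T - 459.67, p_ratio, rho_ratio

print(std_atmosphere(35000.0))   # approximately (-65.8, 0.235, 0.310)
```

For 35,000 ft this reproduces the tabulated -65.8°F, p/p0 ≈ 0.235, and ρ/ρ0 ≈ 0.310.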

UPPER ATMOSPHERE

High-altitude atmospheric data have been obtained directly from balloons, sounding rockets, and satellites and indirectly from observations of meteors, aurora, radio waves, light absorption, and sound effects. At relatively low altitudes, the earth’s atmosphere is, for aerodynamic purposes, a uniform gas. Above 250,000 ft, day and night standards differ because of dissociation of oxygen by solar radiation. This difference in density is as high as 35 percent, but it is usually aerodynamically negligible above 250,000 ft since forces there become less than 0.05 percent of their sea-level value for the same velocity.
Temperature profile of the COESA atmosphere is given in Fig. 11.4.1. From these data, other properties of the atmosphere can be calculated.
Speed in aeronautics may be given in knots (now standard for U.S. Armed Forces and FAA), miles per hour, feet per second, or metres per second. The international nautical mile = 6,076.1155 ft (1,852 m). The basic relations are: 1 knot = 0.5144 m/s = 1.6877 ft/s = 1.1508 mi/h. For additional conversion factors see Sec. 1. Higher speeds are often given as Mach number, the ratio of the particular speed to the speed of sound in the surrounding air. For additional data see “Supersonic and Hypersonic Aerodynamics” below.
Axes  The forces and moments acting on an airplane (and the resultant velocity components) are referred to a set of three mutually perpendicular axes having the origin at the airplane center of gravity (cg) and moving with the airplane. Three sets of axes are in use. The basic difference in these is in the direction taken for the longitudinal, or x, axis, as follows:
Wind axes: The x axis lies in the direction of the relative wind. This is the system most commonly used, and the one used in this section. It is shown in Fig. 11.4.2.
Body axes: The x axis is fixed in the body, usually parallel to the thrust line.
Stability axes: The x axis points in the initial direction of flight. This results in a simplification of the equations of motion.

coefficients are: lift CL = L/qS; drag CD = D/qS; side force CY = Y/qS; where q is the dynamic pressure ½ρV², ρ = air mass density, and V = air speed. Examples of moment coefficients are Cm = M/qSc; roll Cl = L/qSb; and yaw Cn = N/qSb; where c = mean wing chord and b = wing span.
Section Coefficients  Basic test data on wing sections are usually given in the form of section lift coefficient Cl and section drag coefficient Cd. These apply directly to an infinite aspect ratio or to two-dimensional flow, but aspect-ratio and lift-distribution corrections are necessary in applying them to a finite wing. For a wing having an elliptical lift distribution, CL = (π/4)Cl, where Cl is the centerline value of the local lift coefficient.
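As a small illustration of these definitions (not taken from the handbook), the following Python sketch converts dimensional forces measured on a wing into the absolute coefficients just defined; the numerical values are invented test data.

```python
# Illustrative only: dimensional forces to absolute coefficients (slug-ft-s units).
def dynamic_pressure(rho_slug_ft3, v_ft_s):
    return 0.5 * rho_slug_ft3 * v_ft_s ** 2            # q = 1/2 rho V^2, lb/ft^2

def force_coefficient(force_lb, q, area_ft2):
    return force_lb / (q * area_ft2)                   # CL = L/qS, CD = D/qS, CY = Y/qS

def moment_coefficient(moment_ft_lb, q, area_ft2, chord_ft):
    return moment_ft_lb / (q * area_ft2 * chord_ft)    # Cm = M/qSc

q = dynamic_pressure(0.002378, 200.0)     # sea level at 200 ft/s -> about 47.6 lb/ft^2
CL = force_coefficient(4000.0, q, 180.0)  # about 0.47 for 4,000 lb of lift on 180 ft^2
```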

SUBSONIC AERODYNAMIC FORCES

When an airfoil is moved through the air, the motion produces a pressure at every point of the airfoil which acts normal to the surface. In addition, a frictional force tangential to the surface opposes the motion. The sum of these pressure and frictional forces gives the resultant force R acting on the body. The point at which the resultant force acts is defined as the center of pressure, c.p. The resultant force R will, in general, be inclined to the airfoil and the relative wind velocity V. It is resolved (Fig. 11.4.3) either along wind axes into
L = lift = component normal to V
D = drag or resistance = component along V
or along body axes into
N = normal force = component perpendicular to airfoil chord
T = tangential force = component along airfoil chord
Instead of specifying the center of pressure, it is convenient to specify the moment of the air forces about the so-called aerodynamic center, a.c. This point lies at a distance a (about a quarter-chord length) back of the leading edge of the airfoil and is defined as the point about which the moment of the air forces remains constant when the angle of attack α is changed. Such a point exists for every airfoil. The force R acting at the c.p. is equivalent to the same force acting at the a.c. plus a moment equal to that force times the distance between the c.p. and the a.c. (see Fig. 11.4.3). The location of the aerodynamic center, in terms of the chord c and the section thickness t, is given for NACA 4- and 5-digit series airfoils approximately by
a/c = 0.25 − 0.40(t/c)²

Absolute Coefficients Aerodynamic force and moment data are usually presented in the form of absolute coefficients. Examples of force

Fig. 11.4.2 Wind axes.

Fig. 11.4.3 Forces acting on an airfoil. (a) Actual forces; (b) equivalent forces through the aerodynamic center plus a moment.


The distance xc.p. from the leading edge to the center of pressure, expressed as a fraction of the chord, is, in terms of the moment Mc/4 about the quarter-chord point,
Mc/4 = (1/4 − xc.p.)·c·N     or     xc.p. = 1/4 − Mc/4/(cN)

From dimensional analysis it can be seen that the air force on a body of length dimension l moving with velocity V through air of density ρ can be expressed as F = w·ρV²l², where w is a coefficient that depends upon all the dimensionless factors of the problem. In the case of a wing these are:
1. Angle of attack α, the inclination between the chord line and the velocity V
2. Aspect ratio A = b/c, where b is the span and c the mean chord of the wing
3. Reynolds number Re = ρVl/μ, where μ is the coefficient of dynamic viscosity of air
4. Mach number V/Vs, where Vs is the velocity of sound
5. Relative surface roughness
6. Relative turbulence
The dependence of the force coefficient w upon α and A can be theoretically determined; the variation of w with the other parameters must be established experimentally, i.e., by model tests.

AIRFOILS

Applying Bernoulli’s equation to the flow around a body, if p represents the static pressure, i.e., the atmospheric pressure in the undisturbed air, and if V is superposed as in Fig. 11.4.4 and if p1, V1 represent the pressure and velocity at any point 1 at the surface of the body,
p + ½ρV² = p1 + ½ρV1²
The maximum pressure occurs at a point s on the body at which the velocity is zero. Such a point is defined as the stagnation point. The maximum pressure increase occurs at this point and is
ps − p = ½ρV²
This is called the stagnation pressure, or dynamic pressure, and is denoted by q. It is customary to express all aerodynamic forces in terms of ½ρV², hence

angle of attack corresponding to zero lift; η ≈ 1 − 0.64(t/c), where t is the airfoil thickness. In a wing of finite aspect ratio, the circulation flow around the wing creates a strong vortex trailing downstream from each wing tip (Fig. 11.4.5). The effect of this is that the direction of the resultant velocity at the wing is tilted downward by an induced angle of attack αi = wi/V (Fig. 11.4.6). If friction is neglected, the resultant force R is now tilted to the rear by this same angle αi. The lift force L is approximately the same as R. In addition, there is also a drag component Di, called the induced drag, given by
Di = L tan αi = L·wi/V
For a given geometrical angle of attack α the effective angle of attack has been reduced by αi, and thus CL = 2πη(α − α0 − αi). According to the Lanchester-Prandtl theory, for wings having an elliptical lift distribution

αi = wi/V = CL/(πA)   rad        Di = L²/(πb²q)        CDi = Di/qS = CL²/(πA)
where b = span of wing and A = b²/S = aspect ratio of wing. These results also apply fairly well to wings not differing much from the elliptical shape. For square tips, correction factors are required:
αi = (CL/πA)(1 + τ)   where τ = 0.05 + 0.02A, for A < 12
CDi = (CL²/πA)(1 + σ)   where σ = 0.01A − 0.01
These formulas are the basis for transforming the characteristics of rectangular wings from an aspect ratio A1 to an aspect ratio A2:
α2 = α1 + (57.3CL/π)[(1 + τ2)/A2 − (1 + τ1)/A1]   deg
CD2 = CD1 + (CL²/π)[(1 + σ2)/A2 − (1 + σ1)/A1]
For elliptical wings the values of τ and σ are zero. Most wing-section data are given in terms of aspect ratios 6 and ∞. For other values, the preceding formulas must be used.
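A brief sketch of these relations follows (Python; the function names and example numbers are illustrative assumptions, with τ and σ taken as zero for elliptical loading unless supplied).

```python
# Sketch of the Lanchester-Prandtl induced-drag and aspect-ratio transformation relations.
import math

def induced_drag_coefficient(CL, A, sigma=0.0):
    return CL ** 2 * (1.0 + sigma) / (math.pi * A)        # CDi = CL^2 (1+sigma)/(pi A)

def induced_angle_deg(CL, A, tau=0.0):
    return math.degrees(CL * (1.0 + tau) / (math.pi * A))  # alpha_i converted to degrees

def transform_CD(CD1, CL, A1, A2, sigma1=0.0, sigma2=0.0):
    # CD2 = CD1 + (CL^2/pi)[(1+sigma2)/A2 - (1+sigma1)/A1]
    return CD1 + CL ** 2 / math.pi * ((1 + sigma2) / A2 - (1 + sigma1) / A1)

# Example: section data at A = 6 rescaled to A = 8, elliptical loading assumed.
print(induced_drag_coefficient(0.8, 6.0))   # about 0.034
print(transform_CD(0.045, 0.8, 6.0, 8.0))   # about 0.0365
```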

For the case of a wing, the forces and moments are expressed as
Lift = L = CL·½ρV²S = CL·qS
Drag = D = CD·½ρV²S = CD·qS
Moment = M = CM·½ρV²Sc = CM·qSc

Fig. 11.4.5 Vortex formation at wing tips.

Fig. 11.4.4  Fluid flow around an airfoil.


F = ½wρV²l² = q·w·l²

where S = wing area and c = wing chord. The lift produced can be determined from the intensity of the circulatory flow, or circulation Γ, by the relation L′ = ρVΓ, where L′ is the lift per unit width of wing. In a wing of infinite aspect ratio, the flow is two-dimensional and the lift reaction is at right angles to the line of relative motion. The lift coefficient is a function of the angle of attack α, and by mathematical analysis CL = 2π sin α, which for small angles becomes CL = 2πα. Experiments show that CL = 2πη(α − α0), where α0 is the


Fig. 11.4.6 Induced angle of attack.


Characteristics of Wings  Wings have airfoil sections that can be constant from wing root to tip, or sections that vary across the wing to tailor the aerodynamic properties of the wing. Wing aerodynamic properties are expressed in terms of the dimensionless coefficients CL, CD, and CM. Airfoil coefficients, termed section characteristics, are usually denoted Cl, Cd, and Cm, and refer to wings of infinite aspect ratio. Figure 11.4.10 presents tunnel test results from early NACA studies on 6 percent and 12 percent thick two-dimensional sections. The lift coefficient CL is a linear function of the angle of attack up to a critical angle called the stalling angle (Fig. 11.4.7). The maximum lift coefficient CLmax which can be reached is one of the important characteristics of a wing because it determines the landing speed of the airplane.


plotted for a wing. The difference CD  CDi  CD0, the profile-drag coefficient. Among the desirable characteristics of a wing is a small value of the minimum profile-drag coefficient and a large value of CL/CD (L/D). The moment characteristics of a wing are obtainable from the curve of the center of pressure as a function of a or by the moment coefficient taken about the aerodynamic center CMa.c. as a function of CL. A forward motion of the c.p. as a is increased corresponds to an unstable wing section. This instability is undesirable because it requires a large download on the tail to counteract it. The characteristics of a wing section (Fig. 11.4.9) are determined principally by its mean camber line, i.e., the curvature of the median line of the profile, and secondly by the thickness distribution along the chord. In the NACA system of designation, when a four-digit number is used such as 2412, the significance is always first digit  maximum camber in percent of chord, second digit  location of position of maximum camber in tenths of chord measured from the leading edge (that is, 4 stands for 40 percent), and the last two figures indicate the maximum thickness in percent of chord. NACA also developed a 5-digit and 6-digit series of airfoils, the latter referred to as laminar flow sections. More recently, NASA work has centered on developing low-speed sections for general aviation aircraft. Today most major aerospace companies no longer use the early NACA section data and prefer to develop their own geometric and aerodynamic section properties, through either computational or experimental means. Extensive reshaping of airfoil leading edge geometries has yielded significant increases in maximum lift coefficients, and high-speed sections, termed “peaky” or “supercritical” sections have permitted cruise Mach numbers to increase significantly. These high-speed sections retain a large region of high pressure in front of the airfoil crest, resulting in a peak in pressure coefficient there at high speeds.

Fig. 11.4.7 Stalling angle of an airfoil.

The drag of a wing is made up of three components: the profile drag D0, the induced drag Di, and, near the speed of sound, the wave drag DW. The profile drag is due principally to surface friction. At aspect ratio ∞ or at zero lift the induced drag is zero, and thus the entire drag is profile drag. In Fig. 11.4.8 the coefficient of induced drag CDi = CL²/πA is also

Fig. 11.4.9  Characteristics of a wing section.

Selection of Wing Section

Fig. 11.4.8 Polar-diagram plot of airfoil data.

In selecting a wing section for a particular airplane the following factors are generally considered:
1. Maximum lift coefficient CLmax
2. Minimum drag coefficient CDmin
3. Moment coefficient at zero lift Cm0
4. Maximum value of the ratio CL/CD
For certain special cases it is necessary to consider one or more factors from the following group:
5. Value of CL for maximum CL/CD
6. Value of CL for minimum profile drag
7. Maximum value of CL^3/2/CD
8. Maximum value of CL^1/2/CD
9. Type of lift curve peak (stall characteristics)
10. Drag divergence Mach number MDD
Characteristics of airfoil sections are given in NACA-TR 586, 647, 669, 708, and 824. Figure 11.4.10 gives data on two typical sections: NACA 0006 is often used for tail surfaces, and NACA 65-212 is especially suitable for the wings of subsonic airplanes. For Mach numbers greater than 0.6, thin wings must be used.
The dimensionless coefficients CL and CD are functions of the Reynolds number Re = ρVl/μ. For wings, the characteristic length l is taken to be the chord. In standard air at sea level
Re = 6,378·V(ft/s)·l(ft) = 9,354·V(mi/h)·l(ft)

(Fig. 11.4.10 plots section lift coefficient Cl, section drag coefficient Cd, and moment coefficient Cm(c/4) against section angle of attack for the NACA 0006 and NACA 65-212 wing sections. The section ordinates tabulated with the figure are:)

x, % c:       0     1.25   2.5    5      7.5    10     15     20     25     30     40     50     60     70     80     90     95     100
y, % c (±):   0     0.947  1.307  1.777  2.1    2.341  2.673  2.869  2.971  3.001  2.902  2.647  2.282  1.832  1.312  0.724  0.403  −0.063
L.E. radius: 0.40c

Fig. 11.4.10 Properties of two airfoil sections. (Courtesy of NACA.)

For other heights, multiply by the coefficient K = (ρ/μ)/(ρ0/μ0). Values of K are as follows:

Altitude, ft*   0      5,000   10,000   15,000   20,000   25,000   30,000   40,000   50,000   60,000
K               1.000  0.884   0.779    0.683    0.595    0.515    0.443    0.300    0.186    0.116

* × 0.305 = m.
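The sea-level relation Re = 6,378·V(ft/s)·l(ft) and the K table above can be combined as in the following sketch (Python; the linear interpolation of K and the function name are conveniences assumed here, not part of the handbook).

```python
# Sketch: Reynolds number at altitude from the sea-level relation scaled by K.
# The K values below simply restate the short table above; valid for 0 to 60,000 ft.
K_TABLE = {0: 1.000, 5000: 0.884, 10000: 0.779, 15000: 0.683, 20000: 0.595,
           25000: 0.515, 30000: 0.443, 40000: 0.300, 50000: 0.186, 60000: 0.116}

def reynolds_number(v_ft_s, chord_ft, altitude_ft=0.0):
    alts = sorted(K_TABLE)
    lo = max(a for a in alts if a <= altitude_ft)
    hi = min(a for a in alts if a >= altitude_ft)
    k = K_TABLE[lo] if lo == hi else K_TABLE[lo] + (K_TABLE[hi] - K_TABLE[lo]) * (
        altitude_ft - lo) / (hi - lo)                    # approximate linear interpolation
    return 6378.0 * v_ft_s * chord_ft * k

print(reynolds_number(300.0, 5.0))           # about 9.6e6 at sea level
print(reynolds_number(300.0, 5.0, 30000.0))  # about 4.2e6 at 30,000 ft
```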

The variation of CL and CD with the Reynolds number is known as scale effect. These are shown in Figs. 11.4.11 and 11.4.12 for some typical airfoils. Figure 11.4.12 also shows in dashed lines the theoretical variation of the drag coefficient for a smooth flat plate for laminar and turbulent flow, respectively, and also for the transition region. In addition to the Reynolds number, airfoil characteristics also depend upon the Mach number (see “Supersonic and Hypersonic Aerodynamics” below).
Laminar-flow Wings  NACA developed a series of low-drag wings in which the distribution of thickness along the chord is so selected as to maintain laminar flow over as much of the wing surface as possible. The low-drag wing under controlled conditions of surface smoothness may have drag coefficients about 30 percent lower than those obtained on normal conventional wings. Low-drag airfoils are so sensitive to roughness in any form that the full advantage of laminar flow is unobtainable.

Fig. 11.4.11 Scale effect on section maximum lift coefficient.

Fig. 11.4.12 Scale effect on minimum profile-drag coefficient.

Transonic Airfoils  At airplane Mach numbers of about M = 0.75, normal airfoil sections begin to show a greater drag, which increases sharply as the speed of sound is approached. Airfoil sections known as transonic airfoils have been developed which significantly delay this drag rise to Mach numbers of 0.9 or greater (Fig. 11.4.13). Advantage may

Fig. 11.4.13 Drag rise delay due to transonic section.


also be taken of these airfoil sections by increasing the thickness of the airfoil at a given Mach number without suffering an increase in drag. Wing sweepback (Fig. 11.4.14) or sweepforward also delays the onset of the drag rise to higher Mach numbers. The flow component normal to the wing leading edge, M cos Λ, determines the effective Mach number felt by the wing. The parallel component, M sin Λ, does not significantly influence the drag rise. Therefore for a flight Mach number M, the wing senses that it is operating at a lower Mach number given by M cos Λ.

Table 11.4.2  Flaps*

Type of flap      Section CLmax (approx)   Finite wing CLmax (approx)   Remarks
Plain             2.4                      1.9                          Cf/c = 0.20. Sensitive to leakage between flap and wing.
Split             2.6                      2.0                          Cf/c = 0.20. Simplest type of flap.
Single slotted    2.8–3.0                  2.2–2.4                      Cf/c = 0.25. Shape of slot and location of deflected flap are critical.
Double slotted    3.0–3.4                  2.4–2.7                      Cf/c = 0.25. Shape of slot and location of deflected flap are critical.

* The data are for wing thickness t/c = 0.12 or 0.15. Lift increment is dependent on the leading-edge radius of the wing.

Fig. 11.4.14 Influence of wing sweepback on drag rise Mach number.

Flaps and Slots

The maximum lift of a wing can be increased by the use of slots, retractable or fixed slats on the leading edge, or flaps on the trailing edge. Fixed slots are formed by rigidly attaching a curved sheet of metal or a small auxiliary airfoil to the leading edge of the wing. The trailingedge flap is used to give increased lift at moderate angles of attack and to increase CLmax . Theoretical analyses of the effects are given in NACA-TR 938 and Br. ARCR&M 1095. Flight experience indicates that the four types shown in Fig. 11.4.15 have special advantages over other known types. Values of CLmax for these types are shown in Table 11.4.2. All flaps cause a diving moment (Cm) which must be trimmed out by a download on the tail, and this download reduces CLmax. The correction is CLmax (trim)  Cmo(l/c), where l/c is the tail length in mean chords. Boundary-Layer Control (BLC) This includes numerous schemes for (1) maintaining laminar flow in the boundary layer in the flow over a wing or (2) preventing flow separation. Schemes in the first category

try to obtain the lower frictional drag of laminar flow either by providing favorable pressure gradients, as in the NACA “low-drag” wings, or by removing part of the boundary layer. The boundary-layer thickness can be partially controlled by the use of suction applied either to spanwise slots or to porous areas. The flow so obtained approximates the laminar skin-friction drag coefficients (see Fig. 11.4.12). Schemes in the second category try to delay or improve the stall. Examples are the leading-edge slot, slotted flaps, and various forms of suction or blowing applied through transverse slots. The slotted flap is a highly effective form of boundary-layer control that delays flow separation on the flap. Spanwise Blowing Another technique for increasing the lift from an aerodynamic surface is spanwise blowing. A high-velocity jet of air is blown out along the wing. The location of the jet may be parallel to and near the leading edge of the wing to prevent or delay leading-edge flow separation, or it may be slightly behind the juncture of the trailingedge flap to increase the lift of the flap. Power for spanwise blowing is sensitive to vehicle configuration and works most efficiently on swept wings. Other applications of this technique may include local blowing on ailerons or empennage surfaces (Fig. 11.4.16). The pressure distributions on a typical airfoil are shown in Fig. 11.4.17. In this figure the ratio p/q is given as a function of distance along the chord. In Fig. 11.4.18 the effect of addition of an external airfoil flap is shown. Airplane Performance

In horizontal flight the lift of the wings must be equal to the weight of the airplane, or L = nW, where W is the weight (lb or kg) of the airplane and n = 1.00. When the airplane is in a turn or is performing a pull-up, n > 1.00. The following equation determines the minimum or stall speed of the airplane at the appropriate value of n:
nW = CLmax·½ρV²min·S     or     Vmin = √[2nW/(CLmax·ρS)]          (11.4.1)

Fig. 11.4.15 Trailing-edge flaps: (1) plain flap; (2) split flap; (3) single-slotted flap; (4) double-slotted flap (retracted); (5) double-slotted flap (extended).

Fig. 11.4.16  Influence of spanwise blowing.



Fig. 11.4.17  Pressure distribution over a typical airfoil.


Fig. 11.4.19 Thrust-drag relationships for airplane performance at sea level and 30,000 ft.


Fig. 11.4.18 Pressure distribution on an airfoil-flap combination.


This is called the stalling speed; when multiplied by a suitable safety factor, such as 1.2, it can be termed the minimum safe flight speed. The interplay of airplane drag D and thrust T dictates the airplane performance. Usually graphical means are used to display performance capability. Here the airplane drag polar and thrust available, at sea level, are superimposed as shown in Fig. 11.4.19. In level flight at constant speed T = D, where T can be generated either by propeller, rocket, or jet engine. For propeller-driven airplanes, T becomes TV and D becomes DV in performance analyses. Our discussion will center on jet propulsion, with a summary showing significant propeller-induced equation changes. Figure 11.4.19 shows that at the maximum (B) and minimum (A) speeds, T = D. The minimum thrust-dictated speed is usually less than the stall speed; the latter then dictates the lower performance boundary speed. Between the maximum and minimum speeds T > D, and this excess in T available can be used to climb (dh/dt) or to accelerate in level flight. Similar curves can be constructed for a range of altitudes (a 30,000-ft set is shown), resulting in the performance boundaries shown in Fig. 11.4.20. Here are minimum and maximum speeds, best climb speed, stall speed, and best cruise speed. Maximum and minimum speeds are equal at the absolute ceiling, where dh/dt = 0. Rate of climb, or airplane vertical velocity, is given by
dh/dt = V(T − D)/W          ft/s          (11.4.2)
where V is in ft/s. Climb angle θ is given by
θ = sin⁻¹[(T − D)/W]          deg          (11.4.3)


Fig. 11.4.20 Summary of airplane speed variations with altitude.

Jet engine thrust reduces with increase in altitude approximately as density reduces. At some altitude (T − D) becomes zero (therefore dh/dt = 0) and the absolute ceiling of the airplane is reached, beyond which the aircraft cannot fly in steady flight. Climb angle also becomes zero, and the airplane can fly at one speed only. A practical maximum altitude, termed the service ceiling, is assumed to occur when dh/dt = 100 ft/min. The absolute ceiling can be determined from the curve of maximum rate of climb as a function of altitude when this is approximated by a straight line, usually accurate at the higher altitudes. See Fig. 11.4.21. For a linear decrease in the rate of climb the following approximations hold:
Absolute ceiling H = r0·h/(r0 − r)          ft          (11.4.4)
where subscript 0 refers to the sea-level rate of climb and r is the rate of climb, in ft/min, at altitude h.
Service ceiling hs = H(r0 − 100)/r0          ft          (11.4.5)
Altitude climbed in t minutes is
h = H[1 − e^(−k)]          ft          (11.4.6)
where k = r0·t/H.


Fig. 11.4.21  Characteristic rate of climb curves.

Time to climb to altitude h is

t = 2.303·(H/r0)·log[H/(H − h)]          min          (11.4.7)
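Equations (11.4.4) to (11.4.7) chain together as in the short sketch below (Python; the example climb rates are assumed for illustration only).

```python
# Sketch of Eqs. (11.4.4)-(11.4.7) for a linear decrease of climb rate with altitude.
# r0 = sea-level rate of climb, r = rate of climb at altitude h (both ft/min).
import math

def absolute_ceiling(r0, r, h):
    return r0 * h / (r0 - r)                            # H, ft        (11.4.4)

def service_ceiling(H, r0):
    return H * (r0 - 100.0) / r0                        # hs, ft       (11.4.5)

def altitude_after(t_min, H, r0):
    return H * (1.0 - math.exp(-r0 * t_min / H))        # h, ft        (11.4.6)

def time_to_altitude(h, H, r0):
    return 2.303 * (H / r0) * math.log10(H / (H - h))   # t, min       (11.4.7)

# Example: 3,000 ft/min at sea level falling to 1,000 ft/min at 20,000 ft.
H = absolute_ceiling(3000.0, 1000.0, 20000.0)           # 30,000 ft
print(H, service_ceiling(H, 3000.0))                    # 30,000 ft and 29,000 ft
print(time_to_altitude(20000.0, H, 3000.0))             # about 11 min
```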

Fig. 11.4.22 Aircraft aerodynamic properties: drag polar and L/D ratio.

Power-off minimum glide angle γ is given by
γ = tan⁻¹[1/(L/D)max]          deg          (11.4.8)

The minimum glide angle corresponds to maximum straight line glide distance measured along the ground, sg, given by

sg = hg(L/D)max          ft          (11.4.9)

where hg is the start-of-glide altitude in feet. For performance purposes, an airplane drag polar is required, which will vary with Mach number. This polar will also change when flaps and landing gear are extended, and is given by the generic equation
CD·qS = (CD0 + CL²/πAe)·qS          (11.4.10)
where CD0 = profile drag coefficient [f(Mach number)] = CD0 (clean aircraft) + CD0 (flaps) + CD0 (gear), A = wing aspect ratio, e = Oswald’s span efficiency factor, π = 3.14159, q = ½ρV², and S = wing reference area. A typical clean-aircraft drag variation with V, complete with the corresponding L/D, is shown in Fig. 11.4.22, which shows that a maximum value of L/D occurs at a given velocity, termed the best endurance speed. The best range speed occurs at 0.866(L/D)max as shown. Table 11.4.3 shows typical values of CD0 for various types of jet aircraft. The ratio L/D (CL/CD) is sometimes called the aerodynamic efficiency of the airplane. Mission requirements dictate the magnitude for competitive values of this ratio, which is primarily controlled by the value of the wing aspect ratio and CD0. Airliners usually have high aspect ratio values and fighter aircraft low values. See Fig. 11.4.23. As seen in the following equations, maximizing L/D will maximize range and endurance.

Table 11.4.3  Total CD0 Values for Typical Aircraft Types (Clean Aircraft)

Aircraft type                          Subsonic CD0
Supersonic twin-jet fighter            0.0237
Narrow-body twin-jet transport         0.0166
Wide-body four-jet transport           0.0170

Range, more accurately termed maximum range, is defined as the maximum distance flown on a given quantity of fuel. Maximum endurance is defined as the maximum time spent in the air on a given quantity of fuel. Range R at constant altitude is
R = (2/c)·√(2/ρS)·(CL^1/2/CD)·(W0^1/2 − W1^1/2)          ft          (11.4.11)
where ρ = air density in slug/ft³, c = specific fuel consumption in lb of fuel/(lb thrust·s), and V = aircraft velocity in ft/s. Subscript 0 refers to the segment weight at start of mission and subscript 1 to segment weight at end of mission. Endurance E, in hours, is given by
E = (1/c)·(CL/CD)·ln(W0/W1)          h          (11.4.12)
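A hedged numerical sketch of Eqs. (11.4.11) and (11.4.12) follows (Python); the weights, drag-polar point, and specific fuel consumption are assumed, illustrative values, not data from Table 11.4.4.

```python
# Sketch of the jet range and endurance relations; W in lb, S in ft^2, rho in slug/ft^3.
import math

def jet_range_ft(rho, S, c_per_s, CL, CD, W0, W1):
    # R = (2/c) sqrt(2/(rho S)) (CL^1/2 / CD) (W0^1/2 - W1^1/2)        (11.4.11)
    return ((2.0 / c_per_s) * math.sqrt(2.0 / (rho * S))
            * (math.sqrt(CL) / CD) * (math.sqrt(W0) - math.sqrt(W1)))

def jet_endurance_h(c_per_h, CL, CD, W0, W1):
    # E = (1/c)(CL/CD) ln(W0/W1), with c here on a per-hour basis      (11.4.12)
    return (1.0 / c_per_h) * (CL / CD) * math.log(W0 / W1)

c_h = 0.65                       # assumed lb fuel per lb thrust per hour
R = jet_range_ft(0.000738, 1341.0, c_h / 3600.0, 0.5, 0.033, 174200.0, 130000.0)
print(R / 6076.1, "nmi")                                        # roughly 3,100 nmi
print(jet_endurance_h(c_h, 0.5, 0.033, 174200.0, 130000.0), "h")  # roughly 6.8 h
```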

In each instance, c and (L/D) correspond to the appropriate altitude and speed, and (W0 − W1) represents the fuel used. For propeller-powered airplanes, the above equations, starting at Eq. (11.4.2), become
dh/dt = (TV − DV)/W          ft/s          (11.4.13)

where V is in ft/s. If thrust horsepower available PTa = η·Pa, where Pa = available engine horsepower, η is the propeller efficiency, and Pr = horsepower required, then
dh/dt = 550·(PTa − Pr)/W          ft/s          (11.4.14)
Range R = (η/c)·(CL/CD)·ln(W0/W1)          ft          (11.4.15)

where the units of c are (lb fuel)/[(ft·lb/s)(s)].
Endurance E = (η/c)·(CL^3/2/CD)·(2ρS)^1/2·(W1^−1/2 − W0^−1/2)·(1/3600)          h          (11.4.16)

See Table 11.4.4 for dimensions and performance of selected airplanes.


Fig. 11.4.23  Typical aircraft configurations. (Drawings courtesy of Lockheed Martin.)

Power Available  The maximum efficiency ηm and the diameter of a propeller to absorb a given power at a given speed and rpm are found from a propeller-performance curve. The thrust horsepower at maximum speed PTm is found from PTm = ηm·Pm. The thrust horsepower at any speed can be approximately determined from the ratios given in the following table:

V/Vmax                              0.20   0.30   0.40   0.50   0.60   0.70   0.80   0.90   1.00   1.10
PT/PTm, fixed-pitch propeller       0.29   0.44   0.57   0.68   0.77   0.84   0.90   0.96   1.00   1.03
PT/PTm, constant-rpm propeller      0.47   0.62   0.74   0.82   0.88   0.93   0.97   0.99   1.00   1.00

The brake horsepower of an engine decreases with increase of altitude. The variation of PT with altitude depends on engine and propeller characteristics, with average values as follows:

h, ft                           0      5,000   10,000   15,000   20,000   25,000
PT/PT0 (fixed pitch)            1.00   0.82    0.66     0.52     0.41     0.30
PT/PT0 (controllable pitch)     1.00   0.85    0.71     0.59     0.48     0.38

Parasite Drag

The drag of the nonlifting parts of an airplane is called the parasite drag. It consists of two components: the frictional and the eddy-making drag. Frictional drag, or skin friction, is due to the viscosity of the fluid. It is the force produced by the viscous shear in the layers of fluid immediately


Table 11.4.4  Principal Dimensions and Performance of Typical Air Vehicles

Gulfstream G 500 (General Dynamics), executive transport. Passengers: 14–19. Cargo capacity: N.A. Span 93.5 ft; overall length 96.4 ft; overall height 25.8 ft; wing area 1,136.5 ft². Weight empty 47,800 lb; weight gross 85,500 lb. Power plant: 2 × RR Tay BR710. High speed (M 0.89); cruise speed (M 0.85); range 7,200 mi.

F-35C (Lockheed Martin), fighter. Pilot only. Cargo capacity: 15,000 lb. Span 35 ft; overall length 51.4 ft; overall height 15 ft; wing area 620 ft². Weight empty 30,000 lb; weight gross 60,000 lb. Power plant: 1 × PW F135-PW-400. High speed (M 1.6); cruise speed N.A.; range N.A.

Beech King Air C-90B (Raytheon Aircraft), general aviation. Passengers: 12. Cargo capacity: N.A. Span 50.3 ft; overall length 35.5 ft; overall height 14.3 ft; wing area 294 ft². Weight empty 6,810 lb; weight gross 10,100 lb. Power plant: 2 × PWC PT6A-21. High speed 283 mi/h; cruise speed 283 mi/h; range 1,640 mi.

427 (Bell Helicopter Textron), utility helicopter. Passengers: 6. Cargo capacity: N.A. Rotor diameter 37 ft; overall length 42.9 ft; overall height 11.4 ft. Weight empty 3,485 lb; weight gross 6,000 lb. Power plant: 2 × PWC PW206D. High speed 160 mi/h; cruise speed N.A.; range 411 mi.

747-400 (Boeing), wide-body turbofan. Passengers: 416. Cargo capacity: 6,025 ft³. Span 211.4 ft; overall length 231.8 ft; overall height 63.7 ft; wing area 5,650 ft². Weight empty 398,800 lb; weight gross 875,000 lb. Power plant: 4 × PW4056. High speed (M 0.92); cruise speed (M 0.85); range 8,356 mi.

737-800 (Boeing), narrow-body turbofan. Passengers: 162. Cargo capacity: 1,555 ft³. Span 112.6 ft; overall length 129.5 ft; overall height 41.2 ft; wing area 1,341 ft². Weight empty 91,660 lb; weight gross 174,200 lb. Power plant: 2 × CFM56-7B. High speed (M 0.82); cruise speed (M 0.788); range 3,522 mi.

A380-800 (Airbus), wide-body turbofan. Passengers: 555. Cargo capacity: N.A. Span 261.8 ft; overall length 239.5 ft; overall height 79.1 ft; wing area 9,095 ft². Weight empty 611,000 lb; weight gross 1,234,600 lb. Power plant: 4 × RR Trent 970. High speed (M 0.89); cruise speed (M 0.85); range 9,200 mi.

A310-200 (Airbus), wide-body turbofan. Passengers: 220–280. Cargo capacity: N.A. Span 144 ft; overall length 153.1 ft; overall height 51.1 ft; wing area 2,360 ft². Weight empty 174,700 lb; weight gross 313,100 lb. Power plant: 2 × GE CF6. High speed (M 0.84); cruise speed N.A.; range N.A.

B-2A (Northrop Grumman), bomber. Crew: 2. Payload: 40,000 lb. Span 172 ft; overall length 69 ft; overall height 17 ft; wing area 5,140 ft². Weight empty N.A.; weight gross 336,500 lb. Power plant: 4 × GE F118. High speed N.A.; cruise speed N.A.; range N.A.

Super Lynx 300 (AgustaWestland), naval helicopter. Passengers: 9. Cargo capacity: N.A. Rotor diameter 42 ft; overall length 50 ft; overall height 12 ft. Weight empty N.A.; weight gross 11,750 lb. Power plant: 2 × LHTEC CTS800-4N. High speed 173 mi/h; cruise speed N.A.; range 550 mi.

NOTES: N.A. = not available. SOURCE: Aviation Week, 2003 Source Book. Copyright 2003 by The McGraw-Hill Companies.


adjacent to the body. It is always proportional to the wetted area, i.e., the total surface exposed to the air. Eddy-making drag, sometimes called form drag, is due to the disturbance or wake created by the body. It is a function of the shape of the body. The total drag of a body may be composed of the two components in any proportion, varying from almost pure skin friction for a plate edgewise or a good streamline form to 100 percent form drag for a flat plate normal to the wind. A steamline form is a shape having very low form drag. Such a form creates little disturbance in moving through a fluid. When air flows past a surface, the layer immediately adjacent to the surface adheres to it, or the tangential velocity at the surface is zero. In the transition region near the surface, which is called the boundary layer, the velocity increases from zero to the velocity of the stream. When the flow in the boundary layer proceeds as if it were made up of laminae sliding smoothly over each other, it is called a laminar boundary layer. If there are also irregular motions in the layers normal to the surface, it is a turbulent boundary layer. Under normal conditions the flow is laminar at low Reynolds numbers and turbulent at high Re with a transition range of values of Re extending between 5  105 and 5  107. The profile-drag coefficients corresponding to these conditions are shown in Fig. 11.4.12. For laminar flow the friction-drag coefficient is practically independent of surface roughness and for a flat plate is given by the Blasius equation,

For rough surfaces the drag coefficients are increased. The drag coefficients as given above are based on projected area of a double-surfaced plane. If wetted area is used, the coefficients must be divided by 2. The frictional drag is DF = CDF·qS, where S is the projected area, or DF = ½CDF·qA, where A is the wetted area. Values of CDF for double-surfaced planes may be estimated from the following tabulation:

Laminar flow (Blasius equation):
Re      10      10²     10³      10⁴      10⁵      10⁶
CDF     0.838   0.265   0.0838   0.0265   0.0084   0.00265

Turbulent flow:
Re      10⁵      10⁶      10⁷      10⁸      10⁹       10¹⁰
CDF     0.0148   0.0089   0.0060   0.0043   0.00315   0.0024

These are double-surface values which facilitate direct comparison with wing-drag coefficients based on projected area. For calculations involving wetted area, use one-half of the double-surface coefficients. Interpolations in the foregoing tables must allow for the logarithmic functions; i.e., the variation in CDF is not linear with Re.

Drag Coefficients of Various Bodies

CDF = 2.656/√Re
The turbulent boundary layer is thicker and produces a greater frictional drag. For a smooth flat plate with a turbulent boundary layer,
CDF = 0.91/(log Re)^2.58
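The two flat-plate relations can be evaluated directly, as in the following sketch (Python; these are the double-surface coefficients of the tabulation above, applied to an invented example area and dynamic pressure).

```python
# Sketch: laminar (Blasius) and turbulent double-surface flat-plate friction coefficients.
import math

def cdf_laminar(Re):
    return 2.656 / math.sqrt(Re)                        # CDF = 2.656/Re^0.5

def cdf_turbulent(Re):
    return 0.91 / (math.log10(Re) ** 2.58)              # CDF = 0.91/(log10 Re)^2.58

def friction_drag_lb(CDF, q_lb_ft2, projected_area_ft2):
    return CDF * q_lb_ft2 * projected_area_ft2          # DF = CDF * q * S

print(cdf_laminar(1.0e5), cdf_turbulent(1.0e7))         # about 0.0084 and 0.0060
print(friction_drag_lb(cdf_turbulent(1.0e7), 47.6, 200.0))  # about 57 lb on 200 ft^2
```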


For bodies with sharp edges the drag coefficients are almost independent of the Reynolds number, for most of the resistance is due to the difference in pressure on the front and rear surfaces. Table 11.4.5 gives CD = D/qS, where S is the maximum cross section perpendicular to the wind.

Drag Coefficients

Object                                      Proportions        CD
Rectangular plate, sides a and b            a/b = 1            1.16
                                            4                  1.17
                                            8                  1.23
                                            12.5               1.34
                                            25                 1.57
                                            50                 1.76
                                            ∞                  2.00
Two disks, spaced a distance l apart        l/d = 1            0.93
                                            1.5                0.78
                                            2                  1.04
                                            3                  1.52
Cylinder                                    l/d = 1            0.91
                                            2                  0.85
                                            4                  0.87
                                            7                  0.99
Circular disk                                                  1.11
Hemispherical cup, open back                                   0.41
Hemispherical cup, open front, parachute                       1.35
Cone, closed base                           α = 60°            0.51
                                            α = 30°            0.34


For rounded bodies such as spheres, cylinders, and ellipsoids the drag coefficient depends markedly upon the Reynolds number, the surface roughness, and the degree of turbulence in the airstream. A sphere and a cylinder, for instance, experience a sudden reduction in CD as the Reynolds number exceeds a certain critical value. The reason is that at low speeds (small Re) the flow in the boundary layer adjacent to the body is laminar and the flow separates at about 83° from the front (Fig. 11.4.24). A wide wake thus gives a large drag. At higher speeds (large Re) the boundary layer becomes turbulent, gets additional energy from the outside flow, and does not separate on the front side of the sphere. The drag coefficient is reduced from about 0.47 to about 0.08 at a critical Reynolds number of about 400,000 in free air. Turbulence in the airstream reduces the value of the critical Reynolds number (Fig. 11.4.25). The Reynolds number at which the sphere drag CD = 0.3 is taken as a criterion of the amount of turbulence in the airstream of wind tunnels.
Cylinders  The drag coefficient of a cylinder with its axis normal to the wind is given as a function of Reynolds number in Fig. 11.4.26. Cylinder drag is sensitive to both Reynolds and Mach number. Figure 11.4.26 gives the Reynolds number effect for M < 0.35. The increase in CD because of Mach number is approximately:

M                       0.35    0.4    0.6    0.8    1.0
CD increase, percent       0      2     20     50     70

(See NACA-TN 2960.)

Streamline Forms  The drag of a streamline body of revolution depends to a very marked extent on the Reynolds number. The difference between extreme types at a given Reynolds number is of the same order as the change in CD for a given form for values of Re from 10^6 to 10^7. Tests reported in NACA-TN 614 indicate that the shape for minimum drag should have a fairly sharp nose and tail. At Re = 6.6 × 10^6 the best forms for a fineness ratio (ratio of length to diameter) of 5 have a drag
D = 0.040qA = 0.0175qV^(2/3)
where A = max cross-sectional area and V = volume. This value is equivalent to about 1.0 lb/ft² at 100 mi/h. For Re > 5 × 10^6, the drag coefficients vary approximately as Re^−0.15. Minimum drag on the basis of cross-sectional or frontal area is obtained with a fineness ratio of the order of 2 to 3. Minimum drag on the basis of contained volume is obtained with a fineness ratio of the order of 4 to 6 (see NACA-TR 291). The following table gives the ordinates for good streamline shapes: the Navy strut, a two-dimensional shape; and the Class C airships, a three-dimensional shape. Streamline shapes for high Mach numbers have a high fineness ratio.

[Table: ordinates of good streamline shapes, percent of maximum ordinate versus percent length, for the Navy strut (two-dimensional) and the Class C airship (three-dimensional).]

Fig. 11.4.24 Boundary layer of a sphere.

Fig. 11.4.25 Drag coefficients of a sphere as a function of Reynolds number and of turbulence.
Test data on the RM-10 shape are given in NACA-TR 1160. This shape is a parabolic-arc type for which the coordinates are given by the equation r_x = (X/7.5)(1.0 − X/L). Nonintrusive flow-field measurements about airfoils can be made by laser Doppler velocimeter (LDV) techniques. A typical LDV measurement system setup is shown in Fig. 11.4.27a. A seeded airflow generates the necessary reflective particles to permit two-dimensional flow directions and velocities to be accurately recorded.

Fig. 11.4.27a Typical laser Doppler velocimeter measurement system.

CONTROL, STABILITY, AND FLYING QUALITIES

Fig. 11.4.26 Drag coefficients of cylinders and spheres.

Control An airplane is controlled in flight by imposing yawing, pitching, and rolling moments by use of rudders, elevators, and ailerons. In some instances special airplane configurations will use mixtures or combinations of these control methods. It is possible to control aircraft through the use of two controls by suitable blending of aileron and rudder inputs. Stability An airplane is statically stable if the airframe moments, due to a disturbance, tend to return the airplane to its original attitude. It is dynamically stable if the oscillations produced by the static stability are rapidly damped out. A statically unstable aircraft will produce upsetting moments instead of restoring moments. A dynamically unstable aircraft will generate oscillations increasing in amplitude with time. A statically unstable or a dynamically unstable aircraft can be made stable by the use of suitable automatic flight control systems.


Longitudinal static and dynamic stability, and longitudinal trim, are commonly achieved through the use of a suitably sized horizontal tail. Under these conditions the pitching-moment equation, representing forces and moments on the airframe, is conditioned as follows: Cm = 0 for trim, and ∂Cm/∂α < 0 for stability, where Cm is the overall pitching-moment coefficient about the aircraft center of gravity and α is the aircraft angle of attack. When a definitive value is chosen for ∂Cm/∂α in the design process (such as −0.03), this is known as the design static margin. In a similar manner the vertical tail size is determined from lateral/directional stability and lateral trim requirements. The trim requirement is usually overriding in fixing vertical tail size for multiengine aircraft when critical engine-out design requirements are evaluated. It can be shown that the forces and moments, inertial and aerodynamic, acting on an aircraft can be represented by six simultaneous, linear, small-perturbation differential equations. These are called the equations of motion of an aircraft, and they represent the three forces and three moments assumed to act on the aircraft center of gravity (Fig. 11.4.27b). For conventional aircraft configurations, it is common practice to separate these into two sets of three each: one set representing longitudinal motions and one set the lateral-directional motions of the aircraft.
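The trim and static-stability conditions Cm = 0 and ∂Cm/∂α < 0 can be checked directly from tabulated pitching-moment data. The sketch below does this with made-up Cm(α) values; it only illustrates the criterion and is not a handbook procedure.

# Check trim (Cm = 0) and static stability (dCm/dalpha < 0) from tabulated data.
alpha = [-2.0, 0.0, 2.0, 4.0, 6.0]          # angle of attack, deg (illustrative)
cm = [0.06, 0.03, 0.00, -0.03, -0.06]       # pitching-moment coefficient about the c.g.

n = len(alpha)
a_bar = sum(alpha) / n
c_bar = sum(cm) / n
# Least-squares slope dCm/dalpha, per degree.
slope = sum((a - a_bar) * (c - c_bar) for a, c in zip(alpha, cm)) / \
        sum((a - a_bar) ** 2 for a in alpha)
stable = slope < 0
alpha_trim = a_bar - c_bar / slope          # alpha at which the fitted Cm crosses zero
print(f"dCm/dalpha = {slope:.4f}/deg, statically stable: {stable}, trim at alpha = {alpha_trim:.2f} deg")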

Fig. 11.4.27b Six equations of motion.

Small Civil Propeller Aircraft, W = 2,800 lb, V = 104 knots, altitude = sea level

                 t1/2, s     ωn, rad/s     ζ
Short period       0.28        3.6        0.69
Phugoid           40.3         0.22       0.08
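For a lightly damped mode the tabulated quantities are tied together by t1/2 = ln 2/(ζωn), a standard second-order response result rather than a statement from the table. A quick check against the entries above:

import math

def t_half(zeta, omega_n):
    # Time to half amplitude of a damped oscillation: t1/2 = ln 2 / (zeta * omega_n)
    return math.log(2.0) / (zeta * omega_n)

print(f"short period: t1/2 = {t_half(0.69, 3.6):.2f} s")    # about 0.28 s, as tabulated
print(f"phugoid:      t1/2 = {t_half(0.08, 0.22):.1f} s")   # about 39 s, close to the tabulated 40.3 s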

Rolling (or banking) does not produce any lateral shift in the center of the lift, so that there is no restoring moment as in pitch. However, when banked, the airplane sideslips toward the low wing. A fin placed above the center of gravity gives a lateral restoring moment that can correct the roll and stop the slip. The same effect can be obtained by dihedral, i.e., by raising the wing tips to give a transverse V. An effective dihedral of 1 to 3° on each side is generally required to obtain stability in roll. In low-wing monoplanes, 2° effective dihedral may require 8° or more of geometrical dihedral owing to interference between the wing and fuselage. In a yaw or slip the line of action of the lateral force depends on the size of the effective vertical fin area. Insufficient fin surface aft will allow the skid or slip to increase. Too much fin surface aft will swing the nose of the plane around into a tight spiral. Sound design demands sufficient vertical tail surface for adequate directional control and then enough dihedral to provide lateral stability. When moderate positive effective dihedral is present, the airplane will possess static lateral stability, and a low wing will come up automatically with very little yaw. If the dihedral is too great, the airplane may roll considerably in gusts, but there is little danger of the amplitude ever becoming excessive. Dynamic lateral stability is not assured by static stability in roll and yaw but requires that these be properly proportioned to the damping in roll and yaw. Spiral instability is the result of too much fin surface and insufficient dihedral.

HELICOPTERS REFERENCES: Br. ARC-R & M 1111, 1127, 1132, 1157, 1730, and 1859. NACA-TM 827, 836, 858. NACA-TN 626, 835, 1192, 3323, 3236. NACA-TR 434, 515, 905, 1078. NACA Wartime Reports L-97, L-101, L-110. NACA, “Conference on Helicopters,” May, 1954. Gessow and Myers, “Aerodynamics of the Helicopter,” Macmillan.


In this separation, the assumption is that these motions do not normally influence each other; therefore they can be solved separately to establish the separate dynamic behaviors. The longitudinal set is characterized by two distinct oscillations called short period and long period. Short-period motions usually influence angle of attack α and pitch angle θ, whereas the long period, called "phugoid," primarily influences the height excursions and small speed changes. The lateral-directional set characterizes the motions in spiral ψ, the so-called Dutch roll mode (a mixture of bank angle φ and sideslip β motions), and the bank angle response. All response behavior is characterized by the oscillatory parameters, relative damping ratio ζ and undamped natural frequency ωn, as well as time to one-half amplitude (t1/2) or time to double amplitude (t2). It is the magnitudes of these parameters that establish the flying or handling qualities of an aircraft.
Flying Qualities Requirements  These apply to military and civil aircraft and are in the form of Federal Aviation Regulations (FARs) and Military Standards (MIL-STD). They imply certain levels for the response parameters to ensure that the aircraft will achieve satisfactory pilot acceptability in the operation of the aircraft. The response parameter requirements vary with mission segments. Typical longitudinal response values for two aircraft, in cruise, are:

Large Jet Transport, W = 630,000 lb, V = 460 knots, altitude = 40,000 ft

                 t1/2, s     ωn, rad/s     ζ
Short period       1.86        1.00       0.39
Phugoid          211.00        0.07       0.05

Helicopters derive lift, propulsion force, and control effect from adjustments in the blade angles of the rotor system. At least two rotors are required, and these may be arranged in any form that permits control over the reaction torque. The common arrangements are main lift rotor; auxiliary torque-control or tail rotor at 90°; two main rotors side by side; two main rotors fore and aft; and two main rotors, coaxial and oppositely rotating. The helicopter rotor is an actuator disk or momentum device that follows the same general laws as a propeller. In calculating the rotor performance, the major variables concerned are diameter D, ft (radius R); tip speed Vt = ΩR, ft/s; angular velocity of rotor Ω = 2πn, rad/s; and rotor solidity σ (= ratio of blade area to disk area). The rotor performance is usually stated in terms of coefficients similar to propeller coefficients. The rotor coefficients are:
Thrust coefficient CT = T/[ρ(ΩR)²πR²]
Torque coefficient CQ = Q/[ρ(ΩR)²πR³]
Torque Q = 5,250 × bhp/rpm
Hovering  The hovering flight condition may be calculated from basic rotor data given in Fig. 11.4.28. These data are taken from full-scale rotor tests. Surface-contour accuracy can reduce the total torque coefficient 6 to 7 percent. The power required for hovering flight is greatly reduced near the ground. Observed flight-test data from various sources are plotted in Fig. 11.4.29. The ordinates are heights above the ground measured in rotor diameters.
Effect of Gross Weight on Rate of Climb  Figure 11.4.30 from Talkin (NACA-TN 1192) shows the rate of climb that can be obtained by reducing the load on a helicopter that will just hover. Conversely, given
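A minimal sketch evaluating the rotor coefficients as defined above; the rotor radius, speed, gross weight, and shaft power are illustrative assumptions, not data from the handbook figures.

import math

def rotor_coefficients(thrust_lb, torque_ftlb, rho, radius_ft, rpm):
    # CT = T / (rho * (omega*R)^2 * pi * R^2),  CQ = Q / (rho * (omega*R)^2 * pi * R^3)
    omega = 2.0 * math.pi * rpm / 60.0       # rad/s
    vt = omega * radius_ft                   # tip speed, ft/s
    disk = math.pi * radius_ft ** 2          # disk area, ft^2
    ct = thrust_lb / (rho * vt ** 2 * disk)
    cq = torque_ftlb / (rho * vt ** 2 * disk * radius_ft)
    return ct, cq

rho = 0.002377                 # slug/ft^3, sea-level air
R, rpm = 18.0, 320.0           # rotor radius (ft) and speed (illustrative)
weight = 5000.0                # lb of thrust required to hover (illustrative)
bhp = 650.0                    # shaft horsepower (illustrative)
torque = 5250.0 * bhp / rpm    # ft*lb, from Q = 5,250 bhp/rpm
ct, cq = rotor_coefficients(weight, torque, rho, R, rpm)
print(f"CT = {ct:.5f}, CQ = {cq:.6f}")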


to the experimentally determined values. CTR may be calculated to determine CQR, from which Q is obtained. The performance of rotors at forward speeds involves a number of variables. For more complete treatment, see NACA-TN 1192 and NACA Wartime Report L-110.
GROUND-EFFECT MACHINES (GEM)

For data on air-cushion vehicles and hydrofoil craft, see Sec. 11.3 "Marine Engineering."
Fig. 11.4.28 Static thrust performance of NACA 8-H-12 blades.
SUPERSONIC AND HYPERSONIC AERODYNAMICS
REFERENCES: Liepmann and Roshko, "Elements of Gas Dynamics," Wiley. "High Speed Aerodynamics and Jet Propulsion," 12 vols., Princeton. Shapiro, "The Dynamics and Thermodynamics of Compressible Fluid Flow," Vols. I, II, Ronald. Howarth (ed.), "Modern Developments in Fluid Mechanics—High Speed Flow," Vols. I, II, Oxford. Kuethe and Schetzer, "Foundations of Aerodynamics," Wiley. Ferri, "Elements of Aerodynamics of Supersonic Flows," Macmillan. Bonney, "Engineering Supersonic Aerodynamics," McGraw-Hill.

The effect of the compressibility of a fluid upon its motion is determined primarily by the Mach number M:
M = V/Vs     (11.4.17)
where V = speed of the fluid and Vs = speed of sound in the fluid, which for air is 49.02√T, where T is in °R and Vs is in ft/s. The Mach number varies with position in the fluid, and the compressibility effect likewise varies from point to point. If a body moves through the atmosphere, the overall compressibility effects are a function of the Mach number M of the body, defined as M = velocity of body/speed of sound in the atmosphere. Table 11.4.6 lists the useful gas dynamics relations between velocity, Mach number, and various fluid properties for isentropic flow. (See also Sec. 4, "Heat.")
V² = 2(h0 − h) = 2Cp(T0 − T)
(V/Vs0)² = M²/[1 + ((γ − 1)/2)M²]
T/T0 = (Vs/Vs0)² = (p/p0)^((γ−1)/γ) = (ρ/ρ0)^(γ−1)
Fig. 11.4.29 Observed ground effect on Sikorsky-type helicopters.

Fig. 11.4.30 Effect of gross weight on the rate of climb of a helicopter.

the rate of climb with a given load, the curves determine the increase in load that will reduce the rate of climb to zero; i.e., they determine the maximum load for which hovering is possible.
Performance with Forward Speed  The performance of a typical single-main-rotor-type helicopter may be shown by a curve of CTR/CQR plotted against μ = v/(πnD), where v is the speed of the helicopter, as in Fig. 11.4.31. This curve includes the parasite-drag effects, which are appreciable at the higher values of μ. It is only a rough approximation

(½ρV²)/p0 = [(γ/2)M²]/[1 + ((γ − 1)/2)M²]^(γ/(γ−1))
(A/A*)² = (1/M²){[2/(γ + 1)][1 + ((γ − 1)/2)M²]}^((γ+1)/(γ−1))     (11.4.18)

where h = enthalpy of the fluid, A = stream-tube cross section normal to the velocity, and the subscript 0 denotes the isentropic stagnation condition reached by the stream when stopped frictionlessly and adiabatically (hence isentropically). Vs0 denotes the speed of sound in that medium. The superscript * denotes the conditions occurring when the speed of the fluid equals the speed of sound in the fluid. For M = 1, Eqs. (11.4.18) become
T*/T0 = (V*s/Vs0)² = 2/(γ + 1)
p*/p0 = [2/(γ + 1)]^(γ/(γ−1))
ρ*/ρ0 = [2/(γ + 1)]^(1/(γ−1))     (11.4.19)
For air p* = 0.52828p0; ρ* = 0.63394ρ0; T* = 0.83333T0.
Fig. 11.4.31 Performance curve for a single-main-rotor-type helicopter.
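The relations of Eq. (11.4.18) are easily evaluated directly. The sketch below (γ = 1.4) computes the same ratios tabulated in Table 11.4.6, and the M = 2.0 entries of that table can be used as a spot check.

import math

def isentropic(M, gamma=1.4):
    # Ratios of Eq. (11.4.18) for isentropic flow at Mach number M.
    f = 1.0 + 0.5 * (gamma - 1.0) * M ** 2            # T0/T
    T_T0 = 1.0 / f
    p_p0 = T_T0 ** (gamma / (gamma - 1.0))
    rho_rho0 = T_T0 ** (1.0 / (gamma - 1.0))
    V_Vs0 = M * math.sqrt(T_T0)
    q_p0 = 0.5 * gamma * M ** 2 * p_p0                # (1/2 rho V^2)/p0
    A_Astar = math.inf if M == 0 else \
        (1.0 / M) * ((2.0 / (gamma + 1.0)) * f) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return {"p/p0": p_p0, "V/Vs0": V_Vs0, "A/A*": A_Astar,
            "q/p0": q_p0, "rho/rho0": rho_rho0, "T/T0": T_T0}

for name, value in isentropic(2.0).items():           # compare with the M = 2.0 row below
    print(f"{name:9s} {value:.3f}")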

Table 11.4.6  Isentropic Gas Dynamics Relations
Columns, in order: M, p/p0, V/Vs0, A/A*, (½ρV²)/p0, ρV/(ρ0Vs0), ρ/ρ0, T/T0, Vs/Vs0. Each block below lists five values of M followed by the corresponding values of the eight ratios, one line per column in the order given.

0.00 0.05 0.10 0.15 0.20

1.000 0.998 0.993 0.984 0.972

0.000 0.050 0.100 0.150 0.199

∞ 11.59 5.82 3.91 2.964

0.000 0.002 0.007 0.016 0.027

0.000 0.050 0.099 0.148 0.195

1.000 0.999 0.995 0.989 0.980

1.000 0.999 0.998 0.996 0.992

1.000 1.000 0.999 0.998 0.996

0.25 0.30 0.35 0.40 0.45

0.957 0.939 0.919 0.896 0.870

0.248 0.297 0.346 0.394 0.441

2.403 2.035 1.778 1.590 1.449

0.042 0.059 0.079 0.100 0.123

0.241 0.284 0.325 0.364 0.399

0.969 0.956 0.941 0.924 0.906

0.988 0.982 0.976 0.969 0.961

0.994 0.991 0.988 0.984 0.980

0.50 0.55 0.60 0.65 0.70

0.843 0.814 0.784 0.753 0.721

0.488 0.534 0.580 0.624 0.668

1.340 1.255 1.188 1.136 1.094

0.148 0.172 0.198 0.223 0.247

0.432 0.461 0.487 0.510 0.529

0.885 0.863 0.840 0.816 0.792

0.952 0.943 0.933 0.922 0.911

0.976 0.971 0.966 0.960 0.954

0.75 0.80 0.85 0.90 0.95

0.689 0.656 0.624 0.591 0.559

0.711 0.753 0.795 0.835 0.874

1.062 1.038 1.021 1.009 1.002

0.271 0.294 0.315 0.335 0.353

0.545 0.557 0.567 0.574 0.577

0.766 0.740 0.714 0.687 0.660

0.899 0.887 0.874 0.861 0.847

0.948 0.942 0.935 0.928 0.920

1.00 1.05 1.10 1.15 1.20

0.528 0.498 0.468 0.440 0.412

0.913 0.950 0.987 1.023 1.057

1.000 1.002 1.008 1.018 1.030

0.370 0.384 0.397 0.407 0.416

0.579 0.578 0.574 0.569 0.562

0.634 0.608 0.582 0.556 0.531

0.833 0.819 0.805 0.791 0.776

0.913 0.905 0.897 0.889 0.881

1.25 1.30 1.35 1.40 1.45

0.386 0.361 0.337 0.314 0.293

1.091 1.124 1.156 1.187 1.217

1.047 1.066 1.089 1.115 1.144

0.422 0.427 0.430 0.431 0.431

0.553 0.543 0.531 0.519 0.506

0.507 0.483 0.460 0.437 0.416

0.762 0.747 0.733 0.718 0.704

0.873 0.865 0.856 0.848 0.839

1.50 1.55 1.60 1.65 1.70

0.272 0.253 0.235 0.218 0.203

1.246 1.274 1.301 1.328 1.353

1.176 1.212 1.250 1.292 1.338

0.429 0.426 0.422 0.416 0.410

0.492 0.478 0.463 0.443 0.433

0.395 0.375 0.356 0.337 0.320

0.690 0.675 0.661 0.647 0.634

0.830 0.822 0.813 0.805 0.796

1.75 1.80 1.85 1.90 1.95

0.188 0.174 0.161 0.149 0.138

1.378 1.402 1.425 1.448 1.470

1.387 1.439 1.495 1.555 1.619

0.403 0.395 0.386 0.377 0.368

0.417 0.402 0.387 0.372 0.357

0.303 0.287 0.272 0.257 0.243

0.620 0.607 0.594 0.581 0.568

0.788 0.779 0.770 0.762 0.754

2.00 2.50 3.00 3.50 4.00

0.128 0.059 0.027 0.013 0.007

1.491 1.667 1.793 1.884 1.952

1.688 2.637 4.235 6.790 10.72

0.358 0.256 0.172 0.112 0.074

0.343 0.219 0.137 0.085 0.054

0.230 0.132 0.076 0.045 0.028

0.556 0.444 0.357 0.290 0.238

0.745 0.667 0.598 0.538 0.488

4.50 5.00 10.00

0.003 0.002 0.00002

2.003 2.041 2.182

16.56 25.00 536.00

0.049 0.033 0.002

0.035 0.023 0.001

0.017 0.011 0.0005

0.198 0.167 0.048

0.445 0.408 0.218

SOURCE: Emmons, “Gas Dynamics Tables for Air,” Dover Publications, Inc., 1947.

For subsonic regions flow behaves similarly to the familiar hydraulics or incompressible aerodynamics: in particular, an increase of velocity is associated with a decrease of stream-tube area, and friction causes a pressure drop in a tube. There are no regions where the local flow velocity exceeds sonic speed. For supersonic regions, an increase of velocity is associated with an increase of stream-tube area, and friction causes a pressure rise in a tube. M = 1 is the dividing line between these two regions. For transonic flows, there are regions where the local flow exceeds sonic velocity, and this mixed-flow region requires special analysis. For supersonic flows, the entire flow field, with the exception of the regime near stagnation areas, has a velocity higher than the speed of sound. The hypersonic regime is that range of very high supersonic speeds (usually taken as M > 5) where even a very streamlined body causes disturbance velocities comparable to the speed of sound, and stagnation

temperatures can become so high that the gas molecules dissociate and become ionized. Shock waves may occur at locally supersonic speeds. At low velocities the fluctuations that occur in the motion of a body (at the start of the motion or during flight) propagate away from the body at essentially the speed of sound. At higher subsonic speeds, fluctuations still propagate at the speed of sound in the fluid away from the body in all directions, but now the waves cannot get so far ahead; therefore, the force coefficients increase with an increase in the Mach number. When the Mach number of a body exceeds unity, the fluctuations instead of traveling away from the body in all directions are actually left behind by the body. These cases are illustrated in Fig. 11.4.32. As the supersonic speed is increased, the fluctuations are left farther behind and the force coefficients again decrease.


Fig. 11.4.32 Propagation of sound waves in moving streams.

Where, for any reason, waves form an envelope, as at M = 1 in Fig. 11.4.32, a wave of finite pressure jump may result. The speed of sound is higher in a higher-temperature region. For a compression wave, disturbances in the compressed (hence high-temperature) fluid will propagate faster than and overtake disturbances in the lower-temperature region. In this way shock waves are formed. For a stationary normal shock wave the following properties exist, where subscripts 1 and 2 refer to conditions in front of and behind the shock, respectively:
Vs2/Vs1 = a2/a1 = (T2/T1)^1/2     (11.4.20a)
V2/V1 = (M2/M1)(a2/a1)     (11.4.20b)
In the following equations, also for a normal shock, the second form shown is for air with γ = 1.4:
p2/p1 = [2γM1² − (γ − 1)]/(γ + 1) = (7M1² − 1)/6
ρ2/ρ1 = (γ + 1)M1²/[(γ − 1)M1² + 2] = 6M1²/(M1² + 5)     (11.4.20c)
T2/T1 = [1 + ((γ − 1)/2)M1²]/[1 + ((γ − 1)/2)M2²] = (1 + 0.2M1²)/(1 + 0.2M2²)     (11.4.20d)
M2 = {[(γ − 1)M1² + 2]/[2γM1² − (γ − 1)]}^1/2 = [(M1² + 5)/(7M1² − 1)]^1/2     (11.4.20e)
p20/p10 = {(γ + 1)M1²/[(γ − 1)M1² + 2]}^(γ/(γ−1)) {(γ + 1)/[2γM1² − (γ − 1)]}^(1/(γ−1)) = [6M1²/(M1² + 5)]^3.5 [6/(7M1² − 1)]^2.5     (11.4.20f)
The subscript 20 refers to the stagnation condition after the shock, and the subscript 10 refers to the stagnation condition ahead of the shock. In a normal shock M, V, and p0 decrease; p, ρ, T, and s increase, whereas T0 remains unchanged. These relations are given numerically in Table 11.4.7. If the shock is moving, these same relations apply relative to the shock. In particular, if the shock advances into stationary fluid, it does so at the speed V1, which is always greater than the speed of sound in the stationary fluid by an amount dependent upon the shock strength.
When an air vehicle is traveling at high subsonic, transonic, or supersonic speeds, shock waves will form on it. A shock wave is a thin region of air through which the air physical properties change. Shock waves can be normal or oblique to the air vehicle direction of flight. An oblique shock behaves exactly like a normal shock with respect to the normal component of the stream; the tangential component is left unchanged. Thus the resultant velocity not only drops abruptly in magnitude but also changes discontinuously in direction. Figure 11.4.33 gives the relations between M1, M2, θW, and δ. In supersonic flow past a two-dimensional wedge with semiangle δ, one strong and one weak oblique shock wave will theoretically form attached to the leading edge (see Fig. 11.4.34a). In practice only the weak one will be present, and downstream of this oblique shock the resultant flow will be parallel to the wedge surface. It is possible to use Eqs. (11.4.20) to generate the flow characteristics about wedges with various wedge angles, or more easily with the help of Fig. 11.4.33. The relationship between δ, M1, and θW is shown in Fig. 11.4.34b. These are related through the equation
tan δ = 2 cot θW (M1² sin²θW − 1)/[M1²(γ + cos 2θW) + 2]     (11.4.20g)
The results of a numerical evaluation of the above equation for 1.5 ≤ M1 ≤ 20, γ = 1.4 (for air), are shown graphically in Fig. 11.4.35. The locus of the maximum wedge angle line differentiates between weak and strong shocks. For small wedge angles, the shock angle differs only slightly from sin⁻¹(1/M1), the Mach angle, and the velocity component normal to this wave is the speed of sound. The pressure jump is small and is given approximately by
p − p1 = [γp1M1²/(M1² − 1)^1/2] δ
where δ is the wedge semiangle. For wedge semiangles that exceed the corresponding Mach number lines, a detached shock will form at the leading edge (Fig. 11.4.36). For example, if δ = 40° and M1 = 3.00, a detached bow wave will form in front of the wedge leading edge (see Fig. 11.4.35). Behind this detached shock there will be a region of subsonic flow reaching the wedge leading edge. Exact solutions also exist for supersonic flow past a cone. Above a certain supersonic Mach number, a conical shock wave is attached to the apex of the cone. Figures 11.4.37 and 11.4.38 show these exact relations; they are so accurate that they are often used to determine the Mach number of a stream by measuring the shock-wave angle on a cone of known angle. For small cone angles the shock differs only slightly from the Mach cone, i.e., a cone whose semiapex angle is the Mach angle; the pressure on the cone is then given approximately by
p − p1 = γp1M1²δ² ln[2/(δ(M1² − 1)^1/2)]     (11.4.21)
where δ is the cone semiangle.
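A companion sketch for the normal-shock relations, Eqs. (11.4.20c) to (11.4.20f); its output can be checked against Table 11.4.7 below (for M1 = 2.0 the tabulated values are p2/p1 = 4.500, ρ2/ρ1 = 2.667, M2 = 0.577, T2/T1 = 1.688, p20/p10 = 0.721).

def normal_shock(M1, gamma=1.4):
    # Ratios across a stationary normal shock as functions of the upstream Mach number.
    gm1, gp1 = gamma - 1.0, gamma + 1.0
    p2_p1 = (2.0 * gamma * M1 ** 2 - gm1) / gp1
    rho2_rho1 = gp1 * M1 ** 2 / (gm1 * M1 ** 2 + 2.0)
    M2 = ((gm1 * M1 ** 2 + 2.0) / (2.0 * gamma * M1 ** 2 - gm1)) ** 0.5
    T2_T1 = (1.0 + 0.5 * gm1 * M1 ** 2) / (1.0 + 0.5 * gm1 * M2 ** 2)
    p20_p10 = (gp1 * M1 ** 2 / (gm1 * M1 ** 2 + 2.0)) ** (gamma / gm1) * \
              (gp1 / (2.0 * gamma * M1 ** 2 - gm1)) ** (1.0 / gm1)
    return p2_p1, rho2_rho1, M2, T2_T1, p20_p10

print([round(x, 3) for x in normal_shock(2.0)])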

Table 11.4.7  Normal Shock Relations
Columns, in order: M, p2/p1, p20/p1, p20/p10, M2, Vs2/Vs1, V2/V1, T2/T1, ρ2/ρ1. Each block below lists five values of M followed by the corresponding values of the eight ratios, one line per column in the order given.

1.00 1.05 1.10 1.15 1.20

1.000 1.120 1.245 1.376 1.513

1.893 2.008 2.133 2.266 2.408

1.000 1.000 0.999 0.997 0.993

1.000 0.953 0.912 0.875 0.842

1.000 1.016 1.032 1.047 1.062

1.000 0.923 0.855 0.797 0.745

1.000 1.033 1.065 1.097 1.128

1.000 1.084 1.169 1.255 1.342

1.25 1.30 1.35 1.40 1.45

1.656 1.805 1.960 2.120 2.286

2.557 2.714 2.878 3.049 3.228

0.987 0.979 0.970 0.958 0.945

0.813 0.786 0.762 0.740 0.720

1.077 1.091 1.106 1.120 1.135

0.700 0.660 0.624 0.592 0.563

1.159 1.191 1.223 1.255 1.287

1.429 1.516 1.603 1.690 1.776

1.50 1.55 1.60 1.65 1.70

2.458 2.636 2.820 3.010 3.205

3.413 3.607 3.805 4.011 4.224

0.930 0.913 0.895 0.876 0.856

0.701 0.684 0.668 0.654 0.641

1.149 1.164 1.178 1.193 1.208

0.537 0.514 0.492 0.473 0.455

1.320 1.354 1.388 1.423 1.458

1.862 1.947 2.032 2.115 2.198

1.75 1.80 1.85 1.90 1.95

3.406 3.613 3.826 4.045 4.270

4.443 4.670 4.902 5.142 5.389

0.835 0.813 0.790 0.767 0.744

0.628 0.617 0.606 0.596 0.586

1.223 1.238 1.253 1.268 1.284

0.439 0.424 0.410 0.398 0.386

1.495 1.532 1.573 1.608 1.647

2.279 2.359 2.438 2.516 2.592

2.00 2.50 3.00 3.50 4.00

4.500 7.125 10.333 14.125 18.500

5.640 8.526 12.061 16.242 21.068

0.721 0.499 0.328 0.213 0.139

0.577 0.513 0.475 0.451 0.435

1.299 1.462 1.637 1.821 2.012

0.375 0.300 0.259 0.235 0.219

1.688 2.138 2.679 3.315 4.047

2.667 3.333 3.857 4.261 4.571

4.50 5.00 10.00

23.458 29.000 116.500

26.539 32.653 129.220

0.092 0.062 0.003

0.424 0.415 0.388

2.208 2.408 4.515

0.208 0.200 0.175

4.875 5.800 20.388

4.812 5.000 5.714

SOURCE: Emmons, “Gas Dynamics Tables for Air,” Dover Publications, Inc., 1947.

Fig. 11.4.33 Oblique shock wave relations.

A nozzle consisting of a single contraction will produce at its exit a jet of any velocity from M = 0 to M = 1 by a proper adjustment of the pressure ratio. For use as a subsonic wind-tunnel nozzle where a uniform parallel gas stream is desired, it is only necessary to connect the supply section to the parallel-walled or open-jet test section by a smooth, gently curving wall. If the radius of curvature of the wall is nowhere less than the largest test-section cross-sectional dimension, no flow separation will occur and a good test gas stream will result. When a converging nozzle connects two chambers with the pressure drop beyond the critical [(p/p0) < (p*/p0)], the Mach number at the exit of the nozzle will be 1; the pressure ratio from the supply section to the nozzle exit will be critical; and all additional expansion will take place outside the nozzle. A nozzle designed to supply a supersonic jet at its exit must converge to a minimum section and diverge again. The area ratio from the minimum section to the exit is given in the column headed A/A* in Table 11.4.6. A converging-diverging nozzle with a pressure ratio p/p0 = (gas pressure)/(stagnation pressure), Table 11.4.6, gradually falling from unity to zero will produce shock-free flow for all exit Mach numbers from zero to the subsonic Mach number corresponding to its area ratio. From this Mach number to the supersonic Mach number corresponding to the given area ratio, there will be shock waves in the nozzle. For all smaller pressure ratios, the Mach number at the nozzle exit will not change, but additional expansion to higher velocities will occur outside of the nozzle. To obtain a uniform parallel shock-free supersonic stream, the converging section of the nozzle can be designed as for a simple converging nozzle. The diverging or supersonic portion must be designed to produce and then cancel the expansion waves. These nozzles would perform as designed if it were not for the growth of the boundary layer. Experience indicates that these nozzles give a good first approximation to a uniform parallel supersonic stream but at a somewhat lower Mach number. For rocket nozzles and other thrust devices the gain in thrust obtained by making the jet uniform and parallel at complete expansion must be balanced against the loss of thrust caused by the friction on the wall of the greater length of nozzle required. A simple conical diverging section, cut off experimentally for maximum thrust, is generally used (see also Sec. 11.5 "Jet Propulsion and Aircraft Propellers"). For transonic and supersonic flow, diffusers are used for the recovery of kinetic energy. They follow the test sections of supersonic wind tunnels and are used as inlets on high-speed planes and missiles for ram recovery. For the first use, the diffuser is fed a nonuniform stream from the test section (the nonuniformities depending on the particular body under test) and should yield the maximum possible pressure-rise ratio. In missile use the inlet diffuser is fed by a uniform (but perhaps slightly yawed) airstream. The maximum possible pressure-rise ratio is important but must provide a sufficiently uniform flow at the exit to assure good performance of the compressor or combustion chamber that follows. In simplest form, a subsonic diffuser is a diverging channel, a nozzle in reverse. Since boundary layers grow rapidly with a pressure rise, subsonic diffusers must be diverged slowly, 6 to 8° equivalent cone angle, i.e., the apex angle of a cone with the same length and area ratio.
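Because the area ratio A/A* fixes the shock-free exit Mach numbers of a converging-diverging nozzle, it is often necessary to invert the A/A* relation numerically. The sketch below does so by bisection; the bracketing limits are arbitrary choices, and the example area ratio of 1.688 corresponds to about M = 2.0 on the supersonic branch or M = 0.37 on the subsonic branch (cf. Table 11.4.6).

def area_ratio(M, gamma=1.4):
    # A/A* for isentropic flow at Mach number M (the Table 11.4.6 column).
    f = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M ** 2)
    return f ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def mach_from_area_ratio(target, supersonic=True, gamma=1.4):
    # Invert A/A* by bisection on the chosen branch.
    lo, hi = (1.0, 50.0) if supersonic else (1e-4, 1.0)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        too_small = area_ratio(mid, gamma) < target
        if supersonic:
            lo, hi = (mid, hi) if too_small else (lo, mid)
        else:
            lo, hi = (lo, mid) if too_small else (mid, hi)
    return 0.5 * (lo + hi)

print(mach_from_area_ratio(1.688, supersonic=True))    # about 2.0
print(mach_from_area_ratio(1.688, supersonic=False))   # about 0.37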


Fig. 11.4.34 Shock waves on a wedge.

Similarly, a supersonic diffuser in its simplest form is a supersonic nozzle in reverse. Both the convergent and divergent portions must change cross section gradually. In principle it is possible to design a shock-free diffuser. In practice shock-free flow is not attained, and the design is based upon minimizing the shock losses. Oblique shocks should be produced at the inlet and reflected a sufficient number of times to get compression nearly to M = 1. A short parallel section and a divergent section can now be added with the expectation that a weak normal shock will be formed near the throat of the diffuser. For a supersonic wind tunnel, the best way to attain the maximum pressure recovery at a wide range of operating conditions is to make the diffuser throat variable. The ratio of diffuser-exit static pressure to the diffuser-inlet (test-section outlet) total pressure is given in Table 11.4.8. These pressure recoveries are attained by the proper adjustment of the throat section of a variable diffuser on a supersonic wind-tunnel nozzle. A supersonic wind tunnel consists of a compressor or compressor system including precoolers or aftercoolers, a supply section, a supersonic nozzle, a test section with balance and other measuring equipment, a diffuser, and sufficient ducting to connect the parts. The minimum pressure ratio required from supply section to diffuser exit is given in Table 11.4.8. Any pressure ratio greater than this is satisfactory. The extra pressure ratio is automatically wasted by additional shock waves that appear in the diffuser. The compression ratio required of the compressor system must be greater than that of Table 11.4.8 by at least an amount sufficient to take care of the pressure drop in the ducting and valves. The latter losses are estimated by the usual hydraulic formulas. After selecting a compressor system capable of supplying the required maximum pressure ratio, the test-section area is computed from
A = 1.73 (Q/Vs0)(A/A*)(pe/p0)     ft²     (11.4.22)

where Q is the inlet volume capacity of the compressors, ft3/s; Vs0 is the speed of sound in the supply section, ft/s; A/A* is the area ratio given in Table 11.4.6 as a function of M; pe /p0 is the pressure ratio given in Table 11.4.8 as a function of M.
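A short sketch applying Eq. (11.4.22) as reconstructed above. The compressor capacity, supply-section sound speed, and diffuser pressure recovery pe/p0 are assumed values for illustration; pe/p0 would normally be read from Table 11.4.8, which is not reproduced here.

def test_section_area(Q_cfs, Vs0_fps, A_Astar, pe_p0):
    # Eq. (11.4.22): A = 1.73 * (Q/Vs0) * (A/A*) * (pe/p0), ft^2
    return 1.73 * (Q_cfs / Vs0_fps) * A_Astar * pe_p0

Q = 2000.0        # compressor inlet volume capacity, ft^3/s (assumed)
Vs0 = 1117.0      # speed of sound in the supply section, ft/s (assumed)
A_Astar = 1.688   # from Table 11.4.6 for a test Mach number of 2.0
pe_p0 = 0.55      # assumed diffuser-exit to supply-section pressure ratio
print(f"A = {test_section_area(Q, Vs0, A_Astar, pe_p0):.2f} ft^2")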


Fig. 11.4.37 Wave angles for supersonic flow around cones.

section. Such tunnels may have steady flow for only a few seconds, but by careful instrumentation, sufficient data may be obtained in this time. A shock tube may be used as an intermittent-wind tunnel as well as to study shock waves and their interactions; it is essentially a long tube of constant or varying cross section separated into two parts by a frangible diaphragm. High pressure exists on one side; by rupturing the diaphragm, a shock wave moves into the gas with the lower pressure. After the shock wave a region of steady flow exists for a few milliseconds. Very high stagnation temperatures can be created in a shock tube, which is not the case in a wind tunnel, so that it is useful for studying hypersonic flow phenomena. Wind-tunnel force measurements are subject to errors caused by the model support strut. Wall interference is small at high Mach numbers for which the reflected model head wave returns well behind the model. Near M  1, the wall interference becomes very large. In fact, the tunnel chokes at Mach numbers given in Table 11.4.6, at


Fig. 11.4.35 Oblique shock properties for air: strong and weak shock variables.

The nozzle itself is designed for uniform parallel airflow in the test section. Since for each Mach number a different nozzle is required, the nozzle must be flexible or the tunnel so arranged that fixed nozzles can be readily interchanged. The Mach number of a test is set by the nozzle selection, and the Reynolds number is set by the inlet conditions and size of model. The Reynolds number is computed from
Re = Re0 D (p0/14.7)(540/T0)^1.268     (11.4.23)

where Re0 is the Reynolds number per inch of model size for atmospheric temperature and pressure, as given by Fig. 11.4.39, D is model diam, in; p0 is the stagnation pressure, lb/in2; T0 is the stagnation temperature, 8R. For a closed-circuit tunnel, the Reynolds number can be varied independently of the Mach number by adjusting the mass of air in the system, thus changing p0. Intermittent-wind tunnels for testing at high speeds do not require the large and expensive compressors associated with continuous-flow tunnels. They use either a large vacuum tank or a large pressure tank (often in the form of a sphere) to produce a pressure differential across the test

A/A* = 1/[1 − (area of model projected on test-section cross section)/(area of test-section cross section)]     (11.4.24)

There are two choking points: one subsonic and one supersonic. Between these two Mach numbers, it is impossible to test in the tunnel. As these Mach numbers are approached, the tunnel wall interference becomes very large.

Fig. 11.4.36 Detached shock ahead of a wedge with δ > δmax.

Fig. 11.4.40 Regions of flow about an airfoil (Mach numbers are approximate).


Fan Details

Propeller fans and other axial fans may use blades shaped to airfoil sections or blades of uniform thickness. Blading may be fixed, adjustable at standstill, or variable in operation. Propeller fans have very small hubs. Hub-to-tip diameter ratios ranging from 0.4 to 0.7 are common in vane-axial fans. The larger the hub, the more important it is to have an inner cylinder approximately the hub size located downstream of the impeller. The guide vanes of a vane-axial fan are located in the annular space between the tubular casing and the inner cylinder. Diffusers are generally used between the fan and the discharge ductwork. Centrifugal fans use various types of blading. Forward-curved blades are shallow and curved so that both the tip and the heel point in the direction of rotation. Radial and radial-tip blades both are radial at the tip, but the latter are curved at the heel to point in the direction of rotation. Backward-curved and backward-inclined blades point in the direction opposite rotation at the tip and in the direction of rotation at the heel. All the above blades are of uniform thickness and are designed for radial flow. Airfoil blades have backward-curved chord lines so that the leading edge of the airfoil is at the heel pointing forward and the trailing edge at the tip pointing backward with respect to rotation. Impellers for all blade shapes are usually shrouded and may have single or double inlets. Blade widths are related to the inlet-to-tip-diameter ratio. Tip angles may vary widely, but heel angles should be set to minimize entrance losses. Scroll casings may be fitted with a streamlined inlet bell, an inlet cone, or simply a collar. Tubular centrifugals may be designed for backward-curve, air foil, or mixed-flow impellers. An inlet bell and discharge guide vanes are required for good performance. Cross-flow fans utilize impellers with blading similar to that of a forward-curved centrifugal, but the end shrouds have no inlet holes. Blade-length-to-tip-diameter ratios are limited only by structural considerations. Power roof ventilators may use either axial- or radial-flow impellers. The casings will include either a propeller fan mounting ring or a tubular casing to guide the flow for an axial-flow impeller. If a radial-flow impeller is used, an inlet bell is required, but the scroll case may be replaced by the ventilator hood. Plenum fans and plug fans are fans without traditional casings. Plenum fans are used to move air through a system. They utilize centrifugal impellers and their drives are inside the plenum. Plug fans are used to circulate air or gas within a chamber or system. They may have either centrifugal or axial impellers and their drives are outside the chamber.


This expression will usually be accurate enough even when moist air is involved. Fan air density is the density of the air corresponding to the total pressure and total temperature at the fan inlet: ρ = ρ1.
Fan Capacity  Volume flow rate is usually determined from pressure measurements, e.g., a velocity-pressure traverse taken with a Pitot static tube or a pressure drop across a flowmeter. The average velocity pressure for a Pitot traverse is
pvx = (Σ√pvxr /n)²
where subscript x indicates the plane of the measurements, subscript r indicates a reading at one station, n is the number of stations, and Σ is the summation sign. The corresponding capacity is
Qx = 1,097Ax √(pvx/ρx)
For a flowmeter
Qx = 1,097CAxY √(Δp/ρx)/F
where C = coefficient of discharge of the meter, Y = expansion factor for the gas, Δp = measured pressure drop, and F = velocity-of-approach factor for the meter installation. Fan capacity is the volumetric flow rate at fan air density
Q = Qx ρx/ρ
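A minimal sketch of the capacity calculation just described. The traverse readings, duct area, and densities below are assumed for illustration; the 1,097 coefficient applies to the in wg, lbm/ft³, and ft/min units used above.

import math

def pitot_traverse_capacity(pv_readings, area_x, rho_x, rho_fan):
    # pvx = (sum(sqrt(pvxr))/n)^2;  Qx = 1,097*Ax*sqrt(pvx/rhox);  Q = Qx*rhox/rho
    n = len(pv_readings)
    pvx = (sum(math.sqrt(p) for p in pv_readings) / n) ** 2
    Qx = 1097.0 * area_x * math.sqrt(pvx / rho_x)
    return pvx, Qx, Qx * rho_x / rho_fan

readings = [0.48, 0.52, 0.55, 0.50, 0.46, 0.53]   # velocity pressures, in wg (assumed)
pvx, Qx, Q = pitot_traverse_capacity(readings, area_x=4.0, rho_x=0.074, rho_fan=0.075)
print(f"pvx = {pvx:.3f} in wg, Qx = {Qx:.0f} ft^3/min, fan capacity Q = {Q:.0f} ft^3/min")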

pt = pt2 − pt1
When the fan draws directly from the atmosphere, pt1 = 0. When the fan discharges directly to the atmosphere, pt2 = pv2. If either side of the fan is connected to ductwork, etc., and the measuring plane is remote, the measured values should be corrected for the approximate pressure drop between the measuring plane and the fan:
pt2 = ptx + Δp2−x          pt1 = ptx − Δpx−1


Fan Velocity Pressure  Fan velocity pressure is the pressure corresponding to the average velocity at the fan outlet:
pv = ρ2[(Q/A2)/1,097]²
Fan Static Pressure  Fan static pressure is the difference between the fan total pressure and the fan velocity pressure. Therefore, fan static pressure is the difference between the static pressure at the fan outlet and the total pressure at the fan inlet:
ps = pt − pv = ps2 − pt1
Fan Speed  Fan speed is the rotative speed of the impeller.
Compressibility Factor  The compressibility factor is the ratio of the fan total pressure p′t that would be developed with an incompressible fluid to the fan total pressure pt that is developed with a compressible fluid, all other conditions being equal:
Kp = p′t/pt = [n/(n − 1)][(pt2a/pt1a)^((n−1)/n) − 1]/[(pt2a/pt1a) − 1]
Compressibility factor can be determined from test measurements using
x = pt/pt1a          z = [(γ − 1)/γ] 6,356H/(Qpt1a)          Kp = [z log (1 + x)]/[x log (1 + z)]
Fan Power Output  Fan power output is the product of fan capacity, fan total pressure, and compressibility factor:
Ho = QptKp/6,356
Fan Power Input  Fan power input is the power required to drive the fan and any elements in the drive train which are considered a part of the fan. Power input can be calculated from appropriate measurements for a dynamometer, torque meter, or calibrated motor.
Fan Total Efficiency  Fan total efficiency is the ratio of the fan power output to the fan power input:
ηt = QptKp/(6,356H)
Fan Static Efficiency  Fan static efficiency is the fan total efficiency multiplied by the ratio of fan static pressure to fan total pressure:
ηs = ηt ps/pt
Fan Sound Power Level  Fan sound power level is 10 times the logarithm (base 10) of the ratio of the actual sound power in watts to 10^−12 watts:
Lw = 10 log (W/10^−12)
The total sound power level of a fan is usually assumed to be 3 dB higher than either the inlet or outlet component. The casing component varies with construction but will usually range from 15 to 30 dB less than the total. The frequency content of fan noise is usually expressed in octave bands, but may be given in narrow bands or even as an overall value. The sound power level can be calculated from sound pressure measurements or sound intensity measurements. The reverberant room method utilizes a known sound source to calibrate the room and a microphone to measure the sound pressures. The sound intensity method utilizes two microphones in a sound-intensity probe to measure the sound intensities over an enveloping surface. The in-duct method utilizes anechoic terminations to facilitate sound pressure measurements without end reflections.
Head  The difference between head and pressure is important in fan engineering. Both are measures of the energy in the air. Head is energy per unit weight and can be expressed in ft·lb/lb, which is often abbreviated to ft (of fluid flowing). Pressure is energy per unit volume and can be expressed in ft·lb/ft³, which simplifies to lb/ft² or force per unit area. The use of the inch water gage (in wg) is a convenience in fan engineering reflecting the usual methods of measurement. It sounds like a head measurement but is actually a pressure measurement corresponding

to 5.192 lb/ft², the pressure exerted by a column of water 1 inch high. Pressures can be converted into heads and vice versa:
p = ρh/5.192          h = 5.192p/ρ
For instance, for air at 0.075 lbm/ft³ density, 1 in wg corresponds to 69.4 ft of air. That is, a column of air at that density would exert a pressure of 5.192 lb/ft². Lighter air would exert less pressure for a given head. For a given pressure, the head would be higher with lighter air. Although it is not widely used in the United States, head is commonly used in a number of European countries. Fans, like other turbomachines, can be considered constant-head, constant-capacity (volumetric) machines. This means that a fan will develop the same head at a given capacity regardless of the fluid handled, all other conditions being equal. Of course, this also means that a fan will develop a pressure proportional to the density at a given capacity, all other conditions being equal. All the preceding equations are based on the U.S. Customary Units listed under Symbols. If SI units are to be used, certain numerical coefficients will have to be modified. For instance, substitute √2 for 1,097 in the flow equations when using Pa for pressure, kg/m³ for density, and m³/s for capacity. Similarly, substitute 1.0 for 6,356 in the power equations when using Pa for pressure, m³/s for capacity, and W for power. Common metric practice in fan engineering leads to other numerical coefficients. When mm wg is used for pressure, m³/s for capacity, kW for power, and kg/m³ for density, substitute 4.424 for 1,097 in the flow equations and 102.2 for 6,356 in the power equations. The preceding discussion utilized what may be termed the volume-flow-rate-pressure approach to expressing fan performance. An alternative, the mass-flow-rate-specific-energy approach, is equally valid but not generally used in the United States. Refer to ASME PTC-11 for additional details.
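A small sketch chaining the fan power and efficiency relations above, with Kp taken as 1 (adequate at low pressure rise). The inputs are roughly the point-of-operation figures quoted for Fig. 14.5.2 further on; small differences from the quoted efficiencies reflect reading and rounding of the chart values.

def fan_performance(Q_cfm, pt_inwg, ps_inwg, H_hp, Kp=1.0):
    # Ho = Q*pt*Kp/6,356;  eta_t = Ho/H;  eta_s = eta_t*ps/pt
    Ho = Q_cfm * pt_inwg * Kp / 6356.0
    eta_t = Ho / H_hp
    eta_s = eta_t * ps_inwg / pt_inwg
    return Ho, eta_t, eta_s

Ho, eta_t, eta_s = fan_performance(Q_cfm=27300, pt_inwg=3.4, ps_inwg=3.0, H_hp=18.3)
print(f"power output = {Ho:.1f} hp, total efficiency = {eta_t:.2f}, static efficiency = {eta_s:.2f}")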

FAN AND SYSTEM PERFORMANCE CHARACTERISTICS
The performance characteristics of a fan are best described by a graph. The conventional method of graphing fan performance is to plot a series of curves with capacity as abscissa and all other variables as ordinates. System characteristics can be plotted in a similar manner.
System Characteristics

Most systems served by a fan have characteristics which can be described by a parabola passing through the origin; i.e., the energy required to produce flow through the system (which can be expressed as pressure or head) varies approximately as the square of the flow. In some cases, the system characteristics will not pass through the origin because the energy required to produce flow through an element of the system may be controlled at a particular value, e.g., with venturi scrubbers. In some cases, the system characteristic will not be parabolic because the flow through an element of the system is laminar rather than turbulent, e.g., in some types of filters. Whatever the case, the system designer should establish the characteristics by determining the energy requirements at various flow rates. The energy requirement (pressure drops or head losses) for each element can be determined by reference to handbook or manufacturer's literature or by test. The true measure of the energy requirement for a system element is the total pressure drop or the total head loss. Only if the entrance velocity for the element equals the exit velocity will the change in static pressure equal the total pressure drop. There are some advantages in using static pressure change, but the system designer is usually well advised to use total pressure drops to avoid errors in fan selection. The sum of the total pressure losses for elements on the inlet side of the fan will equal −pt1. This should include energy losses at the entrance to the system but not the energy to accelerate the air to the velocity at the fan inlet, which is chargeable to the fan. The sum of the total pressure losses for elements on the discharge side of the fan will equal pt2. This should include the kinetic energy of the stream issuing from the system. The total system requirement will be the arithmetic sum of all the appropriate losses or the algebraic difference pt2 − pt1, which is also pt.
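Since the fan operates where its characteristic crosses the system characteristic, the operating point can be located numerically once both are known. The sketch below assumes a parabolic system curve pt = kQ² and a fan curve given as a table of points; both sets of numbers are illustrative.

def operating_point(fan_curve, k_system):
    # Bracket the crossing of the fan curve and the system parabola segment by
    # segment, then locate it by linear interpolation of the pressure excess.
    for (q1, p1), (q2, p2) in zip(fan_curve, fan_curve[1:]):
        e1 = p1 - k_system * q1 ** 2
        e2 = p2 - k_system * q2 ** 2
        if e1 >= 0.0 >= e2:
            t = e1 / (e1 - e2)
            q = q1 + t * (q2 - q1)
            return q, k_system * q ** 2
    return None

# Fan total-pressure curve: (capacity ft^3/min, pressure in wg); assumed values.
fan = [(0, 5.0), (10000, 4.8), (20000, 4.3), (30000, 3.1), (40000, 1.2)]
k = 3.4 / 27000 ** 2          # system constant giving 3.4 in wg at 27,000 ft^3/min
print(operating_point(fan, k))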


A system characteristic curve based on total pressure is plotted in Fig. 14.5.2. A characteristic based on static pressure is also shown. The latter recognizes the definition of fan static pressure so that the only difference is the velocity pressure corresponding to the fan outlet velocity. This system could operate at any capacity provided a fan delivered the exact pressure to match the energy requirements shown on the system curve for that capacity. The advantages of plotting the system characteristics on the fan graph will become evident in the following discussions.

Fig. 14.5.2 Fan and system characteristics.
Fan Characteristics

The constant-speed performance characteristics of a fan are illustrated in Fig. 14.5.2. These characteristics are for a particular size and type of fan operating at a particular speed and handling air of a particular density. The fan can operate at any capacity from zero to the maximum shown, but when applied on a particular system the fan will operate only at the intersection of the system characteristics with the appropriate fan pressure characteristic. For the case illustrated, the fan will operate at Q = 27,300 ft³/min and pt = 3.4 in wg or ps = 3 in wg, requiring H = 18.3 hp at the speed and density for which the curves were drawn. The static efficiency at this point of operation is 73 percent, and the total efficiency is 80 percent. If the system characteristic had been lower, it would have intersected the fan characteristic at a higher capacity and the fan would have delivered more air. Contrariwise, if the system characteristic had been higher, the capacity of the fan would have been less. Capacity reduction can be accomplished, in fact, by creating additional resistance, as with an outlet damper.


Figure 14.5.3 illustrates the characteristics of a fan with damper control, variable inlet vane control, and variable speed control. These particular characteristics are for a backward-curved centrifugal fan, but the general principles apply to all fans. Outlet dampers do not affect the flow to the fan and therefore can alter fan performance only by adding resistance to the system and producing a new intersection. Point 1 is for wide-open dampers. Points 2 and 3 are for progressively closed dampers. Note that some power reduction accompanies the capacity reduction for this particular fan. Variable-inlet vanes produce inlet whirl, which reduces pressure-producing capability. Point 4 is for wideopen inlet vanes and corresponds to point 1. Points 5 and 6 are for progressively closed vanes. Note that the power reduction at reduced capacity is better for vanes than for dampers. Variable speed is the most efficient means of capacity control. Point 7 is for full speed and points 8 and 9 for progressively reduced speed. Note the improved power savings over other methods. Variable speed also has advantages in terms of lower noise and reduced erosion potential but is generally at a disadvantage regarding first cost. Figure 14.5.4 illustrates the characteristics of a fan with variable pitch control. These characteristics are for an axial-flow fan, but comparisons to the centrifugal-fan ratings in Fig. 14.5.3 can be made. Point 10 corresponds to points 1, 4, or 7. Reduced ratings are shown at points 11 and 12 obtained by reducing the pitch. Power savings can be almost as good as with speed control. Notice that in all methods of capacity control, operation is at the intersection of a system characteristic with a fan characteristic. With variable vanes, variable speed, and variable pitch, the fan characteristic is modified. With damper control, the effect could be considered a change in fan characteristics, but it generally makes more sense to consider it a change in system characteristics. Fan-System Matching

It has already been observed in the discussions of both system and fan characteristics that the point of operation for one fan on a particular system will be at the intersection of their characteristics. Stated another way, the energy required by the system must be provided by the fan exactly. If the fan delivers too much or too little energy, the capacity will be more or less than desired. The effects of utilizing dampers, variable speed, variable-inlet vanes, and variable pitch were illustrated, but in all cases the fan was preselected. The more general case is the one involving the selection or design of a fan to do a particular job. The

Fig. 14.5.3 Fan characteristics with (a) damper control, (b) variable-inlet vane control, (c) variable speed control.



Fig. 14.5.4 Characteristics of an axial-flow fan with variable-pitch control.

design of a fan from an aerodynamicist’s point of view is beyond the scope of this discussion. Fortunately, most fan problems can be solved by selecting a fan from the many standard lines available commercially. Again, the crux of the matter is to match fan capability with system requirements. Most fan manufacturers have several standard lines of fans. Each line may consist of various sizes all resembling each other. If the blade angles and proportions are the same, the line is said to be homologous. There is only one fan size in a line of homologous fans that will operate at the maximum efficiency point on a given system. If the fan is too big, operation will be at some point to the left of peak efficiency on a standard characteristic plot. If the fan is too small, operation will be to the right of peak. In either case it will be off peak. A slightly undersized fan is often preferred for reasons of cost and stability. Selecting and rating a fan from a catalog is a matter of fan-system matching. When it is recognized that many of the catalog sizes may be able to provide the capacity and pressure, selection becomes a matter of trying various sizes and comparing speed, power, cost, etc. The choice is a matter of evaluation. There are various reasons for using more than one fan on a system. Supply and exhaust fans are used in ventilation to avoid excessive pressure build-up in the space being served. Forced and induced-draft fans are used to maintain a specified draft over the fire. Two fans may fit the available space better than one larger fan. Capacity control by various fan combinations may be more economical than other control methods. Multistage arrangements may be necessary when pressure requirements exceed the capabilities of a single-stage fan. Standby fans are frequently required to ensure continuous operation. When two fans are used, they may be located quite remote from each other or they may be close enough to share shaft and bearings or even casings. Double-width, double-inlet fans are essentially two fans in parallel in a common housing. Multistage blowers are, in effect, two or more fans in series in the same casing. Fans may also be in series but at opposite ends of the system. Parallel-arrangement fans may have almost any amount of their operating resistance in common. At one extreme, the fans may have common inlet and discharge plenums. At the other extreme, the fans may both have considerable individual ductwork of equal or unequal resistance. Fans in series must all handle the same amount of gas by weight measurements, assuming no losses or gains between stages. The combined total pressure will be the sum of the individual fans’ total pressures. The velocity pressure of the combination can be defined as the pressure corresponding to the velocity through the outlet of the last stage. The static pressure for the combination is the difference between its total and velocity pressures and is therefore not equal to the sum of the individual fan static pressures. The volumetric capacities will differ whenever the inlet densities vary from stage to stage. Compression in one stage

will reduce the volume entering the next if there is no reexpansion between the two. As with any fan, the pressure capabilities are also influenced by density. The combined total pressure-capacity characteristic for two fans in series can be drawn by using the volumetric capacities of the first stage for the abscissa and the sum of the appropriate total pressures for the ordinate. Because of compressibility, the volumetric capacities of the second stage will not equal the volumetric capacities of the first stage. The individual total pressures must be chosen accordingly before they are combined. If the gas can be considered incompressible, the pressures for the two stages may be read at the same capacity. In the area near free delivery, it may be necessary to estimate the negative pressure characteristics of one of the fans in order to combine values at the appropriate capacity. Fans in parallel must all develop sufficient pressure to overcome the losses in any individual ductwork, etc., as well as the losses in the common portions of the system. When such fans have no individual ductwork but discharge into a common plenum, their individual velocity pressures are lost and the fans should be selected to produce the same fan static pressures. If fan velocity pressures are equal, the fan total pressures will be equal in such cases. When the fans do have individual ducts but they are of equal resistance and joined together at equal velocities, the fans should be selected for the same fan total pressures. If fan velocity pressures are equal, fan static pressures will be equal in this case. If the two streams join together at unequal velocities, there will be a transfer of momentum from the higher-velocity stream to the lowervelocity stream. The fans serving the lower-velocity branch can be selected for a correspondingly lower total pressure. The other fan must be selected for a correspondingly higher total pressure than if velocities were equal. The combined pressure-capacity curves for two fans in parallel can be plotted by using the appropriate pressures for ordinates and the sum of the corresponding capacities for abscissa. Such curves are meaningful only when a combined-system curve can be drawn. In the area near shutoff, it may be necessary to estimate the negative capacity characteristics of one of the fans in order to combine values at the appropriate pressure. Figure 14.5.5 illustrates the combined characteristics of two fans with slightly different individual characteristics (A-A and B-B). The combined characteristics are shown for the two fans in series (C-C) and in parallel (D-D). Only total-pressure curves are shown. This is always correct for series arrangements but may introduce slight errors for parallel arrangements. An incompressible gas has been assumed. The questionable areas near shutoff or free delivery have been omitted. Two different system characteristics (E-E and F-F ) have been drawn on the

Fig. 14.5.5 Combined characteristics of fans in series and parallel.


chart. With the two fans in series, operation will be at point EC or FC if the fan is on system E or F, respectively. Parallel arrangement will lead to operation at ED or FD. Single-fan operation would be at the point indicated by the intersection of the appropriate fan and system curves provided the effect of an inoperative second fan is negligible. Some sort of bypass is required around an inoperative fan in series whereas an inoperative fan in parallel need only be dampered shut. Parallel operation yields a higher capacity than series operation on system F, but the reverse is true on system E. For the type of fan and system characteristics drawn there is only one possible point of operation for any arrangement. Fan Laws

The fan laws are based on the experimentally demonstrable fact that any two members of a homologous series of fans have performance curves which are homologous. At the same point of rating, i.e., at similarly situated points of operation on their characteristic curves, efficiencies are equal and other variables are interrelated according to the fan laws. If size and speed are considered independent variables and if compressibility effects are ignored, the fan laws can be written as follows:

ηtc = ηtb
Qc = Qb(Dc/Db)³(Nc/Nb)
ptc = ptb(Dc/Db)²(Nc/Nb)²(ρc/ρb)
Hc = Hb(Dc/Db)⁵(Nc/Nb)³(ρc/ρb)
Lwc = Lwb + 70 log (Dc/Db) + 50 log (Nc/Nb) + 20 log (ρc/ρb)

The above laws are useful, but they are dangerous if misapplied. The calculated fan must have the same point of rating as the known fan. When in doubt, it is best to reselect the fan rather than attempt to use the fan laws.
The fan designer utilizes the fan laws in various ways. Some of the more useful relationships in addition to those above derive from considering fan capacity and fan total pressure as the independent variables. This leads to specific diameter, specific speed, and specific sound power level:

Ds = D(pt/ρ)^(1/4)/Q^(1/2)
Ns = NQ^(1/2)/(pt/ρ)^(3/4)
Lws = Lw − 10 log Q − 20 log pt

Ds, Ns, and Lws are the diameter, speed, and sound power level of a homologous fan which will deliver 1 ft³/min at 1 in wg at the same point of rating as Q and pt for D and N. Ds and Ns can be used to advantage in fan selection. They can also be used by a designer to determine how well a line will fit in with other lines. This is illustrated in Fig. 14.5.6. Each segment represents a particular fan line. Note the trends for each kind of fan. Incidentally, some fan engineers utilize different formulas for specific diameter and specific speed

Dse = Dpte^(1/4)/Q^(1/2)        Nse = NQ^(1/2)/pte^(3/4)

where pte is the equivalent total pressure based on standard air. This makes Dse = Ds/1.911 and Nse = 6.978Ns. Specific sound power level is useful in predicting noise levels as well as in comparing fan designs. There appears to be a lower limit of Lws in the vicinity of 45 dB for the more efficient types, ranging to 70 dB or more for cruder designs. Actual sound power levels can be figured from

Lw = Lws + 10 log Q + 20 log pt

Another useful parameter which derives from the fan laws is the orifice ratio

Ro = Q/[D²(pt/ρ)^(1/2)]

This ratio can be plotted on a characteristic curve for a known fan. If the ratio is determined for a calculated homologous fan, the point of rating can be established by inspection. Other ratios can be used in the same manner, including pv/pt, pt/Q², Ds, and Ns.
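As an illustrative sketch only (the fan data are assumed and the helper functions are hypothetical, not part of any rating standard), the following Python lines apply the fan laws and the specific-speed and specific-diameter definitions to rescale a known operating point. Here ρ is treated as a density ratio referred to standard air, an assumption made for simplicity.

def scale_fan(Q_b, pt_b, H_b, size_ratio, speed_ratio, density_ratio=1.0):
    """Fan-law scaling between homologous fans at the same point of rating:
    Q ~ D^3 N,  pt ~ D^2 N^2 rho,  H ~ D^5 N^3 rho."""
    Q_c = Q_b * size_ratio**3 * speed_ratio
    pt_c = pt_b * size_ratio**2 * speed_ratio**2 * density_ratio
    H_c = H_b * size_ratio**5 * speed_ratio**3 * density_ratio
    return Q_c, pt_c, H_c

def specific_speed(N, Q, pt, rho=1.0):
    """Ns = N*Q**0.5/(pt/rho)**0.75 with rho taken as the ratio to standard air."""
    return N * Q**0.5 / (pt / rho)**0.75

def specific_diameter(D, Q, pt, rho=1.0):
    """Ds = D*(pt/rho)**0.25/Q**0.5."""
    return D * (pt / rho)**0.25 / Q**0.5

# Assumed known fan: 30,000 ft3/min at 6 in wg absorbing 40 hp, D = 3 ft, N = 1,150 r/min
Q2, pt2, H2 = scale_fan(30_000, 6.0, 40.0, size_ratio=40/36, speed_ratio=1050/1150)
print(round(Q2), round(pt2, 2), round(H2, 1))
print(round(specific_speed(1150, 30_000, 6.0)), round(specific_diameter(3.0, 30_000, 6.0), 3))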

Fig. 14.5.6 Specific diameter and efficiency versus specific speed for single-inlet fan types.

Stability Considerations

The flow through a system and its fan will normally be steady. If the fluctuations occasioned by a temporary disturbance are quickly damped out, the fan system may be described as having a stable operating characteristic. If the unsteady flow continues after the disturbance is removed, the operating characteristic is unstable. To ensure stable operation the slopes of the pressure-capacity curves for the fan and system should be of opposite sign. Almost all systems have a positive slope; i.e., the pressure requirement or resistance increases with capacity. Therefore, for stable operation the fan curve should have a negative slope. Such is the case at or above the design capacity. When the slopes of the fan and system characteristics are of opposite sign, any system disturbance tending to produce a temporary decrease in flow is nullified by the increase in fan pressure. When the slopes are of the same sign, any tendency to decrease flow is strengthened by the resulting decrease in fan pressure. When fan and system curves coincide over a range of capacities, the operating characteristics are extremely unstable. Even if the curves exactly coincide at only one point, the flow may vary over a considerable range. There may or may not be any obvious indication of unstable operation. The pressure and power fluctuations that accompany unsteady flow may be so small and rapid that they cannot be detected by any but the most sensitive instruments. Less rapid fluctuations may be detected on the ordinary instruments used in fan testing. The changes in noise which occur with each change in flow rate are easily detected by ear as individual beats if the beat frequency is below about 10 Hz. In any event, the overall noise level will be higher with unsteady flow than with steady flow. The conditions which accompany unsteady flow are variously described as pulsations, hunting, surging, or pumping. Since these conditions occur only when the operating point is to the left of maximum pressure on the fan curve, this peak is frequently referred to as the surge point or pumping limit. Pulsation can be prevented by rating the fan to the right of the surge point. Fans are usually selected on this basis, but it is sometimes necessary to control the volume delivered to the value below that at the surge


point. This may lead to pulsation, particularly if the fan pressure exceeds 10 in wg. If the required capacity is less than that at the pumping limit, pulsation can be prevented in various ways, all of which in effect provide a negatively sloping fan curve at the actual operating point. To accomplish this effect the required pressure must be less than the fan capabilities at the required capacity. One method is to bleed sufficient air for actual operation to be beyond the pumping limit. Other possible methods are the use of pitch, speed, or vane control for volume reduction. In any of these cases, the point of operation on the new fan curve must be to the right of the new surge point. Although in the section on Capacity Control dampers were considered a part of the system, they may also be considered a part of the fan if located in the right position. Accordingly, pulsations may be eliminated in a supply system if the damper is on the inlet of the blower. Similarly, dampering at the outlet of the exhauster may control pulsation in exhaust systems. Another condition frequently referred to as instability is associated with flow separation in the blade passages of an impeller and is evidenced by slight discontinuities in the performance curve. There may be a small range of capacities at which two distinctly different pressures may be developed depending on which of the two flow patterns exists. Such a condition usually occurs at capacities just to the left of peak efficiency. Still another condition of unsteady flow may develop at extremely low capacities. This is known as blowback or puffing because air puffs in and out of a portion of the inlet. Operation in the blowback range should be avoided, particularly with high-energy fans. A different type of unsteady flow may occur when two or more fans are used in parallel. If the individual fan characteristics exhibit a dip in pressure between shutoff and design, the combined characteristic will contain points where the point of operation of the individual fans may be widely separated even for identical fans. If the system characteristic intersects the combined-fan characteristic at such a point, the individual fans may suddenly exchange loads. That is, the fan operating at high capacity may become the one operating at low capacity and vice versa. This can produce undesirable shocks on motors and ducts. Careful matching of fan to system is required for either forward-curved centrifugals or most axials for this reason. Fan Applications

The selection of a particular size and type of fan for a particular application involves considerations of aerodynamic, economic, and functional suitability. Many of the factors involved in aerodynamic suitability have been discussed above. Determination of economic suitability requires an evaluation of first cost and operating costs. The functional suitability of various types of fans with respect to certain applications is discussed below. Heating, ventilating, and air-conditioning systems may require supply and exhaust or return air fans. Historically, high-efficiency centrifugal fans, using either backward-curved or airfoil blades have been used for supply on duct systems. In low-pressure applications these types can be used without sound treatment. In high-pressure applications, sound treatment is almost always required. Axials have long been used for shipboard ventilation because they generally can be made smaller than centrifugals. Both adjustable axials and tubular centrifugals have proved popular on duct systems for exhaust service in building ventilation. Centrifugals with variable inlet vanes and axials with pitch control are being used for supply in variable-air-volume systems. Propeller fans and power roof ventilators are used for either supply or exhaust systems when there is little or no ductwork. Heating, ventilating, and air-conditioning applications are considered clean-air service. Various classes of construction are available in standard lines for different pressure ranges. Both direct and indirect drive are used, the latter being most common. Industrial exhaust systems generally require fans that are less susceptible to the unbalance that may result from dirty-gas applications than the clean-air fans used for heating, ventilating, and air conditioning. Simple, rugged, industrial exhausters are favored for applications up to 200 hp. They have a few radial blades and relatively low efficiency. Most

are V-belt-driven. Extra-heavy construction may be required where significant material passes through the fan. Process air requirements can be met with either centrifugal or axial fans; the latter may be used in single or double stage. The higher pressure ratings are usually provided by a single-stage centrifugal fan with radial blades, known as a pressure blower. These units are generally direct connected to the driver. They not only compete with centrifugal compressors but resemble them. Large industrial process and pollution-control systems involving more than about 200 hp are generally satisfied with a somewhat more sophisticated fan than an industrial exhauster or pressure blower. Centrifugal fans with radial-tip blades are frequently used on the more severe service. For the less severe requirements, backward-curved or airfoil blades may be used. Rugged fixed-pitch axials have also been used. Industrial fans are usually equipped with inlet boxes and independently mounted bearings and are usually direct-driven. Journal bearings are usually preferred. Inlet-box damper control can be used to approximate the power saving available from variable-inlet vane control. Variable-speed hydraulic couplings may be economically justified in some cases. Special methods or special construction may be required to provide protection against corrosion or erosion. These fans tend to take on the name of the application such as sintering fan, scrubber exhaust fan, etc. Mechanical draft systems may utilize any of the fan types described in connection with the above applications. Ventilating fans, industrial exhausters, and pressure blowers have been used for forced draft, induced draft, and primary air service on small steam-generating units. The large generating units are generally equipped with the most efficient fans available consistent with the erosion-corrosion potential of the gas being handled. Both centrifugals and axials are used for forced and induced draft. Axials have predominated in Europe, and there is a growing trend throughout the rest of the world toward axials. Centrifugals have been used almost exclusively in the United States until very recently. Forced-draft centrifugals invariably have airfoil blade impellers. Induced-draft centrifugals may have airfoil-blade impellers, but for scrubber exhaust, radial-tip blades are more common. This is due, in part, to the high pressures required for scrubber operation and in part to the erosion-corrosion potential downstream of a scrubber. Gas recirculating fans are usually of the radial-tip design. Forced-draft control may be by variable speed but is more likely to be variable vanes on centrifugal fans. Variable inlet vanes or inlet-box dampers may be used to control induced-draft fans. All large centrifugals are direct-connected and have independent pedestal-mounted bearings. Journal bearings are almost always used. Forced-draft axials are likely to be of the variable-pitch full-airfoilsection design. Hydraulic systems are almost always used for pitch control, but pneumatic and mechanical systems have been tried. Bearings are usually of the antifriction type. Fixed-pitch axials are usually used for induced-draft duty. Control is by variable-inlet vanes. Either journal or antifriction bearings may be used. Other Systems

Fans are incorporated in many different kinds of machines. Electronic equipment may require cooling fans to prevent hot spots. Driers use fans to circulate air to carry heat to, and moisture away from, the product. Air-support structures require fans to inflate them and maintain the supporting pressure. Ground-effect machines use fans to provide the lift pressure. Air conditioners and other heat exchangers incorporate fans. Aerodynamic, economic, and functional considerations will dictate the type and size of fan to be used. Tunnel ventilation can be achieved by using either axial or centrifugal fans. Transverse ventilation utilizes supply and exhaust fans connected to the tunnel by ductwork. These fans are rated in the manner described above. Another method utilizes specially tested and rated axial fans called jet fans. The fans, which are placed in the tunnel, increase the momentum of some of the air flowing with the traffic. This air induces additional flow.

Section

15

Electrical and Electronics Engineering BY

C. JAMES ERICKSON Retired Principal Consultant, E. I. du Pont de Nemours & Co., Inc. NICHOLAS R. RAFFERTY Retired Technical Associate, E. I. du Pont de Nemours & Co., Inc. BYRON M. JONES Consulting Engineer, Assistant Professor of Electrical Engineering,

University of Wisconsin—Platteville TIMOTHY M. COCKERILL Senior Project Manager, University of Illinois

15.1 ELECTRICAL ENGINEERING bY C. James Erickson revised by Nicholas R. Rafferty Electrical and Magnetic Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-2 Conductors and Resistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-4 Electrical Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-6 Magnetism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-8 Batteries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-11 Dielectric Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-15 Transients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-16 Alternating Currents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-17 Electrical Instruments and Measurements . . . . . . . . . . . . . . . . . . . . . . . . . 15-20 DC Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-25 DC Motors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-27 Synchronous Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-30 Induction Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-34 Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-34 Transformers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-34 AC Motors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-36 AC-DC Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-42 Synchronous Converters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-42 Rating of Electrical Motors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-43 Electric Drives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-45 Switchboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-46 Power Transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-48 Power Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-52 Wiring Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-55

Interior Wiring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-56 Resistor Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-62 Magnets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-62 Automobile Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-66

15.2 ELECTRONICS by Byron M. Jones revised by Timothy M. Cockerill Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-68 Discrete-Component Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-70 Integrated Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-75 Linear Integrated Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-75 Digital Integrated Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-79 Computer Integrated Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-82 Computer Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-83 Digital Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-84 Power Electronics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-85 Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-86 Telephone Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-89 Wireless Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-90 Display Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15–91 Global Positioning Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-91


15.1 ELECTRICAL ENGINEERING by C. James Erickson revised by Nicholas R. Rafferty REFERENCES: Knowlton, “Standard Handbook for Electrical Engineers,” McGrawHill. Pender and Del Mar, “Electrical Engineers’ Handbook,” Wiley. Dawes, “Course in Electrical Engineering,” Vols. I and II, McGraw-Hill. Gray, “Principles and Practice of Electrical Engineering,” McGraw-Hill. Laws, “Electrical Measurements,” McGraw-Hill. Karapetoff-Dennison, “Experimental Electrical Engineering and Manual for Electrical Testing,” Wiley. Langsdorf, “Principles of Direct-current Machines,” McGraw-Hill. Hehre and Harness, “Electric Circuits and Machinery,” Vols. I and II, Wiley. Timbie-Higbie, “Alternating Current Electricity and Its Application to Industry,” Wiley. Lawrence, “Principles of Alternating-current Machinery,” McGraw-Hill. Puchstein and Lloyd, “Alternating-current Machinery,” Wiley. Lovell, “Generating Stations,” McGrawHill. Underhill, “Coils and Magnet Wire” and “Magnets,” McGraw-Hill. Abbott, “National Electrical Code Handbook,” McGraw-Hill. Dyke, “Automobile and Gasoline Engine Encyclopedia,” The Goodheart-Wilcox Co., Inc. Fink and Beaty, “Standard Handbook for Electrical Engineers,” McGraw-Hill.

ELECTRICAL AND MAGNETIC UNITS System of Units The International System of Units (SI) is being adopted universally. The SI system has its roots in the metre, kilogram, second (mks) system of units. Since a centimetre, gram, second (cgs) system has been widely used, and may still be used in some instances, Tables 15.1.1 and 15.1.2 are provided for conversion between the two systems. Basic SI units have been adopted by the General Conference on Weights and Measures (CGPM) as base quantities, that is, quantities that are not derived from other quantities. The base units are metre, kilogram (mass), second, ampere, kelvin (thermodynamic temperature), and candela (luminous intensity). Other SI units are derived from these basic units. Electrical Units (See Table 15.1.1.)

Current (I, i) The SI unit of current is the ampere, which is equal to one-tenth the absolute unit of current (abampere). The abampere of current is defined as follows: if 0.01 metre (1 centimetre) of a circuit is bent into an arc of 0.01 metre (1 centimetre) radius, the current is 1 abampere if the magnetic field intensity at the center is 0.01257 ampere per metre (1 oersted), provided the remainder of the circuit produces no magnetic effect at the center of the arc. One international ampere (9.99835 amperes) (dc) will deposit 0.001118 gram per second of silver from a standard silver solution. Quantity (Q) The coulomb is the quantity of electricity transported in one second by a current of one ampere. Potential Difference or Electromotive Force (V, E, emf) The volt is the difference of electric potential between two points of a conductor carrying a constant current of one ampere, when the power dissipated between these points is equal to one watt. Resistance (R, r) The ohm is the electrical resistance between two points of a conductor when a constant difference of potential of one volt, applied between these two points, produces in this conductor a current of one ampere, this conductor not being the source of any electromotive force. Resistivity (r) The resistivity of a material is the dc resistance between the opposite parallel faces of a portion of the material having unit length and unit cross section. Conductance (G, g) The siemens is the electrical conductance of a conductor in which a current of one ampere is produced by an electric potential difference of one volt. One siemens is the reciprocal of one ohm. Conductivity (g) The conductivity of a material is the dc conductance between the opposite parallel faces of a portion of the material having unit length and unit cross section. 15-2

Capacitance (C) is that property of a system of conductors and dielectrics which permits the storage of electricity when potential difference exists between the conductors. Its value is expressed as a ratio of a quantity of electricity to a potential difference. A capacitance value is always positive. The farad is the capacitance of a capacitor between the plates of which there appears a difference of potential of one volt when it is charged by a quantity of electricity equal to one coulomb. Permittivity or dielectric constant (e0) is the electrostatic energy stored per unit volume of a vacuum for unit potential gradient. The permittivity of a vacuum or free space is 8.85  1012 farads per metre. Relative permittivity or dielectric constant (er) is the ratio of electrostatic energy stored per unit volume of a dielectric for a unit potential gradient to the permittivity (e0) of a vacuum. The relative permittivity is a number. Self-inductance (L) is the property of an electric circuit which determines, for a given rate of change of current in the circuit, the emf induced in the same circuit. Thus e1  Ldi1/dt, where e1 and i1 are in the same circuit and L is the coefficient of self-inductance. The henry is the inductance of a closed circuit in which an electromotive force of one volt is produced when the electric current varies uniformly at a rate of one ampere per second. Mutual inductance (M) is the common property of two associated electric circuits which determines, for a given rate of change of current in one of the circuits, the emf induced in the other. Thus e1  Mdi2/dt and e2  Mdi1/dt, where e1 and i1 are in circuit 1; e2 and i2 are in circuit 2; and M is the mutual inductance. The henry is the mutual inductance of two separate circuits in which an electromotive force of one volt is produced in one circuit when the electric current in the other circuit varies uniformly at a rate of one ampere per second. If M is the mutual inductance of two circuits and k is the coefficient of coupling, i.e., the proportion of flux produced by one circuit which links the other, then M  k(L1L2)1/2, where L1 and L2 are the respective self-inductances of the two circuits. Energy (J) in a system is measured by the amount of work which a system is capable of doing. The joule is the work done when the point of application of a force of one newton is displaced a distance of one metre in the direction of the force. Power (W) is the time rate of transferring or transforming energy. The watt is the power required to do work at the rate of one joule per second. Active power (W) at the points of entry of a single-phase, two-wire circuit or of a polyphase circuit is the time average of the values of the instantaneous power at the points of entry, the average being taken over a complete cycle of the alternating current. The value of active power is given in watts when the rms currents are in amperes and the rms potential differences are in volts. For sinusoidal emf and current, W  EI cos u, where E and I are the rms values of volts and currents, and u is the phase difference of E and I. Reactive power (Q) at the points of entry of a single-phase, two-wire circuit, or for the special case of a sinusoidal current and sinusoidal potential difference of the same frequency, is equal to the product obtained by multiplying the rms value of the current by the rms value of the potential difference and by the sine of the angular phase difference by which the current leads or lags the potential difference. Q  EI sin u. 
The unit of Q is the var (volt-ampere-reactive). One kilovar  103 var. Apparent power (VA) at the points of entry of a single-phase, two-wire circuit is equal to the product of the rms current in one conductor multiplied by the rms potential difference between the two points of entry. Apparent power  VA.
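As an illustrative sketch (not from the handbook), the following Python fragment evaluates these power quantities for an assumed sinusoidal case; the numerical values chosen (120 V, 10 A, 30 degree phase lag) are hypothetical.

import math

# Assumed rms values for a single-phase sinusoidal circuit (hypothetical numbers)
E = 120.0                      # rms potential difference, V
I = 10.0                       # rms current, A
theta = math.radians(30.0)     # phase angle between current and voltage

W = E * I * math.cos(theta)    # active power, W
Q = E * I * math.sin(theta)    # reactive power, var
VA = E * I                     # apparent power, VA
pf = W / VA                    # power factor = cos(theta)

print(f"W = {W:.1f} W, Q = {Q:.1f} var, VA = {VA:.1f} VA, pf = {pf:.3f}")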

Table 15.1.1   Electrical Units

Quantity

Symbol

Current Quantity Electromotive force Resistance Resistivity Conductance Conductivity Capacitance Permittivity Relative permittivity Self-inductance Mutual inductance Energy

I, i Q, q E, e R, r r G, g g C e er L M J kWh W jQ VA pf XL XC Z G B Y f T T v

Active power Reactive power Apparent power Power factor Reactance, inductive Reactance, capacitive Impedance Conductance Susceptance Admittance Frequency Period Time constant Angular velocity

Equation I  E/R; I  E/Z; I  Q/t Q  it; Q  CE E  IR; E  W/Q R  E/I; R  rl/A r  RA/l G  gA/l; G  A/rl g  1/r; g = l/RA C  Q/E er  e/e0 L  N(df/dt) M  K(L1L2)1/2 J  eit kWh  kW/3,600; 3.6 MJ W  J/t; W  EI cos u Q  EI sin u VA  EI pf  W/VA; pf  W/(W  jQ) XL  2pfL XC  1/(2pfC) Z  E/I; Z  R  j(XL  XC) G  R/Z 2 B  X/Z 2 Y  I/E; Y  G  jB f  1/T T  1/f L/R; RC v  2pf

SI unit

SI unit symbol

Ampere Coulomb Volt Ohm Ohm-metre Siemens Siemens/metre Farad Farads/metre Numerical Henry Henry Joule Kilowatthour Watt Var Volt-ampere

A C V m S S/m F F/m H H J kWh W var VA

Ohm Ohm Ohm Siemens Siemens Siemens Hertz Second Second Radians/second

S S S Hz s s rad/s

CGS unit Abampere Abcoulomb Abvolt Abohm Abohm-cm Abmho Abmho/cm Abfarad* Stat farad*/cm Numerical Abhenry Abhenry Erg Abwatt Abvar

Abohm Abohm Abohm Abmho Abmho Abmho Cps, Hz Second Second Radians/second

Ratio of magnitude of SI to cgs unit 101 101 108 109 1011 109 1011 109 8.85  1012 1 109 109 107 36  1012 107 107 1 109 109 109 109 109 109 1 1 1 1

* 1 Abfarad (EMU units)  9  1020 stat farads (ESU units).

The susceptance (B) of a portion of a circuit for a sinusoidal current and potential difference of the same frequency is the product of the sine of the angular phase difference between the current and the potential difference times the ratio of the rms current to the rms potential difference, there being no source of power in the portion of the circuit under consideration. B  (I/E) sin u. Susceptance is the imaginary part of admittance.

Power factor (pf) is the ratio of power to apparent power. pf  W/VA  cos u, where u is the phase difference between E and I, both assumed to be sinusoidal. The reactance (X) of a portion of a circuit for a sinusoidal current and potential difference of the same frequency is the product of the sine of the angular phase difference between the current and potential difference times the ratio of the rms potential difference to the rms current, there being no source of power in the portion of the circuit under consideration. X  (E/I) sin u  2p fL ohms, where f is the frequency, and L the inductance in henries; or X  1/2pfC ohms, where C is the capacitance in farads. The impedance (Z) of a portion of an electric circuit to a completely specified periodic current and potential difference is the ratio of the rms value of the potential difference between the terminals to the rms value of the current, there being no source of power in the portion under consideration. Z  E/I ohms. Admittance (Y) is the reciprocal of impedance. Y  I/E siemens. Conductance is the real part of admittance.


Magnetic Units (See Table 15.1.2.)

Magnetic flux (,, f) is the magnetic flow that exists in any magnetic circuit. The weber is the magnetic flux which, linking a circuit of one turn, produces in it an electromotive force of one volt as it is reduced to zero at a uniform rate in one second. Magnetic flux density (b) is the ratio of the flux in any cross section to the area of that cross section, the cross section being taken normal to the direction of flux.

Table 15.1.2   Magnetic Units

Quantity

Symbol

Equation*

SI unit

Magnetic flux Magnetic flux density Pole strength

,, f b Qm

f  F/R b  f/A Qm  F/b; Qm  Fl/NIm0mr

Magnetomotive force Magnetic field intensity Permeability air Relative permeability Reluctivity Permeance Reluctance

^ H m0 mr g P R

^  NI H  ^/l m0  b/H mr  m/m0 g  1/mr P  m0mr A/l R  l/m0mrA

Weber Tesla Ampere-turns-metre Unit pole Ampere-turns Ampere-turns per metre Henry per metre Numeric Numeric Henry 1/Henry

* l  length in metres; A  area in square metres; F  force in newtons; N  number of turns.

SI unit symbol wb T Am A A/m H/m

H 1/H

CGS unit Maxwell Gauss Unit pole Gilbert Oersted Gilbert per oersted Numeric Numeric

Ratio of magnitude of SI to cgs unit 108 104 0.7958  107 1.257 0.01257 1.257  106 1 1 7.96  107 1.257  108


The tesla is the magnetic flux density given by a magnetic flux of one weber per square metre. Unit magnetic pole, when concentrated at a point and placed one metre apart in a vacuum from a second unit magnetic pole, will repel or attract the second unit pole with a force of one newton. The weber is the magnetic flux produced by a unit pole. Magnetomotive force (^, mmf) produces magnetic flux and corresponds to electromotive force in an electric circuit. The ampere (turn) is the unit of mmf. Magnetic field intensity (H) at a point is the vector quantity which is measured by a mechanical force which is exerted on a unit pole placed at the point in a vacuum. An ampere per metre is the unit of field intensity. Permeability (m) is the ratio of unit magnetic flux density to unit magnetic field intensity in air (B/H). The permeability of air is 1.257  106 henry per metre. Relative permeability (mr) is the ratio of the magnetic flux in any ele-

ment of a medium to the flux that would exist if that element were replaced with air, the magnetomotive force (mmf) acting on the element remaining unchanged (mr  m/m0). The relative permeability is a number. Permeance (P) of a portion of a magnetic circuit bounded by two equipotential surfaces, and by a third surface at every point of which there is a tangent having the direction of the magnetic induction, is the ratio of the flux through any cross section to the magnetic potential difference between the surfaces when taken within the portion under consideration. The equation for the permeance of the medium as defined above is P  m0mrA/l. Permeance is the reciprocal of reluctance. Reluctivity (g) of a medium is the reciprocal of its permeability. Reluctance (R) is the reciprocal of permeance. It is the resistance to magnetic flow. In a homogeneous medium of uniform cross section, reluctance is equal to the length divided by the product of the area and permeability, the length and area being expressed in metre units. R  l/Am0mr, where m0  1.257  106. CONDUCTORS AND RESISTANCE

where R0 is the resistance at 20C and a is the temperature coefficient of

resistance. For copper, a  0.00393.

With any initial temperature t1, the resistance at temperature t°C is

R = R1[1 + a1(t − t1)]    (15.1.4)

where R1 is the resistance at temperature t1°C and a1 is the temperature coefficient of resistance at temperature t1 [see Eq. (15.1.5)]. For any initial temperature t1 the value of a1 is

a1 = 1/(234.5 + t1)    (15.1.5)

Inferred Absolute Zero Between 100 and 0°C the resistance of copper decreases at a rate which is practically uniform and which if continued would give a resistance of zero at −234.5°C (an easy number to remember). If the resistance at t1°C is R1 and the resistance at t2°C is R2, then

R2/R1 = (234.5 + t2)/(234.5 + t1)    (15.1.6)

EXAMPLE. The resistance of a copper coil at 25°C is 4.26 Ω. Determine its resistance at 45°C. Using Eq. (15.1.4) and a1 = 1/(234.5 + 25) = 0.00385, R = 4.26[1 + 0.00385(45 − 25)] = 4.59 Ω. Using Eq. (15.1.6), R = 4.26(234.5 + 45)/(234.5 + 25) = 4.26 × 1.077 = 4.59 Ω.
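A small Python sketch (illustrative only; it simply restates Eqs. (15.1.4) to (15.1.6) using copper's inferred absolute zero of −234.5°C) reproduces the example above.

def copper_resistance(R1, t1, t2, T0=234.5):
    """Resistance of a copper conductor at t2, given R1 at t1 (temperatures in deg C).

    Uses the inferred-absolute-zero form, Eq. (15.1.6): R2/R1 = (T0 + t2)/(T0 + t1).
    For aluminum the constant T0 would be 228 instead of 234.5.
    """
    return R1 * (T0 + t2) / (T0 + t1)

# Example from the text: a coil of 4.26 ohms at 25 deg C, heated to 45 deg C
print(round(copper_resistance(4.26, 25.0, 45.0), 2))   # -> 4.59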

The inferred absolute zero for aluminum is 228C. Materials The materials generally used for the transmission and distribution of electrical energy are copper, aluminum, and sometimes iron and steel. For resistors and heaters, iron, steel, commercial alloys, and carbon are most used. Copper is the most widely used electrical conductor. It has high conductivity, relatively low cost, good resistance to oxidation, is readily soldered, and has good mechanical characteristics such as tensile strength, toughness, and ductility. Its tensile strength together with its low linear temperature coefficient of expansion are desirable characteristics in its use for overhead transmission lines. The international copper standard for 100 percent conductivity annealed copper is a density of 8.89 g/cm3 (0.321 lb/in3) and resistivity is given in Table 15.1.3. ASTM specifications for minimum conductivities of copper wire are as follows:

Resistivity, or specific resistance, is the resistance of a sample of the material

having both a length and cross section of unity. The two most common resistivity samples are the centimetre cube and the cir milft. If l is the length of a conductor of uniform cross section a, then its resistance is R 5 rl/a

(15.1.1)

where r is the resistivity. With a cir milft r is the resistance of a cir milft and a is the cross section, cir mils. Since v  la is the volume of a conductor, R 5 rl2/v 5 rv/a2

(15.1.2)

A circular mil is a unit of area equal to that of a circle whose diameter is 1 mil (0.001 in). It is the unit of area which is used almost entirely in this country for wires and cables. To obtain the cir mils of a solid cylindrical conductor, square its diameter expressed in mils. For example, the diameter of 000 AWG solid copper wire is 410 mils and its cross section is (410)2  168,100 cir mils. The diameter in mils of a solid cylindrical conductor is the square root of its cross section expressed in cir mils. A cir mil  ft is a conductor having a length of 1 ft and a uniform cross section of 1 cir mil. In terms of the copper standard the resistance of a cir milft of copper at 20C is 10.371 . As a first approximation 10 may frequently be used. At 60C a cir milin of copper has a resistance of 1.0 . This is a very convenient unit of resistivity for magnet coils since the resistance is merely the length of copper in inches divided by its cross section in cir mils. Temperature Coefficient of Resistance The resistance of the pure metals increases with temperature. The resistance at any temperature tC is R 5 R0 s1 1 atd

(15.1.3)

Conductor diam, in

Soft or annealed

Medium hard drawn

Hard drawn

0.040–0.324 0.325–0.460

98.16% 98.16%

96.60% 97.66%

96.16% 97.16%

Aluminum is used to considerable extent for high-voltage transmission lines, because its weight is one-half that of copper for the same conductance. Moreover, the greater diameter reduces corona loss. As it has 1.4 times the linear temperature coefficient of expansion, changes in sag with temperature are greater. Because of its lower melting point, spans may fail more readily with arc-overs. In aluminum cable steelreinforced (ACSR), the center strand is a steel cable, which gives added tensile strength. Aluminum is used occasionally for bus bars because of its large heat-dissipating surface for a given conductance. The greater cross section for a given conductance requires a greater volume of insulation for a given voltage. When the ratio of the cost of aluminum to the cost of copper becomes economically favorable, aluminum is often used for insulated wires and cables. The international aluminum standard for 62 percent conductivity aluminum is a density of 2.70 g/cm3 (0.0976 lb/in3) and resistivity as given in Table 15.1.3. Steel, either galvanized or copper-covered (“copperweld”), is used for high-voltage transmission spans where tensile strength is more important than high conductance. Steel is also used for third rails. Copper alloys and bronzes are of increasing importance as electrical conductors. They have lower electrical conductivity but greater tensile strength and are resistant to corrosion. Hitenso, Calsum bronzes, Signal bronze, Phono-electric, and Everdur are bronzes containing phosphorus, silicon, manganese, or zinc. Their conductivities vary from 20 to 85


Table 15.1.3 Properties of Metals and Alloys (See Table 15.1.27 for properties of resistor alloys) Resistivity, 20C Metals

m  cm

 cir mil/ft

Aluminum Antimony Bismuth Brass Carbon: amorphous Retort (graphite) Copper (drawn) Gold Iron: electrolytic Cast Wire Lead Molybdenum Monel metal Mercury Nickel Platinum Platinum silver, 2Ag  1Pt Silver Steel: soft Glass hard Silicon (4 percent) Transformer Trolley wire Tin Tungsten Zinc

2.828 42.1 111.0 6.21 3,800–4,100 720–812* 1.724 2.44 10.1 75.2–98.8 97.8 22.0 5.78 43.5 96.8 8.54 10.72 24.6† 1.628 15.9 45.7 51.18 11.09 12.7 11.63 5.51 5.97

17.01 251.0 668.0 37.0 — — 10.37 14.7 59.9 448–588 588 132 34.8 262 576 50.8 63.8 148.0 9.8 95.8 275 308 66.7 76.4 70 33.2 35.58

Temperature coefficient of resistance at 20C 0.00403 0.0036 0.004 0.0015 () () 0.00393 0.0034 0.0064

0.00387 0.0019 0.00089 0.0041 0.003 0.00031 0.0038 0.0016

0.0042 0.005 0.0037

NOTE: Max working temperature: Cu, 260C; Ni, 600C; Pt, 1,500C. * Furnace electrodes, 3,000C. † 0C.

percent of 100 percent conductivity copper, and they have tensile strengths up to 130,000 lb/in2, about twice that of hard-drawn copper. Such alloys were frequently used for trolley wires. Copper alloys having lower conductivity are usually classified as resistor materials. In Table 15.1.3 are given the electrical properties of some of the pure metals and alloys. American Wire Gage (AWG) The AWG (formerly Brown & Sharpe gage) is based on a constant ratio between diameters of successive gage numbers (Table 15.1.4). The ratio of any diameter to the next smaller is 1.123, and the corresponding ratio of cross sections is (1.123)2  1.261, or

l1⁄4 approximately. (1.123)6 is 2.0050, so that diameters differing by 6 gage numbers have a ratio of approximately 2; cross sections differing by 3 gage numbers also have a ratio of approximately 2. The ratio of cross sections differing by 2 numbers is (1.261)2  1.590, or 1.6 approximately. The ratio of cross sections differing by 10 numbers is approximately 10. The gage ordinarily extends from no. 40 to 0000 (4/0). Wires larger than 0000 must be stranded, and their cross section is given in cir mils. The diameter of no. 10 wire is 102.0 mils. As an approximation this may be considered as being 100 mils; the cross section is 10,000 cir mils; the resistance is 1 per 1,000 ft; and the weight of 1,000 ft is

Table 15.1.4 Working Table, Standard Annealed Copper Wire, Solid [American wire gage (B & S)]

Gage no. 0000 000 00 0 1 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30

Diameter at 20C mils mm 460 410 365 325 289 258 204 162 129 102 80 64 51 40 32 25 20 16 13 10

11.68 10.40 9.27 8.25 7.35 6.54 5.19 4.12 3.26 2.59 2.05 1.63 1.29 1.02 0.81 0.64 0.51 0.40 0.32 0.25

mil2

Cross section at 20C cir mils

mm2

166,200 131,800 104,500 82,291 65,730 52,120 32,780 20,610 12,970 8,155 5,130 3,230 2,030 1,280 804 503 317 199 125 78.5

211,600 167,800 133,100 105,600 83,690 66,360 41,740 26,240 16,510 10,380 6,530 4,110 2,580 1,620 1,020 640 404 253 159 100

107.2 85.01 67.43 53.49 42.41 33.62 21.15 13.3 8.367 5.261 3.31 2.08 1.31 0.823 0.519 0.324 0.205 0.128 0.0804 0.0507

Ohm per 1,000 ft 25C 65C 0.05 0.063 0.0795 0.1 0.126 0.159 0.253 0.403 0.641 1.02 1.62 2.58 4.09 6.51 10.4 16.5 26.2 41.6 66.2 105

0.0577 0.0727 0.0917 0.116 0.146 0.184 0.292 0.465 0.739 1.18 1.87 2.97 4.73 7.51 11.9 19 30.2 48 76.4 121

Ohm/mi at 25C 0.264 0.333 0.42 0.528 0.665 0.839 1.335 2.13 3.38 5.38 8.55 13.62 21.6 34.4 54.9 87.1 138.3 220 350 554

Weight per 1,000 ft, lb 641 508 403 319 253 201 126 79.5 50 31.4 19.8 12.4 7.82 4.92 3.09 1.94 1.22 0.769 0.484 0.304
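The rules of thumb in the preceding paragraph lend themselves to a quick estimating helper. The Python sketch below is illustrative only and uses approximate anchor values (no. 10 wire taken as 102 mils and about 1 Ω per 1,000 ft); the standard figures are those of Table 15.1.4.

def awg_properties(gauge):
    """Approximate size and resistance of solid copper wire from its AWG number.

    Illustrative only: uses the gage's constant diameter ratio (1.123 per number)
    anchored on no. 10 wire (about 102 mils, about 10,380 cir mils, about 1 ohm
    per 1,000 ft). Table 15.1.4 gives the standard values.
    """
    diam_mils = 102.0 * (1.123 ** (10 - gauge))   # diameter roughly doubles every 6 numbers
    cir_mils = diam_mils ** 2                     # cross section of a solid round wire, cir mils
    ohms_per_1000ft = 10.37 * 1000 / cir_mils     # R = rho*l/a with rho = 10.37 ohm-cir mil/ft at 20 deg C
    return diam_mils, cir_mils, ohms_per_1000ft

for g in (4, 10, 16):
    d, a, r = awg_properties(g)
    print(f"AWG {g}: {d:.0f} mils, {a:.0f} cir mils, {r:.3f} ohm per 1,000 ft")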

Table 15.1.5   Bare Concentric Lay Cables of Standard Annealed Copper per 1,000 ft

Standard concentric stranding

cir mils

25C (77F)

65C (149F)

Weight per 1,000 ft, lb

No. of wires

Diam of wires, mils

Outside diam, mils

2,000,000 1,700,000 1,500,000 1,200,000 1,000,000

0.00539 0.00634 0.00719 0.00899 0.0108

0.00622 0.00732 0.00830 0.0104 0.0124

6,180 5,250 4,630 3,710 3,090

127 127 91 91 61

125.5 115.7 128.4 114.8 128.0

1,631 1,504 1,412 1,263 1,152

900,000 850,000 750,000 650,000 600,000

0.0120 0.0127 0.0144 0.0166 0.0180

0.0138 0.0146 0.0166 0.0192 0.0207

2,780 2,620 2,320 2,010 1,850

61 61 61 61 61

121.5 118.0 110.9 103.2 99.2

1,093 1,062 998 929 893

550,000 500,000 450,000 400,000

0.0196 0.0216 0.0240 0.0270

0.0226 0.0249 0.0277 0.0311

1,700 1,540 1,390 1,240

61 37 37 37

95.0 116.2 110.3 104.0

855 814 772 728

0000 000

350,000 300,000 250,000 212,000 168,000

0.0308 0.0360 0.0431 0.0509 0.0642

0.0356 0.0415 0.0498 0.0587 0.0741

1,080 926 772 653 518

37 37 37 19 19

97.3 90.0 82.2 105.5 94.0

681 630 575 528 470

00 0 1 2 3

133,000 106,000 83,700 66,400 52,600

0.0811 0.102 0.129 0.162 0.205

0.0936 0.117 0.149 0.187 0.237

411 326 258 205 163

19 19 19 7 7

83.7 74.5 66.4 97.4 86.7

418 373 332 292 260

4

41,700

0.259

0.299

129

7

77.2

232

AWG no.

NOTE: See Table 15.1.21 for the carrying capacity of wires. SOURCE: From NBS Cir. 31.

31.4(10p) lb. Also the weight of 1,000 ft of no. 2 is 200 lb. These facts give many short cuts in estimating resistances and weights of various gage numbers. Lay Cables In order to obtain sufficient flexibility, wires larger than 0000 are stranded, and they are designated by their circular mils (Table 15.1.5). Smaller wires may be stranded also since sizes as small as no. 4 when insulated are usually too stiff for easy handling. Lay cables are made up geometrically as shown in Fig. 15.1.1. Six strands will just fit around the single central conductor; the number of strands in each succeeding layer increases by 6. The number of strands that can thus be laid up are 1, 7, 19, 37, 61, 91, 127, etc. In order to obtain sufficient flexibility with large cables, the strands themselves frequently consist of stranded cable.

Fig. 15.1.1 Makeup of a 19-strand cable.

The resistance of cables is readily computed from Eq. (15.1.1), using the cir mil·ft as the unit of resistivity.

EXAMPLE. Determine the resistance of 3,500 ft of 800,000 cir mil cable at 20°C. Answer: ρ (of a cir mil·ft) = 10.37 Ω. R = 10.37 × 3,500/800,000 = 0.0454 Ω.

ρ = 10 Ω per cir mil·ft is often sufficiently accurate for practical purposes.

ELECTRICAL CIRCUITS

Figure 15.1.2 shows standard symbols for electrical circuit diagrams. Ohm's law states that, with a steady current, the current in a circuit is directly proportional to the total emf acting in the circuit and is inversely proportional to the total resistance of the circuit. The law may be expressed by the following three equations:

I = E/R    (15.1.7)
E = IR    (15.1.8)
R = E/I    (15.1.9)

where E is the emf, V; R the resistance, Ω; and I the current, A.
Series Circuits The combined resistance of a number of series-connected resistors is the sum of their separate resistances. When batteries or other sources of emf are connected in series, the total emf of the combination is the sum of the separate emfs. The open-circuit emf of a battery is the total generated emf and can be measured at the battery terminals only when no current is being delivered by the battery. The internal resistance is the resistance of the battery alone. The current in a circuit connected in series with a source of emf is I = E/(R + r), where E is the open-circuit emf, R the external resistance, and r the internal resistance of the source of emf.
Parallel Circuits The combined conductance of a number of parallel-connected resistors is the sum of their separate conductances.

G = G1 + G2 + G3 + ···    (15.1.10)
1/R = 1/R1 + 1/R2 + 1/R3 + ···    (15.1.11)

The equivalent resistance for two parallel resistors having resistances R1, R2 is

R = R1R2/(R1 + R2)    (15.1.12)

The equivalent resistance for three parallel resistors having resistances R1, R2, R3 is

R = R1R2R3/(R1R2 + R2R3 + R3R1)    (15.1.13)

and for four parallel resistors having resistances R1, R2, R3, R4

R = R1R2R3R4/(R1R2R3 + R2R3R4 + R3R4R1 + R4R1R2)    (15.1.14)
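As a quick computational check on these combination formulas, here is a small Python sketch (illustrative, not from the handbook); the resistor values are arbitrary.

def parallel(*resistances):
    """Equivalent resistance of any number of parallel resistors, Eq. (15.1.11)."""
    return 1.0 / sum(1.0 / r for r in resistances)

def series(*resistances):
    """Equivalent resistance of series-connected resistors."""
    return sum(resistances)

# Arbitrary example: two parallel groups connected in series
R_eq = series(parallel(4.0, 12.0), parallel(10.0, 15.0, 30.0))
print(round(R_eq, 2))   # parallel(4, 12) = 3.0 ohm, parallel(10, 15, 30) = 5.0 ohm -> 8.0 ohm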


Fig. 15.1.2 Diagrammatic symbols for electrical machinery and apparatus. (American Standard, “Graphic Symbols for Electrical and Electronic Diagrams,” ANS/IEEE, 315, 1975.)



To obtain the resistance of combined series and parallel resistors, the equivalent resistance of each parallel portion is obtained separately and then these equivalent resistances are added to the series resistances according to the principles stated above.
Kirchhoff's laws (derived from Ohm's law) make it possible to solve many circuit networks that would otherwise be difficult to solve. The first law states that: In any branching network of wires the algebraic sum of the currents in all the wires that meet at a point is zero. The second law states that: The sum of all the electromotive forces acting around a complete circuit is equal to the sum of the resistances of its separate parts multiplied each by the strength of the current in it, or the total change of potential around any closed circuit is zero.
In applying Kirchhoff's laws the following rules should be observed. Currents going toward a junction should be preceded by a plus sign. Currents going away from a junction should be preceded by a minus sign. A rise in potential should be preceded by a plus sign. (This occurs in going through a source of emf from the negative to the positive terminal, and in going through resistance in opposition to the direction of current.) A drop in potential should be preceded by a minus sign. (This occurs in going through a source of emf from the positive to the negative terminal and in going through resistance in conjunction with the current.) The application of Kirchhoff's laws is illustrated by the following example.
EXAMPLE. Determine the three currents I1, I2, and I3 in the circuit network (Fig. 15.1.3). The arrows show the assumed directions of the three currents. Applying Kirchhoff's second law to circuit abcdea,

4 + 0.2I1 + 0.5I1 − 3I2 + 2 − 0.1I2 + I1 = 0
or    6 + 1.7I1 − 3.1I2 = 0    (I)

and for edcfge,

−2 + 0.1I2 + 3I2 + I3 + 3 + 0.3I3 = 0
or    1 + 3.1I2 + 1.3I3 = 0    (II)

Applying Kirchhoff's first law to junction c,

−I1 − I2 + I3 = 0    (III)

Solving (I), (II), and (III) simultaneously gives I1 = −2.56, I2 = 0.53, and I3 = −2.03. The minus signs before I1 and I3 show that the actual directions of these two currents are opposite the assumed directions.
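The simultaneous solution is easy to verify numerically. The Python sketch below (illustrative only; it assumes NumPy is available) solves the three equations as a linear system.

import numpy as np

# Equations (I), (II), (III) written as A x = b with x = [I1, I2, I3]
A = np.array([[1.7, -3.1, 0.0],    # (I)    1.7*I1 - 3.1*I2          = -6
              [0.0,  3.1, 1.3],    # (II)          3.1*I2 + 1.3*I3   = -1
              [-1.0, -1.0, 1.0]])  # (III)  -I1 -  I2   +  I3        =  0
b = np.array([-6.0, -1.0, 0.0])

I1, I2, I3 = np.linalg.solve(A, b)
print(round(I1, 2), round(I2, 2), round(I3, 2))   # -> -2.56 0.53 -2.03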

Fig. 15.1.3 Electric network and Kirchhoff’s laws.

Electrical Power With direct currents the electrical power is given by the product of the volts and amperes. That is,

P = EI    W    (15.1.15)

Also, by substituting for E and I, Eqs. (15.1.8) and (15.1.7),

P = I²R    W    (15.1.16)
P = E²/R    W    (15.1.17)

The watt is too small a unit for many purposes. Hence, the kilowatt (kW) is used. 746 watts = 1 hp = 0.746 kW; 1 kW = 1.340 hp. The kilowatthour (kWh) is the common engineering unit of electrical energy.
Joule's Law When an electric current flows through resistance, the number of heat units developed is proportional to the square of the current, directly proportional to the resistance, and directly proportional to the time that the current flows. h = i²rt, where h = number of joules; i = current, A; r = resistance, Ω; and t = time, s. h (in Btu) = 0.0009478i²rt.

MAGNETISM

Magnetic Circuit The magnetic circuit is analogous to the electric circuit in that the flux Φ is proportional to the magnetomotive force ℱ and inversely proportional to the reluctance ℛ, or magnetic resistance. Thus

Φ = ℱ/ℛ    (15.1.18)

Compare with Eq. (15.1.7). Φ is in webers, where the weber is the SI unit of flux, ℱ in ampere-turns, and ℛ in SI reluctance units. In the cgs system, Φ is in maxwells, ℱ is in gilberts, and ℛ is in cgs reluctance units.

ℛ = l/(μrμvA)    (15.1.19)

where μr is relative permeability (commonly called permeability, μ), a property of the magnetic material, and μv is the permeability of evacuated space = 4π × 10⁻⁷, and A is in square metres. In the cgs system μv = 1.

ℛ = l/[μr(4π × 10⁻⁷)A] = l/[μr(1.257 × 10⁻⁶)A]    (15.1.20)

l is in metres and A in square metres. The unit of flux density in the SI system is the tesla, which is equal to the number of webers per square metre taken perpendicular to their direction. One ampere-turn between opposite faces of a metre cube of a magnetic medium produces μr tesla. For air, μr = 4π × 10⁻⁷. In the cgs system the unit of flux density is the gauss = 10⁻⁴ T (see Table 15.1.2).
Magnetic-circuit calculations cannot be made with the same degree of accuracy as electric-circuit calculations because of several factors. The cross-sectional dimensions of the magnetic circuit are large relative to its length; magnetic paths are irregular, and their geometry can only be approximated as with the air gap of electric machines, which usually have slots on one or both sides of the gap. Magnetic flux cannot be confined to definite magnetic paths, but a considerable proportion usually takes paths external to the circuit giving magnetic leakage (see Fig. 15.1.7). The relative permeability of iron varies over wide ranges with the flux density and with the previous magnetic condition (see Fig. 15.1.5). These variations of relative permeability cannot be expressed by any simple equation. Although the foregoing factors prevent the obtaining of extremely high accuracy in magnetic calculations, yet, with experience, it is possible to design magnetic circuits with a precision that is satisfactory for all practical purposes.
The magnetomotive force ℱ in Eq. (15.1.18) is expressed in ampere-turns = NI, where N is the number of turns linked with the circuit and I is the current, A. The unit of reluctance is the reluctance of a 1-m cube of air. The total reluctance is proportional to the length and inversely proportional to the cross-sectional area of the magnetic circuit, which is analogous to electrical resistance. Hence the reluctance of any given path of uniform cross section A is l/(Aμ), where l = length of path, cm; A = its cross section, cm²; and μ = permeability. Reluctances in series are added to obtain their combined reluctance. Ohm's law of the magnetic circuit becomes

Φ = 0.4πNI/(l1/A1μ1 + l2/A2μ2 + l3/A3μ3 + ···)    Mx    (15.1.21)

where l1, A1, μ1, etc., are the lengths, cross sections, and relative permeabilities of each series part of the circuit.

Fig. 15.1.4 Magnetic circuit.

EXAMPLE. In Fig. 15.1.4 is shown a magnetic circuit of cast steel with a 0.4-cm air gap. The cross section of the core is 4 cm square. There are 425 turns wound on the core and the current is 10 A. The relative permeability of the steel at the operating flux density is 1,100. Assume that the path of the flux is as shown, the average path at the corners being quarter circles. Neglect fringing at the air gap and any leakage. Determine the flux and the flux density. Using the SI system, the length of the iron is 0.522 m, the length of the air gap is 0.004 m, and the cross section of the iron and air gap is 0.0016 m².

Φ = (425 × 10)/[0.522/(1,100 × 4π × 10⁻⁷ × 0.0016) + 0.004/(4π × 10⁻⁷ × 0.0016)]
  = 0.00191 Wb

Using the cgs system, the length of the magnetic path in the iron = 12 + 8 + 8 + 5.8 + 5.8 + 4π = 52.2 cm. From Eq. (15.1.21),

Φ = (0.4π × 425 × 10)/{[52.2/(16 × 1,100)] + (0.4/16)} = 191,000 Mx
B = 191,000/16 = 11,940 G
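The same series-reluctance calculation is easy to script. The Python sketch below is an illustration, not part of the handbook; it reproduces the SI figures of the example above.

import math

MU0 = 4 * math.pi * 1e-7            # permeability of free space, H/m

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a uniform path, R = l/(mu0*mur*A), in 1/H."""
    return length_m / (MU0 * mu_r * area_m2)

# Data from the example above
N, I = 425, 10.0                    # turns and current
A = 0.04 * 0.04                     # core cross section, m^2 (4 cm square)
R_iron = reluctance(0.522, A, mu_r=1100)   # cast-steel path
R_gap = reluctance(0.004, A)               # air gap

flux = N * I / (R_iron + R_gap)     # Ohm's law of the magnetic circuit, Wb
B = flux / A                        # flux density, T
print(round(flux, 5), round(B, 2))  # -> 0.00191 Wb and about 1.19 T (11,940 G)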

Magnetization and Permeability Curves The magnetic permeability of air is a constant and is taken as unity. The relative permeability of iron and other magnetic substances varies with the flux density. In Fig. 15.1.5 is shown a magnetization curve for cast steel in which the flux density B in tesla is plotted as a function of the field intensity, amperes


where B is the flux density, Mx/in2; and l the length of the magnetic path, in. EXAMPLE. The average flux density in the air gap of a generator is 40,000 Mx/in2, and the effective length of the gap is 0.2 in. How many ampere-turns per pole are necessary for the gap?

NI 5 0.313 3 40,000 3 0.2 5 2,500 Since the relation of mr to flux density B in Eq. (15.1.22) is not simple, the relation of ampere-turns per unit length of magnetic circuit to flux density is ordinarily shown graphically. Typical curves of this character are shown in Fig. 15.1.6, inch units being used although scales of tesla, and ampere turns per metre are also given. To determine the number of ampere-turns necessary to produce a given total flux in a magnetic circuit composed of several parts in series having various lengths, cross sections, and relative permeabilities, determine the flux density if the cross section is fixed, or otherwise choose a cross section to give a suitable flux density. From the magnetization curve obtain the ampereturns necessary to drive this flux density through a unit length of the portion of the circuit considered and multiply by the length. Add together the ampere-turns required for each series part of the magnetic circuit to obtain the total ampere-turns necessary to give the assumed flux.

Fig. 15.1.6 Typical magnetization curves.

Fig. 15.1.5 Magnetization and relative-permeability curves for cast steel.

per metre, H. Also the relative permeability μr = B/H is plotted as a function of the flux density B. Note the wide range over which the relative permeability varies. No satisfactory equation has been found to express the relation between magnetizing force and flux density and between relative permeability and flux density. If an attempt is made to solve Eq. (15.1.21) for flux, the factors μ1, μ2, etc., are unknown since they are functions of the flux density, which is being determined. The simplest method is one of trial and error, i.e., a value of flux, and the corresponding permeability, is first assumed, the equation solved for the flux, and if the computed flux differs widely from the assumed flux, a second approximation is made, etc. In nearly all magnetic designs either the flux or flux density is the independent variable, and it is required to find the necessary ampere-turns to produce them. Let the flux Φ = BA, where B is the flux density, G. Then

Φ = BA = 0.4πNI/(l/Aμr)

and

NI = Bl/(μ0μr) = 0.796Bl/μr × 10⁶    (15.1.22)

Equation (15.1.22) shows that the necessary ampere-turns are proportional to the flux density and the length of path and are inversely proportional to the relative permeability. With air and nonmagnetic substances μr [Eq. (15.1.22)] becomes unity, and

NI = 0.796Bl × 10⁶    (15.1.23)

in metre units. With inch units

NI = 0.313B′l′    (15.1.24)

It is desirable to operate magnetic circuits at as high flux densities as is practicable in order to reduce the amount of iron and copper. The air gaps of dynamos are operated at average densities of 40,000 to 50,000 Mx/in2. Higher densities increase the exciting ampere-turns and tooth losses. At 45,000 Mx/in2 the flux density in the teeth may be as high as 120,000 to 130,000 Mx/in2. The flux densities in transformer cores are limited as a rule by the permissible losses. At 60 Hz and with silicon steel the maximum density is 60,000 to 70,000 Mx/in2, at 25 Hz the density may run as high as 75,000 to 90,000 Mx/in2. With laminated cores, the net iron is approximately 0.9 the gross cross section. Magnetic Leakage It is impossible to confine all magnetic flux to any desired path since there is no known insulator of magnetic flux. Figure 15.1.7 shows the magnetic circuit of a modern four-pole dynamo.

(15.1.24)

Fig. 15.1.7 Magnetic circuit of a four-pole dynamo with leakage flux.

A considerable proportion of the magnetic flux leaks between the pole shoes and cores rather than crossing the air gap. The ratio of the maximum flux, which exists in the field cores, to the useful flux, i.e., the flux that crosses the air gap, is the coefficient of leakage. This


coefficient must always be greater than unity and in carefully designed dynamos may be as low as 1.15. It is frequently as high as 1.30. Although the geometry of the leakage-flux paths is not simple, the leakage flux may be determined by approximations with a fair degree of accuracy. Magnetic Hysteresis The magnetization curves shown in Figs. 15.1.5 and 15.1.6 are called normal curves. They are taken with the magnetizing force continuously increased from zero. If at any point the magnetizing force be decreased, a greater value of flux density for any given magnetizing force will result. The effect of carrying iron through a complete cycle of magnetization, both positive and negative, is shown in Fig. 15.1.8.

Fig. 15.1.8 Hysteresis loop for dynamo steel.

The curve OKB, taken with increasing values of magnetizing force per centimetre H, is the normal induction curve. If after the magnetizing force has reached the value OA, it is decreased, the magnetic flux density B will decrease in accordance with curve BCD, between A and O the values being much greater than those given by the normal curve, i.e., the flux density lags the magnetizing force. At zero magnetizing force, the flux density is OC, called the remanence. A negative magnetizing force OD, called the coercive force, is required to bring the flux density to zero. If the magnetizing force is increased negatively to OA′, the flux density will be given by the curve DE. If the magnetizing force is then increased positively from A′ to A, the flux density will be given by the curve EFGB, which is similar to the curve BCDE. OF is the negative remanence and OG again is the coercive force. The complete curve is called a hysteresis loop. When the normal curve reaches the point K, if the magnetizing force is then decreased, another hysteresis loop, a portion of which is shown at KL, will be obtained. It is seen that the flux density lags the magnetizing force throughout. The energy dissipated per cycle is proportional to the area of the loop and is equal to (1/4π)∮H dB ergs/(Hz)(cm³). For moderately high densities the energy loss per cycle varies according to the Steinmetz law

W = 10ηBm^1.6     W·s/m³     (15.1.25)

where Bm is the maximum value of the flux density, T (Fig. 15.1.8). Table 15.1.6 gives values of the Steinmetz coefficient η for common magnetic steels.

Table 15.1.6   Steinmetz Coefficients
Hard tungsten steel     0.058        Ordinary sheet iron     0.004
Hard cast steel         0.025        Pure iron               0.003
Forged steel            0.020        Annealed iron sheet     0.002
Cast iron               0.013        Best annealed sheet     0.001
Electrolytic iron       0.009        Silicon steel sheet     0.00046
Soft machine steel      0.009        Permalloy               0.0001
Annealed cast steel     0.008

A permanent increase in the hysteresis constant occurs if the temperature of operation remains for some time above 80°C. This phenomenon is known as aging and may be much reduced by proper annealing of the iron. Silicon steels containing about 3 percent silicon have a lower hysteresis loss, somewhat larger eddy-current loss, and are practically nonaging.
Eddy-current losses, also known as Foucault-current losses, occur in iron subjected to cyclic magnetization. Eddy-current losses are reduced by laminating the iron, which subdivides the emf and increases greatly the length of path of the parasitic currents. Eddy currents also have a screening effect, which tends to prevent the flux penetrating the iron. Hence laminating also allows the full cross section of the iron to be utilized unless the frequency is too high. Eddy-current loss in sheets is given by

Pe = (πtfBm)²/(6ρ × 10¹⁶)     W/cm³     (15.1.26)

where t = thickness, cm; f = frequency, Hz; Bm = the maximum flux density, G; ρ = the resistivity, Ω·cm.
Relations of Direction of Magnetic Flux to Current Direction  The direction of the magnetizing force of a current is at right angles to its direction of flow. Magnetic lines about a cylindrical conductor carrying current exist in circular planes concentric with and normal to the conductor. This is illustrated in Fig. 15.1.9a. The ⊕ sign, corresponding to the feathered end of the arrow, indicates a direction of current away from the observer; a ⊙ sign, corresponding to the tip of an arrow, indicates a direction of current toward the observer.

Fig. 15.1.9 Currents in (a) opposite directions, (b) in the same direction.

Corkscrew Rule  The direction of the current and that of the resulting magnetic field are related to each other as the forward travel of a corkscrew and the direction in which it is rotated.
Hand Rule  Grasp the conductor in the right hand with the thumb pointing in the direction of the current. The fingers will then point in the direction of the lines of flux. The applications of these rules are illustrated in Fig. 15.1.9.
If the currents in parallel conductors are in opposite directions (Fig. 15.1.9a), the conductors tend to move apart; if the currents in parallel conductors are in the same direction (Fig. 15.1.9b), the conductors tend to come together. The magnetic lines act like stretched rubber bands and, in attempting to contract, tend to pull the two conductors together. The relation of the direction of current in a solenoid helix to the direction of flux is shown in Fig. 15.1.10.

Fig. 15.1.10 Direction of current and poles in a solenoid.

Fig. 15.1.11 Effect of a current on a uniform magnetic field.

Figure 15.1.11 shows the effect on a uniform field of placing a conductor carrying current in that field and normal to it. In (a) the direction of the current is toward the observer. By applying the corkscrew rule it is seen that the current weakens the field immediately above it and strengthens the field immediately below it. The reverse is true in (b), where the direction of the current is away from the observer. Figure 15.1.11 is illustrative of the force developed on a conductor carrying current in a magnetic field. In (a) the conductor will tend to move upward owing to the stretching of the magnetic lines beneath it. Similarly, the conductor in (b) will tend to move downward. This principle is the basis of motor action. (See also "Magnets.")

BATTERIES

In an electric cell, or battery, chemical energy is converted into electrical energy. The word battery may be used for a single cell or for an assembly of cells connected in series or parallel. A battery utilizes the potential difference which exists between different elements. When two different elements are immersed in electrolyte an emf exists tending to send current within the cell from the negative pole, which is the more highly electropositive, to the positive pole. The poles, or electrodes, of a battery form the junction with the external circuit. If the external circuit is closed, current flows from the battery at the positive electrode, or cathode, and enters the battery at the negative electrode, or anode. In a primary battery the chemically reacting parts require renewal; in a secondary battery, the electrochemical processes are reversible to a high degree and the chemically reacting parts are restored after partial or complete discharge by reversing the direction of current through the battery. See Table 15.1.7 for a summary of battery types and applications.
Electromotive force of a battery is the total potential difference existing between the electrodes on open circuit. When current flows, the potential difference across the terminals drops because of the resistance drop within the cell and because of polarization.
Polarization  When current flows in a battery, hydrogen is deposited on the cathode. This produces two effects, both of which reduce the terminal voltage of the battery. The hydrogen in contact with the cathode constitutes a hydrogen battery which opposes the emf of the battery; the hydrogen bubbles reduce the contact area of the electrolyte with the cathode, thus increasing the battery resistance. The most satisfactory method of reducing polarization is to have present at the cathode some compound that supplies negative ions to combine with the positive hydrogen ions at the plate. In the Leclanché cell, manganese

peroxide in contact with the carbon cathode serves as a depolarizer, its oxygen ion combining with the hydrogen ion to form water. If E is the emf of the cell, Ep the emf of polarization, r the internal resistance, V the terminal voltage, when current I flows, then

V = (E − Ep) − Ir     (15.1.27)
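Equation (15.1.27) reduces to a one-line calculation. The Python sketch below is an added illustration only; the cell constants used are assumed values chosen for the example, not data from the handbook.

```python
def terminal_voltage(emf, polarization_emf, internal_resistance, current):
    """Eq. (15.1.27): V = (E - Ep) - I*r, volts."""
    return (emf - polarization_emf) - current * internal_resistance

# Assumed constants for a small dry cell, for illustration only
print(terminal_voltage(emf=1.5, polarization_emf=0.1,
                       internal_resistance=0.3, current=0.25))   # 1.325 V
```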

Primary Batteries Dry Cells A dry cell is one in which the electrolyte exists in the form of a jelly, is absorbed in a porous medium, or is otherwise restrained from flowing from its intended position, such a cell being completely portable and the electrolyte nonspillable. The Leclanché cell consists of a cylindrical zinc container which serves as the negative electrode and is lined with specially prepared paper, or some similar absorbent material, to prevent the mixture of carbon and manganese dioxide, which is tamped tightly around the positive carbon electrode, from coming in contact with the zinc. The absorbent lining and the mixture are moistened with a solution of zinc chloride and sal ammoniac. In smaller cells (Fig. 15.1.12) the manganese-carbon mixture is often

Fig. 15.1.12 Cross section of a standard round zinc-carbon cell. (From “Standard Handbook for Electrical Engineers,” Fink and Carrol, McGraw-Hill, NY, copyright 2000.)

Table 15.1.7   Battery Types and Applications

Battery type                             Cell type   Nominal cell voltage   Capacity, Wh/kg   Applications
Primary
  Leclanché (zinc-carbon)                Dry         1.5                    22–44             Flashlights, emergency lights, radios
  Zinc-mercury (Ruben)                   Dry         1.34                   90–110            Medical, marine, space, laboratory, and emergency devices
  Zinc-alkaline-manganese dioxide        Dry         1.5                    66                Models, cameras, shavers, lights
  Silver or cuprous chloride-magnesium   Wet         —                      55–120            Disposable devices: torpedoes, rescue beacons, meteorological balloons
Secondary
  Lead-acid                              Wet         2                      —                 Automotive, industrial trucks, railway, station service
  Lead-calcium                           Wet         2                      —                 Standby
  Edison (nickel-iron)                   Wet         1.2                    —                 Industrial trucks; boat and train lights
  Nickel-cadmium                         Wet         1.2                    28                Engine starting, emergency lighting, station service
  Silver oxide-cadmium                   Wet         1.4                    45–65             Space
  Silver-zinc                            Wet         1.55                   90–155            Models, photographic equipment, missiles


molded into a cylinder around the carbon electrode, the whole is then set into the zinc cup, and the space between the molded mixture and the zinc is filled with electrolyte made into a paste in such a manner that it can be solidified by either standing or heating. The top of the cell is closed with a sealing compound, and the cell is placed in a cardboard container. The emf of a dry cell when new is 1.4 to 1.6 V. In block assembly the dry cells, especially in the smaller sizes, are assembled in series and sealed in blocks of insulating compound with only two terminals and, sometimes, intermediate taps brought out. This type of battery is used for radio B and C batteries. Another construction is to build the battery up of layers in somewhat the manner of the old voltaic pile. Each cell consists of a layer of zinc, a layer of treated paper, and a flat cake of the manganese-carbon mixture. The cells are separated by layers of a special material which conducts electricity but which is impervious to electrolyte. A sufficient number of such cells are built up to give the required voltage and the whole battery is sealed into the carton. Leclanché cells (carbon-zinc dry cells), in use for over 100 years, are still the most widely used of all dry-cell batteries because of their low cost, reliable performance, and ready availability. They are available in sizes ranging from small penlight batteries to large assemblies of cells in series or parallel connections for special high-voltage or high-current applications. The efficiency of a standard-size dry battery depends on the rate at which it is discharged. Up to a certain rate the lower the discharge rate, the greater the efficiency. Above this rate the efficiency decreases (see Natl. Bur. Stand. Circ. 79, p. 39). When used efficiently, a 6-in dry cell will give over 30 A  h of service. As ordinarily used, however, the dry cell give no more than 8 to 10 A  h of service and at times even less. The 11⁄4- by 21⁄4-in flashlight battery is usually employed with a lamp taking 0.25 to 0.35 A. Under these conditions 3 A  h or thereabouts may be expected if the battery is used for not more than an hour or so a day. The so-called “heavy-duty” radio battery will give about 8 to 10 A  h when efficiently used. For the best results 6-in dry cells should not be used for current drains of over 0.5 A except for very short periods of time. Flashlight batteries should not be used for higher than the preceding current drain, and heavy-duty radio batteries will give best results if the current drain is kept below 25 mA. Dry cells should be stored in a cool, dry place. Extreme heat during storage will shorten their life. The cell will not be injured by being frozen but will be as good as new after being brought back to normal temperature. In extreme cold weather dry cells may not give more than half of their normal service. At a temperature of about 30F they freeze solid and give neither voltage nor current. The amperage of a dry cell by definition is the current that it will give when it is short-circuited (at about 70F) through an ammeter which with its leads has a resistance of 0.01 . The Ruben cell (Ruben, Balanced Alkaline Dry Cells, Trans. Electrochem. Soc., 92, 1947) was developed jointly by the Ruben Laboratories and P. R. 
Mallory & Company during World War II for the operation of radar equipment and other electronic devices which require a high ratio of ampere-hour capacity to the volume of the cell at higher current densities than were considered practicable for the Leclanché type. The anode is of amalgamated zinc, and the cathode is a mercuric oxide depolarizing material intimately mixed with graphite in order to reduce its electrical resistivity. The electrolyte is a solution of potassium hydroxide (KOH) containing potassium zincate. The cell is made in three forms as shown in Fig. 15.1.13, the wound-anode type (a), the button type (b), and the cylindrical type (c). The no-load emf of the cell is 1.34 V and remains essentially constant irrespective of time and temperature. Advantages of the cell are long shelf life, which enables them to be stored indefinitely; long service life, about four times that of the Leclanché dry cell of equivalent volume; small weight; a flat voltage characteristic which is advantageous for electronic uses in which the characteristics of tubes vary widely with voltage; adaptability to operating at high temperatures without deterioration; high resistance to shock.

Fig. 15.1.13 Ruben cells. (a) Wound-anode; (b) button; (c) cylindrical. (From “Standard Handbook for Electrical Engineers,” Fink and Carrol, McGraw-Hill, NY, copyright 1968.)

The zinc-alkaline-manganese dioxide cell is a cell especially useful in applications that require a dry cell with relatively heavy or continuous drain. The anode is of amalgamated zinc, and the cathode is a manganese dioxide depolarizing material mixed with graphite for conductivity. The electrolyte is a solution of highly alkaline potassium hydroxide immobilized in cellulosic-type separators. These cells are available in standardsize cylindrical construction and wafer (flat) construction for cassette and tape recorder applications. Wet Cells The silver or cuprous chloride-magnesium cell is a one-shot battery with a life of days after the electrolyte is added. A wet cell may


be stored for years in a dry state. The cathode is either compacted copper chloride and graphite or sheet silver chloride, while the anode is a thin magnesium sheet. The electrolyte is a solution of sodium chloride. The silver chloride cells are more expensive and are available in more and larger ratings. The Weston cell is a primary cell used as a standard of emf. It consists of a glass H tube in the bottom of one leg of which is mercury which forms the cathode; in the bottom of the other leg is cadmium amalgam forming the anode. The electrolytes consist of mercurous sulfate and cadmium sulfate. There are two forms of the Weston cell: the saturated or normal cell and the unsaturated cell. In the normal cell the electrolyte is saturated. This is the official standard since it is more permanent than the unsaturated type and can be reproduced with far greater accuracy. When carefully made, the emfs of cells agree within a few parts per million. There is, however, a small temperature coefficient. Although the unsaturated cell is not so reliable as the normal cell and must be standardized, it has a negligible temperature coefficient and is more convenient for general use. The manufacturers recommend that the temperature be not less than 4°C and not more than 40°C and the current should not exceed 0.0001 A. The emf is between 1.0185 and 1.0190 V. Since no appreciable current can be taken from the cell, a null method must be used to utilize its emf.
Storage (Secondary) Batteries

In a storage battery the electrolytic action must be reversible to a high degree. There are three types of storage batteries: the lead-lead-acid type, the nickel-iron-alkaline type (Edison battery), and the nickel-cadmium-alkali type (Nicad). In addition, there are various specialized types of cells for scientific and military purposes, and there is continuous development work in the search for higher capacities.
In the manufacture of the lead-lead-acid cells there are three general types of plates, or electrodes. In the Planté type the active material is electrically formed of pure lead by repeated reversals of the charging current. In the Faure, or pasted-plate, type, the positive and negative plates are formed by applying a paste, largely of lead oxides (PbO2, Pb3O4), to lead-antimony or lead-calcium supporting grids. A current is passed through the plates while they are immersed in weak sulfuric acid, the positive plates being connected as anodes and the negative ones as cathodes. The paste on the positive plates is converted into lead peroxide while that on the negative plate is reduced to spongy lead. The tubular plate (iron-clad) type has lead-alloy rods surrounded by perforated dielectric tubes with powdered-lead oxides packed between the rod and tube for the positive plate. In order to obtain high capacity per unit weight it is necessary to expose a large plate area to the action of the acid. This is done in the Planté plate by "ploughing" with sharp steel disks, and by using corrugated helical inserts as active positive material (Manchester plate). In the pasted plate a large area of the material is necessarily exposed to the action of the acid. The chemical reactions in a lead cell may be expressed by the following equation, based on the double sulfation theory:

PbO2 + Pb + 2H2SO4   ⇌ (discharge →, ← charge)   2PbSO4 + 2H2O
(positive plate) (negative plate) (sulfuric acid)            (positive and negative plates) (water)

Between the extremes of complete charge and discharge, complex combinations of lead and sulfate are formed. After complete discharge a hard insoluble sulfate forms slowly on the plates, and this is reducible only by slow charging. This sulfation is objectionable and should be avoided.
Specific Gravity  Water is formed with discharge and sulfuric acid is formed on charge; consequently the specific gravity must decrease on discharge and increase on charge. The variation of the specific gravity for a stationary battery is shown in Fig. 15.1.14. With starting and

Fig. 15.1.14 Variations of specific gravity in a stationary battery.

vehicle batteries it is necessary to operate the electrolyte from between 1.280 to 1.300 when fully charged to as low as 1.100 when completely discharged. The condition of charge of a battery can be determined by its specific gravity. Battery electrolyte may be made from concentrated sulfuric acid (oil of vitriol, sp gr 1.84) by pouring the acid into the water in the following proportions:

Parts Water to 1 Part Acid
Specific gravity     1.200     1.210     1.240     1.280
Volume               4.3       4.0       3.4       2.75
Weight               2.4       2.2       1.9       1.5

Freezing Temperature of Sulfuric Acid and Water Mixtures
Specific gravity     1.180     1.200     1.240     1.280
Freezing temp, °F    −6        −16       −51       −90
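State-of-charge checks against the freezing-point data above are often repeated; the Python sketch below is an added illustration (not handbook material) that simply interpolates linearly between the tabulated points.

```python
# Freezing temperature of lead-acid electrolyte vs specific gravity (table above)
FREEZING_F = [(1.180, -6), (1.200, -16), (1.240, -51), (1.280, -90)]

def freezing_temp_f(sp_gr):
    """Linear interpolation between tabulated points; illustrative only."""
    if sp_gr <= FREEZING_F[0][0]:
        return FREEZING_F[0][1]
    if sp_gr >= FREEZING_F[-1][0]:
        return FREEZING_F[-1][1]
    for (g1, t1), (g2, t2) in zip(FREEZING_F, FREEZING_F[1:]):
        if g1 <= sp_gr <= g2:
            return t1 + (t2 - t1) * (sp_gr - g1) / (g2 - g1)

print(freezing_temp_f(1.220))   # about -33 degF
```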

Voltage The emf of a lead cell when fully charged and idle is 2.05 to 2.10 V. Discharge lowers the voltage in proportion to the current. When charging at constant current and normal rate, the terminal voltage gradually increases from 2.14 to 2.3 V, then increases rapidly to between 2.5 and 2.6 V (Fig. 15.1.15). This latter interval is known as the gassing period. When this period is reached, the charging rate should be reduced in order to avoid waste of power and unnecessary erosion of the plates.

Fig. 15.1.15 Voltage curves on charge and discharge for a lead cell.

Practically all batteries have a normal rating based on the 8-h rate of discharge. Thus a 320 A·h battery would have a normal rate of 40 A. The ampere-hour capacity of batteries falls off rapidly with increase in discharge rate.

Effect of Discharge Rate on Battery Capacity
Discharge rate, h                        8      5      3      1      1/3     1/10
Percentage of rated capacity:
  Planté type                            100    88     75     55.8   37      19.5
  Pasted type                            100    93     83     63     41      25.5
The following rule may be observed in charging a lead battery. The charging rate in amperes should be less than the number of ampere-hours out of the battery. For example, if 200 A·h are out of a battery,


Fig. 15.1.16 Connections for charging a storage battery from (a) 110-V dc mains, (b) copper oxide rectifier.

a charging rate of 200 A may be used until the ampere-hours out of the battery are reduced appreciably. There are two common methods of charging: the constant-current method and the constant-potential method. Figure 15.1.16a shows a common method of charging with constant current, provided a low-voltage dc power supply is available. The resistor connected in series may be adjusted to give the required current. Several batteries may be connected in series. Figure 15.1.16b shows a more common method, using a copper oxide or silicon rectifier, since ac power supply is more common than dc. The rectifier disks, mounted in a stack, are bridge-connected, the directions of rectification being indicated. The polarity of the two wires can readily be determined by means of a dc voltmeter. The constant-potential method is to be preferred since the rate automatically tapers off as the cell approaches the charged condition. Without resistance the terminal voltage should be 2.3 V per cell, but it is preferable to use 2.4 to 2.5 V per cell with low resistance in series. When a battery is being charged, its terminal voltage

V = E + Ir     (15.1.28)
Compare with Eq. (15.1.27). When a battery is fully charged, any rate will produce gassing, but the rate may be reduced to such a low value that gassing is practically harmless. This is called the finishing rate. Portable batteries for automobile starting and lighting, airplanes, industrial trucks, electric locomotives, train lighting, and power boats employ the pasted-type plates because of their high discharge rates for a given weight and size. The separators are either of treated grooved wood, perforated hard rubber, glass-wool mats, perforated rubber and grooved wood, or ribbed microporous rubber. In low-priced shortlived batteries for automobiles, grooved wood alone is used; in the better types, the wood is reinforced with perforated hard rubber. Containers for the low-priced short-lived automobile-type starting batteries are of asphaltic compound; for other portable types they are usually of hard rubber. The Exide iron-clad battery is a portable type designed for propelling electric vehicles. The positive plate consists of a lead-antimony frame supporting perforated hard-rubber tubes. An irregular lead-antimony core runs down the center of each tube, and the lead peroxide paste is packed into these tubes so that shedding of active material from the positive plate cannot occur. Pasted negative plates are used. The separators are flat microporous rubber. Stationary Batteries The tanks of stationary batteries are made of hard rubber or plastics. When the battery is used for regulating or cycling duty, the positive plates may be of the Planté type because of their long life. However, in most modern installations thick pasted plates are used. Because of the tight fit of the plate assembly within the container and the resulting pressure of the separator against the plate surfaces, shedding of active material is reduced to a minimum and long life is obtained. Pasted negative plates are used in almost all batteries. A lead storage battery removed from service for less than 9 months should be charged once a month if possible; if not, it should be given a

heavy overcharge before discontinuing service. If removed for a longer period, siphon off acid (which may be used again) and fill with fresh water. Allow to stand 15 h and siphon off water. Remove and throw away the wood separators. The battery will now stand indefinitely. To put in service again, install new separators, fill with acid (sp gr 1.210) and charge at normal rate 35 h or until gravity has ceased to rise over a period of 5 h. Charge at a low rate a few hours longer. The ampere-hour efficiency of lead batteries is 85 to 90 percent. The watthour efficiency obtained from full charge to discharge at the normal rate and at rated amp-hour is 75 to 80 percent. Batteries which do regulating duty only may have a much higher watthour efficiency. The Edison storage cell when fully charged has a positive plate of nickel pencils filled with a higher nickel oxide and a negative plate of flat nickel-plated-steel stampings containing metallic iron in finely divided form. The active material for the positive plate is nickel hydrate and for the negative plate, iron oxide. The electrolyte is a 21 percent solution of potassium hydrate with lithium hydroxides. The initial emf is about 1.4 V and the average emf about 1.1 V throughout discharge. In Fig. 15.1.17 are shown typical voltage characteristics on charge and discharge for an Edison cell. On account of the higher internal resistance of the cell the battery is not so efficient from the energy standpoint as the lead cell. The jar is welded nickel-plated steel. The battery is compact and extremely light and strong and for these reasons is particularly adapted for propelling electric vehicles and for boat- and trainlighting systems. The battery is rugged, and since there is no opportunity for the growth of active material on the plates or flaking of active material, the battery has long life.

Fig. 15.1.17 Voltage during charge and discharge of an Edison cell. Nickel-Cadmium-Alkali (Nicad) Battery The positive active material is nickelic (black) hydroxide mixed with graphite to give it high conductivity. The negative active material is cadmium oxide. Both materials are used in powdered form and are contained within flat perforated steel pockets. These pockets are locked into steel plates, the positive and negative being alike in construction. All steel parts are nickel-plated. A complete plate group consists of a number of positive and negative plates assembled on bolts and terminal posts common to plates of the same polarity. The separators are thin strips of polystyrene, and all other battery insulation is also polystyrene. The entire plate assembly is contained within a welded-steel tank. The electrolyte is potassium hydroxide (KOH), specific gravity 1.210 at 72F (22C); it does not enter into any chemical reactions with the electrode materials,


and its specific gravity remains constant during charge and discharge, neglecting any slight change due to the small amount of gassing. On charge, the voltage is 1.4 to 1.5 V until near the end when it rises to 1.8 V. On discharge, the voltage is nearly constant at 1.2 V. Nicad batteries are strong mechanically and are not damaged by overcharge; they hold their charge over long periods of idleness, the active material cannot flake off, the internal resistance is low, there is no corrosion, and the battery has an indefinitely long life. It is a generalpurpose battery. In the Sonotone nickel-cadmium battery the positive plates are nickel oxide when the battery is charged, and the negative plates are metallic cadmium. On discharge the positive plates are reduced to a state of lower oxidation, and the negative plates regain oxygen. The electrolyte is a 30 percent solution of potassium hydroxide, the specific gravity of which is 1.29 at room temperature. The case is a transparent plastic. The terminal voltage at the normal discharge rate is 1.2 V per cell. Rechargeable batteries, exemplified by Gould Nicad cells (Alkaline Battery Division, Gould National Batteries, Inc.), are hermetically sealed nickel-cadmium cells that contain no free alkaline electrolyte. Since there is no spillage or leakage, they can operate in any position, have long life, and require no maintenance or servicing, and their weight is small for their output. They are thus well adapted to power many types of cordless appliances such as tools, hedge shears, cameras, dictating equipment, electric razors, radios, and television sets. The electrodes consist of a plaque of microporous sintered nickel having an extremely high surface area. The electrochemical reactions differ from those of the conventional vented-type alkaline battery, a type which at the end of a charge liberates both oxygen and hydrogen gases as well as electrolytic fumes that must be vented through a valve in the top of the cell. In the sealed nickel-cadmium cell, the negative electrode (at the time that the cell is sealed) never becomes fully charged, and the evolution of hydrogen is completely suppressed. On charging, when the positive electrode has reached its full capacity, the oxygen which has evolved is channeled through the porous separator to the negative electrode and oxidizes the finely divided cadmium of the microporous plate to cadmium hydroxide, which at the same time is reduced to metallic cadmium. The cells are constructed in three different forms: the button type, the cylindrical type, and the prismatic type. Their ratings range from 20 mA  h to 23 A  h. Their average discharge voltage is 1.22 V, and they require 14 h of charge at the normal rate (one-tenth A  h rating), which for a 3.5 A  h cell is 0.35 A. Precautions in the care of storage batteries: An ammeter should not be connected directly across the terminals to test the condition of a cell; a battery should not be left to stand in a discharged condition; a flame should not be brought in the vicinity of a battery that is being charged; the battery should not be allowed to become heated when charging; water should never be added to the concentrated acid—always acid to the water; acid should never be equalized except when the battery is in a charged condition; a battery should never be exposed to the influence of external heat; voltmeter tests should be made when the current is flowing; batteries should always be kept clean. 
To replace acid lost through slopping, use a solution of 2 parts concentrated sulfuric acid in 5 parts water by weight, unless a hydrometer is at hand to enable the solution to be made up according to the specifications of the makers of the cell. DIELECTRIC CIRCUITS


Dynamic and Static Electricity  Electricity in motion such as an electric current is dynamic electricity; electricity at rest is static electricity. The two are identical physically. Since static electricity is frequently produced at high voltage and small quantity, the two are frequently considered as being two different types of electricity.

Capacitors

Capacitors (formerly condensers)  Two conducting bodies, or electrodes, separated by a dielectric constitute a capacitor. If a positive charge is placed on one electrode of a capacitor, an equal negative charge is induced on the other. The medium between the capacitor plates is called a dielectric. The dielectric properties of a medium relate to its ability to conduct dielectric lines. This is in distinction to its insulating properties which relate to its property to conduct electric current. For example, air is an excellent insulator but ruptures dielectrically at low voltage. It is not a good dielectric so far as breakdown strength is concerned. With capacitors

Q = CE     C = Q/E     E = Q/C     (15.1.29) (15.1.30) (15.1.31)

where Q = quantity, C; C = capacitance, F; and E = voltage. The unit of capacitance in the practical system is the farad. The farad is too large a unit for practical purposes, so that either the microfarad (μF) or the picofarad (pF) is used. However, in voltage, current, and energy relations the capacitance must be expressed in farads. The energy stored in a capacitor is

W = ½QE = ½CE² = ½Q²/C     J     (15.1.32)

Capacitance of Capacitors  The capacitance of a parallel-electrode capacitor (Fig. 15.1.18) is

C = εrA/(4πd × 9 × 10³)     μF     (15.1.33)

where εr = relative capacitivity; A = area of one electrode, m²; and d = distance between electrodes, m.

Fig. 15.1.18 Parallel-electrode capacitor.

Fig. 15.1.19 Coaxial-cylinder capacitor.

The capacitance of coaxial cylindrical capacitors (Fig. 15.1.19) is

C = 0.2171εr l/[9 × 10⁵ log (R₂/R₁)]     μF     (15.1.34)

where εr is the relative capacitivity and l the length, m. Also

C = 0.03882εr/log (R₂/R₁)     μF/mi     (15.1.35)

Equation (15.1.35) is useful in that it is applicable to cables. The capacitance of two parallel cylindrical conductors D m between centers and having radii of r m is

C = 0.01941/log (D/r)     μF/mi     (15.1.36)

In practice, the capacitance to neutral or to an infinite conducting plane midway between the conductors and perpendicular to their plane is usually used. The capacitance to neutral is

C = 0.03882/log (D/r)     μF/mi     (15.1.37)

Equations (15.1.36) and (15.1.37) are used for calculating the capacitance of overhead transmission lines. When computing charging current, use voltage between lines in (15.1.36) and to neutral in (15.1.37).
Capacitances in Parallel  The equivalent capacitance of capacitances in parallel (Fig. 15.1.20) is

C = C1 + C2 + C3     (15.1.38)

Capacitances in parallel are all across the same voltage. If the voltage is E, then the total quantity Q = CE and Q1 = C1E, etc.

Fig. 15.1.20 Capacitances in parallel.

Capacitances in Series The equivalent capacitance C of capacitances in series (Fig. 15.1.21) is found as follows:

1/C = 1/C1 + 1/C2 + 1/C3     (15.1.39)
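The capacitor relations above reduce to a few lines of arithmetic. The Python sketch below is an added illustration with assumed dimensions (not taken from the handbook); it evaluates Eq. (15.1.33) for a parallel-plate capacitor and combines capacitances per Eqs. (15.1.38) and (15.1.39).

```python
from math import pi

def parallel_plate_uF(eps_r, area_m2, spacing_m):
    """Eq. (15.1.33): C = eps_r * A / (4 * pi * d * 9e3), microfarads."""
    return eps_r * area_m2 / (4 * pi * spacing_m * 9e3)

def c_parallel(*caps):            # Eq. (15.1.38)
    return sum(caps)

def c_series(*caps):              # Eq. (15.1.39)
    return 1 / sum(1 / c for c in caps)

# Assumed example: 0.5-m^2 plates 1 mm apart with a mica dielectric (eps_r taken as 6)
print(round(parallel_plate_uF(6, 0.5, 0.001), 4))   # about 0.0265 uF
print(c_parallel(2.0, 3.0, 5.0))                    # 10.0 uF
print(round(c_series(2.0, 3.0, 6.0), 3))            # 1.0 uF
```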


Fig. 15.1.21 Capacitances in series.

If the capacitances are not leaky, the charge Q is the same on each. Q = CE, E1 = Q/C1, E2 = Q/C2, etc.
Insulators and Dielectrics  Insulating materials are applied to electric circuits to prevent the leakage of current. Insulating materials used with high voltage must not only have a high resistance to leakage current, but must also be able to resist dielectric puncture; i.e., in addition to being a good insulator, the material must be a good dielectric. Insulation resistance is usually expressed in MΩ and the resistivity given in MΩ·cm. The dielectric strength is usually given in terms of voltage gradient, common units being V/mil, V/mm, and kV/cm. Insulation resistance decreases very rapidly with increase in temperature. Absorbed moisture reduces the insulation resistance, and moisture and humidity have a large effect on surface leakage. In Table 15.1.8 are given the insulating and dielectric properties of several common insulating materials (see also Sec. 6). Dielectric heating of materials is described in Sec. 7.

TRANSIENTS

Induced EMF  If a flux φ webers linking N turns of conductor changes, an emf

e = −N(dφ/dt)     V     (15.1.40)

is induced.
Self-inductance  Let a flux φ link N turns. The linkages of the circuit are Nφ weber-turns. If the permeability of the circuit is assumed constant, the number of these linkages per ampere is the self-inductance or inductance of the circuit. The unit of inductance is the henry. The inductance is

L = Nφ/i     H     (15.1.41)

If the permeability changes with the current

L = N(dφ/di)     H     (15.1.42)

The energy stored in the magnetic field

W = ½Li²     J     (15.1.43)

EMF of Self-induction  If Eq. (15.1.41) is written Li = Nφ and differentiated with respect to the time t, L(di/dt) = N(dφ/dt) and from Eq. (15.1.40)

e = −L(di/dt)     V     (15.1.44)

e is the emf of self-induction. If a rate of change of current of 1 A/s induces an emf of 1 V, the inductance is then 1 H.
Current in Inductive Circuit  If a circuit containing resistance R and inductance L in series is connected across a steady voltage E, the voltage E must supply the iR drop in the circuit and at the same time overcome the emf of self-induction. That is, E = Ri + L di/dt. A solution of this differential equation gives

i = (E/R)(1 − e^(−Rt/L))     A     (15.1.45)

where e is the base of the natural system of logarithms.

Fig. 15.1.22 Rise of current in an inductive circuit.

Figure 15.1.22 shows this equation plotted when E = 10 V, R = 20 Ω, L = 0.6 H. It is to be noted that inductance causes the current to rise slowly to its Ohm's law value, I0 = E/R = 10/20 = 0.5 A. When t = L/R, the current has reached 63.2 percent of its Ohm's law value. L/R is the time constant of the circuit. In the foregoing circuit, the time constant L/R = 0.6/20 = 0.03 s. The initial rate of rise of current is tan a = E/L. If the current continued at this rate, it would reach I0 = E/R in L/R s [(E/L) × (L/R) = E/R].

Table 15.1.8   Electrical Properties of Insulating Materials

Material Asbestos board (ebonized) Bakelite Epoxy Fluorocarbons: Fluorinated ethylene propylene Polytetrafluoroethylene Glass Magnesium oxide Mica Nylon Neoprene Oils: Mineral Paraffin Paper Paper, treated Phenolic (glass filled) Polyethylene Polyimide Polyvinyl chloride (flexible) Porcelain Rubber Rubber (butyl)

Volume resistivity, M  cm

Dielectric constant, 60 Hz

107 5–30  1011 1014

4.5–5.5 3.5–5

55 450–1,400 300–400

2  103 (17–55)  103 (12–16)  103

2.1 2.1 5.4–9.9 2.2 4.5–7.5 4–7.6 7.5

500 400 760–3,800 300–700 1,000–4,000 300–400 600

20  103 16  103 (3–15)  104 (12–27)  103 (4–16)  104 (12–16)  103 23.5  103

2–4.7 2.41 1.7–2.6 2.5–4 5–9 2.3 3.5 5–9 5.7–6.8 2–3.5 2.1

300–400 410–550 110–230 500–750 140–400 450–1,000 400 300–1,000 240–300 500–700

(12–16)  103 (16–22)  103 (4–9)  103 (20–30)  103 (5.5–16)  103 (17–40)  103 16  103 (12–40)  103 (9.5–12)  103 (20–27)  103

1018 1018 17  109 1014–1017 1014–1017 21  106 1015 1012–1013 1015–1018 1016–1017 1011–1015 3  108 1014–1016 1018

Dielectric strength V/mil V/mm


If a circuit containing inductance and resistance in series is short-circuited when the current is I0, the equation of current becomes

i = I0e^(−Rt/L)     A     (15.1.46)

Figure 15.1.23 shows this equation plotted when I0 = 0.5 A, R = 20 Ω, L = 0.6 H. It is seen that inductance opposes the decay of current. Inductance always opposes change of current.
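The rise and decay of Eqs. (15.1.45) and (15.1.46) can be tabulated directly. The Python sketch below is an added illustration using the same constants as Figs. 15.1.22 and 15.1.23 (E = 10 V, R = 20 Ω, L = 0.6 H); it is not part of the handbook text.

```python
from math import exp

E, R, L = 10.0, 20.0, 0.6      # volts, ohms, henrys (values of Figs. 15.1.22 and 15.1.23)
TAU = L / R                    # time constant, s (0.03 s here)

def i_rise(t):                 # Eq. (15.1.45)
    return (E / R) * (1 - exp(-R * t / L))

def i_decay(t, i0=E / R):      # Eq. (15.1.46)
    return i0 * exp(-R * t / L)

for t in (0.0, TAU, 3 * TAU, 5 * TAU):
    print(f"t = {t:.2f} s   rise = {i_rise(t):.3f} A   decay = {i_decay(t):.3f} A")
# At t = L/R the rising current is 63.2 percent of its final value E/R = 0.5 A.
```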


Fig. 15.1.23 Decay of current in an inductive circuit.

Mutual Inductance  If two circuits having inductances L1 and L2 henrys are so related to each other geometrically that any portion of the flux produced by the current in one circuit links the other circuit, the two circuits possess mutual inductance. It follows that a change of current in one circuit causes an emf to be induced in the other. Let e2 be induced in circuit 2 by a change di1/dt in circuit 1. Then

e2 = −M di1/dt     V     (15.1.47)

M is the mutual inductance of the two circuits.

M = k√(L1L2)     (15.1.48)

where k is the coefficient of coupling of the two circuits, or the proportion of the flux in one circuit which links the other. Also a change of current di2/dt in circuit 2 induces an emf e1 in circuit 1, e1 = −M di2/dt. The stored energy is

W = ½L1I1² + ½L2I2² + MI1I2     J     (15.1.49)

where I1 and I2 are the currents in circuits 1 and 2.
Current in Capacitive Circuit  If capacitance C farads and resistance R ohms are connected in series across the steady voltage E, the current is

i = (E/R)e^(−t/CR)     A     (15.1.50)

If a capacitor charged to voltage E is discharged through resistance R, the current is

i = −(E/R)e^(−t/CR)     A     (15.1.51)

Except for sign, these two equations are identical and are of the same form as Eq. (15.1.46).

Fig. 15.1.24 Transient current to a capacitor.

In Fig. 15.1.24 is shown the transient current to a capacitor in series with a resistor when E = 200 V, C = 4.0 μF, R = 2 kΩ. When t = CR, the current has reached 1/e = 0.368 its initial value. CR is the time constant of the circuit. The initial rate of decrease of current is tan a = E/CR². If the current continued at this rate it would reach zero when the time is CR s. If, in its fully charged condition, the capacitor of Fig. 15.1.24 is discharged through the resistor R, the curve will be the negative of that shown in Fig. 15.1.24.
Resistance, Inductance, and Capacitance in Series  If a circuit having resistance, inductance, and capacitance in series is connected across a source of steady voltage, a transient condition results. If R > √(4L/C), the circuit is nonoscillatory or overdamped. The current is

i = [EC/√(R²C² − 4LC)][e^((−a + b)t) − e^((−a − b)t)]     A     (15.1.52)

where a = R/2L and b = √(R²C² − 4LC)/(2LC). In Fig. 15.1.25 is shown the curve corresponding to Eq. (15.1.52). When R = √(4L/C), the system is critically damped and the transient dies out rapidly without oscillation. The current is

i = (E/L)te^(−Rt/2L)     A     (15.1.53)

Figure 15.1.25 shows also the curve corresponding to Eq. (15.1.53).

Fig. 15.1.25 Transient current in nonoscillatory circuits.

If R < √(4L/C) the transient is oscillatory, being a logarithmically damped sine wave. The current is

i = [2EC/√(4LC − R²C²)]e^(−Rt/2L) sin [√(4LC − R²C²)t/(2LC)]     A     (15.1.54)

The transient oscillates at a frequency very nearly equal to 1/(2π√LC) Hz. This is the natural frequency of the circuit. In Fig. 15.1.26 is shown the curve corresponding to Eq. (15.1.54). If the capacitor, after being charged to E V, is discharged into the foregoing series circuits, the currents are given by Eqs. (15.1.52) to (15.1.54) multiplied by −1. Equations (15.1.52) to (15.1.54) are the same types obtained with dynamic mechanical systems with friction, mass, and elasticity.

Fig. 15.1.26 Transient current in an oscillatory circuit.

ALTERNATING CURRENTS

Sine Waves  In the following discussion of alternating currents, sine waves of voltage and current will be assumed. That is, e = Em sin ωt and i = Im sin (ωt ± θ), where Em and Im are maximum values of voltage and current; ω, the angular velocity, in rad/s, is equal to 2πf, where f is the frequency; θ is the angle of phase difference.
Cycle; Frequency  When any given armature coil has passed a pair of poles, the emf or current has gone through 360 electrical degrees, or 1 cycle. An alternation is one-half cycle. The frequency of a synchronous machine in cycles per second (hertz) is

f = NP/120     Hz     (15.1.55)

where N is the speed in r/min and P the number of poles. In the United States and Canada the frequency of 60 Hz is almost universal for general lighting and power. For the ac power supply to dc transit systems,


and for railroad electrification, a frequency of 25 Hz is used in many installations. In most of Europe and Latin America the frequency of 50 Hz is in general use. In aircraft the frequency of 400 Hz has become standard. Static inverters make it possible to obtain high and variable frequencies to drive motors at greater than the 3,600 r/min limitation on 60-Hz circuits, and to vary speeds. The textile industry has small motors operating at 12,000 r/min (200 Hz), and larger motors have been run at 6,000 r/min (100 Hz). Many large mainframe computers have been powered at 400 Hz.
The root-mean-square (rms), or effective, value of a current wave produces the same heating in a given resistance as a direct current of the same ampere value. Since the heating effect of a current is proportional to i²r, the rms value is obtained by squaring the ordinates, finding their average value, and extracting the square root, i.e., the rms value is

I = √[(1/T)∫₀ᵀ i² dt]     A     (15.1.56)

where T is the time of a cycle. The rms value I of a sine wave equals (1/√2)Im = 0.707Im.
Average Value of a Wave  The average value of a sine wave over a complete cycle is zero. For a half cycle the average is (2/π)Im, or 0.637Im, where Im is the maximum value of the sine wave. The average value is of importance only occasionally. A dc measuring instrument gives the average value of a pulsating wave. The average value is of use (1) when the effects of the current are proportional to the number of coulombs, as in electrolytic work and (2) when converting alternating to direct current.
Form Factor  The form factor of a wave is the ratio of rms value to average value. For a sine wave this is π/(2√2) = 1.11. This factor is important in that it enters equations for induced emf.
Inductive reactance, 2πfL or ωL, opposes an alternating current in inductance L. It is expressed in Ω. Reactance is usually denoted by the symbol X. Inductive reactance is denoted by XL. The current in an inductive reactance XL when connected across the voltage E is

I = E/XL = E/(2πfL)     A     (15.1.57)

This current lags the voltage by 90 electrical degrees. Inductance absorbs no energy. The energy stored in the magnetic field during each half cycle is returned to the source during the same half cycle.
Capacitive reactance is 1/(2πfC) = 1/ωC and is denoted by XC, where C is in F. If C is given in μF, XC = 10⁶/(2πfC). The current in a capacitive reactance XC when connected across voltage E is

I = E/XC = 2πfCE     A     (15.1.58)

This current leads the voltage by 90 electrical degrees. Pure capacitance absorbs no energy. The energy stored in the dielectric field during each half cycle is returned to the source during the same half cycle.
Impedance opposes the flow of alternating current and is expressed in Ω. It is denoted by Z. With resistance and inductance in series

Z = √(R² + XL²) = √[R² + (2πfL)²]     Ω     (15.1.59)

With resistance and capacitance in series

Z = √(R² + XC²) = √{R² + [1/(2πfC)]²}     Ω     (15.1.60)

With resistance, inductance, and capacitance in series

Z = √[R² + (XL − XC)²] = √{R² + [2πfL − 1/(2πfC)]²}     Ω     (15.1.61)

The current is

I = E/√{R² + [2πfL − 1/(2πfC)]²}     A     (15.1.62)

Phasor or Vector Representation  Sine waves of voltage and current can be represented by phasors, these phasors being proportional in magnitude to the waves that they represent. The angle between two phasors is also equal to the time angle existing between the two waves that they represent. Phasors may be combined as forces are combined in mechanics. Both graphical methods and the methods of complex algebra are used.
Impedances and also admittances may be similarly combined, either graphically or symbolically. The usual method is to resolve series impedances into their component resistances and reactances, then combine all resistances and all reactances, from which the resultant impedance is obtained. Thus Z1 + Z2 = √[(r1 + r2)² + (x1 + x2)²], where r1 and x1 are the components of Z1, etc.
Phase Difference  With resistance only in the circuit, the current and the voltage are in phase with each other; with inductance only in the circuit, the current lags the voltage by 90 electrical degrees; with capacitance only in the circuit, the current leads the voltage by 90 electrical degrees. With resistance and inductance in series, the voltage leads the current by angle θ where tan θ = XL/R. With resistance and capacitance in series, the voltage lags the current by angle θ where tan θ = XC/R. With resistance, inductance, and capacitance in series, the voltage may lag, lead, or be in phase with the current.

tan θ = (XL − XC)/R = (2πfL − 1/2πfC)/R     (15.1.63)

If XL > XC the voltage leads; if XL < XC the voltage lags; if XL = XC the current and voltage are in phase and the circuit is in resonance.
Power Factor  In ac circuits the power P = I²R where I is the current and R the effective resistance (see below). Also the power

P = EI cos θ     W     (15.1.64)

where θ is the phase angle between E and I. Cos θ is the power factor (pf) of the circuit. It can never exceed unity and is usually less than unity.

cos θ = P/EI     (15.1.65)

P is often called the true power. The product EI is the volt-amp (V·A) and is often called the apparent power. Active or energy current is the projection of the total current on the voltage phasor. Ie = I cos θ. Power = EIe. Reactive, quadrature, or wattless current Iq = I sin θ and is the component of the current that contributes no power but increases the I²R losses of the system. In power systems it should ordinarily be made low. The vars (volt-amp-reactive) are equal to the product of the voltage and reactive current. Vars = EIq. Kilovars = EIq/1,000.
Effective Resistance  When alternating current flows in a circuit, the losses are ordinarily greater than are given by the losses in the ohmic resistance alone. For example, alternating current tends to flow near the surface of conductors (skin effect). If iron is associated with the circuit, eddy-current and hysteresis losses result. These power losses may be accounted for by increasing the ohmic resistance to a value R, where R is the effective resistance, R = P/I². Since the iron losses vary as I^1.8 to I², little error results from this assumption.

Fig. 15.1.27 Resistor, inductor, and capacitor in series.

SOLUTION OF SERIES-CIRCUIT PROBLEM. Let a resistor R of 10 Ω, an inductor L of 0.06 H, and a capacitor C of 60 μF be connected in series across 120-V 60-Hz mains (Fig. 15.1.27). Determine (1) the impedance, (2) the current, (3) the voltage across the resistance, the inductance, the capacitance, (4) the power factor, (5) the power, and (6) the angle of phase difference. (1) ω = 2π60 = 377. XL = 0.06 × 377 = 22.6 Ω; XC = 1/(377 × 0.000060) = 44.2 Ω; Z = √[(10)² + (22.6 − 44.2)²] = 23.8 Ω; (2) I = 120/23.8 = 5.04 A; (3) ER = IR = 5.04 × 10 = 50.4 V; EL = IXL = 5.04 × 22.6 = 114.0 V; EC = IXC = 5.04 × 44.2 = 223 V; (4) tan θ = (XL − XC)/R = −21.6/10 = −2.16, θ = −65.2°, cos θ = pf = 0.420; (5) P = 120 × 5.04 × 0.420 = 254 W; P = I²R = (5.04)² × 10 = 254 W (check); (6) from (4) θ = −65.2°. Voltage lags. The phasor diagram to scale of this circuit is shown in Fig. 15.1.28. Since the current is common for all elements of the circuit, its phasor is laid horizontally along the axis of reference.
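The series-circuit solution above is easy to verify numerically. The Python sketch below is an added illustration (not part of the handbook text) that recomputes the reactances, impedance, current, element voltages, phase angle, power factor, and power for the same R, L, C, and supply.

```python
from math import pi, sqrt, atan2, degrees, cos

E, F = 120.0, 60.0
R, L, C = 10.0, 0.06, 60e-6          # values of Fig. 15.1.27

w = 2 * pi * F
xl, xc = w * L, 1 / (w * C)
z = sqrt(R**2 + (xl - xc)**2)
i = E / z
theta = atan2(xl - xc, R)            # phase angle, rad; negative means the voltage lags
pf = cos(theta)

print(f"XL = {xl:.1f} ohm   XC = {xc:.1f} ohm   Z = {z:.1f} ohm")
print(f"I = {i:.2f} A   ER = {i*R:.1f} V   EL = {i*xl:.1f} V   EC = {i*xc:.1f} V")
print(f"theta = {degrees(theta):.1f} deg   pf = {pf:.3f}   P = {E*i*pf:.0f} W")
```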


Fig. 15.1.28 Phasor diagram for a series circuit.

Resonance  If the voltage E and the resistance R [Eq. (15.1.62)] are fixed, the maximum value of current occurs when 2πfL − 1/(2πfC) = 0. The circuit so far as its terminals are concerned behaves like a noninductive resistor. The current I = E/R, the power P = EI, and the power factor is unity. The voltage across the inductor and the voltage across the capacitor are opposite and equal and may be many times greater than the circuit voltage. The frequency

f = 1/(2π√LC)     Hz     (15.1.66)

is the natural frequency of the circuit and is the frequency at which it will oscillate if the circuit is not acted upon by some external frequency. This is the principle of radio sending and receiving circuits. Resonant conditions of this type should be avoided in power circuits, as the piling up of voltage may endanger apparatus and insulation.

EXAMPLE. For what value of the inductance in the circuit (Fig. 15.1.27) will the circuit be in resonance, and what is the voltage across the inductor and capacitor under these conditions? From Eq. (15.1.66), L = 1/[(2πf)²C] = 0.1173 H. I = E/R = 120/10 = 12 A. LωI = I/Cω = 0.1173 × 377 × 12 = 530 V. This voltage is over four times the line voltage.

Parallel Circuits  Parallel circuits are used for nearly all power distribution. With several series circuits in parallel it is merely necessary to find the current in each and add all the current phasors vectorially to find the total current. Parallel circuits may be solved analytically. A series circuit has resistance r1 and inductive reactance x1. The conductance is

g1 = r1/(r1² + x1²) = r1/Z1²     S     (15.1.67)

and the susceptance is

b1 = x1/(r1² + x1²) = x1/Z1²     S     (15.1.68)

Conductance is not the reciprocal of resistance unless the reactance is zero; susceptance is not the reciprocal of reactance unless the resistance is zero. With inductive reactance the susceptance is negative; with capacitive reactance the susceptance is positive. If a second circuit has resistance r2 and capacitive reactance x2 in series, g2 = r2/(r2² + x2²) = r2/Z2²; b2 = x2/(r2² + x2²) = x2/Z2². The total conductance G = g1 + g2; the total susceptance B = b1 + b2. The admittance is

Y = √(G² + B²) = 1/Z     S     (15.1.69)

The energy current is EG; the reactive current is EB; the power is

P = E²G     W     (15.1.70)
vars = E²B     (15.1.71)

The power factor is

pf = G/Y     (15.1.72)

Also the following relations hold:

r = g/(g² + b²) = g/Y²     x = b/(g² + b²) = b/Y²     (15.1.73) (15.1.74)

Fig. 15.1.29 Parallel circuit and phasor diagrams.

SOLUTION OF A PARALLEL-CIRCUIT PROBLEM. In the parallel circuit of Fig. 15.1.29 it is desired to find the joint impedance, the total current, the power in each branch, the total power, and the power factor, when E = 100, f = 60, R1 = 2 Ω, R2 = 4 Ω, L1 = 0.00795 H, X1 = 2πfL1 = 3 Ω, C2 = 1,326 μF, X2 = 1/(2πfC2) = 2 Ω, Z1 = √(2² + 3²) = 3.6 Ω, and Y1 = 1/3.6 = 0.278 S. Solution: g1 = R1/(R1² + X1²) = 2/13 = 0.154 S; b1 = −3/13 = −0.231 S; Z2 = √(16 + 4) = 4.47 Ω; Y2 = 1/4.47 = 0.224 S; g2 = R2/(R2² + X2²) = 4/(16 + 4) = 0.2 S; b2 = 2/20 = 0.1 S; G = g1 + g2 = 0.154 + 0.2 = 0.354 S; B = b1 + b2 = −0.231 + 0.1 = −0.131 S; Y = √(G² + B²) = √[0.354² + (−0.131)²] = 0.377 S, and joint impedance Z = 1/0.377 = 2.65 Ω. Phase angle θ = tan⁻¹(−0.131/0.354) = −20.3°. I = EY = 100 × 0.377 = 37.7 A; P1 = E²g1 = 100² × 0.154 = 1,540 W; P2 = E²g2 = 100² × 0.2 = 2,000 W; total power = E²G = 100² × 0.354 = 3,540 W. Power factor = cos θ = 3,540/(100 × 37.7) = 93.8 percent.

With parallel circuits, unity power factor is obtained when the algebraic sum of the quadrature currents is zero. That is, b1 + b2 + b3 + ⋯ = 0.

Fig. 15.1.30 Three-phase connections. (a) Y connection; (b) Δ connection.

Three-Phase Circuits  Ac generators are usually wound with three armature circuits which are spaced 120 electrical degrees apart on the armature. Hence these coils generate emfs 120 electrical degrees apart. The coils are connected either in Y (star) or in Δ (mesh) as shown in Fig. 15.1.30. Whether Y- or Δ-connected, with a balanced load, the three coil emfs Ec and the three coil currents Ic are equal. In the Y connection the line and coil currents are equal, but the line emfs EAB, EBC, ECA are √3 times in magnitude the coil emfs EOA,

EOB, EOC, since each is the phasor difference of two coil emfs. In the delta connection the line and coil emfs are equal, but I, the line current, is √3 Ic, the coil current, i.e., it is the phasor difference of the currents in the two coils connected to the line. The power of a coil is EcIc cos θ, so that the total power is 3EcIc cos θ. If θ is the angle between coil current and coil voltage, the angle between line current and line voltage will be 30° ± θ. In terms of line current and emf, the power is √3 EI cos θ. A fourth or neutral conductor connected to O is frequently used with the Y connection. The neutral point O is frequently grounded in transmission and distribution circuits. The coil


emfs are assumed to be sine waves. Under these conditions they balance, so that in the delta connection the sum of the two coil emfs at each instant is balanced by the third coil emf. Even though the third, ninth, fifteenth . . . harmonics, 3(2n + 1)f, where n = 0 or an integer, exist in the coil emfs, they cannot appear between the three external line conductors of the three-phase Y-connected circuit. In the delta circuit, the same harmonics 3(2n + 1)f cause local currents to circulate around the mesh. This may cause a very appreciable heating. In a three-phase system the power is
P = √3 EI cos θ   W   (15.1.75)

Advantages of Polyphase Power The advantages of polyphase power over single-phase power are as follows. The output of synchronous generators and most other rotating machinery is from 60 to 90 percent greater when operated polyphase than when operated single phase; pulsating fluxes and corresponding iron losses which occur in many common types of machinery when operated single phase are negligible when operated polyphase; with balanced polyphase loads polyphase power is constant whereas with single phase the power fluctuates over wide limits during the cycle. Because of its minimum number of wires and the fact that it is not easily unbalanced, the three-phase system has for the most part superseded other polyphase systems.

the power factor is
pf = P/(√3 EI)   (15.1.76)
and the kilovolt-amperes are
kV·A = √3 EI/1,000   (15.1.77)
where E and I are line voltages and currents.
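As a numerical illustration of Eqs. (15.1.75) to (15.1.77), the sketch below (not from the handbook; the line voltage, current, and power factor are assumed values) computes the power, apparent power, and power factor of a balanced three-phase load.

# Minimal sketch (illustrative values only): three-phase line quantities
# per Eqs. (15.1.75) to (15.1.77).
import math

E = 480.0     # line-to-line voltage, V (assumed value)
I = 100.0     # line current, A (assumed value)
pf = 0.85     # power factor, cos(theta) (assumed value)

P = math.sqrt(3) * E * I * pf            # power, W            (15.1.75)
kva = math.sqrt(3) * E * I / 1000        # apparent power, kVA (15.1.77)
pf_check = P / (math.sqrt(3) * E * I)    # recovers pf         (15.1.76)

print(P / 1000, kva, pf_check)           # ~70.7 kW, ~83.1 kVA, 0.85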

Two-Phase Circuits Two-phase generators have two windings spaced 90 electrical degrees apart on the armature. These windings generate emfs differing in time phase by 90°. The two windings may be independent and power transmitted to the receiver through two single-phase circuits which are entirely insulated from each other. The two circuits may be combined into a two-phase three-wire circuit such as is shown in Fig. 15.1.31, where OA and OB are the generator circuits

Fig. 15.1.31 Two-phase, three-wire circuit.

(or transformer secondaries) and AO and BO are the load circuits. The wire OO is the common wire and under balanced conditions carries a current √2 times the current in wires AA and BB. For example, if Ic is the coil current, √2 Ic will be the value of the current in the common conductor OO. If Ec is the voltage across OA or OB, √2 Ec will be the voltage across AB. The power of a two-phase circuit is twice the power in either coil if the load is balanced. Normally, the voltages OA and OB are equal, and the current is the same in both coils. Owing to nonsymmetry and the high degree of unbalancing of this system even under balanced loads, it is not used at the present time for transmission and is little used for distribution. Four-Phase Circuit A four-phase or quarter-phase circuit is shown in Fig. 15.1.32. The windings AC and BD may be independent or

ELECTRICAL INSTRUMENTS AND MEASUREMENTS

Electrical measuring devices that merely indicate, such as ammeters and voltmeters, are called instruments; devices that totalize with time such as watthour meters and ampere-hour meters are called meters. (See also Sec. 16.) Most types of electrical instruments are available with digital readout. DC Instruments Direct current and voltage are both measured with an indicating instrument based on the principle of the D'Arsonval galvanometer. A coil with steel pivots and turning in jewel bearings is mounted in a magnetic field produced by permanent magnets. The motion is restrained by two small flat coiled springs, which also serve to conduct the current to the coil. The deflections of the coil are read with a light aluminum pointer attached to the coil and moving over a graduated scale. The same instrument may be used for either current or voltage, but the method of connecting in circuit is different in the two cases. Usually, however, the coil of an instrument to be used as an ammeter is wound with fewer turns of coarser wire than an instrument to be used as a voltmeter and so has lower resistance. The instrument itself is frequently called a millivoltmeter. It cannot be used alone to measure voltage of any magnitude since its resistance is so low that it would be burned out if connected across the line. Hence a resistance r in series with the coil is necessary, as indicated in Fig. 15.1.33a, in which rc is the resistance of the coil. From 0.2 to 750 V this resistance is usually within the instrument. For higher voltages an external resistance R, called an extension coil or multiplier (Fig. 15.1.33b), is necessary. Let e be the reading of the instrument, in volts (Fig. 15.1.33b), r the internal resistance of the instrument, including r and rc of Fig. 15.1.33a, and R the resistance of the multiplier. Then the total voltage is
E = e(R + r)/r   (15.1.78)

It is clear that by using suitable values of R a voltmeter can be made to have several scales.
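A minimal sketch of this calculation is given below (not from the handbook); it solves Eq. (15.1.78) for the multiplier resistance, using an assumed 150-V movement of 100 Ω/V extended to 1,500 V.

# Minimal sketch (illustrative values): multiplier resistance from Eq. (15.1.78),
# E = e(R + r)/r, solved for R.
def multiplier_resistance(e_full_scale, r_internal, e_desired):
    """Series resistance R needed so a movement reading e_full_scale volts
    (internal resistance r_internal ohms) indicates full scale at e_desired volts."""
    return r_internal * (e_desired - e_full_scale) / e_full_scale

# Assumed example: a 150-V movement of 100 ohm/V extended to read 1,500 V.
r = 150 * 100                                 # internal resistance, ohm
print(multiplier_resistance(150, r, 1500))    # 135,000 ohm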

Fig. 15.1.33 Voltmeters. (a) Internal resistance; (b) with multiplier.

Fig. 15.1.32 Four-phase or quarter-phase circuit.

connected at O. The voltages AC and BD are 90 electrical degrees apart as in two-phase circuits. If a neutral wire O-O is added, three different voltages can be obtained. Let E1 = voltage between O-A, O-B, O-C, O-D. Voltages between A-B, B-C, C-D, D-A = √2 E1. Voltages between A-C, B-D = 2E1. Because of this multiplicity of voltages and the fact that polyphase power apparatus and lamps may be connected at the same time, this system is still used to some extent in distribution.
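The sketch below (illustrative only; the coil current and voltages are assumed values) tabulates the √2 relations quoted above for the two-phase three-wire circuit and the four-phase circuit.

# Minimal sketch: voltage and current relations for two-phase three-wire
# and four-phase circuits (illustrative coil values assumed).
import math

Ic = 20.0    # coil current, A (assumed)
Ec = 115.0   # coil voltage, V (assumed)

I_common = math.sqrt(2) * Ic    # current in the common wire OO of a two-phase, three-wire circuit
E_AB     = math.sqrt(2) * Ec    # voltage across the outer wires AB

E1 = 115.0                      # four-phase: voltage O-A, O-B, O-C, O-D (assumed)
E_adjacent = math.sqrt(2) * E1  # voltage A-B, B-C, C-D, D-A
E_diagonal = 2 * E1             # voltage A-C, B-D

print(I_common, E_AB, E_adjacent, E_diagonal)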

Instruments themselves can only carry currents of the magnitudes of 0.01 to 0.06 A. To measure larger values of current the instrument is provided with a shunt R (Fig. 15.1.34). The current divides inversely as the resistances r and R of the instrument and the shunt. A low resistance r within the instrument is connected in series with the coil. This permits some adjustment to the deflection so that the instrument can be adapted to its shunt. Usually most of the current flows through the shunt, and the current in the instrument is negligible in comparison. Up to 50 and 75 A the shunt can be incorporated within the instrument. For larger currents it is usually necessary to have the shunt external to


Fig. 15.1.34 Millivoltmeter with shunt.

the instrument and connect the instrument to the potential terminals of the shunt by means of leads. Any given instrument may have any number of ranges by providing it with a sufficient number of shunts. The range of the usual instrument of this type is approximately 50 mV. Although the same instrument may be used for voltmeters or ammeters, the moving coils of voltmeters are usually wound with more turns of finer wire. They take approximately 0.01 A so that their resistance is approximately 100 /V. Instruments used as ammeters alone operate with 0.01 to 0.06 A. Permanent-magnet moving-coil instruments may be used to measure unidirectional pulsating currents or voltages and in such cases will indicate the average value of the periodically varying current or voltage. AC Instruments Instruments generally used for alternating currents may be divided into five types: electrodynamometer, iron-vane, thermocouple, rectifier, and electronic. Instruments of the electrodynamometer type, the most precise, operate on the principle of one coil carrying current, turning in the magnetic field produced by a second coil carrying current taken from the same circuit. If these circuits or coils are connected in series, the torque exerted on the moving system for a given relative position of the coil system is proportional to the square of the current and is not dependent on the direction of the current. Consequently, the instrument will have a compressed scale at the lower end and will usually have only the upper two-thirds of the scale range useful for accurate measurement. Instruments of this type ordinarily require 0.04 to 0.08 A or more in the moving-coil circuit for full-scale deflection. They read the rms value of the alternating or pulsating current. The wattmeter operates on the electrodynamometer principle. The fixed coil, however, is energized by the current of the circuit, and the moving coil is connected across the potential in series with high resistance. Unless shielded magnetically the foregoing instruments will not, in general, indicate so accurately on direct as on alternating current because of the effects of external stray magnetic fields. Also reversed readings should be taken. Iron vane instruments consist of a fixed coil which actuates magnetically a light movable iron vane mounted on a spindle; they are rugged, inexpensive, and may be had in ranges of 30 to 750 V and 0.05 to 100 A. They measure rms values and tend to have compressed scales as in the case of electrodynamometer instruments. The compressed part of the scale may, however, be extended by changing the shape of the vanes. Such instruments operate with direct current and are accurate to within 1 percent or so. AC instruments of the induction type (Westinghouse Electric Corp.) must be used on ac circuits of the frequency for which they have been designed. They are rugged and relatively inexpensive and are used principally for switchboards where a long-scale range and a strong deflecting torque are of particular advantage. Thermocouple instruments operate on the Seebeck effect. The current to be measured is conducted through a heater wire, and a thermojunction is either in thermal contact with the heater or is very close to it. The emf developed in the thermojunction is measured by a permanentmagnet dc type of instrument. By controlling the shape of the air gap, a nearly uniform scale is obtained. 
This type of instrument is well adapted to the measurement of high-frequency currents or voltages, and since it operates on the heating effect of current, it is convenient as a transfer instrument between direct current and alternating current. In the rectifier-type instrument the ac voltage or current is rectified, usually by means of a small copper oxide or a selenium-type rectifier, connected in a bridge circuit to give full-wave rectification (Fig. 15.1.35). The rectified current is measured with a dc permanent-magnet-type instrument M. The instrument measures the average value of the half waves that have been rectified, and with the sine waves, the average

Fig. 15.1.35 Rectifier-type instrument.

value is 0.9 the rms value. The scale is calibrated to indicate rms values. With nonsinusoidal waves the ratio of average to rms may vary considerably from 0.9 so that the instrument may be in error up to 5 percent from this cause. This type of instrument is widely used in the measurement of high-frequency voltages and currents. Electronic voltmeters operate on the principle of the amplification which can be obtained with a transistor. Since the emf to be measured is applied to the base, the instruments take practically no current and hence are adapted to measure potential differences which would change radically were any appreciable current taken by the measuring device. This type of instrument can measure voltages from a few tenths of a volt to several hundred volts, and with a potential divider, up to thousands of volts. They are also adapted to frequencies up to 100 MHz. Particular care must be used in selecting instruments for measuring the nonsinusoidal waves of rectifier and controlled rectifier circuits. The electrodynamic, iron vane, and thermocouple instruments will read rms values. The rectifier instrument will read average values, while the electronic instrument may read either rms or average value, depending on the type. Power Measurement in Single-Phase Circuits Wattmeters are not rated primarily in W, but in A and V. For example, with a low power factor the current and voltage coils may be overloaded and yet the needle be well on the scale. The current coil may be carrying several times its rated current, and yet the instrument reads zero because the potential circuit is not closed, etc. Hence it is desirable to use both an ammeter and a voltmeter in conjunction with a wattmeter when measuring power (Fig. 15.1.36a). The instruments themselves consume appreciable power, and correction is often necessary unless these losses are negligible compared with the power being measured. For example, in Fig. 15.1.36a, the wattmeter measures the I 2 R loss in its own current coil and in the ammeter (1 to 2 W each), as well as the loss in the voltmeter (E 2/R where R is the resistance of the voltmeter). The losses in the ammeter and voltmeter may be eliminated by short-circuiting the

Fig. 15.1.36 Connections of instruments to single-phase load.

ammeter and disconnecting the voltmeter when reading the wattmeter. If the wattmeter is connected as shown in Fig. 15.1.36b, it measures the power taken by its own potential coil (E²/Rp), which at 110 V is 5 to 7 W. (Rp is the resistance of the potential circuit.) Frequently correction must be made for this power. Power Measurement in Polyphase Circuits; Three-Wattmeter Method Let ao, bo, and co be any Y-connected three-phase load

(Fig. 15.1.37). Three wattmeters with their current coils in each line and


their potential circuits connected to neutral measure the total power, since the power in each load is measured by one of the wattmeters. The connection oo may, however, be broken, and the total power is still the sum of the three readings; i.e., the power P = P1 + P2 + P3. This method is applicable to any system of n wires. The current coil of one

Fig. 15.1.39 Two-wattmeter method.

the power factor is a function of P1/P2. Table 15.1.9 gives values of power factor for different ratios of P1/P2. P = P2 + P1 when θ < 60°.
Fig. 15.1.37 Three-wattmeter method.

wattmeter is connected in each of the n wires. The potential circuit of each wattmeter is connected between its own phase wire and a junction in common with all the other potential circuits. The wattmeters must be connected symmetrically, and the readings of any that read negative must be given the negative sign. In the general case any system of n wires requires at least n − 1 wattmeters to measure the power correctly. The n − 1 wattmeters are connected in series with n − 1 wires. The potential circuit of each is connected between its own phase wire and the wire in which no wattmeter is connected (Fig. 15.1.38).

When θ = 60°, pf = cos 60° = 0.5, P1 = EI cos (30° + 60°) = 0, and P = P2. When θ > 60°, pf < 0.5 and P = P2 − P1. Also,
tan θ = √3 (P2 − P1)/(P2 + P1)   (15.1.80)
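A short sketch (not part of the handbook) applies Eq. (15.1.80) to convert a pair of wattmeter readings into a power factor; the printed values agree with Table 15.1.9.

# Minimal sketch: power factor from the two wattmeter readings, using
# tan(theta) = sqrt(3)*(P2 - P1)/(P2 + P1), Eq. (15.1.80).
import math

def power_factor_two_wattmeter(p1, p2):
    """p1, p2 are the two wattmeter readings (p1 may be negative)."""
    theta = math.atan(math.sqrt(3) * (p2 - p1) / (p2 + p1))
    return math.cos(theta)

# Spot checks against Table 15.1.9 (P1/P2 = 1 -> 1.000, 0.5 -> 0.866, 0 -> 0.500)
print(power_factor_two_wattmeter(1000, 1000))   # ~1.000
print(power_factor_two_wattmeter(500, 1000))    # ~0.866
print(power_factor_two_wattmeter(0, 1000))      # ~0.500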

In a polyphase wattmeter the two single-phase wattmeter elements are combined to act on a single spindle. Hence the adding and subtracting of the individual readings are done automatically. The total power is indicated on one scale. This type of instrument is almost always used on switchboards. The connections of a portable type are shown in Fig. 15.1.40.

Fig. 15.1.40 Connections for a polyphase wattmeter in a three-phase circuit.

In the foregoing instrument connections, Y-connected loads are shown. These methods are equally applicable to delta-connected loads. The two-wattmeter method (Fig. 15.1.39) is obviously adapted to the two-phase three-wire system (Fig. 15.1.31).

Fig. 15.1.38 Power measurement in an n-wire system.

The thermal watt converter is also used to measure power. This instrument produces a dc voltage proportional to three-phase ac power. Three-Phase Systems The three-wattmeter method (Fig. 15.1.37) is applicable to any three-phase system. It is commonly used with the three-phase four-wire system. If the loads are balanced, P1 = P2 = P3 and the power P = 3P1. The two-wattmeter method is most commonly used with three-phase three-wire systems (Fig. 15.1.39). The current coils may be connected in any two wires, the potential circuits being connected to the third. It will be recognized that this is adapting the method of Fig. 15.1.38 to three wires. With balanced loads the readings of the wattmeters are P1 = EI cos (30° + θ), P2 = EI cos (30° − θ), and P = P2 + P1. θ is the angle of phase difference between coil voltage and current. Since
P1/P2 = cos (30° + θ)/cos (30° − θ)   (15.1.79)

Measurement of Energy Watthour meters record the energy taken by a circuit over some interval of time. Correct registration occurs if the angular velocity of the rotating element at every instant is proportional to the power. The method of accomplishing this with dc meters is illustrated in Fig. 15.1.41. The meter is in reality a small motor. The field coils FF are in series with the line. The armature A is connected across the line, usually in series with a resistor R. The movable field coil F is in series with the armature A and serves to compensate for friction. C is a small commutator, either of copper or of silver, and the two small brushes are usually of silver. An aluminum disk, rotating between the poles of permanent magnets M, acts as a magnetic brake the retarding torque of which is proportional to the angular velocity of the disk. A small worm and the gears G actuate the recording dials.

Table 15.1.9 Ratio P1/P2 and Power Factor

P1/P2    Power factor      P1/P2    Power factor      P1/P2    Power factor      P1/P2    Power factor
1.0      1.000             0.4      0.804             −0.1     0.427             −0.6     0.142
0.9      0.996             0.3      0.732             −0.2     0.360             −0.7     0.102
0.8      0.982             0.2      0.656             −0.3     0.296             −0.8     0.064
0.7      0.956             0.1      0.576             −0.4     0.240             −0.9     0.030
0.6      0.918             0.0      0.50              −0.5     0.188             −1.0     0.000
0.5      0.866


Fig. 15.1.41 DC watthour meter.

The following relation, or an equivalent, holds with most types of meter. With each revolution of the disk, K Wh are recorded, where K is the meter constant found usually on the disk. It follows that the average watts P over any period of time t s is
P = 3,600KN/t


the flux becomes so large in magnitude that the transformer overheats. Semiconductors that break down at safe voltages and short current-transformer secondaries are available to ensure that the secondary is closed. The secondaries of both potential and current transformers should be well grounded at one point (Figs. 15.1.42 and 15.1.43). Instrument transformers introduce slight errors because of small variations in their ratio with load. Also there is slight phase displacement in both current and potential transformers. The readings of the instruments must be multiplied by the instrument transformer ratios. The scales of switchboard instruments are usually calibrated to take these ratios into account.

(15.1.81)

where N is the revolutions of the disk during that period. Hence, the meter may be calibrated by connecting standardized instruments to measure the average power taken by the load and by counting the revolutions N for t s. Near full load, if the meter registers fast, the magnets M should be moved outward radially; if it registers slow, the magnets should be moved inward. If the meter registers fast at light (5 to 10 percent) load, the starting coil F should be moved further away from the armature; if it registers slow, F should be moved nearer the armature. A meter should not register more than 1.5 percent fast or slow, and with calibrated standards it can be made to register to within 1 percent of correct. The induction watthour meter is used with alternating current. Although the dc meter registers correctly with alternating current, it is more expensive than the induction type, the commutator and brushes may cause trouble, and at low power factors compensation is necessary. In the induction watthour meter the driving torque is developed in the aluminum disk by the joint action of the alternating magnetic flux produced by the potential circuit and by the load current. The driving torque and the retarding torque are both developed in the same aluminum disk, hence no commutator and brushes are necessary. The rotating element is very light, and hence the friction torque is small. Equation (15.1.81) applies to this type of meter. When calibrating, the average power W for t s is determined with a calibrated wattmeter. The friction compensation is made at light loads by changing the position of a small hollow stamping with respect to the potential lug. The meter should also be adjusted at low power factor (0.5 is customary). If the meter is slow with lagging current, resistance should be cut out of the compensating circuit; if slow with leading current, resistance should be inserted. Power-Factor Measurement The usual method of determining power factor is by the use of voltmeter, ammeter, and wattmeter. The wattmeter gives the watts of the circuit, and the product of the voltmeter reading and the ammeter reading gives the volt-amperes. The power factor is the ratio of the two [see Eqs. (15.1.65) and (15.1.76)]. Also singlephase and three-phase power-factor indicators, which can be connected directly in circuit, are on the market. Instrument Transformers

With voltages higher than 600 V, and even at 600 V, it becomes dangerous and inaccurate to connect instruments and meters directly into power lines. It is also difficult to make potential instruments for voltages in excess of 600 V and ammeters in excess of 60-A ratings. To insulate such instruments from high voltage and at the same time to permit the use of low-range instruments, instrument transformers are used. Potential transformers are identical with power transformers except that their volt-ampere rating is low, being 40 to 500 W. Their primaries are wound for line voltage and their secondaries for 110 V. Current transformers are designed to go in series with the line, and the rated secondary current is 5 A. The secondary of a current transformer should always be closed when current is flowing; it should never be allowed to become open circuited under these conditions. When open-circuited the voltage across the secondary becomes so high as to be dangerous and

Fig. 15.1.42 Single-phase connections of instruments with transformers.

Figure 15.1.42 shows the use of instrument transformers to measure the voltage, current, power, and kilowatthours of a single-phase load. Figure 15.1.43 shows the connections that would be used to measure the voltage, current, and power of a 26,400-V 600-A three-phase load.
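The sketch below (not from the handbook; the secondary readings are assumed values) applies the ratio corrections for a connection like Fig. 15.1.43, with a 26,400/110-V potential transformer and a 600/5-A current transformer.

# Minimal sketch: converting secondary-side readings to primary values for a
# metering setup like Fig. 15.1.43 (ratios assumed: 26,400/110-V PT, 600/5-A CT).
pt_ratio = 26400 / 110      # potential-transformer ratio
ct_ratio = 600 / 5          # current-transformer ratio

v_sec = 112.0               # voltmeter reading on the secondary, V (assumed)
i_sec = 4.2                 # ammeter reading on the secondary, A (assumed)
w_sec = 410.0               # wattmeter reading on the secondary, W (assumed)

V = v_sec * pt_ratio                 # primary line voltage, V
I = i_sec * ct_ratio                 # primary line current, A
P = w_sec * pt_ratio * ct_ratio      # primary power per wattmeter element, W

print(V, I, P)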

Fig. 15.1.43 transformers.

Three-phase connections of instruments and instrument

Measurement of High Voltages Potential transformers such as those shown in Figs. 15.1.42 and 15.1.43 may be used even for very high voltages, but for voltages above 132 kV they become so large and expensive that they are used only sparingly. A convenient method used with testing transformers is the employment of a voltmeter coil, which consists of a coil of a few turns interwoven in the high-voltage winding and insulated from it. The voltage ratio is the ratio of the turns in the high-voltage winding to those in the voltmeter coil. A capacitance voltage divider consists of two or more capacitors connected in series across the high voltage to be measured. A high-impedance voltmeter, such as an electronic one, is connected across the capacitor at the grounded end. The high voltage V = VmCm/C, where Vm is the voltmeter reading, C the capacitance (in µF) of the entire divider, and Cm the equivalent capacitance (in µF) of the capacitor at the grounded end. A bushing potential device consists of a high-voltage-transformer bushing having a capacitance tap brought out from one of the metallic electrodes within the bushing which is near ground potential. This device is obviously a capacitance voltage divider. For testing, sphere gaps are used for the very high voltages. Calibration data for sphere gaps are given in ANSI/IEEE Std. 4-1978, Standard Techniques for High Voltage Testing. Even when it is not being used for the measurement of


voltage, it is frequently advisable to connect a sphere gap in parallel with the specimen being tested so as to prevent overvoltages. The gap is set to a slightly higher voltage than that which is desired. Measurement of Resistance Voltmeter-Ammeter Method A common method of measuring resistance, known as the voltmeter-ammeter or fall-of-potential method, makes use of an ammeter and a voltmeter. In Fig. 15.1.44, the resistance to be measured is R. The current in the resistor R is I A, which is measured by the ammeter A in series. The drop in potential across the resistor R is measured by the voltmeter V. The current shunted by the voltmeter is so small that it may generally be neglected. A correction

resistance of the voltmeter must be comparable with the unknown resistance, or the deflection of the instrument will be so small that the results will be inaccurate. To determine the impressed voltage E, the same voltmeter is used. The switch S connects S and B for this purpose. With these two readings, the unknown resistance is
R = r(E − e)/e   (15.1.82)

where e is the deflection of the voltmeter when in series with the resistance to be measured, as when S is at A. If a special voltmeter, having a resistance of 100 kΩ per 150 V, is available, a resistance of the order of 2 to 3 MΩ may be measured very accurately. When the insulation resistance is too high to be measured with a voltmeter, a sensitive galvanometer may be used. The connections for measuring the insulation resistance of a cable are shown in Fig. 15.1.46. The battery should have an emf of at least 100 V. Radio B batteries are convenient for this purpose. The method involves comparing the unknown resistance with a standard 0.1 MΩ. To calibrate the galvanometer the cable is short-circuited (dotted line) and the switch S is thrown to position (a). Let the galvanometer deflection be D1 and the reading of the

Fig. 15.1.44 Voltmeter-ammeter method for resistance measurement.

may be applied if necessary, for the resistance of the voltmeter is generally given with the instrument. The potential difference divided by the current gives the resistance included between the voltmeter leads. As a check, determinations are generally made with several values of current, which may be varied by means of the controlling resistor r. If the resistance to be measured is that of the armature of a dc machine and the voltmeter leads are placed on the brush holders, the resistance determined will include that of the brush contacts. To measure the resistance of the armature alone, the voltmeter leads should be placed directly on the commutator segments on which the brushes rest but not under the brushes. Insulation Resistance Insulation resistance is so high that it is usually given in megohms (10⁶ ohms, MΩ) rather than in ohms. Insulation resistance tests are important, for although they may not be conclusive they frequently reveal flaws in insulation, poor insulating material, presence of moisture, etc. Such tests are applied to the insulation of electrical machinery from the windings to the frame, to underground cables, to insulators, capacitors, etc. For moderately low resistances, 1 to 10 MΩ, the voltmeter method given in Fig. 15.1.45, which shows insulation measurement to the frame of the field winding of a generator, may be used. To measure the current when a voltage E is impressed across the resistor R, a high-reading voltmeter V is connected in series with R. The current under this condition with the switch connecting S and A is E/(R + r), where r is the resistance of the voltmeter. A high-resistance voltmeter is necessary, since the method is in reality a comparison of the unknown insulation resistance R with the known resistance r of the voltmeter. Hence, the

Fig. 15.1.45 Voltmeter method for insulation resistance measurement.

Fig. 15.1.46 Measurement of insulation resistance with a galvanometer.

Ayrton shunt S1. The short circuit is then removed. The 0.1 MΩ is left in circuit since it is usually negligible in comparison with the unknown resistance X. Let the reading of the galvanometer now be D2 and the reading of the shunt S2. Then
X = 0.1S2D1/(S1D2)   MΩ   (15.1.83)
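The sketch below (not from the handbook; all readings are assumed for illustration) evaluates both insulation-resistance relations: Eq. (15.1.82) for the series-voltmeter method and Eq. (15.1.83) for the galvanometer comparison.

# Minimal sketch: the two insulation-resistance calculations described above,
# Eq. (15.1.82) for the series-voltmeter method and Eq. (15.1.83) for the
# galvanometer comparison (all numerical values assumed for illustration).
def r_series_voltmeter(E, e, r_voltmeter):
    """R = r(E - e)/e, Eq. (15.1.82); E = impressed volts, e = voltmeter deflection."""
    return r_voltmeter * (E - e) / e

def r_galvanometer(D1, S1, D2, S2, r_std_megohm=0.1):
    """X = 0.1*S2*D1/(S1*D2) megohms, Eq. (15.1.83), with a 0.1-megohm standard."""
    return r_std_megohm * S2 * D1 / (S1 * D2)

print(r_series_voltmeter(E=500, e=20, r_voltmeter=100_000))   # 2,400,000 ohm = 2.4 megohms
print(r_galvanometer(D1=250, S1=1, D2=50, S2=100))            # 50 megohms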

When the switch S is thrown to position (b), the cable is short-circuited through the 0.1 MΩ and becomes discharged. The Megger insulation tester is an instrument that indicates insulation resistance directly on a scale. It consists of a small hand or motor-driven generator which generates 500 V, 1,000 V, 2,500 V, or 5,000 V. A clutch slips when the voltage exceeds the rated value. The current through the unknown resistance flows through a moving element consisting of two coils fastened rigidly together, but which move in different portions of the magnetic field. A pointer attached to the spindle of the moving element indicates the insulation resistance directly. These instruments have a range up to 10,000 MΩ and are very convenient where portability and convenience are desirable. The insulation resistance of electrical machinery may be of doubtful significance as far as dielectric strength is concerned. It varies widely with temperature, humidity, and cleanliness of the parts. When the insulation resistance falls below the prescribed value, it can (in most cases of good design) be brought to the required standard by cleaning and drying the machine. Hence it may be useful in determining whether or not the insulation is in proper condition for a dielectric test. IEEE Std. 62-1978 specifies minimum values of insulation resistance in MΩ = (rated voltage)/(rating in kW + 1,000). If the operating voltage is higher than the rated voltage, the operating voltage should be used. The rule specifies that a dc voltage of 500 be used in testing. If not, the voltage should be specified. Wheatstone Bridge Resistors from a fraction of an ohm to 100 kΩ and more may be measured with a high degree of precision with the Wheatstone bridge (Fig. 15.1.47). The bridge consists of four resistors


Fig. 15.1.47 Wheatstone bridge.

ABCX connected as shown. X is the unknown resistance; A and B are ratio arms, the resistance units of which are in even decimals, as 1, 10, 100, etc. C is the rheostat arm. A battery or low-voltage source of direct current is connected across ab. A galvanometer G of moderate sensitivity is connected across cd. The values of A and B are so chosen that three or four significant figures in the value of C are obtained. As a first approximation it is well to make A and B equal. When the bridge is in balance,
X/C = A/B   (15.1.84)
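A brief sketch of the balance relation follows (illustrative settings only); the Kelvin double bridge, Eq. (15.1.85), uses the same ratio with R in place of C.

# Minimal sketch: unknown resistance at balance, X = C*A/B from Eq. (15.1.84).
def wheatstone_unknown(A, B, C):
    """A, B = ratio arms; C = rheostat-arm setting at balance."""
    return C * A / B

print(wheatstone_unknown(A=10, B=1000, C=472.6))   # 4.726 ohm (illustrative settings)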

The positions of the battery and galvanometer are interchangeable. There are many modifications of the bridge which adapt it to measurements of very low resistances and also to ac measurements. Kelvin Double Bridge The simple Wheatstone bridge is not adapted to measuring very low resistances since the contact resistances of the test specimen become comparable with the specimen resistance. This error is avoided in the Kelvin double bridge, the diagram of which is shown in Fig. 15.1.48. The specimen X, which may be a short length of copper wire or bus bar, is connected in series with an adjustable calibrated resistor R whose resistance is comparable with that of the specimen. The arms A and B of the bridge are ratio arms, usually with decimal values of 1, 10, 100 Ω. One terminal of the galvanometer is connected to X and R by means of two resistors a and b. If these resistors are set so that a/b = A/B, the contact resistance r between X and R is eliminated in the measurement. The contact resistances at c and d have no effect since at balance the galvanometer current is zero. The contact resistances at f and e need only be negligible compared with the resistances of arms A and B, both of which are reasonably high. By means of the variable resistor Rh the value of current, as indicated by ammeter A, may be adjusted to give the necessary sensitivity. When the bridge is in balance,
X/R = A/B   (15.1.85)

Fig. 15.1.49 Potentiometer principle.

be connected to mm through the galvanometer G. The potentiometer is standardized by throwing Sw to the standard-cell side, setting mm so that their positions on ab and bc correspond to the emf of the standard cell. The rheostat R is then adjusted until G reads zero. (In commercial potentiometers a dial which may be set directly to the emf of the standard cell is usually provided.) The unknown emf is measured by throwing Sw to EMF and adjusting m and m until G reads zero. The advantage of this method of measuring emf is that when the potentiometer is in balance no current is taken from either the standard cell or the source of emf. Potentiometers seldom exceed 1.6 V in range. To measure voltage in excess of this, a volt box which acts as a multiplier is used. To measure current, the voltage drop across a standard resistor of suitable value is measured with the potentiometer. For example, with 50 A a 0.01- standard resistance gives a voltage drop of 0.5 V which is well within the range of the potentiometer. Potentiometers of low range are used extensively with thermocouple pyrometers. Figure 15.1.49 merely illustrates the principle of the potentiometer. There are many modifications, conveniences, etc., not shown in Fig. 15.1.49. DC GENERATORS

All electrical machines are composed of a magnetic circuit of iron (or steel) and an electric circuit of copper. In a generator the armature conductors are rotated so that they cut the magnetic flux coming from and entering the field poles. In the dc generator (except the unipolar type) the emf induced in the individual conductors is alternating, but this is rectified by the commutator and brushes, so that the current to the external circuit is unidirectional. The induced emf in a generator (or motor) is
E = φZNP/(60P′)   V   (15.1.86)

where φ = flux in webers entering the armature from one north pole; Z = total number of conductors on the armature; N = speed, r/min; P = number of poles; and P′ = number of parallel paths through the armature. Since with a given generator Z, P, and P′ are fixed, the induced emf is
E = KφN   V   (15.1.87)

where K is a constant. When the armature delivers current, the terminal volts are Fig. 15.1.48 Kelvin double bridge. Potentiometer The principle of the potentiometer is shown in Fig. 15.1.49. ab is a slide wire, and bc consists of a number of equal individual resistors between contacts. A battery Ba the emf of which is approximately 2 V supplies current to this wire through the adjustable rheostat R. A slider m makes contact with ab, and a contactor m connects with the contacts in bc. A galvanometer G is in series with the wire connecting to m. By means of the double-throw double-pole switch Sw, either the standard cell or the unknown emf (EMF ) may

V = E − IaRa   (15.1.88)

where Ia is the armature current and Ra the armature resistance including the brush and contact resistance, which vary somewhat. There are three standard types of dc generators: the shunt generator, the series generator, and the compound generator. Shunt Generator The field of the shunt generator in series with its rheostat is connected directly across the armature as shown in Fig. 15.1.50. This machine maintains approximately constant terminal voltage over its working range of load. An external characteristic of the


Table 15.1.10 Approximate Test Performance of Compound-Wound DC Generators with Commutating Poles

Fig. 15.1.50 Shunt generator.

generator is shown in Fig. 15.1.51. As load is applied the terminal voltage drops owing to the armature-resistance drop [Eq. (15.1.88)] and armature reaction which decreases the flux. The drop in terminal voltage reduces the field current which in turn reduces the flux, hence the induced emf, etc. At some point B, usually well above rated current, the foregoing reactions become cumulative and the generator starts to break down. The current reaches a maximum value and then decreases to nearly zero at short circuit. With large machines, point B is well above rated current, the operating range being between O and A. The voltage may be maintained constant by means of the field rheostat. Automatic regulators which operate through field resistance are frequently used to maintain constant voltage. Shunt generators are used in systems which are all tied together where their stability when in parallel is an advantage. If a generator fails to build up, (1) the load may be connected; (2) the field resistance may be too high; (3) the field circuit may be open; (4) the residual magnetism may be insufficient; (5) the field connection may be reversed. Series Generator In the series generator (Fig. 15.1.52) the entire load current flows through the field winding, which consists of relatively few turns of wire of sufficient size to carry the entire load current without undue heating. The field excitation, and hence the terminal voltage, depends on the magnitude of the load current. The generator

Fig. 15.1.51 Shunt generator characteristic.
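As a numerical illustration of Eqs. (15.1.86) and (15.1.88), the sketch below (machine constants are assumed for the example, not taken from the handbook) computes the induced emf and the terminal voltage of a dc generator.

# Minimal sketch (all machine constants assumed): induced emf and terminal
# voltage of a dc generator from Eqs. (15.1.86) and (15.1.88).
def induced_emf(flux_wb, Z, N_rpm, poles, paths):
    """E = phi*Z*N*P/(60*P'), Eq. (15.1.86)."""
    return flux_wb * Z * N_rpm * poles / (60 * paths)

phi = 0.05      # flux per pole, Wb (assumed)
Z = 480         # armature conductors (assumed)
N = 1750        # speed, r/min (assumed)
P = 4           # poles (assumed)
Pp = 4          # parallel paths (lap winding assumed)

E = induced_emf(phi, Z, N, P, Pp)    # induced emf, V
V = E - 150 * 0.04                   # terminal volts at Ia = 150 A, Ra = 0.04 ohm, Eq. (15.1.88)
print(E, V)                          # 700 V, 694 V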

                                                   Efficiencies, percent
kW       Speed, r/min    Volts    Amperes    ¼ load    ½ load    Full load
5        1,750           125      40         77.0      80.5      82.0
10       1,750           125      80         80.0      83.0      85.0
25       1,750           125      200        84.0      86.5      88.0
50       1,750           125      400        83.0      86.0      88.0
100      1,750           125      800        87.0      88.5      90.0
200      1,750           125      1,600      88.0      90.5      91.0
400      1,750           250      1,600      91.7      91.9      91.7
1,000    1,750           250      4,000      92.1      92.6      92.1

SOURCE: Westinghouse Electric Corp.

Compound-wound generators are chiefly used for small isolated plants and for generators supplying a purely motor load subject to rapid fluctuations such as in railway work. When first putting a compound generator in service, the shunt field must be so connected that the machine builds up. The series field is then connected so that it aids the shunt field. Figure 15.1.54 gives the characteristics of an overcompounded 200-kW 600-V compound-wound generator. Amplidynes The amplidyne is a dc generator in which a small amount of power supplied to a control field controls the generator output, the response being nearly proportional to the control field input. The amplidyne is a dc amplifier which can supply large amounts of power. The amplifier operates on the principle of armature reaction. In Fig. 15.1.55, NN and SS are the conventional north and south poles of a dc generator with central cavities. BB are the usual brushes placed at right angles to the pole axes of NN and SS. A control winding CC of small rating, as low as 100 W, is wound on the field poles. In Fig. 15.1.55, for simplicity, the control winding is shown as being wound on one pole

Fig. 15.1.52 Series generator.

supplies an essentially constant current and for years was used to supply series arc lamps for street lighting requiring direct current. Except for some special applications, the series generator is now obsolete. Compound-Wound Generators By the addition of a series winding to a shunt generator the terminal voltage may be automatically maintained very nearly constant, or, by properly proportioning the series turns, the terminal voltage may be made to increase with load to compensate for loss of voltage in the line, so that approximately constant voltage is maintained at the load. If the shunt field is connected outside the series field (Fig. 15.1.53), the machine is long shunt; if the shunt field is connected inside the series field, i.e., directly to the armature terminals, it is short shunt. So far as the operating characteristic is concerned, it makes little difference which way a machine is connected. Table 15.1.10 gives performance characteristics.

Fig. 15.1.53 Compound-wound dc generator.

Fig. 15.1.54 Characteristics of a 200-kW compound-wound dc generator.

only. The brushes BB are short-circuited, so that a small excitation mmf in the control field produces a large short-circuit current along the brush axis BB. This large short-circuit current produces a large armaturereaction flux AA along brush axis BB. The armature rotating in this field produces a large voltage along the brush axis BB. The load or working current is taken from brushes BB as shown. In Fig. 15.1.55 the working current only is shown by the crosses and dots in the circles. The short-circuit current would be shown by crosses in the conductors to the left of brushes BB and by dots in the conductors to the right of brushes BB. A small current in the control winding produces a high output voltage and current as a result of the large short-circuit current in brushes BB. In order that the brushes BB shall not be short-circuiting conductors which are cutting the flux of poles NN and SS, cavities are cut in these poles. Also the load current from brushes BB produces an armature reaction mmf in opposition to flux AA produced by the control field CC. Were this mmf not compensated, the flux AA and the output


three-pole switch, the other two being the bus switch S, as in Fig. 15.1.56. When compound generators are used on a three-wire system, two series fields—one at each armature terminal—and two equalizers are necessary. It is possible to operate any number of compound generators in parallel provided their characteristics are not too different and the equalizer connection is used. DC MOTORS

Motors operate on the principle that a conductor carrying current in a magnetic field tends to move at right angles to that field (see Fig. 15.1.11). The ordinary dc generator will operate entirely satisfactorily as a motor and will have the same rating. The conductors of the motor rotate in a magnetic field and therefore must generate an emf just as does the generator. The induced emf is
E = KφN
Fig. 15.1.55 Amplidyne.

of the machine would no longer be determined entirely by the control field. Hence there is a compensating field FF in series with the armature, which neutralizes the armature-reaction mmf which the load current produces. For simplicity the compensating field is shown on one field pole only. The amplidyne is capable of controlling and regulating speed, voltage, current, and power with accurate and rapid response. The amplification is from 10,000 to 250,000 times in machines rated from 1 to 50 kW. Amplidynes are frequently used in connection with selsyns and are employed for gun and turret control and for accurate controls in many industrial power applications. Parallel Operation of Shunt Generators It is desirable to operate generators in parallel so that the station capacity can be adapted to the load. Shunt generators, because of their drooping characteristics (Fig. 15.1.51), are inherently stable when in parallel. To connect shunt generators in parallel it is necessary that the switches be so connected that like poles are connected to the same bus bars when the switches are closed. Assume one generator to be in operation; to connect another generator in parallel with it, the incoming generator is first brought up to speed and its terminal voltage adjusted to a value slightly greater than the bus-bar voltage. This generator may then be connected in parallel with the other without difficulty. The proper division of load between them is adjusted by means of the field rheostats and is maintained automatically if the machines have similar voltage-regulation characteristics. Parallel Operation of Compound Generators As a rule, compound generators have either flat or rising voltage characteristics. Therefore, when connected in parallel, they are inherently unstable. Stability may, however, be obtained by using an equalizer connection, Fig. 15.1.56, which connects the terminals of the generator at the junctions of the series fields. This connection is of low resistance so that any increase of current divides proportionately between the series fields of the two machines. The equalizer switch (E.S.) should be closed first and opened last, if possible. In practice, the equalizer switch is often one blade of a

(15.1.89)

where K = constant, φ = flux entering the armature from one north pole, and N = r/min [see Eq. (15.1.87)]. This emf is in opposition to the terminal voltage and tends to oppose current entering the armature. Its value is
E = V − IaRa   (15.1.90)

where V = terminal voltage, Ia = armature current, and Ra = armature resistance [compare with Eq. (15.1.88)]. From Eq. (15.1.89) it is seen that the speed is
N = KsE/φ   (15.1.91)

where Ks = 1/K. This is the fundamental speed equation for a motor. By substituting in Eq. (15.1.90),
N = Ks(V − IaRa)/φ   (15.1.92)

which is the general equation for the speed of a motor. The internal or electromagnetic torque developed by an armature is proportional to the flux and to the armature current; i.e.,
Tt = KtφIa   (15.1.93)

where Kt is a constant. The torque at the pulley is slightly less than the internal torque by the torque necessary to overcome the rotational losses, such as friction, windage, and eddy-current and hysteresis losses in the armature iron and in the pole faces. The total mechanical power developed internally is
Pm = EIa   W   (= EIa/746 hp)   (15.1.94)

The internal torque thus becomes
T = EIa × 33,000/(2π × 746N) = 7.04EIa/N   (15.1.95)

Let VI be the motor input. The output is VIη, where η is the efficiency. The horsepower is
PH = VIη/746   (15.1.96)

and the torque is
T = 33,000PH/(2πN) = 5,260PH/N   lb·ft   (15.1.97)

where N is r/min. Shunt Motor In the shunt motor (Fig. 15.1.57) the flux is substantially constant and IaRa is 2 to 6 percent of V. Hence from Eq. (15.1.92), the speed varies only slightly with load (Fig. 15.1.58), so that the motor is adapted to work requiring constant speed. The speed regulation of constant-speed motors is defined by ANSI/IEEE Std. 100-1992 Standard Dictionary of Electrical and Electronic terms as follows: The speed regulation of a constant-speed direct-current motor is the change in speed when the load is reduced gradually from the rated value to zero with constant applied voltage and field rheostat setting expressed as a percent of speed at rated load. Fig. 15.1.56 Connections for compound-wound generators operating in parallel.

In Fig. 15.1.61 the speed regulation under each condition is 100(ac − bc)/bc (Fig. 15.1.58a). Also from Eq. (15.1.93) it is seen that the torque


Fig. 15.1.57 Connections for shunt dc motors and starters. (a) Three-point box; (b) four-point box.

is practically proportional to the armature current (see Fig. 15.1.58b). The motor is able to develop full-load torque and more on starting, but the ordinary starter is not designed to carry the current necessary for starting under load. If a motor is to be started under load, the starter

Fig. 15.1.58 Speed and torque characteristics of dc motors. (1) Shunt motor (2) cumulative compound motor; (3) differential compound motor; (4) series motor.

should be provided with resistors adapted to carry the required current without overheating. A controller is also adapted for starting duty under load. Commutating poles have so improved commutation in dc machines that it is possible to use a much shorter air gap than formerly. Since, with the shorter air gap, fewer field ampere-turns are required, the armature becomes magnetically strong with respect to the field. Hence, a sudden overload might weaken the field through armature reaction, thus causing an increase in speed; the effect may become cumulative and the motor run away. To prevent this, modern shunt motors are usually provided with a stabilizing winding, consisting of a few turns of the field in series with the armature and aiding the shunt field. The resulting increase of field ampere-turns with load will more than compensate for any weakening of the field through armature reaction. The series turns are so few that they have no appreciable compounding effect. The shunt motor is used to drive constant-speed line shafting, for machine tools, etc. Since its speed may be efficiently varied, it is very useful when adjustable speeds are necessary, such as individual drive for machine tools. Shunt-Motor Starters At standstill the counter emf of the motor is zero and the armature resistance is very low. Hence, except in motors of very small size, series resistance in the armature circuit is necessary on starting. The field must, however, be connected across the line so that it may obtain full excitation. Figure 15.1.57 shows the two common types of starting boxes used for starting shunt motors. The armature resistance remains in circuit only during starting. In the three-point box (Fig. 15.1.57a) the starting lever is held, against the force of a spring, in the running position, by an electromagnet in series with the field circuit, so that, if the field circuit is interrupted or the line voltage becomes too low, the lever is released

and the armature circuit is opened automatically. In the four-point starting box the electromagnet is connected directly across the line, as shown in Fig. 15.1.57b. In this type the arm is released instantly upon failure of the line voltage. In the three-point type some time elapses before the field current drops enough to effect the release. Some starting rheostats are provided with an overload device so that the circuit is automatically interrupted if too large a current is taken by the armature. The fourpoint box is used where a wide speed range is obtained by means of the field rheostat. The electromagnet is not then affected by changes in field current. In large motors and in many small motors, automatic starters are widely used. The advantages of the automatic starter are that the current is held between certain maximum and minimum values so that the circuit does not become opened by too rapid starting as may occur with manual operation; the acceleration is smooth and nearly uniform. Since workers can stop and start a motor merely by the pushing of a button, there results considerable saving by the shutting down of the motor when it is not needed. Automatic controls are essential to elevator motors so that smooth rapid acceleration with frequent starting and stopping may be obtained. Also automatic starting is very necessary with multiple-unit operation of electric-railway cars and with rollingmill motors which are continually subjected to rapid acceleration, stopping, and reversing. Series Motor In the series motor the armature and field are in series. Hence, if saturation is neglected, the flux is proportional to the current and the torque [Eq. (15.1.93)] varies as the current squared. Therefore any increase in current will produce a much greater proportionate increase in torque (see Fig. 15.1.58b). This makes the motor particularly well adapted to traction work, cranes, hoists, fork-lift trucks, and other types of work which require large starting torques. A study of Eq. (15.1.92) shows that with increase in current the numerator changes only slightly, whereas the change in the denominator is nearly proportional to the change in current. Hence the speed of the series motor is practically inversely proportional to the current. With overloads the speed drops to very low values (see Fig. 15.1.58a). With decrease in load the speed approaches infinity, theoretically. Hence the series motor should always be connected to its load by a direct drive, such as gears, so that it cannot reach unsafe speeds (see Speed Control of Motors). A series-motor starting box with no-voltage release is shown in Fig. 15.1.59. Differential Compound Motors The cumulative compound winding of a generator becomes a differential compound winding when the machine is used as a motor. Its speed may be made more nearly constant

Fig. 15.1.59 Series motor starter, no-voltage release.

Table 15.1.11 Test Performance of Compound-Wound DC Motors

                         115 V                       230 V                       550 V
Power,   Speed,    Current,   Full-load       Current,   Full-load       Current,   Full-load
hp       r/min     A          efficiency, %   A          efficiency, %   A          efficiency, %
1        1,750     8.4        78              4.3        79              1.86       73.0
2        1,750     16.0       80              8.0        81              3.21       82.0
5        1,750     40.0       82              20.0       83              8.40       81.0
10       1,750     75.0       85.6            37.5       85              15.4       86.5
25       1,750     182.0      87.3            91.7       87.5            38.1       88.5
50       850       —          —               180.0      89              73.1       90.0
100      850       —          —               350.0      90.5            149.0      91.0
200      1,750     —          —               700.0      91              295.0      92.0

SOURCE: Westinghouse Electric Corp.

than that of a shunt motor, or, if desired, it may be adjusted to increase with increasing load. The speed as a function of armature current is shown in Fig. 15.1.58a and the torque as a function of armature current is shown in Fig. 15.1.58b. Since the speed of the shunt motor is sufficiently constant for most purposes and the differential motor tends toward instability, particularly in starting and on overloads, the differential motor is little used. Cumulative compound motors develop a more rapid increase in torque with load than shunt motors (Fig. 15.1.58b); on the other hand, they have much poorer speed regulation (Fig. 15.1.58a). Hence they are used where larger starting torque than that developed by the shunt motor is necessary, as in some industrial drives. They are particularly useful where large and intermittent increases of torque occur as in drives for shears, punches, rolling mills, etc. In addition to the sudden increase in torque which the motor develops with sudden applications of load, the fact that it slows down rapidly and hence causes the rotating parts to give up some of their kinetic energy is another important advantage in that it reduces the peaks on the power plant. Performance data for compound motors are given in Table 15.1.11. Commutation The brushes on the commutator of either a motor or generator should be set in such a position that the induced emf in the armature coils undergoing commutation and hence short-circuited by the brushes, is zero. In practice, this condition can at best be only approximately realized. Frequently conditions are such that it is far from being realized. At no load, the brushes should be set in a position corresponding to the geometrical neutral of the machine, for under these conditions the induced emf in the coils short-circuited by the brushes is zero. As load is applied, two factors cause sparking under the brushes. The mmf of the armature, or armature reaction, distorts the flux; when the current in the coils undergoing commutation reverses, an emf of self-induction L di/dt tends to prolong the current flow which produces sparking. In a generator, armature reaction distorts the flux in the direction of rotation and the brushes should be advanced. In order to neutralize the emf of self-induction the brushes should be set a little ahead of the neutral plane so that the emf induced in the short-circuited coils by the cutting of the flux at the fringe of the next pole is opposite to this emf of self-induction. In a motor the brushes are correspondingly moved backward in the direction opposite rotation. Theoretically, the brushes should be shifted with every change in load. However, practically all dc generators and motors now have commutating poles (or interpoles) and with these the brushes can remain in the no-load neutral plane, and good commutation can be obtained over the entire range of load. Commutating poles are small poles between the main poles (Fig. 15.1.60) and are excited by a winding in series with the armature. Their function is to neutralize the flux distortion in the neutral plane caused by armature reaction and also to supply a flux that will cause an emf to be induced in the conductors undergoing commutation, opposite and equal to the emf of self-induction. Since armature reaction and the emf of self-induction are both proportional to the armature current, saturation being neglected, they are neutralized theoretically at every load. Commutating poles have made possible dc

Fig. 15.1.60 Commutating poles in motor.

generators and motors of very much higher voltage, greater speeds, and larger kW ratings than would otherwise be possible. Occasionally, the commutating poles may be connected incorrectly. In a motor, passing from an N main pole in the direction of rotation of the armature, an N commutating pole should be encountered as shown in Fig. 15.1.60. In a generator under these conditions an S commutating pole should be encountered. The test can easily be made with a compass. If poor commutation is caused by too strong interpoles, the winding may be shunted. If the poles are too weak and the shunting cannot be reduced, they may be strengthened by inserting sheet-iron shims between the pole and the yoke, thus reducing the air gap. Although the emfs induced in the coils undergoing commutation are relatively small, the resistance of the coils themselves is low, so that unless further resistance is introduced, the short-circuit currents would be large. Hence, with the exception of certain low-voltage generators, carbon brushes that have relatively large contact resistance are almost always used. Moreover, the graphite in the brushes has a lubricating action, and the usual carbon brush does not score the commutator. Speed Control of Motors Shunt Motors In Eq. (15.1.91) the speed of a shunt motor is N = KsE/φ, where Ks is a constant involving the design of the motor, such as conductors on the armature surface and number of poles. Obviously, in order to change the speed of a motor, without changing its construction, two factors may be varied, the counter emf E and the flux φ. Armature-Resistance Control The counter emf E = V − IaRa, where V is the terminal voltage, assumed constant. Ra must be small so that the armature heating can be maintained within permissible limits. Under these conditions the speed change with load is small. By inserting an external resistor, however, into the armature circuit the counter emf E may be made to decrease rapidly with increase in load; that is, E = V − Ia(Ra + R) [see Eq. (15.1.90)], where R is the resistance of the external resistor. The resistor R must be inserted in the armature circuit only. The advantages of this method are its simplicity, the fact that the full torque of the motor is developed at any speed, and the fact that the method introduces no commutating difficulties. Its disadvantages are the increased speed regulation with change of load (Fig. 15.1.61), the low efficiency, particularly at the lower speeds, and the fact that provision must be made to dissipate the comparatively large power losses in the series resistor. Figure 15.1.61 shows typical speed-load curves without and with series


Fig. 15.1.61 Speed-load characteristics with armature resistance control.

resistors in the armature circuit. The armature efficiency is nearly equal to the ratio of the operating speed to the no-load speed. Hence at 25 percent speed the armature efficiency is practically 25 percent. Frequently the controlling and starting resistors are one, and the device is called a controller. Starting rheostats themselves are not designed to carry the armature current continuously and must not be used as controllers. The armature-resistance method of speed control is frequently used to regulate the speed of ventilating fans, where the power demand diminishes rapidly with decrease in speed.

Control by Changing Impressed Voltage  From Eq. (15.1.92) it is evident that the speed of a motor may be changed if V is changed by connecting the armature across different voltages. Speed control by this method is accomplished by having mains (usually four), which are maintained at different voltages, available at the motor. The shunt field of the motor is generally permanently connected to one pair of mains, and the armature circuit is provided with a controller by means of which the operator can readily connect the armature to any pair of mains. Such a system gives a series of distinct and widely separated speeds and generally necessitates the use of field-resistance control, in combination, to obtain intermediate speeds. This method, known as the multivoltage method, has the disadvantage that the system is expensive, for it requires several generating machines, a somewhat complicated switchboard, and a number of service wires. The system is used somewhat in machine shops and is extensively used for dc elevator starting and speed control.

In the Ward Leonard system, the variable voltage is obtained from a separately excited generator whose armature terminals are connected directly to the armature terminals of the working motor. The generator is driven at essentially constant speed by a dc shunt motor if the power supply is direct current, or by an induction motor or a synchronous motor if the power supply is alternating current. The field circuit of the generator and that of the motor are connected across a constant-voltage dc supply. The terminal voltage of the generator, and hence the voltage applied to the armature of the motor, is varied by changing the generator-field current with a field rheostat. The rheostat has a wide range of resistance so that the speed of the motor may be varied smoothly from 0 to 100 percent. Since three machines are involved, the system is costly, somewhat complicated, and has low power efficiency. However, because the system is flexible and the speed can be smoothly varied over wide ranges, it has been used in many applications, such as elevators, mine hoists, large printing presses, paper machines, and electric locomotives.

The Ward Leonard system has been largely replaced by a static converter system in which a silicon-controlled-rectifier bridge is used to convert three-phase alternating current, or single-phase on smaller drives, to a dc voltage that can be smoothly varied by phase-angle firing of the rectifiers from full voltage to zero. This system is smaller, lighter, and less expensive than the motor-generator system. Care must be taken to assure that the dc motor will accept the harmonics present in the dc output without overheating.

Control by Changing Field Flux  Equation (15.1.91) shows that the speed of a motor is inversely proportional to the flux Φ.
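Both the armature-resistance and the field-flux methods follow directly from the speed relation just quoted. The short sketch below evaluates N = KsE/Φ for an assumed 230-V machine; the constants Ks, Φ, Ra and the currents are illustrative values, not handbook data.

```python
# Speed of a shunt motor from N = Ks*E/phi with E = V - Ia*(Ra + R)
# [Eqs. (15.1.90)-(15.1.92)]. All numerical values are assumed for illustration.

def speed(V, Ia, Ra, R_ext, phi, Ks=0.157):
    E = V - Ia * (Ra + R_ext)          # counter emf
    return Ks * E / phi                # speed, r/min

V, Ia, Ra = 230.0, 50.0, 0.25          # assumed 230-V machine at full-load current

# Armature-resistance control: speed falls, and regulation worsens, as R is added.
for R_ext in (0.0, 1.0, 2.0):
    print("R =", R_ext, "ohm ->", round(speed(V, Ia, Ra, R_ext, 0.02)), "r/min")

# Field-flux control: weakening the field (smaller phi) raises the speed.
for phi in (0.020, 0.015, 0.010):
    print("phi =", phi, "Wb ->", round(speed(V, Ia, Ra, 0.0, phi)), "r/min")
```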
The flux can be changed either by varying the shunt-field current or by varying the reluctance of the magnetic circuit. The variation of the shunt-field current is the simplest and most efficient of all the methods of speed control. With the ordinary motor, a speed variation of 1.5 to 1 is obtainable with this method. If an attempt is made to obtain greater ratios, severe

sparking at the brushes results, owing to the field distortion caused by the armature mmf becoming large in comparison with the weakened field of the motor. Speed ratios of 5 : 1 and higher are, however, obtainable with motors which have commutating poles. Since the field current is a small proportion of the total current (1 to 3 percent), the rheostat losses in the field circuit are always small. This method is efficient. Also, for any given speed adjustment the speed regulation is excellent, which is another advantage. Because of its simplicity, efficiency, and excellent speed regulation, the control of speed by means of the field current is by far the most common method. Output power remains constant when the field is weakened, so output torque varies inversely with motor speed.

Speed Control of Series Motors  The series motor is fundamentally a variable-speed motor, the speed varying widely from light load to full load and more (see Fig. 15.1.58a). From Eq. (15.1.92) the speed for any value of Φ, or current, can be changed by varying the impressed voltage. Hence the speed can be controlled by inserting resistance in series with the motor. This method, which is practically the same as the armature-resistance control method for shunt motors, has the same objections of low efficiency and poor regulation with fluctuating loads. It is extensively used in controlling the speed of hoist and crane motors.

The series-parallel system of series-motor speed control is almost universally used in electric traction. At least two motors are necessary. The two motors are first connected in series with each other and with the starting resistor. The starting resistor is gradually cut out and, since each motor then operates at half line voltage, the speed of each is approximately half speed. Both motors take the same current, and each can develop full torque. This condition of operation is efficient since there is no external resistance in circuit. When the controller is moved to the next position, the motors are connected in parallel with each other and each in series with starting resistors. Full speed of the motors is obtained by gradually cutting out these resistors. Connecting the two motors in series on starting reduces the current to one-half the value that would be required for a given torque were both motors connected in parallel on starting. The power taken from the trolley is halved, and an intermediate running speed is efficiently obtained.

In the multiple-unit method of speed control, which is used for electric railway trains, the starting contactors, reverser, etc., for each car are located under that car. The relays operating these control devices are actuated by energy taken from the train line, consisting usually of seven wires. The train line runs the entire length of the train, the connections between the individual cars being made through the couplers. The train line is energized by the action of the motorman operating any one of the small master controllers which are located in each car. Hence corresponding relays, contactors, etc., in every car all operate simultaneously. High accelerations may be reached with this system because of the large tractive effort exerted by the wheels on every car.

SYNCHRONOUS GENERATORS

The synchronous generator is the only type of ac generator now in general use at power stations. Construction In the usual synchronous generator the armature or stator is the stationary member. This construction has many advantages. It is possible to make the slots any reasonable depth, since the tooth necks increase in cross section with increase in depth of slot; this is not true of the rotor. The large slot section which is thus obtainable gives ample space for copper and insulation. The conductors from the armature to the bus bars can be insulated throughout their entire lengths, since no rotating or sliding contacts are necessary. The insulation in a stationary member does not deteriorate as rapidly as that on a rotating member, for it is not subjected to centrifugal force or to any considerable vibration. The rotating member is ordinarily the field. There are two general types of field construction: the salient-pole type and the cylindrical, or nonsalient-pole, type. The salient-pole type is used almost entirely for slow and moderate-speed generators since this construction is the least expensive and permits ample space for the field ampere-turns.

Table 15.1.12  Performance Data for Synchronous Generators

80% pf, 3 phase, 60 Hz, 240 to 2,400 V, horizontal-coupled or belted-type engine

                                              Efficiencies, %
  kVA     Poles   Speed,    Excitation,   1/2 load  3/4 load  Full load   Approx. net
                  r/min     kW                                            weight, lb
  25        4     1,800      0.8            81.5      85.7      87.6           900
  93.8      8       900      2              87        89.5      90.9         2,700
  250      12       600      5              90        91.3      92.2         6,000
  500      18       400      8              91.7      92.6      93.2        10,000
  1,000    24       300     14.5            92.6      93.4      93.9        16,100
  3,125    48       150     40              93.4      94.2      94.6        52,000

Industrial-size turbine generators, direct-connected type, 80% pf, 3 phase, 60 Hz, air-cooled

                            Excitation              Efficiency, %           Volume of                      Approx. weight,
  kVA     Poles   Speed,    kW      V       1/2 load  3/4 load  Full load   air, ft3/min   Voltage         including exciter, lb
                  r/min
  1,875     2     3,600     18     125        95.3      96.1      96.3         3,500       480–6,900           21,900
  2,500     2     3,600     22     125        95.3      96.1      96.3         5,000       2,400–6,900         22,600
  3,125     2     3,600     24     125        95.3      96.3      96.5         5,500       2,400–6,900         25,100
  3,750     2     3,600     24     125–250    95.3      96.3      96.6         6,500       2,400–6,900         27,900
  5,000     2     3,600     29     125–250    95.3      96.3      96.6        11,000       2,400–6,900         40,100
  6,250     2     3,600     38     125–250    95.3      96.3      96.7        12,000       2,400–13,800        43,300
  7,500     2     3,600     42     125–250    95.5      96.5      96.9        15,000       2,400–13,800        45,000
  9,375     2     3,600     47     125–250    95.5      96.5      96.9        16,500       2,400–13,800        61,200

Central-station-size turbine generators, direct-connected type, 85% pf, 3 phase, 60 Hz, 11,500 to 14,400 V

                            Excitation              Efficiency, %           Volume of                      Approx. weight,
  kVA      Poles   Speed,   kW      V       1/2 load  3/4 load  Full load   air, ft3/min   Ventilation     including exciter, lb
                   r/min
  13,529     2     3,600     70    250        96.3      97.1      97.3        22,000       Air-cooled          116,700
  17,647     2     3,600    100    250        97.7      97.9      97.9        22,000       H2-cooled           115,700
  23,529     2     3,600    115    250        98.0      98.2      98.2        25,000       H2-cooled           143,600
  35,294     2     3,600    145    250        98.1      98.3      98.3        34,000       H2-cooled           194,800
  47,058     2     3,600    155    250        98.3      98.5      98.5        42,000       H2-cooled           237,200
  70,588     2     3,600    200    250        98.4      98.7      98.7        50,000       H2-cooled           302,500

SOURCE: Westinghouse Electric Corp.
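The pole and speed columns of Table 15.1.12 are tied together by the synchronous-speed relation given later in this section as Eq. (15.1.98); a quick check at 60 Hz:

```python
# Speeds in Table 15.1.12 follow from N = 120*f/P (Eq. 15.1.98, later in this section).
f = 60
for poles in (2, 4, 8, 12, 18, 24, 48):
    print(poles, "poles ->", 120 * f // poles, "r/min")
# 2 -> 3,600; 4 -> 1,800; 8 -> 900; 12 -> 600; 18 -> 400; 24 -> 300; 48 -> 150 r/min,
# matching the speed column of the table.
```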

It is not practicable to employ salient poles in high-speed turboalternators because of the excessive windage and the difficulty of obtaining sufficient mechanical strength. The cylindrical type consists of a cylindrical steel forging with radial slots in which the field copper, usually in strip form, is placed. The fields are ordinarily excited at low voltage, 125 and 250 V, the current being conducted to the rotating member by means of slip rings and brushes. Alternatively, an ac exciter armature can be mounted on the generator shaft and supply dc to the main generator field through a static rectifier bridge, also mounted on the shaft, eliminating all slip rings and brushes. The field power is ordinarily only 1.5 percent or less of the rated power of the machine (see Table 15.1.12).

Classes of Synchronous Generators  Synchronous generators may be divided into three general classes: (1) the slow-speed engine-driven type; (2) the moderate-speed waterwheel-driven type; and (3) the high-speed turbine-driven type. In (1) a hollow box frame is used as the stator support, and the field consists of a spider to which a large number of salient poles are attached, usually bolted. The speed seldom exceeds 75 to 90 r/min, although it may run as high as 150 r/min. Waterwheel generators also have salient poles, which are usually dovetailed to a cylindrical spider consisting of steel plates riveted together. Their speeds range from 80 to 900 r/min and sometimes higher, although the 9,000-kVA Keokuk synchronous generators rotate at only 58 r/min, operating at a very low head. The speed rating of direct-connected waterwheel generators decreases with decrease in head. It is desirable to operate synchronous generators at the highest permissible speed since the weight and costs diminish with increase in speed.

Waterwheel-driven generators must be able to run at double speed, as a precaution against accident, should the governor fail to shut the gate sufficiently rapidly in case the circuit breakers open or should the governing mechanism become inoperative.

Turbine-driven generators operate at speeds of 720 to 3,600 r/min. Direct-connected exciters, belt-driven exciters from the generator shaft, and separately driven exciters are used. In large stations separately driven (usually motor) exciters may supply the excitation energy to excitation bus bars. Steam-driven exciters and storage batteries are frequently held in reserve. With slow-speed synchronous generators, the belt-driven exciter is frequently used because it can be driven at higher speed, thus reducing the cost.

Synchronous-Generator Design  At the present time single-phase generators are seldom built. For single-phase service two phases of a standard three-phase Y-connected generator are used. A single-phase load or unbalanced three-phase load produces flux pulsations in the magnetic circuits of synchronous generators, which increase the iron losses and introduce harmonics into the emf wave. Two-phase windings consist of two similar single-phase windings displaced 90 electrical space degrees on the armature and ordinarily occupying all the slots on the armature. The most common type of winding is the three-phase lap-wound two-layer type of winding. In three-phase windings three windings are spaced 120 electrical space degrees apart, the individual phase belts being spaced 60° apart. Usually, all the slots on the armature are occupied.

Standard voltages are 550, 1,100, 2,200, 6,600, 13,200, and 20,000 V. It is much more difficult to insulate for 20,000 V than it is for the lower voltages. However, if the power is to be transmitted at this


voltage, its use would be justified by the saving of transformers. In machines of moderate and larger ratings it is common to generate at 6,600 and 13,200 V if transformers must be used. The higher voltage is preferable, particularly for the higher ratings, because it reduces the cross section of the connecting leads and bus bars.

The standard frequency in the United States for lighting and power systems is 60 Hz; the few former 50-Hz systems have practically all been converted to 60 Hz. The frequency of 25 Hz is commonly used in street-railway and subway systems to supply power to the synchronous converters and other ac-dc conversion apparatus; it is also commonly used in railroad electrification, particularly for single-phase series-motor locomotives (see Sec. 11). At 25 Hz incandescent lamps have noticeable flicker. In European (and most other) countries 50 Hz is standard. The frequency of a synchronous machine is

f = P × r/min/120     Hz     (15.1.98)

where P is the number of poles. Synchronous generators are rated in kVA rather than in kW, since heating, which determines the rating, is dependent only on the current and is independent of power factor. If the kilowatt rating is specified, the power factor should also be specified.

Induced EMF  The induced emf per phase in a synchronous generator is

E = 2.22 kb kp Φ f Z     V/phase     (15.1.99)

where kb = breadth factor or belt factor (usually 0.9 to 1.0), which depends on the number of slots per pole per phase (0.958 for three-phase, four slots per pole per phase); kp = pitch factor (1.0 for full pitch, 0.966 for 5/6 pitch); Φ = total flux, Wb, entering the armature from one north pole, assumed to be sinusoidally distributed along the air gap; f = frequency; and Z = number of series conductors per phase.

Synchronous generators usually are Y-connected. The advantages are that for a given line voltage the voltage per phase is 1/√3 that of the delta-connected winding; third-harmonic currents and their multiples cannot circulate in the winding as with a delta-connected winding; third-harmonic emfs and their multiples cannot exist in the line emfs; and a neutral point is available for grounding.

Regulation  The terminal voltage of a synchronous generator at constant frequency and field excitation depends not only on the current load but on the power factor as well. This is illustrated in Fig. 15.1.62, which shows the voltage-current characteristics of a synchronous generator with lagging current, leading current, and in-phase current (pf = 1.00). With leading current the voltage may actually rise with increase in load; the rate of voltage decrease with load becomes greater as the lag of the current increases.

Fig. 15.1.62 Synchronous generator characteristics.

The regulation of a synchronous generator is defined by the ANSI/IEEE Std. 100-1992 Standard Dictionary of Electrical and Electronic Terms as follows: The voltage regulation of a synchronous generator is the rise in voltage with constant field current, when, with the synchronous generator operated at rated voltage and rated speed, the specified load at the specified power factor is reduced to zero, expressed as a percent of rated voltage. For example, in Fig. 15.1.62 the regulation under each condition is

100(ac − bc)/bc     (15.1.100)

With leading current the regulation may be negative. Three factors affect the regulation of synchronous generators: the effective armature resistance, the armature leakage reactance, and the armature reaction.

With alternating current the armature loss is greater than the value obtained by multiplying the square of the armature current by the ohmic resistance. This is due to hysteresis and eddy-current losses in the iron adjacent to the conductor and to the alternating flux-producing losses in the conductors themselves. Also, the current is not distributed uniformly over conductors in the slot, but the current density tends to be greatest in the top of the slot. These factors all have the effect of increasing the resistance. The ratio of effective to ohmic resistance varies from 1.2 to 1.5. The armature leakage reactance is due to the flux produced by the armature current linking the conductors in the slots and also the end connections.

The armature mmf reacts on the field to change the value of the flux. With a single-phase generator and with an unbalanced load on a polyphase generator, the armature mmf is pulsating and causes iron losses in the field structure. With polyphase machines under a constant balanced load, the armature mmf is practically constant in magnitude and fixed in its relation to the field poles. Its direction with relation to the field-pole axis is determined by the power factor of the load. A component of current in phase with the no-load induced emf, or the excitation emf, merely distorts the field by strengthening the trailing pole tip and weakening the leading pole tip. A component of current lagging the excitation emf by 90° weakens the field without distortion. A component of current leading the excitation emf by 90° strengthens the field without distortion. Ordinarily, both cross magnetization and one of the other components are acting simultaneously. The foregoing effects are called armature reaction. Frequently the effects of armature reactance and armature reaction can be combined into a single quantity.

It is difficult to determine the regulation of a synchronous generator by actual loading, even when in service, owing to the difficulty of obtaining, controlling, and absorbing the large balanced loads. Hence methods of predetermining regulation without actually loading the machine are used.

Synchronous Impedance Method  Both armature reactance and armature reaction have the same effect on the terminal voltage. In the synchronous impedance method the generator is considered as having no armature reaction, but the armature reactance is increased a sufficient amount to account for the effect of armature reaction. The phasor diagram for a current I lagging the terminal voltage V by an angle θ is shown in Fig. 15.1.63. In a polyphase generator the phasor diagram is applicable to one phase, a balanced load almost always being assumed.

Fig. 15.1.63 Phasor diagram for synchronous impedance method.

The power factor of the load is cos θ; IR is the effective armature-resistance drop and is parallel to I; IXs is the synchronous-reactance drop and is at right angles to I, leading it by 90°. IXs includes both the reactance drop and the drop in voltage due to armature reaction. That part of IXs which replaces armature reaction is in reality a fictitious quantity. The synchronous impedance drop is given by IZs. The no-load or open-circuit (excitation) voltage is

E = √[(V cos θ + IR)² + (V sin θ ± IXs)²]     V     (15.1.101)

All quantities are per phase. The negative sign is used with leading current.

Regulation = 100(E − V)/V     (15.1.102)

With leading current E may be less than V, and a negative regulation results. The synchronous impedance is determined from an open-circuit and a short-circuit test, made with a weak field. The voltage E on open circuit is divided by the current I on short circuit for the same value of field current:

Zs = E/I     Xs = √(Zs² − R²)     (15.1.103)
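The synchronous impedance method lends itself to a short computation. The sketch below applies Eqs. (15.1.101) to (15.1.103) to assumed per-phase test data (open-circuit voltage and short-circuit current at the same weak field current); the numbers are illustrative only, not taken from this handbook.

```python
import math

# Regulation by the synchronous impedance method, Eqs. (15.1.101)-(15.1.103).
# The open-circuit/short-circuit test values below are assumed for illustration.

def regulation_sync_impedance(V, I, pf, leading, R, E_oc, I_sc):
    Zs = E_oc / I_sc                          # Eq. (15.1.103), same field current
    Xs = math.sqrt(Zs**2 - R**2)
    sin_t = math.sqrt(1.0 - pf**2)
    sign = -1.0 if leading else +1.0          # negative sign for leading current
    E = math.sqrt((V*pf + I*R)**2 + (V*sin_t + sign*I*Xs)**2)   # Eq. (15.1.101)
    return 100.0 * (E - V) / V                # Eq. (15.1.102)

# Assumed per-phase values: 1,270 V, 100 A rated, R = 0.2 ohm,
# open-circuit 400 V and short-circuit 160 A at the same (weak) field current.
for pf, leading in ((0.8, False), (1.0, False), (0.8, True)):
    reg = regulation_sync_impedance(1270, 100, pf, leading, 0.2, 400, 160)
    print(pf, "pf", "leading" if leading else "lagging", f"-> regulation {reg:5.1f} %")
```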


R is so small compared with Xs that for all practical purposes Xs ≈ Zs. R may be determined by measuring the ohmic resistance per phase and multiplying by 1.4 to 1.5 to obtain the effective resistance. This value of R and the value of Xs obtained from Eq. (15.1.103) may then be substituted in Eq. (15.1.101) to obtain E at the specified load and power factor. Since the synchronous reactance is determined at low saturation of the iron and used at high saturation, the method gives regulations that are too large; hence it is called the pessimistic method.

MMF Method  In the mmf method the generator is considered as having no armature reactance, but the armature reaction is increased by an amount sufficient to include the effect of reactance. That part of armature reaction which replaces the effect of armature reactance is in reality a fictitious quantity. To obtain the data necessary for computing the regulation, the generator is short-circuited and the field adjusted to give rated current in the armature. The corresponding value of field current Ia is read. The field is then adjusted to give a voltage E equal to rated terminal voltage plus the IR drop (= V + IR, as phasors, Fig. 15.1.64) on open circuit, and the field current I is read. Ia is 180° from the current phasor I, and I leads E by 90° (Fig. 15.1.64). The angle between I and Ia is 90° + θ + φ, but since φ is small, it can usually be neglected. The phasor sum of Ia and I is Io. The open-circuit

Fig. 15.1.64 Phasor diagram for the mmf method.

Fig. 15.1.65 ANSI method of synchronous generator regulation.

voltage E corresponding to Io is the no-load voltage and can be found on the saturation curve. The regulation is then found from Eq. (15.1.102). This method gives a value of regulation less than the actual value and hence is called the optimistic method. The actual regulation lies somewhere between the values obtained by the two methods but is more nearly equal to the value obtained by the mmf method.

ANSI Method  The ANSI method (American Standard 50, Rotating Electrical Machinery), which has become the accepted standard for the predetermination of synchronous generator operation, eliminates in large measure the errors due to saturation which are inherent in the synchronous impedance and mmf methods. In Fig. 15.1.65a is shown the saturation curve OAF of the generator. The axis OP is not only the field-current axis but also the axis of the current phasor I. V, the terminal voltage, is drawn θ deg from I (or OP), where θ is the power-factor angle. The effective resistance drop IR and the leakage reactance drop IX are drawn parallel and perpendicular to the current phasor. Ea, the phasor sum of V, IR, and IX, is the internal induced emf. Arcs are swung with O as the center and V and Ea as radii to intercept the axis of ordinates at B and C. OK, tangent to the straight portion of the saturation curve, is the air-gap line. If there is no saturation, Iv is the field current necessary to produce V, and CK is the field current necessary to produce Ea. The field current Is is the increase in field current necessary to take into account the saturation corresponding to Ea. The corresponding phasor diagram, to a larger scale, is shown in Fig. 15.1.65b. Irf, the field current necessary to produce rated current at short circuit, corresponding to Ia (Fig. 15.1.64), is drawn horizontally. The field current Iv is drawn at an angle θ to the right of a perpendicular erected at the right-hand end of Irf. Ir is the resultant of Irf and Iv. Is is added to Ir, giving If, the resultant field current. The no-load emf E is found on the saturation curve, Fig. 15.1.65a, corresponding to If = OD.

Excitation is commonly supplied by a small dc generator driven from the generator shaft. On account of commutation, except in the smaller sizes, the dc generator cannot be driven at 3,600 r/min, the usual speed for turbine generators, and belt or gear drives are necessary. The use of

the silicon rectifier has made possible simpler means of excitation as well as voltage regulation. In one system the exciter consists of a small rotating-armature synchronous generator (which can run at high speed) mounted directly on the main generator shaft. The three-phase armature current is rectified by six silicon rectifiers and is conducted directly to the main generator field without any sliding contacts. The main generator field current is controlled by the current to the stationary field of the exciter generator. In another system there is no rotating exciter, the generator excitation being supplied directly from the generator terminals, the 13,800 V, three-phase, being stepped down to 115 V, threephase, by small transformers and rectified by silicon rectifiers. Voltage regulation is obtained by saturable reactors actuated by potential transformers connected across the generator terminals. Most regulators such as the following operate through the field of the exciter. In the Tirrill regulator the field resistance of the exciter is shortcircuited temporarily by contacts when the bus-bar voltage drops. Actually, the contacts are vibrating continuously, the time that they are closed depending on the value of the bus-bar voltage. The General Electric Co. manufactures a direct-acting regulator in which the regulating rheostat is part of the regulator itself. The rheostat consists of stacks of graphite plates, each plate being pivoted at the center. Tilting the plates changes the path of the current through the rheostat and thus changes the resistance. The plates are tilted by a sensitive torque armature which is actuated by variations of voltage from the normal value (for regulators employing silicon rectifiers). Parallel Operation of Synchronous Generators The kilowatt division of load between synchronous generators in parallel is determined entirely by the speed-load characteristics of their prime movers and not by the characteristics of the generators themselves. No appreciable adjustment of kilowatt load between synchronous generators in parallel can be made by means of their field rheostats, as with dc generators. Consider Fig. 15.1.66, which gives the speed-load characteristics in terms of frequency, of two synchronous generators, no. 1 and no. 2, these characteristics being the speed-load characteristics of their prime movers. These speed-load characteristics are drooping, which is necessary for stable parallel operation. The total load on the two machines is


Fig. 15.1.66 Speed-load characteristics of synchronous generators in parallel.

P1 + P2 kW. Both machines must be operating at the same frequency f1. Hence generator 1 must be delivering P1 kW, and generator 2 must be delivering P2 kW (the small generator losses being neglected). If, under the foregoing conditions, the field of either machine is strengthened, it cannot deliver a greater kilowatt load, for its prime mover can deliver more power only by dropping its speed. This is impossible, for both generators must operate always at the same frequency f1. For any fixed total power load, the division of kilowatt load between synchronous generators can be changed only by modifying in some manner the speed-load characteristics of their prime movers, such, for example, as changing the tension in the governor spring.

Synchronous generators in parallel are of themselves in stable equilibrium. If the driving torque of one machine is increased, the resulting electrical reactions between the machines cause a circulating current to flow between machines. This current puts more electrical load on the machine whose driving torque is increased and tends to produce motor action in the other machines. In an extreme case, the driving torque of one prime mover may be removed entirely, and its generator will operate as a synchronous motor, driving the prime mover mechanically. Variations in driving torques cause currents to circulate between synchronous generators, transferring power which tends to keep the generators in synchronism. If the power transfer takes the form of recurring pulsations, it is called hunting, which may be reduced by building heavy copper grids called amortisseurs, or damper windings, into the pole faces. Turbine- and waterwheel-driven synchronous generators are much better adapted to parallel operation than are synchronous generators which are driven by reciprocating engines, because of their uniformity of torque.

Increasing the field current of a synchronous generator in parallel with others causes it to deliver a greater lagging component of current. Since the character of the load determines the total current delivered by the system, the lagging components of current delivered by the other generators must decrease and may even become leading components. Likewise, if the field of one generator is weakened, it delivers a greater leading component of current and the other machines deliver components of current which are more lagging. These leading and lagging currents do not affect appreciably the division of kilowatt load between the synchronous generators. They do, however, cause unnecessary heating in their armatures. The fields of all synchronous generators should be so adjusted that the heating due to the quadrature components of currents is a minimum. With two generators having equal armature resistances, this occurs when both deliver equal quadrature currents.

Armature reactance in the armature of machines in parallel is desirable. If not too great, it stabilizes their operation by producing the synchronizing action. Synchronous generators with too little reactance are sensitive, and if connected in parallel with slight phase displacement or inequality of voltage, considerable disturbance results. Armature reactance also reduces the current on short circuit, particularly during the first few cycles when the short-circuit current is a maximum. Frequently, external power-limiting reactances are connected in series to protect the generators and equipment from injury that would result from the tremendous short-circuit currents.
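A small sketch may make the kilowatt division concrete. Each prime mover is modeled by a drooping straight-line frequency-load characteristic, as in Fig. 15.1.66; the no-load settings and droop slopes below are assumed values, not handbook data.

```python
# Division of kW load between two generators in parallel is fixed by the
# speed(frequency)-load droop of their prime movers (Fig. 15.1.66).
# Each characteristic is modeled as f = f_nl - droop * P; values are assumed.

def shared_load(f_nl1, d1, f_nl2, d2, P_total):
    # Solve (f_nl1 - f)/d1 + (f_nl2 - f)/d2 = P_total for the common frequency f.
    f = (f_nl1/d1 + f_nl2/d2 - P_total) / (1.0/d1 + 1.0/d2)
    return f, (f_nl1 - f)/d1, (f_nl2 - f)/d2

# No-load settings 61.0 Hz and 60.6 Hz, droops 0.001 and 0.0005 Hz per kW (assumed).
f, P1, P2 = shared_load(61.0, 0.001, 60.6, 0.0005, 1500.0)
print(f"f = {f:.3f} Hz, P1 = {P1:.0f} kW, P2 = {P2:.0f} kW")
# Changing a field rheostat does not alter this division; only raising or lowering
# a governor (no-load frequency) setting shifts kW load from one machine to the other.
```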
For the reasons given above, poor regulation in large synchronous generators is frequently considered to be an advantage rather than a disadvantage.

Ground Resistors  Most power systems operate with a grounded neutral. When the station generators deliver current directly to the system (without intervening transformers), it is customary to ground the neutral (of the Y-connected windings) of one generator in a station; this

is usually done through a grounding resistor of from 2 to 6 Ω. If the neutral of more than one generator is grounded, third-harmonic (and multiples thereof) currents can circulate between the generators. The ground resistor reduces the short-circuit currents when faults to ground occur, and hence reduces the violence of the short circuit as well as the duty of the circuit breakers. Grounding reactors are sometimes used but have limited application owing to the danger of high voltages resulting from resonant conditions.

INDUCTION GENERATORS

The induction generator is an induction motor driven above synchronous speed. The rotor conductors cut the rotating field in a direction to convert shaft mechanical power to electrical power. Load increases as speed increases, so the generator is self-regulating and can be used without governor control. On short circuits the induction generator will deliver current to the fault for only a few cycles because, unlike the synchronous generator, it is not self-exciting. Since it is not self-excited, an induction generator must always be used in parallel with an electrical system where there are some synchronous machines, or capacitor banks, to deliver lagging current (VARs) to the induction generator for excitation. Induction generators have found favor in industrial cogeneration applications and as wind-driven generators where they provide a small part of the total load.

CELLS

Fuel cells convert chemical energy of fuel and oxygen directly to electrical energy. Solar cells convert solar radiation to electrical energy. At present, these conversion methods are not economically competitive with historic generating techniques, but they have found applications in isolated areas, such as microwave relaying stations, satellites, and residential lighting, where power requirements are small and the cost of transmitting electrical power from more conventional sources is prohibitive. As solar technology continues to improve, many other applications will be evaluated.

TRANSFORMERS

Transformer Theory  The transformer is a device that transfers energy from one electric circuit to another without change of frequency and usually, but not always, with a change in voltage. The energy is transferred through the medium of a magnetic field; it is supplied to the transformer through a primary winding and is delivered by means of a secondary winding. Both windings link the same magnetic circuit. With no load on the secondary, a small current, called the exciting current, flows in the primary and produces the alternating flux. This flux links both primary and secondary windings and induces the same volts per turn in each. With a sine wave the emf is

E = 4.44 Φm n f     V     (15.1.104)

where Φm = maximum instantaneous flux in webers, n = turns on either winding, and f = frequency. Equation (15.1.104) may also be written

E = 4.44 Bm A n f     V     (15.1.105)

where Bm = maximum instantaneous flux density in the iron and A = net cross section of the iron. If Bm is in T, A is in m²; if Bm is in Mx/in², A is in in². In English units, Eq. (15.1.104) becomes

E = 4.44 Φm n f × 10⁻⁸     V     (15.1.104a)

where Φm is in maxwells; Eq. (15.1.105) becomes

E = 4.44 Bm A n f × 10⁻⁸     V     (15.1.105a)

where Bm is in Mx/in² and A is in in². Bm is practically fixed. In large transformers with silicon steel it varies between 60,000 and 75,000 Mx/in² at 60 Hz and between 75,000 and 90,000 Mx/in² at 25 Hz. It is desirable to operate the iron at as high


density as possible in order to minimize the weight of iron and copper. On the other hand, with too high densities the eddy-current and hysteresis losses become too great, and with low frequency the exciting current may become excessive. It follows from Eq. (15.1.104) that

E1/E2 = n1/n2     (15.1.106)

where E1 and E2 are the primary and secondary emfs and n1 and n2 are the primary and secondary turns. Since the impedance drops in ordinary transformers are small, the terminal voltages of primary and secondary are also practically proportional to their number of turns. As the change in secondary terminal voltage in the ordinary constant-potential transformer over its range of operation is small (1.5 to 3 percent), the flux must remain substantially constant. Therefore, the added ampere-turns produced by any secondary load must be balanced by opposite and equal primary ampere-turns. Since the exciting current is small compared with the load current (1.5 to 5 percent) and the two are usually out of phase, the exciting current may ordinarily be neglected. Hence,

n1I1 = n2I2     (15.1.107)
I1/I2 = n2/n1     (15.1.108)

where I1 and I2 are the primary and secondary currents. When load is applied to the secondary of a transformer, the secondary ampere-turns reduce the flux slightly. This reduces the counter emf of the primary, permitting more current to enter and thus supply the increased power demanded by the secondary. Both primary and secondary windings must necessarily have resistance. All the flux produced by the primary does not link the secondary; the counter ampere-turns of the secondary produce some flux which does not link the primary. These leakage fluxes produce reactance in each winding. The combined effect of the resistance and reactance produces an impedance drop in each winding when current flows. These impedance drops produce a slight drop in the secondary terminal voltage with load.

Transformer Testing  Transformer regulation and losses are so small that it is far more accurate to compute the regulation and efficiency than to determine them by actual measurement. The necessary measurements and computations are comparatively simple, and little power is involved in making the tests. In the open-circuit test, the power input to either winding is measured at its rated voltage. Usually it is more convenient to make this test on the low-voltage winding, particularly if it is rated at 110, 220, or 550 V. The open-circuit power practically all goes to supply the core losses, consisting of eddy-current and hysteresis losses. Let this value of power be P0. The eddy-current loss varies as the square of the voltage and frequency; the hysteresis loss varies as the 1.6 power of the voltage, and directly as the frequency.

In the short-circuit test one winding is short-circuited, and the current in the other is adjusted to near its rated value. The voltage Vc, the current I1, and the power input Pc are measured. When one winding of a transformer is short-circuited, the voltage across the other winding is 3 to 4 percent of rated value when rated current flows. Since a voltage range of from 110 to 250 V is best adapted to measuring instruments, that winding whose rated voltage, multiplied by 0.03 or 0.04, is closest to this voltage range should be used for making the short-circuit test, the other winding being short-circuited. Practically all the power on short circuit goes to supply the copper loss of primary and secondary. If the measurements are made on the primary,

R01 = Pc/I1²     (15.1.109)
Z01 = Vc/I1     (15.1.110)
X01 = √(Z01² − R01²)     (15.1.111)

where R01, Z01, and X01 are the equivalent resistance, impedance, and reactance referred to the primary. Also R02 = R01(n2/n1)²; Z02 = Z01(n2/n1)²; X02 = X01(n2/n1)², these quantities being the equivalent resistance, impedance, and reactance referred to the secondary. If the dc resistances R1 and R2 of the primary and secondary are measured,

R01 = R1 + (n1/n2)²R2     (15.1.112)
R02 = R2 + (n2/n1)²R1     (15.1.113)

The ac or effective resistances, determined from Eq. (15.1.109), are usually 10 to 15 percent greater than these values.

Regulation  The regulation may be computed from the foregoing data as follows:

V′1 = √[(V1 cos θ + I1R01)² + (V1 sin θ ± I1X01)²]     (15.1.114)
Regulation = 100(V′1 − V1)/V1     (15.1.115)

where V1 = rated primary terminal voltage; cos θ = load power factor; I1 = rated primary current; R01 = equivalent resistance referred to primary [from Eq. (15.1.109)]; X01 = equivalent reactance referred to primary. The + sign is used with lagging current and the − sign with leading current. Equations (15.1.114) and (15.1.115) are equally applicable to the secondary if the subscripts are changed.

Efficiency  The only two losses in a constant-potential transformer are the core loss P0, in W, which is practically independent of load, and the copper loss Pc, in W, which varies as the load current squared. The efficiency for any current I1 is

η = V1I1 cos θ/(V1I1 cos θ + P0 + I1²R01)     (15.1.116)
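A regulation and efficiency computation from assumed open-circuit and short-circuit test data, following Eqs. (15.1.109) to (15.1.111) and (15.1.114) to (15.1.116); the 50-kVA test figures below are illustrative values, not data from this handbook.

```python
import math

# Regulation and efficiency from open- and short-circuit test data.
V1, I1 = 2300.0, 21.7          # rated primary voltage and current (assumed 50-kVA unit)
P0 = 350.0                     # open-circuit (core) loss, W (assumed)
Pc, Vc = 650.0, 75.0           # short-circuit input power, W, and impressed voltage, V (assumed)

R01 = Pc / I1**2               # Eq. (15.1.109)
Z01 = Vc / I1                  # Eq. (15.1.110)
X01 = math.sqrt(Z01**2 - R01**2)   # Eq. (15.1.111)

for pf in (1.0, 0.8):
    sin_t = math.sqrt(1.0 - pf**2)
    V1p = math.sqrt((V1*pf + I1*R01)**2 + (V1*sin_t + I1*X01)**2)  # Eq. (15.1.114), lagging
    reg = 100.0 * (V1p - V1) / V1                                  # Eq. (15.1.115)
    eff = V1*I1*pf / (V1*I1*pf + P0 + I1**2 * R01)                 # Eq. (15.1.116)
    print(f"pf {pf}: regulation {reg:.2f} %, full-load efficiency {100*eff:.2f} %")
```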

Equation (15.1.116) applies equally well to the secondary if the subscripts are changed. The maximum efficiency occurs when the core and copper losses are equal.

All-Day Efficiency  Since transformers must usually be on the line 24 h/day, part of which time the load may be very light, the all-day efficiency is important. This is equal to the total energy or watthour output divided by the total energy or watthour input for the 24 h. That is,

η = (V1I1 cos θ1 t1 + ⋯)/(V1I1 cos θ1 t1 + ⋯ + I1²R01 t1 + ⋯ + 24P0)     (15.1.117)

where t1 = time in hours that load V1I1 cos θ1 is being delivered, etc.

Polyphase Transformer Connections  Three-phase transformer banks may be connected Δ-Δ, Δ-Y, Y-Y, and Y-Δ. The Δ-Δ connection is very common, particularly at the lower voltages, and has the important advantage that the bank will operate V-connected if one transformer is disabled. The Δ-Y connection is advantageous for stepping up to high voltages since the secondary of the transformers need be wound only for 58 percent (1/√3) of the line voltage; it is also necessary when a four-wire three-phase system is obtained from a three-wire three-phase system since a "floating neutral" on the secondary cannot occur. This connection has found increased application in step-down distribution at 600 V and lower because of the relative ease of applying sensitive fault protection. The Y-Y system may be used for stepping up voltage. It should not be used for obtaining a three-phase four-wire system from a three-phase three-wire system, because of the "floating neutral" on the secondary and the resulting high degree of unbalance of the secondary voltages. The Y-Δ system may be used to step down high voltages, the reverse of the Δ-Y connection. Y-connected windings also preclude third-harmonic (and their multiples) circulating currents in the transformer windings. In the Δ-Y and Y-Δ systems the ratio of line voltages is obviously not that of the individual transformers. Because of different phase displacement between primaries and secondaries, a Δ-Δ bank cannot be connected in parallel (on both sides) with a Δ-Y bank, etc., even if they both have the correct voltage ratios between lines.

Three-phase transformers combine the magnetic circuits of three single-phase transformers so that they have parts in common. A material saving in cost, in weight, and in space results, the greatest saving occurring in the core and oil. The advantages of three-phase transformers are often outweighed by their lack of flexibility. The failure of a single phase shuts down the entire transformer. With three single units, one unit may be readily replaced with a single spare. The primaries of single-phase transformers may be connected in Y or Δ at will and the secondaries properly phased. The primaries, as well as the secondaries, of three-phase transformers must be phased.

For the transformation of moderate amounts of power from three-phase to three-phase, two transformers employing either the open delta or T connection (Fig. 15.1.67) may be used. With each connection the ratio of line voltages is the same as the transformer ratios. In the figure,


Fig. 15.1.67 Transformer connections for transforming moderate amounts of three-phase power. (a) Open delta connection; (b) T connection.

ratios of 10 : 1 are shown. In the T connection the primary and the secondary of the main transformer must be provided with a center tap to which one end of the teaser transformer is connected. The ratings of these systems are only 58 percent of the rating of the system using three similar transformers, one for each phase. Owing to dissymmetry, the terminal voltages become somewhat unbalanced even with a balanced load.

To transform from two- to three-phase or the reverse, the T connection of Fig. 15.1.68 is used. To make the secondary voltages symmetrical, a tap (called a Scott tap) is brought out at 86.6 percent (√3/2) of the primary winding of the teaser transformer, as shown in Fig. 15.1.68. With balanced no-load voltages the voltages become slightly unbalanced even under a symmetrical load, owing to unequal phase differences in the individual coils. The three-phase neutral O is one-third the winding of the teaser transformer from the junction. In Fig. 15.1.68a the transformation is from three-phase to a two-phase three-wire system. In Fig. 15.1.68b the transformation is from three-phase to a four-phase, five-wire system. The voltages are given on the basis of 100-V primaries with 1 : 1 transformer ratios.

An autotransformer, also called a compensator, consists essentially of a single winding linking a magnetic circuit. Part of the energy is transformed, and the remainder flows through conductively. Suitable taps are provided so that, if the primary voltage is applied to two of the taps, a voltage may be taken from any other two taps. The ratio of voltages is equal practically to the ratio of the turns between their taps. An autotransformer should be installed only when the ratio of transformation is not large. The ratio of power transformed to total power is 1 − n, where n is the ratio of low-voltage to high-voltage emf. This gives the saving over the ordinary transformer and is greatest when the ratio is not far from unity. Figure 15.1.69a shows 100 kW being changed from 3,300 to 2,300 V; only 30.3 kW are actually being transformed, and the remainder of the power flows through conductively. Figure 15.1.69b shows how an ordinary 10 : 1, 10-kW lighting transformer may be connected to boost 110 kW 10 percent in voltage. In Fig. 15.1.69b, however, the 230-V secondary must be insulated for 2,300 V to the core and ground. The voltage may likewise be reduced by reversing the 230-V coil. An autotransformer should never be used when it is desired to keep dangerous primary potentials from the secondary. It is used for starting induction motors (Fig. 15.1.71) and for a number of similar purposes.
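The saving can be checked directly from the ratio 1 − n for the case cited in the text:

```python
# Fraction of the load actually transformed in an autotransformer is 1 - n,
# where n is the ratio of low-voltage to high-voltage emf (see Fig. 15.1.69a).
P, V_high, V_low = 100.0, 3300.0, 2300.0   # the 100-kW, 3,300/2,300-V case in the text
n = V_low / V_high
print(f"transformed: {P*(1 - n):.1f} kW, conducted: {P*n:.1f} kW")
# -> transformed: 30.3 kW, conducted: 69.7 kW, so a much smaller core-and-coil
#    structure serves than a two-winding 100-kW transformer would need.
```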

Fig. 15.1.69 Autotransformer.

Data on Transformers  Single-phase, 55°C, self-cooled, oil-insulated transformers for 2,300-V primaries, 230/115-V secondaries, and in sizes from 5 to 200 kVA for 60 (25) Hz have efficiencies from one-half to full load of about 98 (97 to 98.7) percent and regulation of 1.5 (1.1 to 2.1) percent with pf = 1, and 3.5 (2.7 to 4.1) percent with pf = 0.8. Power transformers with 13,200-V primaries and 2,300-V secondaries in sizes from 667 to 5,000 kVA and for both 60 and 25 Hz have efficiencies from one-half to full load of about 99.0 percent and regulation of about 1.0 (4.2) percent with pf = 1 (0.8).

AC MOTORS

Polyphase Induction Motor  The polyphase induction motor is the most common type of motor used. It may be classified as squirrel cage or wound rotor and may be of the single-speed or multispeed type. Squirrel-cage motors are further classified on the basis of torque, speed, and current characteristics as Designs A, B, C, and D and for locked-rotor current in kVA/hp by Codes A to V. The motor consists ordinarily of a stator which is wound in the same manner as the synchronous-generator stator. If two-phase current is supplied to a two-phase winding or three-phase current to a three-phase winding, a rotating magnetic field is produced in the air gap. The number of poles which this field has is the same as the number of poles that a synchronous generator employing the same stator winding would have. The speed of the rotating field, or the synchronous speed, is

N = 120f/P     r/min     (15.1.118)

where f = frequency and P = number of poles.

Fig. 15.1.68 Connections for transforming from three-phase to two- and four-phase power.


There are two general types of rotors. The squirrel-cage type consists of heavy copper or aluminum bars short-circuited by end rings, or the bars and end rings may be an integral aluminum casting. The wound rotor has a polyphase winding of the same number of poles as the stator, and the terminals are brought out to slip rings so that external resistance may be introduced. The rotor conductors must be cut by the rotating field; hence the rotor cannot run at synchronous speed but must slip. The per unit slip is

s = (N − N2)/N     (15.1.119)

where N2 = the rotor speed, r/min. The rotor frequency is

f2 = sf     (15.1.120)
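As a numerical illustration of Eqs. (15.1.118) to (15.1.120), consider an assumed 6-pole, 60-Hz motor; the rotor speeds below are illustrative values, not handbook data.

```python
# Slip and rotor frequency from Eqs. (15.1.118)-(15.1.120); motor data assumed.
f, P = 60, 6                      # supply frequency, Hz, and number of poles
N = 120 * f / P                   # synchronous speed, Eq. (15.1.118): 1,200 r/min
for N2 in (1200, 1164, 600, 0):   # rotor speeds: synchronous, near full load, and locked rotor
    s = (N - N2) / N              # per unit slip, Eq. (15.1.119)
    f2 = s * f                    # rotor frequency, Eq. (15.1.120)
    print(f"N2 = {N2:5.0f} r/min  slip = {s:5.3f}  rotor frequency = {f2:4.1f} Hz")
# At 1,164 r/min the slip is 0.03 and the rotor frequency only 1.8 Hz, so the rotor
# reactance is small and the rotor current is nearly in space phase with the flux.
```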

The torque is proportional to the air-gap flux and the components of rotor current in space phase with it. The rotor currents tend to lag the emfs producing them, because of the rotor-leakage reactance. From Eq. (15.1.120) the rotor frequency and hence the rotor reactance (X2 = 2πf2L2) are low when the motor is running near synchronous speed, so that there is a large component of rotor current in space phase with the flux. With large values of slip the increased rotor frequency increases the rotor reactance and hence the lag of the rotor currents behind their emfs, and therefore considerable space-phase difference between these currents and the flux develops. Consequently, even with large values of current the torque may be small. The torque of the induction motor increases with slip until it reaches a maximum value called the breakdown torque, after which the torque decreases (see Fig. 15.1.72). The breakdown torque varies as the square of the voltage, inversely as the stator impedance and rotor reactance, and is independent of the rotor resistance.

The squirrel-cage motor develops moderate torque on starting (s = 1.0) even though the current may be three to seven times rated current. For any value of slip the torque of the induction motor varies as the square of the voltage. The torque of the squirrel-cage motor which, on starting, is only moderate may be reduced in the larger motors because of starting voltage drop from inrush and the possible necessity of applying reduced-voltage starting. Polyphase squirrel-cage motors are used mostly for constant-speed work; however, recent developments in variable-frequency pulse-width-modulation (PWM) ac drives have seen these motors used in variable-speed applications where the ratio of maximum to minimum speed does not exceed 4 : 1. They are used widely on account of their rugged construction and the absence of moving electrical contacts, which makes them suitable for operation when exposed to flammable dust or gas. General-purpose squirrel-cage motors have starting torques of 100 to 250 percent of full-load torque at rated voltage. The highest torques occur at the higher rated speeds. The locked-rotor currents vary between four and seven times full-load current.

In the double-squirrel-cage type of motor there is a high-resistance winding in the top of the rotor slots and a low-resistance winding in the bottom of the slots. The low-resistance winding is made to have a high leakage reactance, either by separating the windings with a magnetic bridge (Fig. 15.1.70a) or by making the slot very narrow in the area between the two windings (Fig. 15.1.70b). On starting, because of the high reactance of the low-resistance winding, most of the rotor current will flow in the high-resistance winding, giving the motor a large starting torque. As the rotor approaches the low value of slip at which it normally operates, the rotor frequency and hence the rotor reactance become low, and most of the rotor current now flows in the low-resistance winding. Cage bars can be shaped so that one winding gives comparable results (see Fig. 15.1.70c). The rotor operates with a low value of slip. The high starting torque of

Fig. 15.1.70 Types of slots for squirrel-cage windings.


the high-resistance motor and the excellent constant-speed operating characteristics of the low-resistance squirrel-cage rotor are combined in one motor. The single shaped cage bar is more economical, and almost any shape for required characteristics can be extruded from aluminum. Single bars also eliminate the problems of differential expansion with double cage bars. Nameplates of polyphase integral-hp squirrel-cage induction motors carry a code letter and a design letter. These provide information about motor characteristics, the former on locked rotor or starting inrush current (see Table 15.1.26) and the latter on torque characteristics. National Electrical Manufacturers Association (NEMA) standards publication No. MG1-2006 defines four design letters: A, B, C, and D. In all cases the motors are designed for full voltage starting. Locked rotor current and torque, pull-up torque and breakdown torque are tabulated according to horsepower and speed. Designs A, B, and C have full load slips less than 5 percent and design D more than 5 percent. Design B motors are the most widely used, having starting-torque and line-starting current characteristics suitable for most power systems. The nature of the various designs can be understood by reference to the full voltage values for a 100-hp, 1,800-r/min motor which follow:

                              Design
                         A        B        C        D
Locked rotor torque     125*     125*     200*     275*
Pull-up torque          100†     100†     140†      —
Breakdown torque        200†     200†     190†      —
Locked rotor current     —       600*     600*     600*
Full load slip (%)        5§       5§       5§       5‡

NOTE: All quantities, except slip, are percent of full load value. * Upper limit. [See NEC® Table 430.7(B).] † Not less than. ‡ Greater than. § Less than.
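The tabulated torques are percentages of full-load torque. For the 100-hp, 1,800-r/min example, full-load torque follows from the familiar relation hp = TN/5,252 (T in lbf·ft, N in r/min); the full-load speed of 1,750 r/min assumed below is illustrative, not taken from the table.

```python
# Convert the design-letter percentages to torque for a 100-hp, 1,800-r/min motor.
hp, n_full = 100, 1750                    # assumed full-load speed, r/min
T_fl = 5252 * hp / n_full                 # full-load torque, about 300 lbf*ft
designs = {"B": (125, 100, 200), "C": (200, 140, 190)}   # locked-rotor, pull-up, breakdown, %
for d, (lr, pu, bd) in designs.items():
    print(f"Design {d}: locked-rotor {T_fl*lr/100:.0f}, "
          f"pull-up {T_fl*pu/100:.0f}, breakdown {T_fl*bd/100:.0f} lbf*ft")
```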

Starting Polyphase induction motor controllers are commonly available for full-voltage (across-the-line) or reduced-voltage starting for single-speed or multispeed applications and may include reversing control. It is desirable to start induction motors by direct connection across the line, since reduced voltage starters are expensive and almost always reduce the starting torque. The capacity of the distribution system dictates when reduced voltage starting must be used to limit voltage dips on the system. On stiff industrial systems 25,000-hp motors have been successfully started across-the-line. In Fig. 15.1.71a is shown an “across-the-line” starter which may be operated from different push-button stations. The START push button closes the solenoid circuit between phases C and A through three bimetallic strips in series. This energizes solenoid S, which attracts armature D, which in turn closes the starting switch and the auxiliary blade G. This blade keeps the solenoid circuit closed when the START push button is released. Pressing the STOP push button opens the solenoid circuit, permitting the starting switch to open. A prolonged heavy overload raises the temperature of the heaters by an amount that will cause at least one of the overload sensing elements to open the solenoid circuit, releasing the starting switch. A common method of applying reduced-voltage start is to use a compensator or autotransformer or autostarter (Fig. 15.1.71b). When the switch is in the starting position, the three windings AB of the threephase autotransformer are connected in Y across the line and the motor terminals are connected to the taps which supply reduced voltage. When the switch is in the running position, the starter is entirely disconnected from the line. In modern practice, motors are protected by thermal or magnetic overload relays (Fig. 15.1.71) which operate to trip the circuit breaker. Since a time element is involved in the operation of such relays, they do not respond to large starting currents, because of their short duration. To limit the current to as low a value as possible, the lowest taps that will give the motor sufficient voltage to supply the


By introducing resistance into the rotor circuit through slip rings, the rotor currents may be brought nearly into phase with the air-gap flux and, at the same time, any value of torque up to maximum torque obtained. As the rotor develops speed, resistance may be cut out until there is no external resistance in the rotor circuit. The speed may also be controlled by inserting resistance in the rotor circuit. However, like the armature-resistance method of speed control with shunt motors, this method is also inefficient and gives poor speed regulation. Figure 15.1.72 shows graphically the effect on the torque of applying reduced voltage (curves b, c) and of inserting resistance in the rotor circuit (curve d). As shown by curves b and c, the torque for any given slip is proportional to the square of the line voltage. The effect of introducing resistance into the rotor circuit is shown by curve d. The point of maximum torque is shifted toward higher values of slip. The maximum torque at starting (slip  1.0) occurs when the

Fig. 15.1.72 Speed-torque curve for 10-hp, 60-Hz, 1,140-r/min induction motor.

Fig. 15.1.71 Starters for squirrel-cage induction motor. (a) Across-the-line starter; (b) autostarter.

required starting torque should be used. As the torque of an induction motor varies as the square of the voltage, the compensator produces a very low starting torque. Resistors in series with the stator may also be used to start squirrel-cage motors. They are inserted in each phase and are gradually cut out as the motor comes up to speed. The resistors are generally made of wire-type resistor units or of graphite disks enclosed within heat-resisting porcelain-lined iron tubes. The disadvantage of resistors is that if the motor is started slowly the resistor becomes very hot and may burn out. Resistor starters are less expensive than autotransformers. Their application is to motors that start with light loads at infrequent intervals. A phase-controlled, silicon-controlled rectifier (SCR) may be used to limit the motor-starting current to any value that will provide sufficient starting torque by reducing voltage to the motors. The SCR can also be used to start and stop the motors. A positive opening device such as a contactor or circuit breaker should be used in series with the SCR to stop the motor in case of a failed SCR, which will normally fail shorted and apply full voltage to the motor. The SCR starter has been combined with a microprocessor to provide gradual motor terminal voltage increases during an adjustable acceleration period. This is commonly called soft start. Other features include the ability to limit motor starting current and save energy when the motor is lightly loaded.
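Because the torque of an induction motor varies as the square of the applied voltage, the effect of an autotransformer (compensator) tap can be estimated directly. The following Python sketch illustrates the arithmetic only; the 65 percent tap and the locked-rotor figures are assumed example values, not data from this section. Magnetizing current and impedance changes are ignored.

    # Reduced-voltage starting with an autotransformer: motor current falls
    # with the tap ratio, line current falls again by the tap ratio, and
    # starting torque falls as the square of the tap ratio.
    # All numbers below are assumed illustrative values.
    full_voltage_lr_current = 6.0   # locked-rotor current, per unit of full-load current
    full_voltage_lr_torque = 1.5    # locked-rotor torque, per unit of full-load torque
    tap = 0.65                      # autotransformer tap (65 percent of line voltage)

    motor_current = full_voltage_lr_current * tap        # current drawn by the motor
    line_current = motor_current * tap                   # current drawn from the line
    starting_torque = full_voltage_lr_torque * tap**2    # torque at reduced voltage

    print(f"Motor current at {tap:.0%} tap:   {motor_current:.2f} per unit")
    print(f"Line current at {tap:.0%} tap:    {line_current:.2f} per unit")
    print(f"Starting torque at {tap:.0%} tap: {starting_torque:.2f} per unit")

With these assumed figures the line current drops to about 2.5 per unit but the starting torque drops to about 0.6 per unit, which is why the lowest tap giving adequate torque is chosen.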

rotor resistance is equal to the rotor reactance at standstill. The wound-rotor motor is used where large starting torque is necessary as in railway work, hoists, and cranes. It has better starting characteristics than the squirrel-cage motor, but, because of the necessarily higher resistance of the rotor, it has greater slip even with the rotor resistance all cut out. Obviously, the wound rotor, controller, and external resistance make it more expensive than the squirrel-cage type. See Table 15.1.13 for performance data. One disadvantage of induction motors is that they take lagging current, and the power factor at half load and less is low. The speed- and torque-load characteristics of induction motors are almost identical with those of the shunt motor. The speed decreases slightly to full load, the slip being from 10 percent in small motors to 2 percent in very large motors. The torque is almost proportional to the load nearly up to the breakdown torque. The power factor is 0.8 to 0.9 at full load. The direction of rotation of any three-phase motor may be reversed by interchanging any two stator wires.
Speed Control of Induction Motors The induction motor inherently is a constant-speed motor. From Eqs. (15.1.118) and (15.1.119) the rotor speed is

N2 = 120f(1 - s)/P     (15.1.121)

The speed can be changed only by changing the frequency, poles, or slip. By employing two distinct windings or by reconnecting a single winding by switching it is possible to change the number of poles. Complications prevent more than two speeds being readily obtained in this manner. Another objection to changing the number of poles is the fact that the design is a compromise, and sacrifices of desirable characteristics usually are necessary at both speeds. Three-phase, two-speed squirrel-cage motors of either consequent pole or separate winding types are commonly used for applications such as machine tools, fans, blowers, and refrigeration compressors in designs for constant horsepower and for constant or variable torque. The change of slip by introducing resistance into the rotor circuit has been discussed under the wound-rotor motor.
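A short numerical check of Eq. (15.1.121) may help fix ideas; the 60-Hz, 6-pole motor and the 3 percent slip below are assumed example values.

    def rotor_speed(f_hz, poles, slip):
        """Rotor speed N2 = 120 f (1 - s) / P from Eq. (15.1.121), in r/min."""
        return 120.0 * f_hz * (1.0 - slip) / poles

    f = 60.0      # supply frequency, Hz (assumed)
    P = 6         # number of poles (assumed)
    s = 0.03      # slip, per unit (assumed)

    n_sync = 120.0 * f / P          # synchronous speed of the rotating field
    n_rotor = rotor_speed(f, P, s)  # actual rotor speed

    print(f"Synchronous speed: {n_sync:.0f} r/min")          # 1,200 r/min
    print(f"Rotor speed at {s:.0%} slip: {n_rotor:.0f} r/min")  # 1,164 r/min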

Table 15.1.13  Performance Data for Drip-Proof Motors

3 phase, 230 V, 60 Hz, 1,750 r/min, squirrel cage

                                     Power factor, %                  Efficiency, %*
hp      Weight, lb   Current, A   1/2 load  3/4 load  Full load   1/2 load  3/4 load  Full load
1            40          3.8        45.3      64.8      66.2        63.8      71.4      75.5
2            45          6.0        54.3      67.6      76.5        75.2      79.9      81.5
5            65         14.2        61.3      73.7      80.7        77.0      80.9      81.5
10          110         26.0        66.3      77.7      83.1        83.5      86.0      86.5
20          190         53.6        69.5      78.4      80.9        85.0      87.0      86.5
40          475        101.2        69.8      79.4      83.6        86.8      88.3      88.5
100         830        230.0        83.7      88.2      89.0        92.2      92.4      91.7
200       1,270        446.0        82.5      87.8      89.2        93.5      94.2      94.1

3 phase, 230 V, 60 Hz, 1,750 r/min, wound rotor
5           155         15.6        50.7      64.1      74.0        77.6      81.0      81.4
10          220         27.0        66.8      77.5      82.5        84.3      85.4      84.8
25          495         60.0        79.2      86.6      89.4        88.1      88.6      87.8
50          650        122.0        72.4      82.3      86.8        86.7      87.9      87.7
100         945        234.0        78.4      86.2      89.4        90.0      90.4      89.8
200       2,000        446.0        79.8      87.6      90.8        90.5      92.1      92.5

3 phase, 2,300 V, 60 Hz, 1,750 r/min, squirrel cage
300       2,300         70.4        76.2      83.4      86.0        90.8      92.5      92.8
700       3,380        155.0        85.5      89.4      90.0        91.7      93.4      93.7
1,000     4,345        221.0        86.2      89.5      90.0        92.1      93.8      94.1

3 phase, 2,300 V, 60 Hz, 1,750 r/min, wound rotor
300       3,900         68          84.4      86.2      89.9        90.8      92.5      92.8
700       5,750        154          82.7      87.7      90.9        91.7      93.4      93.7
1,000     8,450        218          82.9      87.9      91.1        92.1      93.8      94.1

* High-efficiency motors are available at premium cost.
SOURCE: Westinghouse Electric Corp.

Solid-state ac drives, also referred to as adjustable-speed drives (ASDs), using pulse-width modulation (PWM) frequency converters have become widely used for speed control of ac induction motors and are available for powers up to 10,000 horsepower. AC drives using line-commutated rectifiers (LCI) are used for speed control of synchronous motors and, to a lesser degree, induction motors. PWM drives are widely used in industrial applications for pumps, blowers, fans, and compressors. Benefits in energy savings, reduced maintenance, and reduced mechanical and thermal stresses can be realized. Important issues to be considered in applying PWM drives include, but are not limited to: turndown ratio, harmonic effects on electrical networks, impact on the mechanical system, condition of existing motors retrofitted to PWM control, use of insulating bearings to minimize circulating currents in motors, distance from drive to motor, and increased frame size when motors are operated at less than rated speed. AC drive suppliers may have specific installation requirements or recommendations on motor feeder cable type and grounding practices. In many applications the motor/drive combination is engineered and supplied as a system. Polyphase voltages should be evenly balanced to prevent phase current unbalance. If voltages are not balanced, the motor must be derated in accordance with National Electrical Manufacturers Association (NEMA), Publication No. MG-1-14.34. Approximately 5 percent voltage unbalance would cause about 25 percent increase in temperature rise at full load. The input current unbalance at full load would probably be 6 to 10 times the input voltage unbalance. Single phasing, one phase open, is the ultimate unbalance and will cause overheating and burnout if the motor is not disconnected from the line. Single-Phase Induction Motor Single-phase induction motors are usually made in fractional horsepower ratings, but they are listed by

NEMA in integral ratings up to 10 hp. They have relatively high rotor resistances and can operate in the single-phase mode without overheating. Single-phase induction motors are not self-starting. However, the single-phase motor runs in the direction in which it is started. There are several methods of starting single-phase induction motors. Short-circuited turns, or shading coils, may be placed around the pole tips which retard the time phase of the flux in the pole tip, and thus a weak torque in the direction of rotation is produced. A high-resistance starting winding, displaced 90 electrical degrees from the main winding, produces poles between the main poles and so provides a rotating field which is weak but is sufficient to start the motor. This is called the split-phase method. In order to minimize overheating this winding is ordinarily cut out by a centrifugal device when the armature reaches speed. In the larger motors a repulsion-motor start is used. The rotor is wound like an ordinary dc armature with a commutator, but with short-circuited brushes pressing on it axially rather than radially. The motor starts as a repulsion motor, developing high torque. When it nears its synchronous speed, a centrifugal device pushes the brushes away from the commutator, and at the same time causes the segments to be short-circuited, thus converting the motor into a single-phase induction motor. Capacitor Motors Instead of splitting the phase by means of a highresistance winding, it has become almost universal practice to connect a capacitor in series with the auxiliary winding (which is displaced 90 electrical degrees from the main winding). With capacitance, it is possible to make the flux produced by the auxiliary winding lead that produced by the main field winding by 90 so that a true two-phase rotating field results and good starting torque develops. However, the 90 phase relation between the two fields is obtainable at only one value of speed (as at starting), and the phase relation changes as the motor comes up to speed. Frequently the auxiliary winding is disconnected either by a centrifugal switch or a relay as the motor approaches full speed, in


which case the motor is called a capacitor-start motor. With proper design the auxiliary winding may be left in circuit permanently (frequently with additional capacitance introduced). This improves both the power factor and torque characteristics. Such a motor is called a permanent-split capacitor motor.
Phase Converter If a polyphase induction motor is operating single-

phase, polyphase emfs are generated in its stator by the combination of stator and rotor fluxes. Such a machine can be utilized, therefore, for converting single-phase power into polyphase power and, when so used, is called a phase converter. Unless corrective means are utilized, the polyphase emfs at the machine terminals are somewhat unbalanced. The power input, being single-phase and at a power factor less than unity, not only fluctuates but is negative for two periods during each cycle. The power output being polyphase is steady, or nearly so. The cyclic differences between the power output and the power input are accounted for in the kinetic energy stored in the rotating mass of the armature. The armature accelerates and decelerates, but only slightly, in accordance with the difference between output and input. The phase converter is used principally on railway locomotives, since a single trolley wire can be used to deliver single-phase power to the locomotive, and the converter can deliver three-phase power to the three-phase woundrotor driving motors. PWM-type adjustable-speed drives are also available for small motor applications that can provide variable-speed control of a three-phase motor from a one- or two-phase voltage source. AC Commutator Motors Inherently simple ac motors are not adapted to high starting torques and variable speed. There are a number of types of commutating motor that have been developed to meet the requirement of high starting torque and adjustable speed, particularly with single phase. These usually have been accompanied by compensating windings, centrifugal switches, etc., in order to overcome low power factors and commutation difficulties. With proper compensation, commutator motors may be designed to operate at a power factor of nearly unity or even to take leading current. One of the simplest of the single-phase commutator motors is the ac series railway motor such as is used on the erstwhile New York, New Haven, and Hartford Railroad. It is based on the principle that the torque of the dc series motor is in the same direction irrespective of the polarity of its line terminals. This type of motor must be used on low frequency, not over 25 Hz, and is much heavier and more costly than an equivalent dc motor. The torque and speed curves are almost identical with those of the dc series motor. Unlike most ac apparatus the power factor is highest at light load and decreased with increasing load. Such motors operate with direct current even better than with alternating current. For example, the New Haven locomotives also operate from the 600-V dc third-rail system (two motors in series) from the New York City line (238th St.) into Grand Central Station. (See also Sec. 11.) On account of difficulties inherent in ac operation such as commutation and high reactance drops in the windings, it is economical to construct and operate such motors only in sizes adaptable to locomotives, the ratings being of the order of 300 to 400 hp. Universal motors are small simple series motors, usually of fractional horsepower, and will operate on either direct or alternating current, even at 60 Hz. They are used for vacuum cleaners, electric drills, and small utility purposes. Synchronous Motor Just as dc shunt generators operate as motors, a synchronous generator, connected across a suitable ac power supply, will operate as a motor and deliver mechanical power. 
Each conductor on the stator must be passed by a pole of alternate polarity every half cycle so that at constant frequency the rpm of the motor is constant and is equal to

N = 120f/P     r/min     (15.1.122)

and the speed is independent of the load. There are two types of synchronous motors in general use: the slip-ring type and the brushless type. The motor field current is transmitted to the motor by brushes and slip rings on the slip-ring type. On the brushless type it is generated by a shaft-mounted exciter and rectified and controlled by shaft-mounted static devices. Eliminating the slip rings is advantageous in dirty or hazardous areas.
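Equation (15.1.122) fixes the available synchronous speeds at a given frequency; the quick tabulation below (a 60-Hz supply and an arbitrary selection of even pole numbers are assumed) shows why low-speed synchronous motors need many poles.

    # Synchronous speed N = 120 f / P, Eq. (15.1.122).
    f = 60.0  # supply frequency, Hz (assumed)

    for poles in (2, 4, 6, 8, 12, 24, 48):
        n = 120.0 * f / poles
        print(f"{poles:2d} poles -> {n:6.0f} r/min")
    # 2 poles -> 3,600 r/min; 48 poles -> 150 r/min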

The synchronous motor has the desirable characteristic that its power factor can be varied over a wide range merely by changing the field excitation. With a weak field the motor takes a lagging current. If the load is kept constant and the excitation increased, the current decreases (Fig. 15.1.73) and the phase difference between voltage and current becomes less until the current is in phase with the voltage and the power factor is unity. The current is then at its minimum value such as I0, and the corresponding field current is called the normal excitation. Further increase in field current causes the armature current to lead and the power factor to decrease. Thus underexcitation causes the current to lag; overexcitation causes the current to lead. The effect of varying the field current at constant values of load is shown by the V curves (Fig. 15.1.73). Unity power factor occurs at the minimum value of armature current, corresponding to normal excitation. The power factor for any point such as P is I0/I1, leading current. Because of its adjustable power factor, the motor is frequently run light merely to improve power factor or to control the voltage at some part of a power system. When so used the motor is called a synchronous condenser. The motor may, however, deliver mechanical power and at the same time take either leading or lagging current.

Fig. 15.1.73 V curves of a synchronous motor.
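Since an overexcited synchronous motor or condenser draws leading current, the reactive power it must supply to raise a plant's power factor can be estimated from the load kilowatts alone. The sketch below is only an illustration; the 1,000-kW load, 0.8 lagging initial power factor, and 0.95 target are assumed values.

    import math

    # Leading reactive power an overexcited synchronous machine must supply
    # to correct a lagging load. All load figures are assumed examples.
    p_kw = 1000.0        # real power of the plant load, kW (assumed)
    pf_initial = 0.80    # existing lagging power factor (assumed)
    pf_target = 0.95     # desired power factor (assumed)

    q_initial = p_kw * math.tan(math.acos(pf_initial))  # lagging kvar of the load
    q_target = p_kw * math.tan(math.acos(pf_target))    # kvar permitted at target pf
    q_condenser = q_initial - q_target                  # leading kvar to be supplied

    print(f"Load reactive power:   {q_initial:.0f} kvar")
    print(f"Permitted at 0.95 pf:  {q_target:.0f} kvar")
    print(f"Synchronous machine must supply about {q_condenser:.0f} kvar leading")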

Synchronous motors are used to drive centrifugal and axial compressors, usually through speed increasers, pumps, fans, and other highhorsepower applications where constant speed, efficiency, and power factor correction are important. Low-speed synchronous motors, under 600 r/min, sometimes called engine type, are used in driving reciprocating compressors and in ball mills and in other slow-speed applications. Their low length-to-diameter ratio, because of the need for many poles, gives them a high moment of inertia which is helpful in smoothing the pulsating torques of these loads. If the motor field current is separately supported by a battery or constant voltage transformer, the synchronous motor will maintain speed on a lower voltage dip than will an induction motor because torque is proportional to voltage rather than voltage squared. However, if a synchronous motor drops out of step, it will normally not have the ability to reaccelerate the load, unless the driven equipment is automatically unloaded. The synchronous motor is usually not used in smaller sizes since both the motors and its controls are more expensive than induction motors, and the ability of a small motor to supply VARs to correct power factor is limited. If situated near an inductive load the motor may be overexcited, and its leading current will neutralize entirely or in part the lagging quadrature current of the load. This reduces the I 2R loss in the transmission

Table 15.1.14  Performance Data for Coupled Synchronous Motors

Unity power factor, 3 phase, 60 Hz, 2,300 V

                                                              Efficiencies, %
Power, hp  Poles  Speed, r/min  Current, A  Excitation, kW  1/2 load  3/4 load  Full load  Weight, lb
500          4       1,800         100           3            94.5      95.2      95.3       5,000
2,000        4       1,800         385           9            96.5      97.1      97.2      15,000
5,000        4       1,800         960          13            96.5      97.3      97.5      27,000
10,000       6       1,200       1,912          40            97.5      97.9      98.0      45,000
500         18         400          99.3         5            92.9      93.9      94.3       7,150
1,000       24         300         197           8.4          93.7      94.6      95.0      15,650
4,000       48         150         781          25            94.9      95.6      95.6      54,000

80% power factor, 3 phase, 60 Hz, 2,300 V
500          4       1,800         127           4.5          93.3      94.0      94.1       6,500
2,000        4       1,800         486          13            95.5      96.1      96.2      24,000
5,000        4       1,800       1,212          21            95.5      96.3      96.5      37,000
10,000       6       1,200       2,405          50            96.8      97.3      97.4      70,000
500         18         400         125           7.2          92.4      93.4      93.6       9,500
1,000       24         300         248          11.6          93.3      94.2      94.4      17,500
4,000       48         150         982          40            94.6      95.3      95.5      11,500

SOURCE: Westinghouse Electric Corp.

lines and also increases the kilowatt ratings of the system apparatus. The synchronous condenser and motor can also be used to control voltage and to stabilize power lines. If the condenser or motor is overexcited, its leading current flowing through the line reactance causes a rise in voltage at the motor; if it is underexcited, the lagging current flowing through the line reactance causes a drop in voltage at the motor. Thus within limits it becomes possible to control the voltage at the end of a transmission line by regulating the fields of synchronous condensers or motors. Long 220-kV lines and the 287-kV Hoover Dam–Los Angeles line require several thousand kVa in synchronous condensers floating at their load ends merely for voltage control. If the load becomes small, the voltage would rise to very high values if the synchronous condensers were not underexcited, thus maintaining nearly constant voltage. See Table 15.1.14 for characteristics. A salient-pole synchronous motor may be started as an induction motor. In laminated-pole machines conducting bars of copper, copper alloy, or aluminum, damper or amortisseur windings are inserted in the pole face and short-circuited at the ends, exactly as a squirrel-cage winding in the induction motor is connected. The bars can be designed only for starting purposes since they carry no current at synchronous speed and have no effect on efficiency. In solid-pole motors a block of steel is bolted to the pole and performs the current-carrying function of the damper winding in the laminated-pole motor. At times the pole faces are interconnected to minimize starting-pulsating torques. When the synchronous motor reaches 95 to 98 percent speed as an induction motor, the motor field is applied by a timer or slip frequency control circuit, and the motor pulls into step at 100 percent speed. While accelerating, the motor field is connected to resistances to minimize induced voltages and currents. Two-pole motors are built as turbine type or round-rotor motors for mechanical strength and do not have the thermal capacity or space for starting windings, so they must be started by supplementary means. One such supplementary starting means is the use of a variablefrequency source, either a variable-speed generator or more commonly a static converter-inverter. The motor is brought up to speed in synchronism with a slowly increasing frequency. One common application is the starting of the large motor-generators used in pump-storage utility systems. Variable frequency may be used to start salient-pole machines also. Requirements for a start without high torques and pulsations or high voltage drops on small electrical systems may dictate the use of something other than full voltage starting. The synchronous reluctance motor is similar to an induction machine with salient poles machined in the rotors. Under light loads the motor

will synchronize on reluctance torque and lock in step with the rotating field at synchronous speed. These motors are used in small sizes with variable-frequency inverters for speed control in the paper and textile industry. The synchronous-induction motor is fundamentally a wound-rotor slipring induction motor with an air gap greater than normal, and the rotor slots are larger and fewer. On starting, resistance is inserted in the rotor circuit to produce high torque, and this is cut out as the speed increases. As synchronism is approached, the rotor windings are connected to a dc power source and the motor operates synchronously. Timing or clock motors operate synchronously from ac power systems. Figure 15.1.74a illustrates the Warren Telechron motor which operates on the hysteresis principle. The stator consists of a laminated element with an exciting coil, and each pole piece is divided, a short-circuited shading turn being placed on each of the half poles so formed. The rotor consists of two or more hard-steel disks of the shape shown, mounted on a small shaft. The shaded poles produce a 3,600 r/min rotating magnetic field (at 60 Hz), and because of hysteresis loss, the disk follows the field just as the rotor of an induction motor does. When the rotor approaches the synchronous speed of 3,600 r/min, the rotating magnetic field takes a path along the two rotor bars and locks the rotor in with it. The rotor and the necessary train of reducing gears rotate in oil sealed in a small metal can. Figure 15.1.74b shows a subsynchronous motor. Six squirrel-cage bars are inserted in six slots of a solid cylindrical iron rotor, and the spaces between the slots form six salient poles. The motor, because of the squirrel cage, starts as an induction motor, attempting to attain

Fig. 15.1.74 Synchronous motors for timing. (a) Warren Telechron motor; (b) Holtz induction-reluctance subsynchronous motor.

Table 15.1.15  Relationships for AC-DC Conversion, Static Devices

                        Voltages, %        Currents, %
Device description      Eac     Edc        Iac     Idc      Ripple, %
1φ, half wave           100      45        100     100        121
1φ, full wave           100      90        100      90         48
3φ, full wave           100     135        100     123          4.2

the speed of the rotating field, or 3,600 r/min (at 60 Hz). However, when the rotor reaches 1,200 r/min, one-third synchronous speed, the salient poles of the rotor lock in with the poles of the stator and hold the rotor at 1,200 r/min.

AC-DC CONVERSION

Static Rectifiers Silicon devices, having replaced gas tubes, are the primary means of ac to dc or dc to ac conversion in modern installations. They are advantageous when compared to synchronous converters or motor generators because of efficiency, cost, size, weight, and reliability. Various bridge configurations for single-phase and three-phase applications are shown in Fig. 15.1.75a. Table 15.1.15 shows the relative outputs of rectifier circuits. The use of two three-phase bridges fed from an ac source consisting of a three-winding transformer with both a Δ and Y secondary winding, so that output voltages are 30° out of phase, will reduce dc ripple to approximately 1 percent. The use of silicon-controlled rectifiers (SCRs) to replace rectifiers in the various bridge configurations allows the output voltages to be varied from rated output voltage to zero. The output voltage wave will not be a sine wave but a series of square waves, which may not be suitable for some applications. Dc-to-ac conversions are shown in Fig. 15.1.75b. A newer technology of ac-to-dc conversion, commonly called switch-mode power supplies (SMPSs), is found in almost all microprocessor-based modern electronic equipment for dc supplies between 3 and 15 V. The supply voltage (120 V ac) is rectified by a single-phase full-wave bridge circuit. See Fig. 15.1.75. The output is stored in a capacitor. A switcher will then switch the dc voltage from the capacitor on and off at a high frequency, usually between 10 and 100 kHz. These high-frequency pulses are stepped down in voltage by a transformer and

rectified by diodes. The diodes' output is filtered to the dc supply required. These SMPSs are small in size, have higher efficiency, and are lower in cost. They do create harmonic and power quality problems that have to be addressed.
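The voltage ratios of Table 15.1.15 can be applied directly to estimate average dc output from a rectifier bridge. A minimal Python sketch follows; the 230-V rms ac input is an assumed example value.

    # Average dc output voltage of uncontrolled rectifier bridges, using the
    # Edc percentages of Table 15.1.15 (Edc as a percentage of rms ac input Eac).
    EDC_PERCENT = {
        "1-phase, half wave": 45,
        "1-phase, full wave": 90,
        "3-phase, full wave": 135,
    }

    e_ac = 230.0  # rms ac input voltage, V (assumed example)
    for circuit, pct in EDC_PERCENT.items():
        print(f"{circuit}: Edc = {e_ac * pct / 100:.0f} V dc")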

SYNCHRONOUS CONVERTERS

The synchronous converter is essentially a dc generator with slip rings connected by taps to equidistant points in the armature winding. Alternating current may also be taken from and delivered to the armature. The machine may be single-phase, in which case there are two slip rings and two slip-ring taps per pair of poles; it may be three-phase, in which case there are three slip rings and three slip-ring taps per pair of poles, etc. Converters are usually used to convert alternating to direct current, in which case they are said to be operating direct; they may equally well convert direct to alternating current, in which case they are said to be operating inverted. A converter will operate satisfactorily as a dc motor, a synchronous motor, a dc generator, a synchronous generator, or it may deliver direct and alternating current simultaneously, when it is called a double-current generator.

The rating of a converter increases very rapidly with increase in the number of phases owing, in part, to better utilization of the armature copper and also because of more uniform distribution of armature heating. Because of the materially increased rating, converters are nearly all operated six-phase. The rating decreases rapidly with decrease in power factor, and hence the converter should operate near unity power factor. The diametrical ac voltage is the ac voltage between two slip-ring taps 180 electrical degrees apart. With a two-pole closed winding, i.e., a winding that closes on itself when the winding is completed, the diametrical ac voltage is the voltage between any two slip-ring taps diametrically opposite each other. With a sine-voltage wave, the dc voltage is the peak of the diametrical ac voltage wave. The voltage relations for sine waves are as follows: dc volts, 141; single phase, diametrical, 100; three-phase, 87; fourphase, diametrical, 100; four phase, adjacent taps, 71; six phase, diametrical, 100; six phase, adjacent taps, 50. These relations are obtained from the sides of polygons inscribed in a circle having a diameter of 100 V, as shown in Fig. 15.1.76.

Fig. 15.1.76 EMF relations in a converter.
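The slip-ring voltages quoted above follow from the chords of a polygon inscribed in a circle whose diameter is the diametrical ac voltage (Fig. 15.1.76). A short numerical check, taking the diametrical voltage as 100 V as in the text:

    import math

    # Converter slip-ring voltages as chords of a polygon inscribed in a circle
    # of diameter 100 V (the diametrical ac voltage). A chord spanning k taps of
    # an n-tap winding has length d * sin(k * pi / n).
    def chord(diameter, n_taps, k_taps_apart):
        return diameter * math.sin(k_taps_apart * math.pi / n_taps)

    d = 100.0  # diametrical ac voltage, V (as in the text)
    print(f"dc volts:              ~{d * math.sqrt(2):.0f}")   # 141 (peak of the diametrical wave)
    print(f"three-phase:           ~{chord(d, 3, 1):.0f}")     # 87
    print(f"four-phase, adjacent:  ~{chord(d, 4, 1):.0f}")     # 71
    print(f"six-phase, adjacent:   ~{chord(d, 6, 1):.0f}")     # 50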

Fig. 15.1.75 AC-DC conversion with static devices.

Selsyns The word selsyn is an abbreviation of self-synchronizing and is applied to devices which are connected electrically, and in which an angular displacement of the rotating member of one device produces an equal angular displacement in the rotating member of the second device. There are several types of selsyns and they may be dc or ac, single-phase or polyphase. A simple and common type is shown in Fig. 15.1.77. The two stators S1, S2 are phase-wound stators, identical electrically with synchronous-generator or induction-motor stators. For simplicity Gramme-ring windings are shown in Fig. 15.1.77. The two stators are connected three-phase and in parallel. There are


Fig. 15.1.77 Selsyn system.

also two bobbin-type rotors R1, R2, with single-phase windings, each connected to a single-phase supply such as 115 V, 60 Hz. When R1 and R2 are in the same angular positions, the emfs induced in the two stators by the ac flux of the rotors are equal and opposite, there are no interchange currents between stators, and the system is in equilibrium. However, if the angular displacement of R1, for example, is changed, the magnitudes of the emfs induced in the stator winding of S1 are correspondingly changed. The emfs of the two stators then become unbalanced, currents flow from S1 to S2, producing torque on R2. When R2 attains the same angular position as R1, the emfs in the two rotors again become equal and opposite, and the system is again in equilibrium. If there is torque load on either rotor, a resultant current is necessary to sustain the torque, so that there must be an angular displacement between rotors. However, by the use of an auxiliary selsyn a current may be fed into the system which is proportional to the angular difference of the two rotors. This current will continue until the error is corrected. This is called feedback. There may be a master selsyn, controlling several secondary units. Selsyns are used for position indicators, e.g., in bridge–engine-room signal systems. They are also widely used for fire control so that from any desired position all the turrets and guns on battleships can be turned and elevated simultaneously through any desired angle with a high degree of accuracy. The selsyn itself rarely has sufficient power to perform these operations, but it actuates control through power multipliers such as amplidynes.

RATING OF ELECTRICAL MOTORS

In addition to motor Design and Code classifications, motors are assigned other classifications to meet specific industry standards and practices, such as the temperature rating of the motor at which the materials in the machine can be operated for long periods of time without deterioration. Standard ANSI/IEEE 100-2001 defines seven different classes (Class 90, 105, 130, 155, 180, 220, and Class over 220) for insulating materials based on a not-to-exceed temperature rating in degrees Celsius. For example, Class 155 insulation includes materials or combinations of materials such as silicone elastomer, mica, glass fiber, and asbestos, with bonding substances suitable for continuous operation at 155°C. Temperature class information would normally be used by wire, motor, or electrical apparatus designers in selecting insulating materials. A system of motor insulation classes is used to establish maximum temperature rise in the motor winding (determined by winding resistance) based on a 40°C ambient and 3,300-ft (1,006-m) elevation. In practice the motor manufacturer specifies the ambient temperature and insulation class. The temperature rise shall not exceed the limit for the insulation system when the motor is loaded to its rating or to its service factor load. Temperature rise by insulation class for single and polyphase motors is as follows:

                                                            Class of motor insulation system
Motor type                                                    A        B        F        H
Integral horsepower
  All motors with 1.15 service factor or higher             70°C     90°C    115°C      —
  Totally enclosed fan-cooled motors (TEFC)                 60°C     80°C    105°C    125°C
  Totally enclosed nonventilated motors (TENV)              65°C     85°C    110°C    135°C
  Motors with encapsulated windings, 1.0 service factor     65°C     85°C    110°C      —
  All other motors                                          60°C     80°C    105°C    125°C
Fractional horsepower
  All motors with 1.15 service factor or higher             70°C     90°C    115°C      —
  Totally enclosed motors (TEFC and TENV)                   65°C     85°C    110°C    135°C
  Any motor in frame size smaller than 42 frame             65°C     85°C    110°C    135°C
  All other motors                                          60°C     80°C    105°C    125°C

NOTE: Manufacturers may offer insulation systems that exceed the above classes to provide additional measures of protection—for example, corona-resistant magnet wire for use in PWM ac drive applications.
SOURCE: Table 5-20, Standard Handbook for Electrical Engineers, McGraw-Hill.
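As an illustration of how the table is read, the sketch below adds the tabulated rise to the 40°C ambient assumed above for one row of the table (integral-horsepower motors with a 1.15 service factor); the dictionary simply restates that row.

    # Permissible winding temperature rise (above a 40 C ambient) for
    # integral-horsepower motors with a 1.15 service factor, restated from
    # the first row of the temperature-rise table above.
    RISE_C = {"A": 70, "B": 90, "F": 115}   # no Class H value is listed for this row
    AMBIENT_C = 40.0

    for ins_class, rise in RISE_C.items():
        total = AMBIENT_C + rise
        print(f"Class {ins_class}: {rise} C rise -> about {total:.0f} C total winding temperature")

For Class F the total is about 155°C, consistent with the Class 155 material limit mentioned earlier.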

Application of Electrical Motors


General-purpose fractional horsepower and integral-horsepower motors are assigned a service factor, which allows the motor to be operated above its rated horsepower, when operated at its rated voltage and frequency, without damaging its insulation system. The normal service factors range from 1.40 to 1.25 for fractional to 1-hp motors, 1.15 for motors greater than 1 to 200 hp, and 1.0 for motors from 250 to 500 hp. The starting kVA of a squirrel-cage motor is indicated by a lockedrotor indicating code letter (A to V) included on the motor nameplate (see Table 15.1.16). For additional information refer to NEMA Standard MG-1 or NEC®-2005 Table 430.7(B) for values and application. The locked-rotor current is used for selecting the size of the motor disconnecting means and motor controller (starter). Required information on the motor nameplate includes manufacturer, rated volts, full-load amperes, rated frequency, number of phases if an ac motor, rated full-load speed for each speed, rated temperature rise for the insulation class (A, B, F, or H) at rated ambient temperature, time

Table 15.1.16  Maximum Rating or Setting of Motor Branch-Circuit Protective Devices and Starting-Inrush Code Letters

                                              Percent of full-load current
                                    Non-time     Dual-element        Instantaneous   Inverse time
Type of motor                       delay fuse   (time delay) fuse   breaker         breaker
Single-phase, all types,
  no code letter                       300            175                 700             250
AC motors:
  Polyphase other than wound rotor     300            175                 800             250
  Polyphase Design E or
    energy-efficient Design B          300            175               1,100             250
  High-reactance squirrel cage;
    synchronous                        300            175                 800             250
  Wound rotor                          150            150                 800             150
DC motors: constant voltage            150            150                 250             150

Code letter   kVA/hp with locked rotor*       Code letter   kVA/hp with locked rotor*
A             0–3.14                          L             9.0–9.99
B             3.15–3.54                       M             10.0–11.19
C             3.55–3.99                       N             11.2–12.49
D             4.0–4.49                        P             12.5–13.99
E             4.5–4.99                        R             14.0–15.99
F             5.0–5.59                        S             16.0–17.99
G             5.6–6.29                        T             18.0–19.99
H             6.3–7.09                        U             20.0–22.39
J             7.1–7.99                        V             22.4 and up
K             8.0–8.99

* This is a general conversion. Refer to NEC® 2005, Tables 430-251(A) and (B) for conversion tables of single and polyphase motor locked-rotor currents for selection of disconnecting means and controllers as determined by motor horsepower and voltage rating.
SOURCE: NEC 1993, Tables 430-152 and 430-7(b). Reprinted with permission from NFPA 70-1993, National Electrical Code®, Copyright © 1992, National Fire Protection Association, Quincy, Massachusetts 02269. This reprinted material is not the complete and official position of the NFPA on the referenced subject, which is represented only by the standard in its entirety.

rating (5, 15, 30, 60 minutes, or continuous), rated horsepower for each speed, locked-rotor amperes or code letter (A to V), and design letter (B, C, D, or E). Other required information includes identification of secondary volts for wound-rotor motors, field voltage and current for dc-excited synchronous motors, winding type if a dc motor, and whether the motor is thermally or impedance protected. Motors are required to be marked with any certifying agency approvals (listing) for hazardous (classified) area use. Large or special motors may also include nominal efficiency, power factor, or other information such as marine or inverter duty, as specified by the manufacturer.
The motor branch-circuit protection device provides both short-circuit and ground-fault protection. Once motor hp is known, the approximate motor full-load current can be determined from Table 15.1.17 and the maximum setting of the motor branch-circuit protection device (fuses, circuit breakers) can be selected using Table 15.1.16. The motor branch-circuit protective device must be capable of carrying the motor locked-rotor current. Where the selected value of the branch-circuit protection device does not correspond to the size of a standard fuse or nonadjustable circuit breaker, the next standard size (ampere rating) may be used.
Running overload protection is required for each motor unless the motor is impedance protected or provided with an integral thermal protector. A separate overload device is required that is responsive to motor current and selected on the basis of motor nameplate full-load current to trip at no more than 125 percent of the motor nameplate full-load rating for motors with a service factor of 1.15 or greater and motors with a marked temperature rise of 40°C. All other motors are set at 115 percent. Overload devices include electronic current-sensing relays, thermal and eutectic electromechanical relays, and, for motors larger than 1,500 hp, embedded temperature detectors. Thermal and electronic overload relays are supplied with either a fixed or adjustable class of trip curves responsive to motor current. The standard overload trip class is Class 20 and is suitable for most applications of Design B motors. Trip Class 10 is used for quick-trip applications and Class 30 for slow-trip applications.
Motor circuit conductors for a single motor in a continuous-duty application are required to have an ampacity of not less than 125 percent of the motor full-load current as determined by Table 15.1.17. Conductors

supplying several motors and other loads are sized at 125 percent of the largest load plus the full-load current of other motors, plus the ampacity of other loads.

Efficiency of Electrical Motors

Methods of determining efficiency are by direct measurement or by segregated losses. Methods are outlined in Standard Test Procedure for Polyphase Induction Motors and Generators, ANSI/IEEE Std. 112-1996; Test Procedure for Single-Phase Induction Motors, ANSI/IEEE Std. 114-2001; and Test Procedures for Synchronous Machines, IEEE Std. 115-2002. Direct measurements can be made by using calibrated motors, generators, or dynamometers for input to generators and output from motors, and precision electrical meters for input to motors and output from generators.

Efficiency = output/input     (15.1.123)
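A brief numerical illustration of Eq. (15.1.123), together with the loss-based forms that follow; the 50-hp operating point and the measured input are assumed example values.

    # Direct-measurement efficiency, Eq. (15.1.123), checked against the
    # loss-based forms. The operating point below is an assumed example.
    HP_TO_KW = 0.746

    output_kw = 50 * HP_TO_KW    # measured mechanical output (assumed 50 hp)
    input_kw = 41.5              # measured electrical input, kW (assumed)

    eff_direct = output_kw / input_kw                      # Eq. (15.1.123)

    losses_kw = input_kw - output_kw                       # total losses, here taken as the difference
    eff_from_output = output_kw / (output_kw + losses_kw)  # form of Eq. (15.1.124)
    eff_from_input = (input_kw - losses_kw) / input_kw     # form of Eq. (15.1.125)

    print(f"Efficiency (output/input):       {eff_direct:.1%}")   # about 90 percent
    print(f"Efficiency from output + losses: {eff_from_output:.1%}")
    print(f"Efficiency from input - losses:  {eff_from_input:.1%}")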

The segregated losses in motors are classified as follows: (1) stator I²R (shunt and series field I²R for dc); (2) rotor I²R (armature I²R for dc); (3) core loss; (4) stray-load loss; (5) friction and windage loss; (6) brush-contact loss (wound rotor and dc); (7) brush-friction loss (wound rotor and dc); (8) exciter loss (synchronous and dc); and (9) ventilating loss (dc). Losses are calculated separately and totaled. Measure the electrical output of the generator; then

Efficiency = output/(output + losses)     (15.1.124)

Measure the electrical input of the motors; then

Efficiency = (input - losses)/input     (15.1.125)

When testing dc motors, compensation should be made for the harmonics associated with rectified ac used to provide the variable dc

voltage to the motors. Instrumentation should be chosen to accurately reflect the rms value of currents. Temperature rise under full-load conditions may be measured by tests as outlined in the IEEE Standards referred to above.


Table 15.1.17  Approximate Full-Load Currents of Motors,* A
(See NEC® 2005, Art. 430, Tables 430-248, 430-249, 430-250 for more complete information.)

         Three-phase ac motors, squirrel-cage       Synchronous type,                 Single-phase
         and wound-rotor induction types            unity power factor                ac motors           DC motors†
hp       230 V   460 V   575 V   2,300 V           230 V   460 V   575 V   2,300 V   115 V‡   230 V‡     120 V   240 V
1/2       2.2     1.1     0.9      —                 —       —       —       —         9.8      4.9        5.4     2.7
3/4       3.2     1.6     1.3      —                 —       —       —       —        13.8      6.9        7.6     3.8
1         4.2     2.1     1.7      —                 —       —       —       —        16        8          9.5     4.7
1-1/2     6.0     3.0     2.4      —                 —       —       —       —        20       10         13.2     6.6
2         6.8     3.4     2.7      —                 —       —       —       —        24       12         17       8.5
3         9.6     4.8     3.9      —                 —       —       —       —        34       17         25      12.2
5        15.2     7.6     6.1      —                 —       —       —       —        56       28         40      20
7-1/2    22      11       9        —                 —       —       —       —        80       40         58      29
10       28      14      11        —                 —       —       —       —       100       50         76      38
15       42      21      17        —                 —       —       —       —         —        —          —      55
20       54      27      22        —                 —       —       —       —         —        —          —      72
25       68      34      27        —                53      26      21       —         —        —          —      89
30       80      40      32        —                63      32      26       —         —        —          —     106
40      104      52      41        —                83      41      33       —         —        —          —     140
50      130      65      52        —               104      52      42       —         —        —          —     173
60      154      77      62       16               123      61      49      12         —        —          —     206
75      192      96      77       20               155      78      62      15         —        —          —     255
100     248     124      99       26               202     101      81      20         —        —          —     341
125     312     156     125       31               253     126     101      25         —        —          —     425
150     360     180     144       37               302     151     121      30         —        —          —     506
200     480     240     192       49               400     201     161      40         —        —          —     675

* The values of current are for motors running at speeds customary for belted motors and motors having normal torque characteristics. Use name-plate data for low-speed, high-torque, or multispeed motors. For synchronous motors of 0.8 pf multiply the above amperes by 1.25, at 0.9 pf by 1.1. The motor voltages listed are rated voltages. Respective nominal system voltages would be 220 to 240, 440 to 480, and 550 to 600 V. For full-load currents of 208-V motors, multiply the 230-V amperes by 1.10; for 200-V motors, multiply by 1.15.
† Ampere values are for motors running at base speed.
‡ Rated voltage. Nominal system voltages are 120 and 240.
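As a hedged illustration of how the table and its footnotes are applied (the 50-hp, 460-V machines below are assumed examples): the current of a 0.8-power-factor synchronous motor is obtained from the unity-power-factor column with the 1.25 multiplier, and branch-circuit conductors for a single continuous-duty motor are sized at 125 percent of the tabulated full-load current.

    # Applying Table 15.1.17 and its footnotes (example values assumed):
    # a 50-hp, 460-V squirrel-cage motor and a 50-hp, 460-V synchronous motor
    # rated at 0.8 power factor.
    fla_squirrel_cage_460v_50hp = 65.0   # A, from the squirrel-cage 460-V column
    fla_sync_unity_pf_460v_50hp = 52.0   # A, unity-power-factor synchronous column

    # Footnote: for 0.8-pf synchronous motors multiply the tabulated amperes by 1.25.
    fla_sync_08pf = fla_sync_unity_pf_460v_50hp * 1.25

    # Branch-circuit conductors for a single continuous-duty motor must have an
    # ampacity of at least 125 percent of the full-load current.
    conductor_ampacity = 1.25 * fla_squirrel_cage_460v_50hp

    print(f"0.8-pf synchronous motor full-load current: {fla_sync_08pf:.0f} A")
    print(f"Minimum conductor ampacity for the squirrel-cage motor: {conductor_ampacity:.1f} A")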

Methods of loading are: (1) load motor with dynamometer or generator of similar capacity and run until temperatures stabilize; (2) load generator with motor-generator set or plant load and run until temperature stabilizes; (3) alternately apply dual frequencies to motor until it reaches rated temperature; (4) synchronous motor may be operated as synchronous condenser at no load with zero power factor at rated current, voltage, and frequency until temperatures stabilize.

Industrial Applications of Motors

Alternating or Direct Current The induction motor, particularly the squirrel-cage type, is preferable to the dc motor for constant-speed work, for the initial cost is less and the absence of a commutator reduces maintenance. Also there is less fire hazard in many industries, such as sawmills, flour mills, textile mills, and powder mills. The use of the induction motor in such places as cement mills is advantageous since with dc motors the grit makes the maintenance of commutators difficult. For variable-speed work like cranes, hoists, elevators, and for adjustable speeds, the dc motor characteristics are superior to induction-motor characteristics. Even then, it may be desirable to use induction motors since their less desirable characteristics are more than balanced by their simplicity and the fact that ac power is available, and to obtain dc power, conversion apparatus is usually necessary. Where both lights and motors are to be supplied from the same ac system, the 208/120-V four-wire three-phase system is now in common use. This gives 208 V three-phase for the motors, and 120 V to neutral for the lights.
Full-load speed, temperature rise, efficiency, and power factor as well as breakdown torque and starting torque have long been parameters of concern in the application and purchase of motors. Another qualification is service factor. Special enclosures, fittings, seals, ventilation systems, electromagnetic design, etc., are required when the motor is to be operated under unusual service conditions, such as exposure to (1) combustible, explosive,

abrasive, or conducting dusts, (2) lint or very dirty conditions where the accumulation of dirt might impede the ventilation, (3) chemical fumes or flammable or explosive gases, (4) nuclear radiation, (5) steam, salt laden air, or oil vapor, (6) damp or very dry locations, radiant heat, vermin infestation, or atmosphere conducive to the growth of fungus, (7) abnormal shock, vibration, or external mechanical loading, (8) abnormal axial thrust or side forces on the motor shaft, (9) excessive departure from rated voltage, (10) deviation factors of the line voltage exceeding 10 percent, (11) line voltage unbalance exceeding 1 percent, (12) situations where low noise levels are required, (13) speeds higher than the highest rated speed, (14) operation in a poorly ventilated room, in a pit, or in an inclined attitude, (15) torsional impact loads, repeated abnormal overloads, reversing or electric braking, (16) operation at standstill with any winding continuously energized, and (17) operation with extremely low structureborne and airborne noise. For dc machines, a further unusual service condition occurs when the average load is less than 50 percent over a 24-h period or the continuous load is less than 50 percent over a 4-h period. The standard direction of rotation for all nonreversing dc motors, ac single-phase motors, synchronous motors, and universal motors is counterclockwise when facing the end of the machine opposite the drive end. For dc and ac generators, the rotation is clockwise. Further information may be found in Publication No. MG-1 of the National Electrical Manufacturers Association. In applying motors the temperature rating of the branch-circuit wiring conductors must be sufficient to meet the operating environment inside the motor terminal box to prevent possible failure of the insulation inside the motor terminal box. ELECTRIC DRIVES Cranes and Hoists The dc series motor is best adapted to cranes and hoists. When the load is heavy the motor slows down automatically and develops increased torque thus reducing the peaks on the electrical


system. With light loads, the speed increases rapidly, thus giving a lively crane. The series motor is also well adapted to moving the bridge itself and also the trolley along the bridge. Where alternating current only is available and it is not economical to convert it, the slip-ring type of induction motor, with external-resistance speed control, is the best type of ac motor. Squirrel-cage motors with high-resistance end rings to give high starting torque are also used (design D motors; also see Ilgner system).
Constant-Torque Applications Piston pumps, mills, extruders, and agitators may require constant torque over their complete speed range. They may require high-starting-torque design C or D squirrel-cage motors to bring them up to speed. Where speed is to be varied while running, a variable-armature-voltage dc motor or a variable-frequency squirrel-cage induction-motor drive system may be used.
Centrifugal Pumps Low WK² and low starting torques make design B general-purpose squirrel-cage motors the preference for this application. When variable flow is required, the use of a variable-frequency power supply to vary motor speed will be energy efficient when compared to changing flow by control-valve closure to increase head.
Centrifugal Fans High WK² may require high-starting-torque design C or D squirrel-cage motors to bring the fan up to speed in a reasonable period of time. When variable flow is required, the use of a variable-frequency power supply or a multispeed motor to vary fan speed will be energy efficient when compared to closing louvers. For large fans, synchronous-motor drives may be considered for high efficiency and improved power factor.
Axial or Centrifugal Compressors For smaller compressors, say, up to 100 hp, the squirrel-cage induction motor is the drive of choice. When the WK² is high, a design C or D high-torque motor may be required. For larger compressors, the synchronous motor is more efficient and improves power factor. Where variable flow is required, the variable-frequency power supply to vary motor speed is more efficient than controlling by valve and in some applications may eliminate a gearbox by allowing the motor to run at compressor operating speed.
Pulsating-Torque Applications Reciprocating compressors, rock crushers, and hammer mills experience widely varying torque pulsations during each revolution. They usually have a flywheel to store energy, so

a high-torque, high-slip design D motor will accelerate the high WK² rapidly and allow energy recovery from the flywheel when high torque is demanded. On larger drives a slow-speed, engine-type synchronous motor can be directly connected. The motor itself supplies significant WK² to smooth out the torque and current pulsations of the system.

SWITCHBOARDS

Switchboards may, in general, be divided into four classes: direct-control panel type; remote mechanical-control panel type; direct-control truck type; electrically operated. With direct-control panel-type boards the switches, rheostats, bus bars, meters, and other apparatus are mounted on or near the board and the switches and rheostats are operated

directly, or by operating handles if they are mounted in back of the board. The voltages, for both direct current and alternating current, are usually limited to 600 V and less but may operate up to 2,500 V ac if oil circuit breakers are used. Such panels are not recommended for capacities greater than 3,000 kVA. Remote mechanical-control panel-type boards are ac switchboards with the bus bars and connections removed from the panels and mounted separately away from the load. The oil circuit breakers are operated by levers and rods. This type of board is designed for heavier duty than the direct-control type and is used up to 25,000 kVA. Direct-control truck-type switchboards for 15,000 V or less consist of equipment enclosed in steel compartments completely assembled by the manufacturers. The high-voltage parts are enclosed, and the equipment is interlocked to prevent mistakes in operation. This equipment is designed for low- and medium-capacity plants and auxiliary power in large generating stations. Electrically operated switchboards employ solenoid or motor-operated circuit breakers, rheostats, etc., controlled by small switches mounted on the panels. This makes it possible to locate the high-voltage and other equipment independently of the location of switchboard. In all large stations the switching equipment and buses are always mounted entirely either in separate buildings or in outdoor enclosures. Such equipment is termed bus structures and is electrically operated from the main control board. Switchboards should be erected at least 3 to 4 ft from the wall. Switchboard frames and structures should be grounded. The only exceptions are effectively insulated frames of single-polarity dc switchboards. For low-potential work, the conductors on the rear of the switchboard are usually made up of flat copper strip, known as bus-bar copper. The size required is based upon a current density of about 1,000 A/in2. Figure 15.1.78 gives the approximate continuous dc carrying capacity of copper bus bars for different arrangements and spacings for 35C temperature rise. Switchboards must be individually adapted for each specific electrical system. Space permits the showing of the diagrams of only three boards each for a typical electrical system (Fig. 15.1.79). Aluminum bus bars are also frequently used. Equipment of Standard Panels Following are enumerated the various parts required in the equipment of standard panels for varying services: Generator or synchronous-converter panel, dc two-wire system: 1 circuit breaker; 1 ammeter; 1 handwheel for rheostat; 1 voltmeter; 1 main switch (three-pole single throw or double throw) or 2 single-pole switches. Generator or synchronous-converter panel, dc three-wire system: 2 circuit breakers; 2 ammeters; 2 handwheels for field rheostats; 2 field switches; 2 potential receptacles for use with voltmeter; 3 switches; 1 four-point starting switch. Generator or synchronous-motor panel, three-phase three-wire system:

3 ammeters; 1 three-phase wattmeter; 1 voltmeter; 1 field ammeter; 1 double-pole field switch; 1 handwheel for field rheostats; 1 synchronizing receptacle (four-point); 1 potential receptacle (eight-point); 1 field

Fig. 15.1.78 Current-carrying capacity of copper bus bars.


Fig. 15.1.79 Switchboard wiring diagrams for generators. (a) 125-V or 250-V dc generator; (b) three-phase, synchronous generator and exciter for a small or isolated plant; (c) three-wire dc generator for a small or isolated plant. A, ammeter; AS, three-way ammeter switch; CB, circuit breaker, CT, current transformer; DR, ground detector receptacle; L, ground detector lamp; OC, overload coil; OCB, oil circuit breaker; PP, potential ring; PR, potential receptacle; PT, potential transformer; Rheo, rheostat; RS, resistor; S, switch; Sh, shunt; V, voltmeter; WHM, watthour meter.

rheostat; 1 triple-pole oil switch; 1 power-factor indicator; 1 synchronizer; 2 series transformers; 1 governor control switch. Synchronous-converter panel, three-phase: 1 ammeter; 1 power-factor indicator; 1 synchronizing receptacle; 1 triple-pole oil circuit breaker; 2 current transformers; 1 potential transformer; 1 watthour meter (polyphase); 1 governor control switch. Induction motor panel, three-phase: 1 ammeter; series transformers; 1 oil switch. Feeder panel, dc, two-wire and three-wire: 1 single-pole circuit breaker; 1 ammeter; 2 single-pole main switches; potential receptacles (1 four-point for two-wire panel; 1 four-point and 1 eight-point for three-wire panel). Feeder panel, three-wire, three-phase and single-phase: 3 ammeters; 1 automatic oil switch (three-pole for three-phase, two-pole for single-phase); 2 series transformers; 1 shunt transformer; 1 wattmeter; 1 voltmeter; 1 watthour meter; 1 handwheel for control of potential regulator. Exciter panel (for 1 or 2 exciters): 1 ammeter (2 for 2 exciters); 1 field rheostat (2 for 2 exciters); 1 four-point receptacle (2 for 2 exciters); 1 equalizing rheostat for regulator.
Switches The current-carrying parts of switches are usually designed for a current density of 1,000 A/in². At contact surfaces, the current density should be kept down to about 50 A/in².
Circuit Breakers Switches equipped with a tripping device constitute an elementary load interrupter switch. The difference between a load interrupter switch and a circuit breaker lies in the interrupting capacity. A circuit breaker must open the circuit successfully under short-circuit conditions when the current through the contacts may be several orders of magnitude greater than the rated current. As the circuit is being opened, the device must withstand the accompanying mechanical forces and the heat of the ensuing arc until the current is permanently reduced to zero.
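The quoted design densities translate directly into required cross sections. A minimal sketch follows; the 2,000-A rating is an assumed example value.

    # Required copper cross section from the design current densities quoted
    # above: about 1,000 A/in^2 for the current-carrying parts and about
    # 50 A/in^2 at the contact surfaces.
    BODY_DENSITY = 1000.0    # A per square inch, bus and switch blades
    CONTACT_DENSITY = 50.0   # A per square inch, contact surfaces

    rated_current = 2000.0   # A (assumed example)

    body_area = rated_current / BODY_DENSITY        # in^2 of copper in the blade or bus
    contact_area = rated_current / CONTACT_DENSITY  # in^2 of contact surface

    print(f"Conductor cross section: about {body_area:.1f} in^2")
    print(f"Contact surface area:    about {contact_area:.0f} in^2")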

The opening of a metallic circuit while carrying electric current causes an electric arc to form between the parting contacts. If the action takes place in air, the air is ionized (a plasma is formed) by the passage of current. When ionized, air becomes an electric conductor. The space between the parting contacts thus has relatively low voltage drop and the region close to the surface of the contacts has relatively high voltage drop. The thermal input to the contact surfaces (VI) is therefore relatively large and can be highly destructive. A major aim in circuit breaker design is to quench the arc rapidly enough to keep the contacts in a reusable state. This is done in several ways: (1) lengthening the arc mechanically, (2) lengthening the arc magnetically by driving the currentcarrying plasma sideways with a magnetic field, (3) placing barriers in the arc path to cool the plasma and increase its length, (4) displacing and cooling the plasma by means of a jet of compressed air or inert gas, and (5) separating the contacts in a vacuum chamber. By a combination of shunt and series coils the circuit breaker can be made to trip when the energy reverses. Circuit breakers may trip unnecessarily when the difficulty has been immediately cleared by a local breaker or fuse. In order that service shall not be thus interrupted unnecessarily, automatically reclosing breakers are used. After tripping, an automatic mechanism operates to reclose the breaker. If the short circuit still exists, the breaker cannot reclose. The breaker attempts to reclose two or three times and then if the short circuit still exists it remains permanently locked out. Metal-clad switch gears are highly developed pieces of equipment that combine buses, circuit breakers, disconnecting devices, controlling devices, current and potential transformers, instruments, meters, and interlocking devices, all assembled at the factory as a single unit in a compact steel enclosing structure. Such equipment may comprise truck-type circuit breakers, assembled as a unit, each housed in a separate steel compartment and mounted on a small truck to facilitate removal for inspection


Fig. 15.1.80 Three equivalent symmetrical transmission, or distribution, systems. (a) Single-phase; (b) three-phase; (c) four-phase.

and servicing. The equipment is interlocked to prevent mistakes in operation and in the removal of the unit; the removal of the unit breaks all electrical connections by suitable disconnecting switches in the rear of the compartment, and all metal parts are grounded. This design provides compactness, simplicity, ease of inspection, and safety to the operator.
High-voltage circuit breakers can be oil type, in which the contacts open under oil; air-blast type, in which the arc is extinguished by a powerful blast of air directed through an orifice across the arc and into an arc chute; SF6 type; or vacuum contact type. The tripping of high-voltage circuit breakers is initiated by an abnormal current acting through the secondary of a current transformer on an inverse-time relay in which the time of closing the relay contacts is an inverse time function of the current; i.e., the greater the current the shorter the time of closing. The breaker is tripped by a dc tripping coil, the dc circuit being closed by the relay contacts. Modern circuit breakers should open the circuit within 3 cycles from the time of the closing of the relay contacts. Vacuum circuit breakers have received wide acceptance in all fields in recent years, both for indoor work and for outdoor applications. Indoor breakers are available up to 40 kV and interrupting capacities up to 2.5 GVA. Outdoor breakers are available in ratings up to that of the EHV (extra-high-voltage) 765-kV three-pole breaker capable of interrupting 55 GVA, or 40,000-A symmetrical current. Its operating rating is 3,000 A, 765 kV. The arc is extinguished in a vacuum. Switching stations, gas-insulated and operating at 550 kV, are also in use.

Power for long-distance transmission is usually generated at 6,600, 13,200, and 18,000 V and is stepped up to the transmission voltage by Δ-Y-connected transformers. The transmission voltage is roughly 1,000 V/mi. Preferred or standard transmission voltages are 22, 33, 44, 66, 110, 132, 154, 220, 287, 330, 500, and 765 kV. High-voltage lines across country are located on private rights of way. When they reach urban areas, the power must be carried underground to the substations, which must be located near the load centers in the thickly settled districts. In many cases it is possible to go directly to underground cables, since these are now practicable up to 345 kV between three-phase line conductors (200 kV to ground). High-voltage cables are expensive in both first cost and maintenance, and it may be more economical to step down the voltage before transmitting the power by underground cables. Within a city, alternating current may be distributed from a substation at 13,200, 6,600, or 2,300 V, being stepped down to 600, 480, and 240 V, three-phase, for power and 240 to 120 V, single-phase three-wire, for lights, by transformers at the consumers' premises. Direct current at 1,200 or 600 V for railways, or 230 to 115 V for lighting and power, is supplied by motor-generator sets, synchronous converters, and rectifiers. Constant current for series street-lighting systems is obtained through constant-current transformers.

Transmission Systems

Power is almost always transmitted three-phase. The following fundamental relations apply to any transmission system. The weight of conductor required to transmit power by any given system with a given percentage power loss varies directly with the power, directly as the square of the distance, and inversely as the square of the voltage. The cross-sectional area of the conductors with a given percentage power loss varies directly with the power, directly with the distance, and inversely as the square of the voltage. For two systems of the same length transmitting the same power at different voltages and with the same power loss for both systems, the cross-sectional area and weight of the conductors will vary inversely as the square of the voltages. The foregoing relations between the cross section or weight of the conductor and transmission distance and voltage hold for all systems, whether dc, single-phase, three-phase, or four-phase. With the power, distance, and power loss fixed, all symmetrical systems having equal voltages to neutral require equal weights of conductor. Thus, the three symmetrical systems shown in Fig. 15.1.80 all deliver the same power, have the same power loss and equal voltages to neutral, and the transmission distances are all assumed to be equal. They all require the same weight of conductor, since the weights are inversely proportional to the respective resistances. (No actual neutral conductor is used.) The respective power losses are (1) 2I²R W; (2) 3(2I/3)²(3R/2) = 2I²R W; and (3) 4(I/2)²(2R) = 2I²R W, which are all equal.
Size of Transmission Conductor Kelvin's law states, "The most economical area of conductor is that for which the annual cost of energy wasted is equal to the interest on that portion of the capital outlay which can be considered proportional to the weight of copper used." In Fig. 15.1.81 are shown the annual interest cost, the annual cost of I²R loss, and the total cost as functions of circular-mils cross section for both typical overhead conductors and three-conductor cables.


Fig. 15.1.81 Most economical sizes of overhead and underground conductors.
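Kelvin's law lends itself to a quick numerical illustration. The short sketch below uses made-up annual-cost constants a (interest charge proportional to conductor area) and b (cost of the I²R energy loss for unit area); they are placeholders rather than handbook data, and the only point is that the total-cost minimum falls where the two components are equal.

```python
# Kelvin's law with hypothetical annual-cost constants (placeholders, not handbook data).
# a*A : annual interest/depreciation charge, taken proportional to conductor area A
# b/A : annual cost of the I^2*R energy loss, which falls as A grows
from math import sqrt

a = 0.004      # $/yr per cir mil (assumed)
b = 4.0e7      # $ * cir mil / yr (assumed; proportional to I^2 and to the energy cost)

areas = range(50_000, 500_001, 1_000)                 # candidate areas, cir mils
A_opt = min(areas, key=lambda A: a * A + b / A)       # numerical minimum of the total cost

print(f"numerical optimum  ~ {A_opt:,} cir mils")
print(f"Kelvin closed form = {sqrt(b / a):,.0f} cir mils")   # interest charge = loss charge
```

Because the total-cost curve is very flat near the optimum, the choice of conductor in practice can move well away from this minimum without a large cost penalty.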


Fig. 15.1.82 Three-phase power system.

Note that the total-cost curves have very flat minimums, and usually other factors, such as the character of the load and the voltage regulation, are taken into consideration.
In addition to resistance, overhead power lines have inductive reactance to alternating currents. The inductive reactance is

X = 2πf{80 + 741.1 log [(D − r)/r]} × 10⁻⁶   Ω/conductor mile   (15.1.126)

where f = frequency, D = distance between centers of conductors, in, and r = their radius, in. Table 15.1.18 gives the inductive reactance per mile at 60 Hz and the resistance of stranded and solid copper conductor. (See Table 15.1.22.)
Any symmetrical system having n conductors can be divided into n equal single-phase systems, each consisting of one wire and a return circuit of zero impedance and each having as its voltage the system voltage to neutral. Figure 15.1.82 shows a symmetrical three-phase system, with one phase detached. The load or received voltage between line conductors is E′R, so that the receiver voltage to neutral is ER = E′R/√3 V. The current is I A, the load power factor is cos θ, the line resistance and reactance are R and X per wire, and the sending-end voltage is ES. The phasor diagram is shown in Fig. 15.1.83 (compare with Fig. 15.1.63). Its solution is

ES = √[(ER cos θ + IR)² + (ER sin θ + IX)²]   (15.1.127)

[see Eq. (15.1.101)].
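As a quick check of Eq. (15.1.126), the sketch below evaluates it for the 250,000-cir-mil stranded conductor at 7-ft spacing; the result is close to the 0.723 Ω/mi listed in Table 15.1.18 (the small difference reflects rounding and the use of the physical radius rather than an effective radius).

```python
# Evaluation of Eq. (15.1.126) for one conductor of Table 15.1.18;
# values used: 250,000 cir mil stranded copper, OD 0.574 in, 7-ft spacing.
from math import pi, log10

f = 60.0                  # Hz
D = 7 * 12.0              # spacing between conductor centers, in
r = 0.574 / 2             # conductor radius, in

X = 2 * pi * f * (80 + 741.1 * log10((D - r) / r)) * 1e-6   # ohm per conductor mile
print(f"X = {X:.3f} ohm/mi")    # about 0.72 ohm/mi; Table 15.1.18 lists 0.723
```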

Fig. 15.1.83 Phasor diagram for a power line.

Figure 15.1.84 (Mershon diagram) shows the right-hand portion of Fig. 15.1.83 plotted to large scale, the arc 00 corresponding to the arc ab (Fig. 15.1.83). The abscissa 0 (Fig. 15.1.84) corresponds to point b (Fig. 15.1.83) and is the load voltage ER taken as 100 percent. The concentric circular arcs 0–40 are given in percentage of ER. To find the sending-end voltage ES for any power factor cos θ, compute first the resistance drop IR and the reactance drop IX in percentage of ER. Then follow the ordinate corresponding to the load power factor to the inner arc 00 (a, Fig. 15.1.83). Lay off the percentage IR drop horizontally to the right, and the percentage IX drop vertically upward. The arc at which the IX drop terminates (c, Fig. 15.1.83), when added to 100 percent, gives the sending-end voltage ES in percent of the load voltage ER.
EXAMPLE. Let it be desired to transmit 20,000 kW three-phase, 80 percent power factor lagging current, a distance of 60 mi. The voltage at the receiving end is 66,000 V, 60 Hz, and the line loss must not exceed 10 percent of the power delivered. The conductor spacing must be 7 ft (84 in). Determine the sending-end voltage and the actual efficiency. I = 20,000,000/(66,000 × 0.80 × √3) = 218.8 A. 3 × 218.8² × R′ = 0.10 × 20,000,000, from which R′ = 13.9 Ω, or 0.232 Ω/mi. By referring to Table 15.1.18, 250,000 cir mils copper having a resistance of 0.2278 Ω/mi may be used.

Fig. 15.1.84 Mershon diagram for determining voltage drop in power lines.

The total resistance R = 60 × 0.2278 = 13.67 Ω. The reactance X = 60 × 0.723 = 43.38 Ω. The volts to neutral at the load, ER = 66,000/√3 = 38,100 V. cos θ = 0.80; sin θ = 0.60. Using Eq. (15.1.127), ES = {[(38,100 × 0.80) + (218.8 × 13.67)]² + [(38,100 × 0.60) + (218.8 × 43.38)]²}^1/2 = 46,500 V to neutral, or √3 × 46,500 = 80,500 V between lines at the sending end. The line loss is 3 × (218.8)² × 13.67 = 1,963 kW. The efficiency η = 20,000/(20,000 + 1,963) = 0.911, or 91.1 percent.
This same line is solved by means of the Mershon diagram as follows. Let ER = 38,100 V = 100 percent. IR = 218.8 × 13.67 = 2,991 V = 7.85 percent. IX = 218.8 × 43.38 = 9,490 V = 24.9 percent. Follow the 0.80 power-factor ordinate (Fig. 15.1.84) to its intersection with the arc 00; from this point go 7.85 percent horizontally to the right and then 24.9 percent vertically. (These percentages are measured on the horizontal scale.) This last distance terminates on the 22.5 percent arc. The sending-end voltage to neutral is then 1.225 × 38,100 = 46,500 V, so that the sending-end voltage between line conductors is E′S = 46,500 × √3 = 80,530 V.
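The same worked example can be checked in a few lines of code; this is only a restatement of Eq. (15.1.127) with the 250,000-cir-mil constants from Table 15.1.18.

```python
# Check of the worked example above using Eq. (15.1.127).
from math import sqrt, acos, sin

P, V_ll, pf, miles = 20_000e3, 66_000.0, 0.80, 60.0    # W, V, -, mi
r_per_mi, x_per_mi = 0.2278, 0.723                     # ohm/mi at 7-ft spacing

I = P / (sqrt(3) * V_ll * pf)                 # line current, A
R, X = miles * r_per_mi, miles * x_per_mi     # per-conductor impedance, ohm
E_r = V_ll / sqrt(3)                          # receiving-end voltage to neutral, V
sin_th = sin(acos(pf))

E_s = sqrt((E_r * pf + I * R) ** 2 + (E_r * sin_th + I * X) ** 2)   # Eq. (15.1.127)
loss = 3 * I ** 2 * R                                               # W
eff = P / (P + loss)

print(f"I = {I:.1f} A, E_s = {E_s:,.0f} V to neutral "
      f"({sqrt(3) * E_s:,.0f} V line to line), efficiency = {eff:.1%}")
# The printout agrees with the hand calculation: about 46,500 V to neutral,
# 80,500 V between lines, and 91 percent efficiency.
```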

In Table 15.1.18 the spacing is the distance between the centers of the two conductors of a single-phase system or the distance between the centers of each pair of conductors of a three-phase system if they are equally spaced. If they are not equally spaced, the geometric mean distance (GMD) is used, where GMD = ∛(D1D2D3) (Fig. 15.1.85a). With the flat horizontal spacing shown in Fig. 15.1.85b, GMD = ∛(2D³) = 1.26D.

Fig. 15.1.85 Unequal spacing of three-phase conductors. (a) GMD = (D1D2D3)^1/3; (b) flat horizontal spacing; GMD = 1.26D.
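A minimal sketch of the GMD rule follows; the three distances are arbitrary sample values, not data from the handbook.

```python
# Geometric mean distance for unequally spaced three-phase conductors (Fig. 15.1.85).
D1, D2, D3 = 6.0, 8.0, 10.0                 # ft between conductor pairs (assumed)
GMD = (D1 * D2 * D3) ** (1 / 3)
print(f"GMD = {GMD:.2f} ft")                # use this spacing when entering Table 15.1.18

# Flat horizontal spacing D, D, 2D reduces to (2*D**3)**(1/3) = 1.26*D:
D = 7.0
print((D * D * 2 * D) ** (1 / 3), 1.26 * D)   # both print about 8.82 ft
```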

In addition to copper, aluminum cable steel-reinforced (ACSR), Table 15.1.19, is used for transmission conductor. For the same resistance it is lighter than copper, and with high voltages the larger diameter reduces corona loss. Until 1966, 345 kV was the highest operating voltage in the United States. The first 500 kV system put into operation (1966) was a 350-mi


Table 15.1.18   Resistance and Inductive Reactance per Single Conductor
(Resistance in Ω/mi; inductive reactance in Ω/mi at 60 Hz for the stated spacing, ft, between conductor centers)

Hard-drawn copper, stranded
Size, cir mils  No. of           Resistance,  Reactance at spacing of (ft):
or AWG          strands  OD, in  Ω/mi         1      2      3      4      5      6      7      8      10     12     15     20     30
500,000         37       0.814   0.1130       0.443  0.527  0.576  0.611  0.638  0.660  0.679  0.695  0.722  0.745  0.772  0.807  0.856
400,000         19       0.725   0.1426       0.458  0.542  0.591  0.626  0.653  0.675  0.694  0.710  0.737  0.760  0.787  0.822  0.871
300,000         19       0.628   0.1900       0.476  0.560  0.609  0.644  0.671  0.693  0.712  0.728  0.755  0.778  0.805  0.840  0.889
250,000         19       0.574   0.2278       0.487  0.571  0.620  0.655  0.682  0.704  0.723  0.739  0.766  0.789  0.816  0.851  0.900
0000            19       0.528   0.2690       0.497  0.581  0.630  0.665  0.692  0.714  0.733  0.749  0.776  0.799  0.826  0.861  0.917
000             7        0.464   0.339        0.518  0.602  0.651  0.686  0.713  0.735  0.754  0.770  0.797  0.820  0.847  0.882  0.931
00              7        0.414   0.428        0.532  0.616  0.665  0.700  0.727  0.749  0.768  0.784  0.811  0.834  0.861  0.896  0.945
0               7        0.368   0.538        0.546  0.630  0.679  0.714  0.741  0.763  0.782  0.798  0.825  0.848  0.875  0.910  0.959

Hard-drawn copper, solid
0000            —        0.4600  0.264        0.510  0.594  0.643  0.678  0.705  0.727  0.746  0.762  0.789  0.812  0.839  0.874  0.923
000             —        0.4096  0.333        0.524  0.608  0.657  0.692  0.719  0.741  0.760  0.776  0.803  0.826  0.853  0.888  0.937
00              —        0.3648  0.420        0.538  0.622  0.671  0.706  0.733  0.755  0.774  0.790  0.817  0.840  0.867  0.902  0.951
0               —        0.3249  0.528        0.552  0.636  0.685  0.720  0.747  0.769  0.788  0.804  0.831  0.854  0.881  0.916  0.965
1               —        0.2893  0.665        0.566  0.650  0.699  0.734  0.761  0.783  0.802  0.818  0.845  0.868  0.895  0.930  0.979

Table 15.1.19   Properties of Aluminum Cable Steel-Reinforced (ACSR)

Cir mils or AWG            No. of wires                  Cross section, in²   Total     Ω/mi of single conductor at 25°C
Aluminum    Copper equiv.  Aluminum  Steel   OD, in      Aluminum   Total     lb/mi     0 A dc   200 A, 25 Hz  200 A, 60 Hz  600 A, 25 Hz  600 A, 60 Hz
1,590,000   1,000,000      54        19      1.545       1.249      1.4071    10,777    0.0587   0.0589        0.0594        0.0592        0.0607
1,431,000   900,000        54        19      1.465       1.124      1.2664     9,699    0.0652   0.0654        0.0659        0.0657        0.0671
1,272,000   800,000        54        19      1.382       0.9990     1.1256     8,621    0.0734   0.0736        0.0742        0.0738        0.0752
1,192,500   750,000        54        19      1.338       0.9366     1.0553     8,082    0.0783   0.0785        0.0791        0.0787        0.0801
1,113,000   700,000        54        19      1.293       0.8741     0.9850     7,544    0.0839   0.0841        0.0848        0.0843        0.0857
1,033,500   650,000        54        7       1.246       0.8117     0.9170     7,019    0.0903   0.0906        0.0913        0.0908        0.0922
  954,000   600,000        54        7       1.196       0.7493     0.8464     6,479    0.0979   0.0980        0.0985        0.0983        0.0997
  874,500   550,000        54        7       1.146       0.6868     0.7759     5,940    0.107    0.107         0.108         0.107         0.109
  795,000   500,000        26        7       1.108       0.6244     0.7261     5,770    0.117    0.117         0.117         0.117         0.117
  715,500   450,000        54        7       1.036       0.5620     0.6348     4,859    0.131    0.131         0.133         0.131         0.133
  636,000   400,000        54        7       0.977       0.4995     0.5642     4,319    0.147    0.147         0.149         0.147         0.149
  556,500   350,000        26        7       0.927       0.4371     0.5083     4,039    0.168    0.168         0.168         0.168         0.168
  477,000   300,000        26        7       0.858       0.3746     0.4357     3,462    0.196    0.196         0.196         0.196         0.196
  397,500   250,000        26        7       0.783       0.3122     0.3630     2,885    0.235    0.235         0.235         0.235         0.235
  336,400   0000           26        7       0.721       0.2642     0.3073     2,442    0.278    0.278         0.278         0.278         0.278
  266,800   000            26        7       0.642       0.2095     0.2367     1,936    0.350    0.350         0.350         0.350         0.350
  0000      00             6         1       0.563       0.1662     0.1939     1,542    0.441    0.443         0.446         0.447         0.464
  000       0              6         1       0.502       0.1318     0.1537     1,223    0.556    0.557         0.561         0.562         0.579
  00        1              6         1       0.447       0.1045     0.1219       970    0.702    0.703         0.707         0.706         0.718
  0         2              6         1       0.398       0.0829     0.0967       769    0.885    0.885         0.889         0.887         0.893

SOURCE: Aluminum Co. of America.

transmission loop of the Virginia Electric and Power Company; the longest transmission distance was 170 mi. The towers, about 94 ft high, are of corrosion-resistant steel, and the conductors are 61-strand cables of aluminum alloy, rather than the usual aluminum cable with a steel core (ACSR). The conductor diameter is 1.65 in, with two "bundled" conductors per phase and 18-in spacing. The standard span is 1,600 ft, the conductor spacing is flat with 30-ft spacing between phase-conductor centers, and the minimum clearance to ground is 34 to 39 ft. To maintain a minimum clearance of 11 ft to the towers and 30-ft spacing between phases, vee insulator strings, each consisting of twenty-four 10-in disks, are used with each phase. The highest EHV system in North America is the 765-kV system in the midwest region of the United States. A dc transmission line on the west coast of the United States operating at

450 kV is transmitting power in bulk more than 800 miles. High-voltage dc transmission has a greater potential for savings and a greater ability to transmit large blocks of power longer distances than has three-phase transmission. For the same crest voltage there is a saving of 50 percent in the weight of the conductor. Because of the power stability limit due to inductive and capacitive effects (inherent with ac transmission), the ability to transmit large blocks of power long distances has not kept pace with power developments, even at the present highest ac transmission voltage of 765 kV. With direct current there is no such power stability limit. Where cables are necessary, as under water, the capacitive charging current may, with alternating current, become so large that it absorbs a large proportion, if not all, of the cable-carrying capability. For example, at 132 kV, three-phase (76 kV to ground), with a 500-MCM cable, at 36 mi the charging current at 60 Hz is equal to the entire cable capability, so that no capability remains for the load current. With direct current there is no charging current, only the negligible leakage current, and there are no ac dielectric losses. Furthermore, the dc voltage at which a given cable can operate is twice the ac voltage. The high dc transmission voltage is obtained by converting the ac power voltage to direct current by means of mercury-arc rectifiers; at the receiving end of the line the dc voltage is inverted back to a power-frequency voltage by means of mercury-arc inverters. The maximum dc transmission voltage in use in the United States, as of 1994, is 500 kV. Alternating to direct to alternating current is nonsynchronous transmission of electric power. It can be overhead or under the surface. In the early 1970s the problem of bulk electric-power transmission over

high-voltage transmission lines above the surface developed into insistent discussion of land use and environmental cost. While the maximum ac transmission voltage in use in the United States (1994) is 765 kV, ultrahigh voltage (UHV) is under consideration. Transmission voltages of 1,200 to 2,550 kV are presently being tested. The right-of-way requirements for power transmission are significantly reduced at higher voltages. For example, in one study the transmission of 7,500 MVA at 345 kV ac was found to require 14 circuits on a corridor 725 ft (221.5 m) wide, whereas a single 1,200-kV ac circuit of 7,500-MVA capacity would need a corridor only 310 ft (91.5 m) wide. Continuing research and development efforts may result in the development of more economical high-voltage underground transmission links. Sufficient bulk power transmission capability would permit power wheeling, i.e., the use of generating capacity to the east and west to serve a given locality as the earth revolves and the area of peak demand glides across the countryside.
Corona is a reddish-blue electrical discharge which occurs when the voltage gradient in air exceeds 30 kV/cm peak (21.1 kV/cm rms) at 76 cm pressure. This electrical discharge is caused by ionization of the air and becomes more or less concentrated at irregularities on the conductor surface and on the outer strands of stranded conductors. Corona is accompanied by a hissing sound; it produces ozone and, in the presence of moisture, nitrous acid. On high-voltage lines corona produces a substantial power loss, corrosion of the conductors, and radio and television interference. The fair-weather loss increases as the square of the voltage above a critical value e0 and is greatly increased by fog, smoke, rainstorms, sleet, and snow (see Fig. 15.1.86). To reduce corona, the diameter of high-voltage conductors is increased to values much greater than would be required for the necessary conductance cross section. This is accomplished by the use of hollow, segmented conductors and by the use of aluminum cable, steel-reinforced (ACSR), which often has inner layers of jute to increase the diameter. In extra-high-voltage lines (400 kV and greater), corona is reduced by the use of bundled conductors, in which each phase consists of two or three conductors spaced about 16 in (0.41 m) from one another.

Underground Power Cables

Insulations for power cables include heat-resisting, low-water-absorptive synthetic rubber compounds, varnished cloth, impregnated paper, cross-linked polyethylene thermosetting compounds, and thermoplastics


Fig. 15.1.86 Corona loss with snowstorm.

such as polyvinyl chloride (PVC) and polyethylene (PE) compounds (see Sec. 6). Properly chosen rubber-insulated cables may be used in wet locations with a nonmetallic jacket for protective covering instead of a metallic sheath. Commonly used jackets are flame-resisting, such as neoprene and PVC. Such cables are relatively light in weight, easy to train in ducts and manholes, and easily spliced. When distribution voltages exceed 2,000 V phase to phase, an ozone-resisting type of compound is required. Such rubber insulation may be used in cables carrying up to 28,000 V between lines in three-phase grounded systems. The insulation wall will be thicker than with varnished cloth, polyethylene, ethylene propylene rubber, or paper. Varnished-cloth cables are made by applying varnish-treated closely woven cloth in the form of tapes, helically, to the metallic conductor. Simultaneously a viscous compound is applied between layers which fills in any voids at laps in the taping and imparts flexibility when the cable is bent by permitting movement of one tape upon another. This type of insulation has higher dielectric loss than impregnated paper but is suitable for the transmission of power up to 28,000 V between phases over short distances. Such insulated cables may be used in dry locations with flame-resisting fibrous braid, reinforced neoprene tape, or PVC jacket and are often further protected with an interlocked metallic tape armor; but in wet locations these cables should be protected by a continuous metallic sheath such as lead or aluminum. Since varnish-cloth-insulated cable has high ozone resistance, heat resistance, and impulse strength, it is well adapted for station or powerhouse wiring or for any service where the temperature is high or where there are sudden increases in voltage for short periods. Since the varnish is not affected by mineral oils, such cables make excellent leads for transformers and oil switches. PVC is readily available in several fast, bright colors and is often chosen for color-coded multiconductor control cables. It has inherent flame and oil resistance, and as single conductor wire and cable with the proper wall thickness for a particular application, it usually does not need any outside protective covering. On account of its high dielectric constant and high power factor, its use is limited to low voltages, i.e., under 1,000 V, except for series lighting circuits. Polyethylene, because of its excellent electrical characteristics, first found use when it was adapted especially for high-frequency cables used in radio and radar circuits; for certain telephone, communication, and signal cables; and for submarine cables. Submarine telephone cables with built-in repeaters laid first in the Atlantic Ocean and then in the Pacific are insulated with polyethylene. Because of polyethylene’s thermal characteristics, the standard maximum conductor operating temperature is 75C. It is commonly used for power cables (including large use for underground residential distribution), with transmissions up to 15,000 V. Successful installations have been in service at 46 kV and some at 69 kV. The upper limit has not been reached, inasmuch as work is in progress on higher-voltage polyethylene power cables as a result of advancements in the art of compounding.

Cross-linked polyethylene is another insulation which is gaining in favor in the process field. For power cable insulations, the cross-linking process is most commonly obtained chemically. It converts polyethylene from a thermoplastic into a thermosetting material; the result is a compound with a unique combination of properties, including resistance to heat and oxidation, thus permitting an increase in maximum conductor operating temperature to 90C. The service record with this compound has been good at voltages which have been gradually increased to 35 kV. Another thermosetting material, ethylene propylene rubber (EPR), has found wide acceptance in the 5- to 35-kV range. Impregnated-paper insulation is used for very-high-voltage cables whose range has been extended to 345 kV. To eliminate the detrimental effects of moisture and to maintain proper impregnation of the paper, such cables must have a continuous metallic sheath such as lead or aluminum or be enclosed within a steel pipe; the operation of the cable depends absolutely on the integrity of that enclosure. In three-conductor belted-type cables the individual insulated conductors are surrounded by a belt or wall of impregnated paper over which the lead sheath is applied. When all three conductors are within one sheath, their inductive effects practically neutralize one another and eddy-current loss in the sheath is negligible. In the type-H cable, each of the individual conductors is surrounded with a perforated metallic covering, either aluminum foil backed with a paper tape or thin perforated metal tapes wound over the paper. All three conductors are then enclosed within the metal sheath. The metallic coverings being grounded electrically, each conductor acts as a single-conductor cable. This construction eliminates “tangential” stresses within the insulation and reduces pockets or voids. When paper tapes are wound on the conductor, impregnated with an oil or a petrolatum compound, and covered with a lead sheath, they are called solid type. Three-conductor cables are now operating at 33,000 V, and singleconductor cables at 66,000 V between phases (38,000 V to ground). In New York and Chicago, special hollow-conductor oil-filled singleconductor cables are operating successfully at 132,000 V (76,000 V to ground). In France, cables are operating at 345 kV between conductors. Other methods of installing underground cables are to draw them into steel pipes, usually without the sheaths, and to fill the pipes with oil under pressure (oilstatic) or nitrogen under 200 lb pressure. The ordinary medium-high-voltage underground cables are usually drawn into duct lines. With a straight run and ample clearance the length of cable between manholes may reach 600 to 1,000 ft. Ordinarily, the distance is more nearly 400 to 500 ft. With bends of small radius the distance must be further reduced. Cable ratings are based on the permissible operating temperatures of the insulation and environmental installation conditions. See Table 15.1.20 for ampacities.

POWER DISTRIBUTION

Distribution Systems The choice of the system of power distribution is determined by the type of power that is available and by the nature of the load. To transmit a given power over a given distance with a given power loss (I²R), the weight of conductor varies inversely as the square of the voltage. Incandescent lamps will not operate economically at voltages much higher than 120 V; the most suitable voltages for dc motors are 230, 500, and 550 V; for ac motors, standard voltages are 230, 460, and 575 V, three-phase. When power for lighting is to be distributed in a district where the consumers are relatively far apart, alternating current is used, being distributed at high voltage (2,400, 4,160, 4,800, 6,900, and 13,800 V) and transformed at the consumer's premises, or by transformers on poles or located in manholes or vaults under the street or sidewalks, to 240/120 V three-wire for lighting and domestic customers, and to 208, 240, 480, and 600 V, three-phase, for power. The first central-station power systems were built with dc generation and distribution. The economical transmission distance was short. Densely populated, downtown areas of cities were therefore the first sections to be served. Growth of electric service in the United States


Table 15.1.20  Ampacities of Insulated High-Voltage Cables in Underground Raceways
[Based on type MV-90 conductor temperature of 90°C, ambient earth temperature of 20°C, 100 percent load factor, thermal resistance (RHO) of 90, and three circuits in group]

                         Copper                                          Aluminum
Conductor size,   3-1/C conductors       1-3/C cable            3-1/C conductors       1-3/C cable
AWG or MCM        per raceway            per raceway            per raceway            per raceway
                  2,001-    5,001-       2,001-    5,001-       2,001-    5,001-       2,001-    5,001-
                  5,000 V   35,000 V     5,000 V   35,000 V     5,000 V   35,000 V     5,000 V   35,000 V
8                 56        —            59        —            44        —            46        —
6                 73        77           78        88           57        60           61        69
4                 95        99           100       115          74        77           80        89
2                 125       130          135       150          96        100          105       115
1                 140       145          155       170          110       110          120       135
1/0               160       165          175       195          125       125          140       150
2/0               185       185          200       220          145       145          160       170
3/0               210       210          230       250          160       165          180       195
4/0               235       240          265       285          185       185          205       220
250               260       260          290       310          205       200          230       245
350               315       310          355       375          245       245          280       295
500               375       370          430       450          295       290          340       355
750               460       440          530       545          370       355          425       440
1,000             525       495          600       615          425       405          495       510

NOTE: This is a general table. For other temperatures and installation conditions, see NFPA 70, 2005, National Electrical Code®, Tables 310.77 through 310.80 and associated notes.

was phenomenal in the last two decades of the nineteenth century. After 1895 when ac generation was selected for the development of power from Niagara Falls, the expansion of dc distribution diminished. The economics overwhelmingly favored the new ac system. Direct current service is still available in small pockets in some cities. In those cases, ac power is generated, transmitted, and distributed. The conversion to dc takes place in rectifiers installed in manholes near the load or in the building to be served. Some dc customers resist the change to ac service because of their need for motor speed control. Elevator and printing press drives and some cloth-cutting knives are examples of such needs. (See Low-Voltage AC Network.) Series Circuits These constant-current circuits were widely used for street lighting. The voltage was automatically adjusted to match the number of lamps in series and maintain a constant current. With the advent of HID lamps and individual photocells on street lighting fixtures which require parallel circuitry, this system fell into disuse and is rarely seen. Parallel Circuits Power is usually distributed at constant potential, and all the devices or receivers in the circuit are connected in parallel, giving a constant-potential system, Fig. 15.1.87a. If conductors of constant cross section are used and all the loads, L1, L2, etc., are operating, there will be a greater voltage IR drop per unit length of wire in the portion of the circuit AB and CD than in the other portions; also the voltage will not be the same for the different lamps but will decrease along the mains with distance from the generating end. Loop Circuits A more nearly equal voltage for each load is obtained in the loop system, Fig. 15.1.87b. The electrical distance from one generator terminal to the other through any receiver is the same as

that through any other receiver, and the voltage at the receivers may be maintained more nearly equal, but at the expense of additional conductor material. Series-Parallel Circuit For incandescent lamps the power must be at low voltage (115 V) and the voltage variations must be small. If the transmission distance is considerable or the loads are large, a large or perhaps prohibitive investment in conductor material would be necessary. In some special cases, lamps may be operated in groups of two in series as shown in Fig. 15.1.88. The transmitting voltage is thus doubled, and, for a given number of lamps, the current is halved, the permissible voltage drop (IR) in conductors doubled, the conductor resistance quadrupled, the weight of conductor material thus being reduced to 25 percent of that necessary for simple parallel operation.
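The arithmetic behind this 25 percent figure, and the 37.5 percent figure quoted below for the three-wire system, can be laid out in a few lines. The sketch below is only an illustration under simple assumptions (same delivered power, same percentage drop, full-size neutral); it is not a handbook formula.

```python
# Relative copper weight when the same power is delivered with the same
# percentage voltage drop. Conductor area scales as I*d/e (see Eq. 15.1.129),
# and weight scales with area for a fixed distance d.
base_I, base_e = 1.0, 1.0            # two-wire current and allowable drop, as references

# Series-parallel (two lamps in series): voltage doubled -> current halved,
# and the permissible drop in volts doubles.
area_sp = (base_I / 2) / (base_e * 2)            # 1/4 of the reference area per wire
weight_sp = 2 * area_sp / (2 * 1.0)              # two wires vs. two full-size wires
print(f"series-parallel copper = {weight_sp:.0%} of the two-wire system")   # 25%

# Three-wire system: same outers as above, plus a neutral of the same size.
weight_3w = (2 * area_sp + area_sp) / (2 * 1.0)
print(f"three-wire copper      = {weight_3w:.1%} of the two-wire system")   # 37.5%
```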

Fig. 15.1.88 Series-parallel system.
Three-Wire System In the series-parallel system the loads must be used in pairs and both units of the pair must have the same power rating. To overcome these objections and at the same time to obtain the economy in conductor material of operating at higher voltage, the three-wire system is used. It consists merely of adding a third wire or neutral to the system of Fig. 15.1.88, as shown in Fig. 15.1.89.

Fig. 15.1.89 Three-wire system.

Fig. 15.1.87 (a) Parallel circuit; (b) loop circuit.

If the neutral wire is of the same cross section as the two outer wires, this system requires only 37.5 percent of the copper required by an equivalent two-wire system. Since the neutral ordinarily carries less current than the outers, it is usually smaller and the ratio of copper to that of the two-wire system is even less than 37.5 percent (see Table 15.1.21). When the loads on each half of the system are equal, there will be no current in the middle or neutral wire, and the condition is the same as that shown in Fig. 15.1.88. When the loads on the two sides are unequal,


Table 15.1.21  Resistance and 60-Hz Reactance for Wires with Small Spacings, Ω, at 20°C
(See also Table 15.1.18.) Resistance is for 1,000 ft of line (2,000 ft of wire), copper. Reactance is for 1,000 ft of line (2,000 ft of wire) at 60 Hz for the distance, in inches, between centers of conductors.

AWG and size of    Resistance,   Reactance at spacing of (in):
wire, cir mils     Ω             1/2     1       2       3       4       5       6       9       12      18      24
14–4,107           5.06          0.138   0.178   0.218   0.220   0.233   0.244   0.252   0.271   0.284   0.302   —
12–6,530           3.18          0.127   0.159   0.190   0.210   0.223   0.233   0.241   0.260   0.273   0.292   —
10–10,380          2.00          0.116   0.148   0.180   0.199   0.212   0.223   0.231   0.249   0.262   0.281   —
8–16,510           1.26          0.106   0.138   0.169   0.188   0.201   0.212   0.220   0.238   0.252   0.270   0.284
6–26,250           0.790         0.095   0.127   0.158   0.178   0.190   0.201   0.209   0.228   0.241   0.260   0.272
4–41,740           0.498         0.085   0.117   0.149   0.167   0.180   0.190   0.199   0.217   0.230   0.249   0.262
2–66,370           0.312         0.074   0.106   0.138   0.156   0.169   0.180   0.188   0.206   0.220   0.238   0.252
1–83,690           0.248         0.068   0.101   0.132   0.151   0.164   0.174   0.183   0.201   0.214   0.233   0.246
0–105,500          0.196         0.063   0.095   0.127   0.145   0.159   0.169   0.177   0.196   0.209   0.228   0.241
00–133,100         0.156         0.057   0.090   0.121   0.140   0.153   0.164   0.172   0.190   0.204   0.222   0.236
000–167,800        0.122         0.052   0.085   0.116   0.135   0.148   0.158   0.167   0.185   0.199   0.217   0.230
0000–211,600       0.098         0.046   0.079   0.111   0.130   0.143   0.153   0.161   0.180   0.193   0.212   0.225
250,000            0.085         —       0.075   0.106   0.125   0.139   0.148   0.157   0.175   0.189   0.207   0.220
300,000            0.075         —       0.071   0.103   0.120   0.134   0.144   0.153   0.171   0.185   0.203   0.217
350,000            0.061         —       0.067   0.099   0.118   0.128   0.141   0.149   0.168   0.182   0.200   0.213
400,000            0.052         —       0.064   0.096   0.114   0.127   0.138   0.146   0.165   0.178   0.197   0.209
500,000            0.042         —       —       0.090   0.109   0.122   0.133   0.141   0.160   0.172   0.192   0.202
600,000            0.035         —       —       0.087   0.106   0.118   0.128   0.137   0.155   0.169   0.187   0.200
700,000            0.030         —       —       0.083   0.102   0.114   0.125   0.133   0.152   0.165   0.184   0.197
800,000            0.026         —       —       0.080   0.099   0.112   0.122   0.130   0.148   0.162   0.181   0.194
900,000            0.024         —       —       0.077   0.096   0.109   0.119   0.127   0.146   0.159   0.178   0.191
1,000,000          0.022         —       —       0.075   0.094   0.106   0.117   0.125   0.144   0.158   0.176   0.188

NOTE: For other frequencies the reactance will be in direct proportion to the frequency.

there will be a current in the neutral wire equal to the difference of the currents in the outside wires.
AC Three-Wire Distribution Practically all energy for lighting and small motor work is distributed at 1,150, 2,300, or 4,160 V ac to transformers which step down the voltage to 240 and 120 V for three-wire domestic and lighting systems, as well as to 208, 240, 480, and 600 V, three-phase, for power. For the three-wire systems the transformers are so designed that the secondary or low-voltage winding will deliver power at 240 V, and the middle or neutral wire is obtained by connecting to the center or midpoint of this winding (see Figs. 15.1.90 and 15.1.91).

Fig. 15.1.90 Three-wire generator. Disturbances of utility power have been present since the inception of the electric utility industry; however, in the past equipment was much more forgiving and less affected. Present electronic data processing equipment is microprocessor-based and consequently requires a higher quality of ac power. Although the quality of utility power in the United States is very good, sensitive electronic equipment still needs to be protected against the adverse and damaging effects of transients, surges, and other power-system aberrations.

Fig. 15.1.91 Three-wire 230/115-V ac system.

This has led to the development of technologies which condition power for use by this sensitive equipment. Power-enhancing devices (surge suppressors, isolation transformers, and line voltage regulators) modify the incoming waveform to mitigate some aberrations. Power-synthesizing devices (motor-generator sets, magnetic synthesizers, and uninterruptible power supplies) use the incoming power as a source of energy to generate a new, completely isolated waveform. Each type has advantages and limitations. Grounding The neutral wire of the secondary circuit of the transformer should be grounded on the pole (or in the manhole) and at the service switch in the building supplied. If, as a result of a lightning stroke or a fault in the transformer insulation, the transformer primary circuit becomes grounded at a (Fig. 15.1.91) and the transformer insulation between primary and secondary windings is broken down at b, and if there were no permanent ground connection in the secondary neutral wire, the potential of wire 1 would be raised 2,300 V above ground potential. This constitutes a very serious hazard to life for persons coming in contact with the 120-V system. The National Electrical Code® requires the use of a ground wire not smaller than 8 AWG copper. With the neutral grounded (Fig. 15.1.91), voltages to ground on the secondary system cannot exceed 120 V. (See National Electrical Code® 2005, Art. 250.) Feeders and Mains Where power is supplied to a large district, improved voltage regulation is obtained by having centers of distribution. Power is supplied from the station bus at high voltage to the centers of distribution by large cables known as feeders. Power is distributed from the distribution centers to the consumers through the mains and transformed to a usable voltage at the user's site. As there are no loads connected to the feeders between the generating station and centers of distribution, the voltage at the latter points may be maintained constant. Pilot wires from the centers of distribution often run back to the station, allowing the operator or automatic controls to maintain constant voltage at the centers of distribution. This system provides a means of maintaining very close voltage regulation at the consumer's premises. A common and economical method of supplying business and thickly settled districts with high load densities is to employ a 208/120-V, three-phase, four-wire low-voltage ac network. The network operates


with 208 V between outer wires giving 120 V to neutral (Fig. 15.1.92). Motors are connected across the three outer wires operating at 208 V, three-phase. Lamp loads are connected between outer wires and the grounded neutral. The network is supplied directly from 13,800-V feeders by 13,800/208-V three-phase transformer units, usually located in manholes, vaults, or outdoor enclosures. This system thus eliminates the necessity for transformation in the substation. A large number of such units feed the network, so that the secondaries are all in parallel. Each transformer is provided with an overload reverse-energy circuit breaker (network protector), so that a feeder and its transformer are isolated if trouble develops in either. This system is flexible since units can be easily added or removed in accordance with the rapid changes in local loads that occur particularly in downtown business districts.


EXAMPLE. Determine the size of conductor to supply power to a 10-hp, 220-V dc motor 500 ft from the switchboard with 5 V drop. Assume a motor efficiency of 86 percent. The motor will then require a current of (10 × 746)/(0.86 × 220) = 39.4 A. From Eq. (15.1.129), A = 21.6 × 39.4 × 500/5 = 85,100 cir mils. The next larger wire is no. 0 AWG.
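The same sizing calculation, scripted; the standard conductor areas listed are those given in Table 15.1.21.

```python
# Feeder sizing for the dc motor example above, using Eq. (15.1.129).
hp, V, eff = 10, 220.0, 0.86
d_ft, e_drop = 500.0, 5.0

I = hp * 746 / (eff * V)            # motor current, A
A = 21.6 * I * d_ft / e_drop        # required conductor area, cir mils

# A few standard sizes (cir mils) from Table 15.1.21, smallest first:
standard = {"No. 2": 66_370, "No. 1": 83_690, "No. 0": 105_500, "No. 00": 133_100}
pick = next(name for name, cm in standard.items() if cm >= A)
print(f"I = {I:.1f} A, A = {A:,.0f} cir mils -> use {pick}")   # about 85,100 cir mils, No. 0
```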

The calculation of the size of conductor for three-wire circuits is made in practically the same manner. With a balanced circuit there is no current in the neutral wire, and the current in each outside wire will be equal to one-half the sum of the currents taken by all the receiving devices connected between neutral and outside wires plus the sum of the currents taken by the receivers connected between the outside wires. Using this total current and neglecting the neutral wire, make calculations for the size of the outside wires by means of Eq. (15.1.129). The neutral wire should have the same cross section as the outside wires in interior wiring.
EXAMPLE. Determine the size of wire which should be used for the three-wire main of Fig. 15.1.93. Allowable drop is 3 V and the distance to the load center 40 ft; the circuit is loaded with two groups of receivers each taking 60 A connected between the neutral and the outside wires, and one group of receivers taking 20 A connected across the outside wires.

Fig. 15.1.92 208/120-V secondary network (single unit) showing voltages.

Voltage Drops In ac distribution systems the voltage drop from transformer to consumer in lighting mains should not exceed 2 percent in first-class systems, so that the lamps along the mains can all operate at nearly the same voltage and the annoying flicker of lamps may not occur with the switching of appliances. This may require a much larger conductor than the most economical size. In transmission lines and in feeders where there are no intermediate loads and where means of regulating the voltage are provided, the drop is not limited to the low values that are necessary with mains and the matter of economy may be given consideration.

WIRING CALCULATIONS

These calculations can be used for dc, and for ac if the reactance can be neglected. The determination of the proper size of conductor is influenced by a number of factors. Except for short distances, the minimum size of conductor shown in Table 15.1.23, which is based on the maximum permissible current for each type of insulation, cannot be used; the size of conductor must be larger so that the voltage drop IR shall not be too great. With branch circuits supplying an incandescent-lamp load, this drop should not be more than a small percentage of the voltage between wires. The National Electrical Code® 2005, Article 215.2, FPN 2, recommends that conductors for feeders, i.e., from the service equipment to the final branch-circuit overcurrent device, be sized to prevent (1) a voltage drop of more than 3 percent at the farthest outlet of power, heating, and lighting loads or combinations thereof, and (2) a maximum voltage drop on combined feeders and branch circuits to the farthest outlet of more than 5 percent. The resistance of 1 cir mil·ft of commercial copper may be taken as 10.8 Ω. The resistance of a copper conductor may be expressed as R = 10.8l/A, where l = length, ft, and A = area, cir mils. If the length is expressed in terms of the transmission distance d (since the two wires are usually run parallel), the voltage drop IR to the end of the circuit is

e = 21.6Id/A   (15.1.128)

and the size of conductor in circular mils necessary to give the permissible voltage drop e is

A = 21.6Id/e   (15.1.129)

If e is expressed as a percentage x of the voltage E between conductors, then

A = 2,160Id/xE   (15.1.130)

Fig. 15.1.93 Three-wire 230/115-V main.
Solution: load = (60 + 60)/2 + 20 = 80 A. Substituting in Eq. (15.1.129), cir mils = 21.6Id/e = 21.6 × 80 × 40/3 = 23,030 cir mils. From Tables 15.1.21 and 15.1.23, no. 6 wire, which has a cross section of 26,250 cir mils, is the next size larger. This size of wire would satisfy the voltage-drop requirements, but no. 6 wire with type TW insulation has a safe carrying capacity of 55 A at 60°C. The current in the circuit is 80 A. Therefore, no. 3 wire with type TW insulation, which has a carrying capacity of 85 A, should be used. The neutral wire should be the same size as the outside wires. See also examples in the National Electrical Code® 2005.
Wiring calculations for ac circuits require some consideration of power factor, reactance, and skin effect. Skin effect becomes pronounced only when very large conductors are used for alternating current. For interior wiring, conductors larger than 700,000 cir mils should not be used, and many prefer not to use conductors larger than 300,000 cir mils. Should the required copper cross section exceed these values, a number of conductors may be operated in parallel. For voltages under 5,000 the effect of line capacitance may be neglected. With ordinary single-phase interior wiring, where the effect of the line reactance may be neglected and where the power factor of the load (incandescent lamps) is nearly 100 percent, the calculations are made the same as for dc circuits. Three-wire ac circuits of ordinary length with incandescent-lamp loads are also determined in the same manner. When the load is other than incandescent lamps, it is necessary to know the power factor of the load in order to make calculations. When the exact power factor cannot be accurately determined, the following approximate values may be used: incandescent lamps, 0.95 to 1.00; lamps and motors, 0.75 to 0.85; motors, 0.50 to 0.80. Equation (15.1.131) gives the value of current in a single-phase circuit. See also Table 15.1.17.

I = (P × 1,000)/(E × pf)   (15.1.131)

where I = current, A; P = kW; E = load voltage; and pf = power factor of the load. The size of conductor is then determined by substituting this value of I in Eq. (15.1.129) or (15.1.130). For three-phase three-wire ac circuits the current per wire

I = 1,000P/(√3 × E × pf) = 580P/(E × pf)   (15.1.132)

Computations are usually made of voltage drop per wire (see Fig. 15.1.94). Hence, if reactance can be neglected, the conductor cross section in cir mils is one-half that given by Eq. (15.1.129). That is,

A = 10.8Id/e   cir mils   (15.1.133)


where e in Eq. (15.1.133) is the voltage drop per wire. The voltage drop between any two wires is √3 e. The percent voltage drop should be in terms of the voltage to neutral. That is, percent drop = [e/(E/√3)]100 = [√3 e/E]100 (see Fig. 15.1.82).

Fig. 15.1.94 Three-phase lamp and induction motor load.

EXAMPLE. In Fig. 15.1.94, load, 10 kW; voltage of circuit, 230; power factor, 0.85; distance, 360 ft; allowable drop per wire, 4 V. Substituting in Eq. (15.1.132), I = (580 × 10)/(230 × 0.85) = 29.7 A. Substituting in Eq. (15.1.133), A = 10.8 × 29.7 × 360/4 = 28,900 cir mils. The next larger commercially available standard-size wire (see Table 15.1.21) is 41,740 cir mils, corresponding to AWG no. 4. From Table 15.1.23 the allowable ampacity is 70 A with type TW insulation, and the wire is therefore ample in section for 29.7 A. Three no. 4 wires would be used for this circuit. From Table 15.1.21 the resistance of 1,000 ft of no. 4 copper wire is 0.249 Ω. Hence, the voltage drop per conductor e = 29.7 × (360/1,000) × 0.249 = 2.66 V. Percent voltage drop = √3 × 2.66/230 = 2.00 percent.
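The same example can be repeated numerically as below; the only difference from the hand calculation is that the exact value of 1,000/√3 is used where the text rounds it to 580, so the printed current differs in the last decimal.

```python
# Numerical repeat of the preceding three-phase example, Eqs. (15.1.132)-(15.1.133).
from math import sqrt

P_kw, E, pf, d_ft, e_per_wire = 10.0, 230.0, 0.85, 360.0, 4.0

I = 1000 * P_kw / (sqrt(3) * E * pf)          # Eq. (15.1.132), A per wire
A_req = 10.8 * I * d_ft / e_per_wire          # Eq. (15.1.133), cir mils

# No. 4 AWG: 41,740 cir mils, 0.249 ohm per 1,000 ft of single conductor (Table 15.1.21)
r_per_kft = 0.249
e_actual = I * (d_ft / 1000) * r_per_kft      # volts dropped in each wire
pct_drop = 100 * sqrt(3) * e_actual / E       # percent of line voltage

print(f"I = {I:.1f} A, required {A_req:,.0f} cir mils, "
      f"drop with No. 4 = {e_actual:.2f} V ({pct_drop:.2f} %)")
```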

Where all the wires of a circuit, two wires for a single-phase circuit, four wires for a four-phase circuit (see Fig. 15.1.32 and Fig. 15.1.80c), and three wires for a three-phase circuit, are carried in the same conduit or where the wires are separated less than 1 in between centers, the effect of line (inductive) reactance may ordinarily be neglected. Where circuit conductors are large and widely separated from one another and the circuits are long, the inductive reactance may increase the voltage drop by a considerable amount over that due to resistance alone. Such problems are treated using IR and IX phasors. Line reactance decreases somewhat as the size of wire increases and decreases as the distance between wires decreases.

Fig. 15.1.95 Single-phase induction motor load on branch circuit.
EXAMPLE. Determine the size of wire necessary for the branch to the 50-hp, 60-Hz, 250-V single-phase induction motor of Fig. 15.1.95. The name-plate rating of the motor is 195 A, and its full-load power factor is 0.85. The wires are run open and separated 4 in; length of circuit, 600 ft. Assume the line drop must not exceed 7 percent, or 0.07 × 250 = 17.5 V. The point made by this example is emphasized by the assumption of an outsize motor.
Solution. To ascertain approximately the size of conductor, substitute in Eq. (15.1.129), giving cir mils = 21.6 × 195 × 600/17.5 = 144,400. Referring to Table 15.1.21, the next larger size wire is no. 000, or 167,800 cir mils. This size would be ample if there were no line reactance. In order to allow for reactance drop, a larger conductor is selected and the corresponding voltage drop determined. Inasmuch as this is a motor branch, the code rules require that the carrying capacity be sufficient for a 25 percent overload. Therefore the conductor should be capable of carrying 195 × 1.25 = 244 A. From Table 15.1.23, a 250,000-cir mil conductor with Type THHN insulation would be required to carry 255 A at 75°C. Resistance drop (see Table 15.1.21), IR = 195 × 0.061 × 0.6 = 7.14 V; 7.14/250 = 2.86 percent. From Table 15.1.21, X = 0.128 × 0.6 = 0.0768 Ω; IX = 195 × 0.0768 = 14.98 V; 14.98/250 = 5.99 percent. Using the Mershon diagram (Fig. 15.1.84), follow the ordinate corresponding to power factor, 0.85, until it intersects the smallest circle. From this point, lay off horizontally the percentage resistance drop, 2.86. From this last point, lay off vertically the percentage reactance drop, 5.99. This last point lies about on the 6.0 percent circle, showing that with

195 A the difference between the sending-end and receiving-end voltages is 0.06 × 250 = 15.0 V, which is within the specified limits.
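The check below repeats the reactance calculation numerically; 0.061 and 0.128 Ω per 1,000 ft of line are simply the table constants quoted in the example. The paragraph that follows makes the same check by direct substitution in Eq. (15.1.127).

```python
# Script version of the reactance check for the 50-hp branch circuit.
from math import sqrt

I, E, pf, length_kft = 195.0, 250.0, 0.85, 0.600
r, x = 0.061, 0.128                       # ohm per 1,000 ft of line (from Table 15.1.21)

IR, IX = I * r * length_kft, I * x * length_kft
sin_th = sqrt(1 - pf ** 2)

E_s = sqrt((E * pf + IR) ** 2 + (E * sin_th + IX) ** 2)   # Eq. (15.1.127)
print(f"IR = {IR:.2f} V ({100*IR/E:.2f}%), IX = {IX:.2f} V ({100*IX/E:.2f}%)")
print(f"sending-end voltage = {E_s:.1f} V, i.e. {100*(E_s-E)/E:.1f}% above 250 V")
# Prints roughly 2.9% and 6.0% drops and about 264 V, matching the example.
```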

Also Eq. (15.1.127) may be used: cos θ = 0.85; sin θ = 0.527.

ES = {[(250 × 0.85) + 7.14]² + [(250 × 0.527) + 14.98]²}^1/2 = 264.3 V
264.3/250 = 105.7 percent

In the calculation of three-phase three-wire circuits where line reactance must be considered, the method found above under Power Transmission may be used. The system is considered as being three single-phase systems having a ground return the resistance and inductance of which are zero, and the voltages are equal to the line voltages divided by √3. When the three conductors are spaced unequally, the value of GMD given in Fig. 15.1.85 should be used in Tables 15.1.18 and 15.1.21. (When the value of resistance or reactance per 1,000 ft of conductor is desired, the values in Table 15.1.21 should be divided by 2.) The National Electrical Code® specifies that the size of conductors for branch circuits should be such that the voltage drop should not exceed 3 percent to the farthest outlet for power, heating, lighting, or a combination thereof, requiring further that the total voltage drop for feeders and branch circuits should not exceed 5 percent overall (NEC®, Article 210.19, FPN No. 4). For examples of calculations for interior wiring, see National Electrical Code® (Chap. 9).

INTERIOR WIRING

Interior wiring requirements are based, for the most part, on the National Electrical Code® (NEC®), which has been adopted by the National Fire

Protection Association, American National Standards Institute (ANSI), and the Occupational Safety and Health Act (OSHA). The Occupational Safety and Health Act of 1970 (OSHA) made the National Electrical Code® a national standard. Conformance with the NEC became a requirement in most commercial, industrial, agricultural, etc., establishments in the United States. Some localities may not accept the latest edition of NEC® standards. In those cases, local rules must be followed. NEC® authority starts at the point where the connections are made to the conductor of the service drop (overhead) or lateral (underground) from the electricity supply system. The service equipment must have a rating not less than the load to be carried (computed according to NEC® methods). Service equipment is defined as the necessary equipment, such as circuit breakers or fused switches and accompanying accessories. This equipment must be located near the point of entrance of supply conductors to a building or other structure or an otherwise defined area. Service equipment is intended to be the main control and means of cutoff of the supply. Service-entrance conductors connect the electricity supply to the service equipment. Service-entrance conductors running along the exterior or entering a building or other structure may be installed (1) as separate conductors, (2) in approved cables, (3) as cable bus, or (4) enclosed in rigid conduit. Also, for voltages less than 600 V, the conductors may be installed in electrical metallic tubing; IMC, ENT, and MC cable wireways; auxiliary gutters; or busways. Service-entrance cables which are subject to physical damage from awnings, swinging signs, coal chutes, etc., shall be of the protected type or be protected by conduit, electrical metallic tubing, etc. Service heads must be raintight. Thermoplastic thermoset insulation is required in overhead services. A grounded conductor may be bare. If exposed to the weather or embedded in masonry, raceways must be raintight and arranged to drain. Underground service raceway or duct entering from an underground distribution system must be sealed with a suitable compound (spare ducts, also). NEC® rules permit multiple service to a building for various reasons, such as: (1) fire pumps, (2) emergency light and power, (3) multiple occupancy, (4) when the calculated load is greater than 2,000 A, (5) when the building extends over a large area, and (6) where different voltages, frequencies, number of phases, or classes of use are required.


Ordinary service drops (overhead) and lateral (underground) must be large enough to carry the load but not smaller than no. 8 copper or no. 6 aluminum. As an exception, for installations to supply only limited loads of a single branch circuit, such as small polyphase power, etc., service drops must not be smaller than no. 12 hard-drawn copper or equivalent, and service laterals must be not smaller than no. 12 AWG copper or no. 10 AWG aluminum. The phrase large enough to carry the load requires elaboration. The various conductors of public-utility electric-supply systems are sized according to the calculations and decisions of the personnel of the specific public utility supplying the service drop or lateral. At the load end of the drop or lateral, the NEC® rules apply, and from that point on into the consumer’s premises, NEC® rules are the governing authority. There is a discontinuity at this point in the calculation of combined load demand for electricity and allowable current (ampacity) of conductors, cables, etc. This discontinuity in calculations results from the fact that the utility company operates locally, whereas the NEC® is a set of national standards and therefore cannot readily allow for regional differences in electrical coincident demand, ambient temperature, etc. The NEC’s aim is the assurance of an electrically “safe” human environment. This will be fostered by following the NEC® rules. Service-entrance cables are conductor assemblies which bear the type codes SE (for overhead services) and USE (for underground services). Under specified conditions, these cables may also be used for interior feeder and branch-circuit wiring. The service-entrance equipment must have the capability of safely interrupting the current resulting from a short circuit at its terminals. Available short-circuit current is the term given to the maximum current that the power system can deliver through a given circuit to any negligible-impedance short circuit applied at a given point. (This value can be in terms of symmetrical or asymmetrical, momentary or clearing current, as specified.) In most instances, the available short-circuit current is limited by the impedance of the last transformer in the supply system and connecting wiring. Large power users, however, must become aware of changes in the electricity supply system which, because of growth of system capacity or any other reason, would increase the short-circuit current available to their service-entrance equipment. If this current is too great, explosive failure can result in a hazard to personnel and equipment.

Fig. 15.1.96 Motor and wiring protection.

Kilowatthour and sometimes demand-metering equipment are connected to the service-entrance conductors. Proceeding toward the utilization equipment, the power-supply system fans out into feeders and branch circuits (see Fig. 15.1.96). Each of the feeders, i.e., a run of untapped conductor or cable, is connected to the supply through a switch and fuses or a circuit breaker. At a point, usually near that portion of the electrical loads which are to be supplied, a panelboard or perhaps a load-center assembly of switching and/or control equipment is installed. From this panel box or load-center assembly, circuits originate; i.e., circuits are installed to extend into the area being served to connect electrical machinery or devices or make available electric receptacles connected to the source of electric power. Each feeder and each branch circuit will have its own over-current protection and disconnect means in the form of a fuse and switch combination or a circuit breaker.


Wiring Methods

There is a provision in the NEC® rules for the following types of feeder and branch-circuit wiring:
1. Open Wiring on Insulators (NEC® 2005, Art. 398). This wiring method uses approved cleats, knobs, tubes, and flexible tubing for the protection and support of insulated conductors run on or in buildings and not concealed by the building structure. It is permitted only in industrial or agricultural establishments.
2. Concealed Knob-and-Tube Work (NEC® 2005, Art. 394). Concealed knob-and-tube work may be used in the hollow spaces of walls and ceilings. It may be used only for the extension of existing installations.
3. Flat Conductor Cable, Type FCC (NEC® 2005, Art. 324). Type FCC cable may be installed under carpet squares. It may not be used outdoors or in wet locations, in corrosive or hazardous areas, or in residential, school, or hospital buildings.
4. Mineral-Insulated Metal-Sheathed Cable, Type MI (NEC® 2005, Art. 332). Type MI cable contains one or more electrical conductors insulated with a highly compressed refractory mineral insulation and enclosed in a liquid- and gastight continuous copper or steel sheath. Appropriate approved fittings must be used with it. It may be used for services, feeders, and branch circuits either exposed or concealed and dry or wet. It may be used in Class I, II, or III hazardous locations. It may be used for under-plaster extensions and embedded in plaster finish or brick or other masonry. It may be used where exposed to weather or continuous moisture, for underground runs and embedded in masonry, concrete, or fill, in buildings in the course of construction, or where exposed to oil, gasoline, or other conditions. If the environment would cause destruction of the sheath, it must be protected by suitable materials.
5. Power and Control Tray Cable (NEC® 2005, Art. 336). Type TC cable is a factory assembly of two or more insulated conductors, with or without associated bare or covered grounding conductors, under a nonmetallic jacket, approved for installation in cable trays, in raceways, or where supported by messenger wire.
6. Metal-Clad Cable, Type MC and AC Series (NEC® 2005, Art. 320 and 330). These are metal-clad cables, i.e., an assembly of insulated conductors in a flexible metal enclosure. Type AC cables are manufactured in sizes 14 AWG through 1 AWG copper and 12 AWG through 1 AWG aluminum conductors. Type MC cables are manufactured in sizes 14 AWG and larger copper and 12 AWG and larger aluminum conductors. Optical fibers are permitted in multiconductor-type MC cables. Type AC cables utilize the armor as the path for equipment grounding. Armor of type MC cable is not recognized as the sole grounding path, and the cable includes a separate equipment-grounding conductor. Metal-clad cables may generally be installed where not subject to physical damage, for feeders and branch circuits in exposed or concealed work, with qualifications for wet locations, direct burial in concrete, etc.
The use of Type AC cable is prohibited (1) in motion-picture studios, (2) in theaters and assembly halls, (3) in hazardous locations, (4) where exposed to corrosive fumes or vapors, (5) on cranes or hoists except where flexible connections to motors, etc., are required, (6) in storage-battery rooms, (7) in hoistways or on elevators except (i) between risers and limit switches, interlocks, operating buttons, and similar devices in hoistways and in escalators and moving walkways and (ii) short runs on elevator cars, where free from oil, and if securely fastened in place, or (8) in commercial garages in hazardous locations. 7. Nonmetallic-Sheathed Cable, Types NM, NMC, and NMS (NEC® 2005, Art. 334). These are assemblies of two more insulated conductors (nos. 14 through 2 for copper, nos. 12 through 2 for aluminum) having an outer sheath of moisture-resistant, flame-retardant, nonmetallic material. In addition to the insulated conductors, the cable may have an approved size of insulated or bare conductor for equipment grounding purposes only. The outer covering of NMC cable is flame retardant and corrosion resistant. The use of this type of cable, commonly called Romex is permitted in one- or two-family dwellings, multifamily dwellings, and other structures of type III, IV, and V construction. These cables are not permitted as open runs in dropped or suspended ceilings, as service entrance cable, in commercial garages

15-58

ELECTRICAL ENGINEERING

having hazardous (classified) locations, in theaters, motion picture studios, storage battery rooms, hoistways, escalators, or embedded in concrete. 8. Service Entrance Cable, Types SE and USE (NEC® 2005, Art. 338). These cables, containing one or more individually insulated conductors, are primarily used for electric services. Type SE has a flame-retardant, moisture-resistant covering and is not required to have built-in protection against mechanical abuse. Type USE is recognized for use underground. It has a moisture-resistant covering, but not necessarily a flame-retardant one. Like the SE cable, USE cable is not required to have inherent protection against mechanical abuse. Where all circuit conductors are fully insulated, type USE and SE cables can be used for branch circuits or feeders. The equipment grounding conductor is the only conductor permitted to be bare within service entrance cables used for branch circuits. 9. Underground Feeder and Branch-Circuit Cable, Type UF (NEC® 2005, Art. 340). This cable is made in sizes 14 through 4/0, and the insulated conductors are Types TW, RHW, and others approved for the purpose. As in the NM cable, the UF type may contain an approved size of uninsulated or bare conductor for grounding purposes only. The outer jacket of this cable shall be flame-retardant, moisture-resistant, fungusresistant, corrosion-resistant, and suitable for direct burial in the ground. 10. Other Installation Practices. The NEC® details rules for nonmetallic circuit extensions and underplaster extensions. It also provides detailed rules for installation of electrical wiring in (a) rigid metal conduit (which may be used for all atmospheric conditions and locations with due regard to corrosion protection and choice of fittings), (b) rigid nonmetallic conduit (which is essentially corrosionproof), in electrical metallic tubing (which is lighter-weight than rigid metal conduit), (c) flexible metal conduit, (d) liquidtight flexible metal conduit, (e) surface raceways, ( f ) underfloor raceways, (g) multioutlet assemblies, (h) cellular metal floor raceways, (i) structural raceways, ( j) cellular concrete floor raceways, (k) wireways (sheet-metal troughs with hinged or removable covers), (l) flat, Type FC (NEC® 2005, Art. 322), cable assemblies installed in a surface metal raceway (Type FC cable contains three or four no. 10 special stranded copper wires), (m) busways, and (n) cable-bus. Busways and cable-bus installations are permitted for exposed work only. In all installation work, only approved outlets, switch and junction boxes, fittings, terminal strips, and dead-end caps shall be used, and they are to be used in an approved fashion (see NEC® 2005, Art. 314). Conductors

Table 15.1.22 relates American wire gage (AWG) wire sizes to metric units. Table 15.1.23 lists the allowable ampacities of copper and aluminum conductors. Table 15.1.24 lists various conductor insulation systems approved by the 2005 NEC® for conductors used in interior wiring. Adjustment factors for number of conductors in a raceway or cable are listed in Table 15.1.25. For installations not covered by the tables presented here, review the NEC®. Estimated full-load currents of motors can be taken from Table 15.1.17.

Table 15.1.22  Wire Table for Standard Annealed Copper at 20°C in SI Units

AWG size   No. wires   Diameter, mm   kg/km   m/Ω     Area, mm²
14         1           1.628          18.50   120.7   2.08
12         1           2.053          29.42   191.9   3.31
10         1           2.588          46.77   305.1   5.261
8          1           3.264          74.37   485.2   8.367
6          7           4.115          118.2   771.5   13.30
4          7           5.189          188.0   1227    21.15
2          7           6.544          299.0   1951    33.62
1          19          7.348          377.0   2460    42.41
0          19          8.252          477.6   3102    53.49
00         19          9.266          599.5   3911    67.43
000        19          10.40          755.7   4932    85.01
0000       19          11.68          935.2   6219    107.2
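The mass and resistance columns of Table 15.1.22 follow directly from the conductor area. The short sketch below is illustrative only (not part of the handbook); the constants are standard values for annealed copper at 20°C, and stranding is ignored, so the results are approximate.

# Illustrative sketch: reproduce the kg/km and m/ohm columns of Table 15.1.22
# from the conductor area, for annealed copper at 20 degC.
RHO_CU = 0.017241      # resistivity, ohm*mm^2/m (standard annealed copper, 20 degC)
DENSITY_CU = 8.89      # density, g/cm^3

def copper_wire_properties(area_mm2):
    """Return (kg per km, metres per ohm) for a copper conductor of the given area."""
    kg_per_km = area_mm2 * DENSITY_CU        # grams per metre of length = kg per km
    metres_per_ohm = area_mm2 / RHO_CU       # length of conductor having 1-ohm resistance
    return kg_per_km, metres_per_ohm

# 14 AWG (2.08 mm^2): about 18.5 kg/km and 121 m/ohm, matching the first row above.
print(copper_wire_properties(2.08))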

Protection of Conductors  Unless excepted by the NEC®, conductors other than fixture cords, flexible cables, and fixture wires shall be protected against overcurrent according to their rated ampacities (see Table 15.1.23). A fuse or the overcurrent trip element of a circuit breaker is connected in series with each ungrounded conductor and is normally located at the point where the conductor receives its supply. Standard ratings for fuses and circuit breakers (inverse time) are 15, 20, 25, 30, 40, 45, 50, 60, 70, 80, 90, 100, 125, 150, 175, 200, 225, 250, 300, 350, 400, 450, 500, 600, 700, 800, 1,000, 1,200, 1,600, 2,000, 2,500, 3,000, 4,000, 5,000, and 6,000 amperes. Additional standard sizes for fuses are 1, 3, 6, 10, and 601. Listed nonstandard sizes of fuses and circuit breakers are permitted.

Allowable conductor ampacities (Table 15.1.23) are subject to limitations and adjustments that may limit or reduce the initial conductor ampacity. One guiding limitation is that the conductor shall be selected so as not to exceed the lowest temperature rating of any connected termination, conductor, or device. Unless the equipment (termination) is listed for a higher temperature, the following applies:

• Termination provisions for circuits rated 100 A or less, or marked for 14 AWG through 1 AWG conductors, shall be used with conductors rated for 60°C. Conductors with higher temperature ratings can be used as long as the ampacity is based on the 60°C ampacity.

• Termination provisions for circuits rated over 100 A, or marked for conductors larger than 1 AWG, shall be used with conductors rated for 75°C. Conductors with higher temperature ratings can be used as long as the ampacity is based on the 75°C ampacity.

• The above temperature limitations are most commonly applied at circuit-breaker and motor-starter terminals. Listing temperatures are normally identified on each device.

• For additional information see NEC® Article 110.14(C)(1).

Unless the equipment is listed and marked otherwise, conductor ampacities apply after any required derating for ambient temperature or number of conductors in a raceway. Conductor ampacity tables should be carefully checked, since the basis can include ambient temperatures of 20°C, 30°C, or 40°C depending on the application. Refer to Table 15.1.23 for examples of temperature correction factors. For complete ampacity information refer to NEC® Article 310.

Where the number of current-carrying conductors in a raceway or cable exceeds three, the allowable ampacity of each conductor in Table 15.1.23 must be reduced. Table 15.1.25 establishes derating factors to be applied to current-carrying conductors after any required derating for ambient temperature. The factors in Table 15.1.25 are based on conductors with no load diversity, meaning all conductors are fully loaded. For conductors with load diversity, adjustment factors for more than three current-carrying conductors in a raceway or cable are found in NEC® 2005, Annex B, Table B.310.11. For information on ampacity derating of cables in cable trays, refer to NEC® 2005, Article 392.

After the final ampacity is derived, the basic rule for selecting overcurrent protection is that all conductors, other than flexible cords, shall be protected against overcurrent in accordance with their ampacities. For devices rated 800 A or less, the next higher standard overcurrent device rating (above the ampacity of the conductors being protected) is permitted to be used. Refer to NEC® Article 240.4(A) for additional information and exceptions.
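As a minimal sketch of the selection procedure just described (illustrative only; the ampacity and adjustment factors come from Tables 15.1.23 and 15.1.25, while the function names and the worked numbers are assumed for this example), the final ampacity is the table value multiplied by the ambient-temperature correction and the more-than-three-conductor adjustment, after which the next higher standard device rating (800 A or less) may be chosen.

# Illustrative sketch of ampacity derating and overcurrent-device selection.
# Standard device ratings through 800 A, as listed above.
STANDARD_RATINGS = [15, 20, 25, 30, 40, 45, 50, 60, 70, 80, 90, 100, 125, 150,
                    175, 200, 225, 250, 300, 350, 400, 450, 500, 600, 700, 800]

def derated_ampacity(table_ampacity, ambient_factor, bundle_factor):
    # Table 15.1.23 value x ambient correction x Table 15.1.25 adjustment
    return table_ampacity * ambient_factor * bundle_factor

def next_standard_rating(ampacity):
    # For devices rated 800 A or less, the next higher standard rating is permitted.
    for rating in STANDARD_RATINGS:
        if rating >= ampacity:
            return rating
    raise ValueError("above 800 A the device rating may not exceed the conductor ampacity")

# Example: 3/0 AWG copper at 75 degC (200 A), 40 degC ambient (factor 0.88),
# seven current-carrying conductors in one raceway (factor 0.70 from Table 15.1.25).
amps = derated_ampacity(200, 0.88, 0.70)        # 123.2 A
print(round(amps, 1), next_standard_rating(amps))   # 123.2 125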
The number and size of conductors in raceways must not be more than will permit dissipation of heat and ready installation or removal of conductors without damage. Lookup tables for conduit fill by conductor insulation are often used. Because of the large variety of available insulation types, raceway systems, and measurement units (U.S. versus metric), the NEC® approaches fill by several methods. If all of the conductors are the same size and insulation type, conduit fill can be determined by lookup tables based on the type of raceway; refer to NEC® 2005, Annex-C, Tables C1 through C12A. If the conductors are mixed sizes, then the process has two steps: 1. Determine the total area occupied by the conductors (in2 or mm2), based on the wire insulation type and size, using NEC®, Chap. 9, Table 5 or 5A.


Table 15.1.23  Allowable Ampacities of Insulated Conductors Rated 0 through 2,000 Volts, 60°C through 90°C (140°F through 194°F), Not More Than Three Current-Carrying Conductors in Raceway, Cable, or Earth (Directly Buried), Based on Ambient Temperature of 30°C (86°F)

Temperature rating of conductor:
  Copper, 60°C (140°F): Types TW, UF
  Copper, 75°C (167°F): Types RHW, THHW, THW, THWN, XHHW, USE, ZW
  Copper, 90°C (194°F): Types TBS, SA, SIS, FEP, FEPB, MI, RHH, RHW-2, THHN, THHW, THW-2, THWN-2, USE-2, XHH, XHHW, XHHW-2, ZW-2
  Aluminum or copper-clad aluminum, 60°C (140°F): Types TW, UF
  Aluminum or copper-clad aluminum, 75°C (167°F): Types RHW, THHW, THW, THWN, XHHW, USE
  Aluminum or copper-clad aluminum, 90°C (194°F): Types TBS, SA, SIS, THHN, THHW, THW-2, THWN-2, RHH, RHW-2, USE-2, XHH, XHHW, XHHW-2, ZW-2

Size AWG        Copper                     Aluminum or copper-clad aluminum    Size AWG
or kcmil     60°C    75°C    90°C          60°C    75°C    90°C                or kcmil
18            —       —       14            —       —       —                   —
16            —       —       18            —       —       —                   —
14*           20      20      25            —       —       —                   —
12*           25      25      30            20      20      25                  12*
10*           30      35      40            25      30      35                  10*
8             40      50      55            30      40      45                  8
6             55      65      75            40      50      60                  6
4             70      85      95            55      65      75                  4
3             85      100     110           65      75      85                  3
2             95      115     130           75      90      100                 2
1             110     130     150           85      100     115                 1
1/0           125     150     170           100     120     135                 1/0
2/0           145     175     195           115     135     150                 2/0
3/0           165     200     225           130     155     175                 3/0
4/0           195     230     260           150     180     205                 4/0
250           215     255     290           170     205     230                 250
300           240     285     320           190     230     255                 300
350           260     310     350           210     250     280                 350
400           280     335     380           225     270     305                 400
500           320     380     430           260     310     350                 500
600           355     420     475           285     340     385                 600
700           385     460     520           310     375     420                 700
750           400     475     535           320     385     435                 750
800           410     490     555           330     395     450                 800
900           435     520     585           355     425     480                 900
1,000         455     545     615           375     445     500                 1,000
1,250         495     590     665           405     485     545                 1,250
1,500         520     625     705           435     520     585                 1,500
1,750         545     650     735           455     545     615                 1,750
2,000         560     665     750           470     560     630                 2,000

Correction factors: for ambient temperatures other than 30°C (86°F), multiply the allowable ampacities shown above by the appropriate factor shown below.

Ambient        Copper                     Aluminum or copper-clad aluminum    Ambient
temp., °C    60°C    75°C    90°C          60°C    75°C    90°C                temp., °F
21–25         1.08    1.05    1.04          1.08    1.05    1.04                70–77
26–30         1.00    1.00    1.00          1.00    1.00    1.00                78–86
31–35         0.91    0.94    0.96          0.91    0.94    0.96                87–95
36–40         0.82    0.88    0.91          0.82    0.88    0.91                96–104
41–45         0.71    0.82    0.87          0.71    0.82    0.87                105–113
46–50         0.58    0.75    0.82          0.58    0.75    0.82                114–122
51–55         0.41    0.67    0.76          0.41    0.67    0.76                123–131
56–60         —       0.58    0.71          —       0.58    0.71                132–140
61–70         —       0.33    0.58          —       0.33    0.58                141–158
71–80         —       —       0.41          —       —       0.41                159–176

* Overcurrent protection shall not exceed 15 A for 14 AWG, 20 A for 12 AWG, and 30 A for 10 AWG copper, or 15 A for 12 AWG and 25 A for 10 AWG aluminum or copper-clad aluminum, after any correction factors for ambient temperature and numbers of conductors in a raceway have been applied.
NOTE: This table is but one example of many provided for various installation methods. Refer to NEC® Art. 310 for additional installations that include conductors in air, underground, and others.
SOURCE: NEC® Table 310.16. Reprinted with permission from NFPA 70-2002, National Electrical Code®, Copyright © 2001, National Fire Protection Association, Quincy, MA 02269. This printed material is not the complete and official position of the NFPA on the referenced subject, which is represented only by the standard in its entirety.


Table 15.1.24  Conductor Type and Application

Type letter     Insulation, trade name (see NEC®, Art. 310 for complete information)     Outer covering (note a)     Environment

Max operating temperature = 60°C (140°F): TW, TF, TFF, MTW, UF

Moisture-resistant thermoplastic Thermoplastic-covered, solid or 7-strand Thermoplastic-covered, flexible stranding Moisture-, heat-, and oil-resistant thermoplastic machinetool wiring (NFPA Stand. 79, NEC 1975, Art. 670) Moisture-resistant, underground feeder

RHW THW THWN XHHW RFH-1 and 2 FFH-1 and 2 UF USE ZW

Moisture-resistant thermoset Moisture- and heat-resistant thermoplastic Moisture- and heat-resistant thermoplastic Moisture-resistant thermoset Heat-resistant rubber-covered solid or 7-strand Heat-resistant rubber-covered flexible stranding Moisture-resistant and heat-resistant Heat- and moisture-resistant Modified ethylene tetrafluoroethylene

MI

Mineral-insulated (metal-sheathed)

RHH THHN THW XHHW FEP FEPB TFN TFFN MTW PFA SA

Heat-resistant thermoset Heat-resistant thermoplastic Moisture- and heat-resistant thermoplastic Moisture-resistant thermoset Fluorinated ethylene propylene Fluorinated ethylene propylene Heat-resistant thermoplastic covered, solid or 7-strand Heat-resistant thermoplastic flexible stranding Moisture-, heat-, and oil-resistant thermoplastic machine-tool wiring (NFPA Stand. 79, NEC 1975, Art. 670) Perfluoralkoxy Silicone

Z, ZW

Modified ethylene tetrafluoroethylene

FEP, FEPB

Fluorinated ethylene propylene Special applications Perfluoralkoxy Fluorinated ethylene propylene Perfluoralkoxy Silicone rubber, solid or 7-strand

Dry and wet

Wet

None None None None or nylon

Dry and wet

None

Dry and wet Dry and wet Dry and wet Wet

Dry and wet Dry and wet Wet

1,2 None Nylon None None None None 4 None

Dry and wet

Copper or alloy steel

Dry and damp Dry and damp

Dry

1,2 Nylon None None None 3 Nylon Nylon None or nylon

Dry and damp Dry

None Glass

Dry

None

Dry

None 3 None None or glass braid None Nonmetallic

c,d c,d

Max operating temperature = 75°C (167°F)

b–d b–d

Max operating temperature = 90°C (194°F)

Max operating temperature = 90°C (194°F)

f

Dry Dry Dry c,d

Max operating temperature = 150°C (302°F)

Max operating temperature = 200°C (392°F)

PFA PF, PGF PFA SF-2

Dry c,d

Dry c,d

Max operating temperature = 250°C (482°F): MI, TFE

PFAH PTF a

Mineral-insulated (metal-sheathed), for special applications Extruded polytetrafluoroethylene, only for leads within apparatus or within raceways connected to apparatus, or as open wiring (nickel or nickel-coated copper only) Perfluoroalkoxy (special application) (nickel or nickel-coated copper) Extruded polytetrafluoroethylene, solid or 7-strand (nickel or nickel-coated copper only)

Dry and wet Dry

Copper or alloy steel None

Dry

None

c,d

None

1: Moisture-resistant, flame-retardant nonmetallic; 2: outer covering not required when thermoset insulation has been specifically approved for the purpose; 3: no. 14-8 glass braid, no. 6-2 asbestos braid; 4: moisture-resistant nonmetallic. b Limited to 300 V. c No. 18 and no. 16 conductor for remote controls, low-energy power, low-voltage power, and signal circuits; NEC® 2005 Sec. 725.27, Sec. 760.27. d Fixture wire no. 18-16. e For over 2,000 V, the insulation shall be ozone-resistant. f Special applications within electric discharge lighting equipment. Limited to 1,000 V open-circuit volts or less. SOURCE: NEC® 2005 Tables 310.13 and 402.3. Reprinted with permission from NFPA 70-1993, National Electrical Code®, Copyright © 1992, National Fire Protection Association, Quincy, Massachusetts 02269. This reprinted material is not the complete and official position of the NFPA on the referenced subject, which is represented only by the standard in its entirety. NOTE: Not all types of approved wires are listed.

Table 15.1.25  Adjustment Factors for More Than Three Current-Carrying Conductors in a Raceway or Cable

Number of current-       Percent of values in Table 15.1.23 as adjusted
carrying conductors      for ambient temperature if necessary
4–6                      80
7–9                      70
10–20                    50
21–30                    45
31–40                    40
41 and above             35

SOURCE: NEC® Table 310.15 (A)(2)(a). Reprinted with permission from NFPA 70-2005. National Electric Code®, Copyright © 2001, National Fire Protection Association, Quincy, MA. This reprinted material is not the complete and official position of the NFPA on the referenced subject, which is represented only by the standard in its entirety.

2. Then, using the appropriate table for the raceway system from NEC® 2005, Chap. 9, Table 4, find a raceway equal to or greater than the maximum allowable fill for the number of conductors.

Generally, insulated conductors in premises wiring systems and circuits are required to have a grounded conductor (commonly referred to as the neutral) identified by a continuous white or gray finish. Where the wiring method does not include a white-colored conductor, a conductor (other than green) can be identified by three continuous white stripes along its length, or, for conductor sizes larger than 6 AWG, white tape or paint can be applied at terminations. Refer to NEC® 2005, Art. 200.6, for additional information. The insulation color of ungrounded circuit conductors is not restricted other than that it be different from a grounded (white) or grounding conductor (green). A common identification practice for circuit conductors in 208Y/120-V, three-phase, three-wire systems is black, red, and blue, and for three-phase, four-wire systems is black, red, blue, and white. For 480Y/277-V, three-phase, three-wire systems, brown, orange, and yellow (BOY) is widely used, and for three-phase, four-wire systems, brown, orange, yellow, and white is common practice. Where more than one nominal voltage system exists in a building (or premises), unique identification either by conductor color or by tape is required to be applied at each termination.

Switching Arrangements  Small quick-break switches must be set in or on an approved box or fitting and may be of the push, tumbler, or rotary type. The following types of switches are used to control lighting circuits: (1) single-pole; (2) double-pole; (3) three-point or three-way; (4) four-way, in combination with three-way switches, to control lights from three or more stations.

In all metallic protecting systems, such as conduit, armored cable, or metal raceways, joints and splices in conductors must be made only in junction boxes or other proper fittings; therefore, these fittings can be located only in accessible places and never concealed in partitions. Splices or joints in the wire must never be in the conduit piping, raceway, or metallic tubing itself, for the splices may become a source of trouble as a result of corrosion or grounding if water should enter the conduit. All conductors of an ac system must be placed in the same metallic casing so that their resultant magnetic field is nearly zero. If this is not done, eddy currents are set up, causing heating and excessive loss. With single conductors in a casing, or multiple conductors of the same phase, an excessive reactance drop may result.

Services  A service disconnecting means is required to disconnect all ungrounded conductors in buildings or other structures from the service entrance conductors. The disconnecting means must be installed at a readily accessible location, either outside a building or structure or inside, nearest the point of entrance of the service conductors. The length of service entrance conductors should be kept to a minimum inside buildings. Some local jurisdictions have ordinances that limit the length of service entrance conductors or the location of service equipment inside a building. The service drop or entrance conductors must be insulated or covered, except that a grounded conductor is permitted to be bare when part of a service cable, underground lateral, and certain other


installations. (Refer to NEC® 2005, Art. 230, for additional information on electrical service requirement.) The minimum size of service entrance conductors (before any adjustment or correction factors) shall not be less than the sum of the noncontinuous load plus 125 percent of the continuous load. The service disconnecting means shall have a rating of not less than the load to be carried. Calculating the service size requires accounting for (1) all fixed loads (motors, appliances, heating devices, etc.), adjusted for any applicable demand factors (ratio of maximum demand to connected load); (2) general lighting load (lights and general-use receptacles) based on type of building (dwelling, hospital, hotel, apartments, warehouse, etc.), adjusted for applicable demand factor; and (3) additional capacity for the future, if applicable. (Refer to NEC® 2005, Art. 220, for additional information and NEC® 2005, Annex D, for examples.) The computed load of a feeder shall not be less than the sum of all the loads on branch circuits after any applicable demand factors have been applied. In no case shall the service rating be less than 15 A for loads of a single branch circuit, 30 A for loads consisting of two 2-wire circuits, 100 A for a one-family dwelling. For all other installations the service disconnecting means shall be rated for a minimum of 60 A. Unless specifically excepted, alternating-current circuits and systems operating from 50 to 1,000 V supplying premises wiring and premises wiring systems are required to be grounded. Also, two-wire direct-current systems operating from 50 to 300 V and all three-wire direct-current systems supplying premises wiring are required to be grounded. Grounding of electrical systems to earth is done in a manner that will limit the voltage caused by lightning, surges, or unintentional contact with higher voltage sources and acts to stabilize the voltage to earth during normal operation. System grounding is normally accomplished by connecting a grounding electrode conductor from a neutral terminal in the service equipment to a grounding electrode. The types of grounding electrodes permitted include driven ground rods, water pipe, metal frames of buildings, ground rings, and others. Refer to NEC® 2005, Art. 250, Part III, for additional information on application and conductor sizing. Electrical equipment, wiring, and other electrically conductive materials likely to become energized must be installed in a manner that will provide an effective ground path capable of carrying the maximum ground-fault current likely in any part of the system where a ground fault may occur to the electrical supply. All non-current-carrying conductive materials enclosing conductors (raceways, wireways, shields, etc.), enclosing equipment, or forming part of electrical equipment (motor frames, etc) are required to be bonded (connected together) in a manner that establishes an effective path for ground-fault current. Equipment grounding conductors are used to provide the grounding of non-current-carrying metal parts likely to become energized for fixed, stationary, or portable equipment served by fixed wiring or cord-andplug connected. Equipment grounding conductors can be a bare or insulated (green or green/yellow stripe) wire where properly sized, rigid metal or intermediate conduit, copper sheath of MI cables, and cable trays. 
Under certain conditions electrical metallic tubing, Type MC cable, wireways/auxiliary gutters, flexible metal conduit, and liquidtight flexible metal conduit can be used as an equipment-grounding conductor. See NEC® 2005, Art. 250.118, for additional information. Ground-Fault Protection  There are several types of ground-fault protection used for protection of either personnel or equipment. Ground-fault circuit interrupters (GFCIs) have been required by the NEC® since 1973 to provide personnel protection in certain areas of dwellings, manufactured housing, hotels, swimming pools, health-care facilities, recreational vehicles, and other locations to prevent electrocution. The GFCI monitors the current in the two conductors of a circuit (120 or 240 V, single phase) and automatically trips the circuit breaker on a differential current in excess of 4 to 6 mA (nominally 5 mA). During the time to trip a fault, a person can be exposed to a shock sensation that can cause secondary accidents such as falls. Ground-fault equipment protection (GFEP) is required for electric heating circuits as defined by NEC® 2005, Arts. 426 and 427. The requirement can be met by use of a ground-fault equipment protection circuit


interrupter (GFEPCI) with a nominal trip value of 30 mA or a heating controller or ground-fault relay with adjustable trip settings. Ground-fault protection of equipment has been required since 1971 in solidly grounded wye systems of more than 150 V to ground but not exceeding 600 V phase-to-phase at each service disconnect rated 1,000 A or more. Adjustable current-sensing relays are commonly used for equipment protection; they monitor either all phase conductors or the main bonding jumper. When the stray current exceeds the setting of the ground-fault relay, a shunt-trip breaker is operated to open all ungrounded circuit conductors. Arc-fault interrupter protection is designed to detect and interrupt arcing faults. Arc-fault circuit interrupter (AFCI) circuit breakers are required in 15- and 20-A, 125-V circuits that supply outlets (receptacles, lighting, etc.) in dwelling-unit bedrooms to provide protection from arcing faults occurring in extension cords or cord-connected lights or appliances.

RESISTOR MATERIALS

For use in rheostats, electric furnaces, ovens, heaters, and many electrical appliances, a resistor material with high melting point and high resistivity which does not disintegrate or corrode at high temperatures is necessary. These requirements are met by the nickel-chromium and nickel-chromium-iron alloys. For electrical instruments and measuring apparatus, the resistor material should have high resistivity, low temperature coefficient, and, for many uses, low thermoelectric power against copper. The properties of resistor materials are given in Table 15.1.26. Most of these materials are available in ribbon as well as in wire form. Cast-iron and steel wire are efficient and economical resistor materials for many uses, such as power-absorbing rheostats and motor starters and controllers. (See also Table 15.1.3.)

Advance has a low temperature coefficient and is useful in many types of measuring instruments and precision equipment. Because of its high thermoelectric power against copper, it is valuable for thermoelements and pyrometers. It is noncorrosive and is used to a large extent in industrial and radio rheostats. Hytemco is a nickel-iron alloy characterized by a high temperature coefficient and is used advantageously where self-regulation is required, as in immersion heaters and heater pads. Magno is a manganese-nickel alloy used in the manufacture of incandescent lamps and radio tubes. Manganin is a copper-manganese-nickel alloy which, because of its very low temperature coefficient and its low thermal emf with respect to copper, is very valuable for high-precision electrical measuring apparatus. It is used for the resistance units in bridges, for shunts, multipliers, and similar measuring devices. Nichrome V is a nickel-chromium alloy free from iron, is noncorrosive, nonmagnetic, withstands high temperatures, and has high resistivity. It is recommended as material for heating elements in electric furnaces, hot-water heaters, ranges, radiant heaters, and high-grade electrical appliances. Kanthal is used for heating applications where higher operating temperatures are required than for Nichrome. Mechanically, it is less workable than Nichrome. Platinum is used in specialized heating applications where very high temperatures are required. Tungsten may be used in very high temperature ovens with an inert atmosphere. Pure nickel is used to satisfy the high requirements in the fabrication of radio tubes, such as the elimination of all gases and impurities in the metal parts. It also has other uses, such as in incandescent lamps, for combustion boats, laboratory accessories, and resistance thermometers. Carbon withstands high temperatures and has high resistance; its temperature coefficient is negative; it will safely carry about 125 A/in². Amorphous carbon has a resistivity of between 3,800 and 4,100 µΩ·cm, retort carbon about 720 µΩ·cm, and graphite about 812 µΩ·cm. The properties of any particular kind of carbon depend on the temperature at which it was fired. Carbon for rheostats may best be used in the form of compression rheostats. Silicon carbide is used to manufacture heating rods that will safely operate at 1,650°C (3,000°F) surface temperature. It has a negative coefficient of resistance up to 650°C, after which it is positive. It must be mechanically protected because of inherent brittleness.

Table 15.1.26  Properties of Metals, Alloys, and Resistor Materials

Material             Composition                              Sp gr    Microhm·cm   Ohms per cir     Temp coef of        Temp        Max safe working   Approx melting
                                                                       at 20°C      mil-ft at 20°C   resistance per °C   range, °C   temp, °C           point, °C
Advance              Cu 0.55; Ni 0.45                         8.9      48.4         294              ±0.00002            20–100      500                1210
Comet                Ni 0.30; Cr 0.05; Fe 0.65                8.15     95           570              0.00088             20–500      600                1480
Bronze, commercial   Cu; Zn                                   8.7      4.2          25               0.0020              0–100       —                  1040
Hytemco-Balco        Ni 0.50; Fe 0.50                         8.46     20           120              0.0045              20–100      600                1425
Kanthal A            Al 0.055; Cr 0.22; Co 0.055; Fe 0.72     7.1      145          870              ±0.00002            0–500       1330               1510
Magno                Ni 0.955; Mn 0.045                       8.75     20           120              0.0036              20–100      400                1435
Manganin             Cu 0.84; Mn 0.12; Ni 0.04                8.19     48.2         290              ±0.000015           15–35       100                1020
Monel metal          Ni 0.67; Cu 0.28                         8.9      42.6         256              0.0001              0–100       425                1350
Nichrome             Ni 0.60; Fe 0.25; Cr 0.15                8.247    112          675              0.00017             20–100      930                1350
Nichrome V           Ni 0.80; Cr 0.20                         8.412    108          650              0.00013             20–100      1100               1400
Nickel, pure         Ni 0.99                                  8.9      10           60               0.0050              0–100       400                1450
Platinum             Pt                                       21.45    10.616       63.80            0.003               0–100       1200               1773
Silver               Ag                                       10.5     1.622        9.755            0.00361             0–100       650                960
Tungsten             W                                        19.3     5.523        33.22            0.0045              —           *                  3410

* Tungsten subject to rapid oxidation in air above 150°C.

MAGNETS

A permanent magnet is one that retains a considerable amount of magnetism indefinitely. Permanent magnets are used in electrical instruments, telephone receivers, loudspeakers, magnetos, tachometers, magnetic chucks, motors, and for many purposes where a constant magnetic field or a constant source of magnetism is desired. The magnetic material should have high retentivity, a high remanence, and a high coercive force (see Fig. 15.1.8). These properties are usually found with hardened steel and its alloys and also in ceramic permanent-magnet materials. Since permanent magnets must operate on the molecular mmf imparted to them when magnetized, they must necessarily operate on the portion CDO of the hysteresis loop (see Fig. 15.1.8). The area CDO is proportional to the stored energy within the magnet and is a criterion of its usefulness as a permanent-magnetic material. In the left half of Fig. 15.1.97 are given the B-H characteristics of several permanent-magnetic materials; these include 5 to 6 percent tungsten steel (curve 1); 3½ percent chrome magnet steel (curve 2); cobalt magnet steel, containing 16 to 36 percent cobalt and 5 to 9 percent chromium and in some alloys tungsten (curve 3); and the carbon-free aluminum-nickel-cobalt-steel alloys called Alnico. There are many grades of Alnico; the


characteristics of three of them are shown by curves 4, 5, 6. Their composition is as follows:

Curve    Alnico no.    Composition, percent
                       Al      Ni      Co      Cu      Ti      Fe
4        5             8       14      24      3       ...     51
5        6             8       14      24      3       1.25    49.75
6        12            6       18      35      ...     8.0     33

All the Alnicos can be made by the sand or the precision-casting (lost-wax) process, but the most satisfactory method is by the sintering process. If the alloys are held in a magnetic field during heat-treatment, a magnet grain is established and the magnetic properties in the direction of the field are greatly increased. The alloys are hard, can be formed only by casting or sintering, and cannot be machined except by grinding. The curves in the right half of Fig. 15.1.97 are “external energy” curves and give the product of B and H. The optimum point of operation is at the point of maximum energy, as is indicated at A1 on curve 5. Considering curve 5, if the magnetic circuit remained closed, the magnet would operate at point B. To utilize the flux, an air gap must be introduced. The air gap acts as a demagnetizing force, H1 (= B1A1), and the magnet operates at point A1 on the HB curve. The line OA1 is called the air-gap line, and its slope is given by tan θ1 = B1/H1, where H1 = Bglg/lm and B1/Bg = Ag/Am, and where Bg = flux density in gap, T; lg = length of gap, m; lm = length of magnet, m; and Ag, Am = areas of the gap and magnet, m². If the air gap is lengthened, the magnet will operate at A2, corresponding to a lesser flux density B3, and the new air-gap line is OA2. If the gap is now closed to its original value, the magnet will not return to operation at point A1 but will operate at some point C on the line OA1. If the air gap is varied between the two foregoing values, the magnet will operate along the minor hysteresis loop A2C. Return to point A1 can be accomplished only by remagnetizing and coming back down the curve from B to A1. Alnico magnets corresponding to curves 4 and 5 are best adapted to operation with short air gaps, since the introduction of a long air gap will demagnetize the magnet materially. On the other hand, a magnet


with a long air gap will operate most satisfactorily on curve 6 on account of the high coercive force H2. With change in the length of the air gap, the operation will be essentially along that curve and the magnet will lose little of its original magnetization. There are several other grades of Alnico with characteristics between curves 4, 5, and 6. Ceramic PM materials have a very large coercive force. The steels for permanent magnets are cut in strips, heated to a red-hot temperature, and forged into shape, usually in a “bulldozer.” If they are to be machined, they are cooled in mica dust to prevent air hardening. They are then ground, tumbled, and tempered. Alnico and ceramic types are cast and then finish-ground. Permanent magnets are magnetized either by placing them over a bus bar carrying a large direct current, by placing them across the poles of a powerful electromagnet, or by an ampere-turn pulse. Unless permanent magnets are subjected to artificial aging, they gradually weaken until after a long period they become stabilized, usually at from 85 to 90 percent of their initial strength. With magnets for electrical instruments, where a constant field strength is imperative, artificial aging is accomplished by mechanical vibration or by immersion in oil at 250°F for a period of a few hours.

In an electromagnet the magnetic field is produced by an electric current. The core is usually made of soft iron or mild steel because, the permeability being higher, a stronger magnetic field may be obtained. Also, since the retentivity is low, there is little trouble due to the sticking of armatures when the circuit is opened. Electromagnets may have the form of simple solenoids, iron-clad solenoids, plunger electromagnets, electromagnets with external armatures, and lifting magnets, which are circular in form with a flat holding surface. A solenoid is a winding of insulated conductor and is wound helically; the direction of winding may be either right or left. A portative electromagnet is one designed only for holding material brought in contact with it. A tractive electromagnet is one designed to exert a force on the load through some distance and thus do work. The range of an electromagnet is the distance through which the plunger will perform work when the winding is energized. For long range of operation, the plunger type of tractive magnet is best suited, for the length of core is governed practically by the range of action desired, and the area of the core is

Fig. 15.1.97 Characteristics of permanent magnet materials.


Table 15.1.27  Maximum Pull per Square Inch of Core for Solenoids with Open Magnetic Circuit

Length of     Length of       Core area    Total ampere-    Max pull
coil l, in    plunger, in     A, in²       turns nI         P, psi      1,000 × C
6             Long            1.0          15,900           22.4         9.0
9             Long            1.0          11,330           11.5         9.1
9             Long            1.0          14,200           14.6         9.2
10            10              2.76         40,000           40.2        10.0
10            10              2.76         60,000           61.6        10.3
10            10              2.76         80,000           80.8        10.1
12            Long            1.0          11,200            8.75        9.4
12            Long            1.0          20,500           16.75        9.8
18            36              1.0          18,200            9.8         9.7
18            36              1.0          41,000           22.5         9.8
18            18              1.0          18,200            9.8         9.7
18            18              1.0          41,000           22.5         9.8

SOURCE: From data by Underhill, Elec. World, 45, 1906, pp. 796, 881.

determined by the pull. Solenoid and plunger is a solenoid provided with a movable iron rod or bar called a plunger. When the coil is energized, the iron rod becomes magnetized, and the mutual action of the field in the solenoid on the poles created on the plunger causes the plunger to move within the solenoid. This force becomes zero only when the magnetic centers of the plunger and solenoid coincide. If the load is attached to the plunger, work will be done until the force to be overcome is equal to the force that the solenoid exerts on the plunger. When the iron of the plunger is not saturated, the strength of magnetic field in the solenoid and the induced poles are both proportional to the exciting current, so that the pull varies as the current squared. When the plunger becomes highly saturated, the pull varies almost directly with the current. The maximum uniform pull occurs when the end of the plunger is at the center of the solenoid and is equal to

F = CAnI/l   lb   (15.1.134)

where A = cross-sectional area of plunger, in²; n = number of turns; I = current, A; l = length of the solenoid, in; and C = pull, (lb/in²)/(A-turn/in). C depends on the proportions of the coil, the degree of saturation, the length, and the physical and chemical purity of the plunger. Table 15.1.27 gives values of C for several different solenoids. Curve 1, Fig. 15.1.98, shows the characteristic pull of an open-magnetic-circuit solenoid, 12 in long, having 10,000 A-turns or 833 A-turns/in.
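As a numerical check on Eq. (15.1.134), take the 12-in coil with a long plunger in Table 15.1.27: A = 1.0 in², nI = 11,200 A-turns, and 1,000C = 9.4. Then F = 0.0094 × 1.0 × 11,200/12 ≈ 8.8 lb on the 1-in² plunger, essentially the tabulated maximum pull of 8.75 psi.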

Fig. 15.1.99 Solenoid with stop.

Fig. 15.1.98  Pull on solenoid with plunger. (1) Coil and plunger; (2) coil and plunger with stop; (3) iron-clad coil and plunger; (4) and (5) same as (3) with different lengths of stop.

When a strong pull is desired at the end of the stroke, a stop may be used as shown in Fig. 15.1.99. Curve 2, Fig. 15.1.98, shows the pull obtained by adding a stop to the plunger. It will be noted that, except when the end of the plunger is near the stop, the stop adds little to the solenoid pull. The pull is made up of two components: one due to the attraction between plunger and winding, the other to the attraction between plunger and stop. The equation for the pull is

P = AIn[(In/(la²C1)) + (C/l)]   lb   (15.1.135)

where A = area of the core, in²; n = number of turns; la = length of gap between core and stop, in; and C, C1 = constants. At the beginning of the stroke the second member of the equation is predominant, and at the end of the stroke the first member represents practically the entire pull. Approximate values of C and C1 are C1 = 2,660 (for l greater than 10d), C = 0.0096, where d is the diameter of the plunger, in. In SI units

P = 1.7512AnI[(2.54nI/(la²C1)) + (C/l)]   N   (15.1.135a)

where A is in cm², l and la are in cm, and the pull P is in N. All other quantities are unchanged.

The range of uniform pull can be extended by the use of conical ends of stop and plunger, as shown in Fig. 15.1.100. A stronger magnet mechanically can be obtained by using an iron-clad solenoid, Fig. 15.1.101, in which an iron return path is provided for the flux. Except for low flux densities and short air gaps, the dimensions of the iron return path are of no practical importance, and the fact that an iron return path is used does not affect the pull curve except at short air gaps. This is illustrated in Fig. 15.1.98, where curves 3, 4, and 5 are typical pull curves for this same solenoid when it is made iron-clad, each curve corresponding to a different position of the stop.

Fig. 15.1.100  Conical plunger and stop.

Fig. 15.1.101  Iron-clad solenoid.

Mechanical jar at the end of the stroke may be prevented by leaving the end of the solenoid open. The plunger then comes to equilibrium when its middle is at the middle of the winding, thus providing a magnetic cushion effect. Electromagnets with external armatures are best adapted for short-range work, and the best type is the horseshoe magnet. The pull for short-range magnets is expressed by the equation

F = B²A/72,134,000   lb   (15.1.136)

where B = flux density, Mx/in², and A = area of the core, in². In SI units

F = 397,840B²A   N   (15.1.136a)

where the flux density B is in Wb/m², A = area, m², and F = force, N. A greater holding power is obtained if the surfaces of the armature and core are not machined to an absolutely smooth contact surface. If the surface is slightly irregular, the area of contact A is reduced but the flux density B is increased approximately in proportion (if the iron is


being operated below saturation) and the pull is increased since it varies as the square of the density B. Nonmagnetic stops should be used if it is desired that the armature may be released readily when the current is interrupted. Lifting magnets are of the portative type in that their function is merely to hold the load. The actual lifting is performed by the hoisting apparatus. The magnet is almost toroidal in shape. The coil shield is of manganese steel which is very hard and thus resists wear and is practically nonmagnetic. The holding power is given by Eq. (15.1.136), where A  area of holding surface, in2. It is difficult to calculate accurately the holding force of a lifting magnet for it depends on the magnetic characteristics of the load, the area of contact, and the manner in which the load is applied. Rapid action in a magnet can be obtained by reducing the time constant of the winding and by subdividing the metal parts to reduce induced currents which have a demagnetizing effect when the circuit is closed. The movement of the plunger through the winding causes the winding and its bobbin to be cut by a magnetic field; if the bobbin is of metal and not slotted longitudinally, it is a short-circuited turn linked by a changing magnetic field and hence currents are induced in it. These currents oppose the flux and hence reduce the pull during the transient period. They also cause some heating. Where it is found impossible to reduce the time constant sufficiently, an electromagnet designed for a voltage much lower than normal is often used. A resistor is connected in series which is short-circuited during the stroke of the plunger. At the completion of the stroke the plunger automatically opens the short circuit, reducing the current to a value which will not overheat the magnet under continuous operation. The extremely short time of overload produces very rapid action but does not injure the winding. The solenoids on many automatic motor-starting panels are designed in this manner. When slow action is desired, it can be obtained by using solid cores and yoke and by using a heavy metallic spool or bobbin for the winding. A separate winding short-circuited on itself is also used to some extent. Sparking at switch terminals may be reduced or eliminated by neutralizing the inductance of the winding. This is accomplished by winding a separate short-circuited coil with its wires parallel to those of the active winding. (This method can be used with dc magnets only.) This is not economical, since one-half the winding space is wasted. By connecting a capacitor across the switch terminals, the energy of the inductive discharge on opening the circuit may be absorbed. For the purpose of neutralizing the inductive discharge and causing a quick release, a small reverse current may be sent through the coil winding automatically on opening the circuit. Sleeves of tin, aluminum, or copper foil placed over the various layers of the winding absorb energy when the circuit is broken and reduce the energy dissipated at the switch terminals. This scheme can be used for dc magnets only. Sticking of the parts of the magnetic circuit due to residual magnetism may be prevented by the use of nonmagnetic stops. In the case of lifting magnets subjected to rough usage and hard blows (as in a steel works), these stops usually consist of plates of manganese steel, which are extremely hard and nonmagnetic. 
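As an illustration of Eq. (15.1.136) (the flux density and contact area here are assumed values for the example, not handbook data), a holding face worked at B = 100,000 Mx/in² over a contact area A = 4 in² gives F = (100,000)² × 4/72,134,000 ≈ 555 lb; the SI form, Eq. (15.1.136a), with B ≈ 1.55 Wb/m² and A ≈ 0.00258 m², gives about 2,470 N, the same force.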
AC Tractive Magnets Because of the iron losses due to eddy currents, the magnetic circuits of ac electromagnets should be composed of laminated iron or steel. The magnetic circuit of large magnets is usually built up of thin sheets of sheet metal held together by means of suitable clamps. Small cores of circular cross section usually consist of a bundle of soft iron wires. Since the iron losses increase with the flux density, it is not advisable to operate at as high a density as with direct current. The current instead of being limited by the resistance of the winding is now determined almost entirely by the inductive reactance as the resistance is small. With the removal of the load the current rises to high values. The pull of ac magnets is nearly constant irrespective of the length of air gap. In a single-phase magnet the pull varies from zero to a maximum and back to zero twice every cycle, which may cause considerable chattering of the armature against the stop. This may be prevented by the use of a spring or, in the case of a solenoid coil, by allowing the plunger to seek its position of equilibrium in the coil. Chattering may also be prevented by the use of a short-circuited winding or shading coil around

15-65

one tip of the pole piece or by the use of polyphase. In a two-phase magnet the pull is constant and equal to the maximum instantaneous pull produced by one phase so long as the voltage is a sine function. In a three-phase magnet under the same conditions the pull is constant and equal to 1.5 times the maximum instantaneous pull of one phase. Should the load become greater than the minimum instantaneous pull, there will be chattering as in a single-phase magnet.

Heating of Magnets  The lifting capacity of an electromagnet is limited by the permissible current-carrying capacity of the winding, which, in turn, is dependent on the amount of heat energy that the winding can dissipate per unit time without exceeding a given temperature rise. Enamels, synthetic varnishes, and thermoplastic wire insulations are available to allow a wide variety of temperature rises.

Design of Exciting Coil  Let n = number of turns, l = mean length of turn, in (l = 2πr, where r is the mean radius, in), and A = cross section of wire, cir mils. The resistance of 1 cir mil·ft of copper is practically 12 Ω at 60°C, or 1 Ω per cir mil·in. Hence the resistance R = nl/A Ω; the current I = EA/(nl); the ampere-turns nI = EA/l; and the power to be dissipated P = E²A/(nl) W, where E is the applied voltage. From the foregoing equations the cross section of wire and the number of turns can be calculated.
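The relations above determine a coil directly once the required ampere-turns, the available voltage, and the allowable dissipation are fixed. The following short sketch is illustrative only; the function and its numerical inputs are assumed for this example and are not taken from the handbook.

# Illustrative sketch: dc exciting-coil design from nI = E*A/l, P = E^2*A/(n*l),
# and R = n*l/A, with l in inches and A in circular mils (copper at about 60 degC).
def design_coil(ampere_turns, mean_turn_length_in, volts, max_watts):
    """Return (wire area in cir mils, turns, current in A, resistance in ohms)."""
    area = ampere_turns * mean_turn_length_in / volts                 # from nI = E*A/l
    turns = volts**2 * area / (max_watts * mean_turn_length_in)       # from P = E^2*A/(n*l)
    resistance = turns * mean_turn_length_in / area                   # R = n*l/A
    current = volts / resistance                                      # I = E/R = E*A/(n*l)
    return area, turns, current, resistance

# Assumed example: 10,000 A-turns, 6-in mean turn, 24-V supply, 60-W dissipation
# -> about 2,500 cir mils (near 16 AWG), 4,000 turns, 2.5 A, 9.6 ohms.
print(design_coil(10000, 6.0, 24.0, 60.0))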

Fig. 15.1.102  Winding space factor.

Space Factor of Winding  Space factor of a coil is the ratio of the space occupied by the conductor to the total volume of the coil or winding. Only in the theoretical case of uninsulated square or rectangular conductor may the space factor be 100 percent. For wire of circular section with insulation of negligible thickness, wound as shown in Fig. 15.1.102a, the space factor will be 78.5 percent. When the turns of wire are “bedded,” as shown in Fig. 15.1.102b (the case in most windings, particularly with smaller wires), there is a theoretical gain of about 7 percent in space factor. Experiments have shown that in most cases this gain is about neutralized in practice by the flattening out of the insulation of the wire due to the tension used in winding. When wound in a haphazard manner, the space factors of magnet wires vary according to size, substantially as follows:

                   Double covered              Single covered
Size, AWG          0      5      10            15     20     25     30     35
Space factor, %    60     53.8   45.5          35.1   32.2   32     25.7   16

Magnet wire is usually a soft annealed copper wire of high conductivity. It can be obtained in square, rectangular, and circular conductors. Round or cylindrical wire is almost entirely used in smaller sizes; ribbons are frequently used in the larger sizes. Common round wire sizes for copper magnet wire include No. 42 AWG (0.0025-in bare wire diameter) to 8 AWG (0.1285 in). Common round wire sizes for aluminum magnet wire include No. 26 AWG to No. 4 AWG. For use in very small devices, ultrafine round magnet wire is available in sizes as small as No. 60 AWG copper and No. 52 aluminum. Magnet wire insulation has evolved to become high in electrical, physical, and thermal performance. Polymers are most commonly used for film-insulated magnet wire and are based on polyvinyl acetates, polyesters, polyamideimides, polyimides, polyamides, and polyurethanes. It is common to use different layers of polymers to obtain the best performance for a specific application. Film-insulated magnet wires are available in temperature classes from 105 to 200C, each with specific advantages such as solderability, thermal stability, transformer resistance, chemical resistance, windability, and toughness. Magnet wire is also produced with fabric layers served over bare or conventional film-insulated wire. Fabric materials include fiberglass,


polyester (Dacron), aromatic polyamide (Nomex) paper, and aromatic polyamide film (Kapton). Fibrous insulated magnet wires are available in temperature classes from 105 to 220°C, each with specific advantages such as high dielectric strength, positive separation of conductors, high thermal and chemical stability, physical protection, and high-temperature performance.

AUTOMOBILE SYSTEMS

A battery ignition system for a four-cylinder engine is shown diagrammatically in Fig. 15.1.104. The primary circuit supplied by the battery consists of the primary coil P and a set of contacts, or “points,” operated by a four-lobe cam, in series. In order to reduce arcing and burning of the contacts and to produce a sharp break in the current, a capacitor C is connected across the contacts. The contacts, which are of pure tungsten, are operated by the four-lobe cam, which is driven at one-half engine speed. A strong spring tends to keep the contacts closed.

The evolution of automobile electrical systems toward microprocessors has transformed all aspects of automotive engineering, operation, and repair. Engine operation, emission controls, drive-train control, operator interfaces to vehicle operation and drive controls, entertainment systems, and communication and navigational systems are commonplace and will continue to evolve. At the heart of understanding the operation of the car/truck four-cycle combustion engine is a basic electrical system that has supported engine operation for generations. The following information describes electrically based automotive ignition, starting, lighting, and accessory systems commonly used prior to microprocessor-based systems.

Fig. 15.1.104 Battery-ignition system.

Automobile Ignition Systems

The ignition system in an automobile produces the spark which ignites the combustible mixture in the engine cylinders. This is accomplished by a high-voltage, or high-tension, spark between metal points in a spark plug. (A spark plug is an insulated bushing screwed into the cylinder head.) Spark plugs usually have porcelain insulation, but for some special uses, such as in airplane engines, mica may be used. There are two general sources for the energy necessary for ignition; one is the electrical system of the car, which is maintained by the generator and the battery (battery ignition), and the other is a magneto.

Battery ignition systems have traditionally operated electromechanically, using a spark coil, a high-voltage distributor, and low-voltage breaker points. Electronic ignition systems, working from the battery, became standard on U.S. cars in 1975. These vary in complexity from the use of a single transistor to reduce the current through the points to pointless systems triggered by magnetic pulses or interrupted light beams. Capacitor discharge into a pulse transformer is used in some systems to obtain the high voltage needed to fire the spark plugs. Battery ignition is most widely used since it is simple, reliable, and low in cost, and the electrical system is a part of the car equipment.

The high voltage for the spark is obtained from an ignition coil which consists of a primary coil of relatively few turns and a secondary coil of a large number of turns, both coils being wound on a common magnetic core consisting of either thin strips of iron or small iron wires. In a 6-V system the resistance of the primary coil is from 0.9 to 2 Ω and the inductance is from 5 to 10 mH. The number of secondary turns varies from 9,000 to 25,000, and the ratio of primary to secondary turns varies from 1:40 to 1:100. The coil operates on the following principle. It stores energy in a magnetic field relatively slowly and then releases it suddenly. The power developed (p = dw/dt) is thus relatively large (w = stored energy). The high emf e2 which is required for the spark is induced by the sudden change in the flux φ in the core of the coil when the primary current is suddenly interrupted, e2 = n2(dφ/dt), where n2 is the number of secondary turns. For satisfactory ignition, peak voltages from 10 to 20 kV are desirable. Figure 15.1.103 shows the relation between the volts required to produce a spark and pressure with compressed air.
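As a rough illustration of these orders of magnitude (the specific numbers are assumed for the example, chosen from within the ranges quoted above), a 6-V primary of 1.5 Ω and 8 mH carries about I = 6/1.5 = 4 A with the contacts closed and stores w = ½LI² = ½ × 0.008 × 4² ≈ 64 mJ. If the capacitor and contacts interrupt this current in roughly 0.1 ms, the primary emf is on the order of L(di/dt) = 0.008 × 4/10⁻⁴ ≈ 320 V, and a 1:50 turns ratio steps this up to roughly 16 kV, consistent with the 10 to 20 kV stated above.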

Fig. 15.1.103 Pressure-voltage curve for spark plug.

In the Delco-Remy distributor (Fig. 15.1.105) two breaker arms are connected in parallel; one coil and one capacitor are used. One set of contacts is open when the other is just breaking but closes a few degrees after the break occurs. This closes the primary of the ignition coil immediately after the break, increases the time that the primary of the ignition coil is closed, and permits the flux in the iron to reach its full value. The interrupter shown in Fig. 15.1.105 is designed for an eight-cylinder engine.

Fig. 15.1.105  Delco-Remy dual-point eight-cylinder interrupter.

Electronic ignition systems have no problem operating at the required speed. The spark should advance with increase in engine speed so as to allow for the time lag in the explosion. To take care of this, automatic timers are equipped with centrifugally operated weights which advance the breaker cam with respect to the engine drive as the speed increases.

Automobile Lighting and Starting Systems

Automobile lighting and starting systems initially operated at 6 V, but at present nearly all cars operate at 12 V because larger engines, particularly V-8s, are now common and require more starting power. With 12 V, for the same power, the starting current is halved, and the effect of resistance in the leads, connections, and brushes is materially reduced. In some systems the positive side of the system is grounded, but more often the negative side is grounded. A further development is the application of an ac generator, or alternator, combined with a rectifier as the generating unit rather than the usual dc generator. One advantage is the elimination of the commutator, made up of segments, which requires some maintenance due to the sparking and wear of the carbon brushes. With the alternator the dc field rotates, and the brushes, operating on smooth slip rings, require almost no maintenance. Also, the system is greatly simplified by the fact that rectifiers are “one-way” devices, and the battery cannot deliver current back to the generator when its voltage drops below that of the battery. Thus, no cutout relay, such as is required with dc generators, is necessary. This ac development is the result of the availability of reliable, low-cost germanium and silicon semiconductor rectifiers. Figure 15.1.106 shows a schematic diagram of the Ford system (adapted initially to trucks). The generator stator is wound three-phase


Fig. 15.1.106 Schematic diagram of Ford lighting and starting system.

Y-connected, and the field is bipolar, supplied with direct current through slip rings and brushes. The rectifier diodes are connected in a full-wave bridge circuit to supply the battery through the ammeter.

Regulator The function of the regulator is to control the generator current so that its value is adapted to the battery voltage, which is related to the condition of charge of the battery (see Fig. 15.1.15). Thus, when the battery voltage drops (indicating a lowered condition of charge), the current should be increased, and, conversely, when the battery voltage increases (indicating a high condition of charge), the current should be decreased. Neglecting for the moment the starting procedure, when the ignition switch is thrown to the normal “on” position at c, the coil actuating the field relay is connected to the battery + terminal and causes the relay contacts a to close. This energizes the regulator circuits, and, if the two upper voltage-regulator contacts b are closed as shown, the rotating field of the alternator is connected directly to the battery + terminal, and the field current is then at its maximum value and produces a high generator voltage and large output current. At the same time the voltage-regulator coil in series with the 0.3-Ω and 14-Ω resistors is connected between the battery + terminal and ground. If the voltage of the battery rises owing to its higher condition of charge, the current to the voltage-regulator coil increases, causing it to open the two upper contacts at b. Current from the battery now flows through the 0.3-Ω resistor and divides, some going through the 10-Ω resistor and dividing between the field and the 50-Ω resistor, and the remainder going to the voltage-regulator coil. The current to the rotating field is thus reduced, causing the alternator output to be reduced. Because of the 0.3 Ω now in circuit, the current to the coil of the voltage regulator is reduced to such a value that it holds the center contact at b in the mid, or open, position. If the battery voltage rises to an even higher value, the regulator coil becomes strong enough to close the lower

contacts at b; this short-circuits the field, reducing its current almost to zero, and thus reducing the alternator output to zero. On the other hand, when the battery voltage drops, the foregoing sequence is reversed, and the contacts at b operate to increase the current to the alternator field. As was mentioned earlier, the battery cannot supply current to the alternator because of the “one-way” characteristic of the rectifier. Thus, when the alternator voltage drops below that of the battery and even when the alternator stops running, its current automatically becomes zero. The alternator has a normal rectifier open-circuit voltage of about 14 V and a rating of 20 A.

Starting In most cases, for starting, the ignition key is turned far to the right and held there until the motor starts. Then, when the key is released, the ignition switch contacts assume a normal operating position. Thus, in Fig. 15.1.106, when the ignition-switch contact is in the starting position S, the starter relay coil becomes connected by a lead to the battery + terminal and thus becomes energized, closing the relay contacts. The starter motor is then connected to the battery to crank the engine. At the time that the contact closes, it makes contact with a small metal brush e which connects the battery + terminal to the primary terminal of the ignition coil through the protective resistor R. After the motor starts, the ignition switch contacts spring to the normal operating position C, and the starter relay switch opens, thereby breaking contact with the small brush e. However, when contact C is closed, the ignition coil primary terminal is now connected through leads to the battery + terminal. The interrupter, the ignition coil, and the distributor now operate in the manner described earlier (see Fig. 15.1.104); the system shown in Fig. 15.1.106 is that for a six-cylinder engine. The connection of accessories to the electric system is illustrated in Fig. 15.1.106 for the horn, head and other lights, and temperature and fuel gages.


15.2 ELECTRONICS

by Byron M. Jones
revised by Timothy M. Cockerill

REFERENCES: “Reference Data for Radio Engineers,” Howard Sams & Co. “Transistor Manual,” General Electric Co. “SCR Manual,” General Electric Co. “The Semiconductor Data Book,” Motorola Inc. Fink, “Television Engineering,” McGraw-Hill. “Industrial Electronics Reference Book,” Wiley. Mano, “Digital and Logic Design,” Prentice-Hall. McNamara, “Technical Aspects of Data Communication,” Digital Press. Fletcher, “An Engineering Approach to Digital Design,” Prentice-Hall. “The TTL Data Book,” Texas Instruments, Inc. “1988 MOS Products Catalog,” American Microsystems, Inc. “1988 Linear Data Book,” National Semiconductor Corp. “CMOS Standard Cell Data Book,” Texas Instruments, Inc. “Power MOSFET Transistor Data,” Motorola, Inc. Franco, “Design with Operational Amplifiers and Analog Integrated Circuits,” McGraw-Hill. Soclof, “Applications of Analog Integrated Circuits,” Prentice-Hall. Gibson, “Computer Systems Concepts and Design,” Prentice-Hall. Sedra and Smith, “Microelectronic Circuits,” HRW Saunders. Millman and Grabel, “Microelectronics,” McGraw-Hill. Ghausi, “Electronic Devices and Circuits: Discrete and Integrated,” HRW Saunders. Savant, Roden, and Carpenter, “Electronic Design Circuits and Systems,” Benjamin Cummings. Stearns and Hush, “Digital Signal Analysis,” Prentice-Hall. Kassakian, Schlecht, and Verghese, “Principles of Power Electronics,” Addison-Wesley. Yariv, “Optical Electronics,” HRW Saunders.

The subject of electronics can be approached from the standpoint of either the design of devices or the use of devices. The practicing mechanical engineer has little interest in designing devices, so the approach in this article will be to describe devices in terms of their external characteristics.

COMPONENTS

Resistors, capacitors, reactors, and transformers are described earlier in this section, along with basic circuit theory. These explanations are equally applicable to electronic circuits and hence are not repeated here. A description of additional components peculiar to electronic circuits follows.

A rectifier, or diode, is an electronic device which offers unequal resistance to forward and reverse current flow. Figure 15.2.1 shows the schematic symbol for a diode. The arrow beside the diode shows the direction of current flow. Current flow is taken to be the flow of positive charges, i.e., the arrow is counter to electron flow. Figure 15.2.2 shows typical forward and reverse volt-ampere characteristics. Notice that the scales for voltage and current are not the same for the first and third quadrants. This has been done so that both the forward and reverse characteristics can be shown on a single plot even though they differ by several orders of magnitude.

Fig. 15.2.1 Diode schematic symbol.

Fig. 15.2.2 Diode forward-reverse characteristic.

Diodes are rated for forward current capacity and reverse voltage breakdown. They are manufactured with maximum current capabilities ranging from 0.05 A to more than 1,000 A. Reverse voltage breakdown varies from 50 V to more than 2,500 V. At rated forward current, the forward voltage drop varies between 0.7 and 1.5 V for silicon diodes. Although other materials are used for special-purpose devices, by far the most common semiconductor material is silicon. With a forward current of 1,000 A and a forward voltage drop of 1 V, there would be a power loss in the diode of 1,000 W (more than 1 hp). The basic diode package shown in Fig. 15.2.3 can dissipate about 20 W. To maintain an acceptable temperature in the diode, it is necessary to mount the diode on a heat sink. The manufacturer’s recommendation should be followed very carefully to ensure good heat transfer and at the same time avoid fracturing the silicon chip inside the diode package.

Fig. 15.2.3 Physical diode package.

The selection of fuses or circuit breakers for the protection of rectifiers and rectifier circuitry requires more care than for other electronic devices. Diode failures as a result of circuit faults occur in a fraction of a millisecond. Special semiconductor fuses have been developed specifically for semiconductor circuits. Proper protective circuits must be provided for the protection of not only semiconductors but also the rest of the circuit and nearby personnel. Diodes and diode fuses have a short-circuit rating in ampere-squared-seconds (I²t). As long as the I²t rating of the diode exceeds the I²t rating of its protective fuse, the diode and its associated circuitry will be protected. Circuit breakers may be used to protect diode circuits, but additional line impedance must be provided to limit the current while the circuit breaker clears. Circuit breakers do not interrupt the current when their contacts open. The fault is not cleared until the line voltage reverses at the end of the cycle of the applied voltage. This means that the clearing time for a circuit breaker is about 1/2 cycle of the ac input voltage. Diodes have a 1-cycle overcurrent rating which indicates the fault current the diode can carry for circuit-breaker protection schemes. Line inductance is normally provided to limit fault currents for breaker protection. Often this inductance is in the form of leakage reactance in the transformer which supplies power to the diode circuit.

A thyristor, often called a silicon-controlled rectifier (SCR), is a rectifier which blocks current in both the forward and reverse directions. Conduction of current in the forward direction will occur when the anode is positive with respect to the cathode and when the gate is pulsed positive with respect to the cathode. Once the thyristor has begun to conduct, the gate pulse can return to 0 V or even go negative and the thyristor will continue to pass current. To stop the cathode-to-anode current, it is necessary to reverse the cathode-to-anode voltage. The thyristor will again be able to block both forward and reverse voltages until current flow is initiated by a gate pulse. The schematic symbol for an SCR is shown in Fig. 15.2.4. The physical packaging of thyristors is


Fig. 15.2.4 Thyristor schematic symbol.

similar to that of rectifiers with similar ratings except, of course, that the thyristor must have an additional gate connection. The gate pulse required to fire an SCR is quite small compared with the anode voltage and current. Power gains in the range of 10⁶ to 10⁹ are easily obtained. In addition, the power loss in the thyristor is very low, compared with the power it controls, so that it is a very efficient power-controlling device. Efficiency in a thyristor power supply is usually 97 to 99 percent. When the thyristor blocks either forward or reverse current, the high voltage across the thyristor is accompanied by low current. When the thyristor is conducting forward current after having been fired by its gate pulse, the high anode current occurs with a forward voltage drop of about 1.5 V. Since high voltage and high current never occur simultaneously, the power dissipation in both the on and off states is low.

The thyristor is rated primarily on the basis of its forward-current capacity and its voltage-blocking capability. Devices are manufactured to have equal forward and reverse voltage-blocking capability. Like diodes, thyristors have I²t ratings and 1-cycle surge current ratings to allow design of protective circuits. In addition to these ratings, which the SCR shares in common with diodes, the SCR has many additional specifications. Because the thyristor is limited in part by its average current and in part by its rms current, forward-current capacity is a function of the duty cycle to which the device is subjected. Since the thyristor cannot regain its blocking ability until its anode voltage is reversed and remains reversed for a short time, this time must be specified. The time to regain blocking ability after the anode voltage has been reversed is called the turn-off time. Specifications are also given for minimum and maximum gate drive. If forward blocking voltage is reapplied too quickly, the SCR may fire with no applied gate voltage pulse. The maximum safe value of rate of reapplied voltage is called the dv/dt rating of the SCR. When the gate pulse is applied, current begins to flow in the area immediately adjacent to the gate junction. Rather quickly, the current spreads across the entire cathode-junction area. In some circuits associated with the thyristor an extremely fast rate of rise of current may occur. In this event localized heating of the cathode may occur, with a resulting immediate failure or, in less extreme cases, a slow degradation of the thyristor. The maximum rate of change of current for a thyristor is given by its di/dt rating. Design for di/dt and dv/dt limits is not normally a problem at power-line frequencies of 50 and 60 Hz. These ratings become a design factor at frequencies of 500 Hz and greater. Table 15.2.1 lists typical thyristor characteristics.

A triac is a bilateral SCR. It blocks current in either direction until it receives a gate pulse. It can be used to control power in ac circuits. Triacs are widely used for light dimmers and for the control of small universal ac motors. The triac must regain its blocking ability as the line voltage crosses through zero. This fact limits the use of triacs to 60 Hz and below.
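The I²t and 1-cycle ratings described above lend themselves to a simple coordination check. The sketch below, using illustrative (assumed) numbers only, compares a fuse's clearing I²t with a device rating and estimates the let-through of a half-cycle fault for the circuit-breaker case discussed earlier.

```python
# Protective coordination check for a diode or thyristor (illustrative numbers only)
device_i2t = 4000.0       # semiconductor I^2*t rating, A^2*s (assumed)
fuse_i2t = 2500.0         # fuse total clearing I^2*t, A^2*s (assumed)
print("fuse protects device:", fuse_i2t < device_i2t)

# Circuit-breaker case: the text notes the fault persists roughly 1/2 cycle,
# so the let-through is about I_rms^2 times the half-cycle duration.
f_line = 60.0                      # line frequency, Hz
t_half_cycle = 0.5 / f_line        # ~8.3 ms
i_fault_rms = 600.0                # prospective fault current limited by line inductance, A (assumed)
fault_i2t = i_fault_rms**2 * t_half_cycle
print(f"half-cycle fault let-through ~ {fault_i2t:.0f} A^2*s vs rating {device_i2t:.0f} A^2*s")
```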

Fig. 15.2.5 Transistor schematic symbol.

A transistor is a semiconductor amplifier. The schematic symbol for a transistor is shown in Fig. 15.2.5. There are two types of transistors, p-n-p and n-p-n. Notice that the polarities of voltage applied to these devices are opposite. In many sizes matched p-n-p and n-p-n devices are available. The most common transistors have a collector dissipation rating of 150 to 600 mW. Collector-to-base breakdown voltage is 20 to 50 V. The amplification or gain of a transistor occurs because of two facts: (1) A small change in current in the base circuit causes a large change in current in the collector and emitter leads. This current amplification is designated hfe on most transistor specification sheets. (2) A small change in base-to-emitter voltage can cause a large change in either the collector-to-base voltage or the collector-to-emitter voltage. Table 15.2.2 shows basic ratings for some typical transistors. There is a great profusion of transistor types, so the choice of type depends upon availability and cost as well as operating characteristics. The gain of a transistor is independent of frequency over a wide range. At high frequency, the gain falls off. This cutoff frequency may be as low as 20 kHz for audio transistors or as high as 1 GHz for radio-frequency (rf) transistors.

The schematic symbol for the field-effect transistor (FET) is shown in Fig. 15.2.6. The flow of current from source to drain is controlled by an electric field established in the device by the voltage applied between the gate and the drain. The effect of this field is to change the resistance of the transistor by altering its internal current path. The FET has an extremely high gate resistance (10¹² Ω), and as a consequence, it is used for applications requiring high input impedance. Some FETs have been designed for high-frequency characteristics. The two basic constructions used for FETs are bipolar junctions and metal oxide semiconductors. The schematic symbols for each of these are shown in Fig. 15.2.6a and 15.2.6b. These are called JFETs and MOSFETs to distinguish between them. JFETs and MOSFETs are used as stand-alone devices and are also widely used in integrated circuits. (See below, this section.)

A MOSFET with higher current capacity is called a power MOSFET. Table 15.2.3 shows some typical characteristics for power MOSFETs. Power MOSFETs are somewhat more limited in maximum power than are thyristors. Circuits with multiple power MOSFETs are limited to about 20 kW. Thyristors are limited to about 1500 kW. As a point of interest, there are electric power applications in the hundreds of megawatts which incorporate massive series and parallel combinations of thyristors. An advantage of power MOSFETs is that they can be turned on and turned off by means of the gate-source voltage. Thus low-power electric

Table 15.2.1  Typical Thyristor Characteristics

Voltage   Current, A        I²t,       1-cycle     di/dt,   dv/dt,   Turn-off
          rms      avg      A²·s       surge, A    A/μs     V/μs     time, μs
400       35       20       165        180         100      200      10
1,200     35       20       75         150         100      200      10
400       110      70       4,000      1,000       100      200      40
1,200     110      70       4,000      1,000       100      200      40
400       235      160      32,000     3,500       100      200      80
1,200     235      160      32,000     3,500       75       200      80
400       470      300      120,000    5,500       50       100      150
1,200     470      300      120,000    5,500       50       100      150


Table 15.2.2  Typical Transistor Characteristics

JEDEC      Type     Collector-emitter volts     Collector dissipation,    Collector       Current
number              at breakdown, BVCE          Pc (25°C)                 current, Ic     gain, hfe
2N3904     n-p-n    40                          310 mW                    200 mA          200
2N3906     p-n-p    40                          310 mW                    200 mA          200
2N3055     n-p-n    100                         115 W                     15 A            20
2N6275     n-p-n    120                         250 W                     50 A            30
2N5458     JFET     40                          200 mW                    9 mA            *
2N5486     JFET     25                          200 mW                                    †

* JFET, current gain is not applicable.
† High-frequency JFET—up to 400 MHz.

control can turn the device on and off. Thyristors can be turned on only by their gate voltage; to turn a thyristor off, its high-power anode-to-cathode voltage must be reversed. This factor adds complication to many thyristor circuits. Another power device which is

Fig. 15.2.6 Field-effect transistor. (a) Bipolar junction type (JFET); (b) metal-oxide-semiconductor type (MOSFET).

Table 15.2.3  Typical Power MOSFET Characteristics

Drain voltage    Device        Drain amperes    Power dissipation, W    Case type
600              MTH6N60       6.0              150                     TO-218
400              MTH8N40       8.0              150                     TO-218
100              MTH25N10      25               150                     TO-218
50               MTH35N05      35               150                     TO-218
600              MTP1N60       1.0              75                      TO-220
400              MTP2N40       2.0              75                      TO-220
100              MTP10N10      10               75                      TO-220
200              MTE120N20     120              500                     346-01
100              MTE150N10     150              500                     346-01
50               MTE200N05     200              500                     346-01

similar to the power MOSFET is the insulated-gate bipolar transistor (IGBT). This device is a Darlington combination of a MOSFET and a bipolar transistor (see Fig. 15.2.13). A low-power MOSFET first stage drives the base of a second, high-power bipolar transistor; the two transistors are integrated in a single case. The IGBT is applied in high-power devices which use high-frequency switching. IGBTs are available in ratings similar to those of thyristors (see Table 15.2.1), so they are power devices. The advantage of the IGBT is that it can be switched on and off by means of its gate. The advantage of an IGBT over a power MOSFET is that it can be made with higher power ratings.

The unijunction is a special-purpose semiconductor device. It is a pulse generator and is widely used to fire thyristors and triacs as well as in timing circuits and waveshaping circuits. The schematic symbol for a unijunction is shown in Fig. 15.2.7. The device is essentially a silicon resistor. This resistor is connected to base 1 and base 2. The emitter is fastened to this resistor about halfway between bases 1 and 2.

Fig. 15.2.7 Unijunction.

If a positive voltage is applied to base 2, and if the emitter and base 1 are at zero, the

emitter junction is back-biased and no current flows in the emitter. If the emitter voltage is made increasingly positive, the emitter junction will become forward-biased. When this occurs, the resistance between base 1 and base 2 and between base 2 and the emitter suddenly switches to a very low value. This is a regenerative action, so that very fast and very energetic pulses can be generated with this device.

Before the advent of semiconductors, electronic rectifiers and amplifiers were vacuum tubes or gas-filled tubes. Some use of these devices still remains. If an electrode is heated in a vacuum, it gives up surface electrons. If an electric field is established between this heated electrode and another electrode so that the electrons are attracted to the other electrode, a current will flow through the vacuum. Electrons flow from the heated cathode to the cold anode. If the polarity is reversed, since there are no free electrons around the anode, no current will flow. This, then, is a vacuum-tube rectifier. If a third electrode, called a control grid, is placed between the cathode and the anode, the flow of electrons from the cathode to the anode can be controlled. This is a basic vacuum-tube amplifier. Additional grids have been placed between the cathode and anode to further enhance certain characteristics of the vacuum tube. In addition, multiple anodes and cathodes have been enclosed in a single tube for special applications such as radio signal converters. If an inert gas, such as neon or argon, is introduced into the vacuum, conduction can be initiated from a cold electrode. The breakdown voltage is relatively stable for a given gas and gas pressure and is in the range of 50 to 200 V. The nixie display tube is such a device. This tube contains 10 cathodes shaped in the form of the numerals from 0 to 9. If one of these cathodes is made negative with respect to the anode in the tube, the gas in the tube glows around that cathode. In this way each of the 10 numerals can be made to glow when the appropriate electrode is energized.

An ignitron is a vapor-filled tube. It has a pool of liquid mercury in the bottom of the tube. Air is exhausted from the enclosure, leaving only mercury vapor, which comes from the pool at the bottom. If no current is flowing, this tube will block voltage whether the anode is plus or minus with respect to the mercury-pool cathode. A small rod called an ignitor can form a cathode spot on the pool of mercury when it is withdrawn from the pool. The ignitor is pulled out of the pool by an electromagnet. Once the cathode spot has been formed, electrons will continue to flow from the mercury-pool cathode to the anode until the anode-to-cathode voltage is reversed. The operation of an ignitron is very similar to that of a thyristor. The anode and cathode of each device perform similar functions. The ignitor and gate also perform similar functions. The thyristor is capable of operating at much higher frequencies than the ignitron and is much more efficient, since the thyristor has 1.5 V forward drop and the ignitron has 15 V forward drop. The ignitron has an advantage over the thyristor in that it can carry extremely high overload currents without damage. For this reason ignitrons are often used as electronic “crowbars” which discharge electrical energy when a fault occurs in a circuit.

DISCRETE-COMPONENT CIRCUITS

Several common rectifier circuits are shown in Fig. 15.2.8. The waveforms shown in this figure assume no line reactance. The presence of line reactance will make a slight difference in the waveshapes and the conversion factors shown in Fig. 15.2.8. These waveshapes are equally applicable for loads which are pure resistive or resistive and inductive.


Fig. 15.2.8 Comparison of rectifier circuits.

In a resistive load the current flowing in the load has the same waveshape as the voltage applied to it. For inductive loads, the current waveshape will be smoother than the voltage applied. If the inductance is high enough, the ripple in the current may be indeterminately small. An approximation of the ripple current can be calculated as follows:

    I = (Edc × PCT)/(200 π f N L)        (15.2.1)

where I = rms ripple current, Edc = dc load voltage, PCT = percent ripple from Fig. 15.2.8, f = line frequency, N = number of cycles of ripple frequency per cycle of line frequency, and L = equivalent series inductance in the load. Equation (15.2.1) will always give a value of ripple higher than that calculated by more exact means, but this value is normally satisfactory for power-supply design.

Capacitance in the load leads to increased regulation. At light loads, the capacitor will tend to charge up to the peak value of the line voltage and remain there. This means that for either the single full-wave circuit or the single-phase bridge the dc output voltage would be 1.414 times the rms input voltage. As the size of the loading resistor is reduced, or as the size of the parallel load capacitor is reduced, the load voltage will more nearly follow the rectified line voltage and so the dc voltage will approach 0.9 times the rms input voltage for very heavy loads or for very small filter capacitors. One can see then that dc voltage may vary

between 1.414 and 0.9 times line voltage due only to waveform changes when capacitor filtering is used. Four different thyristor rectifier circuits are shown in Fig. 15.2.9. These circuits are equally suitable for resistive or inductive loads. It will be noted that the half-wave circuit for the thyristor has a rectifier across the load, as in Fig. 15.2.8. This diode is called a freewheeling diode because it freewheels and carries inductive load current when the thyristor is not conducting. Without this diode, it would not be possible to build up current in an inductive load. The gate-control circuitry is not shown in Fig. 15.2.9 in order to make the power circuit easier to see. Notice the location of the thyristors and rectifiers in the single-phase full-wave circuit. Constructed this way, the two diodes in series perform the function of a free-wheeling diode. The circuit can be built with a thyristor and rectifier interchanged. This would work for resistive loads but not for inductive loads. For the full three-phase bridge, a freewheeling diode is not required since the carryover from the firing of one SCR to the next does not carry through a large portion of the negative half cycle and therefore current can be built up in an inductive load. Capacitance must be used with care in thyristor circuits. A capacitor directly across any of the circuits in Fig. 15.2.9 will immediately destroy the thyristors. When an SCR is fired directly into a capacitor with no series resistance, the resulting di/dt in the thyristor causes extreme local heating in the device and a resultant failure. A sufficiently high series resistor prevents failure. An inductance in series with a


and R3. Usual practice is to design single-stage gains of 10 to 20. Much higher gains can be achieved, but low gain levels permit the use of less expensive transistors and increase circuit reliability. Figure 15.2.12 illustrates a basic two-stage transistor amplifier using complementary n-p-n and p-n-p transistors. Note that the first stage is

Fig. 15.2.11 Single-stage amplifier.

Fig. 15.2.9 Basic thyristor circuits.

capacitor must also be used with caution. The series inductance may cause the capacitor to “ring up.” Under this condition, the voltage across the capacitor can approach twice peak line voltage or 2.828 times rms line voltage. The advantage of the thyristor circuits shown in Fig. 15.2.9 over the rectifier circuits is, of course, that the thyristor circuits provide variable output voltage. The output of the thyristor circuits depends upon the magnitude of the incoming line voltage and the phase angle at which the thyristors are fired. The control characteristic for the thyristor power supply is determined by the waveshape of the output voltage and also by the phase-shifting scheme used in the firing-control means for the thyristor. Practical and economic power supplies usually have control characteristics with some degree of nonlinearity. A representative characteristic is shown in Fig. 15.2.10. This control characteristic is usually given for nominal line voltage with the tacit understanding that variations in line voltage will cause approximately proportional changes in output voltage.
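Returning to the ripple estimate of Eq. (15.2.1), a minimal sketch for one assumed operating point follows; the percent-ripple figure would in practice be read from Fig. 15.2.8, and all values here are illustrative.

```python
import math

# Illustrative (assumed) values for Eq. (15.2.1)
E_dc = 100.0    # dc load voltage, V
PCT = 48.0      # percent ripple for the chosen circuit, as would be read from Fig. 15.2.8 (assumed)
f = 60.0        # line frequency, Hz
N = 2           # ripple cycles per line cycle (single-phase full-wave)
L = 0.1         # equivalent series inductance in the load, H (assumed)

# I = (E_dc * PCT) / (200 * pi * f * N * L)
I_ripple = (E_dc * PCT) / (200.0 * math.pi * f * N * L)
print(f"approximate rms ripple current = {I_ripple:.2f} A")
```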

Fig. 15.2.12 Two-stage amplifier.

identical to that shown in Fig. 15.2.11. This n-p-n stage drives the following p-n-p stage. Additional alternate n-p-n and p-n-p stages can be added until any desired overall amplifier gain is achieved. Figure 15.2.13 shows the Darlington connection of transistors. The amplifier is used to obtain maximum current gain from two transistors. Assuming a base-to-collector current gain of 50 times for each transistor, this circuit will give an input-to-output current gain of 2,500. This high level of gain is not very stable if the ambient temperature changes, but in many cases this drift is tolerable.
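The current-gain compounding of the Darlington connection can be shown with a one-line sketch; the gains of 50 are the illustrative values used in the text.

```python
# Approximate overall current gain of a Darlington pair (Fig. 15.2.13):
# roughly the product of the two individual current gains.
hfe1, hfe2 = 50, 50
print(hfe1 * hfe2)    # -> 2500, matching the figure quoted in the text
```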

Fig. 15.2.13 Darlington connection.

Figure 15.2.14 shows a circuit developed specifically to minimize temperature drift and drift due to power supply voltage changes. The differential amplifier minimizes drift because of the balanced nature of the circuit. Whatever changes in one transistor tend to increase the output are compensated by reverse trends in the second transistor. The input signal does not affect both transistors in compensatory ways, of course,

Fig. 15.2.10 Thyristor control characteristic.

Transistor amplifiers can take many different forms. A complete discussion is beyond the scope of this handbook. The circuits described here illustrate basic principles. A basic single-stage amplifier is shown in Fig. 15.2.11. The transistor can be cut off by making the input terminal sufficiently negative. It can be saturated by making the input terminal sufficiently positive. In the linear range, the base of an n-p-n transistor will be 0.5 to 0.7 V positive with respect to the emitter. The collector voltage will vary from about 0.2 V to Vc (20 V, typically). Note that there is a sign inversion of voltage between the base and the collector; i.e., when the base is made more positive, the collector becomes less positive. The resistors in this circuit serve the following functions. Resistor R1 limits the input current to the base of the transistor so that it is not harmed when the input signal overdrives. Resistors R2 and R3 establish the transistor’s operating point with no input signal. Resistors R4 and R5 determine the voltage gain of the amplifier. Resistor R4 also serves to stabilize the zero-signal operating point, as established by resistors R2

Fig. 15.2.14 Differential amplifier.

and so it is amplified. One way to look at a differential amplifier is that twice as many transistors are used for each stage of amplification to achieve compensation. For very low drift requirements, matched transistors are available. For the ultimate in differential amplifier performance, two matched transistors are encapsulated in a single unit. Operational amplifiers made with discrete components frequently use differential


amplifiers to minimize drift and offset. The operational amplifier is a low-drift, high-gain amplifier designed for a wide range of control and instrumentation uses.

Oscillators are circuits which provide a frequency output with no signal input. A portion of the collector signal is fed back to the base of the transistor. This feedback is amplified by the transistor and so maintains a sustained oscillation. The frequency of the oscillation is determined by parallel inductance and capacitance. The oscillatory circuit consisting of an inductance and a capacitance in parallel is called an LC tank circuit. This frequency is approximately equal to

    f = 1/(2π√(LC))        (15.2.2)

where f = frequency, Hz; C = capacitance, F; L = inductance, H. A 1-MHz oscillator might typically be designed with a 20-μH inductance in parallel with a 0.05-μF capacitor. The exact frequency will vary from the calculated value because of loading effects and stray inductance and capacitance. The Colpitts oscillator shown in Fig. 15.2.15 differs from the Hartley oscillator shown in Fig. 15.2.16 only in the way energy is fed back to the emitter. The Colpitts oscillator has a capacitive voltage

Fig. 15.2.15 Colpitts oscillator.

Fig. 15.2.16 Hartley oscillator.

divider in the resonant tank. The Hartley oscillator has an inductive voltage divider in the tank. The crystal oscillator shown in Fig. 15.2.17 has much greater frequency stability than the circuits in Figs. 15.2.15 and 15.2.16. Frequency stability of 1 part in 10⁷ is easily achieved with a crystal-controlled oscillator. If the oscillator is temperature-controlled by mounting it in a small temperature-controlled oven, the frequency

Fig. 15.2.17 Crystal-controlled oscillator.

stability can be increased to 1 part in 10⁹. The resonant LC tank in the collector circuit is tuned to approximately the crystal frequency. The crystal offers a low impedance at its resonant frequency. This pulls the collector-tank operating frequency to the crystal resonant frequency. As the desired operating frequency becomes 500 MHz and greater, resonant cavities are used as tank circuits instead of discrete capacitors and inductors. A rough guide to the relationship between frequency and resonant-cavity size is the wavelength of the frequency

    λ = 300 × 10⁶/f        (15.2.3)

where λ = wavelength, m; 300 × 10⁶ = speed of light, m/s; f = frequency, Hz. The resonant cavities will be smaller than indicated by Eq. (15.2.3) because in general the cavity is either one-half or one-fourth wavelength and also, in general, the electromagnetic wave velocity is less in a cavity than in free space.
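As a quick numerical check of Eqs. (15.2.2) and (15.2.3), the sketch below computes the resonant frequency of an LC tank and the free-space wavelength at that frequency; the component values are assumptions chosen only to give round numbers.

```python
import math

# Eq. (15.2.2): resonant frequency of a parallel LC tank
L = 25e-6        # inductance, H (assumed)
C = 1.0e-9       # capacitance, F (assumed)
f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Eq. (15.2.3): free-space wavelength at that frequency
wavelength = 300e6 / f   # m

print(f"f ~ {f/1e6:.2f} MHz, wavelength ~ {wavelength:.0f} m")
```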


The operating principles of these devices are beyond the scope of this article. There are many different kinds of microwave tubes including klystrons, magnetrons, and traveling-wave tubes. All these tubes employ moving electrons to excite a resonant cavity. These devices serve as either oscillators or amplifiers at microwave frequencies.

Lasers operate at a frequency of light of approximately 600 THz. This corresponds to a wavelength of 0.5 μm, or, in the more usual measure of visual-light wavelength, 5 × 10³ Å. Most laser oscillators are basically a variation of the Fabry-Perot etalon, or interferometer. Most light is disorganized insofar as the axis of vibration and the frequency of vibration are concerned. When radiation along different axes is attenuated, as with a polarizing screen, the light is said to be polarized. White light contains all visible frequencies. When white light is filtered, the remaining light is colored, or frequency-limited. A single color of light still contains a broad range of frequencies. Polarized, colored light is still so disorganized that it is difficult to focus the light energy into a narrow beam. Laser light is inherently a single-axis, single-frequency light.

Light-emitting diodes (LEDs) and semiconductor laser diodes (LDs) are specialty diodes that emit light and have the same current-voltage characteristic shown in Fig. 15.2.2. These diodes are made from various alloy compositions of compound semiconductor materials of gallium arsenide and indium phosphide that are capable of highly efficient light emission, in contrast to the silicon diodes described previously in this section. The devices are fabricated on 2-, 3-, 4-, and 6-in wafers via a layered deposition technique such as molecular beam epitaxy (MBE) or metalorganic vapor phase epitaxy (MOVPE) followed by standard semiconductor fabrication processes. Wavelength selection is controlled by engineering the alloy composition and thickness of the diode materials in the deposition process. Typical diode thickness is on the order of a couple of micrometres, with critical wavelength-control layers, known as quantum wells, as thin as a few nanometres. Diodes can be engineered to cover a broad wavelength range from ultraviolet (UV) through infrared (IR). LEDs and lasers have very similar materials, but differ in their fabrication process. For lasers, the fabrication process creates a Fabry-Perot cavity, or gain region, with two highly reflective mirrors, which are required for operation, as described in Section 19, Optics. LEDs have a simpler, yet similar, design that incorporates one or two lower-reflectivity mirrors that enhance the light output, but are not sufficiently reflective to create laser emission. Individual dice, on the order of 500 μm square, are cut from the wafers and then packaged in a variety of ways to suit the particular application.

LEDs and lasers have become prevalent in our everyday lives. Early use of LED technology was for indicator lights on industrial electronic equipment. LEDs are now found nearly everywhere, including kitchen appliances, toys, home computers, remote control units, alarm clocks, and solar-powered yard lights. More recently, LEDs have been implemented in vehicle brake lights, traffic signals, and large outdoor video displays, and are starting to break into the overhead lighting market to replace fluorescent lighting. Advantages of LEDs over incandescent or fluorescent technologies are many:
1. Higher electrooptical conversion efficiency results in lower operating cost per lumen.
2. Long life, typically on the order of 10 years, and longer time between replacement, so costs for personnel and equipment for replacement are lower.
3. For transportation applications, improved safety because, with the high number of LEDs used in a stoplight or brake light, about 50 percent can burn out before replacement is required, whereas a single incandescent bulb failure makes the light inoperative.

The semiconductor laser has also become part of our everyday lives, and is more of an enabling technology, as opposed to the LED, which replaced incandescent technology in many of its applications. The laser diode has enabled novel products and services such as:
1. Fiber-optic communications
2. Read/write optical disk technology (CD/DVD)
3. Portable handheld bar code scanners
4. Laser pointers
5. Laser printers


Additionally, arrays of laser diodes are used for replacing older solid-state lasers in high-power applications such as industrial cutting, machining, and marking, with advantages of much smaller footprint and lower power consumption. A radio wave consists of two parts, a carrier and an information signal. The carrier is a steady high frequency. The information signal may be a voice signal, a video signal, or telemetry information. The carrier wave can be modulated by varying its amplitude or by varying its frequency. Modulators are circuits which impress the information signal onto the carrier. A demodulator is a circuit in the receiving apparatus which separates the information signal from the carrier. A simple amplitude modulator is shown in Fig. 15.2.18. The transistor is base-driven with the carrier input and emitter-driven with the information signal. The modulated carrier wave appears at the collector of the transistor.
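The idea of amplitude modulation, a carrier whose amplitude is varied by the information signal, can be sketched numerically as below. This is only a mathematical illustration of the waveform; it does not model the transistor modulator of Fig. 15.2.18, and the frequencies and modulation index are assumed values.

```python
import math

f_carrier = 100e3      # carrier frequency, Hz (assumed)
f_info = 1e3           # information (audio) frequency, Hz (assumed)
m = 0.5                # modulation index (assumed)

def am_signal(t):
    """Amplitude-modulated wave: carrier scaled by (1 + m * information)."""
    info = math.cos(2 * math.pi * f_info * t)
    carrier = math.cos(2 * math.pi * f_carrier * t)
    return (1 + m * info) * carrier

# The envelope swings between (1 - m) and (1 + m) times the unmodulated amplitude.
samples = [am_signal(n / 1e6) for n in range(2000)]   # 2 ms sampled at 1 MHz
print(f"peak ~ {max(samples):.2f}, expected ~ {1 + m:.2f}")
```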

Fig. 15.2.21 FM discriminator.

not exactly at resonance, the current through resistor R1 will vary as the carrier frequency shifts up and down. This will create an AM signal across resistor R1. The diode, resistors R2 and R3, and capacitor C2 demodulate this signal as in the circuit in Fig. 15.2.20. The waveform of the basic electronic timing circuit is shown in Fig. 15.2.22 along with a basic timing circuit. Switch S1 is closed from time t1 until time t3. During this time, the transistor shorts the capacitor and holds the capacitor at 0.2 V. When switch S1 is opened at time t3, the transistor ceases to conduct and the capacitor charges exponentially due to the current flow through resistor R1. Delay time can be measured to any point along this exponential charge. If the time is measured until time t6, the timing may vary due to small shifts in supply voltage or slight changes in the voltage-level detecting circuit. If time is

Fig. 15.2.18 AM modulator.

An FM modulator is shown in Fig. 15.2.19. The carrier must be changed in frequency in response to the information signal input. This is accomplished by using a saturable ferrite core in the inductance of a Colpitts oscillator which is tuned to the carrier frequency. As the collector current in transistor T1 varies with the information signal, the saturation level in the ferrite core changes, which in turn varies the inductance of the winding in the tank circuit and alters the operating frequency of the oscillator.

Fig. 15.2.22 Basic timing circuit.

Fig. 15.2.19 FM modulator.

The demodulator for an AM signal is shown in Fig. 15.2.20. The diode rectifies the carrier plus information signal so that the filtered voltage appearing across the capacitor is the information signal. Resistor R2 blocks the carrier signal so that the output contains only the information signal. An FM demodulator is shown in Fig. 15.2.21. In this circuit, the carrier plus information signal has a constant amplitude. The information is in the form of varying frequency in the carrier wave. If inductor L1 and capacitor C1 are tuned to near the carrier frequency but

Fig. 15.2.20 AM demodulator.

measured until time t4, the voltage level will be easy to detect, but the obtainable time delay from time t3 to time t4 may not be large enough compared with the reset time t1 to t2. Considerations like these usually dictate detecting at time t4. If this time is at a voltage level which is 63 percent of Vc, the time from t3 to t4 is one time constant of R1 and C. This time can be calculated by

    t = RC        (15.2.4)

where t = time, s; R = resistance, Ω; C = capacitance, F. A timing circuit with a 0.1-s delay can be constructed using a 0.1-μF capacitor and a 1.0-MΩ resistor.

An improved timing circuit is shown in Fig. 15.2.23. In this circuit, the unijunction is used as a level detector, a pulse generator, and a reset means for the capacitor. The transistor is used as a constant-current source for charging the timing capacitor. The current through the transistor is determined by resistors R1, R2, and R3. This current is adjustable by means of R1. When the charge on the capacitor reaches approximately 50 percent of Vc, the unijunction fires, discharging the capacitor and generating a pulse at the output. The discharged capacitor is then recharged by the transistor, and the cycle continues to repeat. The pulse rate of this circuit can be varied from one pulse per minute to many thousands of pulses per second.
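A brief sketch of the exponential charging behavior behind Eq. (15.2.4), using the component values quoted above; the supply voltage is an illustrative assumption.

```python
import math

R = 1.0e6       # resistance, ohms (1.0 MΩ, as in the text's example)
C = 0.1e-6      # capacitance, F (0.1 μF)
Vc = 20.0       # supply voltage, V (assumed)

tau = R * C                                  # one time constant = 0.1 s

def v_cap(t):
    """Capacitor voltage t seconds after the switch opens (charging from zero)."""
    return Vc * (1.0 - math.exp(-t / (R * C)))

print(f"time constant = {tau:.3f} s")
print(f"voltage after one time constant = {v_cap(tau):.2f} V "
      f"({100 * v_cap(tau) / Vc:.0f}% of Vc)")   # ~63 percent, as stated in the text
```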


Fig. 15.2.23 Improved timing circuit.

INTEGRATED CIRCUITS

Table 15.2.4 lists some of the more common physical packages for discrete-component and integrated semiconductor devices. Although discrete components are still used for electronic design, integrated circuits (ICs) are becoming predominant in almost all types of electronic equipment. Dimensions of common dual in-line pin (DIP) integrated-circuit devices are shown in Fig. 15.2.24. An IC costs far less than circuits made with discrete components. Integrated circuits can be classified in several different ways. One way to classify them is by complexity. Small-scale integration (SSI), medium-scale integration (MSI), large-scale integration (LSI), and very large scale integration (VLSI) refer to this kind of classification. The cost and availability of a particular IC are more dependent upon the size of the market for that device than on the level of its internal complexity. For this reason, the classification by circuit complexity is not as meaningful today as it once was. The literature still refers to these classifications, however. For the purpose of this text, ICs will be separated into two broad classes: linear ICs and digital ICs.

The trend in IC development has been toward greatly increased complexity at significantly reduced cost. Present-day ICs are manufactured with internal spacings as low as 0.5 μm. The limitation of the contents of a single device is more often controlled by external connections than by internal space. For this reason, more and more complex combinations of circuits are being interconnected within a single device. There is also a tendency to accomplish functions digitally that were formerly done by analog means. Although these digital circuits are much more complex than their analog counterparts, the cost and reliability of ICs make the resulting digital circuit the preferred design. One can expect these trends to continue based on current technology. One can also anticipate further declines in price versus performance. It has been demonstrated again and again that digital IC designs are much more stable and reliable than analog designs. For years, the complexity of large-scale integrated circuits doubled each year. This meant that, over a 10-year period, the complexity of a single device increased by 2¹⁰, or over 1,000 times, and, over the period from 1960 to 1980, grew from one transistor on a chip to one million transistors on a chip. More recently the rate of increase has fallen off to only 1.7 times per year, or 1.7¹⁰ (over 200 times) in a 10-year period. Manufacturers also have developed application-specific integrated circuits (ASICs). These devices allow a circuit designer to design integrated circuits almost as easily as printed board circuits.

Table 15.2.4  Semiconductor Physical Packaging

Signal devices
  Plastic                          TO92
  Metal can                        TO5, TO18, TO39
Power devices
  Tab mount                        TO127, TO218, TO220
  Diamond case                     TO3, TO66
  Stud mount
  Flat base
  Flat pak (Hockey puck)
Integrated circuits
  Dip (dual in-line pins)          (See Fig. 15.2.24)
  Flat pack
  Chip carrier (50-mil centers)

Fig. 15.2.24 Approximate physical dimensions of dual in-line pin (DIP) integrated circuits. All dimensions are in inches. Dual in-line packages are made in three different constructions—molded plastic, cerdip, and ceramic.

LINEAR INTEGRATED CIRCUITS

The basic building block for many linear ICs is the operational amplifier. Table 15.2.5 lists the basic characteristics for a few representative IC operational amplifiers. In most instances, an adequate design for an operational amplifier circuit can be made assuming an “ideal” operational amplifier. For an ideal operational amplifier, one assumes that it has infinite gain and no voltage drop across its input terminals. In most designs, feedback is used to limit the gain of each operational amplifier. As long as the resulting closed-loop gain is much less than the open-loop gain of the operational amplifier, this assumption yields results that are within acceptable engineering accuracy. Operational amplifiers use a balanced input circuit which minimizes input voltage offset. Furthermore, specially designed operational amplifiers are available which have extremely low input offset voltage. The input voltage must be kept low because of temperature drift considerations. For these reasons, the assumption of zero input voltage, sometimes called a “virtual ground,” is justified. Figure 15.2.25 shows three operational amplifier circuits and the equations which describe their behavior. In this figure S

Table 15.2.5  Operational Amplifiers

Type      Purpose               Input bias     Input       Supply        Voltage    Unity-gain
                                current, nA    res., Ω     voltage, V    gain       bandwidth, MHz
LM741     General purpose       500            2 × 10⁶     ±20           25,000     1.0
LM224     Quad gen. purpose     150            2 × 10⁶     3 to 32       50,000     1.0
LM255     FET input             0.1            10¹²        ±22           50,000     2.5
LM444A    Quad FET input        0.005          10¹²        ±22           50,000     1.0


Fig. 15.2.25 Operational amplifier circuits.

is the Laplace transform variable. In these equations the input and output voltages are functions of S. The equations are written in the frequency domain. A simple transformation to steady variable-frequency behavior can be obtained by substituting jω (where j = √−1) for the variable S wherever it appears in the equation. By varying ω, the frequency in radians per second, one can obtain the steady-state frequency response of the circuit. The operational amplifier circuits shown in Figs. 15.2.25, 15.2.26, 15.2.27, and 15.2.28 show only the signal wires. There are additional connections to a dc power supply and, in some instances, to stabilizing circuits and guard circuits. These connections have been omitted for conceptual clarity.

Fig. 15.2.26 Difference amplifier.

One of the most useful analog integrated circuits is the difference amplifier. This is a balanced input amplifier that is a fundamental component in instrumentation and control applications. The difference amplifier is shown in Fig. 15.2.26. This amplifier maximizes the voltage gain for Voutput/Vdif, the difference gain, and minimizes the voltage gain for Voutput/VCM, the common-mode gain. There are two resistors in the circuit shown as R1 and two resistors shown as R2. The resistance values of the two R1 resistors are equal, and similarly the resistance values of the two R2 resistors are equal. The difference gain is given by

    Voutput/Vdif = −R2/R1        (15.2.5)

Fig. 15.2.27 Instrumentation amplifier.

The ratio of the difference gain to the common-mode gain is called the common-mode rejection ratio (CMRR). In a well-designed difference amplifier, the CMRR is 80 dB. Stated another way, the common-mode gain is 10,000 times smaller than the difference gain. The difference amplifier is the most common input to instrumentation amplifiers and analog-to-digital converter circuits. Bridge-connected transducers have a common-mode voltage that is 100 times the difference voltage, or more. Such transducers require a difference amplifier. The common-mode gain is highly dependent on the matching of the R1 resistors and the R2 resistors. A 1 percent difference in these resistors causes the CMRR to be degraded to 30 or 40 dB.

An instrumentation amplifier is a high-grade difference amplifier. Although there are many implementations of the instrumentation amplifier, the three op-amp circuit is quite common and will illustrate this device. Figure 15.2.27 shows the three op-amp instrumentation amplifier. This is a two-stage amplifier. The first stage is composed of amplifiers X1 and X2 and resistors R1, R2, and R2. The second stage is composed of amplifier X3 and resistors R3, R3, R4, and R4. The CMRR is usually much higher for the instrumentation amplifier, typically 120 dB. The differential-mode gain is also much higher for the instrumentation amplifier. For stability and frequency-response considerations, the difference gain of a normal difference amplifier is usually 10 or less. For the instrumentation amplifier, the difference gain is typically 1,000. In the circuit shown in Fig. 15.2.27, the difference gain of the first stage is given by

    Vinternal/Vdif = (2 × R2 + R1)/R1        (15.2.6)

The difference gain of the second stage is given by

    Voutput/Vinternal = −R4/R3        (15.2.7)

The overall gain for the amplifier is the product of the individual stage gains. In the typical case the gain of the first stage is 100, and for the second stage is 10, giving an overall gain of 1,000. Integrated circuit instrumentation amplifiers are available which include all of the circuitry in Fig. 15.2.27. The resistors are matched so that the circuit designer does not have to contend with component matching. A word about cost is in order. In 1994, high-grade operational amplifiers cost $0.50 for four op-amps in a single IC chip, or about $0.125 for each amplifier. The instrumentation amplifier is somewhat more expensive, but can be obtained for less than $5.00.
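A small sketch evaluating Eqs. (15.2.6) and (15.2.7) for one assumed set of resistor values, chosen so the stage gains come out near the 100 and 10 quoted above; the values are illustrative, not recommended designs.

```python
# Three op-amp instrumentation amplifier gains, Eqs. (15.2.6) and (15.2.7).
# Resistor values below are illustrative assumptions.
R1, R2 = 1.0e3, 49.5e3      # first-stage resistors, ohms
R3, R4 = 10.0e3, 100.0e3    # second-stage resistors, ohms

gain_stage1 = (2 * R2 + R1) / R1        # Eq. (15.2.6)  -> 100
gain_stage2 = -R4 / R3                  # Eq. (15.2.7)  -> -10 (sign inversion)
overall = gain_stage1 * gain_stage2

print(gain_stage1, gain_stage2, overall)   # 100.0 -10.0 -1000.0
```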


Filtering of electronic signals is often required. A filter passes some frequencies and suppresses others. Filters may be classed as low pass, high pass, band pass, or band stop. Figure 15.2.28 shows a low-pass active filter circuit which is a modification of the Sallen-Key circuit. In this circuit, the two resistors labeled R are matched and the two capacitors labeled C are matched. The frequency response of this circuit is given by its transfer function:

    Voutput/Vinput = ωN²/[S² + (ωN/Q)S + ωN²]        (15.2.8)

Fig. 15.2.28 Modified Sallen-Key filter circuit.

Fig. 15.2.29 High-pass filter section.

In this equation, ωN is the resonant frequency of this circuit, in radians per second, and Q is a quality factor that indicates the amount of amplitude increase at the resonant frequency. For the circuit shown in Fig. 15.2.28, ωN is given by

    ωN = 1/(RC)        (15.2.9)

and Q is given by

    Q = 1/[2 − (RB/RA)]        (15.2.10)

The filter shown in Fig. 15.2.28 is a two-pole low-pass filter. Higher-order filters—four-pole, six-pole, etc.—can be constructed by cascading sections of two-pole filters. Thus, a six-pole low-pass filter would consist of three circuits of the configuration shown in the figure. The components R, R, C, C, RA, and RB would vary in each filter section. Continuing with the low-pass filter, there are three common variations in filter response: Butterworth response, Chebychev response, and elliptical response. The Butterworth filter has no ripple in either the pass band or the stop band. The Chebychev filter has equal ripple variations in the pass band but is flat in the stop band, and the elliptical filter has equal ripple variations in both the pass band and the stop band. There is a transition band of frequencies between the pass band and the stop band. The Butterworth filter has the greatest transition band. The Chebychev response has a sharper cutoff frequency characteristic than the Butterworth response, and the elliptical response has the sharpest transition from pass band to stop band. The circuits shown in Figs. 15.2.28 and 15.2.29 can be used to realize Butterworth and Chebychev filters. The elliptical filter requires a more complex circuit. The high-pass filter can be formulated by substituting 1/S for S in Eq. (15.2.8). The circuit configuration for a high-pass filter is shown in Fig. 15.2.29. Notice that only the positions of the Rs and Cs have changed. In general, the values of the Rs, Cs, RA, and RB will change for the high-frequency filter, so it would be inappropriate to design a low-pass filter and simply reverse the positions of the Rs and Cs.
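To illustrate how Eqs. (15.2.9) and (15.2.10) might be used, the sketch below sizes one two-pole low-pass section for an assumed cutoff frequency and a Butterworth (maximally flat) response with Q = 0.707; the chosen capacitor value and the resulting resistor values are illustrative only.

```python
import math

f_cutoff = 1000.0                 # desired resonant (cutoff) frequency, Hz (assumed)
omega_n = 2 * math.pi * f_cutoff  # rad/s

C = 0.01e-6                       # pick a convenient capacitor value, F (assumed)
R = 1.0 / (omega_n * C)           # from Eq. (15.2.9): omega_n = 1/(RC)

Q = 1 / math.sqrt(2)              # Butterworth (maximally flat) two-pole response
RB_over_RA = 2 - 1.0 / Q          # from Eq. (15.2.10): Q = 1/(2 - RB/RA)

print(f"R ~ {R:.0f} ohms, RB/RA ~ {RB_over_RA:.3f}")
```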

Band-pass and band-stop filters can be made by combinations of low-pass and high-pass filter sections. Figure 15.2.30a shows the block diagram configuration of filters used to realize a band-pass filter. The low-pass filter would be designed for the high-frequency transition, and the high-pass filter would be designed for the low-frequency transition. Figure 15.2.30b shows the block diagram configuration for a band-stop filter. For the band-stop filter, the low-pass filter would be designed for the low-frequency transition, and the high-pass filter would be designed for the high-frequency transition. Band-pass and band-stop filters may also be realized by means of special high-Q circuits. A more complete discussion of filter technology can be found in the references at the beginning of this section.

Table 15.2.6 lists some typical linear ICs, most of which contain operational amplifiers with additional circuitry. The voltage comparator is an operational amplifier that compares two input voltages, V1 and V2. Its output voltage is positive when V1 is greater than V2, and negative when V2 is greater than V1. The sample-and-hold circuit samples an analog input voltage at prescribed intervals, which are determined by an input clock pulse. Between clock pulses the circuit holds the sample voltage level. This circuit is useful in converting from an analog voltage to a digital number whose value is proportional to the analog voltage. An analog-to-digital (A/D) converter is a signal-converting device which changes several analog signals into digital signals. It consists of an input difference amplifier, an analog time multiplexer, a sample-and-hold amplifier, a digital decoder, and the necessary logic to interface with a digital computer. The A/D converter is programmable, in that a computer can set up the device for the requisite number of input channels and whether these input channels are differential inputs or single-ended inputs. The A/D converter typically takes eight differential analog input signals and converts them to 10-, 12-, or 16-bit digital signals. Typical A/D converters convert signals at a rate of 200,000 samples per second. Other types of A/D converters, such as flash converters, can convert over 10,000,000 samples per second. A companion circuit to the A/D converter is the digital-to-analog (D/A) converter, which is used to convert from digital signals to analog signals. This is also a programmable device, but in general the D/A converter is less complex than the A/D converter. Voltage regulators and voltage references are electronic circuits which create precision dc voltage sources. The voltage reference is more precise than the voltage regulator. The voltage-controlled oscillator is a circuit which converts from a dc signal to a proportional ac frequency. The output of a voltage-controlled oscillator is usually a rectangular ac wave rather than a sine wave. The NE555 timer/oscillator is a general-purpose timer/oscillator which has been integrated into a single IC chip. It can function as a monostable multivibrator, a free-running multivibrator, or a synchronized multivibrator. It can also be used as a linear ramp generator, or for time delay or sequential timing applications.
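As a small illustration of what the bit counts quoted above mean for resolution, the sketch below computes the step size (one least significant bit) of an A/D converter for an assumed full-scale input range.

```python
def lsb_size(full_scale_volts, bits):
    """Voltage represented by one least-significant bit of an A/D converter."""
    return full_scale_volts / (2 ** bits)

full_scale = 10.0            # assumed full-scale input range, V
for bits in (10, 12, 16):    # word lengths mentioned in the text
    print(f"{bits}-bit converter: 1 LSB ~ {lsb_size(full_scale, bits) * 1e3:.3f} mV")
```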



Fig. 15.2.30 (a) Band-pass and (b) band-stop filters.

Table 15.2.6

Linear Integrated-Circuit Devices

Operational amplifier
Voltage comparator
Sample and hold
Digital-to-analog converter
Analog-to-digital converter
Voltage reference
Voltage regulator
NE555 timer/oscillator
Voltage-controlled oscillator

Table 15.2.7 lists linear ICs that are used in audio, radio, and television circuits. The degree of complexity that can be incorporated in a single device is illustrated by the fact that a complete AM-FM radio circuit is available in a single IC device. The phase-locked loop is a device that is widely utilized for accurate frequency control. This device produces an output frequency that is set by a digital input. It is a highly accurate and stable circuit. This circuit is often used to demodulate FM radio waves.

Table 15.2.7  Audio, Radio, and Television Integrated-Circuit Devices
Audio amplifier
Dolby filter circuit
Intermediate frequency circuit
TV chroma demodulator
Video-IF amplifier-detector
Tone-volume-balance circuit
Phase-locked loop (PLL)
AM-FM radio
Digital tuner

Table 15.2.8 lists linear IC circuits that are used in telecommunications. These circuits include digital circuits within them and/or are used with digital devices. Whether these should be classed as linear ICs or digital ICs may be questioned. Several manufacturers include them in their linear device listings and not with their digital devices, and for this reason they are listed here as linear devices. The radio-control transmitter-encoder and receiver-decoder provide a means of sending up to four control signals on a single radio-control frequency link. Each of the four channels can be either an on-off channel or a pulse-width-modulated (PWM) proportional channel. The pulse-code modulator–coder-decoder (PCM CODEC) is typical of a series of IC devices that have been designed to facilitate the design of digital-switched telephone circuits.

Table 15.2.8  Telecommunication Integrated-Circuit Devices
Radio-control transmitter-encoder
Radio-control receiver-decoder
Pulse-code modulator–coder-decoder (PCM CODEC)
Single-chip programmable signal processor
Touch-tone generators
Modulator-demodulator (modem)

Integrated circuits are normally divided into two classes: linear or analog ICs and digital ICs. There are a few hybrid circuits which have both analog and digital characteristics. Some representative hybrid integrated circuits are listed in Table 15.2.9 and shown in Fig. 15.2.31.

Table 15.2.9  Hybrid Circuits
Superdiode
Limiters
Schmitt trigger circuits
Dead-band circuits
Switched-capacitor filters
Logarithmic amplifiers

In the superdiode, there are two modes of operation. When the input signal is positive, the output of the op-amp is negative, causing diode D2 to conduct and diode D1 to block current flow. When the input is negative, diode D2 blocks and diode D1 conducts. The output of the circuit is zero when the input is negative, and the output is proportional to the input when the input is positive. A normal diode has 0.5 to 1.0 V of forward voltage drop. This circuit is called a superdiode because its switching point is at zero volts. There is a voltage inversion from input to output, but this can be reversed by means of an additional op-amp inverter following the superdiode. The action of the circuit can be reversed by reversing diodes D1 and D2. The limiter circuit, shown in Fig. 15.2.31b, provides linear operation at low output voltages, either positive or negative, while neither diode is conducting. When the input voltage goes sufficiently negative, diode D1 begins to conduct, limiting the negative output of the op-amp. Similarly, when the input goes sufficiently positive, diode D2 begins to conduct, limiting the positive output of the op-amp. The Schmitt trigger circuit, shown in Fig. 15.2.31c, has positive feedback to the op-amp through resistors R1 and R2. If the amplifier output is initially negative, the output will not change until the input becomes as negative as the noninverting terminal of the op-amp. When the input becomes slightly more negative, the op-amp output will suddenly switch from negative to positive due to the positive feedback. The op-amp will remain in this condition until the input becomes sufficiently positive, causing the op-amp to switch to a negative output. The dead-band circuit is similar to the limiter circuit except that for input voltages around zero, there is no output voltage. Above a threshold input voltage, the output voltage is proportional to the input voltage. The logarithmic amplifier relies upon the fact that the voltage drop across a diode causes a logarithmically varying current to flow through the diode. This relationship is remarkably true for current changes of 10^6 to 1. The logarithmic amplifiers can have their outputs added together to effect a multiplication of the input signals.
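As a small numerical sketch of why the superdiode is useful (assumed forward drop of 0.6 V for the plain diode; the sign inversion of the basic circuit is ignored here for clarity):

    import numpy as np

    def plain_diode(v_in, v_f=0.6):
        """Half-wave rectifier with a real diode: output loses the forward drop."""
        return np.maximum(v_in - v_f, 0.0)

    def superdiode(v_in, gain=1.0):
        """Op-amp 'superdiode': switching point at 0 V, no forward-drop error."""
        return gain * np.maximum(v_in, 0.0)

    v = np.linspace(-1.0, 1.0, 9)
    print(plain_diode(v))      # small positive inputs are lost
    print(superdiode(v))       # output proportional to input for all positive inputs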

DIGITAL INTEGRATED CIRCUITS

The basic circuit building block for digital ICs is the gate circuit. A gate is a switching amplifier that is designed to be either on or off. (By contrast, an operational amplifier is a proportional amplifier.) For 5-V logic levels, the gate switches to a 0 whenever its input falls below 0.8 V and to a 1 whenever its input exceeds 2.8 V. This arrangement ensures immunity to spurious noise impulses in both the 0 and the 1 state. Several representative transistor-transistor-logic (TTL) gates are listed in Table 15.2.10. Gates can be combined to form logic devices of two fundamental kinds: combinational and sequential. In combinational logic, the output of a device changes whenever its input conditions change. The basic gate exemplifies this behavior. Table 15.2.10

Digital Integrated-Circuit Devices

Type 54/74*    No. circuits per device    No. inputs per circuit    Function
00             4                          2                         NAND gate
02             4                          2                         NOR gate
04             6                          1                         Inverter
06             6                          1                         Buffer
08             4                          2                         AND gate
10             3                          3                         NAND gate
11             3                          3                         AND gate
13             2                          4                         Schmitt trigger
14             6                          1                         Schmitt trigger
20             2                          4                         NAND gate
21             2                          4                         AND gate
30             1                          8                         NAND gate
74             2                          —                         D flip-flop
76             2                          —                         JK flip-flop
77             4                          —                         Latch
86             4                          2                         EXCLUSIVE OR gate
174            6                          —                         D flip-flop
373            8                          —                         Latch
374            8                          —                         D flip-flop

NOTE: Examples of device numbers are 74LS04, 54L04, 5477, and 74H10. The letters after the series number denote the speed and loading of the device. * 54 series devices are rated for temperatures from −55 to 125°C. 74 series devices are rated for temperatures from 0 to 70°C.

Fig. 15.2.31 Hybrid circuits. (a) Superdiode circuit; (b) limiter circuit; (c) Schmitt trigger circuit.


A number of gates can be interconnected to form a flip-flop circuit. This is a bistable circuit that stays in a particular state, a 0 or a 1 state, until its “clock” input goes to a 1. At this time its output will stay in its present state or change to a new state depending upon its input just prior to the clock pulse. Its output will retain this information until the next time the clock goes to a 1. The flip-flop has memory, because it retains its output from one clock pulse to another. By connecting several flipflops together, several sequential states can be defined permitting the design of a sequential logic circuit. Table 15.2.11 shows three common flip-flops. The truth table, sometimes called a state table, shows the specification for the behavior of each circuit. The present output state of the flip-flop is designated Q(t). The next output state is designated Q(t + 1). In addition to the truth table, the Boolean algebra equations in Table 15.2.11 are another way to describe the behavior of the circuits. The JK flip-flop is the most versatile of these three flip-flops because of its separate J and K inputs. The T flip-flop is called a toggle. When its T input is a 1, its output toggles, from 0 to 1 or from 1 to 0, at each clock pulse. The D flip-flop is called a data cell. The output of the D flip-flop assumes the state of its input at each clock pulse and holds this data until the next clock pulse. The JK flip-flop can be made to function as a T flip-flop by applying the T input to both the J and K input terminals. The JK flip-flop can be made to function as a D flip-flop by applying the data signal to the J input and applying the inverted data signal to the K input. Some common IC flip-flops are listed in Table 15.2.10.
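The behavior summarized above can be checked with a short sketch built directly on the JK characteristic equation Q(t + 1) = JQ̄(t) + K̄Q(t); the T and D forms follow from the input connections described in the text:

    def jk_next_state(j, k, q):
        """Next state of a JK flip-flop: Q(t+1) = J*not(Q) + not(K)*Q."""
        return (j and not q) or (not k and q)

    def t_flip_flop(t, q):
        """T flip-flop realized by tying T to both J and K."""
        return jk_next_state(t, t, q)

    def d_flip_flop(d, q):
        """D flip-flop realized with J = D and K = not D."""
        return jk_next_state(d, not d, q)

    # J=K=1 toggles; J=1,K=0 sets; J=0,K=1 resets; J=K=0 holds
    q = False
    for j, k in [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0)]:
        q = jk_next_state(j, k, q)
        print(int(q))          # prints 1 0 1 0 0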


Table 15.2.11  Flip-Flop Sequential Devices (graphic symbols, inputs, and algebraic functions)
JK flip-flop: inputs J, K, and Clock (with S and R); outputs Q and Q̄; algebraic function Q(t + 1) = JQ̄(t) + K̄Q(t)
T flip-flop: input T and Clock

1/t = a0 + a1 ln Rt + a2 (ln Rt)² + a3 (ln Rt)³

with the constants chosen to fit four calibration points. Often a simpler form is given:

R = R0 exp {b[(1/t) − (1/t0)]}

Typically b varies in the range of 3,000 to 5,000 K. The reference temperature t0 is usually 298 K (25°C, 77°F), and R0 is the resistance at that temperature. The error may be as small as 0.3°C in the range of 0 to 50°C. Thermistors are available in many forms and sizes for use from −196 to 450°C with various tolerances on interchangeability and matching. (See “Catalog of Thermistors,” Thermometrics, Inc.) The AD590 and AD592 integrated circuits (Analog Devices, Inc.) pass a current of 1 µA/K, very nearly proportional to absolute temperature. All these sensors are subject to self-heating error.
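A brief sketch of solving the simpler (beta) form for temperature; the R0 and b values here are assumed example values, not data from this section:

    import math

    def thermistor_temp_c(r_ohms, r0_ohms=10_000.0, beta=3950.0, t0_c=25.0):
        """Solve R = R0*exp{b[(1/T) - (1/T0)]} for absolute T, return degrees C."""
        t0_k = t0_c + 273.15
        inv_t = 1.0 / t0_k + math.log(r_ohms / r0_ohms) / beta
        return 1.0 / inv_t - 273.15

    print(round(thermistor_temp_c(10_000.0), 2))   # 25.0 at the reference point
    print(round(thermistor_temp_c(4_000.0), 1))    # about 47 degrees C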

q = k2·e·A·(T2⁴ − T1⁴)

where λm = wavelength of maximum intensity, µm (nm); q = radiant energy flux, Btu/h (W); A = radiation surface, ft² (m²); e = mean emissivity of the surfaces; T2, T1 = absolute temperatures of radiating and receiving surfaces, respectively, °R (K); k1 = 5,215 µm·°R (2,898 µm·K); k2 = 0.173 × 10⁻⁸ Btu/(h·ft²·°R⁴) [5.73 × 10⁻⁸ W/(m²·K⁴)]. The emissivity depends on the material and form of the surfaces involved (see Sec. 4). Radiation sensors with scanning capability can produce maps, photographs, and television displays showing temperature-distribution patterns. They can operate with resolutions to under 1°C and at temperatures below room temperature.
7. Change in physical or chemical state (Seger cones, Tempilsticks). The temperatures at which substances melt or initiate chemical reaction are often known and reproducible characteristics. Commercial products are available which cover the temperature range from about 120 to 3,600°F (50 to 2,000°C) in intervals ranging from 3 to 70°F (2 to 40°C).
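A short numerical sketch of the radiant-flux relation above, using the k2 value quoted there; the surface area, emissivity, and temperatures are assumed for the example:

    def radiant_flux_btu_per_h(area_ft2, emissivity, t_hot_r, t_cold_r):
        """q = k2*e*A*(T2**4 - T1**4), with k2 = 0.173e-8 Btu/(h*ft^2*R^4)."""
        return 0.173e-8 * emissivity * area_ft2 * (t_hot_r**4 - t_cold_r**4)

    # Assumed: 1 ft^2 surface, e = 0.9, hot surface 1,460 R (1,000 F), receiver 530 R
    print(round(radiant_flux_btu_per_h(1.0, 0.9, 1460.0, 530.0)))   # roughly 7,000 Btu/h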

Table 16.1.1  Polynomial Coefficients for Resistance Temperature Detectors
Polynomial coefficients

Material

Useful

of

range,

A,

B,

C,

D,

E,

accuracy,*

Typical

C

C

C/

C/ 2

C/ 3

C/ 4

C

70 to 0 0 to 150

225.64 234.69

23.30735 25.95508

0.246864

0.00715

conductor

ID R0 ,

Copper 10 @ 25C

9.042

Nickel

120

80 to 320

199.47

1.955336

0.00266

1.88E  6

Platinum DIN/IEC a  0.00385/C

100

200 to 0 0 to 850

241.86 236.06

2.213927 2.215142

0.002867 0.001455

9.8E  6

1.5 1.5 1 1.64E – 8

* For higher accuracy consult the table or equation furnished by the manufacturer of the specific RTD being used. Temperatures per ITS-90, resistances per SI-90.

1 0.5


The temperature-sensing element may be used as a solid which softens and changes shape at the critical temperature, or it may be applied as a paint, crayon, or stick-on label which changes color or surface appearance. For most the change is permanent; for some it is reversible. Liquid crystals are available in sheet and liquid form: these change reversibly through a range of colors over a relatively narrow temperature range. They are suitable for showing surface-temperature patterns in the range 20 to 50°C (68 to 122°F). An often-used temperature device is the mercury-in-glass thermometer. As the temperature increases, the mercury in the bulb expands and rises through a fine capillary in the graduated thermometer stem. Useful range extends from −30 to 900°F (−35 to 500°C). In many applications of the mercury thermometer, the stem is not exposed to the measured temperature; hence a correction is required (except where the thermometer has been calibrated for partial immersion). The recommended formula for the correction K to be added to the thermometer reading is K = 0.00009D(t1 − t2), where D = number of degrees of exposed mercury filament, °F; t1 = thermometer reading, °F; t2 = the temperature at about the middle of the exposed portion of the stem, °F. For Celsius thermometers the constant 0.00009 becomes 0.00016. For industrial applications the thermometer or other sensor is encased in a metal or ceramic protective well and case (Fig. 16.1.20). A threaded union fitting is provided so that the thermometer can be installed in a line or vessel under pressure. Ideally the sensor should have the same


Fig. 16.1.20 Industrial thermometer.

temperature as the fluid into which the well is inserted. However, heat conduction to or from the pipe or vessel wall and radiation heat transfer may also influence the sensor temperature (see ASME PTC 19.3-1974, Temperature Measurement, on well design). An approximation of the conduction error effect is Tsensor − Tfluid = (Twall − Tfluid)E. For a sensor inserted to a distance L − x from the tip of a well of insertion length L, E = cosh[m(L − x)]/cosh mL, where m = (h/kt)^0.5; x and L are in ft (m); h = fluid-to-well conductance, Btu/(h)(ft²)(°F) [J/(h)(m²)(°C)]; k = thermal conductivity of the well-wall material, Btu/(h)(ft)(°F) [J/(h)(m)(°C)]; and t = well-wall thickness, ft (m). Good thermal contact between the sensor and the well wall is assumed. For (L − x)/L = 0.25:

mL    1      2      3      4      5      6      7
E     0.67   0.30   0.13   0.057  0.025  0.012  0.005
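The tabulated values can be checked directly from the conduction-error relation above; the tip ratio (L − x)/L = 0.25 is the condition stated for the table:

    import math

    def well_error_factor(m_l, tip_ratio=0.25):
        """E = cosh[m(L - x)]/cosh(mL), written in terms of mL and (L - x)/L."""
        return math.cosh(m_l * tip_ratio) / math.cosh(m_l)

    # m itself follows from m = (h/(k*t))**0.5 and the insertion length L
    for m_l in range(1, 8):
        print(m_l, round(well_error_factor(m_l), 3))   # reproduces 0.67, 0.30, 0.13, ...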

Radiation effects can be reduced by a polished, low-emissivity surface on the well and by radiation shields around the well. Concern with mercury contamination has made the bimetal thermometer the most commonly used expansion-based temperature measuring device. Differential thermal expansion of a solid is employed in the simple bimetal (used in thermostats) and the bimetallic helix (Fig. 16.1.21). The bimetallic element is made by welding together two strips of metal having different coefficients of expansion. A change in temperature then causes the element to bend or twist an amount proportional to the temperature. A common bimetallic pair consists of invar (iron-nickel alloy) and brass. For control or alarm indications at fixed temperatures, thermometers may be equipped with electrical contacts such that when the temperature matches the contact point, an external relay circuit is energized.

Fig. 16.1.21 Bimetallic temperature gage.

A popular industrial-type instrument employs the deflection of a pressure spring to indicate (or record) the temperature (Fig. 16.1.22). The sensing element is a metal bulb containing some specific gas or liquid. The bulb connects with the pressure spring (in the form of a spiral or helix) through a capillary tube which is usually enclosed in a protective sheath or armor. Increasing temperature causes the fluid in the bulb to expand in volume or increase in pressure. This forces the pressure spring to unwind and move the pen or pointer an appropriate distance upscale.

Fig. 16.1.22 Pressure-spring element.

The bulb fluid may be mercury (mercury system), nitrogen under pressure (gas system), or a volatile liquid (vapor-pressure system). Mercury and gas systems have linear scales; however, they must be compensated to avoid ambient temperature errors. The capillary may range up to 200 ft in length with, however, considerable reduction in speed of response. The use of mercury is being phased out due to environmental and health concerns over the use of the metal. The pneumatic transmitter (Fig. 16.1.23) was at one time the standard for long-distance transmission of temperature. Although its place has been taken by various electronic means, it is still commonly used in many parts of the world, especially in refining and other hydrocarbon processing applications. The bulb is filled with gas under pressure, which acts on the diaphragm. An increase in bulb temperature increases the upward force acting on the main beam, tending to rotate it clockwise. This causes the baffle or flapper to move closer to the nozzle, increasing the nozzle back pressure. This acts on the pilot, producing an increase in output pressure, which increases the force exerted by the feedback bellows. The system returns to equilibrium when the increase in bellows pressure exactly balances the effect of the increased diaphragm pressure. Since the lever ratios are fixed, this results in a direct proportionality between bulb temperature and output air pressure. For precision, compensating elements are built into the instrument to correct for the effects of changes in barometric pressure and ambient temperature.



Fig. 16.1.23 Pneumatic temperature transmitter.

Electrical systems based on the thermocouple or resistance thermometer are particularly applicable where many different temperatures are to be measured, where transmission distances are large, or where high sensitivity and rapid response are required. The thermocouple is used with high temperatures; the resistance thermometer for low temperatures and high-accuracy requirements. The choice of thermocouple depends on the temperature range, desired accuracy, and the nature of the atmosphere to which it is to be exposed. The temperature-voltage relationships for the more common of these are given by the curves of Fig. 16.1.24. Table 16.1.2 gives the recommended temperature limits for each kind of couple. Table 16.1.3 gives polynomials for converting thermocouple millivolts to temperature. The thermocouple voltage is measured by a digital or deflection millivoltmeter or null-balance type of potentiometer. Completion of the thermocouple circuit through the instrument immediately introduces one or more additional junctions. Common practice is to connect the thermocouple (hot junction) to the instrument with special lead wire (which may be of the

Table 16.1.2  Limits of Error on Standard Wires without Selection*†
ANSI symbol‡   °F: −150   °C: −101

Materials and polarities Positive Negative

T E J K N R S

Fig. 16.1.24 Thermocouple voltage-temperature characteristics [reference junction at 32F (0C)].

Cu Ni-Cr Fe Ni-Cr Ni-Cr-Si Pt-13% Rh Pt-10% Rh

Constantan§ Constantan Constantan Ni-Al Ni-SiMn Pt Pt

75 59 2%

32 0

200 93

1.5ºF (0.8ºC) 3ºF (1.7ºC) 4ºF (2.2ºC) 4ºF (2.2ºC) 4ºF (2.2ºC) 3ºF (1.5ºC) 3ºF (1.5ºC)

530 277

600 316 3⁄

700 371

1,000 538

1,400 760

2,300 1,260

2,700 1,482

4% 1⁄

2%

3⁄ % 4 3⁄ % 4 3⁄ % 4 1⁄

4%

1⁄

4%

* Protect copper from oxidation above 600F; iron above 900F. Protect Ni-A1 from reducing atmospheres. Protect platinum from nonreducing atmospheres. Type B (Pt-30% Rh versus Pt-6%) is used up to 3,200F(1,700C). Its standard error is 1/2 percent above 1,470F (800C). † Closer tolerances are obtainable by selection and calibration. Consult makers’ catalogs. Tungsten-rhenium alloys are in use up to 5,000F (2,760C). For cryogenic thermocouples see Sparks et al., Reference Tables for Low-Temperature Thermocouples. Natl. Bur. Stand. Monogr. 124. ‡ Individual wires are designated by the ANSI symbol followed by P or N; thus iron is JP. § Constantan is 55% Cu, 45% Ni. The nickel-chromium and nickel-aluminum alloys are available as Chromel and Alumel, trademarks of Hoskins Mfg. Co.

Table 16.1.3 Range

Polynomial Coefficients for Converting Thermocouple emf to Temperature* Type E

Type J

Type K

Type N

Type S

Type T

mV

0 to 76.373

0 to 42.919

0 to 20.644

0 to 47.513

1.874 to 11.95

0 to 20.872

ºC ºF

0 to 1,000º 32 to 1,832º

0 to 760º 32 to 1,400º

0 to 500º 32 to 932º

0 to 1,300º 32 to 2,372º

250 to 1,200º 482 to 2,192º

0 to 400º 32 to 752º

a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 Maximum deviation

0 1.7057035E  01 2.3301759E  01 6.5435585E  03 7.3562749E  05 1.7896001E  06 8.4036165E  08 1.3735879E  09 1.0629823E  11 3.2447087E  14

0 1.978425E  01 2.001204E  01 1.036969E  02 2.549687E  04 3.585153E  06 5.344285E  08 5.099890E  10

0 2.508355E  01 7.860106E  02 2.503131E  01 8.315270E  02 1.228034E  02 9.804036E  04 4.413030E  05 1.057734E  06 1.052755E  08

0 3.8783277E  01 1.1612344E  00 6.9525655E  02 3.0090077E  03 8.8311584E  05 1.6213839E  06 1.6693362E  08 7.3117540E  11

1.291507177E  01 1.466298863E  02 1.534713402E  01 3.145945973E  00 4.163257839E  01 3.187963771E  02 1.291637500E  03 2.183475087E  05 1.447379511E  07 8.211272125E  09

0.02ºC

0.04ºC

0.05 to  0.04ºC

0.06ºC

0.01ºC

0 2.592800E  01 7.602961E  01 4.637791E  02 2.165394E  03 6.048144E  05 7.293422E  07

0.03ºC

*T(°C) = Σ ai × (mV)^i, summed from i = 0 to n.

All temperatures are ITS-1990 and all voltages are SI-1990 values. Maximum deviation is that from the ITS-1990 tables; thermocouple wire error is additional. Computed temperature deviates greatly outside of given ranges. Consult source for thermocouple types B and R and for other millivolt ranges. SOURCE: NIST Monograph 175, April 1993.



same materials as the thermocouple itself). This assures that the cold junction will be inside the instrument case, where compensation can be effectively applied. Cold junction compensation is typically achieved by measuring the temperature of the thermocouple wire to copper wire junctions or terminals with a resistive or semiconductor thermometer and correcting the measured terminal voltage by a derived equivalent millivolt cold junction value. Figure 16.1.25 shows a digital temperature indicator with correction for different ANSI types of thermocouple voltage to temperature nonlinearities being stored in and applied to the analog-to-digital converter (A/D) by a read-only memory (ROM) chip. Fig. 16.1.27 Optical pyrometer.
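The software form of this cold-junction correction can be sketched as below. The functions simply implement the Table 16.1.3 polynomial form and the millivolt correction described above; the "thermocouple" in the example is an invented one with a constant 0.04 mV/°C sensitivity, so the coefficients and values are purely illustrative, not published thermocouple data:

    def t_from_emf(coeffs, emf_mv):
        """Evaluate T(deg C) = a0 + a1*(mV) + a2*(mV)**2 + ... (Table 16.1.3 form)."""
        return sum(a * emf_mv ** i for i, a in enumerate(coeffs))

    def compensated_temp(coeffs, emf_of_temp_mv, measured_mv, terminal_temp_c):
        """Add the cold-junction equivalent millivolts, then convert to temperature."""
        return t_from_emf(coeffs, measured_mv + emf_of_temp_mv(terminal_temp_c))

    # Hypothetical couple: 0.04 mV/deg C, reference junction at 0 deg C
    coeffs = [0.0, 25.0]                        # inverse relation: T = 25 * mV
    emf_of_temp_mv = lambda t_c: 0.04 * t_c     # direct relation: mV = 0.04 * T
    print(compensated_temp(coeffs, emf_of_temp_mv, 8.0, 25.0))   # 225.0 deg C

In practice the published coefficients for the specific couple type (e.g., the NIST values cited with Table 16.1.3) replace the invented ones above.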

Fig. 16.1.25 Temperature measurement with thermocouple and digital millivoltmeter.

The resistance thermometer employs the same circuitry as described above, with the resistance element (RTD) being placed external to the instrument and the cold junction being omitted (Fig. 16.1.26). Three types of RTD connections are in use: two wire, three wire, and four wire. The two-wire connection makes the measurement sensitive to lead wire temperature changes. The three-wire connection, preferred in industrial applications, eliminates the lead wire effect provided the leads are of the same gage and length, and subject to the same environment. The four-wire arrangement makes no demands on the lead wires and is preferred for scientific measurements.

The light intensity of the filament is kept constant by maintaining a constant current flow. The intensity of the target image is adjusted by positioning the optical wedge until the image intensity appears exactly equal to that of the filament. A scale attached to the wedge is calibrated directly in temperature. The red filter is employed so that the comparison is made at a specific wavelength (color) of light to make the calibration more reproducible. In another type of optical pyrometer, comparison is made by adjusting the current through the filament of the standard lamp. Here, an ammeter in series is calibrated to read temperature directly. Automatic operation may be had by comparing filament with image intensities with a pair of photoelectric cells arranged in a bridge network. A difference in intensity produces a voltage, which is amplified to drive the slide wire or optical wedge in the direction to restore zero difference. The radiation pyrometer is normally applied to temperature measurements above 1,000F. Basically, there is no upper limit; however, the lower limit is determined by the sensitivity and cold-junction compensation of the instrument. It has been used down to almost room temperature. A common type of radiation receiver is shown in Fig. 16.1.28. A lens focuses the radiation onto a thermal sensing element. The temperature rise of this element depends on the total radiation received and the

Fig. 16.1.28 Radiation pyrometer.

Fig. 16.1.26 Three-wire resistance thermometer with self-balancing potentiometer recorder.

The resistance bulb consists of a copper or platinum wire coil sealed in a protective metal tube. The thermistor has a very large temperature coefficient of resistance and may be substituted in low-accuracy, lowcost applications. By use of a selector switch, any number of temperatures may be measured with the same instrument. The switch connects in order each thermocouple (or resistance bulb) to the potentiometer (or bridge circuit) or digital voltmeter. When balance is achieved, the recorder prints the temperature value, then the switch advances on to the next position. Optical pyrometers are applied to high-temperature measurement in the range 1,000 to 5,000F (540 to 2,760C). One type is shown in Fig. 16.1.27. The surface whose temperature is to be measured (target) is focused by the lens onto the filament of a calibrated tungsten lamp.

conduction of heat away from the element. The radiation relates to the temperature of the target; the conduction depends on the temperature of the pyrometer housing. In normal applications the latter factor is not very great; however, for improved accuracy a compensating coil is added to the circuit. The sensing element may be a thermopile, vacuum thermocouple, or bolometer. The thermopile consists of a number of thermocouples connected in series, arranged so that all the hot junctions lie in the field of the incoming radiation; all of the cold junctions are in thermal contact with the pyrometer housing so that they remain at ambient temperature. The vacuum thermocouple is a single thermocouple whose hot junction is enclosed in an evacuated glass envelope. The bolometer consists of a very thin strip of blackened nickel or platinum foil which responds to temperature in the same manner as the resistance thermometer. The thermal sensing element is connected to a potentiometer or bridge network of the same type as described for the self-balance thermocouple and resistance-thermometer instruments. Because of the nature of the radiation law, the scale is nonlinear. Accuracy of the optical- and radiation-type pyrometers depends on: 1. Emissivity of the surface being sighted on. For closed furnace applications, blackbody conditions can be assumed (emissivity  1). For other applications corrections for the actual emissivity of the surface must be made (correction tables are available for each pyrometer model).



Multiple color or wavelength sensing is used to reduce sensitivity to hot object emissivity. For measuring hot fluids, a target tube immersed in the fluid provides a target of known emissivity. 2. Radiation absorption between target and instrument. Smoke, gases, and glass lenses absorb some of the radiation and reduce the incoming signal. Use of an enclosed (or purged) target tube or direct calibration will correct this. 3. Focusing of the target on the sensing element.

MEASUREMENT OF FLUID FLOW RATE (See also Secs. 3 and 4.)

Flow is expressed in volumetric or mass units per unit time. Thus gases are generally measured in ft³/min (m³/min) or ft³/h (m³/h), steam in lb/h (kg/h), and liquid in gal/min (L/min) or gal/h (L/h). Conversion between volumetric flow Q and mass flow m is given by m = KρQ, where ρ = density of the fluid and K is a constant depending on the units of m, Q, and ρ. Flow rate can be measured directly by attaching a rate device to a volumetric meter of the types previously described, e.g., a tachometer connected to the rotating shaft of the nutating-disk meter (Fig. 16.1.13). There are many kinds of flow instruments that serve in different applications. Depending on the application, the engineer may choose from a dozen different technologies. Issues such as accuracy, repeatability, and cost, as well as suitability for the service, make selecting a flowmeter a demanding exercise. The basic flowmeter technologies include:
Positive displacement
Differential pressure
Variable area and rotameter
Turbine
Electromagnetic
Ultrasonic
Coriolis mass flowmeter
Vortex shedding and other fluidic
Thermal dispersion
Open-channel primary elements

Fig. 16.1.29 Orifice plate and differential-pressure transmitter (flange taps upstream and at the vena contracta; the diaphragm and solid-state transmitter send an analog or digital signal to the control system).

Because of their use in domestic water systems worldwide, positive displacement meters (nutating disk, rotary piston, multijet) are the most widely used method of measurement of flow rate and total. They are, as the name implies, a means of measuring volumetric flow rate directly, one small volume at a time. Positive displacement meters are limited in the materials of construction, pressure, amount of air entrained, and the amount and size of solids they can pass. They must be compensated for temperature and viscosity changes. Typically, they are used for totalization of clean fluids from water to petroleum byproducts. Differential pressure flowmeters operate by the conversion of fluid velocity to pressure (head). Thus, if the fluid is forced to change its velocity from V1 to V2, its pressure will change from p1 to p2 according to the equation (neglecting friction, expansion, and turbulence effects):

(V2² − V1²)/2gc = (p1 − p2)/ρ    (16.1.1)

where g = acceleration due to gravity, ρ = fluid density, and gc = 32.174 lbm·ft/(lbf)(s²) [1.0 kg·m/(N)(s²)]. Caution: If the flow pulsates, the average value of p1 − p2 will be greater than that for steady flow of the same average flow. See “ASME Pipeline Flowmeters” and “Pitot Tubes” in Sec. 3 for coverage of venturi tubes, flow nozzles, compressible flow, orifice meters, ASME orifices, and Pitot tubes. The tabulated orifice coefficients apply only for straight pipe upstream and downstream from the orifice. In most cases, satisfactory results are obtained if there are no fittings closer than 25 pipe diameters upstream and 5 diameters downstream from the orifice. The upstream limitation can be reduced a bit by employing straightening vanes. Reciprocating pumps in the line may introduce serious errors and require special efforts for their correction.
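A short numerical sketch of Eq. (16.1.1) in USCS units (the pressure drop, density, and approach velocity are assumed for the example; no discharge coefficient is applied):

    import math

    GC = 32.174          # lbm*ft/(lbf*s^2)

    def downstream_velocity(v1_fps, dp_lbf_ft2, rho_lbm_ft3):
        """Solve Eq. (16.1.1) for V2, friction and expansion neglected."""
        return math.sqrt(v1_fps ** 2 + 2.0 * GC * dp_lbf_ft2 / rho_lbm_ft3)

    # Water (rho about 62.4 lbm/ft^3), 1 psi = 144 lbf/ft^2 drop, negligible V1
    print(round(downstream_velocity(0.0, 144.0, 62.4), 1))   # about 12.2 ft/s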

A wide variety of differential pressure meters is available for measuring the orifice (or other primary element) pressure drop. Figure 16.1.29 shows an orifice plate and differential-pressure flow transmitter. An electronic device is shown, but in relatively old plants, and in some very special cases in the refinery industry, pneumatic devices still are present. The differential across the orifice plate produces a pressure differential on either side of the diaphragm, thus causing an electrical signal to be transmitted. Modern differential-pressure flow transmitters generally include a microprocessor, which permits temperature compensation, pressure compensation, and the conversion of the differential pressure (which varies as the square of the flow) to a linear flow rate. The absolute accuracy of a differential pressure flow device is fixed by the accuracy of the primary device.

The meters described thus far are termed variable-head because the pressure drop varies with the flow, orifice ratio being fixed. In contrast, the variable-area meter maintains a constant pressure differential but varies the orifice area with flow. The rotameter (Fig. 16.1.30) consists of a float positioned inside a tapered tube by action of the fluid flowing up through the tube. The flow restriction is now the annular area between the float and the tube (area increases as the float rises). The pressure differential is fixed, determined by the weight of the float and the buoyant forces. To satisfy


Fig. 16.1.30 Rotameter.

the volumetric flow equation then, the annular area (hence the float level) must increase with flow rate. Thus the rotameter may be calibrated for direct flow reading by etching an appropriate scale on the surface of the glass tube. The calibration depends on the float dimensions, tube taper, and fluid properties. The equation for volumetric flow is

Q = CR(AT − AF)[2gVF(ρF − ρ)/(ρAF)]^1/2


where AT = cross-sectional area of tube (at float position), AF = effective float area, VF = float volume, ρF = float density, ρ = fluid density, and CR = rotameter coefficient (usually between 0.6 and 0.8). The coefficient varies with the fluid viscosity; however, special float designs are available which are relatively insensitive to viscosity effects. Also, fluid density compensation can be obtained. The rotameter reading may be transmitted for recording and control purposes by affixing to the float a stem which connects to an armature or permanent magnet. The armature forms part of an inductance bridge whose signal is amplified electronically to drive a pen-positioning motor. For pneumatic transmission, the magnet provides magnetic coupling to a pneumatic motion transmitter external to the rotameter tube. This generates an air pressure proportional to the height of the float. The area meter is similar to the rotameter in operation. Flow area is varied by motion of a piston in a straight cylinder with openings cut into the wall. The piston position is transmitted as above by an armature and inductance bridge circuit. Turbine, propeller, and paddlewheel flowmeters operate by converting velocity into rotational energy by means of a flighted rotor. These flowmeters are primarily used to measure water and water-based liquids, and come in a very wide variety of designs, styles, and materials of construction. Some of these devices are mechanical, using gears and cams to convert the rotation of the rotor to increment an electromechanical counter. Others use solid-state pickups such as magnetic induction coils and Hall effect sensors to sense the speed of rotation of the rotor.
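A brief sketch of the rotameter equation with the symbols just defined; all of the dimensions and densities below are assumed example values in SI units:

    import math

    def rotameter_flow(c_r, a_tube, a_float, v_float, rho_float, rho_fluid, g=9.81):
        """Q = CR*(AT - AF)*[2*g*VF*(rhoF - rho)/(rho*AF)]**0.5, SI units."""
        return c_r * (a_tube - a_float) * math.sqrt(
            2.0 * g * v_float * (rho_float - rho_fluid) / (rho_fluid * a_float))

    # Assumed: stainless float (8,000 kg/m^3) in water, small glass tube
    q = rotameter_flow(c_r=0.7, a_tube=3.0e-4, a_float=2.0e-4,
                       v_float=1.0e-6, rho_float=8000.0, rho_fluid=1000.0)
    print(round(q * 3600 * 1000), "L/h")      # roughly 200 L/h for these values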

Fig. 16.1.31 Propeller-type flowmeter.

These flowmeters are limited by the velocity of the fluid (they can be overdriven, with disastrous results), by the size and composition of solids they can pass, and by temperature and viscosity changes, and they are subject to significant wear in operation. Some turbine meters can be calibrated to be very highly accurate and are often used to meter expensive additives and blends. One of the most widely used flowmeter types is the electromagnetic flowmeter, often called a magmeter. The device operates by means of Faraday’s law, converting velocity of a conductive fluid to a proportional current signal by passing the fluid at right angles to an electromagnetic field. Magmeters are inherently the most accurate of the velocimetric flow technologies, because the current signal is directly proportional to the average velocity of the conductive fluid. Magmeters are therefore restricted to conductive fluids (generally above 5 microsiemens). Magmeters are supplied in a very wide range of sizes (generally from 1 to 2,000 mm), and the larger meters are among the most economical devices. Magmeters also have the advantage that they can be made out of a wide variety of materials, including ones suitable for ultrapure service and highly corrosive or abrasive service as well. Their final large advantage over other flowmeter types is that they are obstructionless, having no moving parts in the flow stream, and therefore do not increase the pressure drop to the system over the pressure drop of an equivalent length of pipe. Four different types of flowmeter using ultrasonic technology are in common use. These are Doppler flowmeters, transit time flowmeters, sonar flowmeters, and ultrasonic level devices used in conjunction with open-channel primary flow devices such as weirs and flumes. Doppler flowmeters operate by means of the Doppler effect. A signal of known frequency (typically 660 kHz) is beamed into a pipe. The


return echo is frequency-shifted proportionally to the velocity of the fluid in the pipe. Because homogeneous fluids generally do not reflect the signal well, Doppler flowmeters are restricted to fluids with relatively large quantities of entrained particulate (usually a minimum of 100 mg/L of plus-100-µm solids). They are commonly used in mining, dredging, and wastewater sludge applications.

Transit time flowmeters operate by beaming an ultrasonic signal into the pipe in the direction of flow, and a duplicate signal downstream. The difference in time for the transmitted signal to reach the receiver between the upstream and downstream signals is proportional to the velocity of the fluid. The difference in transit time is 2vl/(c² − v²), where l is the path length. For v ≪ c, the factor (c² − v²) is nearly constant. Like magmeters, transit time flowmeters have no pressure drop. Transit time flowmeters can be provided in similar sizes to magmeters, and are used in all types of fluids, except slurries. Transit time flowmeters can be used in gas flow measurement. Increasing the number of paths of signal transmission can increase the accuracy of the flowmeter, and transit time flowmeters are increasingly used for transmission and custody transfer of petroleum products and gas.

Sonar flowmeters are acoustic, rather than strictly ultrasonic, and operate by “listening” to the sonic properties of the fluid and determining velocity from the sonic profile. Ultrasonic open-channel flowmeters are actually ultrasonic level transmitters used at the measurement point of a flume or weir, and supplied with electronic conversion from measured height to flow rate. Because of their noncontacting nature, ultrasonic open-channel flowmeters have become the standard technology for this measurement, replacing floats and other mechanical devices. The limiting accuracy of these devices is not the accuracy of the sensor, but rather the accuracy of the primary device.

Coriolis mass flowmeters are now common in many process industries. Generally available from fractional inch to 4 in (100 mm), at least one manufacturer produces up to 12-in sizes. Using the principle of measurement of the Coriolis force, these devices natively measure mass flow, and can be used to compute volumetric flow rate and total, and fluid density. Recent advances in Coriolis mass flowmeter design have permitted the device to also be used for measurement of gases.

Another extremely common measurement device is the fluidic flowmeter, of which the best known examples are the vortex-shedding, the vortex-precession (or swirlmeter), and the Coanda effect flowmeter. The vortex-shedding flowmeter operates by sensing in a variety of ways the vortices shed by a bluff body in the flow stream. The frequency of the shed vortices is proportional to the fluid velocity. The swirlmeter, an ancestor of the vortex-shedder, uses the combination of a stationary turbine rotor to impart a twist and a flow restriction to produce a standing wave, the frequency of which is proportional to velocity. The Coanda effect flowmeter uses the Coanda effect to produce a hydraulic switch. The frequency of the switching operation is proportional to the velocity of the fluid. Vortex-shedding flowmeters and swirlmeters are commonly used in chemical and steam applications, while Coanda effect flowmeters are normally found in district heating and gas applications in relatively small sizes.
Vortex-shedding flowmeters are limited in turndown at low velocities, and should not be used in applications with large-diameter solids such as wastewater or mining. Thermal dispersion flowmeters operate by measuring the temperature differential caused by the passage of a fluid between two sensors. One sensor is heated to a constant temperature, and imparts heat to the fluid. The other sensor measures the temperature of the fluid. The temperature differential produces a power demand on the heater circuit that is proportional to the mass flow of the fluid. Thermal dispersion flowmeters are commonly used in gas flow measurement, but can also be used in some liquid flow measurements, generally as switches. These devices can be extremely accurate, and at least one manufacturer produces ultralow-flow thermal dispersion flowmeters that measure in the millilitre per hour range. The most common open-channel primary flow elements in current usage are the v-notch and rectangular weirs, and the Parshall and



Palmer-Bowlus flumes. All of these devices produce a flow restriction

that raises the fluid behind the restriction to a predictable height. The change in height H produces a measurement of flow rate by the formula Q = KH^n, where K is a constant related to flow rate units and n is a constant related to the design of the specific primary element. The constant n is often written as a fraction, as in 3/2. Other forms of primary elements for open-channel flow exist, and sometimes the Manning formula and its derivatives are used to measure the flow in open conduits without a primary device. Manning's formula is v = (K/n)R^(2/3)S^(1/2) (English units) [SI units], where v = fluid velocity (ft/s) or [m/s], K = 1.486 (English), K = 1.00 [SI], R = hydraulic radius (ft) or [m] = A/Pw, A = cross-sectional area of flow (ft²) or [m²], Pw = wetted perimeter (ft) or [m], S = channel slope, and n = Manning's roughness coefficient. Then Q = vA. Most flowmeters measure rate of flow. To measure the total quantity of fluid flowing during a specific time interval, it is necessary to integrate the flow rate over that interval. The integration, prior to the advent of microprocessor-based flow transmitters, was often done manually, or with the use of specially shaped cams. Almost all flow transmitters now integrate mathematically, and display total flow on an electronic or electromechanical register.
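A brief sketch of Manning's formula and Q = vA in SI units; the channel dimensions, slope, and roughness coefficient are assumed example values:

    def manning_velocity(area, wetted_perimeter, slope, n, si=True):
        """v = (K/n) * R**(2/3) * S**(1/2), with R = A/Pw; K = 1.00 (SI) or 1.486."""
        k = 1.0 if si else 1.486
        r = area / wetted_perimeter
        return (k / n) * r ** (2.0 / 3.0) * slope ** 0.5

    # Assumed: rectangular channel 2 m wide, 0.5 m deep, slope 0.001, n = 0.013
    a, pw = 2.0 * 0.5, 2.0 + 2 * 0.5
    v = manning_velocity(a, pw, 0.001, 0.013)
    print(round(v, 2), "m/s;", round(v * a, 2), "m3/s")   # velocity and Q = vA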

POWER MEASUREMENT

Power is defined as the rate of doing work. Common units are the horsepower and the kilowatt: 1 hp = 33,000 ft·lb/min = 0.746 kW. The power input to a rotating machine in hp (W) = 2πnT/k, where n = r/min of the shaft where the torque T is measured in lbf·ft (N·m), and k = 33,000 ft·lbf/(hp·min) [60 N·m/(W·min)]. The same equation applies to the power output of an engine or motor, where n and T refer to the output shaft. Mechanical power-measuring devices (dynamometers) are of two types: (1) those absorbing the power and dissipating it as heat and (2) those transmitting the measured power. As indicated by the above equation, two measurements are involved: shaft speed and torque. The speed is measured directly by means of a tachometer. Torque is usually measured by balancing against weights applied to a fixed lever arm; however, other force-measuring methods are also used. In the transmission dynamometer, the torque is measured by means of strain-gage elements bonded to the transmission shaft. There are several kinds of absorption dynamometers. The Prony brake applies a friction load to the output shaft by means of wood blocks, flexible band, or other friction surface. The fan brake absorbs power by “fan” action of rotating plates on surrounding air. The water brake acts as an inefficient centrifugal pump to convert mechanical energy into heat. The pump casing is mounted on antifriction bearings so that the developed turning moment can be measured. In the magnetic-drag or eddy-current brake, rotation of a metal disk in a magnetic field induces eddy currents in the disk which dissipate as heat. The field assembly is mounted in bearings in order to measure the torque. One type of Prony brake is illustrated in Fig. 16.1.32. The torque developed is given by L(W − W0), where L is the length of the brake arm, ft; W and W0 are the scale loads with the brake operating and with the brake free, respectively. The brake horsepower then equals 2πnL(W − W0)/33,000, where n is shaft speed, r/min.
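The Prony-brake relation above is easily evaluated; the speed, arm length, and scale loads below are assumed for the example:

    import math

    def brake_horsepower(n_rpm, arm_ft, w_lbf, w0_lbf):
        """bhp = 2*pi*n*L*(W - W0)/33,000 for the Prony brake of Fig. 16.1.32."""
        return 2.0 * math.pi * n_rpm * arm_ft * (w_lbf - w0_lbf) / 33000.0

    # Assumed: 1,800 r/min, 2-ft brake arm, 40-lb scale load, 2.5-lb tare
    print(round(brake_horsepower(1800.0, 2.0, 40.0, 2.5), 1))   # about 25.7 hp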

range of operating conditions and plotted. Mechanical power measurement can then be made by measuring the electrical power input (or output) to the machine. In the electric-cradle dynamometer, the motor or generator stator is mounted in trunnion bearings so that the torque can be measured by suitable scales. The engine indicator is a device for plotting cylinder pressure as a function of piston (or volume) displacement. The resulting p-v diagram (Fig. 16.1.33) provides both a measure of the work done in a reciprocating engine, pump, or compressor and a means for analyzing its performance (see Secs. 4, 9, and 14). If Ad is the area inside the closed curve drawn by the indicator, then the indicated horsepower for the cylinder under test = KnApAd, where K is a proportionality factor determined by the scale factors of the indicator diagram, n = engine speed, r/min, and Ap = piston area.

Fig. 16.1.33 Indicator diagram.

Completely mechanical indicators can be used only for low-speed machines. They have largely been superseded by electrical transducers using strain gages, variable capacitance, piezoresistive, and piezoelectric principles which are suitable for high-speed as well as low-speed pressure changes (the piezoelectric principle has low-speed limitations). The usual diagram is produced on an oscilloscope display as pressure vs. time, with a marker to indicate some reference event such as spark timing or top dead center. Special transducers can be coupled to a crank or cam shaft to give an electrical signal representing piston motion so that a p-v diagram can be shown on an oscilloscope.

ELECTRICAL MEASUREMENTS (See also Sec. 15.)

Electrical measurements serve two purposes: (1) to measure the electrical quantities themselves, e.g., line voltage, power consumption, and (2) to measure other physical quantities which have been converted into electrical variables, e.g., temperature measurement in terms of thermocouple voltage. In general, there is a sharp distinction between ac and dc devices used in measurements. Consequently, it is often desirable to transform an ac signal to an equivalent dc value, and vice versa. An ac signal is converted to dc (rectified) by use of selenium rectifiers, silicon or germanium diodes, or electron-tube diodes. Full-wave rectification is accomplished by the diode bridge, shown in Fig. 16.1.34. The rectified signal may be passed through one or more low-pass filter stages to smooth the waveform to its average value. Similarly, there are many ways of modulating a dc signal (converting it to alternating current). The most common method used in instrument applications is a solidstate oscillator.

Fig. 16.1.32 Prony brake.

In addition to eddy-current brakes, electric dynamometers include calibrated generators and motors and cradle-mounted generators and motors. In calibrated machines, the efficiency is determined over a

Fig. 16.1.34 Full-wave rectifier.


The galvanometer (Fig. 16.1.35), recently supplanted by the directreading digital voltmeter (DVM), is basic to dc measurement. The input signal is applied across a coil mounted in jeweled bearings or on a tautband suspension so that it is free to rotate between the poles of a

Fig. 16.1.35 D’Arsonval galvanometer.

permanent magnet. Current in the coil produces a magnetic moment which tends to rotate the coil. The rotation is limited, however, by the restraining torque of the hairsprings. The resulting deflection of the coil θ is proportional to the current I:

θ = (NBWL/K)I

where N = number of turns in coil; W, L = coil width and length, respectively; B = magnetic field intensity; K = spring constant of the hairsprings. Galvanometer deflection is indicated by a balanced pointer attached to the coil. In very sensitive elements, the pointer is replaced by a mirror reflecting a spot of light onto a ground-glass scale; the bearings and hairspring are replaced by a torsion-wire suspension. The galvanometer can be converted into a dc voltmeter, ammeter, or ohmmeter by application of Ohm's law, IR = E, where I = current, A; E = electrical potential, V; and R = resistance, Ω. For a voltmeter, a fixed resistance R is placed in series with the galvanometer (Fig. 16.1.36a). The current i through the galvanometer is proportional to the applied voltage E: i = E/(r + R), where r = coil resistance. Different voltage ranges are obtained by changing the series resistance. An ammeter is produced by placing the resistance in parallel with the galvanometer or DVM (Fig. 16.1.36b). The current then divides between the galvanometer coil or DVM and the resistor in inverse ratio to their resistance values (r and R, respectively); thus, i = IR/(r + R), where i = current through coil and I = total current to be measured. Different current ranges are obtained by using different shunt resistances.
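A short sketch of the range calculations implied by the two relations just given; the 50-µA, 2-kΩ movement and the full-scale ranges are assumed example values:

    def voltmeter_series_r(e_full_scale, i_coil_fs, r_coil):
        """Series resistance so that i = E/(r + R) reaches full scale at E_fs."""
        return e_full_scale / i_coil_fs - r_coil

    def ammeter_shunt_r(i_full_scale, i_coil_fs, r_coil):
        """Shunt resistance from i = I*R/(r + R), solved for R at full scale."""
        return i_coil_fs * r_coil / (i_full_scale - i_coil_fs)

    # Assumed movement: 50 uA full scale, 2,000 ohm coil
    print(voltmeter_series_r(10.0, 50e-6, 2000.0))   # 198,000 ohm for a 10-V range
    print(ammeter_shunt_r(1.0, 50e-6, 2000.0))       # about 0.1 ohm for a 1-A range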

Fig. 16.1.36 (a) Voltmeter; (b) ammeter.

The common ohmmeter consists of a battery, a galvanometer with a shunt rheostat, and resistance in series totaling Ri. The shunt is adjusted to give a full-scale (0 Ω) reading with the test terminals shorted. When an unknown resistance R is connected, the deflection is Ri/(Ri + R) fraction of full scale. The scale is calibrated to read R directly. A half-scale deflection indicates R = Ri. Alternatively, the galvanometer is connected to read the voltage drop across the unknown R while a known current flows through it. This principle is used for low-value resistances and in digital ohmmeters. Digital instruments are available for all these applications and often offer higher resolution and accuracy with less circuit loading. Fluctuating readings are difficult to follow, however. Alternating current and voltage must be measured by special means. A dc instrument with a rectifier input is commonly used in applications requiring high input impedance and wide frequency range. For precise measurement at power-line frequencies, the electrodynamic instrument is used. This is similar to the galvanometer except that the permanent magnet is replaced by an electromagnet. The movable coil and field coils are connected in series; hence they respond simultaneously to the same current and voltage alternations. The pointer deflection is proportional to the square of the input signal. The moving-iron-type instruments consist of a soft-iron vane or armature which moves in response to current flowing through a stationary coil. The pointer is attached to the iron to indicate the deflection on a calibrated nonlinear scale. For measuring at very high frequencies, the thermocouple voltmeter or ammeter is used. This is based on the heating effect of the current passing through a fixed resistance R. Heat is liberated at the rate of E²/R or I²R W. DC electrical power is the product of the current through the load and the voltage across the load. Thus it can be simply measured using a voltmeter and ammeter. AC power is directly indicated by the wattmeter, which is similar to the electrodynamic instrument described above. Here the field coils are connected in series with the load, and the movable coil is connected across the load (to measure its voltage). The deflection of the movable coil is then proportional to the effective load power. Precise voltage measurement (direct current) can be made by balancing the unknown voltage against a measured fraction of a known reference voltage with a potentiometer (Fig. 16.1.9). Balance is indicated by means of a sensitive current detector placed in series with the unknown voltage. The potentiometer is calibrated for angular position vs. fractional voltage output. Accuracies to 0.05 percent are attainable, dependent on the linearity of the potentiometer and the accuracy of the reference source. The reference standard may be a Weston standard cell or a regulated voltage supply (based on diode characteristics). The balance detector may be a galvanometer or electronic amplifier. Precision resistance and general impedance measurements are made with bridge circuits (Fig. 16.1.37) which are adjusted until no signal is detected by the null detector (bridge is balanced). Then Z1Z3 = Z2Z4. The basic Wheatstone bridge is used for resistance measurement where all the impedances (Z's) are resistances (R's). If R1 is to be measured, R1 = R2R4/R3 when balanced. A sensitive galvanometer for the null detector and dc voltage excitation is usual. All of R2, R3, R4 must be calibrated, and some adjustable. For general impedance measurement, ac voltage excitation of suitable frequency is used. The null detector may be a sensitive ac meter, oscilloscope, or, for audio frequencies, simple earphones. The basic balance equation is still valid, but it now requires also that the sum of the phase angles of Z1 and Z3 equal the sum of the phase angles of Z2 and Z4. As an example, if Z1 is a capacitor, the bridge can be balanced if Z2 is a known capacitor while Z3 and Z4 are resistances. The phase-angle condition is met, and Z1Z3 = Z2Z4 becomes (1/2πfC1)R3 = (1/2πfC2)R4, and C1 = C2R3/R4. Variations on the basic


Fig. 16.1.37 Impedance bridge.



principle include the Kelvin bridge for measurement of low resistance, and the Mueller bridge for platinum resistance thermometers. Voltage measurement requires a meter of substantially higher impedance than the impedance of the source being measured. The vacuumtube cathode follower and the field-effect transistor are suitable for high-impedance inputs. The following circuitry may be a simple amplifier to drive a pointer-type meter, or may use a digital technique to produce a digital output and display. Digital counting circuits are capable of great precision and are widely adapted to measurements of time, frequency, voltage, and resistance. Transducers are available to convert temperature, pressure, flow, length and other variables into signals suitable for these instruments. The charge amplifier is an example of an operational amplifier application (Fig. 16.1.38). It is used for outputs of piezoelectric transducers in which the output is a charge proportional to input force or other input converted to a force. Several capacitors switchable across the feedback path provide a range of full-scale values. The output is a voltage.

Fig. 16.1.38 Charge amplifier application.

The cathode-ray oscilloscope (Fig. 16.1.39) is an extremely useful and versatile device characterized by high input impedance and wide frequency range. An electron beam is focused on the phosphor-coated face of the cathode-ray tube, producing a visible spot of light at the point of impingement. The beam is deflected by applying voltages to vertical

Fig. 16.1.39 Cathode-ray tube.

and horizontal deflector plates. Thus, the relationship between two varying voltages can be observed by applying them to the vertical and horizontal plates. The horizontal axis is commonly used for a linear time base generated by an internal sawtooth-wave generator. Virtually any desired sweep speed is obtainable as a calibrated sweep. Sweeps which change value part way across the screen are available to provide localized time magnification. As an alternative to the time base, any arbitrary voltage can be applied to drive the horizontal axis. The vertical axis is usually used to display a dependent variable voltage. Dualbeam and dual-trace instruments show two waveforms simultaneously. Special long-persistence and storage screens can hold transient waveforms for from seconds to hours. Greater versatility and unique capabilities are afforded by use of digital-storage oscilloscope. Each input signal is sampled, digitized, and stored in a first-in–first-out memory. Since a record of the recent signal is in memory when a trigger pulse is received, the timing of the end of storing new data into memory determines how much of the stored signal was before, and how much after, the trigger. Unlike storage screens, the stored signal can be amplified and shifted on the screen for detailed analysis, accompanied by numerical display of voltage and time for any point. Care must be taken that enough samples are taken in any waveform; otherwise aliasing results in a false view of the waveform.

The stored data can be processed mathematically in the oscilloscope or transferred to a computer for further study. Accessories for microcomputers allow them to function as digital oscilloscopes and other specialized tasks. VELOCITY AND ACCELERATION MEASUREMENT

Velocity or speed is the time rate of change of displacement. Consequently, if the displacement measuring device provides an output signal which is a continuous (and smooth) function of time, the velocity can be measured by differentiating this signal either graphically or by use of a differentiating circuit. The accuracy may be very limited by noise (high-frequency fluctuations), however. More commonly, the output of an accelerometer is integrated to yield the velocity of the moving member. Average speed over a time interval can be determined by measuring the time required for the moving body to pass two fixed points a known distance apart. Here photoelectric or other rapid sensing devices may be used to trigger the start and stop of the timer. Rotational speed may be similarly measured by counting the number of rotations in a fixed time interval. The tachometer provides a direct measure of angular velocity. One form is essentially a small permanent-magnet-type generator coupled to the rotating element; the voltage induced in the armature coil is directly proportional to the speed. The principle is also extended to rectilinear motions (restricted to small displacements) by using a straight coil moving in a fixed magnetic field. Angular velocity can also be measured by magnetic drag-cup and centrifugal-force devices (flyball governor). The force may be balanced against a spring with the resulting deflection calibrated in terms of the shaft speed. Alternatively, the force may be balanced against the air pressure generated by a pneumatic nozzle-baffle assembly (similar to Fig. 16.1.23). Vibration velocity pickups may use a coil which moves relative to a magnet. The voltage generated in the coil has the same frequency as the vibration and, for sine motion, a magnitude proportional to the product of vibration frequency and amplitude. Vibration acceleration pickups commonly use strain-gage, piezoresistive, or piezoelectric elements to sense a force F  Ma/gc . The maximum usable frequency of an accelerometer is about one-fifth of the pickup’s natural frequency (see Sec. 3.4). The minimum usable frequency depends on the type of pickup and the associated circuitry. The output of an accelerometer can be integrated to obtain a velocity signal; a velocity signal can be integrated to obtain a displacement signal. The operational amplifier is a versatile element which can be connected as an integrator for this use. Holography is being applied to the study of surface vibration patterns. MEASUREMENT OF PHYSICAL AND CHEMICAL PROPERTIES

MEASUREMENT OF PHYSICAL AND CHEMICAL PROPERTIES

Physical and chemical measurements are important in the control of product quality and composition. In the case of manufactured items, such properties as color, hardness, surface roughness, etc., are of interest. Color is measured by means of a colorimeter, which provides comparison with color standards, or by means of a spectrophotometer, which analyzes the color spectrum. The Brinell and Rockwell testers measure surface hardness in terms of the depth of penetration of a hardened steel ball or special stylus. Testing machines with strain-gage elements provide measurement of the strength and elastic properties of materials. Profilometers are used to measure surface characteristics. In one type, the surface contour is magnified optically and the image projected onto a screen or viewer; in another, a stylus is employed to translate the surface irregularities into an electrical signal which may be recorded in the form of a highly magnified profile of the surface or presented as an averaged roughness-factor reading. For liquids, attributes such as density, viscosity, melting point, boiling point, transparency, etc., are important. Density measurements have already been discussed. Viscosity is measured with a viscosimeter, of which there are three main types: flow through an orifice or capillary

NUCLEAR RADIATION INSTRUMENTS

(Saybolt), viscous drag on a cylinder rotating in the fluid (MacMichael), and damping of a vibrating reed (Ultrasonic) (see Secs. 3 and 4). Plasticity and consistency are related properties which are determined with special apparatus for heating or cooling the material and observing the temperature-time curve. The photometer, reflectometer, and turbidimeter are devices for measuring transparency or turbidity of nonopaque liquids and solids. A variety of properties can be measured for determining chemical composition. Electrical properties include pH, conductivity, dielectric constant, oxidation potential, etc. Physical properties include density, refractive index, thermal conductivity, vapor pressure, melting and boiling points, etc. Of increasing industrial application are spectroscopic measurements: infrared absorption spectra, ultraviolet and visible emission spectra, mass spectrometry, and gas chromatography. These are specific to particular types of compounds and molecular configurations and hence are very powerful in the analysis of complex mixtures. As examples, infrared analyzers are in use to measure low-concentration contaminants in engine oils resulting from wear and in hydraulic oils to detect deterioration. X-ray diffraction has many applications in the analysis of crystalline solids, metals, and solid solutions. Of special importance in the realm of composition measurements is the determination of moisture content. A common laboratory procedure measures the loss of weight of the oven-dried sample. More rapid methods employ electrical conductance or capacitance measurements, based on the relatively high conductivity and dielectric constant values for ordinary water. Water vapor in air (humidity) is measured in terms of its physical properties or effects on materials (see also Secs. 4 and 12). (1) The psychrometer is based on the cooling effect of water evaporating into the airstream. It consists of two thermal elements exposed to a steady airflow; one is dry, the other is kept moist. See Sec. 4 for psychrometric charts. (2) The dew-point recorder measures the temperature at which water just starts to condense out of the air. (3) The hygrometer measures the change in length of such humidity-sensitive elements as hair and wood. (4) Electric sensing elements employ a wire-wound coil impregnated with a hygroscopic salt (one that maintains an equilibrium between its moisture content and the air humidity) such that the resistance of the coil is related to the humidity. The throttling calorimeter (Fig. 16.1.40) is most commonly used for determining the moisture in steam. A sampling nozzle is located preferably in a vertical section of steampipe far removed from any fittings. Steam enters the calorimeter through a throttling orifice and into a well-insulated expansion chamber. The steam quality x (fraction dry steam) is determined from the equation x = (hc − hf)/hfg, where hc is the enthalpy of superheated steam at the temperature and pressure measured in the calorimeter; hf and hfg are, respectively, the liquid enthalpy

Fig. 16.1.40 Throttling calorimeter.


and the heat of vaporization corresponding to line pressure. The chamber is conveniently exhausted to atmospheric pressure; then only line pressure and temperature of the throttled steam need be measured. The range of the throttling calorimeter is limited to small percentages of moisture; a separating calorimeter may be employed for larger moisture contents. The Orsat apparatus is generally used for chemical analysis of flue gases. It consists of a graduated tube or burette designed to receive and measure volumes of gas (at constant temperature). The gas is analyzed for CO2, O2, CO, and N2 by bubbling through appropriate absorbing reagents and measuring the resulting change in volume. The reagents normally employed are KOH solution for CO2, pyrogallic acid and KOH mixture for O2, and cuprous chloride (Cu2Cl2) for CO. The final remaining unabsorbed gas is assumed to be N2. The most common errors in the Orsat analysis are due to leakage and poor sampling. The former can be checked by simple test; the latter factor can only be minimized by careful sampling procedure. Recommended procedure is the taking of several simultaneous samples from different points in the cross-sectional area of the flue-gas stream, analyzing these separately, and averaging the results. There are many instruments for measuring CO2 (and other gases) automatically. In one type, the CO2 is absorbed in KOH, and the change in volume determined automatically. The more common type, however, is based on the difference in thermal conductivity of CO2 compared with air. Two thermal conductivity cells are set into opposing arms of a Wheatstone-bridge circuit. Air is sealed into one cell (reference), and the CO2-containing gas is passed through the other. The cell contains an electrically heated resistance element; the temperature of the element (and therefore its resistance) depends on the thermal conductivity of the gaseous atmosphere. As a result, the unbalance of the bridge provides a measure of the CO2 content of the gas sample. The same principle can be employed for analyzing other constituents of gas mixtures where there is a significant thermal-conductivity difference. A modification of this principle is also used for determining CO or other combustible gases by mixing the gas sample with air or oxygen. The combustible gas then burns on the heated wire of the test cell, producing a temperature rise which is measured as above. Many other physical properties are employed in the determination of specific components of gaseous mixtures. An interesting example is the oxygen analyzer, based on the unique paramagnetic properties of oxygen. NUCLEAR RADIATION INSTRUMENTS (See also Sec. 9.)

Nuclear radiation instrumentation is increasing in importance with two main areas of application: (1) measurement and control of radiation variables in nuclear reactor-based processes, such as nuclear power plants and (2) measurement of other physical variables based on radioactive excitation and tracer techniques. The instruments respond in general to electromagnetic radiation in the gamma and perhaps X-ray regions and to beta particles (electrons), neutrons, and alpha particles (helium nuclei). Gas Ionization Tubes The ion chamber, proportional counter, and Geiger counter are common instruments for radiation detection and measurement. These are different applications of the gas-ionization tube distinguished primarily by the amount of applied voltage. A simple and very common form of the instrument consists of a gasfilled cylinder with a fine wire along the axis forming the anode and the cylinder wall itself (at ground potential) forming the cathode, as shown in Fig. 16.1.41. When a radiation particle enters the tube, its collision with gas molecules causes an ionization consisting of electrons (negatively charged) and positive ions. The electrons move very rapidly toward the positively charged wire; the heavier positive ions move relatively slowly toward the cathode. The above activity is detected by the resulting current flow in the external circuitry. When the voltage applied across the tube is relatively low, the number of electrons collected at the anode is essentially equal to that produced by the incident radiation. In this voltage range, the device is


Fig. 16.1.41 Gas-ionization tube.

called an ion chamber. The device may be used to count the number of radiation particles when the frequency is low; when the frequency is high, an external integrating circuit yields an output current proportional to the radiation intensity. Since the amplification factor of the ion chamber is low, high-gain electronic amplification of the current signal is necessary. If the applied voltage is increased, a point is reached where the radiation-produced ions have enough energy to collide with other gas molecules and produce more ions which also enter into collisions so that an “avalanche” of electrons is collected at the anode. Thus, there is a very considerable amplification of the output signal. In this region, the device is called a proportional counter and is characterized by the voltage or current pulse being proportional to the energy content of the incident radiation signal. With still further increase in the applied voltage, a point of saturation is reached wherein the output pulses have a constant amplitude independent of the incident radiation level. The resulting Geiger counter is capable of producing output pulses up to 10 V in amplitude, thus greatly reducing the requirements on the external circuitry and instrumentation. This advantage is offset somewhat by a lower maximum counting rate and more limited ability to differentiate among the various types of radiation as compared with the proportional counter. The scintillation counter is based on the excitation of a phosphor by incident radiation to produce light radiation which is in turn detected by a photomultiplier tube to yield an output voltage. The signal output is greatly amplified and nearly proportional to the energy of the initial radiation. The device may be applied to a wide range of radiations, it has a very fast response, and, by choice of phosphor material, it offers a large degree of flexibility in applications. Applications to the Measurement of Physical Variables The ready availability of radioactive isotopes of long half-life, such as cobalt 60, make possible a variety of industrial and laboratory measuring techniques based on radiation instruments of the type described above. Most applications are based on (1) radiation absorption, (2) tracer identification, and (3) other properties. These techniques often have the advantages of isolation of the measuring device from the system, access to a variable not observable by conventional means, or measurement without destruction or modification of the system. In the utilization of absorption properties, a radioactive source is separated from the radiation-measuring device by that part of the system to be measured. The measured radiation intensity will depend on the fraction of radiation absorbed, which in turn will depend on the distance traveled through the absorbing medium and the density and nature of the material. Thus, the instrument can be adapted to measuring thickness (see Fig. 16.1.12), coating weight, density, liquid or solids level, or concentration (of certain components). Tracer techniques are effectively used in measuring flow rates or velocities, residence time distributions, and flow patterns. In flow measurement, a sharp pulse of radioactive material may be injected into the flow stream; with two detectors placed downstream from the injection point and a known distance apart, the velocity of the pulse is readily measured. 
Alternatively, if a known constant flow rate of tracer is injected into the flow stream, a measure of the radiation downstream is easily converted into a measure of the desired flow rate. Other applications of tracer techniques involve the use of tagged molecules embedded in the process to provide measures of wear, chemical reactions, etc. Other applications of radiation phenomena include level measurements based on a floating radioactive source, level measurements based on the back-scattering effect of the medium, pressure measurements in the high-vacuum region based on the amount of ionization caused by alpha rays, location of interface in pipeline transmission applications, and certain chemical analysis applications.

INDICATING, RECORDING, AND LOGGING

Rapid change has overtaken the ways in which process information is presented and stored. The control panel of the 1970s, with banks of analog pointer indicators and circular and strip chart recorders, has given way to digital indicators, dataloggers, and recorder simulations that are entirely software based. Some industrial sectors continue the use of paper chart recorders, so typical strip and circular chart recorders are shown here.

Fig. 16.1.42 Circular-chart recorder.

Fig. 16.1.43 Strip-chart recorder.

These include the nuclear power, food, and pharmaceuticals industry sectors. However, even these last holdouts are now moving to more modern means of indicating and recording information about measured variables. With the advent of liquid crystal displays (LCDs) that can display graphical elements, even field instruments now have digital displays, some of which mimic analog indicators with “dials” or “bargraphs” or other even more sophisticated displays.


Replacing recorders in most applications now are digital dataloggers with cathode-ray tube (CRT) or LCD displays that simulate the visual aspect of an analog trend recorder. These are designed with the same form factor as older mechanical recorders so that they can be inserted into precut holes in existing control panels. Most HMI packages for control systems contain a software system called a “data historian,” which provides for logging and display of large amounts of process measurement data. Data historian software is often used to provide data to advanced process control software packages and to enterprise software systems for quality control and supply chain integration. Field dataloggers have become small and are operated by very low power internal batteries, and can even be inserted into, for example, a single carton of food for long-term monitoring. These dataloggers can be accessed by connection to a laptop, another type of host computer, or remotely by means of wireless connections such as Bluetooth, Zigbee, and the various IEEE 802.11 standards. INFORMATION TRANSMISSION

In the process plant, information is transmitted from field devices and control elements to control stations and the process control computer system by means of either analog or digital signals. With the exception of some refinery and petrochemical applications, where pneumatic instrumentation, using a 3 to 15 lb/in2 variable pressure range, is still used, the vast majority of analog information transmission is by means of either variable voltage, current, or frequency. ANSI/ISA50 standardized the analog transmission protocol to be 4 to 20 mA dc (1 to 5 V dc into 250 ohms resistive) as equal to 0 to 100 percent of scale. Other transmission protocols in less common use include 0 to 20 mA dc and 10 to 50 mA dc, among many others. Two-wire instruments receive their operational power as well as transmit their signal on the “current output loop.” Four-wire instruments require separate power supplies for operation.
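A minimal sketch of the scaling implied by the 4- to 20-mA convention follows; the transmitter span and the fault threshold below 4 mA are assumed values used only for illustration.

def loop_current_to_reading(i_ma, span_lo, span_hi):
    """Convert a 4-20 mA loop current to percent of scale and engineering units.

    Illustrative sketch: span_lo and span_hi are the values assigned to 0 and
    100 percent of scale; a current well below the 4-mA "live zero" is treated
    as a transmitter or wiring fault (threshold assumed here).
    """
    if i_ma < 3.8:
        raise ValueError(f"loop current {i_ma:.2f} mA indicates a failed signal")
    percent = (i_ma - 4.0) / 16.0 * 100.0
    return percent, span_lo + (span_hi - span_lo) * percent / 100.0

# Example: a transmitter spanned 0 to 150 lb/in2 reading 12 mA is at midscale.
pct, value = loop_current_to_reading(12.0, 0.0, 150.0)
print(f"{pct:.1f} percent of span = {value:.1f} lb/in2")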


ANSI/ISA50 chose a “live zero” at 4 mA dc so that a reading of 0 mA would indicate a diagnostic failure condition. Numerous manufacturers provide field converters to take voltages and frequencies and convert them to current loop outputs, and vice versa. While many field sensors continue to have analog outputs (voltage, current, frequency, or pressure), many field instruments are now supplied with digital communications systems built into the device. The most ubiquitous in process automation is the HART protocol (www.hartcomm.org), which is a digital communications system carried on existing 4- to 20-mA dc current loop outputs. The HART protocol permits remote readout, calibration, diagnostics, and reprogramming of many field instruments and control valves. Other bus systems are in use, some relatively old, such as IEEE 488 GPIB (or HP-IB), developed as a general-purpose interface bus. Other buses such as Modbus are often found in machine control applications in discrete automation. Another digital communications bus, BACnet, has found wide acceptance in building automation systems worldwide. Growing reliance is being placed, at least in new plant construction, on two other, more fully featured communications methods, known by the common name of fieldbuses. Profibus (www.profibus.com) and Foundation Fieldbus (www.fieldbus.org) provide complete bidirectional communications and control between field instruments and final control elements and the process control system. The design of fieldbuses is beginning to converge with the design of enterprise LAN and WAN (local- and wide-area networks) using TCP/IP protocols over Ethernet-configured networks. Recently, IEEE released a standard for power over Ethernet, but it is not yet workable for field instruments. Despite intellectual property challenges, the use of embedded web servers in field instruments and controllers is proliferating. This permits the use of standard Internet-based communication protocols by even a pressure transmitter or flowmeter. The control system then is a virtual control system architected as a series of World Wide Web pages, rather than a physical hardware set.

AUTOMATIC CONTROLS by Gregory V. Murphy

REFERENCES: Thaler, “Elements of Servomechanism Theory,” McGraw-Hill. Shinskey, “Process Control Systems: Application, Design and Tuning,” McGrawHill. Zoss, “Applied Instrumentation in the Process Industries,” Vol. IV, Gulf Publishing Co. Truxal, “Control Engineers Handbook,” McGraw-Hill. A. W. Langill, “Automatic Control Systems Engineering,” Prentice-Hall. Kuo, “Automatic Control Systems,” Prentice-Hall. Phillips and Nagle, “Digital Control System Analysis and Design,” Prentice-Hall. Lewis, “Applied Optimal Control and Estimation: Digital Design and Implementation,” Prentice-Hall. Cochin and Plass, “Analysis and Design of Dynamic Systems,” HarperCollins. Astrom and Hagglund, “Automatic Tuning of PID Controllers,” Instrument Society of America. Maciejowski, “Multivariable Feedback Design,” Addison-Wesley. Doyle and Stein, “Robustness with Observers,” IEEE Trans. Automatic Control, vol. AC-24, pp. 607–611, August 1979. Murphy and Bailey, LQG/LTR Control System Design for a Low-Pressure Feedwater Heater Train with Time Delay, Proc. IECON, 1990. Murphy and Bailey, Evaluation of Time Delay Requirements for Closed-Loop Stability Using Classical and Modern Methods, IEEE Southeastern Symp. on System Theory, 1989. Murphy and Bailey, “LQG/LTR Robust Control System Design for a Low-Pressure Feedwater Heater Train,” Proc. IEEE Southeastcon, 1990. Kazerooni and Narayanan, “Loop Shaping Design Related to LQG/LTR for SISO Minimum Phase Plants,” IEEE American Control Conf., Vol. 1, 1987. Murphy and Bailey, “Robust Control Technique for Nuclear Power Plants,” ORNL-10916, March 1989. Seborg, Edgar, and Mellichamp, “Process Dynamics and Control,” Prentice-Hall. Buckley, “Techniques of Process

Control,” Wiley. Birdwell, Crockett, Bodenheimer, and Chang, The CASCADE Final Report: Vol. II, “CASCADE Tools and Knowledge Base,” University of Tennessee. Wang and Birdwell, A Nonlinear PID-Type Controller Utilizing Fuzzy Logic, Proc. Joint IEEE/IFAC Symp. on Computer-Aided Control System Design, 1994. Upadhyaya and Eryurek, Application of Neural Networks for Sensor Validation and Plant Monitoring, Nuclear Technology, 97, no. 2, Feb. 1992. Vasadevan et al., Stabilization and Destabilization of Slugging Behavior in a Laboratory Fluidized Bed, International Conf. on Fluidized Bed Combustion, 1995. Upadhyaya et al., “Development and Testing of an Integrated Validation System for Nuclear Power Plants,” Report prepared for the U.S. Dept. of Energy, Vols. 1–3, DOE/NE/37959-34, 35, 36, Sept. 1989.

INTRODUCTION

The purpose of an automatic control on a system is to produce a desired output when inputs to the system are changed. Inputs are in the form of commands, which the output is expected to follow, and disturbances, which the automatic control is expected to minimize. The usual form of an automatic control is a closed-loop feedback control which Ahrendt defines as “an operation which, in the presence of a disturbing influence, tends to reduce the difference between the actual state of a system


and an arbitrarily varied desired state and which does so on the basis of this difference.” The general theories and definitions of automatic control have been developed to aid the designer to meet primarily three basic specifications for the performance of the control system, namely, stability, accuracy, and speed of response. The terminology of automatic control is being constantly updated by the ASME, IEEE, and ISA. Redundant terms, such as rate, preact, and derivative, for the same controller action are being standardized. Common terminology is still evolving. The introduction of the digital computer as a control device has necessitated the introduction of a whole new subset of terminology. The following terms and definitions have been selected to serve as a reference to a complex area of technology whose breadth crosses several professional disciplines. Adaptive control system: A control system within which automatic means are used to change the system parameters in a way intended to improve the performance of the system. Amplification: The ratio of output to input, in a device intended to increase this ratio. A gain greater than 1. Attenuation: A decrease in signal magnitude between two points, or a gain of less than 1. Automatic-control system: A system in which deliberate guidance or manipulation is used to achieve a prescribed value of a variable and which operates without human intervention. Automatic controller: A device, or combination of devices, which measures the value of a variable, quantity, or condition and operates to correct or limit deviation of this measured value from a selected command (set-point) reference. Bode diagram: A plot of log-gain and phase-angle values on a logfrequency base, for an element, loop, or output transfer function. Capacitance: A property expressible by the ratio of the time integral of the flow rate of a quantity (heat, electric charge) to or from a storage, divided by the related potential charge. Command: An input variable established by means external to, and independent of, the automatic-control system, which sets the ideal value of the controlled variable. See set point. Control action: Of a control element or controlling system, the nature of the change of the output affected by the input. Control action, derivative: That component of control action for which the output is proportional to the rate of change of input. Control action, floating: A control system in which the rate of change of the manipulated variable is a continuous function of the actuating signal. Control action, integral (reset): Control action in which the output is proportional to the time integral of the input. Control action, proportional: Control action in which there is a continuous linear relationship between the output and the input. Control system, sampling: Control using intermittently observed values of signals such as the feedback signal or the actuating signal. Damping: The progressive reduction or suppression of the oscillation of a system. Deviation: Any departure from a desired or expected value or pattern. Steady-state deviation is known as offset. Disturbance: An undesired variable applied to a system which tends to affect adversely the value of the controlled variable. Error: The difference between the indicated value and the accepted standard value. Gain: For a linear system or element, the ratio of the change in output to the causal change in input. Load: The material, force, torque, energy, or power applied to or removed from a system or element. 
Nyquist diagram: A polar plot of the loop transfer function. Nichols diagram: A plot of magnitude and phase contours using ordinates of logarithmic loop gain and abscissas of loop phase angle. Offset: The steady-state deviation when the command is fixed. Peak time: The time for the system output to reach its first maximum in responding to a disturbance. Proportional band: The reciprocal of gain expressed as a percentage.

Resistance: An opposition to flow that results in dissipation of energy and limitation of flow. Response time: The time for the output of an element or system to change from an initial value to a specified percentage of the steady-state value. Rise time: The time for the output of a system to increase from a small specified percentage of the steady-state increment to a large specified percentage of the increment. Self-regulation: The property of a process or a machine to settle out at equilibrium after a disturbance, without the intervention of a controller. Set point: A fixed or constant command given to the controller designating the desired value of the controlled variable. Settling time: The time required, after a disturbance, for the output to enter and remain within a specified narrow band centered on the steady-state value. Time constant: The time required for the response of a first-order system to reach 63.2 percent of the total change when disturbed by a step function. Transfer function: A mathematical statement of the influence which a system or element has on a signal or action compared at input and output terminals.

BASIC AUTOMATIC-CONTROL SYSTEM

A closed-loop control system consists of a process, a measurement of the controlled variable, and a controller which compares the actual measurement with the desired value and uses the difference between them to automatically adjust one of the inputs to the process. The physical system to be controlled can be electrical, thermal, hydraulic, pneumatic, gaseous, mechanical, or described by any other physical or chemical process. Generally, the control system will be designed to meet one of two objectives. A servomechanism is designed to follow changes in set point as closely as possible. Many electrical and mechanical control systems are servomechanisms. A regulator is designed to keep output constant despite changes in load or disturbances. Regulatory controls are widely used on chemical processes. Both objectives are shown in Fig. 16.2.1. The control components can be actuated pneumatically, hydraulically, electronically, or digitally. Only in very few applications does actuation affect controllability. Actuation is chosen on the basis of economics. The purpose of the control system must be defined. A large capacity or inertia will make the system sluggish for servo operation but will help to minimize the error for regulator operation.

PROCESS AS PART OF THE SYSTEM

Figure 16.2.1 shows the process to be part of the control system either as load on the servo or process to be controlled. Thus the process must be designed as part of the system. The process is modeled in terms of its static and dynamic gains in order that it be incorporated into the system diagram. Modeling uses Ohm's and Kirchhoff's laws for electrical systems, Newton's laws for mechanical systems, mass balances for fluid-flow systems, and energy balances for thermal systems.

Fig. 16.2.1 Feedback control loop showing operation as servomechanism or regulator.


Consider the electrical system in Fig. 16.2.2:

    i = (E1 − E2)/R        i = C dE2/dt        (16.2.1)

Combining,

    RC dE2/dt + E2 = E1        (16.2.2)

where

    (R, volt-second/coulomb)(C, coulomb/volt) = τ = time constant, s

Fig. 16.2.2 Electrical system where current flows upon closing switch.

Consider the mass balance of the vessels shown in Figs. 16.2.3 and 16.2.4:

    Accumulation = input − output
    d(ρV)/dt = w1 − w2   lb/min        (16.2.3)

Fig. 16.2.3 Liquid level process.

Fig. 16.2.4 Gas pressure process.

For the liquid level process (Fig. 16.2.3):

    V = AL        (16.2.4)

Linearizing,

    w2 = (w2/x)ss x

Substituting,

    ρA dL/dt = w1 − (w2/x)ss x        (16.2.5)

where ρA = analogous capacitance. For the gas pressure process (Fig. 16.2.4):

    V dρ/dt = w1 − w2        (16.2.6)

From thermodynamics,

    ρ/p^(1/n) = constant        (16.2.7)

Linearizing,

    w2 = (w2/x)ss x + [w2/2(p − p2)]ss (p − p2)        (16.2.8)

Substituting,

    RC dp/dt + p = −[2(p − p2)/x]ss x + p2        (16.2.9)

where

    R = 2(p − p2)/w2        C = V/(nRT)        RC = τ, min

and the terms are defined: V = volume, ft³ (m³); A = cross-sectional area, ft² (m²); L = level, ft (m); ρ = density, lb/ft³ (g/mL); x = valve stem position (normalized 0 to 1); T = temperature, °F (°C); p = pressure, lb/in² (kPa); and w = mass flow, lb/min (kg/min).
The thermal process of Fig. 16.2.5 is modeled by a heat balance (Shinskey, “Process Control Systems,” McGraw-Hill):

    Mc dT/dt = wc(T0 − T) − UA(T − Tw)        (16.2.10)

    [Mc/(wc + UA)] dT/dt + T = [wc/(wc + UA)]T0 + [UA/(wc + UA)]Tw        (16.2.11)

where

    C = Mc        R = 1/(wc + UA)        RC = τ, min

    RC dT/dt + T = [wc/(wc + UA)]T0 + [UA/(wc + UA)]Tw        (16.2.12)

The terms are defined: M = weight of process fluid in vessel, lb (kg); c = specific heat, Btu/(lb·°F) [J/(kg·°C)]; U = overall heat-transfer coefficient, Btu/(ft²·min·°F) [W/(m²·°C)]; and A = heat-transfer area, ft² (m²).

Fig. 16.2.5 Thermal process with heat from vessel being removed by cold water in jacket.

Newton's laws can be applied to the manometer shown in Fig. 16.2.6:

    Inertia force = restoring force − flow resistance
    (Alρ/gc) d²h/dt² = A(p − 2hρg/gc) − RA dh/dt        (16.2.13)

Fig. 16.2.6 Filled manometer measuring pressure P.


Flow resistance for laminar flow is given by the Hagen-Poiseuille equation:

    R = driving force/rate of transfer = 32lμ/(d²gc)        (16.2.14)

Substituting,

    (l/2g) d²h/dt² + [16lμ/(ρd²g)] dh/dt + h = hi        (16.2.15)

In standard form,

    τc² d²h/dt² + 2τcζ dh/dt + h = hi        (16.2.16)

where τc = characteristic response time, min, = 1/[(60)(2π)ωn], where ωn = natural frequency, Hz; and ζ = damping coefficient (ratio), dimensionless. The variables τc and ζ are very valuable design aids since they define system response and stability in terms of system parameters.

TRANSIENT ANALYSIS OF A CONTROL SYSTEM

The stability, accuracy, and speed of response of a control system are determined by analyzing the steady-state and the transient performance. It is desirable to achieve the steady state in the shortest possible time, while maintaining the output within specified limits. Steady-state performance is evaluated in terms of the accuracy with which the output is controlled for a specified input. The transient performance, i.e., the behavior of the output variable as the system changes from one steady-state condition to another, is evaluated in terms of such quantities as maximum overshoot, rise time, and response time (Fig. 16.2.7).

Fig. 16.2.7 System response to a unit step-function command.

Transient-Producing Disturbances  A number of factors affect the quality of control, among them disturbances caused by set-point changes and process-load changes. Both set point and process load may be defined in terms of the setting of the final control element to maintain the controlled variable at the set point. Thus both cause the final control element to reposition. For a purely mechanical system the disturbance may take the form of a vibration, a displacement, a velocity, or an acceleration. A process-load change can be caused by variations in the rate of energy supply or variations in the rate at which work flows through the process. Reference to Fig. 16.2.5 and Eq. (16.2.12) shows disturbances to be variations in inlet process fluid temperature and cooling-water temperature. Further linearization would show variations in process flow and the overall heat-transfer coefficient to also be disturbances.
The Basic Closed-Loop Control  To illustrate some characteristics of a basic closed-loop control, consider a mechanical, rotational system composed of a prime mover or motor, a total system inertia J, and a viscous friction f. To control the system's output variable θo, a command signal θi must be supplied, the output variable measured and compared to the input, and the resulting signal difference used to control the flow of energy to the load. The basic control system is represented schematically in Fig. 16.2.8.

Fig. 16.2.8 A basic closed-loop control system.

The differential equation of this basic system is readily obtained from the idealized equations:

    Load torque         TL = J d²θo/dt² + f dθo/dt        (16.2.17)
    Developed torque    TD = Ke        (16.2.18)
    Error               e = θi − θo        (16.2.19)

The above equations combine to yield the system differential equation:

    J d²θo/dt² + f dθo/dt + Kθo = Kθi        (16.2.20)

Step-Input Response of a Viscous-Damped Control  If the control system described in Fig. 16.2.8 by Eq. (16.2.20) is subjected to a step change in the input variable θi, a solution θo = θo(t) can be obtained as follows. (1) Let the ratio √(K/J) be designated by the symbol ωn and be called the natural frequency. (2) Let the quantity 2√(JK) be designated by the symbol fc and be called the friction coefficient required for critical damping. (3) Let f/fc be designated by the symbol ζ and be called the damping ratio. Equation (16.2.20) can then be written as

    d²θo/dt² + 2ζωn dθo/dt + ωn²θo = ωn²θi        (16.2.21)

For θi = 1, with θo = 0 and dθo/dt = 0 at t = 0, the complete solution of Eq. (16.2.21) is

    θo = 1 − [exp(−ζωnt)/√(1 − ζ²)] sin[√(1 − ζ²) ωnt + tan⁻¹(√(1 − ζ²)/ζ)]        (16.2.22)

Equation (16.2.22) is plotted in dimensionless form for various values of damping ratio in Fig. 16.2.9. The curves for ζ = 0.1, 2, and 1 illustrate the underdamped, overdamped, and critically damped cases, where any further decrease in system damping would result in overshoot.

Fig. 16.2.9 Transient response of a second-order viscous-damped control to unit-step input displacement.
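The step response of Eq. (16.2.22) is easily evaluated numerically. The sketch below uses assumed values ζ = 0.3 and ωn = 2 rad/s and locates the first (maximum) overshoot by scanning the computed response.

import math

def unit_step_response(zeta, wn, t):
    """Eq. (16.2.22), valid for the underdamped case 0 < zeta < 1."""
    root = math.sqrt(1.0 - zeta ** 2)
    return 1.0 - math.exp(-zeta * wn * t) / root * math.sin(root * wn * t + math.atan2(root, zeta))

zeta, wn = 0.3, 2.0          # assumed damping ratio and natural frequency, rad/s
dt = 0.001
response = [unit_step_response(zeta, wn, i * dt) for i in range(8000)]
peak = max(response)
t_peak = response.index(peak) * dt
print("first overshoot = %.1f%% of the step at t = %.2f s" % (100 * (peak - 1.0), t_peak))
print("value at t = 8 s: %.4f (approaching the commanded value of 1)" % response[-1])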


Damping is a property of the system which opposes a change in the output variable. The immediately apparent features of an observed transient performance are (1) the existence and magnitude of the maximum overshoot, (2) the frequency of the transient oscillation, and (3) the response time.
Maximum Overshoot  When an automatic-control system is underdamped, the output variable overshoots its desired steady-state condition and a transient oscillation occurs. The first overshoot is the greatest, and it is the effect of its amplitude which must concern the control designer. The primary considerations for limiting this maximum overshoot are (1) to avoid damage to the process or machine due to excessive excursions of the controlled variable beyond that specified by the command signal, and (2) to avoid the excessive settling time associated with highly underdamped systems. Obviously, exact quantitative limits cannot generally be specified for the magnitude of this overshoot. However, experience indicates that satisfactory performance can generally be obtained if the overshoot is limited to 30 percent or less.
Transient Frequency  An undamped system oscillates about the final steady-state condition with a frequency of oscillation which should be as high as possible in order to minimize the response time. The designer must, however, avoid resonance conditions where the frequency of the transient oscillation is near the natural frequency of the system or its component parts.
Rise Time Tr, Peak Overshoot P, Peak Time Tp  These quantities are related to ζ and ωn in Figs. 16.2.10 and 16.2.11. Some useful formulas are listed below:

    ωnTr ≈ 1.02 + 0.48ζ + 1.15ζ² + 0.76ζ³        0 ≤ ζ ≤ 1

    ωnTs = 17.6 − 19.2ζ        0.2 ≤ ζ ≤ 0.75   (2 percent tolerance band)
    ωnTs = 23.8 + 9.4ζ         0.75 ≤ ζ ≤ 1

    P = exp[−πζ/√(1 − ζ²)]

    Tp = π/[ωn√(1 − ζ²)]

Fig. 16.2.10 Rise time Tr as a function of ζ and ωn.

Although these quantities are defined for a second-order system, they may be useful in the early design stages of higher-order systems if the response of the higher-order system is dominated by roots of the characteristic equation near the imaginary axis.
Derivative and Integral Compensation (Thaler)  Four common compensation methods for improving the steady-state performance of a proportional-error control without damaging its transient response are shown in Fig. 16.2.12. They are (1) error derivative compensation, (2) input derivative compensation, (3) output derivative compensation, and (4) error integral compensation.

Fig. 16.2.11 Peak overshoot P and peak time Tp as functions of ζ and ωn.

Error Derivative Compensation  The torque equilibrium equation is

    J d²θo/dt² + f dθo/dt = K2e + K1 de/dt        (16.2.23)

Writing Eq. (16.2.23) in terms of the input and output variables yields

    J d²θo/dt² + (f + K1) dθo/dt + K2θo = K2θi + K1 dθi/dt        (16.2.24)

Fig. 16.2.12 Derivative and integral compensation of a basic closed-loop system. (a) Error derivative compensation; (b) input derivative compensation; (c) output derivative compensation; (d) error integral compensation. (Thaler.)


By adjusting K1 and reducing f so that the quantity f + K1 is equal to f in the uncompensated system, the system performance is affected as follows: (1) e resulting from a constant-first-derivative input is reduced because of the reduction in viscous friction; (2) the transient performance of the uncompensated system is preserved unchanged.
Derivative Input Compensation  The torque equilibrium equation is

    J d²θo/dt² + f dθo/dt = K2e + K1 dθi/dt        (16.2.25)

Writing Eq. (16.2.25) in terms of the input and output variables yields

    J d²θo/dt² + f dθo/dt + K2θo = K2θi + K1 dθi/dt        (16.2.26)

Examination of Eq. (16.2.26) yields the following information about the compensated system's performance: (1) since the characteristic equation is unchanged from that of the uncompensated system, the transient performance is unaltered; (2) the steady-state solution to Eq. (16.2.26) is

    θo = θi − (f/K2)(1 − K1/K2) dθi/dt        (16.2.27)

Therefore the input derivative signal can reduce the steady-state error by adjusting K1 to equal K2.
Derivative Output Compensation  The torque equilibrium equation is

    J d²θo/dt² + f dθo/dt = K2e ± K1 dθo/dt        (16.2.28)

Writing Eq. (16.2.28) in terms of the input and output variables yields

    J d²θo/dt² + (f ∓ K1) dθo/dt + K2θo = K2θi        (16.2.29)

Examination of Eq. (16.2.29) yields the following information about the compensated system's performance. (1) Output derivative feedback produces the same system effect as the viscous friction does. This compensation therefore damps the transient performance. (2) Under conditions where θi = ct, the steady-state error is increased.
Error Integral Compensation  Error integral compensation is used where it is necessary to eliminate steady-state errors resulting from input signals with constant first derivatives or under conditions of externally applied loads. The torque equilibrium equation is

    J d²θo/dt² + f dθo/dt ± external load torque = K2e + K1 ∫₀ᵗ e dt        (16.2.30)

Writing Eq. (16.2.30) in terms of the input variable and the error yields

    ±External load torque + J d²θi/dt² + f dθi/dt = J d²e/dt² + f de/dt + K2e + K1 ∫₀ᵗ e dt        (16.2.31)

At steady state for t ≫ 0 and with a step change in the input derivative,

    dθi/dt = const        d²θi/dt² = de/dt = d²e/dt² = 0        (16.2.32)

Equation (16.2.31) assumes the form

    ±External load torque + f dθi/dt = K2e + K1 ∫₀ᵗ e dt        (16.2.33)

Since the sum of the load torque and the term f(dθi/dt) is finite, e → 0 for

    ∫₀^∞ e dt → ∞        (16.2.34)

If the integrating coefficient is a small number, additional torque produced by the integration action is developed very slowly, and, although the steady-state error is eventually eliminated, the transient performance is essentially unchanged. However, if K1 is a large value, a large torque is produced in a short period of time, increasing the effective T/J ratio, and thereby decreasing the damping. The general effects of error integral compensation within its useful range are (1) steady-state error is eliminated; and (2) transient response is adversely affected, resulting in increased overshoot and the attendant increase in response time.

TIME CONSTANTS

The time constant τ, the characteristic response time τc, and the damping coefficient are combined with operational calculus to design a control system without solving the classical differential equations. Note that the electrical, Eq. (16.2.2), mass, Eq. (16.2.9), and energy, Eq. (16.2.12), processes are all described by analogous first-order differential equations of the form

    τ dc/dt + c = Ka        (16.2.35)

Solving with initial conditions equal to zero, the time-domain response to a step disturbance is

    c = Ka(1 − e^(−t/τ))        (16.2.36)

Plotting is shown in Fig. 16.2.13. Figure 16.2.13 shows the time constant to be defined by the point at 63.2 percent of the response, while response is essentially complete at three time constants (95 percent).

Fig. 16.2.13 Response of first-order system showing time constant relationship.

Operational calculus provides a systematic and simple method for handling linear differential equations with constant coefficients. The Laplace operator is used in control system analysis because of the straightforward transformation among the domains of interest:

    Time domain        Complex domain        Frequency domain
    d/dt               s                     jω

Extensive tables of transforms are available for converting between the time and complex domains and back again. Transformation from the complex or frequency domains to time is difficult, and it is seldom attempted. Time-domain solutions are obtained only from computer simulations, which can solve the finite difference equations fast enough to serve as a design tool. Control system analysis and design proceed with a number of analytical and graphical techniques which do not require a time-domain solution. The root locus method is a graphical technique used in the complex domain which provides substantial insight into the system. It has the weaknesses of handling dead time poorly and of the graph's being very tedious to plot. It is used only when computerized plotting is available.
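A short numerical check of Eq. (16.2.36), with an assumed gain of 1 and an assumed time constant of 5 min, reproduces the 63.2 percent and 95 percent points cited above.

import math

K, a, tau = 1.0, 1.0, 5.0          # assumed static gain, step size, and time constant, min
for multiple in (0, 1, 2, 3, 5):
    t = multiple * tau
    c = K * a * (1.0 - math.exp(-t / tau))      # Eq. (16.2.36)
    print("t = %4.0f min (%d tau): c = %.3f, i.e., %.1f%% of the final value"
          % (t, multiple, c, 100.0 * c))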

BLOCK DIAGRAMS

The physical diagram of the system is converted to a block diagram in order that the different components of the system (all the way from a steam boiler to a thermocouple) can be placed on a common mathematical footing for analysis as a system. The block diagram shows the


functional relationship among the parts of the system by depicting the action of the variables in the system. The circle represents an algebraic function of addition. Each system component is represented by a block which has one input and one output. The block represents a dynamic function such that the output is a function of time and also of the input. The dynamic function is called a transfer function—the ratio of the Laplace transform of the output variable to the input variable with all initial conditions equal to zero. The input and output variables are considered as signals, and the blocks are connected by arrows to show the flow of information in the system. Rewriting Eq. (16.2.12) for the thermal system,

    RC dT/dt + T = K1T0 + K2Tw        (16.2.37)

Transforming,

    (τs + 1)T = K1T0 + K2Tw        (16.2.38)

for which the block diagram is shown in Fig. 16.2.14. Two conditions are specified: (1) the components must be described by linear differential equations (or nonlinear equations linearized by suitable approximations), and (2) each block is unilateral. What occurs in one component may not affect the components preceding.

Fig. 16.2.14 Block diagram of thermal process.

Block-Diagram Algebra

The block diagram of a single-loop feedback-control system subjected to a command input R(s) and a disturbance U(s) is shown in Fig. 16.2.15.

Fig. 16.2.15 Single-loop feedback control system.

When U(s) = 0 and the input is a reference change, the system may be reduced as follows:

    E(s) = θi(s) − θo(s)H(s)
    θo(s) = E(s)[G1(s)G2(s)]

Therefore

    θo(s)/θi(s) = G1(s)G2(s)/[1 + G1(s)G2(s)H(s)]        (16.2.39)

When θi(s) = 0 and the input is a disturbance, the system may be reduced as follows:

    E(s) = −θo(s)H(s)
    [E(s)G1(s) + U(s)]G2(s) = θo(s)

    θo(s)/U(s) = G2(s)/[1 + G1(s)G2(s)H(s)]        (16.2.40)

The closed-loop transfer function Eq. (16.2.40) can be determined by observation from

    θo(s)/U(s) = feedforward functions/(1 + complete-loop functions)

The denominator of Eq. (16.2.40) is the characteristic equation which determines stability. Most systems are designed on the basis of the characteristic equation since it sets both response time τc and damping ζ. The equation is undefined (unstable) if G1(s)G2(s)H(s) equals −1. But −1 is a vector of magnitude 1 and a phase of 180°. This fact is used to determine stable parameter adjustments in the graphical techniques to be discussed later. The input disturbance U(s) can be any time function in actual operation. The step input is widely used for analysis and testing since it is easily implemented; it results in a simple Laplace transform; it is the most severe type of disturbance; and the response to a step change shows the maximum error that could occur. In many complex control systems, especially in the nonmechanical process-control field, auxiliary feedback paths are provided in order to adjust the system's performance. Figure 16.2.16a illustrates such a condition. In analyzing such a system it is usually best to combine secondary loops into the main control loop to form an equivalent series block and transfer function. The system of Fig. 16.2.16a might be reduced in the following sequence.

Fig. 16.2.16 Reduction of a closed-loop control system with multiple secondary loops.

1. Replace K3G3(s) and K4H1(s) with a single equivalent element:

    θo/θ2 = K3G3(s)/[1 + K4H1(s)K3G3(s)] = K6G6        (16.2.41)

The result of this first reduction is shown in Fig. 16.2.16b.
2. Figure 16.2.16b can be treated in a similar fashion and a single block used to replace K2G2, K6G6, and K5H2:

    θo/θ1 = K2G2K6G6(s)/[1 + K5H2K2G2K6G6(s)] = K7G7        (16.2.42)

The result of this second reduction is shown in Fig. 16.2.16c. The resulting open-loop transfer function is

    θo/e = K1G1K7G7(s)        (16.2.43)
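The reduction sequence of Eqs. (16.2.41) to (16.2.44) lends itself to direct numerical evaluation at any complex frequency s = jω. In the sketch below the individual element transfer functions are assumed first-order lags chosen only for illustration; the text does not assign them particular forms.

# Sketch of the reduction of Fig. 16.2.16, evaluated at a single complex
# frequency.  All element transfer functions below are assumed values.
def make_lag(K, tau):
    return lambda s: K / (tau * s + 1.0)

K1G1 = make_lag(2.0, 1.0)
K2G2 = make_lag(1.5, 0.5)
K3G3 = make_lag(1.0, 0.2)
K4H1 = make_lag(0.5, 0.1)
K5H2 = make_lag(0.4, 0.3)

def closed_loop(s):
    K6G6 = K3G3(s) / (1.0 + K4H1(s) * K3G3(s))                  # Eq. (16.2.41)
    K7G7 = K2G2(s) * K6G6 / (1.0 + K5H2(s) * K2G2(s) * K6G6)    # Eq. (16.2.42)
    open_loop = K1G1(s) * K7G7                                  # Eq. (16.2.43)
    return open_loop / (1.0 + open_loop)                        # Eq. (16.2.44)

for omega in (0.1, 1.0, 10.0):
    g = closed_loop(1j * omega)
    print("omega = %5.1f rad/s   |theta_o/theta_i| = %.3f" % (omega, abs(g)))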


The closed-loop or frequency response function is

    θo/θi = K1G1K7G7(s)/[1 + K1G1K7G7(s)]        (16.2.44)

Equation (16.2.44) can, of course, be expanded to include the terms of the system's secondary loops.

SIGNAL-FLOW REPRESENTATION

An alternate graphical representation of the mathematical relationships is the signal-flow graph. For complicated systems it allows a more compact representation and more rapid reduction techniques than the block diagram. In Fig. 16.2.17, the nodes represent the variables θi, e, θ1, . . . , θo, and the branches the relationships between the nodes, of the system shown in Fig. 16.2.16. For example,

    θ1(s) = eK1G1(s) − K5H2(s)θo(s)        (16.2.45a)

and

    e = θi − θo        (16.2.45b)

Fig. 16.2.17 Signal flow graph of the closed-loop control system shown in Fig. 16.2.16.

Signal-flow terminology follows:
Source: node having only outgoing branches, for example, θi
Sink: node having only incoming branches, for example, θo
Path: series of branches with the same sense of direction, for example, abcd, cdf
Forward path: path originating at a source and ending at a sink, with no node encountered more than once, for example, abcd
Path gain: product of the coefficients along a path, for example, 1[K1G1(s)][K2G2(s)][K3G3(s)]
Feedback loop: path starting at a node and ending at the same node, for example, bcdg
Loop gain: product of coefficients along a feedback loop, for example, [K1G1(s)][K2G2(s)][K3G3(s)](−1)

The overall gain of the system can be calculated from

    G = Σ GiΔi / Δ        (16.2.46)

where Gi = gain of the ith forward path and

    Δ = 1 − L1 + L2 − L3 + . . . + (−1)^k Lk

where L1 = sum of the gains of each feedback loop; L2 = sum of products of loop gains for nontouching loops (no node is common), taken two at a time; L3 = sum of products of loop gains for nontouching loops taken three at a time; and Δi = value of Δ for the signal flow graph resulting when the ith path is removed. From Fig. 16.2.17 there is only one forward path, abcd.

    ∴ G1 = K1G1(s)K2G2(s)K3G3(s)

Closed loops are de, cdf, and bcdg, with gains K4H1(s)K3G3(s), K2G2(s)K3G3(s)K5H2(s), and K1G1(s)K2G2(s)K3G3(s). There are no nontouching closed loops;

    ∴ Δ = 1 + K4H1(s)K3G3(s) + K2G2(s)K3G3(s)K5H2(s) + K1G1(s)K2G2(s)K3G3(s)

There are no loops remaining if the forward path abcd is removed;

    ∴ Δ1 = 1

Thus

    θo/θi = K1G1(s)K2G2(s)K3G3(s)/[1 + K4H1(s)K3G3(s) + K2G2(s)K3G3(s)K5H2(s) + K1G1(s)K2G2(s)K3G3(s)]        (16.2.47)

which is identical with Eq. (16.2.44).

CONTROLLER MECHANISMS

The controller modifies the error signal in a desired manner to produce an output pressure which is used to actuate the valve motor. The several controller modes used singly or in combination are (1) the proportional mode, in which Pout(t) = KcE(t); (2) the integral mode, in which Pout(t) = (1/T1)∫E(t) dt; and (3) the rate mode, in which Pout(t) = T2 dE(t)/dt. In these expressions Pout(t) = controller output pressure, E(t) = input error signal, Kc = proportional gain, 1/T1 = reset rate, and T2 = rate time. A pneumatic controller consists of an error-detecting mechanism, control modes made up of proportional (P), integral (I), and derivative (D) actions in almost any combination, and a pneumatic amplifier to provide output capacity. The error-detecting mechanism is a differential link, one end of which is positioned by the signal proportional to the controlled variable, and the other end of which is positioned to correspond to the command set point. The proportional action is provided by a flapper nozzle (Fig. 16.2.18), where the flapper is positioned by the error signal. A motion of 0.0015 in by the flapper is sufficient for nearly full output range. Nozzle back pressure is inversely proportional to the distance between nozzle opening and flapper.

Fig. 16.2.18 Gain reduction of pneumatic amplifier by means of feedback bellows. (Raven, “Automatic Control Engineering,” McGraw-Hill.)

The controller employs a power-amplifying pilot for providing a larger quantity of air than could be provided through the small restriction shown in Fig. 16.2.18. The nozzle back pressure, instead of operating the final control element directly, is transmitted to a bellows chamber where it positions the pilot valve. The combination of flapper-nozzle amplifier and power relay shown in Fig. 16.2.18 has a very high gain since small flapper displacements can result in the output traversing the full range of output pressure. Negative feedback is employed to reduce the gain. Controller output is connected to a feedback bellows which operates to reposition the flapper. With the feedback bellows, a movement of the flapper toward the nozzle increases back pressure, causing output pressure to decrease and the feedback bellows to move the flapper away from the nozzle. Thus the mechanism is stabilized. Figure 16.2.19 shows controller gain being varied by adjusting the point at which the feedback bellows bears on the flapper. The rate mode, called “anticipatory,” can take large corrective action when errors are small but have a high rate of change. The mode resists not only departures from the set point but also returns and so provides a stabilizing action. Since the rate mode cannot control to a set point, it is not used alone. When used with the proportional mode, its stabilizing influence (90 phase lead; see Table 16.2.3) may allow an increase in gain Kc and a consequent decrease in steady-state error.


Fig. 16.2.19 Proportional controller with negative feedback. (Reproduced by permission. Copyright © John Wiley & Sons, Inc. Publishers, 1958. From D. P. Eckman, “Automatic Process Control.”) Proportional-derivative (PD) control is obtained by adding an adjustable feedback restriction as shown in Fig. 16.2.20. This results in delayed negative feedback. The restriction delays and reduces the feedback, and, since the feedback is negative, the output pressure is momentarily higher and leads, instead of lags, the error signal.

Fig. 16.2.20 Proportional-derivative controller. (Reproduced by permission. Copyright © John Wiley & Sons, Inc. Publishers, 1958. From D. P. Eckman, “Automatic Process Control.”)

Since the proportional mode requires an error signal to change output pressure, set point and load changes in a proportionally controlled system are accompanied by a steady-state error inversely proportional to the gain. For systems which, because of stability considerations, cannot tolerate high gains, the integral mode added to the proportional will eliminate the steady-state error since the output from this mode is continually varying so long as an error exists. The addition of the integral mode to a proportional controller has an adverse effect on the relative stability of the control because of the 90° phase lag introduced. Proportional-integral (PI) control is obtained by adding a positive feedback bellows and an adjustable restriction (Fig. 16.2.21). The addition of the positive feedback bellows cancels the gain reduction brought about by the negative feedback bellows at a rate determined by the adjustable restriction. Electronic controllers which are analogous to the pneumatic controller have been developed. They have the advantages of elimination of time lags; compatibility with computers; being less expensive to install (although more expensive to purchase); being more energy efficient; and being immune to low temperatures (water in pneumatic lines freezes).

Fig. 16.2.21 Proportional-integral controller. (Reproduced by permission. Copyright © John Wiley & Sons, Inc. Publishers, 1958. From D. P. Eckman, “Automatic Process Control.”)

The heart of the electronic controller is the high-gain operational amplifier with passive elements at the input and in the feedback. Figure 16.2.22 shows the classical control elements, though circuits in commercially available controllers are far more sophisticated. The current drawn by the dc amplifier is negligible, and the amplifier gain is around 1,000, so that the junction point is essentially at ground potential. The reset amplifier has delayed negative feedback, similar to the pneumatic circuit in Fig. 16.2.21. The derivative amplifier has advanced negative feedback, similar to Fig. 16.2.20. Gains are adjusted by changing the ratios of resistors or capacitors, while adjustable time constants are made up of a variable resistor and a capacitor (RC). Regardless of actuation, the commonly applied controller modes are as follows:


Designation      Transform                          Symbol
Proportional     K (gain)                           P
Reset            1 + 1/(τi s)                       I
Derivative       (τd s + 1)/[(τd/a)s + 1]           D
Floating         1/(τF s)                           F

The modes are combined as needed into P, PI, PID, PD, and F. Choice depends on the application and can be made from the Bode diagram. The block diagram for a PID controller is shown in Fig. 16.2.23. The derivative mode can be placed on the measured variable signal (as shown), on the error, or on the controller output. The arrangement of Fig. 16.2.23 is preferred for regulation since the derivative does not affect set-point changes, and the derivative acts to reduce overshoot on start-up (Zoss, “Applied Instrumentation in the Process Industries,” Vol. IV, Gulf Publishing Co.). Regardless of a sequence, the block diagram of a PID controller is commonly drawn as shown in Fig. 16.2.24.
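A minimal discrete-time sketch of the arrangement of Fig. 16.2.23, with the derivative mode acting on the measured variable, is shown below. The gains, sample interval, and the simple first-order process used to close the loop are assumed values chosen only for illustration.

# Sampled PID sketch: derivative on measurement, positional algorithm.
Kc, Ti, Td, dt = 2.0, 8.0, 1.0, 0.5      # gain, reset time, rate time, sample time (assumed)
setpoint, pv, out = 50.0, 20.0, 0.0
integral, pv_prev = 0.0, 20.0
K_process, tau_process = 1.0, 10.0       # assumed process static gain and time constant

for step in range(60):
    error = setpoint - pv
    integral += error * dt / Ti
    derivative = -Td * (pv - pv_prev) / dt      # derivative acts on the measured variable
    out = Kc * (error + integral + derivative)
    pv_prev = pv
    # simple first-order process response to the controller output
    pv += dt / tau_process * (K_process * out - pv)
    if step % 10 == 0:
        print("t = %4.1f   pv = %6.2f   out = %6.2f" % (step * dt, pv, out))

Because the derivative term uses the measurement rather than the error, a set-point change does not produce a derivative "kick," which is the advantage cited above for the arrangement of Fig. 16.2.23.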

Fig. 16.2.22 Simplified circuit for a typical electronic controller.


and the equal-percentage characteristic is Cv 5 Cv|max srdx21

(16.2.50)

where q  flow, gpm (m3/min); Cv|max  valve sizing coefficient; p  pressure drop across valve, lb/in2 (kPa); G  specific gravity (density, g/ml); r  valve rangeability; and x  stem position, normalized to 0 to 1. The flow characteristic relationship is not necessarily the lift-flow characteristic of the valve when installed since the valve is but one component in a piping system in which pressure drops vary with the flow rate. The change in characteristic as less percentage of total system drop is taken across the control valve is shown in Fig. 16.2.26.

Fig. 16.2.23 Block diagram of PID controller.

Fig. 16.2.24 Combined controller block diagram.

FINAL CONTROL ELEMENTS

The final control element is a mechanism which alters the value of the manipulated variable in response to an output signal from the automatic-control device. It will typically receive a signal from the controller and manipulate a flow of material or energy to the process. The final control element can be a control valve, an electrical motor, a servovalve, or a damper. Servovalves are discussed in the next section, “Hydraulic-Control Systems.” The final control element often consists of two parts: first, an actuator which translates the controller signal into a command for the manipulating device, and, second, a mechanism to adjust the manipulated variable. Figure 16.2.25 shows a control valve made up of an actuator and an adjustable orifice.

Fig. 16.2.26 Control valve linear flow characteristic. (Reproduced by permission. Copyright © John Wiley & Sons, Inc. Publishers, 1958. From D. P. Eckman, “Automatic Process Control.”)

Valve characteristics are generally selected according to their ability to compensate for nonlinearities in the system. Nonlinearities typically represent a change in gain which in turn changes the character of the control response for a given controller setting when load- or set-point changes occur or when there is a variable overall pressure drop across the system. Equal-percentage valves, for example, tend to control over widely varying operating conditions, since the change in flow is always proportional to flow rate. The selection of the proper valve therefore depends on study of the particular system.

HYDRAULIC-CONTROL SYSTEMS

Fig. 16.2.25 Control valve and pneumatic actuator. (Reproduced by permission. Copyright © John Wiley & Sons, Inc. Publishers, 1958. From D. P. Eckman, “Automatic Process Control.”)

The most commonly used actuator is a diaphragm motor in which the output pressure from the controller is counteracted not only by the spring but also by fluid forces in the valve body. The latter may cause serious deviation from linear static behavior with deleterious control effects. High friction at the valve stem or large unbalanced fluid forces at the plug can be overcome by valve positioners, which are essentially proportional controllers. A control valve in liquid service is described by Eq. (16.2.48):

q = C_v √(Δp/G)    (16.2.48)

And C_v is described by its relationship to stem lift x, which is called the valve-flow characteristic. The linear characteristic is

C_v = C_v|max x    (16.2.49)

Hydraulic systems are used for rapid-response servomechanisms at high power levels. Operating system pressures are from 50 to 100 lb/in² for slower-acting systems and up to 5,000 lb/in² where lightweight and fast responses are required. Compared with electrical systems the major advantages are a rapid response in the large horsepower ranges and the capability of operating at high power-density levels since the fluid can transmit dissipated energy away from the point of generation. Compared with pneumatic systems, hydraulic systems are faster because the fluid is essentially incompressible. Major disadvantages are vulnerability to dirt, since the components generally require close machining tolerances, and the danger of fire and explosion resulting from the flammability of the hydraulic fluids used. The direction and volume of flow are controlled by servo valves in the system. They may be single-stage (pilot-operated) and mechanically or electrically actuated. A schematic of a spool-type four-way single-stage control piston and inertia load is shown in Fig. 16.2.27. Hydraulic fluid at constant pressure enters at the supply port. With displacement of the spool valve downward, for example, inflow to the top side of the piston moves the piston downward. Because of machining tolerances the spool dimensions are either larger (overlapped) or smaller (underlapped) than the port dimension. Underlapped valves permit leakage to the piston in the centered position; overlapped valves result in a dead zone, where motion x results in no flow until a port is opened.


Fig. 16.2.27 Four-way valve-piston circuit. (Truxal, “Control Engineer’s Handbook,” McGraw-Hill.)

The transfer function of this circuit is given as (Truxal, “Control Engineers’ Handbook”)

y/x = [C₁/(1 + αC₁/C₂)] × 1 / { s [ (VM/(2BA²(1 + αC₁/C₂))) s² + ((C₁M/C₂ + Vα/(2BA²))/(1 + αC₁/C₂)) s + 1 ] }

where C₁ = servo velocity gradient, (in/s)/in; C₂ = servo force gradient, lb/in; α = viscous friction of load and piston, lb/(in/s); B = bulk modulus of fluid, lb/in²; M = mass of load and piston, lb/(in/s²); A = piston area, in²; and V = effective entrained fluid volume, in³ (one-half of total entrained volume between valve and piston). The velocity of the output is proportional to the input, resulting in a velocity-control servo. To convert this system to a position-control servo, mechanical, hydraulic, or electrical feedback may be employed. A valve-piston position servo with mechanical feedback is shown in Fig. 16.2.28. Any difference between the input D and the piston position y causes a motion x, which causes the piston to move in a direction opposite to D, that is, in a direction to reduce x. The lever ratio establishes the relationship between y and D.
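The quadratic factor in the transfer function above fixes the hydraulic natural frequency and damping of the valve-piston circuit. A minimal sketch evaluating it, with all numerical values assumed for illustration only:

```python
# Sketch evaluating the quadratic factor of the valve-piston transfer function
# to get the hydraulic natural frequency and damping ratio. The values of C1,
# C2, alpha, B, M, A, and V below are assumptions, not handbook data.
import math

C1, C2 = 50.0, 2000.0        # servo velocity gradient, servo force gradient
alpha = 2.0                  # viscous friction of load and piston
B = 2.0e5                    # bulk modulus of fluid, lb/in^2
M = 0.5                      # mass of load and piston, lb/(in/s^2)
A = 2.0                      # piston area, in^2
V = 20.0                     # effective entrained fluid volume, in^3

k = 1.0 + alpha * C1 / C2                                  # common factor
a2 = V * M / (2.0 * B * A * A * k)                         # coefficient of s^2
a1 = (C1 * M / C2 + V * alpha / (2.0 * B * A * A)) / k     # coefficient of s

wn = 1.0 / math.sqrt(a2)                                   # natural frequency, rad/s
zeta = a1 * wn / 2.0                                       # damping ratio
print("natural frequency = %.1f rad/s, damping ratio = %.3f" % (wn, zeta))
```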

Fig. 16.2.29 Two-stage electrohydraulic servo valve. The first stage is a four-way flapper valve with a calibrated pressure output driving a second-stage, spring-loaded, four-way spool valve. (Moog Servocontrols, Inc.)

In Fig. 16.2.30, P1, the supply pressure, is constant. Input motion of the flapper toward the nozzle increases P2 and drives the piston toward the right. The steady-state characteristic P2 vs. x is shown in Fig. 16.2.31.

Fig. 16.2.30 Flapper valve. (Raven.)

Fig. 16.2.28 Valve-piston position servomechanical feedback. (Truxal, “Control Engineer’s Handbook,” McGraw-Hill.)

Most commercially available servo valves are two stages, permitting electrohydraulic action. The pilot stage can be operated by a low-power, short-travel electrical device, with a concomitant increase in flexibility. A typical pilot-operated servo valve is shown in Fig. 16.2.29. In this case the pilot is a double-flapper valve rather than a spool valve. (In general, small, accurate low-leakage spool valves are costly.) Upward movement of the flapper by the actuating motor results in increased pressure to the right end of the power spool. Hydraulic feedback occurs because of the increased flow across restrictor a. The power spool moves to the left until the unbalanced pressure is matched by the spring resistance. The disadvantage of this valve is the continual leakage flow through the flapper nozzle, but the torque motor has a low-power requirement and is inexpensive. The major disadvantages of spool-type valves are (1) high cost, because of high-tolerance requirements between the valve lands, (2) high static friction and inertia, and (3) susceptibility to dirt. The flapper valve is less expensive to manufacture than a spool valve of equivalent characteristics and is not so susceptible to damage by dirt particles.

Fig. 16.2.31 Equilibrium curve of P2 versus x for a flapper valve. (Raven, “Automatic Control Engineering,” McGraw-Hill.)

STEADY-STATE PERFORMANCE

The steady-state error of a control system can be determined by using the final value theorem (see Laplace Transforms, Sec. 2), in which

lim (t→∞) θ_o(t) = lim (s→0) s θ_o(s)    provided θ_o(t) is stable

The classification of control systems according to the form of the open-loop transfer function facilitates the determination of the steady-state errors when the system is subjected to various inputs. The open-loop transfer function θ_o(s)/ε(s) = KG(s) may be written

KG(s) = θ_o(s)/ε(s) = K(1 + τ_a s)(1 + τ_b s)(1 + τ_c s)··· / [s^N (1 + τ_1 s)(1 + τ_2 s)(1 + τ_3 s)···]    (16.2.51a)


The system type is given according to the value of N as

Type 0 system    N = 0
Type 1 system    N = 1
Type 2 system    N = 2
Type 3 system    N = 3        (16.2.51b)

Error coefficients, based on a system with unity feedback [H(s) = 1], are defined as

Positional error constant    K_o = lim (s→0) KG(s)        (16.2.52a)
Velocity error constant      K_v = lim (s→0) s KG(s)      (16.2.52b)
Acceleration error constant  K_a = lim (s→0) s² KG(s)     (16.2.52c)
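These limits are easy to check numerically. The sketch below (not from the handbook; the gain and time constant are assumed) evaluates the three error constants of Eqs. (16.2.52a) to (16.2.52c) for a type 1 open-loop transfer function:

```python
# Sketch of Eqs. (16.2.52a)-(16.2.52c) for KG(s) = K/[s(tau*s + 1)], a type 1
# system; the limits as s -> 0 are approximated at a very small s.
K, tau = 10.0, 2.0
KG = lambda s: K / (s * (tau * s + 1.0))

s_small = 1e-6
Ko = KG(s_small)                  # positional error constant -> infinite
Kv = s_small * KG(s_small)        # velocity error constant   -> K
Ka = s_small**2 * KG(s_small)     # acceleration error constant -> 0

print("Ko ~ %.3g  Kv ~ %.3g  Ka ~ %.3g" % (Ko, Kv, Ka))
print("steady-state error to a unit ramp = 1/Kv = %.3f" % (1.0 / Kv))
```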

A summary of error coefficients for systems of different types is given in Table 16.2.1.

Table 16.2.1  Summary of Error Coefficients

N | K_o   | K_v   | K_a
0 | const | 0     | 0
1 | ∞     | const | 0
2 | ∞     | ∞     | const

A summary of the errors for type 0, 1, and 2 systems, when subjected to various inputs, is given in Table 16.2.2.

Table 16.2.2  Summary of Errors

N | θ_i(t) = A, error e_0/A | θ_i(t) = vt, error e_v/v | θ_i(t) = at², error e_a/a
0 | 1/(1 + K_o)             | ∞                        | ∞
1 | 0                       | 1/K_v                    | ∞
2 | 0                       | 0                        | 1/K_a

The higher the system type, the better is the output able to follow the higher degrees of input. Higher-type systems, however, are more difficult to stabilize, and a compromise must be made between the steady-state error and the settling time of the response.

CLOSED-LOOP BLOCK DIAGRAM

The transfer functions of the process, controller, and final control element can be combined with the measuring transducer into a block diagram of the complete control loop. Consider the thermal system of Fig. 16.2.5 and the heat balance on the vessel jacket:

M_j c dT_w/dt = w_w c(T_w1 − T_w)    (16.2.53)

Linearizing and rearranging:

(M_j/w_w) T_w s + T_w = [(T_w1 − T_w)/w_w][w_w/x] x    (16.2.54)

where M_j = weight of cooling water in jacket, lb (kg); c = specific heat of water, Btu/(lb·°F) [J/(kg·°C)]; T = temperature, °F; x = valve stem position (normalized 0 to 1); and w_w = water flow, lb/min (kg/min). For simplicity, let the jacket time constant M_j/w_w be so small as to be negligible (not typically the case). The block diagram of Fig. 16.2.32 then represents process diagram Fig. 16.2.5. The gain K in each case is an incremental change in output divided by an incremental change in input. For example, if the temperature transmitter has a temperature span of 100°F and an output of 4 to 20 mA, the gain would be

K_t = (20 − 4)/100 = 16/100 = 0.16 mA/°F

Fig. 16.2.32 Block diagram of a thermal system.

The open-loop gain is defined as the product of all the gains in the control loop:

K_o = K_c K_v K_T [(T_w1 − T_w)/x][UA/(w_c + UA)]

Combining the closed-loop system of Fig. 16.2.32 results in the closed-loop equation

T = {[w_c/(w_c + UA)] T_0/(τs + 1)} / {1 + K_o(τ_i s + 1)(τ_d s + 1)/[τ_i s(τs + 1)((τ_d/α)s + 1)(τ_T s + 1)]}    (16.2.55)

The time-dependent modes on the controller have been designed to compensate for the dynamics of the rest of the loop. Let τ_i = τ and τ_d = τ_T. After cancellation, Eq. (16.2.55) becomes

T = {[w_c/(w_c + UA)] T_0/(τs + 1)} / {1 + K_c/[τ_i s((τ_d/α)s + 1)]}    (16.2.56)

Equation (16.2.56) can be multiplied out to become

T = [w_c/(w_c + UA)](τ_i s)[(τ_d/α)s + 1] T_0 / {(τs + 1)[(τ_i τ_d/α)s² + τ_i s + K_c]}    (16.2.57)
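A minimal sketch using the closed-loop characteristic polynomial of Eq. (16.2.57) to check stability and the equivalent second-order parameters; the process and controller values below are assumptions for illustration only:

```python
# Sketch: roots of (tau*s + 1)[(ti*td/alpha)s^2 + ti*s + Kc] = 0 from
# Eq. (16.2.57), with assumed process and controller parameters.
import numpy as np

tau, ti, td, alpha, Kc = 4.0, 4.0, 0.5, 10.0, 8.0   # assumed, consistent time units

quad = np.array([ti * td / alpha, ti, Kc])           # quadratic factor
char_poly = np.polymul([tau, 1.0], quad)             # multiply by (tau*s + 1)
roots = np.roots(char_poly)
print("closed-loop poles:", np.round(roots, 3))
print("stable:", all(r.real < 0 for r in roots))

# equivalent second-order parameters of the quadratic factor (Eq. 16.2.58 form)
tc = np.sqrt(ti * td / (alpha * Kc))
zeta = ti / (2.0 * Kc * tc)
print("tau_c = %.3f, zeta = %.3f" % (tc, zeta))
```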

Table 16.2.3  Process Characteristics versus Mode of Control

Number of process capacities | Process reaction rate | Process time lags: resistance-capacitance (RC) | Dead time (transportation) | Load changes: size | Load changes: speed | Suitable mode of control
Single | Slow | Moderate to large | Small | Any | Any | Two-position; two-position with differential gap
Single (self-regulating) | Fast | Small | Small | Any | Slow | Multiposition; proportional input
Multiple | Slow to moderate | Moderate | Small | Small | Moderate | Proportional position
Multiple | Moderate | Any | Small | Small | Any | Proportional plus rate
Multiple | Any | Any | Small to moderate | Large | Slow to moderate | Proportional plus reset
Multiple | Any | Any | Small | Large | Fast | Proportional plus reset plus rate
Any | Any | Faster than that of the control system | Small or nearly zero | Small to moderate | Any | Wideband proportional plus fast reset
Any | Any | Any | Moderate | Any | Slow | Floating modes: single-speed, multispeed, proportional-speed floating

SOURCE: Considine, “Process Instruments and Controls Handbook,” McGraw-Hill.

Equation (16.2.57) can be written in the standard form of Eqs. (16.2.16) and (16.2.21):

T = [w_c/(w_c + UA)](τ_i s)[(τ_d/α)s + 1] T_0 / [K_c(τs + 1)(τ_c² s² + 2ζτ_c s + 1)]    (16.2.58)

Controller gain K_c can then be chosen to give the best response τ_c with stability ζ. More rigorous techniques for determining controller parameter values are shown under Bode diagrams and Routh’s criterion. Table 16.2.3 provides preliminary guidance for the selection of control modes.

FREQUENCY RESPONSE

Although it is the time response of the control system that is of major importance, study of the effect on transient response of changes in system parameters, either in the process or controller, is more conveniently made from a frequency-response analysis of the system. The frequency response of a system is the steady-state output of the system to input sinusoids of varying frequency. The output for a linear system can be completely described in terms of the amplitude ratio of the output sinusoid to the input sinusoid. The amplitude ratio or gain, and phase, are functions of the frequency of the input sinusoid. The use of sinusoidal methods to analyze and test dynamic systems has gained widespread popularity because system response is obtained easily from the response of the individual elements, no matter how many elements are included. By contrast, transient analysis is quite tedious, with only three dynamic components in the system, and is too difficult to be worthwhile for four or more components. Frequency-response analysis provides quantitative information concerning the system on maximum gain Ko for stability, time response tc , vn, and the controller parameter adjustments ti , td . In most cases, this information is adequate for assurance of stability, for comparing alternatives, or for judging the merits of a proposed control system. System parameters are generally obtained from open-loop frequency response, which is the response with the loop broken between the controller and the final control element (Fig. 16.2.32). Frequency response also lends itself to system identification by testing for systems not readily amenable to mathematical analysis by subjecting the system to input sinusoids of varying frequency.

The shift from the complex domain to the frequency domain (linear systems, zero initial conditions) is accomplished by simply substituting jω for s. Consider Eq. (16.2.38):

(τs + 1)T = K₁T₀

Substituting,

T = [K₁/(τjω + 1)] T₀    (16.2.59)

Multiplying by the complex conjugate,

T = K₁ [(τjω − 1)/((τjω + 1)(τjω − 1))] T₀

Collecting terms,

T = K₁ [1/(τ²ω² + 1) − j τω/(τ²ω² + 1)] T₀    (16.2.60)

The first term is the real part of the solution, and the second term, the imaginary part. These terms define a vector, shown in Fig. 16.2.33,

Fig. 16.2.33 Polar plot showing vector.


which has magnitude M and phase angle θ as determined either graphically or by the Pythagorean theorem:

M = [1/(τ²ω² + 1)]^(1/2)    (16.2.61)

θ = tan⁻¹(−τω/1)    (16.2.62)

GRAPHICAL DISPLAY OF FREQUENCY RESPONSE

Sinusoidal response is plotted in three different ways: (1) the rectangular plot with the log of the amplitude versus the log of the frequency, called a Bode plot; (2) a phase-margin plot with magnitude shown versus a function of phase with frequency as a parameter, called a Nichols plot; and (3) a polar plot with magnitude and phase shown in vector form with frequency as a parameter, called a Nyquist plot. The Nichols plot is actually a plot of phase margin (180° + θ) versus frequency and is used to define system performance. The polar plot enables the absolute stability of the system to be determined without the need for obtaining the roots of the characteristic equation. The logarithmic (Bode) plot has the advantage of ease in plotting, especially in design, since the individual effects of cascaded elements can be gaged by superposition. All the graphical procedures use the “minus 1” point [discussed with Eq. (16.2.38)] as the criterion for stability (magnitude = 1, phase = −180°).

Fig. 16.2.35 Typical (uo/e)/( jv) plot.

NYQUIST PLOT

The Nyquist diagram is a graphical method of determining stability. The diagram is essentially a mapping into the G(s) plane of a contour enclosing the major portion of the right half of the s plane. Figure 16.2.34 shows the Nyquist plot of a typical system.

Fig. 16.2.36 Typical Nyquist diagram showing loop transfer function |uo /ui|. (ASME, “Terminology for Automatic Control,” Pub. ASA C85.1-1963.)

Fig. 16.2.34 Nyquist diagram for G(s) = 1/[s(s + a)(s + b)]. (Truxal, “Control System Synthesis,” McGraw-Hill.)

The frequency response of a closed-loop system can also be derived from the polar plot of the direct transfer function KG(s). Systems with dynamic elements in the feedback can be transformed into direct systems by block-diagram manipulation. In Fig. 16.2.35, the amplitude ratio uo /ui at any frequency v is the ratio of the lengths of the vectors Ob and ab. The angle formed by the vectors ab and Ob is the phase angle of the frequency response ∠uo /ui(jv). The numerator of the transfer function is a constant representing the closed-loop gain. Changing the gain proportionately changes the length of the vector Ob. Figure 16.2.36 shows another typical Nyquist plot. The Nyquist diagram is widely used in the design of electrical systems, but plots are very tedious to make and are generated via computer programs.

BODE DIAGRAM

The most common method of presenting the response data at various frequencies is to use the log-log plot for amplitude ratios, accompanied by a semilog plot for phase angles. These rectangular plots are called Bode diagrams, after H. W. Bode, who did basic work on the theory of feedback amplifiers. A major advantage of this method is that the numerical computation of points on the curve is simplified by the fact that log magnitude versus log frequency can be approximated to engineering accuracy with straight-line approximations. In the sinusoidal testing of a system, the system dynamics will be faster than a very low frequency test signal, and the output amplitude has time to recover fully to the input amplitude. Thus the magnitude ratio is 1. Conversely, at high frequency, the system does not have time to reach equilibrium, and output amplitude will decrease. Magnitude ratio goes to a very small value under this condition. Thus the system will show a decrease in magnitude ratio as frequency increases. By similar reasoning, the difference


between input and output phase angles becomes progressively more negative as frequency increases. Consider the equations for magnitude, Eq. (16.2.61), and phase, Eq. (16.2.62). If τω ≪ 1,

M = (1/1)^(1/2) = 1        θ = tan⁻¹(0) = 0

and if τω ≫ 1,

M = [1/(τ²ω²)]^(1/2) = 1/(τω) ≈ 0        θ = tan⁻¹(−τω/1) ≈ −90°

The two straight-line asymptotes of magnitude, if extended, would intersect at τω = 1. Thus a relationship between frequency and time constant is established. The point at which τω = 1 is called the corner frequency, or the break frequency. The asymptote past the break frequency is easily plotted from a point at the break frequency ω = 1/τ and a point at 10 times the frequency and 0.1 times the magnitude ratio. The frequency response of systems containing different dynamic elements may be calculated by suitably employing the magnitude and phase characteristics of each element separately. Combined magnitudes are calculated

|M₁M₂| = |M₁| × |M₂|

Or, on the log-log Bode diagram,

Fig. 16.2.37 Combined transfer functions. (Reproduced by permission. Copyright © John Wiley & Sons, Inc. Publishers, 1958. From D. P. Eckman, “Automatic Process Control.”)

Table 16.2.4 lists additional frequency-response relationships for system elements. The noninteracting controller, item 8, has the advantage of no interaction among tuning parameters but has the disadvantages of being more difficult to plot on the Bode diagram and of not offering the direct compensation of the controller shown in Fig. 16.2.24. Figure 16.2.38 shows a typical combined Bode diagram. The ordinate has a scale in decibels in addition to the magnitude ratio scale. Decibels (dB) are sometimes used to provide a linear scale. They are defined

log |M₁M₂| = log |M₁| + log |M₂|

dB = 20 log M

The logarithmic scale permits multiplication by adding vertical distances. For phase

The discontinuity at the break frequency due to the meeting of the straight-line asymptotes will not be found in actual test data. Instead, the corner will be rounded, as shown dotted in Fig. 16.2.39. The error due to the approximation is some 3 dB at the break frequency, as shown in the figure. Quadratic terms (Table 16.2.4) also lend themselves to asymptotic approximation, but the form of the magnitude and phase plots around the break frequency depends on the damping coefficient z. The slope of the magnitude ratio attenuation past the break point is twice that of the first-order system.
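The roughly 3-dB error at the break frequency can be checked directly from Eq. (16.2.61). A minimal sketch (the time constant is an assumed value):

```python
# Sketch comparing the exact first-order magnitude ratio, Eq. (16.2.61), with
# its straight-line asymptotes; the error at omega = 1/tau is about 3 dB.
import math

tau = 2.0   # assumed time constant
for w in (0.1 / tau, 1.0 / tau, 10.0 / tau):
    M_exact = 1.0 / math.sqrt(tau**2 * w**2 + 1.0)
    M_asym = 1.0 if w <= 1.0 / tau else 1.0 / (tau * w)   # low/high-frequency asymptotes
    err_db = 20.0 * math.log10(M_asym / M_exact)
    print("omega = %6.3f rad/s  exact = %.3f  asymptote = %.3f  error = %.2f dB"
          % (w, M_exact, M_asym, err_db))
```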

∠θ₁θ₂ = ∠θ₁ + ∠θ₂

Thus phase addition is also made by adding vertical distances. Figure 16.2.37 shows the combination of the straight-line asymptotes for the following transfer function:

G(s) = 1/[τs(τs + 1)]    (16.2.63)

Fig. 16.2.38 Typical Bode diagram showing loop-transfer function B/E. (ASME, “Terminology for Automatic Control,” Pub. ASA C85.1-1963.)


Table 16.2.4  Frequency-Response Equations for Some Common Control-System Elements

1. Dead time: G(s) = e^(−T_L s); G(jω) = e^(−jωT_L); magnitude ratio = 1; phase angle = −ωT_L radians
2. First-order lag: G(s) = 1/(Ts + 1); G(jω) = 1/(jωT + 1); magnitude ratio = 1/√(ω²T² + 1); phase angle = −tan⁻¹(ωT)
3. Second-order lag: G(s) = 1/[(Ts + 1)(aTs + 1)]; G(jω) = 1/[1 − aω²T² + j(1 + a)ωT]; magnitude ratio = 1/√[(1 − aω²T²)² + (1 + a)²ω²T²]; phase angle = −tan⁻¹[(1 + a)ωT/(1 − aω²T²)]
4. Quadratic (underdamped): G(s) = 1/[(s/ω_n)² + (2ζ/ω_n)s + 1]; G(jω) = 1/[1 − (ω/ω_n)² + j2ζ(ω/ω_n)]; magnitude ratio = 1/√{[1 − (ω/ω_n)²]² + 4ζ²(ω/ω_n)²}; phase angle = −tan⁻¹{2ζ(ω/ω_n)/[1 − (ω/ω_n)²]}
5. Ideal proportional controller: G(s) = K; G(jω) = K; magnitude ratio = K; phase angle = 0
6. Ideal proportional-plus-reset controller (T_i = 1/r, r = reset rate): G(s) = K[1 + 1/(T_i s)] or K(T_i s + 1)/(T_i s); G(jω) = K[1 + 1/(jωT_i)]; magnitude ratio = K√[1 + (1/(ωT_i))²]; phase angle = −tan⁻¹[1/(ωT_i)]
7. Ideal proportional-plus-rate controller: G(s) = K(1 + T_d s); G(jω) = K(1 + jωT_d); magnitude ratio = K√(1 + ω²T_d²); phase angle = tan⁻¹(ωT_d)
8. Ideal proportional-plus-reset-plus-rate controller: G(s) = K[1 + T_d s + 1/(T_i s)]; G(jω) = K[1 + jωT_d + 1/(jωT_i)] or K(jωT_i − ω²T_d T_i + 1)/(jωT_i); magnitude ratio = [K/(ωT_i)]√[(ωT_i)² + (1 − ω²T_d T_i)²]; phase angle = tan⁻¹[ωT_d − 1/(ωT_i)]

SOURCE: Considine, “Process Instruments and Controls Handbook,” McGraw-Hill.
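The table entries can be verified by evaluating the complex frequency response directly. The following sketch checks item 8 of Table 16.2.4 at a few frequencies; the controller settings are assumed example values:

```python
# Sketch evaluating item 8 of Table 16.2.4 (proportional-plus-reset-plus-rate
# controller): compare the magnitude formula with |G(jw)| computed directly.
import cmath, math

K, Ti, Td = 2.0, 10.0, 1.0   # assumed settings
for w in (0.05, 0.5, 5.0):
    G = K * (1.0 + 1j * w * Td + 1.0 / (1j * w * Ti))
    mag_table = (K / (w * Ti)) * math.sqrt((w * Ti) ** 2 + (1.0 - w**2 * Td * Ti) ** 2)
    print("w = %4.2f  |G| = %.3f (table formula %.3f)  phase = %6.1f deg"
          % (w, abs(G), mag_table, math.degrees(cmath.phase(G))))
```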


Fig. 16.2.39 Bode plot of term (jωτ_a + 1).

CONTROLLERS ON THE BODE PLOT

Controller modes, gain, reset, and derivative are plotted individually (in the form of Fig. 16.2.24) on the Bode diagram and summed as required. The functions for reset (integral) and derivative modes are moved horizontally on the diagram to determine proper parameter adjustment. The combined magnitude ratio curve for the whole system is moved vertically to determine open-loop gain. After all the dynamic elements of the system are plotted and combined, reset and derivative are set in accordance with the following


criteria:
Reset: The reset phase curve is positioned horizontally so that it contributes 10° of phase lag where the system phase lag is 170°.
Derivative: The derivative phase curve is positioned horizontally so that it contributes 60° of phase lead where the system phase lag is 180°.
The magnitude ratio curves are positioned with the phase-angle curves, and parameter values for reset and derivative can be read at the break points.

The Nyquist Stability Criterion   The KG(jω) locus for a typical single-loop automatic-control system plotted for all positive and negative frequencies is shown in Fig. 16.2.40. The locus for negative values of ω is the mirror image of the positive-ω locus in the real axis. To complete the diagram, a semicircle (or full circle if the locus approaches ∞ on the real axis) of infinite radius is assumed to connect, in a positive sense, the positive-frequency locus at ω → 0 with the negative-frequency locus at ω → 0. If this locus is traced in a positive sense from ω → ∞ to ω → 0, around the circle at ∞, and then along the negative-frequency locus, the following may be concluded: (1) if the locus does not enclose the −1 + j0 point, the system is stable; (2) if the locus does enclose the −1 + j0 point, the system is unstable. The Nyquist criterion can also be applied to the log magnitude of KG(jω) and phase-vs.-log ω diagrams. In this method of display, the criterion for stability reduces to the requirement that the log magnitude of KG(jω) must cross the 0-dB axis at a frequency less than the frequency at which the phase curve crosses the −180° line. Two stability conditions are illustrated in Fig. 16.2.41. The Nyquist criterion not only provides a simple test for the stability of an automatic-control system but also indicates the degree of stability of the system by indicating the degree to which the KG(jω) locus avoids the −1 + j0 point.

STABILITY AND PERFORMANCE OF AN AUTOMATIC CONTROL

Fig. 16.2.41 Nyquist stability criterion in terms of log magnitude KG( jv) diagrams. (a) Stable; (b) unstable. (Porter, “An Introduction to Servomechanism,” Wiley.)

An automatic-control system is stable if the amplitude of transient oscillations decreases with time and the system reaches a steady state. The stability of a system can be evaluated by examining the roots of the differential equation describing the system. The presence of positive real roots or complex roots with positive real parts dictates an unstable system. Any stability test utilizing the open-loop transfer function or its plot must utilize this fact as the basis of the test.
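A minimal sketch of the root test just described, with an assumed characteristic polynomial (not from the handbook):

```python
# Sketch: examine the roots of an assumed closed-loop characteristic equation
# s^3 + 4s^2 + 6s + K = 0 for positive real parts at two gains.
import numpy as np

for K in (10.0, 40.0):
    roots = np.roots([1.0, 4.0, 6.0, K])
    unstable = any(r.real > 0 for r in roots)
    print("K = %4.1f  roots = %s  %s"
          % (K, np.round(roots, 2), "UNSTABLE" if unstable else "stable"))
```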

The concepts of phase margin and gain margin are employed to give this quantitative indication of the degree of stability of an automatic-control system. Phase margin is defined as the additional negative phase shift necessary to make the phase angle of the transfer function −180° at the frequency where the magnitude of the KG(jω) vector is unity. Physically, phase margin can be interpreted as the amount by which the unity KG vector has to be shifted to make a stable system unstable.

Fig. 16.2.40 Typical KG( jv) loci illustrating application of Nyquist’s stability criterion. (a) Stable; (b) unstable.


In a similar manner, gain margin is defined as the reciprocal of the magnitude of the KG vector at −180°. Physically, gain margin is the number by which the gain must be multiplied to put the system at the limit of stability. Thaler suggests satisfactory results can be obtained in most control applications if the phase margin is between 40° and 60° while the gain margin is between 3 and 10 (10 to 20 dB). These values will ensure a small transient overshoot with a single cycle in the transient. The margin concepts are qualitatively illustrated in Figs. 16.2.40 and 16.2.41.
Routh’s Stability Criterion   The frequency-response equation of a closed-loop automatic control is

θ_o/θ_i = KG(jω)/[1 + KG(jω)]    (16.2.64)

The characteristic equation obtained therefrom has the algebraic form

A(jω)ⁿ + B(jω)ⁿ⁻¹ + C(jω)ⁿ⁻² + ··· = 0    (16.2.65)
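As a concrete illustration of the test developed in the paragraphs below, the following minimal sketch builds the Routh array for an assumed characteristic polynomial and counts sign changes in the left-hand column:

```python
# Sketch of the Routh test: build the array for a polynomial (coefficients given
# highest power first) and count sign changes in the left-hand column. The
# example polynomial is assumed.
def routh_sign_changes(coeffs):
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]      # first two rows
    width = len(rows[0])
    rows[1] += [0.0] * (width - len(rows[1]))
    for _ in range(len(coeffs) - 2):
        prev2, prev1 = rows[-2], rows[-1]
        new = [(prev1[0] * prev2[j + 1] - prev2[0] * prev1[j + 1]) / prev1[0]
               for j in range(width - 1)]
        rows.append(new + [0.0] * (width - len(new)))
    first_col = [r[0] for r in rows if r[0] != 0.0]
    return sum(1 for a, b in zip(first_col, first_col[1:]) if a * b < 0)

# s^3 + 4s^2 + 6s + 40 = 0: two sign changes -> two right-half-plane roots
print("sign changes:", routh_sign_changes([1.0, 4.0, 6.0, 40.0]))
```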

Fig. 16.2.42 Ideal sampler, showing continuous input and sampled output.

The purpose of Routh’s method is to determine the existence of roots of this equation which are positive or which are complex with positive real parts and thus identify the resulting instability. To apply the criterion the coefficients are written alternately in two rows as

A   C   E   G
B   D   F   H

This array is then expanded to

A    C    E    G
B    D    F    H
a1   a2   a3
b1   b2   b3
g1   g2

where a1, a2, a3, b1, b2, b3, g1, and g2 are computed as

a1 = (BC − AD)/B        b1 = (Da1 − Ba2)/a1        g1 = (b1a2 − a1b2)/b1
a2 = (BE − AF)/B        b2 = (Fa1 − Ba3)/a1        g2 = (b1a3 − a1b3)/b1
a3 = (BG − AH)/B        b3 = (Ha1 − B·0)/a1

When the array has been computed, the left-hand column (A, B, a1, b1, g1) is examined. If the signs of all the numbers in the left-hand column are the same, there are no positive real roots. If there are changes in sign, the number of positive real roots is equal to the number of changes in sign. It should be recognized that this is a test for instability; the absence of sign changes does not guarantee stability.

SAMPLED-DATA CONTROL SYSTEMS

Definition   Sampled-data control systems are those in which continuous information is transformed at one or more points of the control system into a series of pulses. This transformation may be performed intentionally, e.g., the flow of information over long distances to preserve the accuracy of the data during the transmission, or it may be inherent in the generation of the information flow, e.g., radiating energy from a radar antenna which is in the form of a train of pulses, or the signals developed by a digital computer during a direct digital control of machine-tool operation. Methods of analysis analogous to those for continuous-data systems have been developed for the sampled-data systems. Discussed herein are (1) sampling, (2) the z transformation, (3) the z-transfer function, and (4) stability of sampled-data systems.
Sampling   The ideal sampler is a simple switch (Fig. 16.2.42) which is closed only instantaneously and opens and closes at a constant frequency. The switch, which may or may not be a physical component in a sampled-data feedback system, indicates a sampled signal. Such a sampled-data feedback system is shown in Fig. 16.2.43. The error signal in continuous form is P(t), and the sampled error signal is P*(t). Figure 16.2.42 shows the relationship between these signals in a graphical form.

Fig. 16.2.43 Sampled-data feedback system.

z Transformation   In the analysis of continuous-data systems, it has been shown that the Laplace transformation can be used to reduce ordinary differential equations to algebraic equations. For sampled-data systems, an operational calculus, the z transform, can be used to simplify the analysis of such systems. Consider the sampler as an impulse modulator; i.e., the sampling modulates an infinite train of unit impulses with the continuous-data variable. Then the Laplace transformation can be shown to be

F*(s) = Σ (n = 0 to ∞) f(nT) e^(−nTs)    (16.2.66)

(See A. W. Langill, “Automatic Control Systems Engineering.”) In terms of the z transform, this becomes

F*(s) = F(z) = Σ (n = 0 to ∞) f(nT) z^(−n)    (16.2.67)

where zⁿ = e^(nTs). Tables in Sec. 2 list the Laplace transforms for a number of continuous time functions. Table 16.2.5 lists the z transforms for some of these functions. It should be noted that unlike the Laplace transforms, the z-transform method imposes the restriction that the sampled-data-system response can be determined only at the sampling instant. The same z transform may apply to different time functions which may have the same value at the instant of sampling.

Table 16.2.5  Laplace and z Transforms

Time function | f(t) | F(s) = ℒ[f(t)] | F*(z) = F(z)
Unit ramp function | t | 1/s² | Tz/(z − 1)²
Unit acceleration function | t²/2 | 1/s³ | (T²/2) z(z + 1)/(z − 1)³
Exponential function | e^(−at) | 1/(s + a) | z/(z − e^(−aT))
Sinusoidal function | sin bt | b/(s² + b²) | (z sin bT)/(z² − 2z cos bT + 1)
Cosinusoidal function | cos bt | s/(s² + b²) | z(z − cos bT)/(z² − 2z cos bT + 1)
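One entry of Table 16.2.5 can be checked numerically by summing the defining series of Eq. (16.2.67) for a sampled exponential; the values of a, T, and z below are assumed:

```python
# Sketch: z transform of e^(-a*n*T) as a truncated series vs. the closed form
# z/(z - e^(-a*T)) from Table 16.2.5. Values of a, T, and z are assumed.
import math

a, T = 0.5, 0.2
z = 1.5   # any |z| > e^(-a*T) so the series converges

series = sum(math.exp(-a * n * T) * z ** (-n) for n in range(200))
closed_form = z / (z - math.exp(-a * T))
print("series = %.6f   closed form = %.6f" % (series, closed_form))
```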

Table 16.2.6  Typical Block Diagrams of Sampled-Data Control Systems and Their Transforms

(For each sampled-data loop configuration, shown as a block diagram, the table gives the Laplace transform of the output, C(s), and the z transform of the output, C*(z).)

SOURCE: John G. Truxal (ed.), “Control Engineers’ Handbook,” McGraw-Hill.

The z-transform function is not defined, therefore, in a continuous sense, and the inverse z transform is not unique.
z-Transfer Function   The ratio of the sampled output function of a discrete network to the sampled input function is the z-transfer function. A discrete network is one which has both a sampled input and output. A table of block diagrams of a number of sampled-data control systems with their associated transfer functions is presented in Table 16.2.6.
Stability of Sampled-Data Systems   The stability of sampled-data systems can be demonstrated utilizing frequency-response methods which have been discussed in this section. See “Stability and Performance of an Automatic Control.” The Nyquist stability criterion again applies with the same conclusions relative to the −1 point, and the methods of generalizing the open- and closed-loop frequency-response plots remain the same.

MODERN CONTROL TECHNIQUES

Many engineers and researchers have been actively pursuing the application and development of advanced control strategies. In recent years this effort has included extensive work in the area of robust

control theory. One of the primary attractions of this theory is that it generalizes the single-input, single-output (SISO) concepts of gain and phase margins, and their effect on system sensitivity, to multivariable control systems. In this section the linear quadratic gaussian with loop transfer recovery (LQG/LTR) robust controller design method will be summarized. The concepts required to understand this method will be reviewed. A linearized model of a simple process has been chosen to illustrate this technique. The simple process has three state variables, one input, and one output. Three control system design methods are compared: LQG, LQG/LTR, and a proportional-plus-integral (PI) controller.
Previous Work and Results

Most applications of modern control techniques to process control have appeared in the form of optimal control strategies which were in the form of full-state feedback control with and without state estimation [linear quadratic regulator (LQR)]. The regulator problem, coupled with a state estimation problem, was shown not to have desirable robustness properties. Doyle and Stein (Doyle and Stein, 1979) proposed the multivariable robust design philosophy known as LQG/LTR to eliminate the


shortcomings of the LQR and LQG design techniques. The LQG/LTR design technique has been applied to control submersible vehicles, engine speed, helicopters, ship steering, and large, flexible space structures. The LQG/LTR method requires the plant to be minimum phase, which cannot be guaranteed in a process plant. Typically, nonminimum-phase conditions in a process plant are due to time delay. The LQG/LTR design procedure offers no guarantees when a nonminimum-phase plant is used. However, recent work develops a method to obtain a robust controller design for plants with time delay (Murphy and Bailey, IEEE 1989 and IECON 1990). The LQG/LTR synthesis method for minimum phase plants achieves desired loop shapes and maximum robustness properties in the design of feedback control systems. The LQG/LTR design procedure uses singular value analysis and design procedures to obtain the desired performance and stability robustness. The use of singular values yields a frequency-domain design and analysis method generally expected by control engineers to determine system robustness. The LQG/LTR synthesis method is applicable to both multiple-input/multiple-output (MIMO) and SISO control systems. The performance limitations of this design method are analogous to those of a SISO system.

Definition of Robustness

A system is considered to be robust and have good robustness properties if it has a large stability margin, good disturbance attenuation, and/or low sensitivity to parameter variations. The term stability margin refers to the gain margin and the phase margin, which are quantitative measures of stability. A proven method of obtaining good robustness properties is the use of a feedback control system, which can be designed to allow for variations in the system dynamics. Some causes of variation in system dynamics are:
Modeling and data errors in the nominal plant and system.
Changes in environmental conditions, manufacturing tolerances, wear due to aging, and noncritical material failures.
Errors due to calibration, installation, and adjustments.
Feedback control systems with good feedback properties have been synthesized for SISO systems. Classical frequency domain techniques such as Nyquist, root-locus, Bode, and Nichols plots have been used to obtain the feedback control system for the SISO system. These design techniques have allowed the synthesis of feedback control systems yielding insensitivity to bounded parameter variations and a large stability margin. The success of the feedback control system for the SISO system has led to the direct extension of the classical frequency domain technique to the design of a multivariable feedback control system. This extension to the multivariable design problem examines an individual feedback loop as the phase and gain margins are varied, while the nominal phase and gain values in the remaining feedback paths are held constant. This technique, however, fails to consider the results of simultaneous variation of gain and phase in all paths, which is a real-world possibility and needs to be considered. A method of obtaining a feedback control system with good robustness properties that takes into consideration simultaneous gain and phase variation is the LQG/LTR. The LQG/LTR technique can be applied to MIMO systems or SISO systems. The LQG/LTR technique not only has the good robustness properties of the classical frequency domain techniques but also is capable of minimizing the effects of unmodeled high-frequency dynamics, neglected nonlinearities, and a reduced-order model. Computer control system design tools, for instance, developed by Integrated Systems Incorporated (ISI) and The MathWorks can be used in synthesizing the LQG/LTR technique and other controller design techniques. The use of computer-aided design tools eliminates the numerical programming burden of programming the complex LQG/LTR algorithm. In what follows, the needed concepts are defined and the LQG/LTR procedure for development of a model-based compensator (MBC) is described. A unity-feedback MBC is selected for the controller structure, because of its similarity to the SISO unity-feedback control system well known to classical control designers. The MBC closed-loop control system has proven to offer great practical considerations in the design of automatic control systems.

Robustness Concepts of the LQG and LQG/LTR Control Systems

The LQG/LTR design procedure is based on the system configuration of the LQG controller shown in Fig. 16.2.44. The LQG controller consists of a Kalman filter state estimator and a linear quadratic regulator. The Kalman filter state estimator has good robustness properties for plant perturbations at the plant output. The linear quadratic regulator (LQR) has good robustness properties for perturbations at the plant input. Even though its components separately have good robustness properties, the LQG controller is found to have no guaranteed robustness properties at either the input (point 2) or the output (point 3) of the plant.

Fig. 16.2.44 Summary of the robustness properties of the LQG block diagram.

The LQG/LTR design procedure allows us to recover robustness properties at either the input or the output of the plant. If robustness is desired at the input to the plant, first a nominal robust LQR design is made to satisfy the design constraints. Next, an LTR step is made to design a Kalman filter gain that recovers the robustness at the input to the plant of the LQG controller that is approximately that of the nominal LQR design. This implies from Fig. 16.2.44 that the robustness properties at points 1 and 2 are approximately the same. If robustness is desired at the output of the plant, first a nominal robust Kalman filter design is made to satisfy the performance constraints. Next, an LTR step is made to design an LQR gain that recovers the robustness at the output of the plant that is approximately that of the nominal Kalman filter design. This implies from Fig. 16.2.44 that the robustness properties at points 3 and 4 are approximately the same. The block diagram of the unity-feedback MBC is shown in Fig. 16.2.45. This control system structure allows tracking and regulation of a reference input at the output of the plant. The filter and regulator gains used in the controller R(s) are obtained appropriately, depending on whether robustness is desired at the input or the output of the plant. Note that the robustness properties of the unity-feedback MBC (UFMBC) are the same as those of the LQG system, since the UFMBC is just an alternative structure of the LQG system.


Fig. 16.2.45 Unity-feedback model-based compensator (UFMBC).

MATHEMATICS AND CONTROL BACKGROUND

The use of matrix algebra is very important in the study of MIMO control system design methods. Some MIMO control system design methods use matrix algebra to decouple the system so that the plant matrix transfer function consists only of individual decoupled transfer functions, each represented conventionally as the ratio of two polynomials. This simplification then allows the use of conventional methods such as the Bode plot, Nyquist plot, and Nichols chart as a means of designing a controller for each loop of the MIMO plant. The LQG/LTR control system design method, however, does not require decoupling of the plant but rather preserves the coupling between the individual loops of the MIMO plant. Each step of the LQG/LTR design technique therefore requires the use of some matrix algebra, vector and matrix norms, and singular values during the control system design procedure. For instance, matrix algebra is used to derive a suitable representation of the closed-loop system for stability and robustness analysis. Matrix norms and singular values are used to examine the matrix magnitude of the MIMO transfer function. The matrix magnitude or singular values of a system allows preservation of the coupling between the various loops of the MIMO control system. The concepts of matrix norm, singular values, controllability, observability, stabilizability, and detectability are very important in applying MIMO design procedures. It is therefore important that these concepts be well understood. The purpose of this section is to provide background material in mathematics and control systems that has proved useful in deriving the procedures and analysis required to implement the control system design procedure.

Matrix Norm and Singular Values

The matrix norm ||A||₂ is defined for the m × n matrix A. The matrix A defines a linear transformation from the vector space V, called the domain, to a vector space W, which is called the range. Thus the linear transformation

A(x) = Ax    (16.2.68)

transforms a vector x in V = Cⁿ into a vector A(x) in W = Cᵐ. The matrix norm ||A||₂ is defined as

||A||₂ = max (x ≠ 0) ||Ax||₂/||x||₂ = l₂ norm    (16.2.69)

Since the l₂ norm is a useful tool in control system analysis, further insight into its definition is given. The l₂ norm of Eq. (16.2.69) can be written as

||A||₂ = max_i [λ_i(AᴴA)]^(1/2)    i = 1, 2, . . . , n    (16.2.70)

||A||₂ = max_i [λ_i(AAᴴ)]^(1/2)    i = 1, 2, . . . , n    (16.2.71)

The matrices AᴴA and AAᴴ are Hermitian and positive semidefinite. The eigenvalues of these matrices are real and nonnegative. The definition of the singular value of a complex matrix A ∈ Cⁿˣⁿ is

σ_i(A) = [λ_i(AᴴA)]^(1/2) = [λ_i(AAᴴ)]^(1/2) ≥ 0    i = 1, 2, . . . , n    (16.2.72)

If A ∈ Cᵐˣⁿ, which indicates that A is a nonsquare matrix, then

σ_i(A) = [λ_i(AᴴA)]^(1/2) = [λ_i(AAᴴ)]^(1/2)    (16.2.73)

for 1 ≤ i ≤ k, where k is the number of singular values = min (m, n). The maximum and minimum singular values are defined, respectively, as

σ̄(A) = ||A||₂    (16.2.74)

and

σ̲(A) = min (x ≠ 0) ||Ax||₂/||x||₂ = 1/||A⁻¹||₂    (16.2.75)

provided A has an inverse.

Controllability, Observability, Stabilizability, and Detectability

The LQG/LTR control system design method has a certain requirement of the plant for which the controller is being designed: the plant must be at least stabilizable and detectable. Therefore it is important that the conditions of controllability, observability, and the less restricted conditions of stabilizability and detectability be understood. A description of these concepts follows. Given the state equation for the linear time-invariant (LTI) system described as ẋ(t) = Ax(t) + Bu(t) and y(t) = Cx(t), the matrices A, B, and C are constant. Note that x ∈ Rⁿ, u ∈ Rᵖ, and y ∈ Rᵖ are, respectively, the state, input, and output vectors. Variables n and p are the dimensions of the state and the input and output vectors. The input and output vectors are of the same dimension. A system is said to be controllable if it is possible to transfer the system from any initial state x₀ in state space to any other state x_F using the input in a finite period of time. The system is said to be uncontrollable otherwise. The condition of controllability places no constraint on the input in the system. The algebraic test for controllability of the LTI system state equation uses the controllability matrix

Q = [B  AB  . . .  A^(n−1)B]    (16.2.76)

where n is the number of states. The LTI system is controllable if the rank of Q is equal to n. If the rank of Q is less than n, the system is not controllable. An uncontrollable system indicates that some modes or poles are not affected by the control input. Uncontrollable systems with unstable modes imply that a controller design is not possible to ensure system stability. If the uncontrollable modes of the uncontrollable system are stable, then the system is stabilizable. An LTI system is considered observable at initial time t₀ if there exists a finite time t₁ greater than t₀ such that for any initial state x₀ in state space, using knowledge of the input u and output y over the time interval t₀ ≤ t ≤ t₁, the state x₀ can be determined. The system is said to be unobservable otherwise. The algebraic test for observability of the LTI system uses the observability matrix

N = [Cᵀ  AᵀCᵀ  . . .  (Aᵀ)^(n−1)Cᵀ]    (16.2.77)

The LTI system is observable if the rank of N is equal to n, where n is the number of states. If the rank of N is less than n, the system is unobservable. An unobservable system indicates that not all system modes contribute to the output of the system. If the unobservable modes (states) are stable, then the system is considered to be detectable.

EVALUATING MULTIVARIABLE PERFORMANCE AND STABILITY ROBUSTNESS OF A CONTROL SYSTEM USING SINGULAR VALUES

Performance Robustness

The requirements for the control system design are formulated by placing certain restrictions on the singular values of the return ratio, return difference, and inverse return difference of the control system. The block diagram in Fig. 16.2.46 shows the controller K(s) and plant G(s) with input dI(s) and output do(s) disturbances and sensor noise n(s). Using Fig. 16.2.46, a frequency-domain transfer function representation for the plant output is first obtained. These transfer function representations, used in conjunction with concepts of singular values, allow the formulation of requirements in the frequency domain to obtain a control system with good performance and stability robustness.
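A minimal sketch of the kind of singular-value test developed below (and summarized later in Table 16.2.7): compute the largest and smallest singular values of the return ratio G(jω)K(jω) over frequency. The 2 × 2 plant and the diagonal PI controller are assumed examples, not the process used in the handbook:

```python
# Sketch: singular values of the return ratio G(jw)K(jw) over frequency for an
# assumed 2x2 plant and an assumed diagonal PI controller.
import numpy as np

def G(s):   # assumed plant transfer-function matrix
    return np.array([[1.0 / (s + 1.0), 0.2 / (s + 2.0)],
                     [0.1 / (s + 1.0), 2.0 / (s + 3.0)]])

def K(s):   # assumed diagonal PI controller
    return np.array([[5.0 * (1.0 + 1.0 / s), 0.0],
                     [0.0, 3.0 * (1.0 + 1.0 / s)]])

for w in (0.01, 1.0, 100.0):
    L = G(1j * w) @ K(1j * w)
    sv = np.linalg.svd(L, compute_uv=False)
    print("w = %6.2f rad/s  sigma_max = %8.3f  sigma_min = %8.3f" % (w, sv[0], sv[-1]))
```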


In this design, performance robustness is of concern at the output of the plant. Therefore it is required that the plant output transfer function be derived in order to analyze and assist in obtaining a control system design to meet certain performance robustness requirements. Using Fig. 16.2.46, the plant output transfer function is derived and performance robustness requirements established. From Fig. 16.2.46 it can be shown that

y(s) = d_o(s) + G(s)u_r(s)    (16.2.78)
e(s) = r(s) − y(s) − n(s)    (16.2.79)
and
u_r(s) = d_I(s) + u(s) = d_I(s) + K(s)e(s)    (16.2.80)

where G(s) = C[sI − A]⁻¹B is the plant transfer function and A, B, and C are system matrices. The transfer functions y(s), d_o(s), e(s), r(s), n(s), u(s), d_I(s), and K(s) are, respectively, the output, output disturbance, error, reference, noise, input, input disturbance, and controller transfer functions. Using Eqs. (16.2.79) and (16.2.80) it can be shown that Eq. (16.2.78) becomes

y(s) = [G(s)K(s)][r(s) − y(s) − n(s)] + d_o(s) + G(s)d_I(s)
     = G(s)K(s)r(s) − G(s)K(s)y(s) − G(s)K(s)n(s) + d_o(s) + G(s)d_I(s)    (16.2.81)

Therefore,

y(s) = [I + G(s)K(s)]⁻¹[G(s)K(s)r(s) − G(s)K(s)n(s) + d_o(s) + G(s)d_I(s)]
     = [I + G(s)K(s)]⁻¹{G(s)K(s)[r(s) − n(s)]} + [I + G(s)K(s)]⁻¹[d_o(s) + G(s)d_I(s)]
     = y_r(s) + y_dI(s) + y_do(s) + y_n(s)    (16.2.82)

where y_r(s), y_dI(s), y_do(s), and y_n(s) are, respectively, the contributions to the plant output due to the reference input, input disturbance, output disturbance, and noise input to the system. The requirements for good command following, disturbance rejection, and insensitivity to sensor noise are defined by using singular values to characterize the magnitude of MIMO transfer functions. These requirements are also applicable to SISO systems (Kazerooni and Narayan; Murphy and Bailey, April 1989).

Fig. 16.2.46 Model-based unity-feedback MIMO control system.

Command Following   The conditions required to obtain a MIMO control system design with good command following are described. Good command following of the reference input r(s) by the plant output y(s) implies that

y(s) ≈ r(s)    ∀ s = s_R    (16.2.83)

where s = jω, s_R = jω_R, and ω_R is the frequency range of the input r(s). Letting d_o(s) = d_I(s) = n(s) = 0 in Eq. (16.2.82) results in

y_r(s) = [I + G(jω)K(jω)]⁻¹G(jω)K(jω)r(jω) = {I + [G(jω)K(jω)]⁻¹}⁻¹ r(jω)    ∀ ω ≤ ω_R    (16.2.84)

where G(jω)K(jω) is defined as the return ratio. Therefore, to obtain good command following, the system inverse return difference I + [G(jω)K(jω)]⁻¹ must have the property such that

I + [G(jω)K(jω)]⁻¹ ≈ I    ∀ ω ≤ ω_R    (16.2.85)

which implies that

σ̲[G(jω)K(jω)] ≫ 1    ∀ ω ≤ ω_R    (16.2.86)

Thus the magnitude of the minimum singular value of the return ratio σ̲[G(jω)K(jω)] must be large over the frequency range of the input to obtain good command following of the control system.
Disturbance Rejection   To minimize the effect of the output disturbance y_do(s) on the output signal y(s), Eq. (16.2.82) is used with n(s) = r(s) = d_I(s) = 0, which results in

y_do(s) = [I + G(s)K(s)]⁻¹ d_o(s)    (16.2.87)

Rejection of this output disturbance at the plant output requires the magnitude of the return difference I + G(s)K(s) to meet the following condition:

σ̲[I + G(jω)K(jω)] ≫ 1    ∀ ω ≤ ω_do    (16.2.88)

which implies

σ̲[G(jω)K(jω)] ≫ 1    ∀ ω ≤ ω_do    (16.2.89)

where ω_do is the frequency range of the output disturbance d_o(s). With n(s) = r(s) = d_o(s) = 0 in Eq. (16.2.82), the effect of input disturbance on the plant output is examined. This effect is seen to be

y_dI(s) = [I + G(jω)K(jω)]⁻¹[G(jω)d_I(jω)]    ∀ ω ≤ ω_dI    (16.2.90)

where ω_dI is the frequency range of the input disturbance d_I(s). To minimize the effect of y_dI(s) on the plant output requires that the magnitude of the return difference be large (in general, the larger the better). This requirement implies that

σ̲[G(jω)K(jω)] ≫ 1    ∀ ω ≤ ω_dI    (16.2.91)

to reject the input disturbance. It should be noted that this equation gives only general conditions for rejection. The control engineer must determine how much greater than 1 the return ratio magnitude must be.
Noise Rejection   The requirements to minimize the effects of sensor noise on the plant output must be determined. First letting r(s) = d_I(s) = d_o(s) = 0 in Eq. (16.2.82) results in

y_n(s) = [I + G(s)K(s)]⁻¹[G(s)K(s)]n(s) = {I + [G(s)K(s)]⁻¹}⁻¹ n(s)    ∀ s = s_n    (16.2.92)

where s_n = jω_n and ω_n is the frequency range of the sensor noise. Thus to minimize the effects of sensor noise on the plant output it is desired to have the magnitude of y_n(s) small over the frequency range of the noise. Thus

σ̄[G(jω)K(jω)] ≪ 1    ∀ ω ≤ ω_n    (16.2.93)

is required to minimize the effect of noise on the plant output. Any overlapping of any of the frequency ranges ω_R, ω_dI, ω_do, ω_n will require system design specification tradeoffs to be made. This will definitely occur when the lower frequency ranges of ω_R and/or ω_dI and ω_do overlap the higher frequency range ω_n.

Stability Robustness

All linear control system design methods are based on a linear model of the plant. Because this design model only approximates the actual plant, an error exists between the design model and the actual plant. This error can be evaluated as absolute or relative. The method of evaluation of this error (model uncertainty) depends on the nature of the differences between the actual design model and the actual plant. The evaluation of an absolute error defines an additive model uncertainty, whereas the evaluation of a relative error defines a multiplicative model uncertainty. The additive and multiplicative model uncertainties can be further classified as either structured or unstructured model uncertainties (Murphy and Bailey, ORNL 1989). A structured model uncertainty is usually a low-frequency phenomenon characterized by variations in the parameters of the linear time-invariant plant design model. These parameter variations are caused by changes in the plant operating conditions, wear due to aging, and environmental conditions. A structured uncertainty also indicates that the gain and phase information concerning the uncertainty is known. Additive model uncertainties are generally considered structured uncertainties.


An unstructured uncertainty is typically a high-frequency phenomenon in which the only information known about the uncertainty is its magnitude. Unstructured uncertainties usually have the characteristic of becoming significant at high frequencies. Multiplicative uncertainties are characterized as unstructured. Unstructured uncertainties are usually caused by the truncation of high-frequency plant dynamics or modes due to the linearization process, and the neglect of plant dynamics such as actuators and sensors. Some plant dynamics that characterize unstructured uncertainty are flexible-body dynamics, electrical and mechanical resonances, and transport delays. Most linear control-system design methods neglect these high-frequency plant dynamics. The result is that if a highfrequency bandwidth controller is implemented, the high-frequency modes could be excited, resulting in an unstable control system. Model uncertainty clearly is seen to impose limitations on the achievable performance of a feedback control system design. Hence this section will focus on the limitations imposed by uncertainty, not on the difficult problem of exact representation of the system uncertainty. Multiplicative Uncertainty Multiplicative uncertainty (perturbation) can be represented at the input or output of the plant, and is called respectively the input multiplicative uncertainty or the output multiplicative uncertainty. The model uncertainty due to actuator and sensor dynamics is modeled, respectively, as an input or output multiplicative perturbation. A reduced-order model of an already linearized model results in a multiplicative model uncertainty. Time delay at the input or output of a plant yields, respectively, input and output multiplicative uncertainties in the plant. Even though other multiplicative model uncertainties exist among various types of control system design problems, the model uncertainties mentioned above are a common consideration in most control problems.

Output Multiplicative Uncertainty

The output multiplicative uncertainty ΔG(s) is defined as

ΔG(s) = [G′(s) − G(s)]G⁻¹(s)    (16.2.94)

where

G′(s) = L(s)G(s)    (16.2.95)

is the perturbed transfer function and L(s) represents dynamics not included in the nominal plant G(s), thus resulting in

ΔG(s) = L(s) − I    (16.2.96)

A block diagram of a control system with output multiplicative perturbation is shown in Fig. 16.2.47. The conditions of stability for a control system with output multiplicative uncertainty require that

σ̲{I + [G(jω)K(jω)]⁻¹} > σ̄[ΔG(jω)]    ∀ ω > 0    (16.2.97)

Fig. 16.2.47 Control system with output multiplicative perturbation.

Additive Uncertainty   The block diagram of a control system with a plant additive model uncertainty is shown in Fig. 16.2.48. The representation of the additive model uncertainty using an absolute error criterion is defined as

ΔG(s) = G′(s) − G(s)    (16.2.98)

Fig. 16.2.48 Control system with additive perturbation.

where in this case the perturbed transfer function G′(s) is

G′(s) = [C + ΔC]{sI − [A + ΔA]}⁻¹[B + ΔB]    (16.2.99)

and the nominal plant transfer function G(s) is Gssd 5 C[sI 2 A]21B

(16.2.100)

and where A, B, and C are the nominal system matrices and A, B, and C are the perturbation matrices that indicate the respective deviations from the nominal system matrices. The condition for stability of the additively perturbed control system requires that s [Gs jvdKs jvd] . s [Gs jvdKs jvd]

4 v . 0 (16.2.101)
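These robustness tests are simple to evaluate numerically once frequency responses of the nominal plant, the compensator, and the assumed uncertainty are available. The following sketch checks the output multiplicative condition of Eq. (16.2.97) over a frequency grid; the plant data, constant compensator gain, and unmodeled first-order lag are hypothetical illustrations, not values taken from the text.

# Minimal sketch: output multiplicative robustness test of Eq. (16.2.97),
# sigma_min{I + [G(jw)K(jw)]^-1} > sigma_max[dG(jw)] for all w > 0.
# All numerical values below are illustrative assumptions.
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -3.0]])   # assumed nominal plant
B = np.eye(2)
C = np.eye(2)
K = np.array([[4.0, 0.0], [0.0, 6.0]])     # assumed constant compensator gain

def G(w):                                  # nominal plant frequency response C(jwI - A)^-1 B
    return C @ np.linalg.inv(1j*w*np.eye(2) - A) @ B

def dG(w, tau=0.02):                       # assumed unmodeled lag: L(s) = I/(tau s + 1)
    L = np.eye(2) / (1j*w*tau + 1.0)
    return L - np.eye(2)                   # Eq. (16.2.96)

for w in np.logspace(-2, 3, 11):
    M = np.eye(2) + np.linalg.inv(G(w) @ K)
    lhs = np.linalg.svd(M, compute_uv=False)[-1]      # smallest singular value
    rhs = np.linalg.svd(dG(w), compute_uv=False)[0]   # largest singular value
    print(f"w={w:8.2f} rad/s  lhs={lhs:7.3f}  rhs={rhs:6.3f}  satisfied={lhs > rhs}")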

Table 16.2.7 contains a summary of the general design requirements presented in this section for obtaining a control system with good performance and stability robustness.

Table 16.2.7  General System Design Specifications

System requirement                                               Range
Good command-following                                           σ_min[G(jω)K(jω)] >> 0 dB   ∀ ω ≤ ω_R
Good disturbance rejection                                       σ_min[G(jω)K(jω)] >> 0 dB   ∀ ω ≤ ω_d
Good immunity to noise                                           σ_max[G(jω)K(jω)] << 0 dB   ∀ ω ≥ ω_n
Good system response to high-frequency modeling error*           σ_min{I + [G(jω)K(jω)]⁻¹} > ||ΔG(jω)||   ∀ ω > 0 rad/s
Good insensitivity to parameter variations at low frequencies*   σ_min[G(jω)K(jω)] > σ_max[ΔG(jω)K(jω)]   ∀ ω > 0 rad/s

* The ΔG(jω) indicated corresponds to the applicable multiplicative or additive model uncertainty.

REVIEW OF OPTIMAL CONTROL THEORY

Linear Quadratic Regulator

Consider the linear time-invariant state-space system described by the first-order vector differential equation model

ẋ(t) = A x(t) + B u(t)    (16.2.102)
y(t) = C x(t)    (16.2.103)

where A, B, and C are constant system matrices, and x(t), u(t), and y(t) are, respectively, the system state, input, and output vectors. The system is assumed to be stabilizable and controllable. The goal of this procedure is to minimize the quadratic performance index

J = ∫₀^∞ [xᵀ(t) Q x(t) + uᵀ(t) R u(t)] dt
  = ∫₀^∞ [xᵀ CᵀC x + uᵀ R u] dt
  = ∫₀^∞ [yᵀ y + uᵀ R u] dt    (16.2.104)

where Q is a symmetric positive semidefinite matrix and R is positive definite.

The performance index J implies that a control u(t) is sought that minimizes J. The weighting matrices Q and R are selected to reflect the importance of particular states and control inputs. In general, the weighting of the diagonal elements of Q can be determined by the importance of the state as observed through the C matrix of the output equation. The effect of Q is to shape the transient response of the regulator; the effect of R is to limit the control energy resulting from u. A linear control law that minimizes J is

u(t) = −K x(t)    (16.2.105)

where

K = R⁻¹ Bᵀ S    (16.2.106)

and where, in turn, S is a constant symmetric positive semidefinite matrix that satisfies the algebraic Riccati equation

Q − S B R⁻¹ Bᵀ S + Aᵀ S + S A = 0    (16.2.107)

The closed-loop regulator is given by

ẋ(t) = [A − B K] x(t) = [A − B R⁻¹ Bᵀ S] x(t)    (16.2.108)

and is asymptotically stable provided the system described by the state equation (16.2.102) and the output equation (16.2.103) is detectable.
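A regulator gain satisfying Eqs. (16.2.105) to (16.2.108) can be obtained directly from the algebraic Riccati equation. The sketch below uses SciPy's continuous-time ARE solver on a hypothetical double-integrator plant with assumed weighting matrices; it is not a system discussed in the text.

# Minimal LQR sketch for Eqs. (16.2.105)-(16.2.108); plant and weights are assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # assumed double-integrator plant
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                  # state weighting (symmetric, positive semidefinite)
R = np.array([[1.0]])                     # control weighting (positive definite)

S = solve_continuous_are(A, B, Q, R)      # solves A'S + SA - S B R^-1 B' S + Q = 0
K = np.linalg.solve(R, B.T @ S)           # K = R^-1 B' S, Eq. (16.2.106)

closed_loop = A - B @ K                   # Eq. (16.2.108)
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))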

Kalman Filter

What follows is a method of reconstructing an estimate of the states using only the output measurements of the system. The method of reconstruction must be applicable when process noise and measurement noise corrupt, respectively, the plant state equations and the output measurement equations. Consider the stochastic linear state-space vector differential equation model

ẋ(t) = A x(t) + B u(t) + Γ d(t)    (16.2.109)
y(t) = C x(t) + n(t)    (16.2.110)

where A, B, C, and Γ are constant system matrices, and x(t), u(t), and y(t) are, respectively, the state, input, and output vectors. Vector d(t) is a process-noise random vector and vector n(t) is a measurement-noise random vector. Assuming that d(t) and n(t) are zero-mean, uncorrelated, gaussian white noises,

E[d(t)] = E[n(t)] = 0    for all t    (16.2.111)
E[d(t) dᵀ(τ)] = D₀ δ(t − τ)    (16.2.112)
E[n(t) nᵀ(τ)] = N δ(t − τ)    for all t, τ    (16.2.113)
E[d(t) nᵀ(τ)] = 0    (16.2.114)

where t − τ is the difference between two points in time, E[·] is the mean value, δ is an impulse function, and N and D₀ are constant symmetric matrices, positive definite and positive semidefinite, respectively. Note that N and D₀ are constant because d(t) and n(t) are assumed to be wide-sense stationary. The state estimate x̂ of the actual state x is obtained from the noisy measurement y. We can therefore define a state error vector

e(t) = x(t) − x̂(t)    (16.2.115)

and we desire to minimize the mean square error

ē = E[eᵀ(t) e(t)]    (16.2.116)

The estimator can be shown to take the form

x̂̇(t) = A x̂(t) + B u(t) + F[y(t) − C x̂(t)]    (16.2.117)

where F is the filter gain that minimizes Eq. (16.2.116); F is defined as

F = Σ Cᵀ N⁻¹    (16.2.118)

where Σ is the constant variance matrix of the error e(t). It is assumed here that e(t) is wide-sense stationary. The matrix Σ is obtained by solving the algebraic variance Riccati equation

D − Σ Cᵀ N⁻¹ C Σ + A Σ + Σ Aᵀ = 0    (16.2.119)

where

D = Γ D₀ Γᵀ    (16.2.120)

A sufficient condition for obtaining a unique and positive definite Σ from Eq. (16.2.119) is that the pair [A, C] be completely observable. If the pair [A, C] is only required to be detectable, then Σ can be positive semidefinite. The reconstruction error e(t) satisfies the differential equation

ė(t) = [A − FC] e(t) + [Γ   −F] [dᵀ(t)   nᵀ(t)]ᵀ    (16.2.121)

such that e(t) → 0 as t → ∞ for all t ≥ t₀ if and only if the observer is asymptotically stable. The poles of the observer (filter) are the eigenvalues of the closed-loop dynamics matrix [A − FC]. To obtain asymptotic stability of the filter, it is sufficient that the pair [A, Γ] be completely controllable; stabilizability of the pair [A, Γ], which is necessary and sufficient, will also ensure stability.
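Because the variance equation (16.2.119) is the dual of the regulator Riccati equation, the same ARE solver can be used with the transposed system matrices. The sketch below is a hypothetical single-output example; the plant, the noise-input matrix Γ, and the noise intensities D₀ and N are all assumptions chosen only to show the computation.

# Minimal Kalman filter gain sketch for Eqs. (16.2.118)-(16.2.120); all data assumed.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed plant
C = np.array([[1.0, 0.0]])                 # single measurement
Gam = np.eye(2)                             # assumed process-noise input matrix
D0 = np.diag([0.5, 0.5])                   # process-noise intensity
N = np.array([[0.1]])                      # measurement-noise intensity

# Duality: Gam D0 Gam' - Sig C' N^-1 C Sig + A Sig + Sig A' = 0 is the regulator ARE
# with (A', C') playing the role of (A, B).
Sig = solve_continuous_are(A.T, C.T, Gam @ D0 @ Gam.T, N)
F = Sig @ C.T @ np.linalg.inv(N)           # filter gain, Eq. (16.2.118)

print("F =", F.ravel())
print("observer eigenvalues:", np.linalg.eigvals(A - F @ C))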

PROCEDURE FOR LQG/LTR COMPENSATOR DESIGN

An LQG/LTR compensator will be designed for a minimum-phase plant (no right-half-plane zeros). The control system so designed will be guaranteed to be stable and will be robust at the plant output. First, it is assumed that if zero steady-state error is desired in the control system design, the plant inputs will be augmented to obtain the desired integral action. In some cases the dynamics of the plant may have poles that are nearly zero, which establishes a near-integral action in the plant. In such a case the addition of integrators at the plant input causes a roll-off much greater than 20 dB/decade, yielding unreasonable plant dynamics as characterized by the return-ratio singular-value plots. Therefore caution is required in deciding how, or whether, to augment the plant dynamics. In the following description of the compensator design it is assumed that the nominal plant transfer function G(s) has been augmented as the control engineer requires.

Step 1  The first step requires selecting an appropriate transfer function G_FOL(s) that satisfies the desired performance and stability robustness of the final control system. An appropriate scalar μ > 0 and a constant matrix L are chosen such that the singular values of

G_FOL(s) = (1/√μ) [C(sI − A)⁻¹ L]    (16.2.122)

approximately meet the desired control system requirements. The system matrices C and A of the minimum-phase component are assumed to have been augmented already to obtain the necessary integral action. The selection of L is made so that at low frequencies (s = jω → 0), high frequencies (s = jω → j∞), or intermediate frequencies (s = jω → jω_I) the singular values of G_FOL(s) are approximately the same. After L is selected, μ is adjusted to obtain the desired crossover frequency of σ[G_FOL(jω)]; μ therefore functions as a gain parameter of the transfer function. It is desirable to balance the singular values of G_FOL(s) in the region of the crossover frequency in order to obtain similar responses for each loop of the system. Also, it is typically desired to have the return-ratio singular values roll off at a rate no greater than 20 dB/decade in the region of gain crossover.

Step 2  In this step the Kalman filter transfer function is defined as

G_KF(s) = C(sI − A)⁻¹ F    (16.2.123)

where F is the Kalman filter gain and C and A are the system matrices of the linear time-invariant state-space system. To compute the filter gain F, the following Kalman filter algebraic Riccati equation (ARE) is first solved:

L Lᵀ − (1/√μ) Σ Cᵀ C Σ + A Σ + Σ Aᵀ = 0    (16.2.124)

where L and μ are as obtained in step 1 and Σ (the covariance matrix) is the solution of the equation. The filter gain is then computed as

F = (1/√μ) Σ Cᵀ    (16.2.125)

Fig. 16.2.49 Block diagram of unity-feedback LQG/LTR control system.

Thus, as seen above, L and μ are tunable parameters used to produce the filter gain F needed to meet the desired system performance and stability-robustness constraints. This step completes the LQG/LTR design requirement of designing the target Kalman filter transfer-function matrix with the desired properties, which will be recovered at the plant output during the loop-transfer-recovery step. The general block diagram of the LQG/LTR control system is shown in Fig. 16.2.49.

Fig. 16.2.50  Block diagram of LQG system.

Step 3  The transfer function of the LQG/LTR controller is defined as

K(s) = K[sI − (A − BK − FC)]⁻¹ F    (16.2.126)

where A, B, and C are the system matrices of Eqs. (16.2.102) and (16.2.103). The filter gain F is as stated in Eq. (16.2.125). The regulator gain K is computed in this step to achieve loop transfer recovery. The selection of K will yield a return ratio at the plant output such that

G(s)K(s) → G_KF(s)    (16.2.127)

where G(s) is the plant transfer function and K(s) is the controller transfer function. The first step in selecting K requires solving the LQR problem, which has the following ARE:

Q(q) − S B R₀⁻¹ Bᵀ S + S A + Aᵀ S = 0    (16.2.128)

where R₀ = I is the control weighting matrix and Q(q) = Q₀ + q² Cᵀ C is the modified state weighting matrix. The scalar q is a free design parameter. The resulting regulator gain K is computed as

K = Bᵀ S    (16.2.129)

As q → 0, Eq. (16.2.128) becomes the ARE of the optimal regulator problem. As q → ∞, the LQG/LTR technique guarantees that, since the design model G(s) = C(sI − A)⁻¹ B has no nonminimum-phase zeros, then pointwise in s

lim (q → ∞)  G(s)K(s) → G_KF(s)    (16.2.130)

lim (q → ∞)  {C(sI − A)⁻¹ B K[sI − (A − BK − FC)]⁻¹ F} → C(sI − A)⁻¹ F    (16.2.131)
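The recovery expressed by Eqs. (16.2.130) and (16.2.131) can be observed numerically by increasing q and comparing the return ratio G(jω)K(jω) with the target G_KF(jω). The sketch below is a hypothetical single-input, single-output illustration with an assumed minimum-phase plant and an assumed target filter gain; it is not the deaerator design of the next section.

# Minimal LQG/LTR recovery sketch for Eqs. (16.2.126)-(16.2.131); all data assumed.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[2.0], [3.0]])              # assumed target Kalman filter gain

def tf(M_in, M_out, Ad, s):               # evaluates M_out (sI - Ad)^-1 M_in as a scalar
    return (M_out @ np.linalg.inv(s*np.eye(2) - Ad) @ M_in).item()

w = 1.0                                   # test frequency, rad/s
Gkf = tf(F, C, A, 1j*w)                   # target loop C(sI - A)^-1 F
for q in (1.0, 1e2, 1e4):
    Qq = np.eye(2) + q**2 * (C.T @ C)     # Q(q) = Q0 + q^2 C'C, with Q0 = I assumed
    S = solve_continuous_are(A, B, Qq, np.eye(1))
    K = B.T @ S                           # Eq. (16.2.129) with R0 = I
    Acl = A - B @ K - F @ C
    GK = tf(B, C, A, 1j*w) * tf(F, K, Acl, 1j*w)   # G(s)K(s) using Eq. (16.2.126)
    print(f"q = {q:8.0f}   |GK| = {abs(GK):8.3f}   |Gkf| = {abs(Gkf):8.3f}")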

This completes the LQG/LTR design procedure. Now that the loop-transfer-recovery step is complete, the singular-value plots of the return ratio, return difference, and inverse return difference must be examined to verify that the desired loop shape has been obtained.

EXAMPLE CONTROLLER DESIGN FOR A DEAERATOR

In this section, three linear control methods are used to obtain a level-control system design for a deaerator. The three linear control system design methods used are proportional plus integral, linear quadratic gaussian, and linear quadratic gaussian with loop transfer recovery. In this investigation, the dual of the procedure developed for the LQG/LTR design at the plant input is used to obtain a robust control system design at the plant output (Murphy and Bailey, ORNL 1989). The Bode gain and phase plots of the resulting control system design will be presented for each controller. Also, the singular-value plots of the return ratio, return difference, and inverse return difference for opening the loop at the plant output (point 3 in Fig. 16.2.50) will be presented.

The deaerator was chosen because of its simple mathematical structure: one input, one output, and three state variables. Thus the model is easy to follow but complex enough to illustrate the design technique.

The Linearized Model

The mathematical model of the deaerator is nonlinear, with a process flow diagram as shown in Fig. 16.2.51. This nonlinear plant is linearized about a nominal operating condition, and the resulting linearized plant model will be used to obtain the controller design. The linear deaerator model is described by the state-space linear time-invariant (LTI) differential equations

ẋ(t) = A x(t) + B u(t)    (16.2.132)
y(t) = C x(t)    (16.2.133)

where A, B, and C are the system matrices given, respectively, as

A = [ −53.802        1.7093       9.92677
        0.0001761   −0.0009245   −0.0053004
       −0.001124    −0.0243636   −0.139635 ]

B = [ −9526.25
        0.265148
       −1.72463 ]

C = [ 0.0   3.2833   0.06373 ]

Fig. 16.2.51  General process flow diagram of the deaerator.

The system states are defined as

x = [ δP   δρ   δH ]ᵀ

where δP is the change in operating pressure between the pump and the extraction line, δρ is the change in fluid density, and δH is the change in internal energy of the tank. The plant output y is the change in deaerator tank level, and the plant input u is the change in control valve position.
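For readers who want to reproduce the designs that follow, the linearized deaerator model above can be entered directly. The sketch below simply transcribes the A, B, and C matrices of Eqs. (16.2.132) and (16.2.133) and lists the open-loop eigenvalues; any further design choices, such as weighting matrices, are left to the user and are not specified in the text.

# Linearized deaerator model transcribed from Eqs. (16.2.132)-(16.2.133).
import numpy as np

A = np.array([[-53.802,      1.7093,     9.92677],
              [  0.0001761, -0.0009245, -0.0053004],
              [ -0.001124,  -0.0243636, -0.139635]])
B = np.array([[-9526.25], [0.265148], [-1.72463]])
C = np.array([[0.0, 3.2833, 0.06373]])

# x = [dP, drho, dH]': pressure, density, and internal-energy perturbations;
# u is the change in control-valve position, y the change in tank level.
print("open-loop eigenvalues:", np.linalg.eigvals(A))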

LQG Controller Design

The design considerations for the tracking LQG control system will be discussed first. The block diagram of the LQG control system is shown in Fig. 16.2.50. It is apparent that output tracking of a reference input will not occur with this LQG controller configuration. An alternative compensator structure, shown in Fig. 16.2.52, is therefore used to obtain a tracking LQG controller. The controller K(s) uses the same filter gain F and regulator gain K computed by the LQG design procedure. The detailed block diagram is the same structure that will be used for the LQG/LTR controller shown in Fig. 16.2.49.

Fig. 16.2.53  Open-loop frequency response of the PI, LQG, and LQG/LTR control systems: (a) frequency response of system gain; (b) frequency response of system phase.

Using the control system design package MATRIXX, the regulator gain K and filter gain F are computed, for user-defined weighting matrices, as

K = 10⁵ × [ 0.0233   3.145   −0.1024 ]

F = [ 0.1962   9.9998   −0.0083 ]ᵀ

The frequency response of the open-loop transfer function of the closed-loop compensated system, y(s)/r(s) (tank level over reference input), is shown in Fig. 16.2.53.

LQG/LTR Controller Design

The LQG/LTR control system has the structure shown in Fig. 16.2.52, and its design is a two-step process. First, a Kalman filter (KBF) is designed to obtain good command following and disturbance rejection over a specified low-frequency range. The KBF design is also made to meet the required robustness criteria with respect to the system uncertainty ΔG(s) at the system output. Obtaining good command following and disturbance rejection requires that

σ_min[G₄(s)] ≥ 20 dB    ∀ ω < 0.1 rad/s    (16.2.134)

where G₄(s) is the loop transfer function at the output of the KBF (point 4 of Fig. 16.2.50). Assuming the high-frequency uncertainty ΔG(s) for this system becomes significant at 5 rad/s, robustness then requires

σ_min{I + [G₄(s)]⁻¹} ≥ ||ΔG(jω)|| = 5 dB    ∀ ω ≥ 5 rad/s    (16.2.135)

Fig. 16.2.52  Alternative compensator structure.

In this application, the frequency at which the system uncertainty becomes significant, 5 rad/s, and the magnitude of the uncertainty, 5 dB at 5 rad/s, are assumed values. This assumption is required because of the lack of information on the high-frequency modeling errors of the process being studied. The second step is to recover the good robustness properties of the loop transfer function G₄(s) of point 4 at the output node y (point 3). This is accomplished by applying the loop-transfer-recovery (LTR) step at the output node (point 3). The LTR step requires the computation of a regulator gain K that recovers the robustness properties of the loop transfer function G₄(s) at the output node y. The tool used to obtain the LQG/LTR design for this system, considering the low- and high-frequency bound requirements, is CASCADE, a computer-aided system and control analysis and design environment that implements the LQG/LTR design procedure (Birdwell et al.); more recent computer-aided control system design tools also implement this procedure. The resulting filter gain F and regulator gain K for this system are, respectively,

F = [ −0.0107   0.472   −0.02299 ]ᵀ

K = [ 0.660   −7703.397   589.4521 ]

The resulting open-loop frequency response of the compensated system is shown in Fig. 16.2.53.

Fig. 16.2.54  Block diagram of a PI control system.

PI Controller Design

The PI controller for this system design takes on a structure similar to the three-element controller prevalent in process control. Therefore the PI control system structure takes the form shown in Fig. 16.2.54. The classical design details of the deaerator are given in Murphy and Bailey, ORNL 1989. The transfer functions used in this control structure are defined as

w(s)/u(s) = condensate flow / controller output
          = 4.5227 × 10⁸ × (s + 0.142) / [(s + 0.1405)(s + 53.8015)]    (16.2.136)

and

y(s)/w(s) = tank level / condensate flow
          = 1.6804 × 10⁻⁹ × (s + 0.1986)(s + 47.458) / [s(s + 0.142)]    (16.2.137)

The constants K1 and K2 are used for unit conversion and are defined, respectively, as 1.0 and 10⁶. The terms Kp and Ki are, respectively, the proportional gain and the integral gain. Using the practical experience of a prior three-element controller design, the proportional and integral gains are selected as Kp = 0.1 and Ki = 0.05. The resulting open-loop frequency response plot of the compensated system y(s)/r(s) is shown in Fig. 16.2.53.

Precompensator Design

The transient responses of the PI, LQG, and LQG/LTR control systems are shown in Fig. 16.2.55. The transient response of the LQG/LTR controller is seen to have a faster time to peak than the LQG and PI transient responses. Considering the practical physical limitations of the plant, it appears unlikely that the rise time dictated by the transient response of the LQG/LTR control system is achievable. The obvious solution would be to redesign the LQG/LTR control system to obtain a slower, more practical response. In the case of the LQG/LTR control system, however, the possibility of a redesign is eliminated, because the control system was designed to meet certain command-tracking, disturbance-rejection, and stability-robustness requirements. Therefore, to eliminate this difficulty, a second-order precompensator is placed in the forward path of the reference input signal. This precompensator shapes the output transient response of the system, since the control system design requires the output to follow the reference input. The precompensator in this investigation is required to have a time to peak of 30 s with a maximum overshoot of about 3 percent; these requirements are selected to emulate somewhat the transient response of the PI controller. The transfer function of the second-order precompensator used is

r(s)/r_i(s) = ω_n² / (s² + 2ζω_n s + ω_n²)    (16.2.138)

Fig. 16.2.55  PI, LQG, LQG/LTR, and LQG/LTR (with precompensator) control system closed-loop transient responses.

where ζ is the damping ratio, ω_n is the natural frequency, r(s) is the output of the precompensator, and r_i(s) is the actual reference input. The precompensator of Eq. (16.2.138) can be represented in state-equation form as

[ẋ1]   [   0          1     ] [x1]   [0]
[ẋ2] = [ −ω_n²    −2ζω_n ] [x2] + [1] r_i(t)    (16.2.139)

and

r(t) = [ω_n²   0] [x1   x2]ᵀ    (16.2.140)

where r_i(t) is the time-domain reference input signal and r(t) is the time-domain signal at the output of the precompensator. Considering the 3 percent overshoot requirement, ζ is taken as 0.7. Using the expression

t_max = π / [ω_n √(1 − (0.7)²)]    (16.2.141)

where t_max must be 30 s, gives

ω_n = π / [30 √(1 − (0.7)²)]    (16.2.142)

The complete state equation of the precompensator is then

[ẋ1]   [   0           1      ] [x1]   [0]
[ẋ2] = [ −0.0215    −0.2052 ] [x2] + [1] r_i(t)    (16.2.143)

and

r(t) = [0.0215   0] [x1   x2]ᵀ    (16.2.144)
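The precompensator of Eqs. (16.2.142) to (16.2.144) is easily checked by simulation. The sketch below uses SciPy to compute the unit-step response and report the time to peak and the overshoot.

# Step response of the second-order precompensator, Eqs. (16.2.138) and (16.2.142)-(16.2.144).
import numpy as np
from scipy import signal

zeta = 0.7
wn = np.pi / (30.0 * np.sqrt(1.0 - zeta**2))          # Eq. (16.2.142), about 0.1466 rad/s
sys = signal.TransferFunction([wn**2], [1.0, 2.0*zeta*wn, wn**2])

t, r = signal.step(sys, T=np.linspace(0.0, 120.0, 2001))
print("time to peak  :", t[np.argmax(r)], "s")        # expect roughly 30 s
print("peak overshoot:", 100.0*(r.max() - 1.0), "%")  # a standard second-order system with
                                                       # zeta = 0.7 overshoots by about 4.6 percent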

The resulting transient response of the LQG/LTR control system using the precompensator is shown in Fig. 16.2.55.

ANALYSIS OF SINGULAR-VALUE PLOTS

Systems with large stability margins, good disturbance rejection/command following, low sensitivity to plant parameter variations, and stability in the presence of model uncertainties are described as being robust and having good robustness properties. The singular-value plots for the PI, LQG, and LQG/LTR control systems are examined to evaluate the robustness properties. The singular-value plots of the return ratio, return difference, and the inverse return difference are shown in Figs. 16.2.56, 16.2.57, and 16.2.58.

Fig. 16.2.58  Singular-value plots of inverse return difference.

Deaerator Study Summary and Conclusions

Summary of Control System Analysis  The design criteria used for the analysis of this system are defined as shown in Table 16.2.8. The performance and robustness results summarized from the analysis of the singular-value plots are shown in Table 16.2.9, from which it appears that the LQG/LTR control system has the widest low-frequency range of disturbance rejection and insensitivity to parameter variations. The PI control system has the worst disturbance-rejection property and the most sensitivity to parameter variations. All of the control systems are capable of maintaining a stable, robust system in the presence of high-frequency modeling errors, but the PI control system is best in this respect. Though all the control systems also have good immunity to noise at frequencies significantly greater than 5 rad/s, the PI control system has the best immunity. Unlike the other systems, the LQG/LTR control system meets all the design criteria. Its design was obtained in a systematic manner, in contrast to the trial-and-error method used for the LQG control system. The LQG controller design was obtained by shaping the output transient response using the system state weighting matrix and then examining the system's singular-value plots. The PI control system used is simply a

Fig. 16.2.56 Singular-value plots of return ratio.

Table 16.2.8  System Design Specifications

System requirement                                                Range
Good command-following/disturbance rejection                     σ_min[L(jω)] > 20 dB   ∀ ω ≤ 0.1 rad/s
Good system response to high-frequency modeling error            σ_min(I + [L(jω)]⁻¹) ≥ ||ΔL|| = 5 dB   ∀ ω ≥ 5 rad/s
Good insensitivity to parameter variations at low frequencies    σ_min[I + L(jω)] ≥ 26 dB   for low frequencies
Good immunity to noise, ω ≥ 5 rad/s                               σ_max[L(jω)] << 0 dB   for high frequencies

Fig. 16.2.57  Singular-value plots of return difference.

Table 16.2.9  Performance and Robustness Results

Command following/disturbance rejection:
  PI control system:       σ_min[L(jω)] ≥ 20 dB   ∀ ω ≤ 0.01 rad/s
  LQG control system:      σ_min[L(jω)] ≥ 20 dB   ∀ ω ≤ 0.017 rad/s
  LQG/LTR control system:  σ_min[L(jω)] ≥ 20 dB   ∀ ω ≤ 0.1 rad/s

System response to high-frequency modeling error (ω ≥ 5 rad/s):
  PI control system:       σ_min(I + [L(jω)]⁻¹) ≥ 43.0 dB
  LQG control system:      σ_min(I + [L(jω)]⁻¹) ≥ 30 dB
  LQG/LTR control system:  σ_min(I + [L(jω)]⁻¹) ≥ 16.7 dB

Insensitivity to parameter variations at low frequencies:
  PI control system:       σ_min[I + L(jω)] ≥ 26 dB   ∀ ω ≤ 0.0055 rad/s
  LQG control system:      σ_min[I + L(jω)] ≥ 26 dB   ∀ ω ≤ 0.008 rad/s
  LQG/LTR control system:  σ_min[I + L(jω)] ≥ 26 dB   ∀ ω ≤ 0.0055 rad/s

Immunity to noise (ω ≥ 5 rad/s):
  PI control system:       σ_max[L(jω)] ≤ −42.5 dB
  LQG control system:      σ_max[L(jω)] ≤ −30 dB
  LQG/LTR control system:  σ_max[L(jω)] ≤ −16 dB

three-element control system strategy that has previously been used in the simulation study of the deaerator flow control system.

Conclusions  It should be evident that performance characteristics such as a suitable transient response do not imply that the system will have good robustness properties. Obtaining suitable performance characteristics and obtaining a stable, robust system are two separate goals. Classical unity-feedback design methods have been used for transient-response shaping, with little consideration for robustness properties. The main advantage of a closed-loop feedback system is that good performance and stability-robustness properties are obtainable. As has been shown, transient-response shaping can be achieved easily using a precompensator. Therefore it appears that the goal of the control system designer should be first to design a stable, robust system and then to use prefiltering to obtain the desired transient response. Optimal control methods, as demonstrated by the LQG control system, do not guarantee good robustness properties when applied systematically to meet the minimization requirements of a performance index. Methods such as pole placement emphasize transient-response characteristics without regard to robustness, which can be detrimental to system integrity in the presence of model uncertainties.

Classical and Modern Representation of Process Models

A dynamic unsteady-state model will be developed for the heated stirred-tank process shown in Fig. 16.2.59. This process is common to the chemical and other industries. It will be used to show that the classical frequency-domain models (Buckley, "Techniques of Process Control," Wiley) presently and previously used in classical control design can also easily be represented in the state differential equation form commonly used with modern control techniques.

Fig. 16.2.59  Process diagram of the heated stirred tank.

The first step in developing the model is to obtain the dynamic energy balance for the system:

C d[Vρ(T_out − T_tank)]/dt = w_in C(T_in − T_tank) − w_out C(T_out − T_tank) + Q    (16.2.145)

where T_in, T_out, and T_tank are, respectively, the liquid temperatures in the inlet line, the outlet line, and the tank, the tank temperature being the reference for enthalpy calculations (assumed constant). The terms w_in, w_out, C, V, ρ, and Q are, respectively, the inlet and outlet mass flow rates, the specific heat of the liquid, the tank liquid volume, the density, and the heat input to the tank from the electrical heating unit. Equation (16.2.145) defines the rate of energy accumulation for the tank contents. The dynamic unsteady-state mass balance for the tank contents is

d(Vρ)/dt = w_in − w_out    (16.2.146)

Equation (16.2.146) gives the rate of mass accumulation of liquid in the tank (Seborg, Edgar, Mellichamp, "Process Dynamics and Control," Prentice-Hall). The simplifying assumptions for this model are that ρ and C are constant in Eqs. (16.2.145) and (16.2.146). Thus Eq. (16.2.146) becomes

ρ dV/dt = w_in − w_out    (16.2.147)

and differentiation of the left-hand side of Eq. (16.2.145) yields

C d[Vρ(T_out − T_tank)]/dt = ρC(T_out − T_tank) dV/dt + VρC d(T_out − T_tank)/dt    (16.2.148)

Substituting the mass-balance relation of Eq. (16.2.147) into Eq. (16.2.148) yields

C d[Vρ(T_out − T_tank)]/dt = VρC dT_out/dt + C(T_out − T_tank)(w_in − w_out)    (16.2.149)

where dT_tank/dt = 0. Substituting the right-hand side of Eq. (16.2.149) into the left-hand side of Eq. (16.2.145) yields, after simplification,

VρC dT_out/dt = C(T_in − T_out) w_in + Q    (16.2.150)

The resulting equations of the heated stirred-tank process model are

dV/dt = (w_in − w_out)/ρ    (16.2.151)

VρC dT_out/dt = C(T_in − T_out) w_in + Q    (16.2.152)

where Eq. (16.2.152) is nonlinear. This set of equations gives the flexibility to define various operating conditions for the heated stirred-tank process. Next, the classical and modern representations of the process model that can be used for control system design are formulated.

Linearization of the Nonlinear Model

Typical classical, and most of the popular modern, control system design techniques use linear process models. Equation (16.2.152) above is nonlinear, so some assumptions must be made to obtain a linear model. The assumptions are that the process has a constant liquid volume and that the inlet and outlet flow rates are equal and constant (w = w_in = w_out). The initial steady-state conditions of the heated stirred-tank process at time t = 0 are taken as T_out,i = T_out(0), T_in,i = T_in(0), and Q_int = Q(0). Because the dynamic components of the system variables are of most interest to the control system designer, the initial steady-state component is subtracted from each process variable. Applying this normalization to Eq. (16.2.152) yields the perturbation model equation

VρC d(T_out − T_out,i)/dt = C[(T_in − T_in,i) − (T_out − T_out,i)] w_in + (Q − Q_int)    (16.2.153)

Defining δT_in = T_in − T_in,i, δT_out = T_out − T_out,i, and δQ = Q − Q_int, Eq. (16.2.153) becomes

τ d(δT_out)/dt = δT_in − δT_out + δQ/(Cw)    (16.2.154)

where τ = Vρ/w. Equation (16.2.154) is a linear perturbation differential equation describing the dynamics of the heated stirred-tank process.

Classical Transfer Function Model

Equation (16.2.154) can be used to obtain the frequency-domain process transfer function. Assume that at time t = 0, δT_out = 0, and that the inlet temperature is constant [δT_in(t) = T_in(t) − T_in,i(0) = 0 for t ≥ 0]. Then Eq. (16.2.154) becomes

τ d(δT_out)/dt = −δT_out + δQ/(Cw)    (16.2.155)

and, taking the Laplace transform and simplifying, the heated stirred-tank transfer function is found to be

δT_out(s)/δQ(s) = 1 / [Cw(τs + 1)]    (16.2.156)

Assume the input heat makes a step change at t = 0 from Q = Q_int (so that δQ = 0) to Q = Q̄ + Q_int, so that δQ = Q̄ for t ≥ 0. For these conditions the heated stirred-tank output becomes

δT_out(s) = Q̄ / [Cw(τs + 1)s]    (16.2.157)
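In the time domain, Eq. (16.2.157) corresponds to a simple first-order rise toward Q̄/(Cw) with time constant τ. The sketch below evaluates that response; the tank volume, density, flow rate, specific heat, and step size are illustrative assumptions, not values from the text.

# First-order response of the heated stirred tank to a step heat input, Eq. (16.2.157):
# dTout(t) = (Qbar/(C*w)) * (1 - exp(-t/tau)), with tau = V*rho/w.  All numbers assumed.
import numpy as np

V, rho, w = 2.0, 1000.0, 5.0        # m^3, kg/m^3, kg/s (illustrative)
Cp = 4184.0                         # J/(kg K), specific heat of water
Qbar = 50.0e3                       # W, assumed step in heater power

tau = V * rho / w                   # time constant, s
gain = 1.0 / (Cp * w)               # steady-state temperature rise per watt, K/W

t = np.linspace(0.0, 5.0*tau, 6)
dTout = gain * Qbar * (1.0 - np.exp(-t/tau))
for ti, yi in zip(t, dTout):
    print(f"t = {ti:7.1f} s   dTout = {yi:6.3f} K")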

A general block diagram of the perturbed transfer-function process model of Eq. (16.2.156), with closed-loop feedback to control the output temperature, is shown in Fig. 16.2.60.

Fig. 16.2.60  Block diagram for heated stirred-tank closed-loop temperature feedback control using the process transfer function.

Modern State Differential Equation Process Model

The state differential equation typically used in modern control for the process model takes the general form

ẋ(t) = A x(t) + B u(t)    (16.2.158)
y(t) = C x(t)    (16.2.159)

The perturbed state differential equation is easily obtained for the heated stirred-tank process from Eq. (16.2.155), resulting in

δṪ_out = [−1/τ] δT_out + [1/(τCw)] δQ    (16.2.160)

where A = −1/τ, B = 1/(τCw), x = δT_out, ẋ = δṪ_out, u = δQ, C = 1, and y = δT_out. A block diagram of the process model of Eq. (16.2.160), with closed-loop feedback to control the output temperature, is shown in Fig. 16.2.61.

Fig. 16.2.61  Block diagram for heated stirred-tank closed-loop temperature feedback control using state differential equations.

Conclusion

This heated stirred-tank example provides a detailed look at the derivation of the process transfer function and the process differential equation that would be used, respectively, for the frequency domain classical and modern control system design methods. It is clear that obtaining the linearized process model is a crucial step in this process. It should be

noted that obtaining the transfer function model is an easy step from the state differential equation used in modern control system design. Finally, it should be observed that the time constant obtained in this example is directly related to the design properties of the process. Therefore, even the best control engineer cannot overcome an inherent characteristic of the process that limits, for instance, a desired fast process response. This makes a clear argument for including the control engineer in the initial design effort for a new process, or in the modification of an existing process, to ensure that the desired system performance is obtained in the final process.

TECHNOLOGY REVIEW

Fuzzy Control

Fuzzy controllers are a popular method for construction of simple control systems from intuitive knowledge of a process. They generally take the form of a set of IF . . . THEN . . . rules, where the conditional parts of the rules have the structure “Variable i is fuzzy value 1 AND Variable j is fuzzy value 2 . . . . ” Here the fuzzy value refers to a membership function, which defines a range of values in which the variable may lie and a number between zero and one for each element of that range specifying the possibility (where certainty is one) that the variable takes that value. Most fuzzy control applications use an error signal and the rate of change of an error signal as the two variables to be tested in the conditionals. The actions of the rules, specified after THEN, take a similar form. Because only a finite collection of membership functions is used, they can be named by mnemonics which convey meaning to the designer. For example, a fuzzy controller’s rule base might contain the following rules: IF temperature-error is high and temperature-error-rate is slow THEN fuel-rate is negative. IF temperature-error is zero and temperature-error-rate is slow THEN fuel-rate is zero.

Fuzzy controllers are an attractive control design approach because of the ease with which fuzzy rules can be interpreted and can be made to follow a designer’s intuitive knowledge regarding the control structure. They are inherently nonlinear, so it is easy to incorporate effects which mimic variable gains by adjusting either the rules or the definitions of the membership functions. Fuzzy controller technology is attractive, in part because of its intuitive appeal and its nonlinear nature, but also because of the frequent claim that fuzzy rules can be utilized to approximate arbitrary continuous functional relationships. While this is true, in fact it would require substantial complexity to construct a fuzzy rule base to mimic a desired function sufficiently well for most engineers to label it an approximation. Herein also lie the potential liabilities of fuzzy controller technology: there is very little theoretical guidance on how rules and membership functions should be specified. Fuzzy controller design is an intuitive process; if intuition fails the designer, one is left with very little beyond trial and error. One remedy for this situation is the use of machine, or automated learning technology to infer fuzzy controller rules from either collected data or simulation of a process model. This process is inference, rather than deductive reasoning, in that proper operation of the controller is learning by example, so if the examples do not adequately describe the process, problems can occur. This inference process, however, is quite similar to the methods of system identification, which also infer model structure from examples, and which is an accepted method of model construction. Machine learning technology enables utilization of more complex controller structures as well, which introduce additional degrees of freedom in the design process but also enhance the capabilities of the controller. One example of this approach is the Fuzzy PID controller structure (Wang and Birdwell), which utilizes a fuzzy rule base to generate gains for a PID controller. Other similar approaches have been reported in the literature. An advantage of this approach is the ability to establish theoretical results on closed-loop stability, which are otherwise lacking for fuzzy controllers.
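As a concrete illustration of the rule structure described above, the short sketch below evaluates two hypothetical temperature-control rules using triangular membership functions and a weighted-average defuzzification. The variable ranges, membership breakpoints, and output values are assumptions chosen only to show the mechanics, not a published controller.

# Minimal fuzzy-rule evaluation sketch (illustrative values only).
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuel_rate(temp_error, error_rate):
    # Rule 1: IF temperature-error is high AND error-rate is slow THEN fuel change is negative
    w1 = min(tri(temp_error, 5.0, 20.0, 35.0), tri(error_rate, -1.0, 0.0, 1.0))
    # Rule 2: IF temperature-error is zero AND error-rate is slow THEN fuel change is zero
    w2 = min(tri(temp_error, -5.0, 0.0, 5.0), tri(error_rate, -1.0, 0.0, 1.0))
    weights, outputs = [w1, w2], [-1.0, 0.0]
    total = sum(weights)
    if total == 0.0:
        return 0.0                      # no rule fired; hold the output
    # weighted-average defuzzification of the rule consequents
    return sum(w*u for w, u in zip(weights, outputs)) / total

print(fuel_rate(temp_error=12.0, error_rate=0.1))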

Signal Validation Technology

Continuous monitoring of instrument channels in a process industry facility serves many purposes during plant operation. In order to achieve the desired operating condition, the system states must be measured accurately. This may be accomplished by implementing a reliable signal validation procedure during both normal and transient operations. Such a system would help reduce challenges on control systems, minimize plant downtime, and help plan maintenance tasks. Examples of measurements include pressure, temperature, flow, liquid level, electrical parameters, machinery vibration, and many others. The performance of control, safety, and plant monitoring systems depends on the accuracy of signals being used in these systems. Signal validation is defined as the detection, isolation, and characterization of faulty signals. This technique must be applicable to systems with redundant or single sensor configuration. Various methods have been developed for signal validation in aerospace, power, chemical, metals, and other process industries. Most of the early development was in the aerospace industry. The signal validation techniques vary in complexity depending on the level of information to be extracted. Both model-based and direct data-based techniques are now available. The following is a list of techniques often implemented for signal validation: Consistency checking of redundant sensors Sequential probability ratio testing for incipient fault detection Process empirical modeling (static and dynamic) for state estimation Computational neural networks for state estimation Kalman filtering technique for state estimation in both linear and nonlinear systems Time-series modeling techniques for sensor response time estimation and frequency bandwidth monitoring PC-based signal validation systems (Upadhyaya, 1989) are now available and are being implemented on-line in various industries. The computer software system consists of one or more of the above signal processing modules, with a decision maker that provides sensor status information to the operator. An example of signal validation (Upadhyaya and Eryurek) in a power plant is the monitoring of water level in a steam generator. The objective is to estimate the water level using other related measurements and compare this with the measured level. An empirical model, a neural network model, or the Kalman filtering technique may be used. For example, the inputs to an empirical model consist of steam generator main feedwater flow rate, steam generator pressure, and inlet and outlet temperatures of the primary water through the steam generator. Such a model would be developed during normal plant operation, and is generally referred to as the training phase. The signal estimation models should be updated according to the plant operational status. The use of multiple signal validation modules provides a high degree of confidence in the results. Chaos

Recent advances in dynamical process analysis have revealed that nonlinear interactions in simple deterministic processes can often result in highly complex, aperiodic, sometimes apparently “random” behavior.

Processes which exhibit such behavior are said to be exhibit deterministic chaos. The term chaos is perhaps unfortunate for describing this behavior. It evokes images of purely erratic behavior, but this phenomenon actually involves highly structured patterns. It is now recognized that deterministic chaos dominates many engineering systems of practical interest, and that, in many cases, it may be possible to exploit previously unrecognized deterministic structure for improved understanding and control. Deterministic chaos and nonlinear dynamics both apply to phenomena that arise in process equipment, motors, machinery, and even control systems as a result of nonlinear components. Since nonlinearities are almost always present to some extent, nonlinear dynamics and chaos are the rule rather than the exception, although they may sometimes occur to such a small degree that they can be safely ignored. In many cases, however, nonlinearities are sufficiently large to cause significant changes in process operation. The effects are typically seen as unstable operation and/or apparent noise that is difficult to diagnose and control with conventional linear methods. These instabilities and noise can cause reduced operating range, poor quality, excessive downtime and maintenance, and, in worst cases, catastrophic process failure. The apparent erratic behavior in chaotic processes is caused by the extreme sensitivity to initial conditions in the system, introduced by nonlinear components. Though the system is deterministic, this sensitivity and the inherent uncertainty in the initial conditions make long-term predictions impossible. In addition, small external disturbances can cause the process to behave in unexpected ways if this sensitivity has not been considered. Conversely, if one can learn how the process reacts to small perturbations, this inherent sensitivity can be taken advantage of. Once the system dynamics are known, small changes in system parameters can be used to drive the system to a desired state and to keep it there. Thus, great effects can be achieved through minimal input. Various control algorithms have been developed that use this principle to achieve their goals. A recent example from engineering is the control of a slugging fluidized bed by Vasadevan et al. The following example illustrates the use of chaos analysis and control which involves a laboratory fluidized-bed experiment. Several recent studies have recognized that some modes of fluidization in fluidized beds can be classified as deterministic chaos. These modes involve large-scale cyclic motions where the mass of particles move in a pistonlike manner up and down in the bed. This mode is usually referred to as slugging and is usually considered undesirable because of its inferior transfer properties. Slugging is notoriously difficult to eradicate by conventional control techniques. Vasadevan et al. have shown that the inherent sensitivity to small perturbations can be exploited to alleviate the slugging problem. They did this by adding an extra small nozzle in the bed wall at the bottom of the bed. The small nozzle injected small pulses of process gas into the bed, thus disrupting the “normal” gas flow. The timing of the pulses was shown to be crucial to achieve the desired effect. Different injection timing caused the slugging behavior to be either enhanced or destroyed.

16.3 SURVEYING
by W. David Teter

REFERENCES: Moffitt, Bouchard, “Surveying,” HarperCollins. Wolf and Brinker, “Elementary Surveying,” Harper & Rowe. Kavanaugh and Bird, “Surveying Principles and Applications,” Prentice-Hall. Kissam, “Surveying for Civil Engineers,” McGraw-Hill. Anderson and Mikhail, “Introduction to Surveying,” McGraw-Hill. McCormac, “Surveying Fundamentals,” Prentice-Hall.

INTRODUCTION

Surveying is often defined as the art and science of measurement for location or establishment of position above, on, or beneath the earth’s surface. The principles of surveying practice have remained consistent from their earliest inception, but in recent years the equipment and technology have changed rapidly. The emergence of the total station device, electronic distance measurements (EDM), Global Positioning System (GPS), and geographic information systems (GIS) is of significance in comparing modern surveying to past practice. The information required by a surveyor remains basically the same and consists of the measurement of direction (angle) and distance, both horizontally and vertically. This requirement holds regardless of the type of survey such as land boundary description, topographic, construction, route, or hydrographic.

HORIZONTAL DISTANCE

Most surveying and engineering measurements of distance are horizontal or vertical. Land measurement referenced to a map or plat is reduced to horizontal distance regardless of the manner in which the field measurements are made. The methods and devices used by the surveyor to measure distance include pacing, odometers, tachometry (stadia), the steel tape, and electronic distance measurement. These methods produce expected precision ranging from 2 percent for pacing to 0.0003 percent for EDM. The surveyor must be aware of the precision necessary for the given application.

Tapes  Until the advent of EDM technology, the primary distance-measuring instrument was the steel tape, usually 100 ft or 30 m (or multiples) in length, with 1 ft or 1 dm on the end graduated to read with high precision (0.01 ft or 1 mm). Cloth tapes can be used when low precision is acceptable.

Corrections  Measurements with a steel tape are subject to variations that normally must be corrected for when high precision is required. Tapes are manufactured to a standard length at usually 68°F (20°C), for a standard pull of between 10 and 20 lb (4.5 and 9 kg), and supported over the entire length. The specifications for the tape are provided by the manufacturer. If any of the conditions vary during field measurement, corrections for temperature, pull, and sag must be made according to the manufacturer's instructions.

Tape Use  Most surveying measurements are made with reference to the horizontal, and if the terrain is level, taping involves little more than laying down the tape in a sequential manner to establish the total distance. Care should be taken to measure in a straight line and to apply constant tension. When measuring on a slope, either of two methods may be employed. In the first method, the tape is held horizontal by raising the low end and employing a plumb bob for location over the point. In cases of steep terrain a technique known as breaking tape is used, where only a portion of the total tape length is used. With the second method, the slope distance is measured and converted by trigonometry to the horizontal distance. In Fig. 16.3.1 the horizontal distance D = S cos a, where S is the slope distance and a is the slope angle, which must be estimated or determined.

Fig. 16.3.1  Slope-reduction measurements used with EDM devices.

Electronic Distance Measurement

Modern surveying practice employs EDM technology for precise and rapid distance measurement. A representation of an EDM device is shown in Fig. 16.3.2. The principle of EDM is based on the comparison of the modulated wavelength of an electromagnetic energy source (beam) to the time required for the beam to travel to and return from a point at an unknown distance.

Fig. 16.3.2  Electronic distance meter (EDM). (PENTAX Corp.)

The EDM devices may be of two types: (1) electrooptical devices employing light transmission within or just beyond the visible region or (2) devices transmitting in the microwave spectrum. Electrooptical devices require an active transmitter and a passive reflector, while microwave devices require an active transmitter and an identical unit for reception and retransmission at the endpoint of the measured line. Ideally the EDM beam would propagate at the velocity of light; however, the actual velocity of propagation is affected by the index of refraction of the atmosphere. As a result, the atmospheric conditions of temperature, pressure, and humidity must be monitored in order to apply corrections to the measurements. Advances in electrooptical technology have made microwave devices, which are highly sensitive to atmospheric relative humidity, nearly obsolete.

Atmospheric Corrections  EDM devices built prior to about 1982 require a manual computation for atmospheric correction; the more modern instruments allow keystroke entry of values for temperature and pressure, which are processed and applied automatically as a correction to the output reading of distance. The relationships between temperature, pressure, and error are shown in Fig. 16.3.3. Microwave devices require an additional correction for relative humidity. A psychrometer wet-dry bulb temperature difference of 3°F (2°C) causes an error of about 0.001 percent in distance measurement.

Fig. 16.3.3  Relationship of error to atmospheric pressure and temperature.

Instrument Corrections  Electrooptical devices may also be subject to error attributed to the reflector constant. This results from the physical center of the reflector not being coincident with the effective or optical center. The error can range to 30 or 40 mm and is unique, but usually known, for each reflector. The older EDM devices required this value to

be subtracted from each reading, but in more modern devices the reflector constant can be preset into the instrument and the compensation occurs automatically. Finally, microwave EDM devices are subject to error from ground reflection, which is known as ground swing. Care must be taken to minimize this effect by deploying the devices as high as possible and by averaging multiple measurements.

EDM Field Practice  Practitioners have developed many variations in employing EDM devices. Generally the field procedures are governed by the fact that the distance readout from the device is a slope measurement which must be reduced to the horizontal equivalent. Newer EDM devices may allow for automatic reduction by keystroke input of the vertical angle, if known, as would be the case when the EDM device is employed with a theodolite or when the device is a total station (see later in this section). Otherwise the slope reduction can be accomplished by using elevation differences. With reference to Fig. 16.3.1, it is seen that H = (EL_A + h_a) − (EL_B + h_b) and that the horizontal distance D = (S² − H²)^1/2. EDM devices are sometimes mounted on a conventional theodolite. In such cases the vertical angle as measured by the theodolite must be adjusted to be equivalent to the angle seen by the EDM device. It is common to sight the theodolite at a point below the reflector equal to the vertical distance between the optical centers of the theodolite and the EDM device.
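The slope reduction just described is a one-line computation once the elevation difference is known. The sketch below works the two relationships of Fig. 16.3.1; the station elevations, instrument height, and reflector height are illustrative assumptions.

# EDM slope reduction (Fig. 16.3.1): H = (EL_A + h_a) - (EL_B + h_b), D = sqrt(S^2 - H^2),
# or equivalently D = S*cos(a) when the vertical angle a is measured.  Values are illustrative.
import math

S = 815.424                  # measured slope distance, m
EL_A, h_a = 102.50, 1.55     # elevation of station A and instrument height, m
EL_B, h_b = 147.80, 1.60     # elevation of station B and reflector height, m

H = (EL_A + h_a) - (EL_B + h_b)          # vertical difference between optical centers
D = math.sqrt(S**2 - H**2)               # horizontal distance
a = math.degrees(math.asin(abs(H) / S))  # implied slope angle
print(f"H = {H:.3f} m,  D = {D:.3f} m,  slope angle = {a:.4f} deg")
print(f"check: S cos a = {S*math.cos(math.radians(a)):.3f} m")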

VERTICAL DISTANCE

The acquisition of data for vertical distance is also called leveling. Methods for the determination of vertical distance include direct measurement, tachometry or stadia leveling, trigonometric leveling, and differential leveling. Direct measurement is obvious, and stadia leveling is discussed later.

Trigonometric Leveling  This method requires the measurement of the vertical angle and slope distance between two points and is illustrated in Fig. 16.3.1. The vertical distance H = S sin a. A precise value for the vertical angle and slope distance will not be obtained if the EDM device is mounted on the theodolite or if the instrument heights h_a and h_b are not equal. See "EDM Field Practice," above. Before the emergence of EDM devices, the slope distance was simply taped and the vertical angle measured with a theodolite or transit. If the elevation of an inaccessible point is required, the EDM reflector cannot be placed there, and a procedure employing two setups of a transit or theodolite may be used, as shown in Fig. 16.3.4. Angle Z3 = Z1 − Z2, and the application of the law of sines shows that distance A/sin Z2 = C/sin Z3, where C is the measured distance between the two instrument setups. The vertical distance H = A sin Z1. The height of the stack above the instrument point is equal to H + h1, where h1 is the height of the instrument.

Fig. 16.3.4  Determination of elevation of an inaccessible point.

Differential Leveling  This method requires the use of an instrument called a level, a device that provides a horizontal line of sight to a rod graduated in feet or metres. Figure 16.3.5 shows a vintage level, and Fig. 16.3.6 shows a modern "automatic" level. The difference is that with the older device, four leveling screws are used to center the spirit-level bubble in two directions so that the instrument is aligned with the horizontal. The modern device adjusts its own optics for precise levelness once the three leveling screws have been used to center the rough-leveling (circular) level bubble within the scribed circle. The instrument legs should be planted on a firm footing. The eyepiece is checked and adjusted to the user's eye for clear focus on the crosshairs, and the objective-lens focus knob is used to obtain a clear image of the graduated rod.

Fig. 16.3.5  Y level.

Fig. 16.3.6  Automatic (self-adjusting) level.

To determine the difference in elevation between two points, set the level nearly midway between the points, hold a rod on one, look through the level and see where the line of sight, as defined by the eye and horizontal cross wire, cuts the rod (called the rod reading). Move the rod to the second point and read again. The difference of the readings is the difference in level of the two points. If it is impossible to see both points from a single setting of the level, one or more intermediate points, called

turning points, are used. The readings taken on points of known or assumed elevation are called plus sights; those taken on points whose elevations are to be determined are called minus sights. The elevation of a point plus the rod reading on it gives the elevation of the line of sight; the elevation of the line of sight minus the rod reading on a point of unknown elevation gives that elevation. In Fig. 16.3.7, I1 and I2 are intermediate points between A and B. The setups are numbered. Assuming A to be of known elevation, the reading on A is a + sight; the reading on I1 from setup 1 is a – sight; the reading on I1 from setup 2 is a + sight, and that on I2 is a – sight. The algebraic sum of the plus and minus sights is the difference of elevation between A and B. Target rods can usually be

Fig. 16.3.7 Determining elevation difference with intermediate points.

read by vernier to thousandths of a foot. In grading work the nearest tenth of a foot is good; in lining shafting the finest possible reading is none too good. It is desirable that the sum of the distances to the plus sights approximately equal the sum of the distances to the minus sights to ensure compensation of errors of adjustment. On a side hill this can be accomplished by zigzagging. When the direction of pointing is changed, the position of the level bubble should be checked. In the case of a vintage instrument, the bubble should be adjusted back to a centered position along the spirit level. An automatic level requires the bubble to be repositioned within the centering circle. To Make a Profile of a Line A benchmark is a point of reasonably permanent character whose elevation above some surface—such as sea level—is known or assumed and used as a reference point for elevation. The level is set up either on or a little off the line some distance—not more than about 300 ft (90 m)—from the starting point or a convenient benchmark (BM), as at K in Fig. 16.3.8. A reading is taken on the BM and added to the known or assumed elevation to get the height of the instrument, called HI. Readings are then taken at regular intervals (or stations) along the line and at such irregular points as may be necessary to show change of slope, as at B and C between the regular points. The regular points are marked by stakes previously set “on line” at distances of 100 ft (30 m), 50 ft (15 m), or other distance suitable to the character of

the ground and purpose of the work. When the work has proceeded as far as possible—not more than about 300 ft (90 m) from the instrument for good work—a turning point (TP) is taken at a regular point or other convenient place, the instrument is moved ahead, and the operation is continued. The first reading on the BM and the first reading on a TP after a new setup are plus sights (+S); readings to points along the line and the first reading on a TP to be established are minus sights (–S). The notes are taken in the form shown in Fig. 16.3.9. The elevation of a given point, both sights taken on it, and the HI determined from it all appear on a line with its station (Sta) designation. In plotting the profile, the vertical scale is usually exaggerated from 10 to 20 times.

Inspection and Adjustment of Levels  Leveling instruments are subject to maladjustment through use and should be checked from time to time. Vintage instruments require much more attention than modern devices. Generally, the devices should be checked for verticality of the crosshair, proper orientation of the leveling bubble, and the optical line of sight. The casual user would normally not attempt to physically adjust an instrument but should be familiar with the following methods of inspection for maladjustment.
1. Verticality of crosshair. Set up the level and check for coincidence of the vertical crosshair on a suspended plumb line or a vertical corner of a building. The crosshair ring must be rotated if this condition is not satisfied. This test applies to all types of instruments.
2. Alignment of the bubble tube. This check applies to older instruments having a spirit level. In the case of a Y level, the instrument is carefully leveled and the telescope is removed from the Ys and turned end for end. In the case of a dumpy level, the instrument is carefully leveled and then rotated 180°. In each case, if the bubble does not remain centered in the new position, the bubble tube requires adjustment.
3. Alignment of the circular level. This check applies to tilting levels and modern automatic levels equipped with a circular level consisting of a bubble to be centered within a scribed circle. The instrument is carefully adjusted to center the bubble within the leveling circle and then rotated through 180°. If the bubble does not remain centered, adjustment is required.
4. Line of sight coincident with the optical axis. This check (commonly called the two-peg test) applies to all instruments regardless of construction. The check is performed by setting the instrument midway between two graduated rods and taking readings on both. The instrument is then moved to within 6 ft of one of the rods, and again readings are taken on both. The difference in readings should agree with the first set. If it does not, an adjustment to raise or lower the crosshair is necessary.

Fig. 16.3.8 Making a profile.

Fig. 16.3.9 Form for surveyor’s notes.
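The arithmetic behind a note form such as that of Fig. 16.3.9 can be checked with a short Python sketch. The benchmark elevation, plus-sight, and minus-sight readings below are hypothetical and are not taken from the figure; they simply illustrate the reduction HI = elevation + (+S) and elevation = HI − (−S).

# Reduction of level notes: HI from the benchmark and a plus sight,
# then station elevations from minus sights (all readings hypothetical).
bm_elev = 100.00            # assumed benchmark elevation, ft
plus_sight = 4.32           # +S reading on the BM, ft
hi = bm_elev + plus_sight   # height of instrument (HI)
minus_sights = {"0+00": 5.11, "0+50": 6.78, "1+00": 3.95}  # -S readings by station
for sta, ms in minus_sights.items():
    print(sta, round(hi - ms, 2))   # elevation of each station, ft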

The instruments used for measurement of angle include the transit (Fig. 16.3.10), the optical theodolite, and the electronic total station (Fig. 16.3.11). Angular measurements in both horizontal and vertical planes are obtained with the quantitative value derived in one of three



Fig. 16.3.12 Use of graduated circles to measure the angles between points.

Fig. 16.3.10 Vintage transit.

Fig. 16.3.11 Electronic total station. (PENTAX Corp.)

ways depending on the era in which the instrument was manufactured. In principle, the effective manner in which these devices operate is shown in Fig. 16.3.12. The graduated outer circle is aligned at zero with the inner-circle arrow, and an initial point A is sighted on. Then, with the outer circle held fixed by means of clamps on the instrument, the

inner circle (with pointer and moving with the telescope) is rotated to a position for sighting on point B. The relative motion between the two circles describes the direction angle, which is read precisely with the aid of the scale's vernier. In the actual case, only the transit operates in the mechanical manner described. The operation of an optical theodolite occurs internally, with the user viewing internal scales that move relative to each other. Many optical theodolites have scale microscopes or scale micrometers for precise reading. The electronic total station (discussed later) is also dependent on internal optics, but in addition it gives its angle values in the form of a digital display. Examination of these devices reveals the presence of an upper clamp (controlling the movement of the inner circle described before) and a lower clamp (controlling the movement of the outer circle). Both clamps have associated with them a tangent screw for very fine adjustment. Setting up the instrument involves leveling the device and, unlike the procedure with a level, locating the instrument exactly over a particular point (the apex of the angle). This is accomplished with a suspended plumb bob (in the case of older transits) or an optical (line-of-sight) plummet in modern instruments. Before any measurement, the clamps are manipulated to zero the initial angular reading before the angle to be read is turned. In the case of the digital-output total station, the initial angular reading is zeroed by pushing the zero button on the display. A total station, in effect, has only one motion clamp.
Angle Specification Figure 16.3.13 shows several ways in which direction angles are expressed. The angle describing the direction of a line may be expressed as a bearing, which is the angle measured east or west from the north or south line and not exceeding 90°, e.g., N45°E, S78°E, N89°W. Another system for specifying direction is by azimuth angle. Azimuths are measured clockwise from north (usually) up to 360°. Still other systems exist where direction is expressed by angle to the right (or left), deflection angles, or interior angles.
To Produce a Straight Line Set up the instrument over one end of the line; with the lower motion clamp and tangent screw bring the telescopic line of sight to the other end of the line, marked by a flag, a pencil, a pin, or other object; transit the telescope, i.e., plunge it by revolving it on its horizontal axis, and set a point (drive a stake and "center" it with a tack or otherwise) a desired distance ahead in line with the telescopic line of sight; loosen the lower motion clamp and turn the instrument in azimuth until the line of sight can again be pointed to the other end of the line; again transit and set a point beside the first point set. If the instrument is in adjustment, the two points will coincide; if not, the point marking the projection of the line lies midway between the two established points.
To Measure a Horizontal Angle Set up the instrument over the apex of the angle; with the lower motion bring the line of sight to a distant point in one side of the angle; unclamp the upper motion and bring the line of sight to a distant point in the second side of the angle; clamp and set exactly with the tangent screw; read the angle as displayed, using the scale micrometer or vernier if required.
To Measure a Vertical Angle Set up the instrument over a point marking the apex of the angle A (see Fig.
16.3.14) by the lower motion and the motion of the telescope on its horizontal axis, bring the intersection of the vertical and horizontal wires of the telescope in line with a point as much above the point defining the lower side of the angle as


To Run a Traverse A traverse is a broken line marking the line of a road, bank of a stream, fence, ridge, or valley, or it may be the boundary of a piece of land. The bearing or azimuth and length of each portion of the line are determined, and this constitutes "running the traverse."
To Establish Bearing The bearing of a line may be specified relative to a north-south line (meridian) that is true, magnetic, or assumed; the bearing angle is less than 90°. Whatever the reference meridian, the instrument is set over one end of the line, and the horizontal angle readout is set to zero with the instrument pointing in the direction of the north-south meridian. This is shown in Fig. 16.3.12 when point A is in the direction of the meridian (north). Using the appropriate clamps, the angle is turned by pointing the telescope to the other end of the desired line. In Fig. 16.3.12 the bearing angle would be N60°E. Note that modern electronic total stations generally do not have a magnetic needle and, as a result, bearings or azimuths are typically measured relative to an arbitrary (assumed) meridian.
To establish azimuth, the same procedure as described for the determination of bearing may be used. An azimuth may have values up to 360°. When the preceding line of a traverse is used to orient the instrument, the azimuth of this line is known as the back azimuth, and its value should be set into the instrument as it is pointed along the preceding line of the traverse. The telescope is then turned clockwise to the next line of the traverse to establish the forward azimuth.
In traverse work, azimuths and bearings are not usually measured except for occasional checks. Instead, deflection angles or angles to the right from one course to the next are measured. The initial line (course) is taken as an assumed meridian, and the bearings or azimuths of the other courses with respect to the initial course are calculated from the measured deflection angles or angles to the right. If the magnetic or true meridian for the initial course is determined, the derived bearings or azimuths can likewise be adjusted. In Fig. 16.3.15, the bearing of a is N40°E; of b, N88°30′E; of c, S49°20′E [180° − (40° + 48°30′ + 42°10′)]; of d, S36°40′W [86° − 49°20′, or 40° + 48°30′ + 42°10′ + 86° − 180°]; and of e, N81°20′W [180° − (36°40′ + 62°)]. No azimuths are shown in Fig. 16.3.15, but note that the azimuth of a is 40° and the back azimuth of a is 220° (40° + 180°). The azimuths of b and c are 88°30′ and 130°40′, respectively.

Fig. 16.3.15 Bearings derived from measured angles.
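Derivations such as those just given for Fig. 16.3.15 can be checked by accumulating azimuths from the deflection angles and converting each azimuth to a bearing. The following Python sketch repeats the starting azimuth and deflection angles of the example above; the formatting of the bearing string is illustrative only.

import math  # not strictly needed; shown for completeness
# Accumulate azimuths from right deflection angles, then express each as a bearing.
def to_bearing(az):
    az %= 360.0
    if az <= 90:  return "N %.4g E" % az
    if az <= 180: return "S %.4g E" % (180 - az)
    if az <= 270: return "S %.4g W" % (az - 180)
    return "N %.4g W" % (360 - az)

az = 40.0                                      # azimuth of course a, degrees
for defl in (48.5, 42 + 10/60, 86.0, 62.0):    # deflections to b, c, d, e (48 deg 30 min = 48.5, etc.)
    az = (az + defl) % 360.0
    print(round(az, 4), to_bearing(az))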

Fig. 16.3.13 Diagram showing various methods of specifying direction angles.

the telescope is above the apex; read the vertical angle, turn the telescope to a point which is the height of the instrument above the point marking the upper side of the angle and read the vertical angle. How to combine the readings to find the angle will be obvious.

Fig. 16.3.14 Measuring a vertical angle.

Inspection and Adjustment of the Instrument The transit, optical theodolite, and total station devices are subject to maladjustment through use and should be checked from time to time. Regardless of the type of instrument, the basic relationships that should exist are (1) the plate level-bubble tubes must be horizontal, (2) the line of sight must be perpendicular to the horizontal axis, (3) the line of sight must move in the vertical plane, (4) the bubble (if there is one) of the telescope tube must be centered when the scope is horizontal, and (5) the reading of vertical angles must be zero when the instrument is level and the telescope is level. All modern devices are subject to the following checks: (1) adjustment of the plate levels, (2) verticality of the crosshair, (3) line of sight perpendicular to the horizontal axis of the telescope, (4) the line of sight must move in a vertical plane, (5) the telescope bubble must be centered (two-peg test), (6) the vertical circle must be indexed, (7) the circular level must be centered, (8) the optical plummet must be vertical, (9) the reflector constant should be verified, (10) the EDM beam axis and the line of sight must be closely coincident, and (11) the vertical and horizontal zero points should be checked.

SPECIAL PROBLEMS IN SURVEYING AND MENSURATION
To Measure Distances with the Stadia In the telescope of the transit and the optical theodolite are two extra horizontal wires so spaced (when fixed by the maker) that they are 1/100 of the focal length of the objective apart. When looking through the telescope at a rod held in a vertical position, 100 times the rod length S intercepted between the two extra horizontal wires, plus an instrumental constant C, is the distance D from the center of the instrument to the rod if the line of sight is horizontal, or D = 100S + C (see Fig. 16.3.16). If the line of sight is inclined by a vertical angle A, as in Fig. 16.3.17, then if S is the space intercepted on the rod and C is the instrumental constant, the distance is given by the formula D = 100S cos² A + C cos A. For angles less than 5 or 6°, the distance is given with sufficient exactness by D = 100S. Although theory would indicate that distances can thus be determined to within 0.2 ft, in practice it is not well to rely on a precision greater than the nearest foot for distances of 500 ft or less.
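The stadia distance relations above are easily evaluated; in the following Python sketch the rod intercept and vertical angle are hypothetical values chosen only for illustration.

import math
# Stadia distance: D = 100*S + C for a horizontal sight, or
# D = 100*S*cos(A)**2 + C*cos(A) for a sight inclined at vertical angle A.
S = 4.52                   # rod intercept, ft (hypothetical)
A = math.radians(4.0)      # vertical angle (hypothetical)
C = 0.0                    # instrument constant; zero for internal-focusing instruments
D_horizontal = 100 * S + C
D_inclined = 100 * S * math.cos(A) ** 2 + C * math.cos(A)
print(round(D_horizontal, 1), round(D_inclined, 1))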

Fig. 16.3.16 Horizontal stadia measurement.

Fig. 16.3.17 Sloping stadia measurement.


Contour Maps A contour map is one on which the configuration of the surface is shown by lines of equal elevation called contour lines. In Fig. 16.3.18, contour lines varying by 10 ft in elevation are shown. H, H are hill peaks; R, R, ravines; S, S, saddles or low places in the ridge HSHSH. The horizontal distance between adjacent contours shows the distance for a fall or rise of the contour interval—10 ft in the figure. A profile of any line such as AB can be made from the contour map as shown in the lower part of the figure. Conversely, a contour map may be made from a series of profiles, properly chosen. Thus, a profile line run along the ridge HSHSH and radiating profile lines from the peaks down the hills and from the saddles down the ravines would give data for projecting points of equal elevation which could be connected for contour lines. This is the best method for making contour maps of very limited areas, such as city squares or very small parks. If the ground is not too much broken, the small tract is divided into squares and elevations are taken at each square corner, and between two corners on some lines if necessary to get correct profiles.


Only the oldest transits, known as external-focusing devices, have an instrument constant. They can be recognized by noting that the objective lens will physically move as the focus knob is turned. Such devices have not been manufactured for at least 50 years, and therefore it is most probable that, for stadia applications, the value C = 0 should be used in the equations. Likewise, it is certain that the stadia interval factor K in the following equation will be exactly 100. Stadia surveying falls into the category of low-precision work, but in cases where low precision is acceptable, the speed and efficiency of stadia methods are advantageous.
Stadia Leveling This method is related to the previously described trigonometric leveling. The elevation of the instrument is determined by sighting on a point of known elevation with the center crosshair positioned at the height of the instrument (HI) at the setup position. Record the elevation angle A from the setup point to the distant point of known elevation, read the rod intercept S, and apply the following equation to determine the difference in elevation H between the known point and the instrument: H = KS cos A sin A + C sin A. Since K is sure to equal 100 and the stadia constant C for most existing instruments is zero, the value of H is easily determined and the elevation of the instrument HI is known. With HI known, sights can now be made on other points and the above equation applied to determine the elevation differences between the instrument and the selected points. Note also that the horizontal distance D can be determined by a similar equation:

Fig. 16.3.18 Contour map.
Volume of Earth in Foundation and Area Grading The volume of earth removed from a foundation pit or in grading an area can be computed in several ways, of which two follow.
1. The area (Fig. 16.3.19) is divided into squares or rectangles, elevations are taken at each corner before and after grading, and the volumes are computed as a series of prisms. If A is the area (ft²) of one of the squares or rectangles—all being equal—and h1, h2, h3, h4 are corner heights (ft) equal to the differences of elevation before and after grading, the subscripts referring to the number of prisms of which h is a corner, then the volume in cubic yards is

Q = A(Σh1 + 2Σh2 + 3Σh3 + 4Σh4)/(4 × 27)
In Fig. 16.3.19 the h's at A0, D0, D3, C5, and A5 would be h1's; those at B0, C0, D1, D2, C4, B5, A4, A3, A2, and A1 would be h2's; that at C3 would be an h3; and the rest h4's. The rectangles or squares should be of such size that their tops and bottoms are practically planes.
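The prism formula lends itself to a short computation. The Python sketch below assumes a grid of equal 50-ft squares and hypothetical corner heights grouped by the number of prisms each corner serves.

# Volume by the grid-of-squares method:
# Q (yd^3) = A*(sum h1 + 2*sum h2 + 3*sum h3 + 4*sum h4)/(4*27),
# where A is the area of one square (ft^2) and h_n are corner heights (ft)
# common to n prisms.  All heights below are hypothetical.
A = 50.0 * 50.0                       # area of each square, ft^2
h1 = [2.1, 1.8, 2.4, 1.6]             # corners belonging to one prism
h2 = [2.0, 2.2, 1.9, 2.5, 2.3]        # corners common to two prisms
h3 = [2.6]                            # corners common to three prisms
h4 = [2.2, 2.4]                       # corners common to four prisms
Q = A * (sum(h1) + 2*sum(h2) + 3*sum(h3) + 4*sum(h4)) / (4 * 27)
print(round(Q, 1), "yd^3")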

D = KS cos² A + C cos A
Stadia Traverse A stadia traverse of low precision can be quickly performed by recording the stadia data for rod intercept S and vertical angle A, plus the horizontal direction angle (or deflection angle, angle to the right, or azimuth), for each occupied point defining a traverse. Application of the equation for distance D eliminates the need for laborious taping of distances. Note that modern surveying with EDM devices makes stadia surveying obsolete.
Topography Low-precision location of topographic details is also quickly performed by stadia methods. From a known instrument setup position, the direction angle to the landmark features, along with the necessary values of rod intercept S and vertical angle A, allows computation of the vertical and horizontal location of the landmark position from the foregoing equations for H and D. If sights are made in a regular pattern as described in the next paragraph, a contour map can be developed.
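The stadia-leveling relations for H and D reduce to a few lines of arithmetic. The following Python sketch uses hypothetical field readings, with K = 100 and C = 0 as noted above.

import math
# Elevation difference and horizontal distance from one stadia sight:
# H = K*S*cos(A)*sin(A) + C*sin(A);  D = K*S*cos(A)**2 + C*cos(A)
K, C = 100.0, 0.0
S = 3.28                    # rod intercept, ft (hypothetical)
A = math.radians(5.5)       # vertical angle (hypothetical)
H = K * S * math.cos(A) * math.sin(A) + C * math.sin(A)
D = K * S * math.cos(A) ** 2 + C * math.cos(A)
print(round(H, 2), round(D, 1))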

Fig. 16.3.19 Estimating volume of earth by squares.

2. A large-scale profile of each line one way across the area is carefully made, as the A, B, C, and D lines of Fig. 16.3.19, the final grade line is drawn on it, and the areas in excavation and embankment are separately measured with a planimeter or by estimation from the drawing. The excavation area of profile A is averaged with that of profile B, and the result multiplied by the distance AB and divided by 27 to reduce to cubic yards. Similarly, the material between B and C is found.



To Pass an Obstacle Four cases are shown in Fig. 16.3.20. If the obstacle is large, as a building: (1) Turn right angles at B, C, D, and E, making BC = DE, whence CD = BE. All distances should be long enough to ensure sufficiently accurate sighting. (2) At B turn the angle K and measure BC to a convenient point. At C turn left 360° − 2K; measure CD = BC. At D turn K for the line DE. BD = 2BC cos (180° − K). (3) At B lay off a right angle and measure BC. At C measure any angle to clear the object and measure CD = BC/cos C. At D lay off K = 90° + C for the line DE. BD = BC tan C. If the obstacle is small, as a tree: (4) At A, some distance back, turn the small angle a necessary to pass the obstacle and measure AB. At B turn the angle 2a and measure BC = AB. At C turn the small angle a for the line AC produced, and transit, or turn the large angle K = 180° − a. If a is but a few minutes of arc, AC = AB + BC with sufficient exactness. If only a tape is available, the right-angle method (1) given above may be used, or an equilateral triangle ABC (Fig. 16.3.21) may be laid out, AC produced a convenient distance to F, the similar triangle DEF laid out, FE produced to H, making FH = AF, and the similar triangle GHI then laid out for the line GH. AH = AF.
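For case (2) above, the span across the obstacle follows directly from the measured deflection and offset. A minimal Python sketch with hypothetical values:

import math
# Case (2): at B deflect by angle K and measure BC; at C turn 360 - 2K and
# measure CD = BC; the distance across the obstacle is BD = 2*BC*cos(180 - K).
K = 150.0            # deflection angle at B, degrees (hypothetical)
BC = 80.0            # measured distance B to C, ft (hypothetical)
BD = 2 * BC * math.cos(math.radians(180.0 - K))
print(round(BD, 2))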

Fig. 16.3.20 Surveying past an obstacle.
To Measure the Distance across a Stream To measure AB, Fig. 16.3.22, B being any established point, tree, stake, or building corner: (1) Set the instrument over A; turn a right angle from AB and measure any distance AC; set over C and measure the angle ACB. AB = AC tan ACB. (2) Set over A, turn any convenient angle BAC and measure AC; set over C and measure ACB. Angle ABC = 180° − ACB − BAC. BA = AC sin ACB/sin ABC. (3) Set up on A and produce BA any measured distance to D; establish a convenient point C about opposite A and measure BAC and CAD; set over D and measure ADC; set over C and measure DCA and ACB; solve ACD for AC, and ABC for AB. For best results the acute angles of either method should lie between 30° and 60°.

Fig. 16.3.21 Surveying past an obstacle by using an equilateral triangle.

Fig. 16.3.22 Measuring across a stream.

To Measure a Visible but Inaccessible Distance (as AB in Fig. 16.3.23) Measure CD. Set the instrument at C and measure the angles ACB and BCD; set at D and measure the angles CDA and ADB. CAD = 180° − (ACB + BCD + CDA); AD = CD sin ACD/sin CAD. CBD = 180° − (BCD + CDA + ADB); BD = CD sin BCD/sin CBD. In the triangle ABD, ½(B + A) = 90° − ½D, where A, B, and D are the angles of the triangle; tan ½(B − A) = cot ½D × (AD − BD)/(AD + BD); AB = BD sin D/sin A = AD sin D/sin B.
Random Line On many surveys it is necessary to run a random line from point A to a nonvisible point B which is a known distance away. On the basis of compass bearings, a line such as AC is run. The distances AC and CB are measured, and the angle BAC is found from its calculated tangent (see Fig. 16.3.24).
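The inaccessible-distance computation can be scripted from the measured base and angles. The Python sketch below uses hypothetical angle values and solves triangle ABD by the law of cosines, which is equivalent to the tangent-rule solution given in the text.

import math
# Inaccessible distance AB from a measured base CD (Fig. 16.3.23):
# solve triangle ACD for AD and triangle BCD for BD, then triangle ABD for AB.
rad = math.radians
CD = 300.0                        # measured base, ft (hypothetical)
ACB, BCD = 40.0, 35.0             # angles measured at C, degrees (hypothetical)
CDA, ADB = 42.0, 38.0             # angles measured at D, degrees (hypothetical)
CAD = 180.0 - (ACB + BCD + CDA)
AD = CD * math.sin(rad(ACB + BCD)) / math.sin(rad(CAD))
CBD = 180.0 - (BCD + CDA + ADB)
BD = CD * math.sin(rad(BCD)) / math.sin(rad(CBD))
AB = math.sqrt(AD**2 + BD**2 - 2 * AD * BD * math.cos(rad(ADB)))
print(round(AD, 1), round(BD, 1), round(AB, 1))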

Fig. 16.3.24 Running a random line.

Fig. 16.3.23 Measuring an inaccessible distance.

To Stake Out a Simple Horizontal Curve A simple horizontal curve is composed of a single arc. Usually the curve must be laid out so that it joins two straight lines called tangents, which are marked on the ground by PT (points on tangent). These tangents are run to intersection, thus locating the PI (point of intersection). The plus of the PI and the angle I are measured (see Fig. 16.3.25). With these values and any given value of R (the radius desired for the curve), the data required for staking out the curve can be computed.

R = 5,729.58/D        L = 100I/D        T = R tan (I/2)
where R = radius, T = tangent distance, L = curve length, C = long chord, and D = degree of curve. In a sample computation, assume that I = 8°24′ and a 2° curve is required. R = 5,729.58/2 = 2,864.79 ft (873 m); L = (100)(8.40/2) = 420.0 ft (128 m); T = 2,864.79 × 0.07344 = 210.39 ft (64 m). The degree of the curve is always twice as great as the deflection angle for a chord of 100 ft (30 m).
Setting Stakes for Trenching A common way to give line and grade for trenching (see Fig. 16.3.26) is to set stakes K ft from the center line, driving them so that the near face is the measuring point and the top is some whole inch or tenth of a foot above the bottom grade or the grade of the center or top of the pipe to be laid. The top of the pipe barrel is perhaps the better line of reference. If preferred, two stakes can be driven on opposite sides and a board nailed across, on which the centerline is marked and the depth to the pipeline given. When only one stake is used, a graduated pole sliding on one end of a level board at right angles is convenient for workmen and inspectors. On long grades, the grade stakes are set by "shooting in." Two grade stakes are set, one at each end of the grade; the instrument is set over one, its height above grade determined, and a rod reading calculated for the distant stake such as to make the line of sight parallel to the grade line; the line of sight is then set at this rod reading; when the rod is taken to any intermediate stake, the height of instrument above grade less the rod reading will be the height of the top of the stake above grade. If the ground is uniform, the stakes may all be set at the same height above grade by driving them so as to give the same rod readings throughout.
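The curve quantities can be generated for any intersection angle and degree of curve. The Python sketch below reproduces the sample computation above (I = 8°24′, 2° curve); the long-chord relation C = 2R sin(I/2) is the standard one and is not written out in the text above.

import math
# Simple circular curve (arc definition): R = 5729.58/D, L = 100*I/D, T = R*tan(I/2).
D = 2.0                                    # degree of curve
I = 8 + 24/60                              # intersection angle, 8 deg 24 min = 8.4 deg
R = 5729.58 / D                            # radius, ft
L = 100.0 * I / D                          # length of curve, ft
T = R * math.tan(math.radians(I / 2))      # tangent distance, ft
C = 2 * R * math.sin(math.radians(I / 2))  # long chord, ft (standard relation)
print(round(R, 2), round(L, 1), round(T, 2), round(C, 2))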

Fig. 16.3.25 Staking out a horizontal curve.

Fig. 16.3.26 Setting stakes for trenching.

To Reference a Point The point P (Fig. 16.3.27), which must be disturbed during construction operations and will be again required as a line point in a railway, pipeline, or other survey, is referenced as follows: (1) Set the instrument over it and set four points, A, B, and C, D on two intersecting lines. When P is again required, the transit is set over B and, with foresight on A, two temporary points close together near P but on opposite sides of the line DC are set; the instrument is then set on D and, with foresight on C, a point is set in the lines DC and BA by setting it in DC under a string stretched between the two temporary points on BA. (2) Points A and E and C and F may be established instead of A, B, C, D. (3) If the ground is fairly level and is not to be much disturbed, only points A and C need be located, and these by simple tape measurement from P. They should be less than a tape length from


P. When P is wanted, arcs struck from A and C with the measured distance for radii will give P at their intersection. Foundations The corners and lines of a foundation are preserved by setting stakes outside the area to be disturbed, as in Fig. 16.3.28. Cords stretched around nails in the stakes marking the reference points will give the referenced corners at their intersections and the main lines of the building. These corners can be plumbed down to the level desired if the height of the stakes above grade is given. It is well to nail boards across the stakes at AB, putting nails in the top edge of the board to mark the points A and B, and if the ground permits, to put all the boards at the same level.

Fig. 16.3.27 Referencing a point which will be disturbed.

Fig. 16.3.28 Reference points for foundation corners.

To Test the Alignment and Level of a Shaft Having placed the shaft hangers as closely in line as possible by the use of a chalk line, the shaft is finally adjusted for line by hanging plumb lines over one side of the shaft at each hanger and bringing these lines into a line found by stretching a cord or wire or by setting a theodolite at one end and adjusting at each hanger till its plumb line is in the line of sight. The position of the line will be known either on the floor or on the ceiling rafters or beams to which the hangers are attached. If the latter, the instrument may be centered over a point found by plumbing down, and sighted to a plumb line at the farther end. To level the shaft, an ordinary carpenter's level may be used near each hanger, or, better, a pole with an improvised sliding target may be hung over the shaft at each hanger by a hook in one end. The target is brought to the line of sight of a leveling instrument, set preferably about under the middle of the shaft, by adjusting the hanger. When the hangers are attached to inclined roof rafters, the two extreme hangers can be put in a line at right angles to the vertical planes of the rafters by the use of a square and cord. The other hangers will then be put as nearly as possible without instrumental test in the same line. The shaft being hung, the two extreme hangers, which have been attached to the rafters about midway between their limits of adjustment, are brought to line and level by trial, using an instrument with a well-adjusted telescope bubble, a plumb line, and an inverted level rod or target pole. Each intermediate hanger is then tested and may be adjusted by trial.
To Determine the Verticality of a Stack If the stack is not in use and its top is accessible, a board can be fitted across the top, the center of the opening found, and a plumb line suspended to the bottom, where its deviation from the center will show any leaning. If the stack is in use or its top not accessible and its sides are battered, the following procedure may be followed. Referring to Fig. 16.3.29, set up an instrument at any point T and measure the horizontal angles between vertical planes tangent, respectively, to both sides of the top and the base, and also the angle a to a second point T1. On a line through T approximately at right angles to the chimney diameter, set the transit at T1 and perform the same operations as at T, measuring also K and the angle b. On the drawing board, lay off K to as large a scale as convenient, and from the plotted T and T1 lay off the several angles shown in the figure. By trial, draw circumferences tangent to the two quadrilaterals formed by the intersecting tangents of the base and top, respectively. The line joining the centers of these circumferences will be the deviation from the vertical in direction and amount. If the base is square, T and T1 should be established opposite the middle points of two adjacent sides, as in Fig. 16.3.30.
To Determine Land Area and Boundaries Two procedures may be followed to gather data to establish and define land area and boundaries. Generally the data is gathered by the traverse method or the radial/radiation-survey method.

Fig. 16.3.29 Determining the verticality of a stack.


Fig. 16.3.30 Determining the verticality of a stack.

The traverse method entails the occupation of each end point on the boundary lines for a land area. The length of each boundary is measured (in more recent times with EDM) and the direction is measured by deflection angle or angle to the right. Reference sources explain how direction angles and boundary lengths are converted to latitudes and departures (north-south and east-west change along a boundary). The latitudes and departures of each course/boundary allow for a simple computation of double meridian distance (DMD), which in turn is directly related by computation to land area. In Fig. 16.3.31 each of the points 1 through 4 would be occupied by the instrument for direction and distance measurement.

Fig. 16.3.31 Land area survey points.

The radial method requires, ideally, only a single instrument setup. In Fig. 16.3.31 the setup point might be at A or any boundary corner with known or assumed coordinates and with line of sight available to all other corners. This method relies on gathering data to permit computation of boundary corners relative to the instrument coordinates. The data required is the usual direction angle from the instrument to the point (e.g., angle a in Fig. 16.3.31) and the distance to the point. The references detail the computation of the coordinates of each corner and the computation of land area by the coordinate method. The coordinate method used with the radial survey technique lends itself to the use of modern EDM and total station equipment. In very low precision work (approximation) the older methods might be used. Stream flow estimates may be obtained by using leveling methods to determine depth along a stream cross section and then measuring current velocity at each depth location with a current meter. As shown in Fig. 16.3.32, the total stream flow Q will be the sum of the products of area A and velocity V for each (say) 10-ft-wide division of the cross section. To define limits of cut and fill once the route alignment for the roadway has been established (see Fig. 16.3.15), the surveyor uses leveling methods to define transverse ground profiles. From this information and the required elevation of the roadbed, the distance outward from the roadway



center is determined, thus defining the limits of cut and fill. The surveyor then implants slope stakes to guide the earthmoving equipment operators. See Fig. 16.3.33.

Fig. 16.3.32 Determination of stream flow.
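A minimal Python sketch of the summation indicated in Fig. 16.3.32, with hypothetical section areas and current-meter velocities:

# Total stream flow Q is the sum of A*V over the subdivisions of the cross section.
areas = [12.0, 18.5, 22.0, 16.0]        # ft^2, hypothetical 10-ft-wide divisions
velocities = [1.1, 1.6, 1.8, 1.3]       # ft/s, measured with a current meter (hypothetical)
Q = sum(a * v for a, v in zip(areas, velocities))
print(round(Q, 1), "ft^3/s")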

Fig. 16.3.33 Stakes showing limits of roadway cut and fill. Datum of Reference The establishment of horizontal position, elevation, and direction is meaningless unless a known reference or datum is specified. It is fundamentally the question of “where is the point?,” “which way is north?,” and “what is the basis for specified latitude, longitude, and direction?” Plane surveying is based on the concept that all distances and directions are as projected upon a horizontal plane which at a single point is tangent to the earth spheroid. This practice serves nicely for local surveys but as the survey expands, the distances and angles measured on the earth spheroid will depart more from the corresponding data as projected upon the horizontal plane. The deviation can become unacceptable as the limits of plane surveying are reached and the accuracy of geodetic surveying is required. However, the methods of geodetic surveying (dealing with spherical angles and coordinates) are complex and require difficult computations, expense, and expertise. Mapping of coordinates and direction to establish geodetic points has been carried out by the U.S. Coast and Geodetic Survey (USCGS, now NGS and NOAA), originating many years ago. The mapping of geodetic position is represented on the Lambert conformal projection or the transverse Mercator projection. The Lambert projection is appropriate in most states and areas having their long dimension in the east-west direction, while the Mercator projection is best used in places having their longer extents in the north-south direction. The Lambert projection utilizes a developed conic surface and the Mercator projection uses a developed cylinder. The methods to produce the flat/developed cone or cylinder are found in the references. Simplification has been accomplished through evolution of the State Plane Coordinate System (SPCS), which is a rectangular coordinate definition. The USCGS has developed the SPCS for every state, providing a common reference for all points lying in the state plane. On the SPCS,

grid points are located by x-y values and all longitudinal lines are parallel to a central meridian (unlike converging true geodetic meridians). Thus the traditional thinking of azimuth direction as (a) geographic north-south, (b) magnetic north-south, and (c) assumed north-south is now expanded to include grid north-south. Low-order surveys can be conducted throughout the state plane with no adjustment in distance. For higher-order surveys, however, a grid factor (1.000 or very close to 1.000) is used. The grid factor is the product of a scale factor and an elevation factor. These values are available in tables for each state. The adoption of SPCS within a state or region is gaining support within the profession. Lawmakers and the courts have been slow to incorporate the principles of SPCS. Among the advantages of the system are the following:
1. When the SPCS (x and y) coordinates of a point are lost they may be regained from relationship to nearby known SPCS points within a degree of accuracy of its original specification.
2. Lengthy statewide surveys (right-of-way for highways, railroads, utilities) may be conducted without loop closure and with near geodetic accuracy.
3. Maps referenced to SPCS will always "fit" when joined regardless of the agency producing the map.
4. There is a common grid north direction definition (see Resurvey and Retracement below).
Resurvey and Retracement On occasion it may become necessary to verify and reestablish boundaries described in a deed description dating from many years ago. At least two problems arise in the effort to find the original monuments, corners, and ties referenced in the original survey. Equipment used in the older surveys does not approach the precision of that used in recent times, and the basic measurements of angle and distance must be examined and interpreted carefully. Angles are almost always based upon a magnetic reference and usually specified to about 15′ to 30′, consistent with the precision of the compass box of the era. Subsequent measured angles are limited to the refinement of the graduated circle of the transit, often specified to no better than 30′. The magnetic reference is subject to adjustment to current declination, which is arguably an inexact process. Additionally, distance measurement may be subject to the diligence of the original crew of chain men and the surveyor. The possibility also exists that the distances were measured on the slope and not converted to horizontal equivalent. It is known that often the early chain men were paid "by the foot" or "by the chain," which is a practice that does not lend itself to due diligence. A resurvey/retracement requires the consideration of several quantities. Research is required to determine all the relevant information from deeds, legal records, historic documents, etc. The research goal is to identify references that include monuments (natural or artificial), identification of corner markers, established lines (fence lines, stone walls, trails, streams), acreage, and referenced angles and distances. After the thorough research, the subsequent visit to the field is likely to reveal that features and data required for definition of the survey may be missing or conflicting. In the case of conflict there is an accepted order of preference to be applied. Courts tend to follow precedence, as listed, in the absence of intent that is contrary. Priorities are as follows:
1. Natural monuments or landmarks.
These are objects of specific reference or set by the early surveyor and found along the boundary. 2. Secondary or artificial monuments which are references documented by other maps or deeds and established by other previous surveyors. 3. Boundaries or lines of record to adjoining tracts documented by maps, plats, or deeds. 4. Bearing or distance references (ties) to on-site or off-site objects. 5. Distances are deemed to be more reliable than angle bearing references. 6. Angle measurement related to direction of a line. 7. Quantity or area may be cited in the old survey but is inferior to all the above cited items.


With the continued development of the State Plane Coordinate System, many surveying groups argue for the inclusion of coordinates in the hierarchy of calls listed above. GLOBAL POSITIONING SYSTEM

The GPS concept is based on a system of 24 satellites in six orbital planes with four satellites per orbit. Two elements of data support the underlying principle of GPS surveying. First, radio-frequency signals are transmitted from the satellites and received by GPS receiver units at or near the earth's surface. The signal transmission time is combined with the velocity of propagation (approximately the speed of light) to result in a pseudodistance d = Vt, where V is the velocity of propagation and t is the time interval from source to receiver. A second requirement is the precise location of the satellites as published in satellite ephemerides. The satellite data (ephemeris and time) is transmitted on two frequencies referred to as L1 (1,575.42 MHz) and L2 (1,227.60 MHz). The transmissions are band-modulated with codes that can be interpreted by the GPS receiver-signal processor. The L1 frequency modulation results in two signals: the precise positioning service (PPS) P code and the C/A or standard positioning service (SPS) code. The L2 frequency contains the P code only. The most recent GPS receivers use dual-frequency receiving and process the C/A code from L1 and the P code from L2.
Single-Receiver Positioning A single point position can be established using one GPS receiver where the C/A or SPS code is processed from several satellites (at least four for both horizontal and vertical position). The reliability depends on the uncertainty of the satellite position, which is ±15 m in the ephemeris data transmitted. This handicap can be circumvented through the application of Loran C technology. The surveyor may also obtain high precision through the use of two receivers and differential positioning. The U.S. government reserves the right to degrade ephemeris data for reasons of national security.
Differential Positioning Multiple GPS receivers may be used to attain submeter accuracy. This is done by placing one receiver at a known location and multiple receivers at locations to be determined. Virtually all error inherent in the transmissions is eliminated or cancels out, since the difference in coordinates from the known position is being determined. By averaging data taken over time and postprocessing, this technique can approach millimeter precision.
Field Practice A procedure, sometimes called leapfrogging, can be used with three GPS receivers (R1, R2, and R3) operating simultaneously (see Fig. 16.3.34). Receiver R1 is placed at a known control point P, while R2 and R3 are placed at L and M. The three units are operated for a minimum of 15 min to obtain time and position data from at least four satellites. The R1 at P and R2 at L are leapfrogged to N and O,


respectively. This procedure is repeated until all points have been occupied. It is also desirable to include a partial leapfrog to obtain data for the OP line if a closed traverse is wanted. A second procedure requires two receivers to be initially located at two known control points. The units are operated for at least 15 min in order to establish data defining the relative location of the two points. Then one of the GPS receivers is moved to subsequent positions until all points have been occupied. As in the previous procedure, signals from a minimum of four satellites must be acquired.
GPS Computing The data for time and position of a satellite acquired by a GPS station are subsequently postprocessed on a computer. The object is to convert the time and satellite-position data to the earth-surface position of the GPS receiver. At least four satellites are needed for a complete (three-dimensional) definition of coordinate location and elevation of a control point. The computer program typically requires input of the ephemeris coordinates of the (at least) four satellites, the propagation velocity of the radio transmission from the satellite, and a satellite-clock offset determined by calibration or by receipt of correction data from the satellite.
General Survey Computing The microcomputer is now an integral part of the processing of survey data. Software vendors have developed many programs that perform virtually every computation required by the professional surveyor. Some programs will also produce the graphic output required for almost every survey: site drawings, plat drawings, profile and transverse sections, and topographic maps.
The Total Station The construction of the total station device is such that it performs electronically all the functions of a transit, optical theodolite, level, and tape. It has built-in EDM capability. Thus the total data-gathering requirements are available in a single instrument: horizontal and vertical angles plus the distance from the instrument's optical center to the EDM reflector. A built-in microprocessor converts the slope distance to the desired horizontal distance, and also to vertical distance for leveling requirements. If the height of the instrument's optical center and the height of the reflector are keyed in, the on-board computer can display the actual elevation difference between ground points, with correction for earth curvature and atmospheric refraction. A typical total station is shown in Fig. 16.3.11. Another feature found in some total stations is data-collection capability. This permits data as read by the instrument (angle, distance) to be electronically output to a data collector for transfer to a computer for later processing. A total station operating in conjunction with a remote processing unit allows a survey to be conducted by a single person. The variety of features found on total station devices is enormous. The total station changes some of the traditional procedures in surveying practice. An example is the laying out of a route curve as shown in Fig. 16.3.25. The references describe the newer methods.

Fig. 16.3.34 Determining positions by leapfrogging.

Section

17

Industrial Engineering BY

ERWIN M. SANIGA Dana Johnson Professor of Information Technology and Professor of

Operations Management, University of Delaware. SCOTT JONES Professor, Department of Accounting & MIS, Alfred Lerner College of Business

and Economics, University of Delaware. ASHLEY C. COCKERILL Vice President and Event Coordinator, nanoTech Business, Inc. VINCENT M. ALTAMURO President, VMA, Inc., Toms River, NJ. ANDREW M. DONALDSON Project Director, Parsons E&C, Reading, PA. ROBERT F. GAMBON Power Plant Design and Development Consultant. EZRA S. KRENDEL Emeritus Professor of Operations Research and Statistics, Wharton School,

University of Pennsylvania.

17.1

OPERATIONS MANAGEMENT by Erwin M. Saniga Forecasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-3 Inventory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-3 Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-5 Aggregate Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-6 Quality Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-7 Statistical Process Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-7 Linear Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-8 Queuing Theory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-10 Monte Carlo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-10 17.2

COST ACCOUNTING by Scott Jones Role and Purpose of Cost Accounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-11 Measuring and Reporting Costs to Stockholders . . . . . . . . . . . . . . . . . . . . 17-11 Classifications of Costs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-11 Methods of Accumulating Costs in Records of Account. . . . . . . . . . . . . . . 17-13 Elements of Costs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-13 Activity-Based Costing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-14 Management and the Control Function . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-14 Types of Cost Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-15 Budgets and Standard Costs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-15 Transfer Pricing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-16 Supporting Decision Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-16 Capital-Expenditure Decisions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-17 Cost Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-18 17.3

ENGINEERING STATISTICS AND QUALITY CONTROL by Ashley C. Cockerill Engineering Statistics and Quality Control. . . . . . . . . . . . . . . . . . . . . . . . . 17-18 Statistics and Variability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-18 Characterizing Observational Data: The Average and Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-18 Process Variability—How Much Data? . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-19

Correlation and Association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-20 Comparison of Methods or Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-21 Go/No-Go Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-22 Control Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-23 17.4

METHODS ENGINEERING by Vincent M. Altamuro Scope of Methods Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-25 Process Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-25 Workplace Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-25 Methods Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-26 Elements of Motion and Time Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-26 Method Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-26 Operation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-26 Principles of Motion Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-27 Standardizing the Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-27 Work Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-28 Time Study Observations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-28 Performance Rating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-29 Allowances for Fatigue and Personal and Unavoidable Delays. . . . . . . . . . 17-30 Developing the Time Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-30 Time Formulas and Standard Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-30 Uses of Time Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-31 17.5 COST OF ELECTRIC POWER by Andrew M. Donaldson and Robert F. Gambon Constructed Plant Costs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-32 Fixed Charges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-35 Operating Expenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-36 Overall Generation Costs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-37 Transmission Costs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-37 Power Prices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-37



17.6

HUMAN FACTORS AND ERGONOMICS by Ezra S. Krendel

Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-39 Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-39 Psychomotor Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-40 Skills and Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-40 Manual Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-41

McRuer’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-42 Modeling the Human in the MMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-42 17.7

AUTOMATIC MANUFACTURING by Vincent M. Altamuro Design for Assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-43 Autofacturing Subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-45 Micron and Nano Autofacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-46

17.1 OPERATIONS MANAGEMENT by Erwin M. Saniga REFERENCES: Meredith and Shafer, “Operations management for MBAs,” 2d ed., John Wiley & Sons. Chase, Aquilano, and Jacobs, “Operations Management for Competitive Advantage,” 9th ed., McGraw-Hill Irwin. Stevenson, “Production/ Operations Management,” 5th ed., Irwin. Montgomery, “Introduction to Statistical Quality Control,” 3d ed., John Wiley and Sons. Hoerl, “Six Sigma and the Future of the Quality Profession,” Quality Progress, June 1998, pp. 35–42. Mentch, “Manufacturing Process Quality Optimization Studies,” Journal of Quality Technology, vol. 12, no. 3, July 1980, pp. 119–129. Dilworth, “Operations Management: Providing Value in Goods and Services,” 3d ed., Dryden Press. Box, Jenkins, and Reinsel, “Time Series Analysis,” 3d ed., Prentice Hall. Deming, “Out of the Crisis,” MIT Center for Advanced Engineering Study.

Operations management is concerned with the production of the goods and services of an organization. In particular, operations management problems focus on determining how an organization can produce high-quality goods or services efficiently, where efficiency means producing goods and services in a timely and cost-effective manner. In the past, the field was limited to the study of manufacturing processes, but in the last several decades the emphasis has broadened to include service processes; service organizations now outnumber manufacturing organizations in the United States. Many of the same principles of effective operations management that have been found to work in manufacturing organizations also improve the efficiency and quality of service organizations. Operations management has generated increased interest among organizations today because of the competitive advantage that accrues to an organization that has a high-quality product or service and manages its production efficiency in an optimal way. Much of the terminology common in business today reflects this emphasis on operations management strategies; examples include six sigma, lean manufacturing, supply chain management, and just-in-time inventory methods.
FORECASTING

There are three general types of forecasting models used in organizations today: time series models, regression models, and subjective (judgmental) models.
Time series models are based upon the assumption that the past behavior of the variable to be forecast, say sales, is the best indicator of its future behavior. Common models that have been used in many firms include moving-average methods and exponential smoothing methods; more complex models due to Box and Jenkins are used when the simpler models do not suffice.
Moving-average methods are based upon simply averaging the past N observations of a stream of data to obtain a forecast for the next period. For example, a three-period moving average of sales is obtained by averaging the past three periods of sales; this average is the forecast of the next period's (or future periods') sales. The formula for a moving-average forecast of sales is simply

St+1 = (Xt + Xt−1 + ⋯ + Xt−N+1)/N

where St+1 is the forecast for period t + 1, Xi is sales in period i, and N is the number of periods in the moving average. An obvious weakness of moving-average forecasts is that equal weight is placed on all N observations; it seems more plausible to weight the more recent observations more heavily in determining a forecast. Exponential smoothing is a common method that gives more recent observations more weight in the averaging scheme. The model for exponential smoothing is

St+1 = αXt + (1 − α)St

where α is the smoothing coefficient.
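A brief numerical illustration of both formulas follows as a Python sketch; the sales figures, the initialization of the smoothed value, and the value of α are hypothetical.

# Three-period moving average and simple exponential smoothing.
sales = [102, 110, 108, 115, 120, 118]     # hypothetical monthly sales
N = 3
ma_forecast = sum(sales[-N:]) / N          # S_{t+1} = (X_t + ... + X_{t-N+1})/N
alpha = 0.2
S = sales[0]                               # initialize with the first observation (a common convention)
for x in sales[1:]:
    S = alpha * x + (1 - alpha) * S        # S_{t+1} = alpha*X_t + (1 - alpha)*S_t
print(round(ma_forecast, 1), round(S, 1))  # next-period forecasts from each method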

Note that the model is dependent upon the choice of α. If α is near 1, there is very little smoothing taking place and the forecast follows the rapid changes in the data; if α equals 1, the forecast for the next period is simply the actual observation for the previous period. If α is near zero, the forecast approaches the average of the past data. In practice, common values of α are between 0.1 and 0.3. Trend and seasonalities are not accounted for in the exponential smoothing model presented above. One can account for trend by using a model called double exponential smoothing, which is a simple extension of the model presented above. Seasonalities are common in many industries. Consider the demand for turkeys: one would expect peaks around the Thanksgiving and Christmas holidays in the United States. The usual exponential smoothing or double exponential smoothing models would yield forecasts that lag the actual demand. One simple and effective method for handling these seasonalities is first to deseasonalize the data by calculating the seasonal component. An example would be to calculate the percentage of sales occurring in each month by averaging the last 3 years of monthly data (assuming there is a monthly seasonality) and then dividing the data by this monthly index. One can then develop a forecast on the deseasonalized data and, when the forecast is obtained, multiply it by the seasonal index for the month in which a forecast is desired. More complicated methods for forecasting time series are due to George Box and Gwilym Jenkins and are based upon a set of models called ARIMA, or mixed autoregressive and moving-average models. These are more general and powerful models of the class mentioned above and have been shown to be extremely effective in forecasting in practice. Consult their book for a detailed explanation of this class of models.
A second class of forecasting models is based upon using explanatory variables in a regression model to forecast sales. The regression model may be represented by the function F, where

Y = F(X1, X2, . . . , Xk) + e

The variables Xj, j = 1, 2, . . . , k, are the explanatory variables, and the variable Y is called the dependent variable. The term e represents the error. For example, we may hypothesize that sales (Y) can be explained by price (X1) and average delivery time (X2). We can build the model by entering data from the past, where we have recorded average delivery time in a month, the price in a month, and sales in a month.
Human judgment is the basis for a third class of models. These are commonly used in practice and, at times, yield accurate forecasts. Perhaps the most common types of judgmental forecasts are those based upon management or sales force opinions obtained from discussions with customers. Surveys of customers are also used to build these types of forecasts.
INVENTORY MANAGEMENT

Inventory decisions have a major impact on the profitability of an organization. Organizations that carry large inventories incur large holding costs, which hurt the bottom line. On the other hand, maintaining low inventories may prevent an organization from meeting variability in demand; low inventories may also reduce flexibility in scheduling, prevent the organization from taking advantage of economic order quantities, or leave no allowance for the variation that occurs in vendor-supplied inputs. Thus, the inventory decision is very important in organizations. Essentially, the inventory decision can be broken down into two major components: (a) determining when items should be ordered and (b) determining how much to order.



The standard criterion on which to base inventory decisions is cost. Four costs are generally considered: holding or carrying costs, ordering costs, setup costs, and shortage costs. Holding costs include as a major component the cost of the capital tied up in the inventory; additional components include the costs of storage, insurance, deterioration, obsolescence, etc. Ordering costs refer to the costs incurred when a purchase order is placed or when a production order is given. Setup costs include the clerical costs of changing production. Shortage costs are difficult to measure since they include components for lost sales, lost customers, or penalties for late delivery.
When analyzing inventory decisions, analysts place inventory in two categories: independent demand inventory and dependent demand inventory. Independent demand items are items whose demand does not depend on the demand for another item. Dependent demand items are ones for which the demand depends on the demand for another item. For example, the demand for bicycles in an organization may be an independent demand item. Wheels for the bicycles, on the other hand, are dependent demand items, since their demand depends on the demand for bicycles. That is, if the demand estimate for bicycles is 500 units, the demand for wheels is 1,000 units.
The most common inventory model is the economic order quantity (EOQ) model. It is used when one is interested in determining the optimal order quantity, where the criterion is cost minimization and where the demand for the item is independent. The assumptions of the EOQ model are that demand is satisfied on time, price is fixed (there are no price breaks), stock is depleted linearly, and there are no stockouts. Under these assumptions one can show that the EOQ is

EOQ = (2Rcp/ch)^1/2

where cp is the ordering cost, R is the yearly demand, and ch is the holding cost. In addition, it can be shown that the reorder point is reached when the inventory falls to a level P, where P = LR/52 and L is the lead time in weeks. Other similar models are developed for situations where price breaks are available or where backorders are allowable. An equivalent model to the EOQ is available when the analyst wishes to determine the economic production quantity or lot size (EPQ). Here, one can find the EPQ as

EPQ = {2cpR/[ch(1 − r/p)]}^1/2

where r = daily usage and p = production rate per day.

When demand is not constant, organizations must provide a level of safety stock if they wish to maintain a certain service level. Safety stock, then, is the excess inventory carried over expected demand. In the standard EOQ model, which is also called the fixed order quantity model, the EOQ is calculated in the same way whether the demand is constant or varies; but if the demand varies, the reorder point differs and must be calculated by using some knowledge of the distribution of demand. For example, if daily demand is normally distributed, the reorder point becomes

P = dL + zσL

where d is the average daily demand, L is the lead time in days, σL is the standard deviation of demand during the lead time, and z is the number of standard deviations associated with a particular service level probability.

Another way to address this problem is to use a fixed time period model. Here, order quantities vary from time period to time period but the reorder point remains the same. An advantage of this model is that one does not have to constantly account for the inventory; all one must keep track of is the time period. A disadvantage of this model is that it requires the maintenance of a higher level of inventory. Most textbooks present the models and discuss the implications of the fixed time period model. There are other simpler systems that enjoy popularity in applications because they do not rely on any assumptions such as a normal distribution of demand or constant usage. Some common systems include the ABC system of classifying inventory and one- and two-bin systems.
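For illustration, a minimal Python sketch of the EOQ, EPQ, and reorder-point calculations above follows; the function names and the sample figures in the closing comment are assumptions made for the example, not data from the handbook.

from math import sqrt

def eoq(R, cp, ch):
    # economic order quantity: (2*R*cp/ch)**0.5
    return sqrt(2.0 * R * cp / ch)

def epq(R, cp, ch, r, p):
    # economic production quantity: {2*cp*R / [ch*(1 - r/p)]}**0.5
    return sqrt(2.0 * cp * R / (ch * (1.0 - r / p)))

def reorder_point_constant_demand(R, lead_time_weeks):
    # P = L*(R/52), with L in weeks and R the yearly demand
    return lead_time_weeks * R / 52.0

def reorder_point_variable_demand(avg_daily_demand, lead_time_days, sigma_lead, z):
    # P = d*L + z*sigma_L, where sigma_lead is the std deviation of demand over the lead time
    return avg_daily_demand * lead_time_days + z * sigma_lead

# Illustrative figures: R = 5,200 units/yr, cp = $50/order, ch = $2 per unit-year
# eoq(5200, 50, 2) is about 510 units; reorder_point_constant_demand(5200, 2) is 200 units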

The ABC system for inventory management essentially classifies inventory into one of three levels: A, B, and C. The classification is based upon the dollar value of inventory, which is determined by multiplying annual usage by cost. The A items are then defined as the ones with the top 75 to 80 percent of dollar value, B items are the items with the next 15 percent of value, and C items are those with the next 5 to 10 percent of value. Once the items are classified, management uses this information to determine where to apply the greatest control. The one-bin system is very simple to manage, but its cost may be far from minimal compared with more complex systems. To use the one-bin system the analyst simply replenishes inventory up to a maximum level on a regular basis, say weekly for A items. The two-bin system is one in which there are two bins of the same inventory. Inventory is used from the first bin until it is used up; subsequently an order is placed for replenishment and inventory is used from the second bin. The second bin should contain enough inventory to cover needs during the order lead time. The amount in the second bin thus should equal P, where P is the order point defined above.

The methods discussed above constitute some of the more valuable methods of inventory management available to managers, but at times various industries employ other methods that are more usable because of the difficulty in managing a large number of items. For example, shoes are distributed according to batches organized by sizes. Department stores use a method called “open to buy,” which refers to the budgeted value of inventory minus the amount currently spent, for various departments. That is, instead of managing individual inventory items, the manager focuses on managing the cost of inventory for a department as a whole.

As mentioned earlier, the demand for inventory can be independent or dependent. When demand for an inventory item is dependent upon the demand for another item, as is the case for the bicycle wheels examined before, the methods of managing inventory differ from the case where demand is independent. A common technique for managing dependent demand inventories is materials requirements planning (MRP), an accounting information system technique used for keeping track of inventory needs for these items. MRP has been popular for about 30 years, and computer programs and systems to perform the cumbersome calculations associated with MRP are readily available. MRP is based upon knowledge of the bill of materials for an item, the master production schedule, and a record of the inventories currently in stock. The bill of materials is simply a list of the components of a particular product. For our bicycle example, the bill of materials would contain two wheels, two tires, a set of handlebars, a set of rear brakes, front brakes, and various other items. The master production schedule provides information on when to produce these bicycles and what quantity to produce. The MRP then uses these inputs along with lead times to provide the production manager with planned order releases along with various other kinds of information, depending upon the needs of the manager. In our simple example, suppose our forecasted demand is such that we need to produce 3,000 bicycles in March and in June; this would be the master production schedule. Suppose we have 3,000 wheels in inventory and the lead time for wheels is 1 month.
The MRP would calculate that the organization needs 6,000 wheels in March and in June, but since the organization has 3,000 on hand it would show the necessary order releases are for 3,000 wheels on February 1 and for 6,000 wheels on May 1. Of course, in practice MRP is much more complicated than the simple example presented here, for a number of reasons. For example, some firms produce many products and some of these have common parts, or dependent items. Order releases must account for these factors. Also, firms might want to take advantage of price breaks for larger lots or by ordering less often to minimize ordering costs. In MRP, this problem is called lot sizing. A variety of techniques for lot sizing are available and range from the simplest, called lot-for-lot ordering, to complex ones such as dynamic programming techniques—for example, the Wagner Whitin model. The lot-for-lot model, for example, indicates that orders are to be placed when they are needed for each period; here, holding costs are minimized.
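The netting and lead-time offsetting that MRP performs for the bicycle-wheel example can be sketched in a few lines of Python; the function below is a simplified, single-item illustration with no lot sizing, not a full MRP system.

def planned_order_releases(gross_requirements, on_hand, lead_time):
    # gross_requirements: {period: quantity needed}; periods are numbered (e.g., months)
    releases = {}
    available = on_hand
    for period in sorted(gross_requirements):
        needed = gross_requirements[period]
        net = max(0, needed - available)          # net requirement after using stock
        available = max(0, available - needed)
        if net > 0:
            releases[period - lead_time] = net    # release one lead time earlier
    return releases

# Wheels for 3,000 bicycles in March (3) and June (6), 3,000 wheels on hand, 1-month lead time:
print(planned_order_releases({3: 6000, 6: 6000}, on_hand=3000, lead_time=1))
# {2: 3000, 5: 6000}, i.e., release 3,000 wheels in February and 6,000 in May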


Related to MRP is an approach that broadens the scope of MRP to the firm as a whole. That is, all of the factors going into production are considered together on a larger scale than just the production system. This includes the overall planning of the business: marketing and finance functions, for example, might take part in the determination of the master production schedule, forecasting, the financing requirements of the process, etc. This approach is called MRP II and has been popular for about 20 years.

An approach to managing production systems that includes the management of inventory is called the just-in-time approach. This approach was initially used at Toyota in Japan and is very much related to the teachings of W. Edwards Deming in that it requires that the production system be in a state of statistical control, that suppliers be kept to a minimum, that scrap and rework be avoided as a policy rather than as a cost consideration, that preventive maintenance be performed, that continuous improvement be in place, and that workers be team based. Because the system is in control and there is a minimum of scrap or rework, an innovative method of managing the inventory is possible. This method is to manage inventory, whether purchased from a vendor or work-in-process inventory, so that large stocks of inventory need not be held, thereby minimizing holding costs. Other common elements of just-in-time systems are that setups are quick, and therefore not costly, and that lot sizes are small.

Additionally, just-in-time systems move work in a pull rather than a push manner. Push systems release work according to a plan or forecast, with less regard for the actual demands of downstream stations. Pull systems, on the other hand, work in a just-in-time manner such that demand downstream pulls work from upstream. As a simple example, a firm might produce according to expected demand only rather than by efficient lot sizes. A drawback encountered in just-in-time systems is that, since the supplier is penalized for late delivery, it often prepares and stores finished product which is then shipped to arrive at the customer just in time. The carrying cost for this storage, which the customer thinks has been avoided, is passed on to the customer in the form of an increased price.

One way in which to manage the information in a pull system is with a kanban sheet, which is simply a record of what work is taking place. For example, a kanban sheet might be attached to a lot of parts. When this lot is needed for production, the kanban sheet is placed in an area where the need for replenishment is indicated. JIT systems are conceptually very attractive and have been used to advantage by a number of companies, especially those in Japan.

SCHEDULING

Scheduling the accomplishment of tasks within the operations of a manufacturing or service environment is dependent upon a number of factors. First, there are differences between services and manufacturing in that the concept of an inventory of service has little meaning. Second, within manufacturing there are different types of production systems used for different types of products. More specifically these can be broken down into systems in which the same product is made many times (flow shop), a system in which several products are made by the same manufacturer on the same production system, a system in which similar products are made to order (job shop), and finally a system in which a single product is made (project). Some examples of these are respectively automobile production, a private press, a machine shop producing milled products, and construction of a skyscraper. In a service environment, operations scheduling is a matter of providing the service when it is demanded by the customer. Analytical approaches involve developing forecasts of when customers arrive and the probability distributions of these arrivals and using these to build service capacity so that desired service levels are achieved. These service levels are usually expressed in terms of the probability or percentage of the time customers are serviced within a time limit. For example, the credit card company MBNA answers more than 90 percent of its phone calls from customers within two rings.


Some organizations such as doctors’ offices schedule by appointment. The demand is then known and the service supply can be provided to meet that customer demand. Airlines use reservations to accomplish much of the same objective. Of course the determination of feasible schedules involves forecasting as well. Complex service operations such as hospitals have compound scheduling problems encompassing the issues mentioned above in addition to the consideration that multiple service providers of various types are needed. For example, scheduling surgery involves scheduling surgeons, nurses, anesthesiologists, operating rooms, etc.

Scheduling for the situation when a product is produced in a series of workstations, or a flow shop, is called assembly line balancing. The product spends a period of time in each workstation; this time is referred to as the cycle time. The objective of assembly line balancing is to allocate all of the tasks needed to complete a product into workstations. Criteria for evaluation of assembly line balances include the cycle time required, the number of workstations required, and the amount of slack in the assembly line, which is the sum of the idle time in the workstations. To perform assembly line balancing for producing a particular product, one needs to establish the precedence relationships between the tasks and determine the task times for each of the tasks. In the situation where cycle time is specified, one can calculate the theoretical minimum number of workstations; it is the next highest integer obtained from the ratio S/C, where S is the sum of the task times and C is the cycle time. In assembly line balancing, one then assigns tasks to workstations such that precedence relationships are not violated and, in addition, the sum of the task times for each station does not exceed the cycle time. When one assigns tasks to workstations, one finds that there usually will be alternative assignments. Researchers have investigated the performance of a number of heuristics to use in these situations. Some of the more common are to assign tasks to workstations such that the longest tasks are assigned first without, of course, violating precedence relationships, and to assign tasks according to positional weights, where the positional weight of a task is the sum of its time and the times of all of the tasks that follow it. A graph showing the precedence relationships for an 8-task assembly line is shown in Fig. 17.1.1. The times for the various tasks are:

Task      Time
1           2
2           4
3           3
4           4
5           5
6           4
7           6
8           4
Total      32

Thus, if the cycle time is 7, the theoretical minimum number of workstations is 32/7 = 4.57, which rounds up to 5. If we allocate tasks to stations such that the longer tasks are assigned first, we have the following balance:

Station    Tasks    Time    Slack
1          1, 2      6        1
2          5         5        2
3          3, 6      7        0
4          4         4        3
5          7         6        1
6          8         4        3
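A short Python sketch of the longest-task-first heuristic is given below. The precedence dictionary is a hypothetical stand-in, since Fig. 17.1.1 conveys the precedence relationships only graphically; with a different precedence structure the heuristic may produce a different assignment than the balance tabulated above.

def balance_line(times, preds, cycle_time):
    # times: {task: task time}; preds: {task: set of immediate predecessors}
    assigned, stations = set(), []
    while len(assigned) < len(times):
        station, load = [], 0
        while True:
            ready = [t for t in times
                     if t not in assigned
                     and preds.get(t, set()) <= assigned
                     and load + times[t] <= cycle_time]
            if not ready:
                break
            pick = max(ready, key=lambda t: times[t])   # longest eligible task first
            station.append(pick)
            load += times[pick]
            assigned.add(pick)
        if not station:
            raise ValueError("a task exceeds the cycle time or precedences are cyclic")
        stations.append((station, cycle_time - load))   # (tasks, slack)
    return stations

times = {1: 2, 2: 4, 3: 3, 4: 4, 5: 5, 6: 4, 7: 6, 8: 4}
preds = {2: {1}, 3: {1}, 4: {2}, 5: {2, 3}, 6: {3}, 7: {4, 5}, 8: {6, 7}}   # hypothetical
print(balance_line(times, preds, cycle_time=7))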

In practice the problem is usually more complex because task times are stochastic variables and actual results may differ from the expected. Additionally, some tasks may require special training, which may change the way in which tasks are assigned to workstations.

Fig. 17.1.1 Network showing precedence relationships for an 8-task assembly line. Code numbers within nodes signify events. Connecting lines with directional arrows indicate operations that are dependent on prerequisite operations.


Many other situations occur that make the procedures described above a starting point in the assembly line balancing decision.

When a firm produces a number of products using similar equipment and locations, the scheduling problem becomes more complex because the production manager needs to make decisions on the size of the lot produced and the sequencing of the lots. In some cases the economic lot size can be determined by using methods discussed in the section on inventory management; methods are also available for multiple products. Sequencing becomes an issue of satisfying the needs of the customers, whether they be internal or external, and minimizing the cost of holding inventory. A common tool used for evaluating various types of schedules is called the Gantt chart. This is a simple chart that provides a visual record of the order of processing tasks over time (Fig. 17.1.2). Here, M1 and M2 represent two machines in a job shop. The numbers within the blocks show the processing order, or schedule, to complete three jobs, each of which requires processing time on each of the two machines. Note that any job can be done first on machine 1 or on machine 2, but not on both simultaneously.

Fig. 17.1.2 Sample Gantt chart (time on the horizontal axis; the rows for machines M1 and M2 show the order in which the three jobs are processed).

In job shop scheduling the object is to determine the sequence in which machines will process jobs. The evaluation criteria are varied. Some common criteria include makespan, which is the total time to process all jobs; mean flow time, which is the average time a job is in the shop; and average lateness per job, which is the average time by which a job exceeds the promised time.

If arrivals are static and there is one machine to process jobs, an optimal schedule in terms of minimizing mean flow time is to schedule jobs by processing time, with the shortest jobs being processed first. Note that makespan for this situation is the same for all processing orders. When there are two machines and n jobs to process on these machines and the order is the same for each job, a method called Johnson’s rule can be used to minimize makespan. A variant, Jackson’s rule, can be used if jobs can be performed on machine 1 only, machine 2 only, first machine 1 then machine 2, or first machine 2 then machine 1. This rule minimizes the makespan of the sequence. With three machines and n jobs, Johnson’s rule can be used by employing a trick in the solution procedure. If there are m machines and n jobs, Campbell’s heuristic can be used to minimize makespan. This is not an optimal procedure, since there is not an efficient method available to find an exact solution.

If, on the other hand, jobs arrive intermittently throughout a time period, which is the usual case in a job shop, one must rely on schedules based on priority dispatch rules. In these cases, jobs with the smallest priority value are scheduled first. One common rule is the shortest processing time rule. This rule can be dominant in terms of minimizing lateness, makespan, and flow time. First come, first served is another common priority dispatch rule. Some others include random priority, least work remaining, total work, fewest operations remaining, due date, slack, and critical ratio, which is the time remaining before delivery divided by the remaining processing time. One can evaluate the effectiveness of all of these rules for a particular situation by constructing Gantt charts of each of the resulting schedules and comparing these on the basis of makespan, mean flow time, and average lateness. Implementation involves choosing the rule that best fits the needs of the organization. Subsequently, one should monitor the results over time to see whether, in fact, the chosen rule is providing the best solution in terms of the solution criteria.
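A compact Python sketch of Johnson's rule for the two-machine case follows; the job data are invented for the illustration.

def johnsons_rule(jobs):
    # jobs: {job: (time on machine 1, time on machine 2)}; every job runs M1 then M2
    front, back, remaining = [], [], dict(jobs)
    while remaining:
        job = min(remaining, key=lambda j: min(remaining[j]))
        t1, t2 = remaining.pop(job)
        if t1 <= t2:
            front.append(job)        # shortest time is on machine 1: schedule early
        else:
            back.insert(0, job)      # shortest time is on machine 2: schedule late
    return front + back

def makespan(sequence, jobs):
    # completion time of the last job on machine 2
    end1 = end2 = 0
    for j in sequence:
        t1, t2 = jobs[j]
        end1 += t1
        end2 = max(end2, end1) + t2
    return end2

jobs = {"A": (3, 6), "B": (5, 2), "C": (1, 2), "D": (7, 5), "E": (6, 6)}   # illustrative times
seq = johnsons_rule(jobs)        # ['C', 'A', 'E', 'D', 'B']
print(seq, makespan(seq, jobs))  # makespan 24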

Scheduling projects is somewhat different from scheduling for assembly lines, for single or multiple products, or for job shops. Projects are different in that the firm is producing a single unit (or a small number) of the product. Some examples include implementing an MRP or MRP II system in an organization, building a skyscraper, or doing a consulting project for an organization. For many projects the Gantt chart is a useful tool in that it lists tasks to be accomplished, the time it should take to accomplish them, and the sequence and procedure of the tasks. Researchers have made improvements to this method with the use of PERT and CPM charts to help schedule and better manage projects. PERT is an acronym for program evaluation and review technique, and CPM is an acronym for critical path method. These methods were initially different in that PERT used probabilistic times for tasks and CPM used deterministic times, but after the more than 50 years in which these methods have been applied there is little to distinguish the two.

PERT and CPM rely on a network diagram, which is a graph containing a number of nodes and arrows. Usually the arrows designate the tasks to be performed and the nodes represent the beginning and end of these tasks (see Fig. 17.1.1). The value of the network diagram is that it clearly displays all of the tasks that must be completed to complete the project, the precedence relationships between these tasks, and, in some graphs, the time to complete each task. Of special interest among the various paths from the beginning to the ending node is the critical path, the path that is the longest in terms of the time it takes to go from the beginning to the ending node. It is of interest because any delay in accomplishing tasks on the critical path will result in a delay of the completion of the project. To shorten the length of the project, tasks on the critical path will have to be shortened. One advantage of PERT and CPM is that the graphical display clearly indicates tasks and precedences. Once the critical path has been determined, managers can closely monitor tasks on the critical path and, further, monitor tasks not on the critical path that have slack, which is the difference between the time it takes to complete a task and the time allowable to complete the task. Another advantage of the use of these techniques is that they help the manager to organize all information and data required to complete the project.
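A minimal critical-path calculation is sketched below in Python. For simplicity it uses an activity-on-node representation (activities with predecessor lists) rather than the activity-on-arrow diagram described above, and the project data are hypothetical.

def critical_path(durations, preds):
    # durations: {activity: time}; preds: {activity: list of immediate predecessors}
    earliest_finish = {}

    def ef(a):
        if a not in earliest_finish:
            start = max((ef(p) for p in preds.get(a, [])), default=0)
            earliest_finish[a] = start + durations[a]
        return earliest_finish[a]

    project_length = max(ef(a) for a in durations)

    # backward pass: latest finish times; slack = latest finish - earliest finish
    latest_finish = {a: project_length for a in durations}
    for a in sorted(durations, key=lambda x: earliest_finish[x], reverse=True):
        for p in preds.get(a, []):
            latest_finish[p] = min(latest_finish[p], latest_finish[a] - durations[a])
    critical = [a for a in durations if latest_finish[a] == earliest_finish[a]]
    return project_length, critical

durations = {"design": 4, "procure": 6, "fabricate": 5, "assemble": 3, "test": 2}
preds = {"procure": ["design"], "fabricate": ["design"],
         "assemble": ["procure", "fabricate"], "test": ["assemble"]}
print(critical_path(durations, preds))   # (15, ['design', 'procure', 'assemble', 'test'])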

AGGREGATE PLANNING

Aggregate planning is the task of determining how much of a product to produce, what workforce level to use, and what inventory to carry for a period of time. One use of the aggregate plan is as an input into the master production schedule. It depends, of course, on the forecast for the product for each of the periods for which the plan is to be made and the amount of inventory on hand, along with the costs of the various components. Factors that can be used to match production to demand include the ability to hire and lay off employees; the ability to vary the number of shifts, the length of shifts, and the amount of overtime; the ability to subcontract; and the ability to vary the amount of inventory held or backordered. Aggregate plans are usually evaluated on the basis of objective factors such as cost and meeting demand on time, as well as subjective factors such as lost sales potential. Costs to be considered in most approaches include the basic cost of producing the product (fixed and variable costs); costs associated with hiring, layoffs, and overtime; costs of inventory (holding and backorder costs); and the cost of subcontracting.

There are a number of approaches used to develop aggregate plans in industry, and these range from simple trial-and-error evaluations of plans on a spreadsheet to sophisticated mathematical programming or artificial intelligence models that develop plans that are optimal in terms of particular criteria. The simplest planning method, trial and error using a spreadsheet, consists of trying various plans and seeing what they yield in terms of meeting production schedules and, in addition, what costs are incurred in the various categories mentioned above.
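The spreadsheet-style trial-and-error evaluation can be mimicked in a few lines of Python; the cost figures and the two trial plans below are purely illustrative assumptions.

def plan_cost(demand, production, workforce,
              unit_cost, hire_cost, layoff_cost, hold_cost, shortage_cost):
    # demand, production, workforce: one entry per period of the planning horizon
    total, inventory, prev_workers = 0.0, 0, workforce[0]
    for d, p, w in zip(demand, production, workforce):
        total += unit_cost * p                           # basic production cost
        total += hire_cost * max(0, w - prev_workers)    # hiring
        total += layoff_cost * max(0, prev_workers - w)  # layoffs
        inventory += p - d
        total += hold_cost * max(0, inventory)           # carrying cost on ending inventory
        total += shortage_cost * max(0, -inventory)      # backorder penalty
        prev_workers = w
    return total

demand = [800, 1000, 1200, 900]
costs = dict(unit_cost=20, hire_cost=300, layoff_cost=400, hold_cost=2, shortage_cost=10)
chase = plan_cost(demand, [800, 1000, 1200, 900], [8, 10, 12, 9], **costs)
level = plan_cost(demand, [975, 975, 975, 975], [10, 10, 10, 10], **costs)
print(chase, level)   # compare the total costs of the two trial plans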


One of the important reasons for using this type of planning method is that a number of subjective considerations can be incorporated into the solution. These include constraints such as, say, maintaining a stable workforce, eliminating shortages or backorders, eliminating subcontracting, etc. Some of these may be hard to model when one is using sophisticated modeling methods but are relatively easy to incorporate into a trial-and-error method.

Sophisticated methods of aggregate planning include mathematical programming, of which linear programming is a subset (covered later in this section), and expert systems approaches to the problem. Mathematical programming methods involve deciding upon what criterion to optimize and what constraints are to be placed upon the system. The objective function may be one where the costs of producing the product (which include costs of hiring, layoffs, subcontracting, inventory holding and shortage costs, etc.) are minimized. Constraints are then written that ensure that, say, the amount of inventory available for a particular period is such that demand is satisfied in that period. Note that the inventory can be obtained from inventory on hand, from production in a variety of ways, and from subcontracting. Linear constraints such as these become part of the mathematical model. The mathematical program built in this way is then optimized using a computer program, and the outputs of this program are the optimal plan and the costs of this plan. One of the advantages of the mathematical programming approach is that cost is minimized. A disadvantage is that some constraints are not easily modeled. For example, subcontractors may offer price breaks for certain quantities of products or inventory produced. In practice, most firms use the trial-and-error approach.

QUALITY MANAGEMENT

In the past, much of the emphasis on quality management was placed on inspection. An organization produced a product and inspected samples of that product, usually at the end of production. Technology consisted of implementing inspection sampling schemes. In the early 1980s, many organizations began to modify their methods to adhere to the teachings of proponents such as W. Edwards Deming. Deming emphasized strategic changes in the management of quality. He argued that quality was not a cost to be minimized but rather a constraint that had to be met in order for a firm to remain in business. Some of his more revolutionary contributions were that rework should be eliminated, that high quality is associated with high efficiency, and that an organization should always try to get better. Deming labeled this continuous improvement. Deming had a list of 14 points that he called the basis of transformation of American industry. These are (Deming, 1993, p. 23):
1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and to stay in business, and to provide jobs.
2. Adopt the new philosophy. We are in a new economic age. Western management must awaken to the challenge, must learn its responsibilities, and take on leadership for change.
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.
4. End the practice of awarding business on the basis of price tag. Instead, minimize total cost. Move toward a single supplier for any one item, on a long-term relationship of loyalty and trust.
5. Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.
6. Institute training on the job.
7. Institute leadership. The aim of supervision should be to help people and machines and gadgets to do a better job. Supervision of management is in need of overhaul, as is supervision of production workers.


8. Drive out fear, so that everyone may work effectively for the company.
9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and in use that may be encountered with the product or service.
10. Eliminate slogans, exhortations, and targets for the workforce asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the workforce.
11. a. Eliminate work standards (quotas) on the factory floor. Substitute leadership. b. Eliminate management by objective. Eliminate management by numbers and numerical goals. Substitute leadership.
12. a. Remove barriers that rob the hourly worker of the right to pride of workmanship. The responsibility of supervisors must be changed from sheer numbers to quality. b. Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective.
13. Institute a vigorous program of education and self-improvement.
14. Put everybody in the company to work to accomplish the transformation. The transformation is everybody’s job.

One of the essential requirements of the Deming idea is that if a problem in the production of a product occurs, it should be fixed before any more of the product is produced. This concept, of course, can be applied to any organization, as all organizations produce either a product or a service. In service organizations, if the service that is provided is poor, one risks losing the customer forever. Moreover, some marketing researchers claim that a dissatisfied customer tells several other customers about the bad experience, and this may magnify the cost of a poor service encounter.

STATISTICAL PROCESS CONTROL

The primary set of tools in managing the production of a good or service so that quality is optimized is called statistical process control, or SPC. SPC has as its purpose the identification of the quality of a process, the identification of the factors that influence the quality of a process, the elimination of factors that cause excess variability in the process, the monitoring of a process so that problems are quickly detected and eliminated, and the continuous improvement of a process. Most of the tools in SPC are graphical; these have the added advantage of ease of use, understanding, and communication.

Perhaps the most important of these tools is the control chart, which is a graph of a quality measurement over time. This quality measurement is calculated from a sample taken from the process, because sampling is more efficient than 100 percent inspection in most cases when one considers cost. The control chart is used to determine the current performance of a process, to determine the capability of a process, and to monitor a process. The concept behind the control chart is that it is a tool for separating the variation of the process quality into two parts. Deming calls these two parts special causes of variability and common causes of variability. Other common terms are assignable causes (special) and chance or natural causes (common). Special-cause variability is due to problems with machines, operators, or raw materials and is usually large compared to common-cause variability, which is the variability in the process due to all of the other factors of production. When a process has had all of the special causes of variability removed, it is said to be stable or in a state of statistical control. Further reduction in the remaining common-cause variability is called process improvement but should not be attempted until the process is stable.

Other common tools for SPC include graphs such as histograms, scatterplots, stem and leaf diagrams, Pareto charts, and boxplots. Most textbooks in statistical quality control give in-depth coverage of these topics.


Section 17.3, Engineering Statistics and Quality Control, covers some of the tools mentioned here, as well as others, in more detail.

There are several steps in SPC. The first step is to define quality. In the past, product quality was measured by specifications determined by engineers. A more contemporary view of the measurement of quality is to use customer input into the design of the product or service and subsequently to use these specifications of the dimensions of quality as targets.

The second step in SPC is to measure the current quality of a process; this step is called process performance evaluation. One accurate method of measuring quality is to take random samples from the process over time. In SPC, though, one is interested in eliminating special causes of variability after one does a process performance evaluation, so the most common form of sampling is by rational subgroups. These are samples chosen so that any differences due to machines, operators, or raw materials will be apparent in the data. While not as accurate in terms of providing an estimate of quality as a random sample, rational subgrouping provides a very accurate measure of quality and, more important, allows an analyst to have some insight into what special causes of variability are present. For example, a process performance evaluation might show that quality depends upon the supplier of raw material, as in a case where the product quality obtained from shifts in which a particular supplier’s inputs are used is much worse than the quality when other suppliers’ raw materials are used. The value of rational subgrouping is that these types of differences are apparent if the rational subgroups are cleverly chosen.

The process of removing special causes of variability is called process capability analysis. In the example above, the analyst might work with the particular supplier to find and eliminate the causes of poor-quality input. After a process has undergone a process capability analysis and special causes of variability are removed, the process should be monitored. Monitoring involves taking samples from the process at particular times and calculating a statistic that measures quality. Some common statistics are the sample mean, the sample range, the sample proportion defective, and the sample number of defects per unit. These statistics are plotted on a control chart; for these statistics the control charts are, respectively, the X-bar chart, the R chart, the p chart, and the c chart. A control chart is used to visually determine whether a special cause of poor quality has occurred; if it has, ideally the process is shut down and the special cause removed.

The last step in SPC is process improvement, which ideally happens continuously; thus it has the name continuous improvement. In this step, the analyst tries to continuously reduce the common-cause variability through observational studies with rational subgrouping or through experimental designs.

In practice there have been many general attempts to push the idea of quality in the workplace. Some advocates such as Deming, Juran, and Crosby came up with platforms for convincing managers that their methods would ensure successful quality efforts within an organization. Other general schemes to manage quality have appeared over the past several decades and include total quality management, ISO certification, the Malcolm Baldrige National Quality Award, and reengineering.
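As an illustration of the monitoring step described above, the following Python sketch computes X-bar and R chart limits from rational subgroups; the data are invented, and the default factors (A2 = 0.577, D3 = 0, D4 = 2.114) are the standard published values for subgroups of size 5.

def xbar_r_limits(subgroups, A2=0.577, D3=0.0, D4=2.114):
    # subgroups: list of rational subgroups (each a list of measurements)
    means = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    grand_mean = sum(means) / len(means)
    r_bar = sum(ranges) / len(ranges)
    return {"xbar": (grand_mean - A2 * r_bar, grand_mean, grand_mean + A2 * r_bar),
            "R": (D3 * r_bar, r_bar, D4 * r_bar)}   # (LCL, center line, UCL)

subgroups = [[10.1, 9.9, 10.0, 10.2, 9.8],
             [10.0, 10.3, 9.7, 10.1, 10.0],
             [9.9, 10.0, 10.1, 9.8, 10.2]]          # illustrative samples of five parts
print(xbar_r_limits(subgroups))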
Perhaps the most successful is the current platform called six sigma which originated at Motorola under the leadership of Bob Galvin and at GE under Jack Welch. Six sigma is a method that fully uses the methods discussed above in SPC. It gets its name from the fact that six sigma means there are not more than 3.4 defects per million, an admirable goal. Some of the major ideas behind six sigma include teaching everyone involved the standard statistical tools of SPC such as those discussed above, teaching a structured process such as SPC as a general problem-solving tool, making decisions analytically by using data rather than intuition or experience, and striving for a continuous need to reduce variation in a process. One of the advantages of six sigma is that its use is suggested in all areas of an organization, not just manufacturing.

Six sigma has generated admirable results because it stands on a strong theoretical foundation and is plausible. Its methods are not so complex as to be beyond the reach of anyone in the organization. And, empirically, it has had a major financial impact on many firms.

LINEAR PROGRAMMING

At the heart of management’s responsibility is the best or optimum use of limited resources including money, personnel, materials, facilities, and time. Linear programming, a mathematical technique, permits determination of the best use which can be made of available resources. It provides a systematic and efficient procedure which can be used as a guide in decision making.

As an example, imagine the simple problem of a small machine shop that manufactures two models, standard and deluxe. Each standard model requires 2 h of grinding and 4 h of polishing. Each deluxe model requires 5 h of grinding and 2 h of polishing. The manufacturer has three grinders and two polishers; therefore, in a 40-h week there are 120 h of grinding capacity and 80 h of polishing capacity. There is a profit of $3 on each standard model and $4 on each deluxe model and a ready market for both models. The management must decide on: (1) the allocation of the available production capacity to standard and deluxe models and (2) the number of units of each model in order to maximize profit.

To solve this linear programming problem, the symbol X is assigned to the number of standard models and Y to the number of deluxe models. The profit from making X standard models and Y deluxe models is 3X + 4Y dollars. The term profit refers to the profit contribution, also referred to as contribution margin or marginal income. The profit contribution per unit is the selling price per unit less the unit variable cost. Total contribution is the per-unit contribution multiplied by the number of units. The restrictions on machine capacity are expressed in this manner: To manufacture one standard unit requires 2 h of grinding time, so that making X standard models uses 2X h. Similarly, the production of Y deluxe models uses 5Y h of grinding time. With 120 h of grinding time available, the grinding capacity is written as follows: 2X + 5Y ≤ 120 h of grinding capacity per week. The limitation on polishing capacity is expressed as follows: 4X + 2Y ≤ 80 h per week. In summary, the basic information is:

                      Grinding time    Polishing time    Profit contribution
Standard model             2 h              4 h                  $3
Deluxe model               5 h              2 h                   4
Plant capacity           120 h             80 h

Two basic linear programming techniques, the graphic method and the simplex method, are described and illustrated using the above capacity-allocation-profit-contribution maximization data.

Graphic Method

Operations    Hours available    Hours required per model      Maximum number of models
                                 Standard       Deluxe         Standard         Deluxe
Grinding           120               2             5           120/2 = 60       120/5 = 24
Polishing           80               4             2            80/4 = 20        80/2 = 40


The lowest number in each of the two columns at the extreme right measures the impact of the hours limitations. The company can produce 20 standard models with a profit contribution of $60 (20 × $3) or 24 deluxe models with a profit contribution of $96 (24 × $4). Is there a better solution?

To determine production levels that maximize the profit contribution $3X + $4Y when

2X + 5Y ≤ 120 h   (grinding constraint)
4X + 2Y ≤ 80 h    (polishing constraint)

a graph (Fig. 17.1.3) is drawn with the constraints shown. The two-dimensional graphic technique is limited to problems having only two variables (in this example, standard and deluxe models). However, more than two constraints can be considered, although this case uses only two, grinding and polishing.


Fig. 17.1.3 Graph depicting feasible solution.

The constraints define the solution space when they are sketched on the graph. The solution space, representing the area of feasible solutions, is bounded by the corner points a, b, c, and d on the graph. Any combination of standard and deluxe units that falls within the solution space is a feasible solution. However, the best feasible solution, according to mathematical laws, is in this case found at one of the corner points. Consequently, all corner-point variables must be tried to find the combination which maximizes the profit contribution $3X + $4Y. Trying values at each of the corner points:

a = (X = 0, Y = 0):    $3(0) + $4(0) = $0 profit
b = (X = 0, Y = 24):   $3(0) + $4(24) = $96 profit
c = (X = 10, Y = 20):  $3(10) + $4(20) = $110 profit
d = (X = 20, Y = 0):   $3(20) + $4(0) = $60 profit

Therefore, in order to maximize profit the plant should schedule 10 standard models and 20 deluxe models.

Simplex Method

The simplex method is considered one of the basic techniques from which many linear programming techniques are directly or indirectly derived. The method uses an iterative, stepwise process which approaches an optimum solution in order to reach an objective function of maximization (for profit) or minimization (for cost). The pertinent data are recorded in a tabular form known as the simplex tableau. The components of the tableau are as follows (see Table 17.1.1):

The objective row of the matrix consists of the coefficients of the objective function, which is the profit contribution per unit of each of the products. The variable row has the names of the variables of the problem, including slack variables. Slack variables S1 and S2 are introduced in order to transform the set of inequalities into a set of equations. The use of slack variables involves simply the addition of an arbitrary variable to one side of the inequality, transforming it into an equality. This arbitrary variable is called a slack variable, since it takes up the slack in the inequality. The simplex method requires the use of equations, in contrast to the inequalities used by the graphic method. The problem rows contain the coefficients of the equations which represent constraints upon the satisfaction of the objective function. Each constraint equation adds an additional problem row. The objective column receives different entries at each iteration, representing the profit per unit of the variables. In this first tableau (the only one illustrated due to space limitations) zeros are listed because they are the coefficients of the slack variables of the objective function. This column indicates that at the very beginning every Sn has a net worth of zero profit. The variable column receives different notations at each iteration by replacement. These notations are the variables used to find the profit contribution of the particular iteration. In this first matrix a situation of no (zero) production is considered. For this reason, zeros are marked in the objective column and the slacks are recorded in the variable column. As the iterations proceed, by replacements, appropriate values and notations will be entered in these two columns, objective and variable. The quantity column shows the constant values of the constraint equations.

Based on the data used in the graphic method and with a knowledge of the basic components of the simplex tableau, the first matrix can now be set up. Letting X and Y be respectively the number of items of the standard model and the deluxe model that are to be manufactured, the system of inequalities, or the set of constraint equations, is

2X + 5Y ≤ 120
4X + 2Y ≤ 80

in which both X and Y must be positive values or zero (X ≥ 0; Y ≥ 0) for this problem. The objective function is 3X + 4Y = P; these two steps were the same for the graphic method. The set of inequalities used by the graphic method must next be transformed into a set of equations by the use of slack variables. The inequalities rewritten as equalities are

2X + 5Y + S1 = 120
4X + 2Y + S2 = 80

and the objective function becomes

3X + 4Y + 0S1 + 0S2 = P   (to be maximized)

The first tableau with the first solution would then appear as shown in Table 17.1.1. The tableau carries also the first solution, which is shown in the index row. The index row carries values computed by the following steps:
1. Multiply the values of the quantity column and those columns to the right of the quantity column by the corresponding value, by rows, of the objective column.
2. Add the results of the products by column of the matrix.
3. Subtract the values in the objective row from the results in step 2. For this operation the objective row is assumed to have a zero value in the quantity column.
By convention the profit contribution entered in the cell lying in the quantity column and in the index row is zero, a condition valid only for the first tableau; in the subsequent matrices it will be a positive value.

Index row, steps 1 and 2:
120(0) + 80(0) = 0
2(0) + 4(0) = 0
5(0) + 2(0) = 0
1(0) + 0(0) = 0
0(0) + 1(0) = 0

Step 3:
0 - 0 = 0
0 - 3 = -3
0 - 4 = -4
0 - 0 = 0
0 - 0 = 0


Table 17.1.1   First Simplex Tableau and First Solution

Objective row:                      0       3       4       0       0
             Mix         Quantity           X       Y      S1      S2     (variable row)
    0        S1             120             2       5       1       0     (problem rows)
    0        S2              80             4       2       0       1
Index row:                    0            -3      -4       0       0

(The leftmost column is the objective column, the Mix column is the variable column, and the Quantity column carries the constants of the constraint equations.)

In this first tableau the slack variables were introduced into the product mix, or variable column, to find a feasible solution to the problem. It can be proven mathematically that beginning with slack variables assures a feasible solution. One possible solution might have S1 take a value of 120 and S2 a value of 80. This approach satisfies the constraint equations but is undesirable since the resulting profit is zero.

It is a rule of the simplex method that the optimum solution has not been reached if the index row carries any negative values at the completion of an iteration in a maximization problem. Consequently, this first tableau does not carry the optimum solution, since negative values appear in its index row. A second tableau or matrix must now be prepared, step by step, according to the rules of the simplex method.

Duality of Linear Programming Problems and the Problem of Shadow Prices

Every linear programming problem has associated with it another linear programming problem called its dual.

This duality relationship states that for every maximization (or minimization) problem in linear programming, there is a unique, similar problem of minimization (or maximization) involving the same data which describe the original problem. The possibility of solving any linear programming problem by starting from two different points of view offers considerable advantage. The two paired problems are defined as the dual problems because both are formed by the same set of data, although differently arranged in their mathematical presentation. Either can be considered to be the primal; consequently the other becomes its dual.

Shadow prices are the values assigned to one unit of capacity and represent economic values per unit of the scarce resources involved in the restrictions of a linear programming problem. To maximize or minimize the total value of the total output it is necessary to assign a quantity of unit values to each input. These quantities, as cost coefficients in the dual, take the name of “shadow prices,” “accounting prices,” “fictitious prices,” or “imputed prices” (values). They indicate the amount by which total profits would be increased if the producing division could increase its productive capacity by a unit. The shadow prices, expressed by monetary units (dollars) per unit of each element, represent the least cost of any extra unit of the element under consideration, in other words, a kind of marginal cost. The real use of shadow prices (or values) is for management’s evaluation of the manufacturing process.
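For completeness, the machine-shop problem above can also be solved numerically. The sketch below assumes the SciPy library is available and uses its linprog routine; linprog minimizes, so the profit coefficients are negated.

from scipy.optimize import linprog

# Maximize 3X + 4Y subject to 2X + 5Y <= 120 (grinding) and 4X + 2Y <= 80 (polishing)
result = linprog(c=[-3, -4],                     # negated profit contributions
                 A_ub=[[2, 5], [4, 2]],
                 b_ub=[120, 80],
                 bounds=[(0, None), (0, None)])
print(result.x)      # approximately [10, 20]: 10 standard and 20 deluxe models
print(-result.fun)   # approximately 110: the maximum profit contribution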

QUEUING THEORY

Queuing theory, or waiting-line theory, problems involve the matching of servers, who provide services which take random amounts of time, with randomly arriving customers. Typical questions addressed by queuing theory studies are: How long does the average customer wait before being waited on, and how many servers are needed to assure that only a given fraction of customers waits longer than a given amount of time?

In the typical problem applicable to queuing theory solution, people (or customers or parts) arrive at a server (or machine) and wait in line (in a queue) until service is rendered. There may be one or more servers. On completion of the service, the person leaves the system. The rate at which people arrive to be serviced is often considered to be a random variable with a Poisson distribution having a parameter λ. The average rate at which services can be provided is also generally described by a Poisson distribution with a parameter μ. The symbol k is often used to indicate the number of servers.
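The standard steady-state results for the single-server case (k = 1) are easy to compute; the Python sketch below and its sample rates are offered only as an illustration.

def mm1_metrics(lam, mu):
    # lam = arrival rate, mu = service rate for one server; requires lam < mu
    if lam >= mu:
        raise ValueError("the queue is unstable unless lam < mu")
    rho = lam / mu                  # server utilization
    L = lam / (mu - lam)            # average number of customers in the system
    Lq = rho * L                    # average number waiting in line
    W = 1.0 / (mu - lam)            # average time in the system
    Wq = rho * W                    # average wait before service begins
    return {"utilization": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Example: 8 arrivals per hour served by one server at 10 per hour (illustrative rates)
print(mm1_metrics(8.0, 10.0))       # average wait before service is 0.4 h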

MONTE CARLO

Monte Carlo simulation can be a helpful method in gaining insight into problems where the system under study is too complex to describe or where the model which has been developed to represent the system does not lend itself to an analytical solution by other mathematical techniques. Briefly, the method involves building a mathematical model of the system to be studied which calculates results based on the input variables, or parameters. In general the variables are of two kinds: decision parameters, which represent variables the analyst can choose, and stochastic, or random, variables, which may take on a range of values and which the analyst cannot control.

The random variables are selected from specially prepared probability tables which give the probability that the variable will have a particular value. All the random variables must be independent; that is, the probability distribution of each variable must be independent of the values chosen for the others. If there is any correlation between the random variables, that correlation will have to be built into the system model.

For example, in a model of a business situation where market share is to be calculated, a decision-type variable representing selling price can be selected by the analyst. A variable for the price of the competitive product can be randomly selected. Another random variable for the rate of change of market share can also be randomly selected. The purpose of the model is to use these variables to calculate a market share suitable to those market conditions. The algebra of the model will take the effects of all the variables into account. Since the rate-of-change variable can take many values which cannot be accurately predicted, as can the competitive price variable, many runs will be made with different randomly selected values for the random variables. Consequently a range of probable answers will be obtained. This is usually in the form of a histogram, which is a graph, or table, showing values of the output and the probability that those values will occur. The results, when translated into words, are expressed in the typical Monte Carlo form: if such a price is chosen, the following probability distribution of market shares is to be expected.
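A Monte Carlo sketch of the market-share example follows in Python. The distributions chosen for the competitor's price and the rate of change, and the simple price-ratio model of share, are assumptions made purely for illustration.

import random

def simulate_market_share(our_price, runs=10000):
    shares = []
    for _ in range(runs):
        competitor_price = random.gauss(100.0, 8.0)      # random variable
        rate_of_change = random.uniform(-0.05, 0.05)     # random variable
        base_share = 0.5 * competitor_price / our_price  # assumed price-ratio model
        shares.append(min(1.0, max(0.0, base_share * (1.0 + rate_of_change))))
    return shares

# Histogram of outcomes for one chosen (decision) selling price
shares = simulate_market_share(our_price=95.0)
buckets = [0] * 10
for s in shares:
    buckets[min(9, int(s * 10))] += 1
print([round(b / len(shares), 3) for b in buckets])      # probability of each 10 percent band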

17.2 COST ACCOUNTING
by Scott Jones

REFERENCES: Horngren and Foster, “Cost Accounting—A Managerial Emphasis,” Prentice-Hall. Anthony and Govindarajan, “Management Control Systems,” Irwin. Cooper and Kaplan, “The Design of Cost Management Systems—Text, Cases, and Readings,” Prentice-Hall.

ROLE AND PURPOSE OF COST ACCOUNTING

Cost measurements and reporting procedures are integral components of management information systems, providing financial measurements of economic value and reports useful to many and varied objectives. In a functional organization, where lines of authority are drawn between engineering, production, marketing, finance, and so on, cost accounting information is primarily used by managers for control in guiding departmental units toward the attainment of specific organizational goals. In team-oriented organizations, authority is distributed to multifunctional teams empowered as a group to make decisions. The focus of cost accounting in this organizational structure is not so much on control, but on supporting decisions through the collection of relevant information for decision making. Cost accounting also provides insight for the attainment of competitive advantage by providing an information set and analytical framework useful for the analysis of product or process design, or service delivery alternatives.

The basic purpose of cost measurements and reporting procedures can be organized into a few fundamental areas. These are: (1) identifying and measuring the economic value to be placed on goods and services for reporting periodic results to external information users (creditors, stockholders, and regulators); (2) providing control frameworks for the implementation of specific organizational objectives; and (3) supporting operational and strategic decision making aimed at achieving and sustaining competitive advantage.

MEASURING AND REPORTING COSTS TO STOCKHOLDERS

Accounting in general and cost accounting in particular are most visible to the general public in the role of external reporting. In this role, cost accounting is geared toward measuring and reporting periodic results, typically annual, to users outside of the company such as stockholders, creditors, and regulators. The annual report presents to these users management forecasts for the coming period and results of past years’ operations as reflected in general-purpose financial statements. (See also Sec. 17.1, “Operations Management.”) Performance is usually captured through the presentation of three reports: the income statement, the balance sheet, and the statement of cash flows (see Fig. 17.2.1). These statements are audited by independent certified public accountants, who attest that the results reported present a fair picture of the financial position of the company and that prescribed rules have been followed in preparing the reports. The accounting principles and procedures that guide the preparation of those reports are governed by generally accepted accounting principles (GAAP). The underlying principles that guide GAAP financial statements encourage comparability among companies and across time. Therefore, these rules are usually too constraining for reports destined for internal use.

External reports such as the balance sheet and income statement are prepared on the accrual basis. Under this basis, revenues are recognized (reported to stockholders) when earned; likewise, costs are expensed against (matched with) revenue when incurred. This is to be contrasted with the cash basis, which recognizes revenues and expenses when cash is collected or paid. For example, if a drill press is purchased for use in the factory, a cash outflow occurs when the item is paid for.

The cash basis would recognize the purchase price of the drill press as an expense when the payment is made. On the other hand, the accrual basis of accounting would capitalize the price paid, and this amount would be the cost (or basis) of the asset called drill press. In accrual accounting, an asset is something having future economic benefit, and therefore the cost of this asset must be distributed among the periods of time when it is used to generate revenue. The cost of the drill press would be expensed periodically by deducting a small amount of that cost from revenue as the drill press is used over its economic life, which may be several years. This periodic charge is called depreciation. To capture the effects that revenue-generating activities have on cash, GAAP financial statements also include the statement of cash flows. The statement of cash flows is not prepared on an accrual basis; rather, it reflects the amount of cash flowing into a company during a period, as well as the cash outflow. The first section of that statement, “Cash flows from operating activities,” is essentially an income statement prepared on the cash basis.

Another application of cost accounting measurements for external users involves the preparation of reports such as income tax returns for governmental agencies. Federal, state, and local tax authorities prescribe specific accounting procedures to be applied in determining taxable income. These rules are conceptually similar to general-purpose financial reporting but differ mainly in technical aspects of the computations, which are modified to support whatever public finance goals may exist for a particular period. Whereas GAAP financial statements allow for the analysis of credit and investment opportunities, Internal Revenue Service regulations are designed to raise revenue, stimulate the economy, or both. Regulations may be primarily aimed at reducing the federal deficit and hence assign rather long “useful lives” to depreciable assets; at other times in history useful lives were shortened to stimulate investment and economic growth. For GAAP, management usually estimates the useful life of an asset for purposes of depreciation. For IRS purposes, the amount of depreciation is based on the class life of the asset and the depreciation system elected by the taxpayer. Class life is based on whether the asset is specialized or general purpose, and on the industry in which the asset is employed.
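As a small numerical illustration of the accrual treatment, the straight-line method (one common way of computing the periodic charge) can be sketched as follows; the drill-press figures are hypothetical.

def straight_line_depreciation(cost, salvage_value, useful_life_years):
    # equal annual charge over the asset's economic life
    return (cost - salvage_value) / useful_life_years

# Hypothetical drill press: $50,000 cost, $5,000 salvage value, 9-year useful life
print(straight_line_depreciation(50000, 5000, 9))   # $5,000 expensed against revenue each year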

CLASSIFICATIONS OF COSTS

The purposes of cost accounting require classifications of costs so that they are recognized (1) by the nature of the item (a natural classification), (2) in their relation to the product, (3) with respect to the accounting period to which they apply, (4) in their tendency to vary with volume or activity, (5) in their relation to departments, (6) for control and analysis, and (7) for planning and decision making.

Direct material and direct labor may be listed among the items which have a variable nature. Factory overhead, however, must be carefully examined with regard to items of a variable and a fixed nature. It is impossible to budget and control factory-overhead items successfully without regard to their tendency to be fixed or variable; the division is a necessary prerequisite to successful budgeting and intelligent cost planning and analysis.

In general, variable expenses show the following characteristics: (1) variability of total amount in direct proportion to volume, (2) comparatively constant cost per unit or product in the face of changing volume, (3) easy and reasonably accurate assignment to operating departments, and (4) incurrence controllable by the responsible department head. The characteristics of fixed expenses are (1) fixed amount within a relative output range, (2) decrease of fixed cost per unit with increased output, (3) assignment to departments often made by managerial


Balance Sheet (Illustrative)
Black Carbon, Inc., December 31, 20—

Assets
Current Assets:
  Cash ............................................................ $5,050,000
  Accounts Receivable (net) ....................................... 6,990,000
  Inventories:
    Raw Materials and Supplies ................... $1,000,000
    Work in Process ..............................  1,800,000
    Finished Goods ...............................  2,900,000      5,700,000
  Investments .................................................... 1,000,000
  Deferred Charges ...............................................   340,000
  Total Current Assets ........................................... $19,080,000
Property, Plant and Equipment:
  Land ........................................... $4,000,000
  Buildings and Equipment ........................ $75,500,000
  Less: Allowance for Depreciation ...............  47,300,000    28,200,000
  Total Fixed Assets ............................................. 32,200,000
Total Assets ..................................................... $51,280,000

Liabilities
Current Liabilities:
  Accounts Payable and Accruals .................................. $3,580,000
  Provision for Income Taxes:
    Federal ...................................... $2,250,000
    State ........................................     65,000      2,315,000
  Total Current Liabilities ...................................... $5,895,000
  Long-term Debt ................................................. 5,300,000
  Total Liabilities .............................................. $11,195,000
Stockholders’ Equity:
  Common Stock—no par value
    Authorized—2,000,000 shares
    Outstanding—1,190,000 shares ................................. $11,900,000
  Earnings retained in the business .............................. 28,185,000
  Total Stockholders’ Equity ..................................... $40,085,000
Total Liabilities and Stockholders’ Equity ....................... $51,280,000

Income Statement (Illustrative)
Black Carbon, Inc., for the year 20—

Net Sales ........................................................ $50,087,000
Cost of products:
  Material, Labor, and Overhead (excluding depreciation) ......... $32,150,000
  Depreciation ...................................................   5,420,000    37,570,000
Gross Profit ..................................................... $12,517,000
Less: Selling and Administrative Expenses ........................ 3,220,000
Profit from Operations ........................................... 9,297,000
Other Deductions ................................................. 305,000
                                                                   8,992,000
Other Income ..................................................... 219,000
Income before Federal and State Income Taxes ..................... 9,211,000
Less: Provision for Federal and State Income Taxes ............... 4,055,000
Total Net Income ................................................. 5,156,000
Dividends paid to shareholders ................................... 2,200,000
Income retained in the business .................................. $2,956,000

Statement of Cash Flows (Illustrative)
Black Carbon, Inc., December 31, 20—

Cash flows from operating activities
  Cash received from customers ................................... $49,525,000
  Cash paid to suppliers and employees ........................... (40,890,000)
  Cash paid for interest ......................................... (1,050,000)
  Cash paid for income taxes ..................................... (2,860,000)
  Net cash provided by operating activities ...................... $4,725,000
Cash flows from investing activities
  Purchase of equipment .......................................... (1,970,000)
  Net cash from investing activities ............................. (1,970,000)
Cash flows from financing activities
  Principal payment on loans ..................................... (2,780,000)
  Dividend payments .............................................. (2,200,000)
  Net cash used by financing activities .......................... (4,980,000)
Net increase (decrease) in cash .................................. (2,225,000)
Cash at beginning of year ........................................ 7,275,000
Cash at end of year .............................................. $5,050,000

Fig. 17.2.1 Examples of the balance sheet, income statement, and statement of cash flows based on the published annual report.


decisions or cost-allocation methods, and (4) control for incurrence resting with top management rather than departmental supervisors. Whether an expense is classified as fixed or variable may well be the result of managerial decisions. Some factory overhead items are semivariable in nature; i.e., they vary with production but not in direct proportion to the volume. For practical purposes, it is desirable to resolve each semivariable expense item into its variable and fixed components.

A factory is generally organized along departmental lines for production purposes. This factory departmentalization is the basis for the important classification and subsequent accumulation of costs by departments to achieve (1) cost control and (2) accurate costing. The departments of a company generally fall into two categories: (1) producing, or productive, departments, and (2) nonproducing, or service, departments. A producing department is one in which manual and machine operations are performed directly upon any part of the product manufactured. A service department is one that is not directly engaged in production but renders a particular type of service for the benefit of other departments. The expense incurred in the operation of service departments represents a part of the total factory overhead that must be absorbed in the cost of the product.

For product costing, the factory may be divided into departments, and departments may also be subdivided into cost centers. As a product passes through a cost center or department, it is charged with a share of the indirect expenses on the basis of a departmental factory-overhead rate. For cost-control purposes, budgets are established for departments and cost centers. Actual expenses are compared with budget allowances in order to determine the efficiency of a department and to measure the manager’s success in controlling expenses.

Factory overhead, which is charged to a product or a job on the basis of a predetermined overhead rate, is considered indirect with regard to the product or the job to which the expense is charged. Service-department expenses are prorated to other service departments and/or to the producing departments. The proration is accomplished by using some rational basis such as area occupied or number of workers. The prorated costs are termed indirect departmental charges. When all service-department expenses have been prorated to the producing departments, each producing department’s total factory overhead will consist of its own direct departmental expense and the indirect (or prorated, or apportioned) charges. This total cost is charged to the product or the job on the basis of the predetermined factory-overhead rate.

A company’s cost system provides the data required for establishing standard costs and for the preparation and operation of a budget. The budget program enlists all members of management in the task of creating a workable and acceptable plan of action, welds the plan into a homogeneous unit, communicates to the managerial levels differences between planned activity and actual performance, and points out unfavorable conditions which need corrective action. The budget not only will help promote coordination of people, clarification of policy, and crystallization of plans, but with successful use will create greater internal harmony and unanimity of purpose among managers and workers. The established standard-cost values for material, labor, and factory overhead form the foundation for the budget.
Since standard costs are an invaluable aid in the process of setting prices, it is essential to set these standard costs at realistic levels. The measurement of deviations from established standards or norms is accomplished through the use of variance accounts. Costs as a basis for planning are estimated costs which may be incurred if any one of several alternative courses of action is adopted. Different types of costs involve varying kinds of consideration in managerial planning and decision making.

METHODS OF ACCUMULATING COSTS IN RECORDS OF ACCOUNT

The balance sheet lists the components of inventory as raw materials, work in process, and finished goods. These accounts reflect the cost of unsold production at various stages of completion. The costs in work in process and finished goods are accumulated or tabulated in the record of accounts according to one of two methods:

1. The Job-Order Cost Method  When orders are placed in the factory for specific jobs or lots of product, which can be identified through all manufacturing processes, a job cost system is appropriate. This method has certain characteristics. A manufacturing order often corresponds to a customer’s order, though sometimes a manufacturing order may be for stock. The customer’s order may be obtained on the basis of a bid price computed from an estimated cost for the job. The goods in each order are kept physically separate from those of other jobs. The costs of a manufacturing order are entered on a job cost sheet which shows the total cost of the job upon completion of the order. This cost is compared with the estimated cost and with the price which the customer agreed to pay.

2. The Process Cost Method  When production proceeds in a continuous flow, when units of product are not separately identifiable, and when there are no specific jobs or lots of product, a process cost system is appropriate, for it has certain characteristics: work is ordered through the plant for a specific time period until the raw materials on hand have all been processed or until a specified quantity has been produced; goods are sold from the stock of finished goods on hand since a customer’s order is not separately processed in the factory; the cost-of-production sheet is a record of the costs incurred in operating the process—or a series of processes—for a period of time. It shows the quantity produced in pounds, tons, gallons, or other units, and the cost per unit is obtained by dividing the total costs of the period by the total units produced. Performance is indicated by comparing the quantity produced and the cost per unit of the current period with similar figures of other periods or with standard cost figures.

ELEMENTS OF COSTS

The main items of costs shown on the income statement are factory costs, which include direct materials, direct labor, and factory overhead; and selling and administrative expenses. A breakdown of costs is shown in Figure 17.2.2.

Materials  The cost of materials purchased is recorded from purchase invoices. When the materials are used in the factory, an assumption must be made as to cost flow, that is, whether to charge them to operations at average prices, at costs based on the first-in, first-out method of costing, or at costs based on the last-in, first-out method of costing. Each method will lead to a different cost figure, depending on how prices change. Each situation must be studied individually to determine which practice will give a maximum of accuracy in cost figures with a minimum of accounting and clerical effort. Once the choice has been made, records must be set up to charge materials to operations based on requisitions. Indirect material is necessary to the completion of the product, but its consumption with regard to the final product is either so small or so complex that it would be futile to treat it as a direct-material item.

Labor  Labor also consists of two categories: direct and indirect. Direct labor, also called productive labor, is expended immediately on the materials comprising the finished product. Indirect labor, in contrast to direct labor, cannot be traced specifically to the construction


Figure 17.2.2 Summary diagram of cost relationships.



or composition of the finished product. The term includes the labor of supervisors, shop clerks, general helpers, cleaners, and those employees engaged in maintenance work.

Factory Overhead  Indirect materials or factory supplies and indirect labor constitute an important segment of factory overhead. In addition, costs of fuel, power, small tools, depreciation, taxes on real estate, patent amortization, rent, inspection, supervision, social security taxes, health and accident insurance, workers’ compensation insurance, and many others fall into this large category. These expenses must be collected and allocated to all jobs or units produced. Many expenses are definitely applicable to a specific department and are easily assigned thereto. Other expenses relate to the entire plant and must be prorated to departments on some suitable basis. For instance, heat might be prorated to departments on the basis of volume of space occupied. The expenses of the service departments are prorated to the producing departments on some basis such as service rendered in the case of a maintenance department or per dollar of payroll processed in the case of a cost department.

The charging of factory overhead to jobs or products is accomplished by means of an overhead or burden rate. This rate is essentially a ratio computed to show the relationship of the total burden of a department to some other easily measurable total figure for the department. For example, the total burden cost of a department may be divided by its direct-labor cost to give a percentage-of-direct-labor rate. This percentage applied to the direct-labor cost of a job or a product gives the amount of overhead chargeable thereto. Other common types of burden rates are the labor-hour rate (departmental expenses ÷ total direct-labor hours) and the machine-hour rate (departmental expenses ÷ total machine hours available). Labor rates are most commonly used. When, however, machines perform the greater amount of the work, machine-hour rates give better results. It must be clearly understood that these rates are computed in advance of production, generally at the beginning of the year. They are used throughout the fiscal period unless seasonal fluctuations or unusual changes in expense amounts necessitate the creation of a new rate. The determination of the overhead rate is closely tied up with overhead budgets.

Departmental Classification  As mentioned above, the establishment of departmental lines is important not only for costing purposes but also for budgetary control purposes. Departmental lines are set up in order to (1) segregate basically different processes of production, (2) secure the smoothest possible flow of production, and (3) establish lines of responsibility for control over production and costs. When the costing methods are designed to fit in with the departmentalization of factory and office, costs can be accumulated within a department with production being on either the job-order or process cost method.
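The burden-rate arithmetic just described can be sketched in a few lines of code. This is an illustration only; the budgeted amounts, the job figures, and the choice of a direct-labor-cost base are hypothetical and are not taken from the handbook.

```python
# Illustrative computation of predetermined overhead (burden) rates and their
# application to a job. All figures are hypothetical.

def overhead_rate(budgeted_overhead, budgeted_base):
    """Predetermined rate = budgeted departmental overhead / budgeted base
    (direct-labor cost, direct-labor hours, or machine hours)."""
    return budgeted_overhead / budgeted_base

# Percentage-of-direct-labor-cost rate
rate_pct_of_labor = overhead_rate(budgeted_overhead=120_000, budgeted_base=80_000)

# Machine-hour rate
rate_per_machine_hour = overhead_rate(budgeted_overhead=120_000, budgeted_base=15_000)

# Applying the rates to one job
job_direct_labor_cost = 2_500      # dollars
job_machine_hours = 410

print(f"Rate as % of direct labor: {rate_pct_of_labor:.0%}")
print(f"Machine-hour rate: ${rate_per_machine_hour:.2f}/h")
print(f"Overhead charged to job (labor base): ${rate_pct_of_labor * job_direct_labor_cost:,.2f}")
print(f"Overhead charged to job (machine base): ${rate_per_machine_hour * job_machine_hours:,.2f}")
```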

ACTIVITY-BASED COSTING

The method of assigning overhead to products based on labor hours or machine hours is referred to as volume-based overhead absorption because the amount assigned will vary strictly with the volume of either labor or machine time consumed. In applications where production costs may not be strictly driven by volume of labor hours, activity-based or transaction costing is appropriate. This situation typically occurs when there are many options or alternatives available to the customer. Typically, these products are produced in low volumes and have a high degree of complexity. An example would be the option of a premium radio in an automobile. Though the actual purchase cost of that radio would not be overlooked in pricing the automobile, the indirect costs would be overlooked in a volume-based system because the indirect costs associated with the premium radio, such as holding an additional item of inventory, documenting and producing separate receiving orders, added clerical and assembly coordination effort, increased engineering complexity, and so on, would be grouped under the general category of overhead. On the other hand, in an activity-based system, indirect costs would be assigned to a unique cost pool, such as shown in Fig. 17.2.3 and 17.2.4, which compare the procedures of volume and activity-based costing. The striking difference is that the overhead cost pool used in volume systems is not present. In activity-based systems, many more cost pools are used and are closely related to some causal aspect of the process such as machine setup, receiving orders, or material movement. The costs are assigned to products based on the relative amounts of each cost driver consumed by that product. Therefore, low-volume options such as the premium radio receive a larger share of indirect costs relative to the actual volume of labor used to insert the component. The amount of each cost pool attributed to a product can then be spread over the number of units produced and a more accurate assignment of costs obtained.

Fig. 17.2.3 Volume-based costing.

Fig. 17.2.4 Activity-based costing.
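A compact sketch of the activity-based assignment just described follows. The activity cost pools, cost-driver quantities, volumes, and product names are invented for illustration; they are not figures from the text or from Figs. 17.2.3 and 17.2.4.

```python
# Hypothetical activity-based costing: indirect costs sit in activity cost pools
# and are assigned in proportion to each product's consumption of the cost driver.

cost_pools = {                       # pool: (total pool cost, total driver quantity)
    "machine setup":     (50_000, 200),    # driver: number of setups
    "receiving orders":  (30_000, 600),    # driver: number of receipts
    "material movement": (20_000, 1_000),  # driver: number of moves
}

# Driver quantities consumed by each product (hypothetical)
consumption = {
    "standard radio": {"machine setup": 20, "receiving orders": 100, "material movement": 300},
    "premium radio":  {"machine setup": 60, "receiving orders": 200, "material movement": 200},
}

units_produced = {"standard radio": 10_000, "premium radio": 500}

for product, drivers in consumption.items():
    assigned = sum(pool_cost / pool_qty * drivers[pool]
                   for pool, (pool_cost, pool_qty) in cost_pools.items())
    print(f"{product}: indirect cost ${assigned:,.0f} "
          f"(${assigned / units_produced[product]:,.2f} per unit)")
```

Run as written, the low-volume premium option absorbs a much larger indirect cost per unit than the high-volume option, which is the behavior the text attributes to activity-based systems.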

MANAGEMENT AND THE CONTROL FUNCTION

To be successful, management must integrate its own knowledge, skills, and practices with the know-how and experience of those who are entrusted with the task of carrying out company objectives. Management,


together with its employees and workers, can achieve its objectives through performance of the three managerial functions: (1) planning and setting objectives, (2) organizing, and (3) controlling. Planning is a basic function of the management process. Without planning there is no need to organize or control. However, planning must precede doing, and the budget is the most important planning tool of an enterprise. Organizing is essentially the establishment of the framework within which the required activities are to be performed, together with a list of who should perform them. Creation of an organization requires the establishment of organizational or functional units generally known as departments, divisions, sections, floors, branches, etc. Controlling is the process or procedure by which management ensures operative performance which corresponds with plans. The control process is pictured diagrammatically in Fig. 17.2.5. Recognition of accounting as an important tool in the controlling phase is evidenced through the role of performance reports in pointing out areas and jobs or tasks which require corrective action. These reports should make possible “management by exception.” The effectiveness of the control of costs depends upon proper communication through control and action reports from the accountant to the various levels of operating management. An organization chart is essential to the development of a cost system and cost reports which parallel the responsibilities of individuals for implementing management plans. The coordinated development of a company’s organization with the cost and budgetary system will lead to “responsibility accounting.” Responsibility accounting plays a key role in determining the type of cost system used.


Any cost system should be perfected so that it will (1) aid in the control and management of the company; (2) measure the efficiency of personnel, materials, and machines; (3) help in eliminating waste; (4) provide comparison within individual industries; (5) provide a means of valuing inventories; and (6) aid in establishing selling prices. In an organization departmentalized or segmented along product lines, it is often arbitrary to allocate certain indirect costs especially when common facilities or personnel are shared. This is because there is no objective basis to compute a division of common costs. Control methods for evaluating performance often rely on the segment margin statement.

Example of a Segment Margin Statement

                              Product A     Product B       Total
Sales                           $9,000       $11,000       $20,000
Direct material & labor         (4,000)       (5,000)       (9,000)
Contribution margin              5,000         6,000        11,000
Product-specific overhead       (1,000)       (2,500)       (3,500)
Segment margin                   4,000         3,500         7,500
Common costs                                                 (2,000)
Operating income                                             $5,500
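The statement above is produced by simple arithmetic; the short sketch below reproduces it. The figures are those of the illustrative statement, and the variable names are arbitrary.

```python
# Reproduces the arithmetic of the illustrative segment margin statement.
segments = {
    "Product A": {"sales": 9_000,  "direct_costs": 4_000, "specific_overhead": 1_000},
    "Product B": {"sales": 11_000, "direct_costs": 5_000, "specific_overhead": 2_500},
}
common_costs = 2_000

total_segment_margin = 0
for name, s in segments.items():
    contribution = s["sales"] - s["direct_costs"]
    segment_margin = contribution - s["specific_overhead"]
    total_segment_margin += segment_margin
    print(f"{name}: contribution margin {contribution}, segment margin {segment_margin}")

print("Operating income:", total_segment_margin - common_costs)   # 5500
```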

Fig. 17.2.5 The control process.

TYPES OF COST SYSTEMS

The construction of a cost system requires a thorough understanding of (1) the organizational structure of the company, (2) the manufacturing procedure, and (3) the type of information which management requires of the cost system.
1. The organization chart gives a graphic picture of the ranking authority of superintendents, department heads, and managers who are responsible for (a) providing the detailed information needed by the accounting division in order to install a successful system; (b) incurring expenditures in personnel, materials, and other cost elements, which the cost accountant must segregate and report to those in charge. The cost system with its operating accounts must correspond to organizational divisions of authority so that the individual supervisor, department head, or executive can be held “accountable” for the costs incurred in the department.
2. The manufacturing procedure and shop methods lead to a consideration of the type of pay (piece rate, incentive, day rate, etc.); the method of collecting hours worked; the control of inventories; the problem of costing tools, dies, jigs, and machinery; and many other problems connected with the factory.
3. The organizational setup on the one hand and the manufacturing procedure on the other form the background for the design of a cost system that is based on (a) recognition of the various cost elements, (b) departmentalization of factory and office, and (c) the chart of accounts.

The cost system’s value is greatly enhanced when it is interlocked with a budgetary control system. When budget figures are based upon standard costs, the greatest benefit will be derived from such a combination. Basically, two types of cost systems exist: (1) the actual (or historical) and (2) the standard (or predetermined). The actual cost system accumulates and summarizes costs as they occur and determines a final product cost after all manufacturing operations have been completed. The job is charged with actual quantities and costs of materials used and labor expended; the overhead or burden is allocated on the basis of some predetermined overhead rate. This predetermined overhead rate shows that even the so-called actual system does not entirely live up to its name. Under a standard cost system all costs are predetermined in advance of production. Both the actual (historical) and the standard cost system may be used in connection with either (1) the job-order cost method or (2) the process cost method.

BUDGETS AND STANDARD COSTS

A budget provides management with the information necessary to attain the following major objectives of budgetary control: (1) an organized procedure for planning; (2) a means for coordinating the activities of the various divisions of a business; (3) a basis for cost control. The planning phase provides the means for formalizing and coordinating the plans of the many individuals whose decisions influence the conduct of a business. Sales, production, and expense budgets must be established. Their establishment leads necessarily to the second phase of coordination. Production must be planned in relation to expected sales, materials and labor must be acquired or hired in line with expected production requirements, facilities must be expanded only as foreseeable future needs justify, and finances must be planned in relation to volume of sales and production. The third phase of cost control is predicated on the idea that actual costs will be compared with budgeted costs, thus relating what actually happened with what should have happened. To accomplish this purpose, a good measure of what costs should be under any given set of conditions must be provided. The most important condition affecting costs is volume or rate of activity. By predetermining, through the use of the flexible budget, the expenses allowed for any given rate of activity and comparing it with the actual expense, a better measurement of the performance of an individual department is achieved and the control of costs is more readily accomplished.
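A flexible-budget allowance of the kind just described is usually expressed as a fixed amount plus a variable rate times the actual activity attained. The sketch below uses hypothetical figures to compare that allowance with actual expense; it is not a prescribed handbook procedure.

```python
# Hypothetical flexible-budget comparison for one department.
fixed_allowance = 12_000          # dollars per period
variable_rate = 4.50              # dollars per direct-labor hour

actual_hours = 1_800              # activity actually attained
actual_expense = 21_300           # expense actually incurred

budget_allowance = fixed_allowance + variable_rate * actual_hours
variance = actual_expense - budget_allowance      # positive = unfavorable

print(f"Budget allowance at {actual_hours} h: ${budget_allowance:,.2f}")
print(f"Actual expense: ${actual_expense:,.2f}")
print(f"Variance: ${variance:,.2f} {'unfavorable' if variance > 0 else 'favorable'}")
```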



In the construction of overhead budgets the volume or activity of the entire organization as well as of the individual department is of considerable importance in their relationship to existing capacity. Capacity must be looked upon as that fixed amount of plant, machinery, and personnel to which management has committed itself and with which it expects to conduct the business. Volume or activity is the variable factor in business related to capacity by the fact that volume attempts to make the best use of the existing capacity. To find a profitable solution to this relationship is one of the most difficult problems faced by business management and the accountant who tries to help with appropriate cost data. Volume, particularly of a department, is often expressed in terms of direct-labor hours. With different rates of capacity, a different cost per hour of labor will be computed. This relationship can be demonstrated in the following manner:

Percentage of productive capacity          60%        80%       100%
Direct-labor hours                         600        800      1,000
Factory overhead:
  Fixed overhead                        $1,200     $1,200     $1,200
  Variable overhead                        600        800      1,000
    Total                               $1,800     $2,000     $2,200
Overhead rate per direct-labor hour      $3.00      $2.50      $2.20
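The tabulation above can be verified with a few lines of code. The sketch below simply recomputes the overhead rate at each utilization level from the $1,200 fixed overhead and the variable overhead of $1 per direct-labor hour implied by the table.

```python
# Recomputes the overhead-rate tabulation: fixed overhead constant in total,
# variable overhead proportional to direct-labor hours.
fixed_overhead = 1_200
variable_rate_per_hour = 1.00
hours_at_full_capacity = 1_000

for utilization in (0.60, 0.80, 1.00):
    hours = utilization * hours_at_full_capacity
    total_overhead = fixed_overhead + variable_rate_per_hour * hours
    print(f"{utilization:.0%} capacity: {hours:.0f} h, "
          f"total overhead ${total_overhead:,.0f}, rate ${total_overhead / hours:.2f}/h")
```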

The existence of fixed overhead causes a higher rate at lower capacity utilization. It is desirable to select that overhead rate which permits a full recovery of production costs by the end of the business cycle. The above tabulation reveals another important axiom with respect to fixed and variable overhead. Fixed overhead remains constant in total but varies in respect to cost per unit or hour. Variable overhead varies in total but remains fixed in relationship to the unit or hour.

Standard Costs  The budget, as a statement of expected costs, acts as a guidepost which keeps business on a charted course. Standards, however, do not tell what the costs are expected to be but rather what they will be if certain performances are attained. In a well-managed business, costs never exceed the budget. They should constantly approach predetermined standards. The uses of standard costs are of prime importance for (1) controlling and reducing costs, (2) promoting and measuring efficiencies, (3) simplifying the costing procedures, (4) evaluating inventories, (5) calculating and setting selling prices. The success of a standard cost system depends upon the reliability and accuracy of the standards. To be effective, standards should be established for a definite period of time so that control can be exercised and variances from standards computed.

Standards are set for materials, labor, and factory overhead. When actual costs differ from standard costs with respect to material and labor, two causes can generally be detected. (1) The price may be higher or lower or the rate paid a worker may be different; the difference is called a material price or a labor rate variance. (2) The quantity of the material used may be more or less than the standard quantity or the hours used by the worker may be more or less. The difference is called material-quantity variance or labor-efficiency variance, respectively. For factory overhead, the computation is somewhat more elaborate. Actual expenses are compared not only with standard expenses but also with budget figures. Various methods are in vogue, resulting in different kinds of overhead variances. Most accountants compute a controllable and a volume variance. The controllable variance deals chiefly with variable expenses and measures the efficiency of the manager’s ability to hold costs within the budget allowance. The volume variance portrays fixed overhead with respect to the use or nonuse of existing capacity. It measures the success of management in its ability to fill capacity with sales or production volume. These two variances can be analyzed further into an expenditure and efficiency variance for the controllable variances and into an effectiveness and capacity variance for the volume variance. Such detailed analyses might bring forth additional information which would help management in making decisions.

Of absolute importance for any cost system is the fact that the information must reach management promptly, with regularity, and in a report that is analytical, permitting quick comparison with targets and goals. Only in this manner can management, which includes all echelons from the foreman to the president, exercise control over costs and therewith over profits.

TRANSFER PRICING

A transfer price refers to the selling price of a good or service when both the buyer and seller are within the same organization. For example, one division of a company may produce a component, such as an engine, and transfer this component to an assembly division. For purposes of control, these organizational units may be treated as profit centers (responsible for earning a specified profit or return on investment). Accordingly, the transfer price is a revenue for the seller and a cost to the buyer. Because organizational control is at issue whenever interdivisional transfers are made, companies must often specify a policy to dictate the basis for determining a transfer price. Transfer prices should be based on market prices when available. Most taxing authorities require intercompany transfers to be made at market price as well. To solve situations of suboptimal resource usage (e.g., idle capacity) it is often possible to construct transfer prices based on manufacturing cost plus some allowance for profit. If the producing division is a cost center (responsible for controlling costs to achieve a certain budgeted level), in order to promote efficiency transfers are usually made on the basis of standard cost.
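Returning to the standard-cost variances described above, the following sketch shows the usual price/rate and quantity/efficiency computations. The standard and actual figures are hypothetical, and the sign convention (positive = unfavorable) is a common one, not a requirement of the text.

```python
# Hypothetical material and labor variance computations.
# Convention used here: a positive variance is unfavorable (actual exceeds standard).

# Material
std_price, std_qty = 2.00, 500      # $/lb, lb allowed for the actual output
act_price, act_qty = 2.10, 520      # $/lb, lb actually used

material_price_variance = (act_price - std_price) * act_qty        # 52.0 unfavorable
material_quantity_variance = (act_qty - std_qty) * std_price       # 40.0 unfavorable

# Labor
std_rate, std_hours = 18.00, 200    # $/h, hours allowed
act_rate, act_hours = 17.50, 215    # $/h, hours actually worked

labor_rate_variance = (act_rate - std_rate) * act_hours            # -107.5 favorable
labor_efficiency_variance = (act_hours - std_hours) * std_rate     # 270.0 unfavorable

for name, value in [("Material price", material_price_variance),
                    ("Material quantity", material_quantity_variance),
                    ("Labor rate", labor_rate_variance),
                    ("Labor efficiency", labor_efficiency_variance)]:
    print(f"{name} variance: {value:+.2f} ({'unfavorable' if value > 0 else 'favorable'})")
```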

SUPPORTING DECISION MAKING

The analytical phase of cost accounting has become more important and influential in the last few years. Management must make many decisions, some of a short-range, others of a long-range nature. To base judgment upon good, reliable data and analyses is a major task for controllers and their staffs. Cost analysis comprises such matters as analysis of distribution costs, gross-profit analysis, break-even analysis, profit-volume analysis, differential-cost analysis, direct costing, capital-expenditure analysis, return on capital employed, and price analysis. A detailed discussion of each phase mentioned lies beyond the scope of this section, but a short description is appropriate.

Distribution-cost analysis deals with allocation of selling expenses to territories, customers, channels of distribution, products, and sales representatives. Once so allocated it might be possible to determine the most profitable and the least profitable commodity, product, territory, or customer. Segment margin statements are useful for this analysis. Standards have been introduced recently in these analyses. The Robinson-Patman Act, an amendment to Section 2 of the Clayton Act, gave additional impetus to the analytical phase of distribution costs. This act prohibits pricing the same product at different amounts when the amounts do not reflect actual cost differences (such as distribution or warranty).

Gross-profit analysis attempts to determine the causes for an increase or decrease in the gross profit. Any change in the gross profit is due to one or a combination of the following: (1) changes in the selling price of the products; (2) changes in the volume sold; (3) changes in the types of products sold, called the sales-mix; (4) changes in the cost elements. Cost elements are analyzed through budgetary control methods. Sales figures must be scrutinized to unearth the changes from the contemplated course and therewith from the final profit.

Break-even analysis, generally presented in the form of a break-even chart, constitutes one of the briefest and most easily understood devices for data presentation for policy-making decisions. The name “break-even” implies that point at which the company neither makes a profit nor suffers a loss from the operations of the business. A break-even chart can be defined as a portrayal in graphic form of the relation of production and sales to profit or, more briefly, a graphic variable income statement. The computation of the break-even point can be made by the following formula:

    Break-even sales volume = total fixed expenses / [1 − (total variable expenses / total sales volume)]

EXAMPLE. Assume fixed expenses, $13,800,000; variable expenses, $27,000,000; total sales volume, $50,000,000. Computation: Break-even sales volume = $13,800,000/[1 − ($27,000,000/$50,000,000)] = $30,000,000.
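The formula and the example translate directly into code; the figures below are those of the example, and the function name is arbitrary.

```python
# Break-even sales volume = fixed expenses / [1 - (variable expenses / sales volume)]
def break_even_sales(fixed_expenses, variable_expenses, sales_volume):
    contribution_ratio = 1 - variable_expenses / sales_volume
    return fixed_expenses / contribution_ratio

volume = break_even_sales(fixed_expenses=13_800_000,
                          variable_expenses=27_000_000,
                          sales_volume=50_000_000)
print(f"Break-even sales volume: ${volume:,.0f}")   # $30,000,000
```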

Results can be obtained in chart form (Fig. 17.2.6).

Fig. 17.2.6 Illustrative break-even chart.

Cost-volume-profit analysis deals with the effect that a change of volume, cost, price, and product-mix will have on profits. Managements of many enterprises attempt to stimulate the public to purchase their products by conducting intensive promotion campaigns in radio, press, mail, and television. The customer, however, makes the final decision. What management wants to know is which product or model will yield the most profitable margin; which is the least profitable; what effect a reduction in sales price will have on final profit; what effect a shift in volume or product-mix will have on product costs and profits; what the new break-even point will be under such changing conditions; what the effect of expected increases in wages or other operating costs on profit will be; what the effect will be on costs, profit, and sales volume should there be an expansion of the plant. Cost-volume-profit analysis can also be presented graphically in a so-called volume-profit-analysis graph. Using the same data as in the break-even chart, a volume-profit analysis graph takes the form shown in Fig. 17.2.7.
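The what-if questions listed above are answered by recomputing profit under the changed assumptions. A minimal sketch follows; the product data (price, unit variable cost, fixed costs, volumes) are hypothetical.

```python
# Hypothetical cost-volume-profit what-if: effect of a price cut with a volume gain.
def profit(units, price, variable_cost_per_unit, fixed_costs):
    return units * (price - variable_cost_per_unit) - fixed_costs

base = profit(units=40_000, price=25.00, variable_cost_per_unit=15.00, fixed_costs=300_000)

# An 8 percent price reduction expected to raise volume 20 percent
alt = profit(units=48_000, price=23.00, variable_cost_per_unit=15.00, fixed_costs=300_000)

print(f"Profit at current price and volume: ${base:,.0f}")
print(f"Profit after price cut and volume gain: ${alt:,.0f}")
```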

Fig. 17.2.7 Illustrative cost-volume-profit analysis graph.

Differential-cost analysis treats differences, as the title suggests. These differences, also called alternative courses, arise when management wants to know whether or not to take business at a special price, to risk a decline in price of total sales, to sacrifice volume for price, to shut down part of the plant, or to enlarge plant capacity. While accountants generally use the term “differential,” economists speak of “marginal” and engineers of “incremental” costs in connection with such a study. As in any of the previously discussed analyses, the classification of costs into their fixed and variable components is absolutely essential. However, while in break-even analysis the emphasis rests upon the fixed expenses, differential-cost studies stress the variable costs. The differential-cost statement presents only the differences in the following manner:

                     Present business    Additional business       Total
Sales                     $100,000             $10,000           $110,000
Variable costs              60,000               6,000             66,000
Marginal income             40,000               4,000             44,000
Fixed expenses              30,000                none              30,000
Profit                    $ 10,000             $ 4,000           $ 14,000

This statement shows that additional business is charged with the variable expenses only because present business is absorbing all fixed expenses.

Direct costing is a costing method which charges the products with only those costs that vary directly with volume. Variable or direct costs such as direct materials, direct labor, and variable manufacturing expenses are examples of costs chargeable to the product. Costs that are a function of time rather than of production are excluded from the cost of the product. The only costs assignable to inventories are variable costs, and because they should vary in proportion to increases or decreases in production, the unit cost assigned to inventories should be uniform.

CAPITAL-EXPENDITURE DECISIONS

The preparation of a capital-expenditure budget must be preceded by an analytical and decision-making process by management. This area of managerial decisions not only is important to the success of the company but also is crucial in case of errors. Financial requirements, present and anticipated costs, profits, tax considerations, and legal, personnel, and market problems must be studied and reviewed before making the final decision. Five evaluation techniques are generally accepted as representative tools for decision making: (1) payback- or payout-period method; (2) average-return-on-investment method; (3) present-value method; (4) discounted-cash-flow method; and (5) economic value added (EVA). None of these methods serves every purpose or every firm. The methods should, however, aid management in exercising judgment and making decisions.

Of significance in the evaluation of a capital expenditure is the time value of money, which is employed in the present value and the discounted-cash-flow methods. The present value means that a dollar received a year hence is not the equivalent of a dollar received today, because the use of money has a value. For this reason, the estimated results of an investment proposal can be stated as a cash equivalent at the present time, i.e., its present value. Present-value tables have been devised to facilitate application of present-value theory. In the present-value method the discount rate is known or at least predetermined. In the discounted-cash-flow (DCF) method the rate is to be calculated and is defined as the rate of discount at which the sum of positive present values equals the sum of negative present values. The DCF method permits management to maximize corporate profits by selecting proposals with the highest rates of return as long as the rates are higher than the company’s own cost of capital plus management’s allowance for risk and uncertainty.

Cost of capital represents the expected return for a given level of risk that investors demand for investing their money in a given firm or venture. However, when related to capital-expenditure planning, the cost of capital refers to a specific cost of capital from a particular financing effort to provide funds for a specific project or numerous projects. Such use of the concept connotes the marginal cost of capital point of view and implies linkage of the financing and investment decisions. It is, therefore, not surprising that the cost of capital differs, depending upon the sources. A company could obtain funds from (1) bonds, (2) preferred and common stock, (3) use of retained earnings, and (4) loans from banks. If a company obtains funds by some combination of these sources to achieve or maintain a particular capital structure, then the cost of capital (money) is the weighted average cost of each money source.
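The time-value and weighted-average-cost-of-capital ideas just described can be sketched briefly. The cash flows, discount rates, and capital-structure weights below are hypothetical, and the functions are ordinary present-value arithmetic rather than a prescribed handbook procedure.

```python
# Hypothetical present-value evaluation of a capital expenditure.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial (usually negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def weighted_average_cost_of_capital(sources):
    """sources: list of (proportion of capital, after-tax cost) pairs."""
    return sum(weight * cost for weight, cost in sources)

wacc = weighted_average_cost_of_capital([(0.40, 0.06),    # bonds
                                         (0.50, 0.12),    # common stock
                                         (0.10, 0.11)])   # retained earnings

flows = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]   # 5-year project
print(f"Weighted average cost of capital: {wacc:.1%}")
print(f"NPV at the WACC: ${npv(wacc, flows):,.0f}")
```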



Return-on-Capital Concept This aids management in making decisions with respect to proposed capital expenditures. This concept can also be used for (1) measuring operating performance, (2) profit planning and decision making, and (3) product pricing. The return on capital may be expressed as the product of two factors: the percentage of profit to sales and the rate of capital turnover. In the form of an equation, the method appears as

    Profit / Sales = profit margin
    Sales / Capital investment = investment turnover
    (profit margin) × (investment turnover) = return on capital investment

Whether for top executive, plant or product manager, plant engineer, sales representative, or accountant, the concept of return on capital employed tends to mesh the interest of the entire organization. An understanding and appreciation of the return-on-capital concept by all employees help in building an organization interested in achieving fair profits and an adequate rate of return.

COST MANAGEMENT

Often, programs of continuous improvement require that costs be computed according to the activity-based method. That method facilitates identifying and setting priorities for the elimination of non-value-adding activities. Non-value-added activities decrease cycle time efficiency, where cycle time efficiency is the sum of all value-added activity times divided by total cycle time. The engineer may redesign the product using common parts or through process redesign so as to eliminate those activities or cost pools that add to product cost without adding to value. Some examples of these activities are material movement, run setup, and queue time. Efforts to manage product costs by eliminating non-value-adding activities are frequently the result of a need to attain a specific target cost. Traditionally selling price was determined by adding a required markup to total cost. Global competition has forced producers to accept a market price determined by competitive forces:

    Target cost = market selling price − required return on investment

Accordingly, the company that stays in business is the one that can accept this price and still earn a return on investment. Target costing is a concerted effort to design, produce, and deliver the product at a cost that will assure long-term survival. Engineers may also focus on process throughput as a means of cost management. The theory of constraints proposes that profit is maximized when process throughput is maximized. The underlying assumption is that costs are fixed in the short run and cost per unit is lowest when fixed costs are distributed over the largest possible volume. This goal is accomplished by reducing cycle time through the elimination of process bottlenecks.
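Both relations just described, the target-cost equation and the cycle-time-efficiency ratio, are simple arithmetic. The sketch below uses hypothetical figures to illustrate them.

```python
# Target cost and cycle-time efficiency (hypothetical figures).

market_selling_price = 80.00
required_return_per_unit = 12.00
target_cost = market_selling_price - required_return_per_unit
print(f"Target cost: ${target_cost:.2f} per unit")

# Cycle time efficiency = sum of value-added activity times / total cycle time
value_added_minutes = [14.0, 9.5, 6.0]         # e.g., machining, assembly, test
non_value_added_minutes = [22.0, 8.0, 30.0]    # e.g., moves, setup, queue time

total_cycle = sum(value_added_minutes) + sum(non_value_added_minutes)
efficiency = sum(value_added_minutes) / total_cycle
print(f"Cycle time efficiency: {efficiency:.1%}")
```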

17.3 ENGINEERING STATISTICS AND QUALITY CONTROL
by Ashley C. Cockerill

REFERENCES: Brownlee, “Statistical Theory and Methodology in Science and Engineering,” Wiley. Conover, Two k-sample slippage tests, Journal of the American Statistical Association, 63: 614–626. Conover, “Practical Nonparametric Statistics,” Wiley. Duncan, “Quality Control and Industrial Statistics,” Richard D. Irwin. Gibbons, “Nonparametric Statistical Inference,” McGraw-Hill. Olmstead and Tukey, A corner test for association, Annals of Mathematical Statistics, 18: 495–513. Owen, “Handbook of Statistical Tables,” Addison-Wesley. Pearson, “Biometrika Tables for Statisticians,” vol. 1, 2d ed., Cambridge University Press. Tukey, A quick compact two sample test to Duckworth’s specifications, Technometrics, 1: 31–48. Wilks, “Mathematical Statistics,” Wiley.

Statistical models and statistical methods play an important role in modern engineering. Phenomena such as turbulence, vibration, and the strength of fiber bundles have statistical models for some of their underlying theories. Engineers now have available to them batteries of computer programs to assist in the analysis of masses of complex data. Many textbooks are needed to cover fully all these models and methods; many are areas of specialization in themselves. On the other hand, every engineer has a need for easy-to-use, self-contained statistical methods to assist in the analysis of data and the planning of tests and experiments. The sections to follow give methods that can be used in many everyday situations, yet they require a minimum of background to use and need little, if any, calculation to obtain good results.

STATISTICS AND VARIABILITY

One of the primary problems in the interpretation of engineering and scientific data is coping with variability. Variability is inherent in every physical process; no two individuals in any population are the same. For example, it makes no real sense to speak of the tensile strength of a synthetic fiber manufactured under certain conditions; some of the fibers

will be stronger than others. Many factors, including variations of raw materials, operation of equipment, and test conditions themselves, may account for differences. Some factors may be carefully controlled, but variability will be observed in any data taken from the process. Even tightly designed and controlled laboratory experiments will exhibit variability.

Variability or variation is one of the basic concepts of statistics. Statistical methods are aimed at giving objective, quantitative, and reproducible ways of assessing the effects of variability. In particular, they aim to provide measures of the uncertainty in conclusions drawn from observational data that are inherently variable.

A second important concept is that of a random sample. To make valid inferences or conclusions from a set of observational data, the data should be able to be considered a random sample. What does this mean? In an operational sense it means that everything we are interested in seeing should have an equal chance of being represented in the observations we obtain. Some examples of what not to do may help. If machine setup is an important contributor to differences, then all observations should not be taken from one setup. If instrumental variation can be important, then measurements on the same item should not be taken successively—time to “forget” the last reading should pass. A random sample of n items in a warehouse is not the first n that you can find. It is the n that is selected by a procedure guaranteed to give each item of interest an equal chance of selection. One should be guided by generalizations of the fact that the apples on top of a basket may not be representative of all apples in the basket.

CHARACTERIZING OBSERVATIONAL DATA: THE AVERAGE AND STANDARD DEVIATION

The two statistics most commonly used to characterize observational data are the average and the standard deviation. Denote by x1, x2, . . . , xn

the n individual observations in a random sample from some process. Then the average and standard deviation are defined as follows:

Average:
    x̄ = (x1 + x2 + · · · + xn)/n = (Σ xi)/n        (sum over i = 1, . . . , n)

Standard deviation:
    s = [Σ (xi − x̄)²/(n − 1)]^(1/2)        (sum over i = 1, . . . , n)

Clearly, the average gives one number around which the n observations tend to cluster. The standard deviation gives a measure of how the n observations vary or spread about this average. The square of the standard deviation is called the variance. If we consider a unit mass at each point xi, then the variance is equivalent to a moment of inertia about an axis through x̄. It is readily seen that for a fixed value of x̄, greater spreads from the average will produce larger values of the standard deviation s. The average and the standard deviation can be used jointly to summarize where the observations are concentrated. Tchebysheff’s theorem states: A fraction of at least 1 − (1/k²) of the observations lie within k standard deviations of the average. The theorem guarantees lower bounds on the percentage of observations within k (also known as z in some textbooks) standard deviations of the average.

    Interval               Lower bound on % of measurements
    x̄ − 2s to x̄ + 2s                 75%
    x̄ − 3s to x̄ + 3s                 89%
    x̄ − 4s to x̄ + 4s                 94%
    x̄ − 5s to x̄ + 5s                 96%
    x̄ − 6s to x̄ + 6s                 97%

Since the average and the standard deviation are computed from a sample, they are themselves subject to fluctuation. However, if μ is the long-term average of the process and σ is the long-term standard deviation, then:

    Average(x̄) = μ, the process average
    Average(s) = σ, the process standard deviation

Furthermore, the intervals μ ± kσ contain the same percentage of all values, as do the intervals x̄ ± ks for the sample; that is, at least 89 percent of all the long-term values will be contained in the interval μ − 3σ to μ + 3σ, etc.

Range Estimate of the Standard Deviation

For n ≤ 20 it is more convenient to compute the range r to estimate the standard deviation s. The range is x(n) − x(1), where x(n) is the largest value in a random sample of size n and x(1) is the smallest value. For example, if n = 10 and the observations are 310, 309, 312, 316, 314, 303, 306, 308, 302, 305, the range is r = 316 − 302 = 14. An estimate of the standard deviation s is obtained by multiplying r by the factor fn in Table 17.3.1. The average value of r × fn is σ. Thus, in the example above, an estimate of σ and a value that can be used for s is 0.3249r = 0.3249(14) = 4.5486.

Table 17.3.1  Average of Range × fn = σ

    Sample size      fn        Sample size      fn
         2         0.8862          11         0.3152
         3         0.5908          12         0.3069
         4         0.4857          13         0.2998
         5         0.4299          14         0.2935
         6         0.3946          15         0.2880
         7         0.3698          16         0.2831
         8         0.3512          17         0.2787
         9         0.3367          18         0.2747
        10         0.3249          19         0.2711
                                   20         0.2677
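The computations above are reproduced below for the ten observations of the range example. Only the standard library is used; the fn factor is read from Table 17.3.1.

```python
import math

# The ten observations used in the range example above
x = [310, 309, 312, 316, 314, 303, 306, 308, 302, 305]
n = len(x)

average = sum(x) / n
s = math.sqrt(sum((xi - average) ** 2 for xi in x) / (n - 1))

r = max(x) - min(x)          # range
fn = 0.3249                  # factor for n = 10 from Table 17.3.1
s_from_range = fn * r

print(f"average = {average:.1f}, s = {s:.2f}")
print(f"range = {r}, range estimate of the standard deviation = {s_from_range:.4f}")
```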

PROCESS VARIABILITY—HOW MUCH DATA?

Since the output of all processes is variable, one can make reasonable decisions about the output only if one can obtain a measure of how much variability or spread one can expect to see under normal conditions. Variability cannot be measured accurately with a small amount of data. Methods for assessing how much data are needed are given for two general situations.

Specified Tolerances

A convenient statement about the variability or spread of a process can be based on the smallest and largest values in a random sample of the output. There are no practical limitations on its use. Suppose that we have a random sample of n values from our process. Denote the values by X1, X2, . . . , Xn. After obtaining the values we find the smallest, X(1), and the largest, X(n). Now we want to assess what percent of all future values that this process might generate will be covered by X(1) and X(n). In statistics, X(1) and X(n) are called tolerance limits. If the process generates bolts and X is the diameter, then the engineering concept of tolerance and the statistical concept of tolerance are seen to be quite similar. Let p be the percentage of all the process values that on a long-term basis will be between X(1) and X(n). Let P be a lower bound for this percentage p. Now consider the probability statement: Probability (p ≥ P) = C. The quantity C we call confidence. Since it is a probability its value is between 0 and 1. As C approaches 1 our confidence in the percentage P increases.

The interpretation of P and C can be explained in terms of Table 17.3.2. Suppose that we take a random sample of size n = 269 values from our process output; the smallest value is 10 and the largest is 54. In Table 17.3.2 we see that 269 is the entry for P = 99 and C = 0.75. This tells us that at least P = 99 percent of all future values that this process will generate will be between 10 and 54, the smallest and largest values seen in the sample of 269. The confidence C = 0.75 = 3/4 tells us that the chances are 3 out of 4 that our statement is correct. As we increase the sample size n, we increase the chances that our statement is correct. For example, if our sample size had been n = 473, then C = 0.95 and the chances are 95 in 100 that we are correct in making the statement that at least 99 percent of all process values will be between the 10 and 54 seen in the sample. Similarly, if the sample size had been 740, then the chances of being correct increase to 995 in 1000. If sample size n is decreased sufficiently, the confidence C = 0.50 = 1/2 indicates that the chances are one in two of being right, and one in two of being wrong. Therefore, it is important to select n so as to keep C as large as possible. The cost of acquiring the data will determine the upper limit for n. Further information on tolerance limits can be found in Wilks (1962) and Duncan (1986).

Table 17.3.2

                    Confidence, C
    P, %      0.995     0.99     0.95     0.75
    99.9       7427     6636     4742     2692
    99.5       1483     1325      947      538
    99          740      661      473      269
    98          368      330      235      134
    97          245      219      156       89
    96          183      163      117       67
    95          146      130       93       53
    90           71       64       46       26
    80           34       31       22       13
    75           27       24       18       10

NOTE: Sample size r required to have a confidence C that at least P percent of all future values will be included between the smallest and largest values in a random sample.

Wear-Out and Life Tests

A special case of coverage occurs if our interest is in a wear-out phenomenon or a life test. For example, suppose we put a number of incandescent light bulbs on test; our interest is in the length of time to failure. Clearly we do not want to wait until all specimens fail to draw a conclusion; it might take an inordinate length of time for the last one to fail. From a practical point of view we would probably be interested in those that fail first anyway. If the sample size is properly chosen, there will be important information as soon as we obtain the first failure. In a random sample of size n let the failure times be T1 ≤ T2 ≤ · · · ≤ Tn. The value T1 is thus the smallest value in a random sample of size n. Now let q be the percentage of future units that can be expected to fail in a time less than T1, the smallest value in the sample. As before we can make a probability statement about q. Let Q be an upper bound to q. Then we can compute: Probability (q ≤ Q) = C.

For example, suppose that we put a random sample of 299 items on test and the first one fails in time T1 = 151 h. From Table 17.3.3 we see that 299 is an entry for Q = 1 and C = 0.95. Thus we can conclude that not more than 1 percent of future units should fail in a time less than 151 h. The confidence in the statement is 95 chances in 100 of being correct. Again referring to Table 17.3.3, we see that if T1 = 151 were based on a sample of 528, then the confidence would be increased to 995 chances in 1000. Most importantly, Table 17.3.3 tells us that we need to test a very large sample if we want to have high confidence that only a small percentage of future units will fail in a time less than the smallest observed. The theory behind Table 17.3.3 can be found in Wilks (1962). For a more extensive table of values see Owen (1962). If Q′ = Q/100, use

    r = [log (1 − C)] / [log (1 − Q′)]

Table 17.3.3

                    Confidence, C
    Q, %      0.995     0.99     0.95     0.75
    0.1        5296     4603     2995     1386
    1           528      459      299      138
    2           263      228      149       69
    3           174      152       99       46
    4           130      113       74       34
    5           104       90       59       28
    10           51       44       29       14
    15           33       29       19        9
    20           24       21       14        7
    25           19       17       11        5

NOTE: Sample size r required to have a confidence C that fewer than Q percent of future units will fail in a time shorter than the shortest life in the sample. For a more extensive table of values, see Owen (1962).
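The sample-size relation just given is easy to evaluate directly. The sketch below reproduces two entries of Table 17.3.3; the function name is arbitrary.

```python
import math

def life_test_sample_size(C, Q_percent):
    """Sample size r such that, with confidence C, fewer than Q percent of future
    units fail sooner than the shortest observed life: r = log(1-C)/log(1-Q')."""
    q = Q_percent / 100.0
    return math.ceil(math.log(1 - C) / math.log(1 - q))

print(life_test_sample_size(C=0.95, Q_percent=1))    # 299, as in the example
print(life_test_sample_size(C=0.995, Q_percent=1))   # 528
```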

i

X

Y

i

X

Y

1 2 3 4 5 6

10.45 13.81 12.22 9.05 17.86 14.54

4.1 2.7 1.6 4.3 2.6 0.1

18 19 20 21 22 23

9.65 7.44 10.70 13.38 13.00 13.90

3.8 5.4 7.6 6.0 10.4 10.7

7 8 9 10 11 12

19.99 8.73 4.66 13.88 5.10 3.98

3.7 3.5 5.3 3.9 4.4 4.1

24 25 26 27 28 29

11.94 14.11 0.93 3.18 13.13 13.45

9.4 10.7 12.9 12.5 6.5 11.7

13 14 15 16 17

8.12 12.26 10.30 5.40 10.39

6.3 6.6 6.5 11.9 5.8

30 31 32 33

12.70 15.95 7.30 7.78

9.6 8.5 16.6 8.8

NOTE: Data are on paper samples. X is proportional to reciprocal of light transmission. Y is proportional to tensile strength.

One of the most common problems in the analysis of engineering data is to determine if a correlation or an association exists between two variables X and Y, where the data occur in pairs (Xi, Yi). The “corner test of association” developed by Olmstead and Tukey (1947) is a quick and simple test to assist in making this determination.

Corner Test

Conditions for Use. Each pair (Xi, Yi) should have been obtained independently; there are no other practical assumptions for its use. Of course, the user should consider the physical process generating the data when interpreting any correlation or association that is determined to exist.

Procedure

1. Make a scatter plot on graph paper of the data pairs (Xi, Yi), with the usual convention that X is the horizontal scale and Y the vertical.
2. Determine the median Xm of the Xi values. Determine the median Ym of the Yi values. The median splits the data into two parts so that there is an equal number of values above and below the median. Let N denote the total number of points. If N is odd, then N can be written as 2k + 1 and the median is the (k + 1)st value as the values are ordered from the smallest to the largest. If N is even, then N can be written as 2k, and the median is taken to be midway between the kth and (k + 1)st values.
3. Draw a vertical line through Xm.
4. Draw a horizontal line through Ym.
5. The lines in (3) and (4) divide the graph into four quadrants. Label the upper right and lower left as plus. Label the upper left and lower right as minus.
6. Begin at the right side of the plot. Count the values, in order of decreasing X, until forced to cross the horizontal median Ym. Give the count a plus sign if the values are in a plus quadrant, a minus sign if they are in a minus quadrant.
7. Repeat the procedure in (6), moving down from above until you have to cross the vertical median, moving from left to right until you have to cross the horizontal median, and moving up from below until you have to cross the vertical median.
8. Compute the algebraic sum of the four counts obtained in (6) and (7). Denote the sum by T.

Test. If |T| ≥ 11, then there is evidence of correlation between X and Y; |T| is the value of T ignoring the sign.

EXAMPLE. Table 17.3.4 gives 33 pairs of values obtained from samples of a paper product. The X coordinate is proportional to the reciprocal of light transmitted by the sample. The Y coordinate is proportional to tensile strength.
1. The 33 pairs of values are plotted in Fig. 17.3.1.

Fig. 17.3.1 Plot of example data used in conjunction with the Corner Test.

2. The median of X values is Xm = 11.94. The median of Y values is Ym = 6.3.
3 to 5. The medians are shown in Fig. 17.3.1, and the quadrants are labeled.
6 to 7. The counts are as follows:
Right to left: −2 (points at 19.99 and 17.86 on X)
Top to bottom: −4 (points at 16.6, 12.9, 12.5, 11.7 on Y)
Left to right: −2 (points at 0.93 and 3.18 on X)
Bottom to top: −4 (points at 0.1, 1.6, 2.6, 2.7 on Y)
8. The algebraic sum of the counts is −12. Hence T = −12. And since |T| ≥ 11, one can conclude that there is evidence of correlation or association between the variables X and Y.
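For readers who prefer to automate the count, the following Python sketch implements the quadrant-count procedure described above. The function names and the treatment of points that fall exactly on a median are our own assumptions, so treat it as an illustration rather than a definitive implementation.

from statistics import median

def corner_test_T(pairs):
    # pairs: list of (x, y) tuples; returns the algebraic sum T of the four counts
    xm = median(x for x, _ in pairs)
    ym = median(y for _, y in pairs)

    def count(axis, from_high_end):
        # Scan along one axis from one end and count points until forced to
        # cross the median of the other axis; the sign follows the quadrant
        # rule (upper right and lower left are plus).
        key = (lambda p: p[0]) if axis == "x" else (lambda p: p[1])
        pts = sorted(pairs, key=key, reverse=from_high_end)
        if axis == "x":
            side = pts[0][1] > ym            # above or below the horizontal median
            run = 0
            for _, y in pts:
                if (y > ym) != side:
                    break
                run += 1
        else:
            side = pts[0][0] > xm            # right or left of the vertical median
            run = 0
            for x, _ in pts:
                if (x > xm) != side:
                    break
                run += 1
        sign = 1 if side == from_high_end else -1
        return sign * run

    return (count("x", True) + count("x", False) +
            count("y", True) + count("y", False))

Applied to the 33 pairs of Table 17.3.4, this sketch returns −12, in agreement with the example above.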

COMPARISON OF METHODS OR PROCESSES

A common problem in engineering investigations is that of using experimental or observational data to assess the performance of two processes, two treatment methods, or the like. Often one process or treatment is a standard or the one in current use. The other is an alternative that is a candidate to replace the standard. Sometimes it is cheaper, and one hopes to see no performance difference. Sometimes it is supposed to offer superior performance, and one hopes to see a measurable difference in the variable of interest. In either case we know that the variable of interest will have a distribution of values; and if the two processes are to be measurably different the distribution of values should not overlap too much. For an objective assessment we need to have some way to calibrate the overlap. A quick and easy-to-use test for this purpose is the outside count test developed by Tukey (1959).


Table 17.3.5

Sample no.   Supplier 1   Supplier 2   Supplier 3   Supplier 4
     1         45.37        30.05        41.30        46.21
     2         21.68        36.04        31.09        36.01
     3         43.91        18.04        24.31        46.28
     4         47.76*       32.91        15.64        21.80
     5         23.81        41.67        54.85*       28.57
     6         19.90        37.40        32.96        48.45
     7         44.68        46.67*       45.48        33.49
     8         11.81        27.93        45.14        53.07*
     9         35.42        45.20        45.49        35.65
    10         39.85        29.54        52.82        14.95

NOTE: The data are proportional to the time to failure of a standard cutting tool. Asterisks denote largest value in each group.

Two Methods—Outside Count Test

Conditions for Use. Given two groups of measurements taken under conditions 1 and 2 (treatments, methods, etc.), we identify the direction of difference by insisting that the two groups have minimum overlap. Use 1 to denote the group with the smaller number of measurements and let N1 be the number of measurements for that group. Let N2 be the number of measurements for the other group. The number of observations for each group should be about the same. The conditions to be satisfied are:

4 ≤ N1 ≤ N2 ≤ 30   and   N2 ≤ (4N1/3) + 3

Procedure

1. Count the number of values in the one group exceeding all values in the other.
2. Count the number of values in the other group falling below all values in the first group.
3. Sum the two counts in (1) and (2). (It is required that neither count be zero. One group must have the largest value and the other the smallest.)

Test. If the sum of the two counts in (3) is 7 or larger, there is sufficient evidence to conclude that the two methods are measurably different.

EXAMPLE. The following data represent the results of a trial of two methods for increasing the wear resistance of a grinding wheel. The data are proportional to wear:
Method 1: 13.06**, 9.52, 9.98, 8.83, 12.78, 9.00, 11.56, 8.10*, 12.21.
Method 2: 8.44, 9.64, 9.94, 7.30, 8.74, 6.30*, 10.78**, 7.24, 9.30, 6.66.
The smallest value for each method is marked with an asterisk; the largest value is marked with two asterisks.
Count 1: The largest value is 13.06 for method 1. The values 13.06, 12.78, 12.21, and 11.56 for method 1 exceed the largest value for method 2, viz., 10.78. Hence the count is 4.
Count 2: The smallest value for method 1 is 8.10. For method 2 the values 7.30, 6.30, 7.24, and 6.66 are less than 8.10. Hence the count is 4.
Count 3: The total count is 4 + 4 = 8 ≥ 7. The data support the conclusion that method 2 produces measurably less wear than method 1.
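A minimal Python sketch of the outside count test follows; the function name and the precondition checks are ours, and the critical value 7 is the one quoted above.

def outside_count(group_a, group_b):
    # Returns the total outside count, or None if the conditions for use fail.
    n1, n2 = sorted((len(group_a), len(group_b)))
    if not (4 <= n1 <= n2 <= 30 and n2 <= 4 * n1 / 3 + 3):
        return None
    hi = group_a if max(group_a) > max(group_b) else group_b   # holds the largest value
    lo = group_b if hi is group_a else group_a
    if min(hi) <= min(lo):
        return None          # one count would be zero; the test does not apply
    above = sum(1 for v in hi if v > max(lo))   # values exceeding all of the other group
    below = sum(1 for v in lo if v < min(hi))   # values below all of the other group
    return above + below

method_1 = [13.06, 9.52, 9.98, 8.83, 12.78, 9.00, 11.56, 8.10, 12.21]
method_2 = [8.44, 9.64, 9.94, 7.30, 8.74, 6.30, 10.78, 7.24, 9.30, 6.66]
print(outside_count(method_1, method_2))   # 8, which meets the critical value of 7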

Several Methods

The problem outlined in the preceding section can be generalized so that one can make a comparison of several processes, treatments, methods, or the like. Again, if there are differences among the methods, the values that we see should not overlap too much. We give you two easy-to-use tests. The first is for the situation where there is an equal amount of data for each method. For the second, the amount of data may differ. Each method will be demonstrated using the data in Table 17.3.5.

Several Methods—Overlap Test

Conditions for Use. Independent data should be obtained for each of the k methods. The number of values n should be the same for each method.

Procedure

1. For each of the k methods, determine the largest value. Label it with an asterisk.
2. Scan the largest values. Label the group with the largest largest value as BIG. Label the group with the smallest largest value as SMALL, and its largest value as S.
3. In the group labeled BIG count the number of values that are larger than S, the largest value in SMALL. Denote this count by C.
4. Enter Table 17.3.6 for n values of k groups. If C exceeds the tabled value, then the data support a conclusion that the methods are different. Otherwise, they do not.

EXAMPLE.
1. In Table 17.3.5 the largest value for each of the four groups is marked with an asterisk.
2. Group 3 is BIG. Group 2 is SMALL; the largest value in Group 2 is S = 46.67.
3. The number of values in Group 3 larger than 46.67 is 2 (52.82, 54.85).
4. Enter Table 17.3.6 with n = 10 and k = 4. The entry is 5, which is greater than 2. Hence, the data do not support a conclusion that the time to failure for the cutting tools of the four suppliers is measurably different.
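The counting step of the overlap test is easily scripted. The sketch below (names are ours) computes the count C that is compared with the Table 17.3.6 entry.

def overlap_count(groups):
    # groups: one list of measurements per method, each with the same number of values
    largest = [max(g) for g in groups]
    big = groups[largest.index(max(largest))]   # group whose largest value is biggest
    s = min(largest)                            # largest value of the SMALL group
    return sum(1 for v in big if v > s)

supplier_data = [
    [45.37, 21.68, 43.91, 47.76, 23.81, 19.90, 44.68, 11.81, 35.42, 39.85],
    [30.05, 36.04, 18.04, 32.91, 41.67, 37.40, 46.67, 27.93, 45.20, 29.54],
    [41.30, 31.09, 24.31, 15.64, 54.85, 32.96, 45.48, 45.14, 45.49, 52.82],
    [46.21, 36.01, 46.28, 21.80, 28.57, 48.45, 33.49, 53.07, 35.65, 14.95],
]
print(overlap_count(supplier_data))   # 2, which does not exceed the tabled value of 5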

Several Methods—Rank Test

Conditions for Use. Independent data should be obtained for each of the methods. The amount of data for each method may be different.

Procedure

1. Let ni be the number of values in Group i.
2. Let N = Σ(i = 1 to k) ni be the total number of values.
3. Rank each value from 1 to N beginning with the smallest. (If there are ties among t values, divide the successive ranks equally among them.)
4. Compute ri, the sum of the ranks for the ith group. [Note: For a check, Σ(i = 1 to k) ri = N(N + 1)/2.]


Table 17.3.6   95% Point for k-Sample Problems

                              k
   n      3    4    5    6    7    8    9   10
   5      4    4    4    4    4    4    4    4
   6      4    4    4    5    5    5    5    5
   7      4    5    5    5    5    5    5    5
   8      4    5    5    5    5    5    5    5
   9      5    5    5    5    5    5    6    6
  10      5    5    5    5    6    6    6    6
  12      5    5    5    6    6    6    6    6
  14      5    5    6    6    6    6    6    6
  16      5    5    6    6    6    6    6    6
  18      5    6    6    6    6    6    6    7
  20      5    6    6    6    6    6    7    7
  25      5    6    6    6    6    7    7    7
  30      5    6    6    6    7    7    7    7
  40      5    6    6    7    7    7    7    7
   ∞      6    6    7    7    7    7    8    8

NOTE: k is the number of groups; n is the number of values per group. For other n, k, and percentage points see Conover (1968).

Test

1. Compute

T = [12 / N(N + 1)] [Σ(i = 1 to k) (ri²/ni)] − 3(N + 1)

2. Go to Table 17.3.7; find the entry under k − 1. If T exceeds the entry, then the data support the conclusion that the groups are different. Otherwise, they do not.

Table 17.3.7   Chi-Square Distribution

   k      w         k      w
   1     3.841     16     26.30
   2     5.991     17     27.59
   3     7.815     18     28.87
   4     9.488     19     30.14
   5    11.07      20     31.41
   6    12.59      22     33.92
   7    14.07      24     36.42
   8    15.51      26     38.89
   9    16.92      28     41.34
  10    18.31      30     43.77
  11    19.68      40     55.76
  12    21.03      50     67.50
  13    22.36      60     79.08
  14    23.68      70     90.53
  15    25.00      80    101.9

NOTE: Entries are w such that P(W ≥ w) = p = 0.05. For other values of k and p, see Pearson and Hartley (1962).

EXAMPLE. We again use the data shown in Table 17.3.5. In Table 17.3.8 the numerical values representing times to failure have been replaced by their ranks. To facilitate such ranking it is convenient to order the values in each group from smallest to largest. Then all values are ranked from smallest to largest. In Table 17.3.8 the values have been reordered this way. The ranks are in parentheses.
1. The number of values in each group is 10. Hence ni = 10 for each value of i.
2. The total number of values is N = 40.
3 and 4. The sum of the ranks ri is shown for each group. [Note that Σri = 820 = (40)(41)/2.]

T = [12 / N(N + 1)][Σ(ri²/10)] − 3(N + 1) = [12/(40)(41)][170,182/10] − 3(41) = 124.523 − 123 = 1.523

Now go to Table 17.3.7 and obtain the entry under k = 4 − 1 = 3. The entry is 7.815, which is larger than 1.523. Hence, the data do not support a conclusion that the time to failure for the cutting tools for the four suppliers is measurably different.
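The rank test is also easy to script. The Python sketch below is our own helper (ties are given the average of the tied ranks); it reproduces T = 1.523 for the Table 17.3.5 data.

def rank_test_T(groups):
    # groups: one list of values per method; returns the statistic T defined above
    pooled = sorted(v for g in groups for v in g)
    n_total = len(pooled)
    # average rank for each distinct value (handles ties)
    positions = {}
    for rank, v in enumerate(pooled, start=1):
        positions.setdefault(v, []).append(rank)
    avg_rank = {v: sum(r) / len(r) for v, r in positions.items()}
    total = sum(sum(avg_rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n_total * (n_total + 1)) * total - 3 * (n_total + 1)

supplier_data = [
    [45.37, 21.68, 43.91, 47.76, 23.81, 19.90, 44.68, 11.81, 35.42, 39.85],
    [30.05, 36.04, 18.04, 32.91, 41.67, 37.40, 46.67, 27.93, 45.20, 29.54],
    [41.30, 31.09, 24.31, 15.64, 54.85, 32.96, 45.48, 45.14, 45.49, 52.82],
    [46.21, 36.01, 46.28, 21.80, 28.57, 48.45, 33.49, 53.07, 35.65, 14.95],
]
print(round(rank_test_T(supplier_data), 3))   # 1.523, below the 7.815 entry for k - 1 = 3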

Table 17.3.8   Supplier*

       1             2             3             4
 11.81  (1)    18.04  (4)    15.64  (3)    14.95  (2)
 19.90  (5)    27.93 (10)    24.31  (9)    21.80  (7)
 21.68  (6)    29.54 (12)    31.09 (14)    28.57 (11)
 23.81  (8)    30.05 (13)    32.96 (16)    33.49 (17)
 35.42 (18)    32.91 (15)    41.30 (24)    35.65 (19)
 39.85 (23)    36.04 (21)    45.14 (28)    36.01 (20)
 43.91 (26)    37.40 (22)    45.48 (31)    46.21 (33)
 44.68 (27)    41.67 (25)    45.49 (32)    46.28 (34)
 45.37 (30)    45.20 (29)    52.82 (38)    48.45 (37)
 47.76 (36)    46.67 (35)    54.85 (40)    53.07 (39)
ri     180           186           235           219
ri²  32400         34596         55225         47961

* These are the data of Table 17.3.5 with the values for each supplier listed from smallest to largest. The values in parentheses are the ranks of the time to failure values from smallest to largest.

GO/NO-GO DATA

Quite often the data that we encounter will be attribute or go/no-go data; that is, we will not have quantitative measurements but only a characterization as to whether an item does or does not have some attribute. For example, if a manufactured part has a specification that it should not be shorter than 2 in, we might construct a template; and if a part is to meet the specification, it should not fit into the template. After inspecting a series of units with the template our data would consist of a tabulation of “gos” and “no-gos”—those that did not meet the specification and those that did. If the items that are checked for an attribute are obtained by random sampling, the resulting data will follow what is known as the binomial distribution. Its standard form is as follows:

p is the long-term fraction of failures
q = 1 − p is the long-term fraction of successes
n is the size of the random sample

Then the probability that our sample gives x failures and n − x successes is

f(x; n, p) = C(n, x) p^x q^(n − x)        x = 0, 1, 2, . . . , n

where C(n, x) = n!/[x!(n − x)!] is the binomial coefficient.


From f (x; n, p), for a given n and p, we can calculate the probability of x failures in a sample size n. Similarly, by summing terms for different values of x, we can calculate the probability of having more than w failures or fewer than r failures, etc. Here we are not going to try to be so precise; rather we are going to try to show the general picture of the relationship between x, p, and n by the use of examples and the graph in Fig. 17.3.2.
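For a quick numerical check of such binomial calculations, the Python standard library is enough. The sketch below is illustrative only; it evaluates f(x; n, p) and a cumulative tail.

from math import comb

def binom_pmf(x, n, p):
    # f(x; n, p) = C(n, x) p^x (1 - p)^(n - x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

def prob_at_most(r, n, p):
    # probability of r or fewer failures in a sample of n
    return sum(binom_pmf(x, n, p) for x in range(r + 1))

print(binom_pmf(10, 100, 0.10))      # probability of exactly 10 failures when p = 0.10
print(prob_at_most(10, 100, 0.10))   # probability of 10 or fewer failures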


Fig. 17.3.2 Binomial distribution, 95 percent confidence bands. (Reproduced with the permission of the Biometrika Trust from C. J. Clopper and E. S. Pearson, “The Use of Confidence or Fiducial Limits Illustrated in the Case of the Binomial,” Biometrika, 26 (1934), p. 410.)

Estimating the Failure Rate

In a manufacturing process a general index of quality is the fraction of items which fail to pass a certain test. Suppose that we take a random sample of size n = 100 from the process and observe x0 = 10 failures. Clearly we have met the conditions for a binomial distribution, and an estimate of p, the long-term failure rate, is p̂ = x0/n = 10/100 = 0.1. However, we would also like to know the accuracy of the estimate. In other words, if we operate the process for a long time under these conditions and obtain a large sample, what might be the value of p? One simple way to assess the estimate p̂ is to find values p1 and p2 (p1 < p2) such that the following probabilities are satisfied for a fixed value of α:

Pr [x ≥ x0 | p1] = α/2        Pr [x ≤ x0 | p2] = α/2

These values are the solutions for p of the two equations:

Σ(x = x0 to n) C(n, x) p1^x (1 − p1)^(n − x) = α/2

Σ(x = 0 to x0) C(n, x) p2^x (1 − p2)^(n − x) = α/2

General solutions for these equations for α = 0.05 are shown in Fig. 17.3.2. If we go to Fig. 17.3.2 with x/n = 0.1 and read where the lines for n = 100 intersect the ordinate or p scale, we see that p1 = 0.07 and p2 = 0.18. We can then state that we have reasonable confidence (the probability is 0.95) that the long-term failure rate for the process is between 0.07 and 0.18.

Estimating the Sample Size

It should be evident that Fig. 17.3.2 can also be used to determine how large a sample is needed to estimate a proportion or a percentage with a specific accuracy or tolerance. Suppose that the proportion of interest is assumed to be around p = 0.20. Now enter Fig. 17.3.2 with x/n = 0.20. From the figure we see that if we take a sample of size 50, our estimate will have a range of about ±0.10 (actually −0.10, +0.13). On the other hand, if the sample size is 250, the estimate will have a range of about ±0.06. Often one wants to compare the performance of two processes. As above, suppose that the rate p for our process is 0.20. We have a modification that we want to test; however, to be economical the modification has to bring the rate p down to 0.15 or less. If the modification is going to be assessed on the p’s for the standard and the modification, then we do not want the uncertainty in their estimates to overlap, and the uncertainty should be less than half of 0.05, where 0.05 = (0.20 − 0.15). Figure 17.3.2 shows that we would have to use a sample size of at least 1000. This demonstrates that attribute sampling is effective only when the items and their characterization are not expensive. Otherwise, it is best to go to measurements, where smaller sample sizes can be used to assess differences. A more detailed exposition of the binomial distribution and its uses can be found in Brownlee (1970).

Statistical Software Packages

There are numerous statistical software packages available that may be used to determine sample sizes, design experiments, fit curves to data, test for goodness of fit, and perform basic statistical calculations. The packages include both standardized and customized menus, features for importing and exporting data to and from external files, and well-presented analysis summaries and reports. For example, a widely used statistical package called JMP was developed and is periodically upgraded by the SAS Institute, Inc.
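Instead of reading Fig. 17.3.2 or relying on a packaged routine, the two defining equations for p1 and p2 can be solved numerically. The bisection sketch below is our own illustration and assumes the 95 percent (α = 0.05) case used in the text.

from math import comb

def binom_cdf(x0, n, p):
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(x0 + 1))

def exact_limits(x0, n, alpha=0.05, tol=1e-6):
    # Solve f(p) = 0 by bisection for a function that increases with p.
    def solve(f):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    # lower limit p1: Pr[x >= x0 | p1] = alpha/2 (taken as 0 when x0 = 0)
    p1 = 0.0 if x0 == 0 else solve(lambda p: (1.0 - binom_cdf(x0 - 1, n, p)) - alpha / 2)
    # upper limit p2: Pr[x <= x0 | p2] = alpha/2 (taken as 1 when x0 = n)
    p2 = 1.0 if x0 == n else solve(lambda p: alpha / 2 - binom_cdf(x0, n, p))
    return p1, p2

print(exact_limits(10, 100))   # interval for the example of 10 failures in 100 units

Values read from the chart are necessarily approximate, so small differences between the chart readings and the computed interval are to be expected.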

CONTROL CHARTS

When an industrial process is under control it is in a state of “statistical equilibrium.” By equilibrium we mean that we can characterize its output by a fixed average m and a fixed standard deviation s. The variation in output is what one would expect to see from that m and s, as bounded by the values given in Tchebysheff’s theorem, let us say. However, if control is lost, we tend to get a greater spread in values. In effect, the average m or the standard deviation s is changing because of some cause. The causes of lack of control are manifold: a change in raw materials, tool wear, instrumentation failure, operator error, etc. The important thing is that one wants to be able to detect when this lack of control occurs and take the appropriate steps to make corrections. One of the most frequently used statistical tools for analyzing the state of an industrial process is the control chart. The two most commonly used charts are those for the average and the range. The control chart procedure consists of these steps:
1. Choose a characteristic X which will be used to describe the product coming from the process.
2. At time ti, take a small number of observations n on the process. The number n should be small enough so that it is reasonable to assume that conditions will not change during the course of obtaining the observations.
3. For each set of n observations, compute the average x̄i and the range ri as defined in the section “Characterizing Observational Data.”
There are two different control situations of interest.
4a. Control standards given. Suppose that from past operation of the process or from the need to meet certain specifications, a goal average m* and a goal standard deviation s* are specified. Then we set up two charts as follows:

Average chart:   Upper limit line:  m* + As*
                 Central line:      m*
                 Lower limit line:  m* − As*
Range chart:     Upper limit line:  D2s*
                 Central line:      ds*
                 Lower limit line:  D1s*
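A small calculation sketch (ours, not the handbook's) shows how the limit lines above are evaluated once m*, s*, and the Table 17.3.9 factors for the chosen subgroup size n are supplied.

def chart_limits_given_standards(m_star, s_star, A, d, D1, D2):
    average_chart = {"upper": m_star + A * s_star,
                     "central": m_star,
                     "lower": m_star - A * s_star}
    range_chart = {"upper": D2 * s_star,
                   "central": d * s_star,
                   "lower": D1 * s_star}
    return average_chart, range_chart

# subgroups of n = 5: A = 1.34, d = 2.326, D1 = 0, D2 = 4.92 (Table 17.3.9)
# the goal values m* = 10.0 and s* = 0.2 are assumed here purely for illustration
print(chart_limits_given_standards(m_star=10.0, s_star=0.2,
                                   A=1.34, d=2.326, D1=0.0, D2=4.92))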


Table 17.3.9   Factors for Control-Chart Limits

                  For averages              For ranges
Sample size n      A      A2       d      D1     D2     D3     D4
      2           2.12   1.88    1.128    0      3.69   0      3.27
      3           1.73   1.02    1.693    0      4.36   0      2.57
      4           1.50   0.73    2.059    0      4.70   0      2.28
      5           1.34   0.58    2.326    0      4.92   0      2.11
      6           1.22   0.48    2.534    0      5.08   0      2.00
      7           1.13   0.42    2.704    0.21   5.20   0.08   1.92
      8           1.06   0.37    2.847    0.39   5.31   0.14   1.86
      9           1.00   0.34    2.970    0.55   5.39   0.18   1.82
     10           0.95   0.31    3.078    0.69   5.47   0.22   1.78

The values of A, d, D1, and D2 depend upon n and can be found in Table 17.3.9. Plot the values of xi and ri obtained in (3) on the two charts as shown in Fig. 17.3.3. Whenever a value falls outside the limit lines, there is an indication of lack of control, and one is justified in seeking the causes for a change. 4b. Control, no standards given. Often one has no prior information about the process m and s, and one wants to determine if the process behaves as though it is in statistical equilibrium, and if not, take actions to get it there. In this case one has to determine the central lines for the charts from process data. To do this one first accumulates

the data for 25 to 50 time periods as indicated in (2). Then two charts are set up as follows: Let K be the 25 to 50 time periods observed. Compute an overall average X̄ = Σ(i = 1 to K) x̄i /K and the average range R̄ = Σ(i = 1 to K) ri /K. Set up charts with limits defined from:

Average chart:   Upper limit line:  X̄ + A2R̄
                 Central line:      X̄
                 Lower limit line:  X̄ − A2R̄
Range chart:     Upper limit line:  D4R̄
                 Central line:      R̄
                 Lower limit line:  D3R̄

The values of A2, D3, and D4 depend upon n and can be found in Table 17.3.9. The individual x̄i and ri are plotted on the charts, and again a value outside the limits is an indication of lack of control and is justification for seeking the cause for a change.
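The same bookkeeping for the no-standards case can be sketched as follows. This is an illustration only; in practice the subgroup averages and ranges would come from the 25 to 50 accumulated time periods, not the four shown here.

def chart_limits_from_data(subgroup_averages, subgroup_ranges, A2, D3, D4):
    K = len(subgroup_averages)
    x_bar = sum(subgroup_averages) / K      # overall average of the subgroup averages
    r_bar = sum(subgroup_ranges) / K        # average range
    average_chart = (x_bar - A2 * r_bar, x_bar, x_bar + A2 * r_bar)
    range_chart = (D3 * r_bar, r_bar, D4 * r_bar)
    return average_chart, range_chart

# n = 4 observations per subgroup: A2 = 0.73, D3 = 0, D4 = 2.28 (Table 17.3.9)
limits = chart_limits_from_data([10.1, 9.9, 10.0, 10.2], [0.5, 0.4, 0.6, 0.5],
                                A2=0.73, D3=0.0, D4=2.28)
print(limits)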

Fig. 17.3.3 Control chart. (a) Average chart; (b) range chart.

Process Capability Indices

If the process is in statistical control, an estimate for the process standard deviation can be obtained by using

σ̂ = R̄/d

In turn, σ̂ is used to calculate the process capability indices Cp and Cpk. These two indices compare the actual spread of the data with the desired range (usually as specified by a customer). Cp is used when the actual process average is equal to the goal average. Cpk is used when the actual process average is not equal to the goal average. The desired range is called the specification tolerance (ST) and is equal to the upper specification limit minus the lower specification limit, namely, 2As*. Cp is given by the following equation:

Cp = ST/6σ̂ = 2As*/6σ̂

If Cp ≥ 1, the process is considered to be capable, which means that most or all of the data stayed within the desired range. If Cp < 1, the process is considered to be not capable and requires adjustment. Ideally, one should control the process variability so that Cp ≥ 2. The other index, Cpk, is given by

Cpk = Cp(1 − k)

where

k = 2(m* − X̄)/ST = (m* − X̄)/As*

and

X̄ = Σ x̄i /K

The (1 − k) factor modifies Cp so as to allow the actual average X̄ to be different from the goal average m*. Ideally, one should control the process so that Cpk ≥ 1.5. In process capability analysis, the indices should be calculated on a frequent basis, but the trends should be examined only monthly or quarterly in order to be meaningful.
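A short sketch of the index calculations follows. Note that we take the magnitude of (m* − X̄) when forming k, which is a common convention but an assumption on our part rather than something stated above; the numerical inputs are invented for illustration.

def capability_indices(spec_tolerance, r_bar, d, m_star, x_bar):
    sigma_hat = r_bar / d                    # estimated process standard deviation
    cp = spec_tolerance / (6.0 * sigma_hat)
    k = 2.0 * abs(m_star - x_bar) / spec_tolerance   # assumed: magnitude of the offset
    cpk = cp * (1.0 - k)
    return cp, cpk

# illustrative numbers only
print(capability_indices(spec_tolerance=1.2, r_bar=0.46, d=2.326,
                         m_star=10.0, x_bar=10.05))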



Charts for Go/No-Go Data

The control chart concept can also be used for attribute or go/no-go data. The procedures are, in general, the same as outlined for averages and ranges. Briefly, they are as follows:
1. Select a sample of size n from the process; for best results n should be in the range of 50 to 100.
2. Let xi denote the number of defective units in the sample of size n at time ti; then p̂i = xi /n is an estimate of the process fraction defective.
3. Set control limits for a standard fraction defective p* at

p* ± 3[p*(1 − p*)/n]^1/2

If no standard is given, then take K = 25 to 50 samples of size n to get a good estimate of the fraction defective p. Define p̄ = Σ(i = 1 to K) p̂i /K. In this case set control limits at

p̄ ± 3[p̄(1 − p̄)/n]^1/2

4. Interpret a p̂i outside the limits as an indication of a change worthy of investigation. Further information on control charts can be found in Duncan (1986).
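The p-chart limits above are a one-line calculation. The sketch below is illustrative; clamping a negative lower limit to zero is our own practical addition.

def p_chart_limits(p, n):
    # limits p ± 3[p(1 - p)/n]^1/2 for a fraction defective p and sample size n
    half_width = 3.0 * (p * (1.0 - p) / n) ** 0.5
    return max(0.0, p - half_width), p, p + half_width

# standard fraction defective p* = 0.04 (assumed for illustration) with samples of n = 100
print(p_chart_limits(0.04, 100))
# with no standard given, use the average fraction defective p-bar from the 25 to 50 samples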

17.4 METHODS ENGINEERING
by Vincent M. Altamuro

REFERENCES: ASME Standard Industrial Engineering Terminology. Barnes, “Motion and Time Study,” Wiley. Krick, “Methods Engineering,” Wiley. Maynard, Stegemerten, and Schwab, “Methods-Time-Measurement,” McGraw-Hill.

SCOPE OF METHODS ENGINEERING

Methods engineering is concerned with the selection, development, and documentation of the methods by which work is to be done. It includes the analysis of input and output conditions, assisting in the choice of the processes to be used, operations and work flow analyses, workplace design, assisting in tool and equipment selection and specifications, ergonomic and human factors considerations, workplace layout, motion analysis and standardization, and the establishment of work time standards. A primary concern of methods engineering is the integration of humans and equipment in the work processes and facilities.

PROCESS ANALYSIS

Process analysis is that step in the conversion of raw materials to a finished product at which decisions are made regarding what methods,

machines, tools, inspections and routings are best. In many cases, the product’s specifications can be altered slightly, without diminishing its function or quality level, so as to allow processing by a preferred method. For this reason, it is desirable to have the product’s designer and the process engineer work together before specifications are finalized.

Fig. 17.4.1 Workplace layout chart.

WORKPLACE DESIGN

Material usually flows through a facility, stopping briefly at stations where additional work is done on it to bring it closer to a finished product. These workstations, or workplaces, must be designed to permit performance of the required operations, to contain all the tooling and equipment needed to fit the capabilities and limitations of the people working at them, to be safe and to interface smoothly with neighboring workplaces. Human engineering and ergonomic factors must be considered so that all work, tools, and machine activation devices are not only within the comfortable reach of the operator but are designed for safe and efficient operation. A workplace chart (Fig. 17.4.1) which analyzes the required actions of both hands is an aid in workplace design.



METHODS DESIGN

Methods design is the analysis of the various ways a task can be done so as to establish the one best way. It includes motion analysis—the study of the actions the operator can use and the advantages and/or disadvantages of each variation—and standardization of procedure—the selection and recording of the selected and authorized work methods. While “time and motion study” is the more commonly used term, it is more correct to use “motion and time study,” as the motion study to establish the standard procedure must be done prior to the establishment of a standard time to perform that work. According to ASME Standard Industrial Engineering Terminology, motion study is defined as . . . the analysis of the manual and the eye movements occurring in an operation or work cycle for the purpose of eliminating wasted movements and establishing a better sequence and coordination of movements.

In the same publication, time study is defined as . . . the procedure by which the actual elapsed time for performing an operation or subdivisions or elements thereof is determined by the use of a suitable timing device and recorded. The procedure usually but not always includes the adjustment of the actual time as the result of performance rating to derive the time which should be required to perform the task by a workman working at a standard pace and following a standard method under standard conditions.

Attempts have been made to separate the two functions and to assign each to a specialist. Although motion study deals with method and time study deals with time, the two are nearly inseparable in practical application work. The method determines the time required, and the time determines which of two or more methods is the best. It has, therefore, been found best to have both functions handled by the same individual.

ELEMENTS OF MOTION AND TIME STUDY

Figure 17.4.2 presents graphically the steps which should be taken to make a good motion and time study and shows their relation to each other and the order in which they must be performed.

METHOD DEVELOPMENT

The first steps are concerned with the development of the method. Starting with the drawing of the product, the operations which must be performed are determined and tools and equipment are specified. In large companies, this is usually done by a specialist called a process engineer. In smaller companies, processing is commonly done by the time study specialist. Next, the detailed method by which each operation should be performed is developed. The procedures used for this are known as operation analysis and motion study.

OPERATION ANALYSIS

Operation analysis is the procedure employed to study all major factors which affect a given operation. It is used for the purpose of uncovering possibilities of improving the method. The study is made by reviewing the operation with an open mind and asking either of oneself or others questions which are likely to lead to methods-improving ideas. If this is done systematically, so that the possibility of overlooking factors which should be considered is minimized, worthwhile improvements are almost certain to result. The 10 major factors explored during operation analysis, together with typical questions which should be asked about each factor, are as follows:
1. Purpose of operation
   a. Is the result accomplished by the operation necessary?
   b. Can the purpose of the operation be accomplished better in any other way?
2. Design of part
   a. Can motions be eliminated by design changes which will not affect the functioning and other desirable characteristics of the product?
   b. Is the design satisfactory for automated assembly?
3. Complete survey of all operations performed on part
   a. Can the operation being analyzed be eliminated by changing the procedure or the sequence of operations?
   b. Can it be combined with another operation?
4. Inspection requirements
   a. Are tolerance, allowance, finish, and other requirements necessary?
   b. Will changing the requirements of a previous operation make this operation easier to perform?
5. Material
   a. Is the material furnished in a suitable condition for use?
   b. Is material utilized to best advantage during processing?
6. Material handling
   a. Where should incoming and outgoing material be located with respect to the work station?
   b. Can a progressive assembly line be set up?
7. Workplace layout, setup, and tool equipment
   a. Does the workplace layout conform to the principles of motion economy?
   b. Can the work be held in the machine by other means to better advantage?
8. Common possibilities for job improvement
   a. Can “drop delivery” be used?
   b. Can foot-operated mechanisms be used to free the hand for other work?
9. Working conditions
   a. Has safety received due consideration?
   b. Are new workers properly introduced to their surroundings, and are sufficient instructions given them?
10. Method
   a. Is the repetitiveness of the job sufficient to justify more detailed motion study?
   b. Should full automation be considered?

Fig. 17.4.2 Graphic analysis of the elements of motion and time study.

When the method has been developed, conditions are standardized, and the operators are trained to follow the approved method. At this time, not before, the job is ready for time study. Suitable operators are selected, the purposes of the study are carefully explained to them, and the time study observations are made. During the study, time study specialists rate the performance being given by operators either by judging the skill and effort they are exhibiting or by assessing the speed with which motions are made as compared with what they consider to be a normal working pace. The final step is to compute the standard.

PRINCIPLES OF MOTION STUDY

Operation analysis is a primary analysis which eliminates inefficiencies. Motion study is a secondary analysis which refines the method still further. Motion study may and often does suggest further improvements in the factors considered during operation analyses, such as tools, material handling, design, and workplace layouts. In addition, it studies the human factors as well as the mechanical and sets up operations in conformance with the limitations, both physical and psychological, of those who must perform them. The technique of motion study rests on the concept originally advanced by Frank B. and Lillian M. Gilbreth that all work is performed by using a relatively few basic operations in varying combinations and sequences. These Gilbreth Basic Elements have also been called “therbligs” and “basic divisions of accomplishment.” The basic elements together with their symbols (for definitions see ASME Industrial Engineering Terminology), grouped in accordance with their effect on accomplishment, are as follows:

Group 1. Accomplishes
   Reach                        R
   Move                         M
   Grasp                        G
   Position                     P
   Disengage                    D
   Release                      RL
   Examine                      E
   Do                           DO

Group 2. Retards accomplishment
   Change direction             CD
   Preposition                  PP
   Search                       S
   Select                       SE
   Plan                         PL
   Balancing delay              BD

Group 3. Does not accomplish
   Hold                         H
   Avoidable delay              AD
   Unavoidable delay            UD
   Rest to overcome fatigue     F

Group 1 is the useful group of basic elements or the ones that accomplish work. They do not necessarily accomplish it in the most effective way, however, and a study of these elements will often uncover possibilities for improvement. Group 2 contains the basic elements that tend to retard accomplishment when present. In most cases, they do this by slowing down the group 1 basic elements. They should be eliminated wherever possible. Group 3 is the nonaccomplishment group. The greatest improvements in method usually come from the elimination of the group 3 basic elements from the cycle. This is done by rearranging the motion sequence, by providing mechanical holding fixtures, and by improving the workplace layout. An operation may be analyzed into its basic elements either by observation or by making a micromotion study of a motion picture of the operation. Methods improvement may be made on any operation by eliminating insofar as possible the group 2 and group 3 basic elements and by arranging the workplace so that the group 1 basic elements are performed in the shortest reasonable time. In doing this, certain laws of motion economy are followed. The following, derived from the laws originally stated by the Gilbreths, are the most important. 1. When both hands begin and complete their motions simultaneously and are not idle during rest periods, maximum performance is approached. 2. When motions of the arms are made simultaneously in opposite directions over symmetrical paths, rhythm and automaticity develop most naturally. 3. The motion sequence which employs the fewest basic elements is the best for performing a given task. 4. When motions are confined to the lowest practical classification, maximum performance and minimum fatigue are approached. Motion classifications are: Class 1, finger motions; Class 2, finger and wrist motions; Class 3, finger, wrist, and forearm motions; Class 4, finger, wrist, forearm, and upper-arm motions; Class 5, finger, wrist, forearm, upper-arm, and body motions. STANDARDIZING THE JOB

When an acceptable method has been devised, equipment, materials, and conditions must be standardized so that the method can always be followed. Information and records describing the standard method must be carefully made and preserved, for experience has shown that, unless this is done, minor variations creep in which may in time cause a major problem. In the case of repetitive work, a job is not standardized until each piece is delivered to operators in the same condition, and it is possible for them to perform their work on each piece by completing a set cycle of motions, doing a definite amount of work with the same equipment under uniform working conditions. The operator or operators must then be taught to follow the approved method. Operator training is always important if reasonable production is to be obtained, but it is an absolute necessity where methods have been devised by motion study. It is quite apparent that the operators cannot be



expected to discover for themselves the method which the time-study specialist developed as the result of hours of concentrated study. They must, therefore, be carefully trained if they are to be expected to reach standard production. In addition, an accurate time study cannot be made until the operator is following the approved method with reasonable proficiency.

WORK MEASUREMENT

Work measurement is the calculation of the amount of time it should take to do a standardized job. It utilizes the concept of a standard time. The standard time to perform a task is the agreed-upon and reproducible calculated time that a hypothetical typical person working at a normal rate of speed should take to do the job using the specified method with the proper tools and materials. It is the normal time determined to be required to complete one prescribed cycle of an operation, including noncyclic tasks, allowances, and unavoidable delays. Time Study Methods There are several bases upon which time standards may be calculated. They include: 1. Application of past experience. The time required to do the operation in the past, either recorded or remembered, may be used as the present standard or as a basis for estimating a standard for a similar operation or the same operation being done under changed conditions. 2. Direct observation and measurement. The operation may be observed and its time recorded as it is actually performed and adjustments may be made to allow for the estimated pace rate of the operator and for special allowances. A stop watch or other recording instrument may be used or work sampling may be employed, which makes statistical inferences based upon random observations.

Fig. 17.4.3 Face of a time study form.

3. Synthetic techniques. A time standard for an actual or proposed operation may be constructed from the sum of the times to perform its several components. The times of the components are extracted from standard charts, tables, graphs, and formulas in manuals or in computer databases and totaled to arrive at the overall time for the entire operation. Standard data or predetermined motion times may be used. Methods-time measurement (MTM), basic-motion times (BMT), work factor (WF), and others are some of the systems available.

TIME STUDY OBSERVATIONS

The time study specialist can study any operator he or she wishes so long as that operator is using the accepted method. By applying what is known as the leveling procedure, the time study specialist should arrive at the same final time standard regardless of whether studying the fastest or the slowest worker. The manner in which the operator is approached at the beginning of the study is important. This is particularly true if the operator is not accustomed to being studied. Time study specialists should be courteous and unassuming and should show a recognition of and a respect for the problems of the operator. They should be frank in their dealings with the operator and should be willing at any time to explain what they are doing and how they do it. The first step in making time study observations is to subdivide the operation into a number of smaller operations which will be studied and timed separately. These subdivisions are known as elements, or elemental operations. Each element is exactly described in a few well-chosen words, which are recorded on the top of the time study form. Figure 17.4.3 shows how this is done. The beginning and ending points


of these elements must be clearly recognizable so that the chances of overlapping watch readings will be minimized. The timing is done with the aid of a stopwatch, or less frequently, with a special type of “time study machine.” There are several types of stopwatches as well as several methods of recording watch readings in common use. The study illustrated by Fig. 17.4.3 was made using a decimalhour stopwatch that reads directly in ten-thousandths of an hour. The readings were recorded using what is known as the continuous method of recording. In this method, the watch runs continuously from the beginning of the study to the end. Thus every moment of time is accounted for, something that may be important if the correctness of the study is ever questioned. The watch is read at the end of each elemental operation, and the reading is recorded in the “R” column under the proper element description. The elapsed time for each element is later secured by subtracting successive readings. This observation procedure gives results as accurate as any other and more accurate than some. Occasionally variations from the regular sequence of elemental operations occur. The time study specialist must be prepared to handle such situations when they happen. These variations may be divided into four general classes as follows: (1) elements performed out of order, (2) elements missed by the time study specialist, (3) elements omitted by the operator, (4) foreign elements. The time study illustrated by Fig. 17.4.3 contains examples of each of these kinds of irregularity. Elements 12 and 1 on lines 12 and 13 were performed out of order. On line 3, the time study specialist missed obtaining the watch readings for elements 9 and 10. On line 6, element 12 was omitted by the operator. Foreign elements A, B, C, and D occurred during regular elements 2, 5, 1, and 7, respectively. A study of these examples will show how the time study specialist handles

Fig. 17.4.4 Back of a time study form.


variations from the regular sequence of elements which occur during the making of a time study. A time study to be of value for future use must tell the whole story of a job in such a way that it will be understood by anyone familiar with the time study procedure. This will not be possible unless all identifying and other pertinent information is recorded at the time the study is made. Records should be made to show complete identification of the operator; the part or assembly; the machines, tools, and equipment used; the operation; the department in which the operation was performed; and the conditions existing at the time the study was made. Sketches are generally a desirable part of this description. Figure 17.4.4 shows the information which would be recorded on the reverse side of the time study form illustrated by Fig. 17.4.3. PERFORMANCE RATING

The objective of a time study is to determine the time which a worker giving average performance will require to do the job under average or normal conditions. It is important to understand that when the time study specialist speaks of average performance, he or she is not referring to the mathematical average of all human beings, or even the average of all persons engaged in a given occupation. Average performance is established by definition and not statistically. It represents the time study specialist’s conception of the normal, steady, but unhurried performance which may reasonably be expected from anyone qualified for the work. If sufficient inducement is offered by incentives or otherwise, this performance may be considerably surpassed. If all operators available for study worked at the average performance level, the task of establishing a standard would be easy. It would



Fig. 17.4.5 Leveling factors for performance rating.

be necessary merely to average the elapsed elemental times determined from time study and add an allowance for fatigue and personal and unavoidable delays. It is seldom, however, that a performance is observed which is rated throughout as average. Therefore, to establish a standard which represents the time which would be taken had an average performance been observed, it is necessary to use some method of adjusting the recorded elemental times when other than average performance is timed. One of the well-known methods of doing this is the leveling procedure. When properly applied it gives excellent results. It must be correctly understood, of course, and the time study specialist who uses it must be thoroughly trained to apply it correctly. The procedure recognizes that when the correct method is being followed, skill, effort, and working conditions will affect the level at which the operator works. These factors are judged during the making of the time study. Skill is defined as proficiency at following a given method. This is not subject to variation at will by the operator but develops with practice over a period of time. Effort is defined as the will to work. It is controllable by the operator within the limits imposed by skill. Conditions are those conditions which affect the operator and not those which affect the method. Definitions have been established for different degrees of skill, effort, and conditions. Numerical factors have been established by extensive research for each degree of skill, effort, and conditions. These are shown by Fig. 17.4.5. The algebraic sum of these numerical values added to 1.0 gives the leveling factor by which all actual elemental times are multiplied to bring them to the average or normal level. The leveling factor represents in effect the amount in percent which actual performance times are above and below the average performance level. ALLOWANCES FOR FATIGUE AND PERSONAL AND UNAVOIDABLE DELAYS

The leveled elemental time values are net elapsed times adjusted to the average performance level. They do not provide for delays and other legitimate allowances. Something, therefore, must be added to take care of such things as fatigue, and special conditions of the work. Fatigue allowances vary according to the nature of the work. Flat percentages are determined for each general class of work, such as bench work, machine-tool operation, hard physical labor, and so on. Personal allowances are the same for most classes of work. Unavoidable delay allowances vary with the nature of the work and the conditions under which it is performed. Peculiar conditions surrounding specific jobs sometimes require additional special allowances. It is apparent, therefore, that the proper allowance factor to use can only be determined by a study of the class of work to which it is to be applied. Allowances are determined either by a series of all-day time studies or by a statistical method known as work sampling, or both. When an allowance factor has once been established, it is then applied to all time studies made on that class of work thereafter. DEVELOPING THE TIME STANDARD

When time study observations have been completed, a series of calculations are made to develop the time standard. Elapsed times are determined by subtracting successive watch readings. Each subtraction is

recorded between the two watch readings that determine its value. Elapsed time is noted in ink to ensure a permanent record and to distinguish it from the watch readings which are usually recorded in pencil. A study of Fig. 17.4.3 will show how subtractions are entered on the time study form and later summarized. The several elapsed times for each element are next carefully compared and examined for abnormal values. If any are found, they are circled so that they can be distinguished and excluded from the summary. The remaining elapsed times for each element are added and are averaged by dividing by the number of elapsed time readings. The results are average elapsed times which represent the time taken by the operator during that particular study. These times must be adjusted by multiplying them by a leveling factor to bring them to the average performance level. This factor is determined by the rating of skill, effort, and conditions made during the period of observation. Each average elapsed time is multiplied by the leveling factor, except when the element is not controlled by the operator. An element that is outside the control of the operator, such as element 7 in Fig. 17.4.3 which is a cut with power feed, should not be leveled, because it is unaffected by the ability of the operator. As long as the proper feed and speed are used, the time for performing this element will be the same whether the worker is an expert or a learner. If workers were able to work continuously, the leveled time would be the correct value to allow for doing the operation studied, but constant application to the job is neither possible nor desirable. In the course of a day, there are certain to be occasional interruptions and delays, for which due allowance must be made in establishing the final standard. Therefore, each elemental time is increased by an allowance which covers time that will be consumed by personal and unavoidable delays, fatigue, and any special factors that may affect the job. The numbers and descriptions of the elemental operations together with their allowed time are transcribed on the back of the time study form as shown by Fig. 17.4.4. The number of times an elemental operation occurs on each piece or cycle of the operation is taken into account, and the total time allowed for each element is determined and recorded. The final standard for the operation is the sum of the amounts recorded in the “time-allowed” column. When all computations have been checked and all supporting records have been properly identified and filed, the task of developing the time standard is complete.
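The arithmetic just described can be summarized in a brief numeric sketch. The element times, rating values, and allowance below are invented for illustration (Fig. 17.4.5 supplies the actual skill, effort, and conditions values), and the function is our own, not part of any standard time study system.

def allowed_time(elapsed_times, leveling_factor, allowance,
                 operator_controlled=True, occurrences_per_piece=1):
    # average the retained elemental times, level them (unless the element is
    # machine-controlled), then add the allowance and account for occurrences
    average = sum(elapsed_times) / len(elapsed_times)
    leveled = average * leveling_factor if operator_controlled else average
    return leveled * (1.0 + allowance) * occurrences_per_piece

# leveling factor = 1.0 + algebraic sum of the skill, effort, and conditions values
factor = 1.0 + 0.06 + 0.02 - 0.01          # assumed ratings for illustration
print(allowed_time([32, 30, 31, 33], factor, allowance=0.15))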

TIME FORMULAS AND STANDARD DATA

On repetitive work, time study is a satisfactory tool of work measurement. A single time study may be sufficient to establish a standard which will cover the work of one or more operators for a long period of time. As quantities become smaller, however, the cost of establishing standards by individual time study increases until at length it becomes prohibitive. In the extreme case, where products are manufactured in quantities of one, it would require at least one time study specialist for each operator if standards were established by detailed time study, and the standards would not be available until after the jobs had been completed. In order to simplify the task of setting standards on a given class of work and in order to improve the consistency of the standards, standard data are frequently used by time study specialists. A compilation of standard data


in its simplest form is merely a list of all the different elements that have occurred during all the time studies made on a given class of work, with representative time values for each element. Every element that differs even slightly from any other element has its own time value. When a job comes into the shop on which no standard has previously been established, time study specialists analyze the job either mentally or by direct observation and determine the elements required to perform it. They then select time values from the standard data for each element. Their sum gives the standard for the job. This method, although a decided improvement from a time, cost, and consistency standpoint over individual time study, is capable of further refinement and improvement. On a given class of work, certain elements will be performed—for example, “pick up part”—on every piece produced, while others—such as “secure in steady rest”—will be performed only when a piece has certain characteristics. In some cases, the performance of a certain element will always require the performance of another element, e.g., “start machine” will always require the subsequent performance of the element “stop machine.” Then again, the time for performing certain elements—for example, “engage feed”—will be the same regardless of the characteristics of the part being worked upon, while the time for performing certain other elements—like “lay part aside”—will be affected by the size and shape of the part. Thus it is possible to make certain combinations and groupings which will simplify the task of applying standard data. Time study specialists construct various charts and tables which they still call standard data, or, in the ultimate refinement, develop time formulas. A time formula is a convenient arrangement of standard data which simplifies their accurate application. Much of the analysis which is necessary when applying standard data is done once and for all at the time the formula is derived. The job characteristics which make the performance of certain elements or groups of elements necessary are determined, and the formula is expressed in terms of these characteristics. Figure 17.4.3 illustrates a detailed time study made to establish a standard on a simple milling machine operation. The same standard can be derived much more quickly from the following time formula:

Curve A + Table T = each piece time

Curve A combines the times for the variable elements “pick up part from table,” “place in vise,” and “lay aside part in totepan” with the times for the constant elements “tighten vise,” “start machine,” “run table forward 4 in,” “engage feed,” “stop machine,” “release vise,” and


“brush vise.” Table T combines the times for “mill slot” and “return table.” The standard time for milling a slot in a brass clamp of any size is computed by determining the variable characteristics of the job from the drawing—in this case, the volume of the clamp and the perimeter of the cut—and adding together the time read from curve A and the time read from Table T. The amount of time which the use of time formulas will save the time study specialist is readily apparent. It takes a certain amount of time and no little know-how to develop a time formula, but once it is available, the job of establishing accurate standards becomes a simple, fairly routine task. The time required to make and work up a time study will be from 1 to 100 or more hours, depending upon the length of the operation cycle studied. The time required to establish a standard from a time formula will, in the majority of cases, range from 1 to 15 min, depending upon the complexity of the formula and the amount of time required to determine the characteristics of the job. Where all the necessary information may be obtained from the drawing of the part, the standard may generally be computed in less than 5 min. USES OF TIME STANDARDS

Some of the more common uses of time standards are in connection with:
1. Wage incentive plans
2. Plant layout
3. Plant capacity studies
4. Production planning and control
5. Standard costs
6. Budgetary control
7. Cost reduction activities
8. Product design
9. Tool design
10. Top-management controls
11. Equipment selection
12. Bidding for new business
13. Machine loading
14. Effective labor utilization
15. Material-handling studies
Time standards can be established not only for direct labor operations but also for indirect work, such as maintenance and repair, inspection, office and clerical operations, engineering, and management. They can also be set for machinery and equipment, including robots.

17.5 COST OF ELECTRIC POWER
by Andrew M. Donaldson and Robert F. Gambon

REFERENCES: Department of Energy (DOE), “Electric Plant Cost and Power Production Expenses,” 1991. Electric Power Research Institute, “Technical Assessment Guide,” 1993. Oak Ridge National Laboratory, “CONCEPT-V, A Computer Code for Conceptual Cost Estimates of Steam Electric Power Plants.” “Handy-Whitman Index of Public Utility Construction Costs,” Whitman, Requardt and Associates. Grant and Ireson, “Principles of Engineering Economy,” Ronald Press.

In simplest terms, the cost of electric power is the cost of the initial energy source plus the cost of converting that energy into electricity (generation), plus the cost of delivering that electricity to the consumer (transmission and distribution). That initial energy source is the potential and kinetic energy of hydroelectric power; the chemical energy of fossil, biomass, and waste power; or the atomic energy of nuclear power. In addition, there are other energy sources for the conversion to electricity. These other sources include energy derived from solar radiation, wind, and ocean waves, but these sources are limited, inefficient, and currently not cost-effective without subsidy. These alternative generation approaches are not addressed herein.

The cost of the initial energy and its conversion to electricity constitutes generation; it is the major factor in electric power cost and the primary concern of this section. The largest contributor to the electricity cost of fossil fuel plants is delivered fuel, followed by plant capital cost amortization and operations personnel and maintenance expenses. The higher the fuel cost and the lower the plant efficiency, the greater the effect of fuel cost on the final electric price. For natural gas, which is generally the most expensive fossil fuel, a combination of lower plant capital cost and/or higher plant efficiency is necessary for cost competitiveness. Electricity production with natural gas in a 33 percent efficient Rankine cycle steam boiler and turbine makes little sense today because the same fuel can be used in a large-frame combustion turbine combined-cycle plant that approaches 60 percent efficiency. The traditionally lower delivered cost of coal allows it to be used economically in the Rankine cycle. However, for many recent years, low gas prices and restrictive environmental regulations combined with the high efficiency and lower construction cost of combustion turbine combined-cycle



plants have almost stopped coal plant construction. Only larger electricity producers conscious of the need for fuel diversity in their generation base have even considered coal in the last 5 to 10 years. Environmental regulations have severely restricted coal use, but improving emission control equipment and increasing natural gas prices are likely to result in a resurgence of coal plants in the early twenty-first century. For hydroelectric, geothermal, nuclear, and waste fuel plants, the highest cost contributor to electricity price is paying for the capital investment that is the power plant, followed by operations and maintenance. Of course, economics have been overshadowed by public opinion and politics when it comes to nuclear and large hydroelectric plants, which have not been constructed in the United States in 20 years. Generation costs consist of production expenses (fuel plus operating and maintenance expenses) and fixed charges on investment (cost of investment capital, depreciation or amortization, taxes, and insurance). Interconnected power supply systems must price power to include transmission costs, distribution costs, and commercial expenses. Power price savings are realized by scheduling the installation and operation of a range of plant types to optimize overall generation costs. Generally, efficient plants burning lowest-cost fuels or operating on river water flows that are available the year round are assigned to continuous baseload operation at or near full capacity. Plants bearing low unit-kilowatt investment costs and lower efficiencies typically are installed to provide peaking capacity. Operation of these units during short daily periods of peak load limits their energy output and thus minimizes characteristic cost penalties associated with their poorer station efficiency or their requirement for higher-priced fuels. These higher unit cost per kilowatt but higher energy density peaking plants supply peak energy needs but are idle most of the time. This approach is preferable to building high capital cost but more efficient central stations that would sit idle most of the time until load growth catches up to capacity. Power generating stations also participate in regional transmission grid systems supplementing capacity and reserve needs of neighboring systems, and by the daily and seasonal exchange of off-peak, low-incremental-cost energy. Historically, there have been significant regional differences in the price of power. These differences have reflected such factors as availability of hydroelectric energy, the cost of fossil fuels, labor costs, ownnership type and investment composition, local tax structure, and the opportunities pooling and coordination provide to exploit the economies of scale. This exchange of capacity and reserves to serve neighboring grid and spread the benefits of economies of scale, combined with regional power cost differences, led to the advent of state-by-state deregulation of the power industry, and proliferation of “nonutility” generators. The early industrial areas of this country in the northern Midwest and New England areas were increasingly served by aging and decreasingly efficient power plants, often burning oil or requiring long-distance coal deliveries. These fuels have shown volatility on the world market and escalating delivery costs. 
The flight of heavy power-demanding industry from these areas allowed residential and light industrial growth to be picked up by the aging plants without new, more efficient capacity being added. The population density increased, but the same old plants were kept in operation, often past their original life expectancy. Since the early 1980s, regulators and well-intentioned environmentalists have opposed nearly every new power plant. Existing plants, which would be costly to replace, were under no replacement pressure, since this environment delayed the installation of new, efficient, cost-effective power plants. Regulators who refused utility requests for price increases, which would have allowed eventual new construction, further exacerbated the situation and preserved inefficiency. Regional retail power price differences, such as 12 cents per kilowatt-hour in areas of New York versus 6 cents per kilowatt-hour in Georgia, contributed to industry and job flight away from areas with high-priced electricity. As a result, business continues to look for lower electric rates, while politicians search for opportunities to keep jobs and voters in the area. Out of this dilemma, the idea of deregulation, in search of lower-cost electric supplies, was born. Opening some markets produced temporary downward pressure on prices even in regulated areas but also produced chaos

and threatened the reliability of the national electric system that powers the U.S. economy. The deregulation promises of lower prices were often temporary, as were the companies offering the lower prices. The California deregulation situation, with tremendous electricity price swings and the state-assisted bankruptcy of one of California's largest utilities, and the Enron scandal exposed in 2001, have severely reduced the number of nonutility generators and the likelihood of deregulation. However, some states, such as Pennsylvania, have had some success in the deregulation of electricity. The electrical supply, distribution, reliability, and price chaos of the early twentieth century electrical system, which led to utility regulation, was repeated at the entry into the twenty-first century. This section, while aware of the short-term volatility resulting from regulation, deregulation, and politics, will concentrate on the long-term aspects affecting the cost of power. Cost or price is a basic characteristic of power supply. Along with relative abundance, reliability, and high quality of service, low prices have encouraged the widespread use of electric energy that has come to be associated with our national way of life. Industrial use of electric energy is particularly heavy in the electroprocess and metallurgical industries. The price of electricity has a significant effect on the end-product cost in these industries. In many manufacturing and process industries, quality of service in terms of voltage regulation, frequency control, and reliability is a major concern. Uninterrupted supply is of crucial importance in many process industries where a power failure causes material waste or damage to equipment, in addition to loss of production revenue. Because of the flexibility and convenience of electric power, future increases in industrial, residential, and commercial consumption are anticipated despite deregulation, fuel price increases, environmental considerations, and energy conservation efforts. Various types of power-generating units are owned and operated by industry and private sector investors to provide power (and thermal energy) to industrial facilities and often sell excess power to the local utility at the avoided cost of incremental power for the utility. Inside-the-fence cogenerators at paper mills and other industrial facilities that are electrical and thermal consumers but provide only a fraction of their electric generation to the grid are not considered in this section. However, the calculations and information in this section can assist in deciding the advisability of purchasing versus generating electric power. Larger cogenerating facilities built under the "qualifying facility" (QF) sections of the Public Utility Regulatory Policies Act (PURPA) that have electricity production for wholesale to the grid or local utility as their main objective are included in this section. With the PURPA regulations no longer restricting the development of nonutility generators (NUGs), a significant portion of new generation in the 1990s was provided by NUGs. Of course, many "nonutility generators" had regulated utility parents.

CONSTRUCTED PLANT COSTS

A central station serving a transmission system is designed to meet not only the existing and prospective loads of the system in which it is to function but also the pooling and integration obligations to adjacent systems. Service requirements which establish station size, type, location, and design characteristics ultimately affect cost of delivered power. Selection of plant type and the overall philosophy followed in design must accommodate a combination of objectives which may include high operating efficiency, minimum investment, high reliability and availability, maximum reserve capability margins, rapid load change capability, quick-start capability, or service adaptability as spinning reserve. For a given plant, the design must account for siting factors such as environmental impacts, subsoil conditions, local meteorology and air quality, quality and quantity of available water supply, access for construction, transmission intertie, fuel delivery and storage, and maintainability. In addition, plant siting and design will be significantly affected by legal restrictions on effluents which may have adverse impacts on the environment. In the case of nuclear plants, siting must also consider the proximity of population centers and the size of exclusion areas. Reactor plant design must bear the investments required to control radioactive


releases and to provide safeguard systems which protect against accidents. No new nuclear power plants are contemplated to begin the licensing process in the United States until 2010 or beyond. Fossil-fueled power plants may be sited adjacent to fuel supplies or in proximity to load centers, thereby increasing transmission costs on the one hand or fuel delivery charges on the other. Depending primarily on climate, plants may be enclosed, semienclosed, or of the outdoor type. Spare auxiliary components can be installed to improve reliability. Increased investments in sophisticated heat cycles and controls, for improved equipment performance, can achieve higher plant efficiencies. Simple-cycle and combined-cycle combustion turbine plant outputs are restricted by both increased elevation and high ambient air temperatures. Hydroelectric sites are frequently very distant from load centers and thus require added costs for extensive transmission facilities. Also, hydroelectric facilities may provide for flood control, navigation, or recreation as by-products of power production. In such instances, total cost should be properly allocated to the various product elements of the multipurpose project. An industrial power plant provided to meet the requirements of an isolated load entails design considerations and exhibits cost characteristics which differ from those of a central station power plant assigned an integrated role within a connected generating system. Industrial power plants often produce both thermal and electric power. Industrial facilities must often accommodate both base- and peak-load requirements. They may be designed to provide for on-site reserve capacity or spinning reserve capacity. Frequency control and voltage regulation must be viewed as a special problem because of the limited capability of a single plant to meet load changes. Table 17.5.1 provides typical installed cost data for central station generating plants. The figures represent costs of facilities in place, excluding interest during construction. The cost of land, waste-disposal facilities, and fuel is not included. Costs apply to plants completed in 2003. Interest during construction can be estimated by multiplying the simple interest rate per year by the construction period in years and dividing by 2 to reflect the carrying costs on the average commitment of capital toward equipment and labor during construction. Escalation effects for plants to be completed beyond this date may be extrapolated in accordance with anticipated cost trends for labor, material, and equipment. Historical cost trends, by region, as experienced in the power industry, which can be helpful in forecasting future costs, may be determined by use of the figures in Table 17.5.2. Escalation can significantly affect plant costs on future projects, especially in view of the 10- to 12-year engineering and construction periods historically experienced for nuclear facilities and the corresponding 5 to 6 years required for fossil-fueled power plants. In addition to rising equipment, construction labor, and material costs, major factors influencing the upward trend in plant costs include increased investment in environmental control systems, an emphasis on improved quality assurance and plant reliability, and a concern for safety, particularly in the nuclear field.
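The interest-during-construction rule of thumb lends itself to a one-line calculation. The following minimal sketch, in Python, illustrates it; the $1,400/kW base cost, 8 percent rate, and 5-year schedule are assumed example values, not figures drawn from the tables in this section.

```python
def interest_during_construction(base_cost_per_kw, simple_rate, years):
    """Rule-of-thumb carrying charge: simple interest applied to the average
    capital committed over the construction period (rate x years / 2)."""
    return base_cost_per_kw * simple_rate * years / 2.0

# Assumed illustrative inputs: $1,400/kW base plant cost, 8 percent simple
# interest, and a 5-year construction schedule.
idc = interest_during_construction(1_400.0, 0.08, 5)
print(f"Interest during construction: ${idc:,.0f}/kW")      # about $280/kW
print(f"Committed cost at completion: ${1_400 + idc:,.0f}/kW")
```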

Table 17.5.1  Typical Investment Costs* (2003 Price Level)

Type                                   Net capacity, MW    Fuel    Total investment cost, $/net kW
Combustion turbine, simple cycle       200                 Gas     250–400
Combustion turbine, combined cycle     500                 Gas     400–600
Conventional steam plant, fossil       500                 Coal    1,200–1,500
Conventional steam plant, fossil       500                 Oil     1,100–1,300
Conventional steam plant, fossil       500                 Gas     900–1,000

* Capital investments exclude costs for the following: initial fuel supply, cost of decommissioning for nuclear plants, main transformers, switchyard, transmission facilities, waste disposal, land and land rights, and interest during construction.

Conventional Steam-Electric Plants Conventional fossil plant investment costs given in Table 17.5.1 are for 500-MW nominal units which are deemed to be representative of future central station fossil units. Costs will vary from those in the table due to equipment arrangement, pollution control systems, foundations, and cooling-water-system designs dictated by plant site conditions. The variety of cycle arrangements and steam conditions selected also affects plant capital cost. Plant designers try to economically balance investment and operating costs for each plant. Present-day parameters, in the face of current economics, call for drum-boiler, regenerative reheat cycles at initial steam pressures of 2,400 psig with superheat and reheat temperatures of 1,000°F (538°C). Similar temperature levels are employed for 3,500-psig initial steam pressure supercritical-reheat and double-reheat cycles, which require higher investment outlays in exchange for efficiency or heat rate improvements of between 5 and 10 percent. Increased investment costs are also caused by more generous boiler-furnace sizing and by the larger fuel- and ash-handling, precipitator, and scrubber facilities required for burning poor-quality coals. Investment increments are also required to provide partial enclosure of the turbine building and furnace structure and to fully enclose the boiler by providing extended housing to weather-protect ductwork and breeching. Nuclear Plants The construction of nuclear power plant capacity in the United States has been suspended for over 20 years. A new generation of smaller light-water reactor nuclear power plants is currently in the conceptual design stage. Costs associated with these units, including siting, licensing, and fuel cycle, remain unclear, as they are precommercial. In general, light-water reactor nuclear plants require considerably higher investments than do fossil-fueled plants, reflecting the need for leakproof reactor pressure containment structures, radiation shielding, and a host of reactor plant safety-related devices and redundant equipment. Also, light-water reactor plants operate at lower initial steam conditions than do fossil-fueled plants and, because of their poorer turbine

Table 17.5.2  U.S. Cost Trends of Electric Plant Construction by Region (1973 Index is 100)

Total: Steam generating plants*

Year    North Atlantic   South Atlantic   North Central   South Central   Plateau   Pacific
1993    329              308              317             333             318       304
1994    341              321              331             347             331       319
1995    354              332              340             357             345       329
1996    361              339              346             366             352       333
1997    373              350              357             376             362       347
1998    380              357              365             382             368       351
1999    385              361              369             388             374       355
2000    396              372              379             397             386       365
2001    420              391              394             419             404       391
2002    431              397              400             424             417       395
2003    445              412              423             445             438       412

* As of January 1 of each year.
SOURCE: "Handy-Whitman Index of Public Utility Construction Costs," compiled and published by Whitman, Requardt & Associates, LLP, 801 Caroline St., Baltimore, MD 21231.


cycle efficiency, require larger steam flows and increased equipment sizes at added investment. Combustion Turbine/Combined-Cycle Plants Simple and combined-cycle plants are offered by a number of vendors in the 100- to 1,000-MW size range. These plants consist of multiple installations of combustion gas turbines arranged to exhaust to waste-heat steam generators which may be equipped for supplementary firing of fuel. Steam produced is supplied to a conventional steam turbine cycle. Advantages of combustion turbine–based plants are lower unit investment costs, efficient thermal performance, increased flexibility (which allows independent operation of the gas-turbine portion of the plant), shorter installation schedules, reduced cooling-water requirements, and the reduction in sulfur oxide and particulate emissions characteristic of gaseous fuels. The disadvantages of combustion turbines are the high costs of fuel and combustion turbine maintenance. Hydroelectric Plants Hydroelectric generation offers unique advantages. Fuel, a heavy contributor to thermal plant operating costs, is eliminated. Also, hydro facilities last longer than do other plant types; thus they carry lower depreciation rates. They have lower maintenance and operating expenses, eliminate air and thermal discharges, and, because of their relatively simple design, exhibit attractive availability and forced-outage rates. Quick-start capability and rapid response to load change ideally suit hydro turbines to spinning reserve and frequency-control assignments. The constructed cost of a hydroelectric station is strongly site dependent. Overall costs fluctuate significantly with variations in dam costs, intake and discharge system requirements, pondage required to firm up capacity, and with the cost of relocating facilities within the areas inundated by the impoundments. For a given investment in structures, available head and flow quantity may vary considerably, resulting in a wide range of outputs and unit investment costs. Installed plant costs are competitive with other facilities. Cost prediction for future hydroelectric construction is difficult, particularly in view of the decreasing availability of economical sites and restrictions imposed by concern for the ecological and social consequences of disrupting the natural flow patterns of rivers and streams. As with nuclear, no new large hydroelectric plants have been constructed in over 20 years. A number of dams and associated power plants have been demolished or otherwise taken out of service to save fish or restore waterways to some past condition. Any new capacity in this area has come from upgrading existing hydroturbines for greater output and efficiency. Pumped-Storage Plants Pumped-storage plants involve a special application of hydroelectric generation, allowing the use of off-peak energy supplied at incremental charges by low-operating-cost thermal stations to elevate and store water for the daily generation of energy during peak-load hours. Pumped hydro projects must justify the inefficiencies of storage pumping and hydroelectric reconversion of off-peak thermal plant energy by investment cost savings over competing peaking plants. Installation of a pumped hydro station calls for a suitable high-head site which minimizes required water storage and upper and lower reservoir areas and an available makeup source to supply the evaporative losses of the closed hydraulic loop.
Despite the added complications of installing both pumping and generating units, or of utilizing reversible motor-generator pump-turbines, costs for pumped hydro stations generally fall below those for conventional hydroelectric stations. Differences in gross head, impoundment, and siting make plant cost comparisons difficult. Geothermal Plants Geothermal generation utilizes the earth's heat by extracting it from steam or hot water found within the earth's crust. Prevalent in geological formations underlying the western United States and the Gulf of Mexico, geothermal energy is predominantly unexploited, but it is receiving increased attention in view of escalating demands on limited worldwide fossil-fuel supplies. Because natural geothermal heat supplants fuel, the atmospheric release of combustion products is eliminated. Nevertheless, noxious gases and chemical residues, usually contained in geothermal steam and hot water, must be treated when geothermal resources are tapped. There is a current lack of significant cost data covering geothermal plants. Because boiler and

associated fuel-handling facilities are eliminated, investment in these generating plants is considerably less than the cost of comparable fossil-fueled units. However, overall investment chargeable to geothermal facilities includes significant exploration and drilling costs which are site dependent and cannot be accurately predicted without extensive geophysical investigation. Environmental Considerations Environmental protection has become a dominant factor in the siting and design of new power generating stations. Both stack emissions to the atmosphere and thermal discharges to natural water courses must be significantly reduced in order to meet increasingly stringent environmental criteria. In many cases older plants are being required to reduce emission levels to achieve legislated ambient air-quality standards and to control thermal discharges by the use of closed cooling systems to prevent aquatic thermal pollution. Control of air pollution in fossil-fueled power plants includes the reduction of particulates, sulfur oxides, and nitrogen oxides in flue gas emissions. Particulate collection can be achieved by electrostatic precipitators, baghouse filters, or as part of stack gas scrubbing. Stack gas scrubbing is required for all new coal-fired power plants, regardless of the sulfur content of the fuel. Fossil-fueled power plants are believed to be a major contributor to acid rain. Scrubbers reduce sulfur oxide emissions by contacting flue gas with a sorbate composed of metal (usually sodium, magnesium, or calcium) hydroxides in solution, which act as bases to produce sulfate and sulfite precipitates when they contact sulfur oxides in the flue gas. Wet scrubbers contact flue gas with a sorbate in solution. Dry scrubbers evaporate sorbate solution into the gas stream. Fossil-fueled power plants are required to burn low-sulfur fuels. Restrictions on the use of oil in new power plants and its high cost have virtually eliminated new central station steam power plants designed to utilize oil. Control of nitrogen oxides (NOx) is attained primarily through modifications to flame propagation and the combustion processes. NOx control for combustion turbines is typically achieved by staged combustion. New burner designs and the use of water or steam injection into the combustion zone have effectively reduced emissions to acceptable rates. For steam boilers, changes to the burners and combustion zone were deemed insufficient, and the introduction of ammonia or urea spray has been incorporated with or without a catalyst [selective catalytic reduction (SCR) and selective noncatalytic reduction (SNCR), respectively]. The two systems, SCR and SNCR, operate at different temperatures. The SCR system injects ammonia across the flue gas in an area of the gas path at approximately 700°F. The flue gas then passes through a vanadium pentoxide (V2O5) catalyst. This combination removes NOx with a removal efficiency of approximately 90 percent. SNCR operates at approximately 1,600 to 2,100°F but does not require a catalyst. Its removal efficiency is approximately 70 percent. In order to avoid plant discharges of waste heat to the aquatic environment, evaporative-type closed-cooling cycles are employed in lieu of once-through cooling system designs. Closed-loop cooling systems in current use employ evaporative cooling towers or cooling ponds. More advanced closed-cooling-loop designs use dry and wet-dry cooling towers.
These represent alternates to conventional evaporative systems where makeup water is in short supply or where visible vapor plumes or ice formed by vapor discharge present hazards. The penalties for dry-tower cooling are significantly higher than those for conventional evaporative designs. Large, more costly water-to-air heat-transfer surfaces are required, and characteristically higher condensing temperatures result in higher turbine backpressure, restricting plant capability and reducing efficiency. Evaporative cooling systems (wet towers) have a significant advantage over hybrid (wet and dry sections) and all-dry cooling systems in initial installed and operating costs. A dry cooling tower system can be $200 to $400 per kW more expensive. For a large facility the operating penalty can be $1,000 per kW or more because of the additional fan power requirements. Hybrid systems, which are a combination of an evaporative section and a dry heat exchanger section, are often used in


areas where water usage is a concern or fogging issues are present. The wet/dry hybrid arrangement utilizes the required amount of wet and dry surface to meet the site constraints. In the Northeast, many smaller facilities (under 100 MW) have used the wet/dry arrangement to minimize fogging or icing of nearby highways. In the western half of the United States, wet/dry towers have been used to conserve precious and sometimes scarce water resources. Another consideration for the addition of any dry surface beyond the typical evaporative cooling system is the effect on efficiency. Use of a dry cooling system can easily increase the condensing temperature of the Rankine cycle by 20 to 30°F. The output penalty of this increase in heat-rejection temperature can approach 5 to 10 percent, and the corresponding loss in overall plant efficiency can amount to several percentage points or more.

FIXED CHARGES

Costs that are established by the amount of capital investment in plant and which are fixed regardless of production level are termed fixed charges. Annual fixed charges are ordinarily expressed as a percentage of investment and include interest or the cost of money, funds applied to amortize investment or to allow for replacement of depreciated plant, and charges covering property taxes and insurance. Additionally, fixed charges may include an interim replacement allowance to cover the replacement cost of plant equipment not expected to last the full life of the plant. For investor-owned utilities, the cost of money employed for plant expansion depends upon financial market conditions in general and upon the attitude of investors with regard to a particular utility enterprise or specific project. Funds for investor-owned utility expansion are derived from both the risk capital (equity) and debt capital (bond) markets. Prevailing return rates for investments in utility plant facilities are influenced by the rate-setting practices of public utility regulatory agencies and by supply-and-demand factors in the investment market. Public utility facilities owned by state and municipal government organizations are generally financed by long-term revenue bonds, most of which qualify for tax-free-income status. Interest rates currently fall between 4 and 6 percent and reflect the tax relief on interest income enjoyed by the bondholders. Generating facilities are often financed by industrial concerns whose primary business is the production of a manufactured product. In these instances, the annual cost of money invested in power facilities will be established by considering alternate investment of the required funds in manufacturing plant. Inasmuch as returns on equity capital invested in manufacturing industries are usually on the order of 10 to 20 percent, the rate of return for industrial power plants will tend to be set at these higher levels. Nonutility Generators (NUGs) generally form a project-specific limited liability company (LLC) and borrow through nonrecourse financing at or above market rates. They put up a minimum of capital. In the early days of NUGs, the project would get equity contributions from the equipment vendors and contractors to cover the equity required by the banks as "down payment." The plant and its revenue stream would collateralize the balance. Later, revenue from earlier operating plants and real equity were required to satisfy the loan. As we enter the twenty-first century, project financing has seen numerous project defaults and the decline of many independent NUGs and utility-spawned NUGs.

Table 17.5.3  Representative Useful Life of Alternate Utility Facilities

Facility                                  Representative useful service life, years
Steam-electric generating plant           30
Hydroelectric plant                       50
Combustion turbine-combined cycle         30
Nuclear plant                             30
Transmission and distribution plant       40

Equity requirements for new projects have increased dramatically, as have the expected interest rates. Lenders expect multiple layers of guarantee from all participants. The short-lived electricity trading market has shrunk, and the expectation of huge short-term profits that could justify and secure merchant generating has disappeared. New NUG generation will require guaranteed fuel costs, a reasonable expectation of electric sales through renewable sales contracts, and/or an electric price tied profitably to fuel and operating costs. Corporate federal taxes are levied on equity capital income. Thus, corporate earnings on equity investment must exceed the return paid the investor by an amount sufficient to cover the tax increment. The amortization of debt capital or the provision for depreciation over the physical life of utility plant facilities may be effected by several methods. Straight-line depreciation requiring uniform charges in each year over a predetermined period of useful service is commonly applied because of its simplicity. The percentage method of depreciation assumes a constant percentage decrease in the value of capital investment from its value the previous year, thereby resulting in annual depreciation charges which progressively diminish. The sinking-fund method of economic analysis assumes equal annual payments which, when invested at a given interest rate, will accumulate the capital value of facilities, less their salvage value, over a predetermined useful service life. Table 17.5.3 illustrates the representative useful life for alternate utility facilities. Use may be made of interest tables which show, for any rate of interest and any number of years, the equal annual payment (sinking-fund) rate which will amortize an investment and additionally will yield an annual return on investment equal to the interest rate:

Equal annual payment = i(1 + i)^n / [(1 + i)^n − 1]

where n = number of years of life and i = interest rate or rate of return. Property taxes and property insurance premiums are normally established as a function of plant investment and thus are properly included as fixed charges. Property taxes vary with the location of installed facilities and with the rates levied by the various governmental authorities having jurisdiction. In general, public power authorities will be free of taxes, although public enterprises often render payments to government in lieu of taxes. Annual property tax rates for private enterprise will amount to perhaps 2 to 4 percent of investment, while property insurance may account for annual costs of between 0.3 and 0.5 percent. A representative makeup of fixed charges on investment in a conventional steam-electric station having a useful service life of 30 years is shown for various classes of ownership in Table 17.5.4.
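The equal-annual-payment expression is easy to evaluate directly. The sketch below assumes an illustrative 8 percent cost of money and 30-year life; the added tax and insurance percentages are example values in the spirit of Table 17.5.4, not figures taken from it.

```python
def equal_annual_payment_rate(i, n):
    """Capital recovery factor i(1 + i)^n / ((1 + i)^n - 1): the equal annual
    payment per dollar of investment that amortizes the investment over n
    years while yielding an annual return equal to i."""
    f = (1.0 + i) ** n
    return i * f / (f - 1.0)

# Assumed illustrative inputs: 8 percent money cost, 30-year economic life.
rate = equal_annual_payment_rate(0.08, 30)
print(f"Equal annual payment: {rate:.4f} per dollar invested")   # about 0.089
# Adding assumed local taxes (2 percent) and insurance (0.3 percent) gives a
# rough fixed-charge rate of roughly 11 percent.
print(f"Rough fixed-charge rate: {rate + 0.02 + 0.003:.3f}")
```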

Table 17.5.4  Typical Fixed-Charge Rates for a Conventional Fossil-Fueled Steam-Electric Plant with 30-Year Economic Life

                                              Investor-owned    Government-owned    Industrial-commercial
                                              utility, %        utility, %          ownership, %
Rate of return or interest                    6.4               5.0                 8.0
Amortization or depreciation                  1.2               1.5                 0.9
Federal income tax                            1.5               0.0                 4.3
Local taxes (or payment in lieu of taxes)     2.0               2.0                 2.0
Insurance                                     0.3               0.3                 0.3
Total                                         11.4              8.8                 15.5


Table 17.5.5  Operating Expense for Fuel for Representative Heat Rates and Fuel Prices

                                        Nominal size,   Typical heat rate,   Fuel price,   Fuel cost,
                                        MW              Btu/kWh              ¢/MBtu        mills/kWh
Conventional coal fired                 500             9,800                200           19.6
Advanced light-water reactor            600             10,700               60            6.4
Combustion turbine/combined cycle       500             7,000                400           28
Combustion turbine/simple cycle         200             8,000                400           32

NOTE: Operating fuel cost, mills/kWh = (heat rate, Btu/kWh) × (fuel price, ¢/MBtu) × 10^-5.
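The note to Table 17.5.5 reduces to a one-line conversion. The sketch below, in Python, reproduces the coal-fired and combined-cycle entries of the table; the function and variable names are ours, not part of the handbook.

```python
def fuel_cost_mills_per_kwh(heat_rate_btu_per_kwh, fuel_price_cents_per_mbtu):
    """Operating fuel cost per the note to Table 17.5.5:
    mills/kWh = (Btu/kWh) x (cents per million Btu) x 1e-5."""
    return heat_rate_btu_per_kwh * fuel_price_cents_per_mbtu * 1e-5

# Coal-fired entry: 9,800 Btu/kWh at 200 cents/MBtu ($2.00/MBtu).
print(f"{fuel_cost_mills_per_kwh(9_800, 200):.1f} mills/kWh")   # 19.6
# Combined-cycle entry: 7,000 Btu/kWh at 400 cents/MBtu ($4.00/MBtu).
print(f"{fuel_cost_mills_per_kwh(7_000, 400):.1f} mills/kWh")   # 28.0
```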

OPERATING EXPENSES

Fossil Fuels Currently, fossil fuels contribute approximately

three-fourths of the primary energy consumed by the United States in the production of electric energy. Generating station demands for fossil fuels are continuing to increase as the electric utility and industrial power markets grow. Price comparisons of fossil fuel are generally made on the basis of delivered cost per million Btu. This cost includes mine-mouth or well-head price, plus the cost of delivery by pipeline or carrier. Price comparisons must recognize that solid and, to a lesser extent, liquid fuels require plant investments and operating expenditures for fuel receipt, storage, handling and processing facilities, and for ash collection and removal. High transportation cost contributions on a Btu basis will be incurred by coals with high moisture and ash content and low heating values. It is, therefore, advantageous to fire lignite and subbituminous coals at mine-mouth generating plants. Coal represents our most abundant indigenous energy resource, with enough economically recoverable supplies at current use rates to last well into the twenty-first century. About half of the recoverable coal reserves have a sulfur content above 1 percent and are considered high-sulfur coal. Low-sulfur coals are found chiefly in the low-load areas of the mountainous West, and delivered costs at the major markets east of the Mississippi include high transportation charges. Increasingly, coal production is bearing the cost of more rigid enforcement of stringent mine safety regulations, coal-washing environmental compliance, and the charges associated with strip-mine land restoration. Delivered price depends upon transportation economies as may be affected by barging, unit train haulage, or pumping in slurry pipelines. Delivered price levels for coal fuel vary with plant location. Plants conveniently located with respect to eastern coal reserves report delivered-coal prices in the general range of $1.25 to $2.50 per million Btu, depending upon sulfur and ash content. Low-sulfur western subbituminous coals have delivered prices of between $1.00 and $2.00 per million Btu. The domestic supply of petroleum is now outstripped by nationwide demand. The United States has become dependent on overseas sources to meet growing energy demands. The consequences of this trend are a continued unfavorable balance of payments and dependence on foreign oil supplies from politically unstable areas in the Middle East and North Africa. Residual and distillate oil prices have risen because of short supply and pressure by the major oil producers on worldwide market price levels. Blends of low-sulfur oils delivered during 2004 to generating plants along the eastern seaboard range from $3.50 per million Btu to as high as $4.50 per million Btu for the 0.3 percent sulfur fuel required for firing in some metropolitan areas. During this same period distillate oil commanded a nationwide price ranging from approximately $5.00 to $7.00 per million Btu. Consumption of natural gas as a power fuel is increasing because it is clean burning, convenient to handle, and generally requires smaller and cheaper furnaces. Historically, its limited supply made it best suited for consumption by residential and commercial users and for meeting industrial process needs. Present-day power plant use of natural gas is on the rise. The proliferation of larger, more efficient combustion turbines like the 150-MW "F series" machines designed for combined-cycle application has increased the consumption of natural gas for power generation.
During the 1990s almost all new generation was natural gas-fired combustion turbine combined cycle.

Fuel prices for 2004 ranged from $4.00 to $6.00 per million Btu. Price levels of natural gas, severely regulated in the past by government controls imposed at the well, have been falling sharply in response to current supply-and-demand factors and gas deregulation. Nuclear Fuel Reactor plant fuel costs present a special case. Actually, the initial core loading which will support operation of a nuclear plant over its early years of life requires a single purchase prior to commissioning of the plant. For comparison with fossil-fuel prices, therefore, nuclear-fuel cycle costs, including first-core investment, periodic charges for reload fuel, and spent-fuel shipment and processing costs, are ordinarily extrapolated at assigned load factors over the life of the plant and are converted to an economic equivalent expressed in dollars per million Btu of released fission heat. Nuclear-fuel costs are not only influenced by ore prices and by fuel fabrication and processing costs, they are also sensitive to investment and uranium-enrichment costs. Future costs are subject to inflationary pressures and cost-saving technology changes such as extended burn-up cycles. Charges up to $1 per million Btu are representative of the levelized fuel prices for nuclear plants. Operating Costs of Fuel Fuel price contributions to energy-generation costs will reflect start-up and will depend upon plant efficiencies which, at low loads, show considerable departure from the best-point performance achieved at or near full unit loadings. These factors significantly affect the operating expenses of load-following utility system units as well as plants assigned fluctuating demands in manufacturing or industrial service. As an example, calculated performance for a nominal 500-MW, 2,400-psig coal-fired regenerative reheat steam unit shows a best-point heat rate of 9,800 Btu/net kWh at rated output. Load reduction yields heat rates of 10,500 and 12,400 Btu/kWh at loadings of 250 MW and 125 MW, respectively. Typical operating-expense ranges for given fuel prices and estimated full-load heat rates may be determined directly where continuous operation at or near unit rating is assumed (Table 17.5.5). Operating Labor and Maintenance In addition to fuel costs, operating expenses include labor costs for plant operation and maintenance, plus charges for operating supplies and maintenance materials, general administrative expenses, and other costs incidental to normal plant operation. Operating labor and maintenance costs vary considerably with unit size, operating regimen, plant-design conditions, type of facility, and the local labor market. Representative figures appear in Table 17.5.6.

Table 17.5.6  Representative Operating and Maintenance Costs (2004 Price Level)

Type of plant                          Nominal size, MW   Fuel      Operating and maintenance costs, mills/kWh
Conventional fossil                    500                Coal      5–9
Advanced light-water reactor           600                Nuclear   5–12
Combustion turbine/combined cycle      500                Gas       6–11
Combustion turbine/simple cycle        200                Gas       7–12
Conventional hydro                     300                -         6–9

NOTE: Unit kilowatt-hour costs include labor, maintenance materials, operating supplies, and incidental expenses. Costs shown assume base-load operation.

Environmental Controls Systems and equipment required for air and water pollution abatement generally carry increased fuel and maintenance labor and materials costs. Reductions in plant output resulting from the higher condensing pressures associated with cooling-tower operation, or from the added auxiliary power for stack gas cleanup systems, lower plant efficiency and increase fuel consumption.

OVERALL GENERATION COSTS

The total cost of power generation may now be estimated by reference to the preceding material, assuming type of ownership, capital structure, plant type, fuel, and loading regimen. Table 17.5.7 comprises an illustrative tabulation of the factors determining the overall generation cost of an investor-owned generating facility. It should be noted that capacity factor, or the ratio of average-actual to peak-capable load carried by a given generating facility, will have a significant effect on generation expenses. In addition to the effects of part-load operation on fuel costs as previously discussed, capacity factor will determine the plant generation which will support fixed charges. High-capacity-factor operation will spread fixed charges over a large number of kilowatt-hours of output, thereby reducing unit generation costs. During its initial life, a thermal plant is usually operated at a high capacity factor. As inevitable obsolescence brings newer and more efficient equipment into service, a unit's base-load position on the utility system load duration curve is relinquished, and capacity factor tends to drop. This decline in capacity factor must be acknowledged in estimating output and generation costs over the life of a given facility. Most modern electric utilities incorporate computerized systems designed to economically dispatch power generated at each production plant feeding the load. Individual generating-unit loads are assigned in a manner that can be demonstrated to result in minimum overall cost; i.e., at each system power level, load is shared between units so that all operate at the same incremental production cost. Telemetered data reflecting system load and generation are transmitted to central dispatch computers by multiplexing via power line carrier, microwave, or telephone lines. The communication schemes include channels for transmitting load adjustment commands developed by the computer to on-line generating units. Loading instructions account for the unit production efficiencies and transmission losses associated with each dispatch assignment.
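The effect of capacity factor on unit generation cost can be illustrated with a short calculation. The sketch below, in Python, reproduces the arithmetic behind Table 17.5.7; the halved operating hours in the second call are an assumed illustration rather than a handbook figure.

```python
def busbar_cost_mills_per_kwh(heat_rate, fuel_price_cents_per_mbtu,
                              om_mills_per_kwh, investment_per_kw,
                              fixed_charge_rate, hours_per_year):
    """Total generation cost in mills/kWh: fuel plus O&M plus annual fixed
    charges spread over the energy produced per kW of capacity."""
    fuel = heat_rate * fuel_price_cents_per_mbtu * 1e-5            # mills/kWh
    fixed = investment_per_kw * fixed_charge_rate / hours_per_year * 1000.0
    return fuel + om_mills_per_kwh + fixed

# Inputs from Table 17.5.7: 9,800 Btu/kWh, $2.00/MBtu coal, 7 mills/kWh O&M,
# $1,580/kW investment, 11.4 percent fixed charges, 7,500 h/yr of operation.
print(f"{busbar_cost_mills_per_kwh(9_800, 200, 7.0, 1_580, 0.114, 7_500):.1f}")  # ~50.6
# Halving the operating hours (assumed 3,750 h/yr) roughly doubles the
# fixed-charge component, raising the total to about 74.6 mills/kWh.
print(f"{busbar_cost_mills_per_kwh(9_800, 200, 7.0, 1_580, 0.114, 3_750):.1f}")
```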

Table 17.5.7  Total Cost of Generating Power (Plant type: 500-MW conventional fossil; plant net heat rate: 9,800 Btu/kWh)

                                           Generating costs, mills/kWh
Coal fuel @ $2.00/MBtu                     19.6
Operating and maintenance                  7.0
Fixed charges @ 11.4% per annum            24.0
Total generating cost                      50.6

NOTES: Fixed charges are based on an assumed initial plant investment of $1,580/kW and 7,500 h/year of operation. Values shown are typical and could vary significantly for individual plants.

TRANSMISSION COSTS

Because of a growing scarcity of urban sites, increasing emphasis on environmental protection, and public attitudes, it has become more and more difficult to site major power stations near centers of load. As a consequence, the added cost of transmission, along with the attendant resistive power losses, adds significantly to the overall cost of service. Because of the distances involved, these costs are generally greatest for nuclear facilities, mine-mouth stations, and hydroelectric plants, where remoteness or remote resources strongly govern siting. Transmission plant investment also reflects a trend toward interconnection of neighboring utilities. Designed to improve service reliability by the pooling of reserves and to effect savings by capacity and energy interchanges, such


interties must carry substantial ratings so that emergency power transfers can be accommodated without exceeding system stability limits. Costs for major overhead transmission ties (1,000 MVA and up) are estimated to range between $400,000 and $900,000 per circuit mile depending on terrain and voltage level. Investments are also sensitive to factors of climate and proximity to urban areas that can increase right-of-way costs significantly. Where underground transmission is elected, installed investment costs can be as much as 10 times the cost of overhead lines. Transmission and distribution systems have seen a large-scale change in ownership in deregulated areas of the country as regional grids were sold, divested, or handed to new owner/operators. These grid systems, which had been maintained by high-cash-flow integrated utilities, became the responsibility of lower-cash-flow entities receiving only a small percentage of the retail price of electricity. Extensive blackouts, like the August 2003 blackout that darkened areas from Ohio to New York City and parts of Canada, can result from insufficient transmission line investment and from publicly restricted transmission line additions and capacity increases, as that event demonstrated. The choice of transmission voltage level and whether ac or dc is used for bulk power transfer depends on the amount of power transmitted, the transmission distance involved, and, at each voltage level, the cost of line and substation equipment. Voltages generally employed are 230, 345, 500, and 765 kV ac and ±500 kV dc. Where long distances on the order of 400 mi (650 km) or more are encountered, dc transmission becomes economically attractive. For shorter lines, however, savings in fewer conductors and lighter transmission towers are nullified by the high cost of dc-ac conversion equipment at both line terminals.

POWER PRICES

The price of power delivered must account for the production costs at each generating station in addition to transmission and distribution costs. As previously noted, these costs depend upon labor rates, fuel prices, and material charges. They reflect investment levels in generating plant and the fixed-charge rates established by funding patterns, type of ownership, and expected equipment service life. Overall system production costs are affected by the investment requirements of specific mixes of generating equipment types, and by the manner in which load is shared by units, i.e., how production is allocated between highly efficient base-load stations and the less-efficient peaking equipment which normally runs for only a few hours each day. Also, important cost reductions are achieved in hydro systems by controlling natural and stored water flows to allow optimized sizing and scheduling of hydroelectric output, thereby reducing needs for thermal peaking capacity and decreasing the generation requirements of high fuel cost fossil plants. Power prices cover the investment charges, maintenance costs, and capacity and energy losses chargeable to the transmission and distribution plant. They also include the administrative costs incurred to maintain corporate enterprise and the commercial expense of metering and billing. The advertising and public relations cost of competition or the cost of preparing for competition in a deregulated market must be considered. Generally, power prices must provide a return to cover the average cost of power production throughout a given system. For smaller utility systems and commercial and industrial companies, each plant must provide an adequate return based solely on the individual plant’s costs rather than an average as discussed above. Rate schedules and supply contracts, however, are drawn to reflect the reduced cost of off-peak energy produced by available generating units during periods of low system load. Additionally, prices for high load factor service often recognize the cost reductions effected by spreading fixed charges over increased units of energy output. Large blocks of capacity and energy supplied for industrial use are often priced by establishing an annual charge for capacity which equals the fixed charges on investment in committed generation and transmission plant, plus charges for energy representing the sum of the variable kilowatt-hour production costs for fuel, maintenance, and operation. Historically, utilities have applied rate schedules which promote consumption by applying progressively lower rates to blocks of increased energy usage. Rationale for such pricing is the savings that load growth can


Table 17.5.8  Average Cost, in Cents per Kilowatt-Hour, for Consumers by Sector, Census Division, and State, 2002

Census division, State     Residential   Commercial   Industrial   Other   All sectors
New England                11.18         9.91         8.52         10.50   10.16
  Connecticut              10.96         9.35         7.68         10.36   9.73
  Maine                    11.98         10.47        11.24        32.82   11.36
  Massachusetts            10.97         10.14        8.77         9.79    10.18
  New Hampshire            11.77         10.09        8.83         12.07   10.49
  Rhode Island             10.21         8.84         8.04         8.09    9.19
  Vermont                  12.78         11.10        7.90         19.26   10.87
Middle Atlantic            11.32         10.14        6.03         9.45    9.59
  New Jersey               10.38         8.87         7.83         14.04   9.31
  New York                 13.58         12.46        5.16         9.05    11.29
  Pennsylvania             9.71          8.03         6.06         11.10   8.01
East North Central         8.06          7.20         4.58         6.09    6.50
  Illinois                 8.39          7.49         5.01         5.56    6.97
  Indiana                  6.91          5.98         3.95         9.75    5.34
  Michigan                 8.28          7.36         4.95         10.43   6.92
  Ohio                     8.29          7.68         4.68         5.70    6.66
  Wisconsin                8.18          6.54         4.43         8.08    6.28
West North Central         7.37          6.01         4.22         5.72    5.97
  Iowa                     8.35          6.56         4.06         4.92    6.01
  Kansas                   7.67          6.28         4.53         9.30    6.31
  Minnesota                7.49          5.88         4.19         7.36    5.84
  Missouri                 7.06          5.88         4.42         6.20    6.09
  Nebraska                 6.73          5.62         3.89         6.37    5.55
  North Dakota             6.39          5.85         3.98         3.68    5.45
  South Dakota             7.40          6.24         4.54         3.63    6.26
South Atlantic             7.90          6.43         4.24         6.42    6.56
  Delaware                 8.70          6.98         5.11         10.62   7.05
  District of Columbia     7.82          7.38         4.95         6.60    7.37
  Florida                  8.16          6.64         5.23         7.43    7.31
  Georgia                  7.63          6.46         3.95         8.31    6.24
  Maryland                 7.71          6.09         3.88         10.18   6.21
  North Carolina           8.19          6.51         4.70         6.70    6.74
  South Carolina           7.72          6.48         3.85         6.44    5.83
  Virginia                 7.79          5.87         4.13         5.15    6.23
  West Virginia            6.23          5.41         3.81         10.01   5.11
East South Central         6.57          6.33         3.71         6.32    5.39
  Alabama                  7.12          6.63         3.82         7.46    5.71
  Kentucky                 5.65          5.30         3.09         4.61    4.26
  Mississippi              7.28          6.83         4.40         8.76    6.24
  Tennessee                6.41          6.45         4.15         8.92    5.72
West South Central         7.70          6.69         4.48         6.31    6.33
  Arkansas                 7.25          5.68         4.01         6.52    5.61
  Louisiana                7.10          6.64         4.42         7.05    5.99
  Oklahoma                 6.73          5.75         3.81         5.06    5.59
  Texas                    8.05          6.95         4.66         6.55    6.62
Mountain                   7.87          6.64         4.86         5.57    6.52
  Arizona                  8.27          7.28         5.20         4.56    7.21
  Colorado                 7.37          5.67         4.52         6.64    6.00
  Idaho                    6.59          5.71         4.34         5.18    5.58
  Montana                  7.23          6.53         3.70         7.14    5.75
  Nevada                   9.43          9.06         7.25         6.54    8.42
  New Mexico               8.50          7.22         4.48         6.23    6.73
  Utah                     6.79          5.60         3.84         4.69    5.39
  Wyoming                  6.97          5.71         3.55         5.93    4.68
Pacific Contiguous         10.46         11.38        8.26         6.29    10.28
  California               12.90         13.22        10.83        6.68    12.50
  Oregon                   7.12          6.59         4.72         9.44    6.32
  Washington               6.29          6.11         4.56         4.94    5.80
Pacific Noncontiguous      14.20         12.46        10.26        14.63   12.35
  Alaska                   12.05         10.13        7.65         14.04   10.46
  Hawaii                   15.63         14.11        11.02        16.85   13.39
U.S. Total                 8.46          7.86         4.88         6.73    7.21

SOURCE: DOE website www.eia.doe.gov/cneaf/electricity/esr/table1abcd.xls#A238.


realize through economies of scale, as well as the improved utilization of existing utility plant. However, regulatory pressure, reflecting a policy of minimizing the cost to the consumer and the impact on the environment, and the critical need to conserve high-cost imported fuel, has favored a marginal cost-pricing system more nearly reflecting the actual cost of production and transmission of a particular user's supply of power. Rate setting under this conservationist approach calls for flat, rather than reduced, rates as usage increases and for high unit energy charges during peak-load periods. Cogeneration facilities designed for the dual-purpose production of power and process steam permit investment savings, principally in steam-generation plants, which result in combined production charges falling below the total cost of separate, single-purpose production of power and of steam. Such savings can permit proportionate decreases in the prices ordinarily charged for separate single-purpose production of each of the products, or they may be assigned in total to reduce the price of one or the other by-product. This latter option is often exercised in the case of dual-purpose water product plants arranged for seawater flash evaporation using power-turbine extraction as a process steam source. Where severe shortages of fresh water exist, social considerations favor the total assignment of dual-purpose savings to the water product. Thus, minimal prices for desalted product water are achieved, while dual-purpose power is marketed at prices competing with single-purpose power generation costs. Similarly, assignment of the total savings of dual-purpose power and process steam production to the power product may justify on-site industrial plant power generation in preference to outside purchases of higher-priced utility system power supplies. Since the 1980s, some facilities qualifying under the PURPA regulations have provided very low-priced steam to a host industrial plant as a means to satisfy those regulations and sell electricity to the local


utility, whether the utility wanted the facility or not. Changes to the prices payable by the local utilities to the "qualifying facility" and the eventual end of the PURPA statute eliminated the "free steam" approach from cogeneration. Where hydro facilities supply power in combination with irrigation, flood control, navigation, or recreational benefits, power costs are largely sensitive to the allocation of investment charges against each of the multipurpose project functions. Should generation be treated as a by-product, power prices can be reduced drastically to reflect equipment operating expenses and the limited fixed charges covering investment in only the generating plant itself. Cheap fuel, advances in design, the economies of scale, and the economic application of alternate generating unit types produced downward trends in the price of electric service in the 1960s. In the 1970s these trends were reversed by inflationary effects on plant costs, high interest rates, and increased fuel prices. A dwindling number of favorable plant sites, licensing delays, and the added costs of environmental impact controls combined to cause further upward pressure on power prices in the 1980s. However, falling interest rates and fuel prices began to slow the rate of increase in the cost of electric power. These stabilizing influences have continued into the 1990s, tempered by stricter environmental regulation costs. Increases in electric rates are expected into the foreseeable future. The twenty-first century has started with good and bad deregulation cases, corporate financial problems (such as Enron), and increased terrorist attacks on the U.S. infrastructure. All of these factors, coupled with a significant rise in the price of natural gas in the first several years of the new millennium, have resulted in continual escalation of the price of electricity. Table 17.5.8 shows the electricity prices by sector for each region and state in the United States for the year 2002.

17.6 HUMAN FACTORS AND ERGONOMICS by Ezra S. Krendel REFERENCES: “Aviation Safety and Pilot Control,” NRC, National Academy Press, Washington, DC, 1997. Allen, McRuer, et al., “Computer-Aided Procedures for Analyzing Man-Machine System Dynamic Interactions,” Vol. I, “Methodology and Application Examples,” Vol. II, “Simplified Pilot-Modeling for Divided Attention Operations,” Vol III, “Users Guide,” WADC TR-89-3070, June 1989. Badler, Phillips, and Webber, “Simulating Humans: Computer Graphics, Animation and Control,” Oxford University Press, 1993. Boff, Kaufman, and Thomas (eds.), “Handbook of Perception and Human Performance,” Vols. I and II, John Wiley & Sons, 1986. Card, Moran, and Newell, “The Psychology of Human-Computer Interaction,” Lawrence Erlbaum Associates, 1983. Kleinman, Baron, and Levison, “An Optimal Control Model of Human Response, Part I: Theory and Validation,” Automatica, 6, 1970. Konz and Johnson, “Work Design: Occupational Ergonomics,” 6th ed., Holcomb Hathaway, 2004. Krendel and Wodinsky, “Search in an Unstructured Visual Field,” Jour. Optical Soc., 50, 1960. McRuer, Pilot-Induced Oscillations and Human Dynamic Behavior, NASA CR-4683 July 1995. McRuer, Clement, Thompson, and Magdaleno, “Minimum Flying Qualities,” Vol. II, “Pilot Modeling for Flying Qualities Applications,” Systems Technology, Inc. Hawthorne, CA, 1990. McRuer, “Human Dynamics in Man-Machine Systems,” Automatica, 16, 1980. McRuer and Krendel, “Mathematical Models of Human Pilot Behavior,” AGARDograph No. 188, 1974. Salvendy (ed.), “Handbook of Human Factors,” 2d ed., John Wiley & Sons, 1997. Sundin, Örtengren, and Sjöberg, “Proactive Human Factors Engineering Analysis in Space Station Design Using the Computer Manikin Jack,” SAE Technical Paper 2000-01-2166. Thompson, “Program CC Version 5.0 for Windows,” Systems Technology Inc., Hawthorne, CA, 2001. SCOPE

The mission of human factors engineering/ergonomics (HFE/E) is to improve the performance, reliability, efficiency, and risk management of systems in which humans work in concert with machines, known as man/machine systems (MMS). This discipline developed many of its techniques during and after World War II. The life-or-death stakes and

the advances in military technology made even minor improvements in the performance and reliability of manned military systems highly desirable and major improvements essential. The design of MMS interacts with experimental psychology, physiology, and physical anthropology and with aeronautical, electrical, industrial, mechanical, systems, computer, and cognitive engineering. MMS cover a wide range of enterprises, from piloting a jet aircraft to microsurgery. Achieving the goals of HFE/E in MMS requires a knowledge of humans as sensors, manipulators, responders, information processors, and decision makers. The expected performance of the "man" in MMS is affected by training and motivation, variability of behavior among humans, personal stress, work load, age, fatigue, and substance abuse, and by environmental conditions: vibration, noise, temperature, gravity, and distractions from the task. Some of the references for this section are sources for details and data on the impact of these many human-related variables. Other references provide engineering theory and practice for the design of MMS. What follows is an introduction to HFE/E findings that are likely to be helpful to mechanical engineers.

VISION

The visual pathway in psychomotor tasks, written or iconic messages, and searching for targets is primary in most MMS. Other senses, in declining order of importance to MMS design, are: hearing, which is superior to vision for alerting or warning signals; kinesthesis and vestibular senses, which facilitate positional awareness; and touch and pain. High-resolution vision requires that the image of the object seen be projected by the eye's lens upon an approximately 2° central sector of the retina, known as the fovea, composed entirely of photoreceptors called



cones. Cones generate color vision and respond to light intensity from the limit of brightness tolerance to that of candlelight reflected from a newspaper. As it scans its field of view, the eye continually moves in pulses of approximately 50 ms, pausing to fixate its fovea on objects in this field. Each fixation pause is about 250 to 300 ms, but can be as long as 1 s when the viewer is uncertain about the image on his fovea. The density of cones on the retina falls steeply beyond its area of concentration on the fovea and reaches a flat minimum at an eccentricity of about 20°. Beyond the fovea a second set of retinal photoreceptors, rods, appears among the cones. Rods provide vision from low brightness levels to the darkest conditions for which vision is still possible. Rods reach their maximum sensitivity to low light levels after about 30 min in the dark. In a twilight level of brightness comparable to snow under a full moon, rods and cones operate together. The number of rods on the retina increases in density until reaching a maximum at an eccentricity of about 20°. Although much greater than that of cones, rod density gradually declines as it approaches the limiting eccentricity of 80°. This peripheral vision supports orientation and motion detection. Visual perception is the brain's interpretation of visual images. The perception of the size and distance of an object begins when the eyes' lenses, controlled by the ciliary muscles, accommodate so as to focus upon the object. Estimates of an object's apparent distance are influenced by this muscle action, the visual cues leading to the object, and the anticipated size of the object. These cues may be inadequate or they may be inconsistent with one another, and as a consequence a perception of distance that differs from reality may emerge. The vision of an automobile driver may be momentarily focused on the dashboard displays at a distance of 0.5 m. Refocusing on an object appearing unexpectedly in the road ahead may take as long as 0.4 s. The object may be unfamiliar and helpful visual cues obscured by darkness or fog. The probability of an accident is increased by this briefly held misperception of object size or distance. Some of the characteristics of vision can support a two-parameter model for the probability of success over time in searching with the naked eye for a target alone in a visual field. This simplified model must be elaborated when the proposed application differs greatly from the experimental conditions under which it was validated. This is a common condition when most HFE/E models or data are used. Assuming the search is random and psg is the probability of finding the target during a single glimpse or one fixation, then the probability of finding the target after k glimpses is:

Pk = 1 − (1 − psg)^k      (17.6.1)

Each glimpse plus movement time is T seconds; therefore the elapsed search time is t = kT, and the probability of finding the target by this time is:

P(t) = 1 − e^(−mt)      (17.6.2)

where m ≈ psg/T and the mean time to detection is tmean = 1/m. This model was confirmed for 0.006 ≤ psg ≤ 0.80, different size sectors of the sky, different target sizes, and different target contrasts. In unpracticed random searches for a command in a menu on a monitor, the single-glimpse probability of detection, psg, was 0.08. After practice, psg increased to 0.60. Were the search systematic with no overlap of fixation areas, the probability of detection, Psys, for search for a limited time t would be:

Psys(t) = psg t/T      where Psys ≤ 1      (17.6.3)

n

H 5 a i pi log2 s1/pi 1 1d

(17.6.4)

For the range 0  H  3, there is an empirical equation, known as the Hick-Hyman law for RT, for these manual responses: RT  a  bH

(17.6.5)

The constants a < 200 ms and b < 150 ms are influenced by practice and by the compatibility between the geometry of the physical presentation of the stimulus and that of the mechanism for responding. For rapid discrete or repetitive actions, usually by the hands, movement time (MT) is proportional to the difficulty of the task as defined by Fitts’ law. Equation (17.6.6) defines the index of difficulty, ID, in terms of target width W and movement amplitude A. MTs from a wide range of rapid movements, as in operating a key pad or sorting items into bins, can be described. In applications, d is about 100 ms and c, which depends on the movement geometry, is approximately 200 ms. Estimates of MT enable comparisons to be made among different operating procedures. ID  log2 (2A/W) MT  c  d ID

(17.6.6) (17.6.7)

To be useful, measurements of RT and MT must come from a stable plateau of skilled performance that can be determined by units of error or of time to complete the given task. For many psychomotor tasks, skilled learning follows a power function Tn 5 T1n2b

(17.6.8)

where T1 is the time to perform the task on the first trial. Tn the time to perform the task on the nth trial, and n is the number of trials. In the development of psychomotor skill, 0.2  b  0.6.

(17.6.1)

Each glimpse plus movement time is T seconds, therefore the elapsed search time is t  kT; the probability of finding the target by this time is: P(t)  1  emt

times and movements, by continuous closed-loop control known as manual control or tracking, by complex open-loop activities, and by various combinations. Manual reaction time (RT), like the eyes’ fixation time, increases with uncertainty. This increase in RT can be quantified when a subject must select the correct manual response out of n possible responses to a corresponding visual signal out of n probable signals. For situations in which the independent or the sequential probability, pi, for the occurrence of each of these n signals can be estimated, uncertainty can be expressed as information theory entropy, H:

(17.6.3)

where Psys  1. This model for the probability of finding targets becomes more elaborate when the target must be found from among camouflaged decoys as well as from among decoys differing from it by contrast, shape, area, color, and velocity as well as by area of search, numbers and spatial density of decoys and of targets, and the use of auditory aids. PSYCHOMOTOR PERFORMANCE
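The search model of Eqs. (17.6.1) to (17.6.3) is easily exercised numerically. The sketch below (Python) uses the single-glimpse probabilities quoted above for menu search and an assumed glimpse time T; the function names and the 5-s search time are illustrative only.

```python
# Minimal sketch of the single-target visual-search model, Eqs. (17.6.1)-(17.6.3).
# p_sg values (0.08 unpracticed, 0.60 practiced) are from the text above;
# T = 0.3 s per glimpse is an assumed fixation-plus-movement time.
import math

def p_random_after_k_glimpses(p_sg, k):
    """Eq. (17.6.1): probability of detection after k random glimpses."""
    return 1.0 - (1.0 - p_sg) ** k

def p_random_by_time(p_sg, T, t):
    """Eq. (17.6.2): P(t) = 1 - exp(-m*t), with m ~ p_sg / T."""
    m = p_sg / T
    return 1.0 - math.exp(-m * t)

def p_systematic_by_time(p_sg, T, t):
    """Eq. (17.6.3): systematic, non-overlapping search, capped at 1."""
    return min(1.0, p_sg * t / T)

if __name__ == "__main__":
    T = 0.3                      # s per glimpse (assumed)
    t = 5.0                      # s of search (assumed)
    for p_sg in (0.08, 0.60):    # unpracticed vs. practiced menu search
        print(f"p_sg={p_sg}: random {p_random_by_time(p_sg, T, t):.2f}, "
              f"systematic {p_systematic_by_time(p_sg, T, t):.2f}")
```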

PSYCHOMOTOR PERFORMANCE

A program of skilled psychomotor behavior can be constructed in several ways: by incrementally aggregating a sequence of discrete reaction times and movements, by continuous closed-loop control known as manual control or tracking, by complex open-loop activities, and by various combinations of these. Manual reaction time (RT), like the eye's fixation time, increases with uncertainty. This increase in RT can be quantified when a subject must select the correct manual response out of n possible responses to the corresponding visual signal out of n probable signals. For situations in which the independent or the sequential probability pi of the occurrence of each of these n signals can be estimated, uncertainty can be expressed as the information-theory entropy H:

H = Σ(i = 1 to n) pi log2 (1/pi + 1)        (17.6.4)

For the range 0 ≤ H ≤ 3, there is an empirical equation, known as the Hick-Hyman law, for the RT of these manual responses:

RT = a + bH        (17.6.5)

The constants a ≈ 200 ms and b ≈ 150 ms are influenced by practice and by the compatibility between the geometry of the physical presentation of the stimulus and that of the mechanism for responding. For rapid discrete or repetitive actions, usually by the hands, movement time (MT) is proportional to the difficulty of the task as defined by Fitts' law. Equation (17.6.6) defines the index of difficulty ID in terms of the target width W and the movement amplitude A. MTs for a wide range of rapid movements, as in operating a keypad or sorting items into bins, can be described in this way. In applications, d is about 100 ms and c, which depends on the movement geometry, is approximately 200 ms. Estimates of MT enable comparisons to be made among different operating procedures.

ID = log2 (2A/W)        (17.6.6)
MT = c + d ID        (17.6.7)

To be useful, measurements of RT and MT must come from a stable plateau of skilled performance, which can be identified from units of error or of time to complete the given task. For many psychomotor tasks, skill learning follows a power function

Tn = T1 n^(-b)        (17.6.8)

where T1 is the time to perform the task on the first trial, Tn is the time to perform it on the nth trial, and n is the number of trials. In the development of psychomotor skill, 0.2 ≤ b ≤ 0.6.
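The reaction-time and movement-time relations of Eqs. (17.6.4) to (17.6.8) can be combined into rough task-time estimates. The sketch below uses the representative constants quoted above (a ≈ 200 ms, b ≈ 150 ms, c ≈ 200 ms, d ≈ 100 ms); the signal probabilities, target geometry, and learning exponent are assumed for illustration only.

```python
# Minimal sketch of Eqs. (17.6.4)-(17.6.8); constants are the representative
# values quoted in the text, all other numbers are illustrative assumptions.
import math

def entropy_H(probabilities):
    """Eq. (17.6.4): H = sum pi*log2(1/pi + 1), in bits."""
    return sum(p * math.log2(1.0 / p + 1.0) for p in probabilities if p > 0.0)

def reaction_time_ms(H, a=200.0, b=150.0):
    """Eq. (17.6.5), Hick-Hyman law: RT = a + b*H (roughly 0 <= H <= 3)."""
    return a + b * H

def movement_time_ms(A, W, c=200.0, d=100.0):
    """Eqs. (17.6.6)-(17.6.7), Fitts' law: MT = c + d*log2(2A/W)."""
    ID = math.log2(2.0 * A / W)          # index of difficulty, bits
    return c + d * ID

def trial_time(T1, n, b=0.4):
    """Eq. (17.6.8), power law of practice: Tn = T1 * n**(-b)."""
    return T1 * n ** (-b)

if __name__ == "__main__":
    p = [0.25] * 4                                    # four equally likely signals
    print("RT  =", round(reaction_time_ms(entropy_H(p))), "ms")
    print("MT  =", round(movement_time_ms(A=200.0, W=20.0)), "ms")  # 200-mm reach, 20-mm key
    print("T20 =", round(trial_time(T1=10.0, n=20), 1), "s")        # 20th practice trial
```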

SKILL AND ERRORS

The highest skill level in tracking is attained after extensive training, when an operator fully familiar with the dynamics of the machine, the manipulator under her control, and the appropriate responses to the input signals can extract whatever coherence is present in these signals, develop pathways that reorganize the perceptual system, adapt her behavior to create a repertory of special responses, and select from this repertory the appropriate response for the best MMS performance. The procedure resulting in this performance is known as successive organization of perception (SOP). SOP results in a virtual display that provides the operator with the information that would have been present in the actual display had it been augmented. In closed-loop control situations, the precognitive-mode response can be anticipatory and compensate for inherent human control lags. In an open-loop mode, the response may be programmed, as in executing the sequential actions in preparing to stop a car on approaching a red light or otherwise responding to behavioral cues.

As psychomotor skills develop with training, errors diminish and a plateau of stable, skilled performance is reached. Transient errors occur and are shed as training progresses. Posttraining errors are departures from the expected performance of a motivated, skilled operator. These errors may place the MMS in danger. Their number increases under the following pressures on human performance: divided attention, multitasking, and physiological impairment such as fatigue, hypoxia, and alcohol or drug use. Errors can be divided into two classes: (1) mistakes, errors in the thought processes behind an action or decision, and (2) slips, errors in the detection or interpretation of sensory signals and unintended responses. The thinking processes that lead to mistakes have been reconstructed; for example, inappropriate expectations can result in misinterpreting a situation, acting accordingly, and making a malign error. The basic sources for such reconstructions are accident reports, anecdotes, interviews with participants, and introspection on the part of the analyst. If the mistake is made under time pressure, it is difficult for the operator to reexamine the situation, sort out his misconceptions, and examine his options in a deliberate manner. Mistakes can be decreased by training methods, personnel selection, and operating procedures that emphasize the examination of options and impose redundancy for critical actions to lessen the chance that operators will persevere in erroneous behavior.

By applying HFE/E findings in the selection and positioning of information displays, controls, monitors, and keyboards, and by accommodating the physical dimensions of the operator, the designer can make slips such as misreading a display or reaching for the wrong switch less likely. In the unlikely event that an error does occur, emergency corrective procedures and warnings must be part of the MMS design. Warnings can address different sensory modalities with audio signals, speech, annunciators, lights, and vibration, either separately or in various combinations determined by the task being performed and the environment of the person or persons to be warned. The probabilities of slips occurring can be estimated from the existing databases in HFE/E and from manned and unmanned simulator studies. These estimates, together with the failure probabilities for the inanimate components of the MMS, can be incorporated into flowcharts for the MMS to determine those locations where further risk-management procedures are necessary.

MANUAL CONTROL

The body of knowledge in manual control developed from the USAF's mid-twentieth-century interest in mathematically modeling the dynamic response of pilots in order to provide aeronautical engineers with data for designing aircraft with high performance, stability, and favorable evaluations from their pilots. Classical control theory provided the structure for this model. The core was a quasi-linear transfer function, or describing function, for the pilot's dynamics, represented as Yp(jω), Yp(s), or Yp, depending on the projected analysis. The quasi-linear human controller's response to an input comprises two parts: describing function

Fig. 17.6.1 Simplified block diagram for manual control.

components that correspond to the response of the equivalent linear elements driven by that input, and a "remnant" component that represents the difference between the response of the actual system and that of a system composed of equivalent linear elements. Test pilots operating fixed- and moving-base flight simulators as well as actual aircraft provided the empirical data for Yp(jω). Aircraft dynamics, the controlled element Yc(jω), confine the dynamic response of motivated pilots to a limited range, so that intersubject variability is suppressed. Consequently, adequate measurements of Yp(jω) can be obtained from a small number of skilled, motivated test subjects.

The compensatory mode, Fig. 17.6.1, the most common and most extensively studied mode of closed-loop control, is present in vehicle control and many other configurations. In this mode, the human tracks a moving visual signal so as to keep its position as close to a reference marker as she can. This signal is the MMS's error e(t), which is the difference between i(t), the input to the MMS, and m(t), the MMS output. Detecting cyclical and coherent structures in e(t), and from these developing estimates of m(t) and i(t), are the first steps in the SOP progression of evolving virtual displays. Maintaining system stability and minimizing the closed-loop system's error are basic design goals for closed-loop systems. If the open-loop gain |Yp(jω)Yc(jω)| in the compensatory mode is too high, instability can occur in the closed-loop task. An expert pilot may unwittingly regress from a higher level in the SOP procedure to the compensatory mode. In this mode the pilot may inadvertently generate excessive open-loop gain and destabilize an otherwise stable system. Potentially destructive pilot-induced oscillations of the aircraft about its flight path are the result. The compensatory, pursuit, and precognitive modes of the SOP procedure are presented in Fig. 17.6.2. In the pursuit mode the operator

Fig. 17.6.2 Compensatory, pursuit, and precognitive pathways.

attempts to follow i(t) with a cursor driven by the MMS dynamics. The perceptual pathways composing SOP for the human controller can be separated into sensory mechanisms that generate internal signals and a central processor that integrates them, extracts coherence, and determines the human operator's dynamic responses. The describing functions and remnant depend explicitly on the task variables, the MMS inputs and disturbances, the controlled-element dynamics, and the operator-centered variables. The effects of these variables have been integrated into a set of rules for applying and adjusting human-operator describing functions for compensatory closed-loop control. The following conditions must apply before using these rules:
1. The forcing functions that act as inputs to the MMS are unpredictable, have bandwidth ωi below 6.2 rad/s, and have continuous waveforms.
2. The controlled element is a low-order system, or can be so approximated, and has no highly resonant modes over the input bandwidth.
3. The display and controls are reasonably well scaled and smooth, so as to minimize the effects of thresholds, detents, and friction.
4. The other task demands are sufficiently light to permit the operator to devote the majority of his or her attention to minimizing the displayed error.

MCRUER'S RULE

Under the foregoing conditions, which are commonly met in most operational control tasks, theory, experiments, and practice have shown that the operator compensates for the dynamics of the controlled element so that the open-loop characteristics of the MMS act like an integrator in series with an effective time delay in the frequency range most critical to system stability and performance. The frequency within this range at which the open-loop gain of the MMS is unity is the crossover frequency ωc. By compensating in this way, the operator creates a desirable, stable control system in accordance with good engineering design. This behavior by the human controller is known as McRuer's rule, and it is expressed by the crossover model:

Yp(jω)Yc(jω) ≈ (ωc/jω) e^(-jωτ)

where τ is the effective time delay.

Attitude control of a spacecraft with damper off can be approximated in the region of human control by Yc(jω) = Kc/(jω)². Maintaining stability demands more of the operator than does automobile heading control, for the operator must generate a low-frequency lead. The cost incurred by this control is an increase in the effective time delay τ to 0.32 s. A significant consequence is the reduction of the maximum system crossover frequency, and thus of the maximum effective closed-loop system bandwidth, to 3.3 rad/s. For a single-loop control system of maximum crossover frequency ωc, the attainable relative mean-square error coherent with an input of bandwidth ωi and variance σi² can be estimated.

The crossover model is a first approximation in the process of elaborating models for human operators of increasingly complex control systems in which there may be competing demands for the operator's attention. The process has been described in a reductionist fashion by analyzing the input-output behavior of the human into subsystem components. The sensory-mechanism subsystems in Fig. 17.6.2 include the optics, the retina, the oculomotor muscles, eye dynamics, and the dynamics of the linear accelerations and angular velocities acting on the vestibular and kinesthetic sensors. The central elements act to integrate and to equalize the outputs of these subsystems so as to generate commands to the neuromuscular actuation system, which is itself divisible into dynamic subsystems. These components are adjusted according to McRuer's rule so that the crossover model obtains, and they are then integrated to describe the operator's behavior in a structural-isomorphic model. In contrast, the algorithmic or optimal control model (OCM) proceeds in a holistic fashion and mimics the human's total response by applying optimal-control computational methods to those human properties subject to adaptation and hence optimization. The OCM proceeds by minimizing a quadratic performance index with an optimum linear predictor operating on estimated delayed state vectors to emulate the human's control actions. The computational procedure results in very high-order transfer functions for the human. PC-compatible programs, for example, Program CC Version 5, have been written for both the classical model and the OCM. Since McRuer's rule is the best understood and most widely applicable, empirically tested description of human dynamics, the crossover model serves as the standard for comparing the classical model with the OCM. This has been done for Yc = Kc/s, and the comparison is good for frequencies less than 15 rad/s.

Manual control models present one perspective of the virtual human. Other perspectives are reach, strength, movements, and eye point of regard. Digital models exist which allow 3-D virtual humans to be placed in 3-D CAD-generated virtual environments. In the design of automobile, aircraft, spacecraft, or submarine interiors, as well as factory workstations and large construction machinery, engineers are able to examine a human's activity in virtual environments and to sort through design options. One such digital human model, Jack/Jill, manufactured by EDS, has 69 segments, 68 interconnected joints, a 17-segment spine, 16-segment hands, coupled shoulder/clavicle joints, and 135 degrees of freedom to mimic the motions and positions of a wide range of humans.
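A minimal numerical illustration of the crossover model follows, using the representative values quoted above for the spacecraft attitude task (ωc ≈ 3.3 rad/s, τ ≈ 0.32 s). It checks the unity open-loop gain at crossover and the phase margin implied by the effective time delay; it is a sketch, not a validated pilot model.

```python
# Minimal sketch of the crossover model Yp*Yc = (wc/jw)*exp(-j*w*tau).
# wc and tau are the representative values quoted in the text; the script only
# evaluates the open-loop frequency response and its implied phase margin.
import cmath
import math

def open_loop(w, wc, tau):
    """Crossover-model open-loop response Yp(jw)Yc(jw) at frequency w (rad/s)."""
    jw = 1j * w
    return (wc / jw) * cmath.exp(-jw * tau)

def phase_margin_deg(wc, tau):
    """At w = wc the gain is 1; margin = 180 - 90 - wc*tau (converted to degrees)."""
    return 180.0 - 90.0 - math.degrees(wc * tau)

if __name__ == "__main__":
    wc, tau = 3.3, 0.32
    print("gain at crossover :", round(abs(open_loop(wc, wc, tau)), 3))   # ~1.0
    print("phase margin (deg):", round(phase_margin_deg(wc, tau), 1))     # ~30 deg
```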
P5d Wc 5 Q MsH4 2 H3d

where M is the molecular weight of the gas whose enthalpy per unit mass is H. The other symbols are defined by reference to Fig. 19.2.1.

REFRIGERATION METHODS (See also Sec. 4 and Sec. 19.1, “Mechanical Refrigeration.”)

Cryogenic refrigerators (cryocoolers) may be classified by (1) the functions

they perform (e.g., the delivery of liquid cryogens, the separation of mixtures of gases, and the maintenance of spaces at cryogenic temperatures), (2) their refrigerating capacities, and (3) the temperatures they reach. Large industrial-sized plants (a) deliver LNG (~120 K), LO2 (~83 K), LN2 (~77 K), LH2 (~20 K), and LHe (~4 K); (b) separate gaseous mixtures, e.g., the constituents of the atmosphere, H2 from petroleum-refinery gases, H2 and CO from coke-oven and coal-water-gas reactors, and He from natural gas; and (c) provide refrigeration to maintain spaces at low temperatures. For the latter, i.e., (c), a unit has been manufactured and installed that delivers 13 kW of refrigeration at less than 3.8 K. An important area of refrigerator development that is now being commercially exploited is laboratory-sized cryogenic refrigerators for laboratory research and development at LHe temperatures (~5 K). These refrigerators are used for many different purposes, refrigerating, for example, high-field superconducting electromagnets, LHe bubble chambers for high-energy (nuclear) particle research, experimental superconducting electric generators and motors, and superconducting magnets for levitation of railroad trains. Another area of commercial exploitation that is important for the progress of cryogenic physics research is the development of refrigerators for the continuous production of refrigeration at temperatures below 1 K. These include L3He evaporation refrigerators and L3He-L4He dilution refrigerators that reach temperatures of 0.4 and 0.003 K, respectively. Of the various methods of refrigeration, those most commonly used to produce temperatures as low as 1 K are (1) the evaporation of a volatile liquid (referred to as the cascade method when applied in several successive stages using progressively lower-boiling liquids), (2) Joule-Thomson (isenthalpic) expansion of a compressed gas, and (3) an adiabatic (isentropic) expansion of a compressed gas in an expansion engine with the performance of external work.

Fig. 19.2.1 (a) Schematic of a modified (isothermal compression) Brayton cycle, and (b) the process path. Wc is the energy (work) to drive the compressor C. Q is the heat absorbed by the refrigerator from the refrigerated area R.A., where the working fluid passes from state 3 to state 4. The heat-exchanger (H.E.) processes are isobaric. The dashed line 2–3′ is the ideal isentropic expansion; 2–3 represents the actual expansion in the expander E.

The Stirling refrigerating engine and modifications of it are used in a number of commercial makes of laboratory and miniature-sized refrigerators. The Stirling-cycle refrigerator is well suited to (1) the liquefaction of air on a laboratory scale (~7 L/h), (2) the recondensation of evaporated liquid cryogens, and (3) the refrigeration of closed spaces where the refrigeration load is not very large. These laboratory scale refrigerators supply ~1 kW of refrigeration at ~80 K and ~2 kW at 160 K. They are used with liquid air fractionating columns for the production of LN2 (~7 L/h) and LO2 (~5 L/h). The Stirling-cycle refrigerator consists of a piston for compressing isothermally the working fluid (usually He), and a displacer which can operate in the same cylinder with the piston. The displacer and piston are connected to the same electrically driven shaft but displaced in phase by 90. The displacer pushes compressed gas isochorically from the warm region where it was compressed, through the regenerator into the cold region where the compressed gas is expanded isothermally doing work on the piston and producing refrigeration. The regenerator consists of a porous mass (packed metal wool of high heat capacity) in which a steep temperature gradient is established between the warm region of compression and the cold region of expansion. The displacer returns the gas after expansion to the region for compression through the


regenerator in which it is warmed to the temperature of the isothermal compression. The refrigerant (He) is recycled. The regenerators in the larger refrigerators are usually placed around the outside of the engine cylinders, but in small capacity, lower temperature refrigerators they are put inside the displacers. In Stirling refrigerators that reach temperatures of 17 K and produce 1 W of refrigeration at 25 K, the compressed He is expanded in two or three stages at temperatures intermediate between room temperature and the lowest temperature reached. There is only a single piston for compression. The expansion in stages allows part of the heat that leaks into the coldest region of a single stage engine to be absorbed and removed at a higher (intermediate) temperature where the efficiency for transferring heat is greater. The Vuilleumier refrigerating engine is a modification of the Stirling refrigerator. It resembles two single Stirling engines placed back to back. One of the engines operates as a heat engine and the other as a refrigerator. The lower temperature at which the heat engine discharges its waste heat, is also the top temperature at which the refrigerator engine discharges its waste heat. Hence, the Vuilleumier refrigerator operates at elevated, intermediate and low temperature levels which may be 800, 300 (ambient), and 90 K, respectively. Each engine has a cylinder, a displacer, and a regenerator, and the two engines are connected to a common crankshaft but displaced by 90. Very little external power is needed to operate the two displacers which are operated in sinusoidal motion, displaced 90 in phase. The working fluid (He) is recycled. The Gifford-McMahon refrigerator is another modification of the Stirling-cycle refrigerator. It may be single stage for refrigeration at higher temperatures (~80 K), or multistage for lower temperatures. Each stage has a cylinder, a displacer and a regenerator. The displacer pushes gas (usually He) under a very small head of pressure from the top of the cylinder, where the temperature is ambient, through the regenerator, into the bottom of the cylinder which is the cold region of the refrigerator. Compressed He is supplied from an external source— the refrigerator has no piston for compressing gas. Only a small source of external energy is needed for driving the displacer which has very little work to do in overcoming the forces of friction. The valves that control the admission of compressed He to the top of the cylinder, and the expansion of the cold compressed He from the bottom of the cylinder are externally operated by a drive mechanism. The cycling rate is 2 Hz. The expansion of the cold He is of the adiabatic-Simon-bomb type. Expanded gas is discharged from the refrigerator. Some applications of cryogenic refrigerators require very high reliability without maintenance. The fewer the working parts, the more likely is the needed reliability to be achieved. The pulse-tube and thermoacoustic refrigerators are advantageous in this respect in that they have only one moving part, and that part operates at ambient temperature. Pressure oscillations are introduced into a tube with a closed end and fitted with the proper arrangement of heat exchangers. For the optimum frequency of the pulsation, a temperature gradient is established between the heat exchangers, and this gradient can pump heat from a low temperature to a higher one. 
A number of different designs have resulted, including single-stage and multistage units, and operating frequencies have ranged from about 1 Hz up to acoustic frequencies. This type of refrigerator is capable of reaching low temperatures down to 28 K with a single-stage unit, 26 K with a two-stage unit, and 11.5 K with a three-stage unit, all with ambient temperature heat sinks. Present research is also focused on the use of an acoustically driven pulse-tube refrigerator that would have no moving parts and no close tolerances of the components, thus making it inexpensive and easy to produce. Magnetic refrigeration, originally proposed in 1925, depends upon the ability of paramagnetic materials at low temperature (or ferromagnetic materials near their Curie temperatures) to warm up (or expel heat at constant temperature) with the application of a magnetic field. Conversely, these materials will cool (or absorb heat at constant temperature) upon removal or reduction of the magnetic field. With the use of a paramagnetic material magnetized at a higher temperature of about

1.5 K, temperatures of about 0.001 K have been reached by a single demagnetization. Magnetic refrigeration is applicable over a wide range of temperatures, and recent research work is leading to continuous application of this process by mechanisms that promise a considerable increase in efficiency over that of more conventional refrigerators. 3 He-4He dilution refrigerators for reaching temperatures between 0.01 and 0.3 K and generating continuously as much as 750 ergs/s of refrigeration are commercially available. Refrigeration results from the solution of L3He in L4He, the heat of solution being negative. Figure 19.2.2 is a schematic diagram of a dilution refrigerator. 3He vapor from the “still” at ~1 K (the upper operating temperature of the refrigerator) is collected and returned by a pump to the “mixing” chamber where solution takes place. The 3He arrives at the mixing chamber as L3He near the mixing chamber temperature (the lower operating temperature of the refrigerator) after flowing through a heat exchanger counter to the flow

Fig. 19.2.2 Schematic diagram of the mixing chamber and still of a 3He-4He dilution refrigerator.

of cold 3He-4He solution on its way to the “still.” In the electrically heated still, 3He is evaporated from the solution. The evaporated 3He is withdrawn by the collecting pump (diffusion) that returns the 3He to the mixing chamber. At 1 K the vapor pressure of L4He is negligible. L4He in the still, freed of 3He, returns to the mixing chamber by super-fluid flow, building up in the mixing chamber an osmotic pressure that drives 3 He atoms towards the still against the forces of viscous flow. In the mixing chamber there is a two-phase separation of 3He and 4 He. A layer of nearly pure L3He of lower density rides on a heavier L4He-rich layer containing ~ 6 percent 3He. At temperatures lower than 0.87 K liquid solutions of 3He and 4He separate into two liquid phases, one 3He-rich and the other 4He-rich, in thermodynamic equilibrium. At 0 K, the 3He-rich phase is 100 percent 3He, whereas the 4He-rich phase (in equilibrium with L3He phase) contains ~ 6 percent 3He. In the mixing chamber, L3He enters the upper 3He-rich phase; solution of 3He in 4He takes place at the interphase boundary. The lower (denser) 4He-rich phase connects with the still. Starting at 50 mK (a temperature reached in a 3He-4He dilution refrigerator), temperatures in the 2- to 3-mK range are attainable with a Pomeranchuk refrigerator. Refrigeration is generated by compression of a mixture of liquid and solid phases of 3He at pressures in excess of 28.9 atm, the minimum P at which these phases can coexist in equilibrium. The entropy of solid 3He exceeds the entropy of L3He, which is contrary to the normal behavior for other substances. This occurs in 3He because of its nuclear magnetic properties. The 3He nucleus is magnetic and at temperatures in the mK range and above, in solid 3He, the nuclear


moments are randomly oriented whereas in L3He, at temperatures lower than ~ 0.3 K, the nuclear moments are paired or partially paired, antiferromagnetically. An adiabatic increase of P converts liquid to solid with a reduction in T, whereas an isothermal increase of P results in an absorption of heat. 3He, condensed, is confined in a container with flexible metal walls in the “cold” region of a 3He-4He dilution refrigerator. The 3He container is surrounded with L4He which serves as the pressure transmitting fluid. GAS LIQUEFACTION

A conventional method of gas liquefaction utilizes a gas compressor, a countercurrent heat exchanger, and a throttling valve through which the gas expands isenthalpically (Joule-Thomson) and cools if it has already been cooled below its inversion temperature. After expansion, the cold gas returns through the heat exchanger, in which it exchanges heat with the countercurrent compressed gas flowing to the throttling valve. It leaves the heat exchanger near the temperature of the entering compressed gas. The temperature decreases at the throttling valve until the condensing temperature is reached, and then liquefaction occurs at a rate determined by the rate of refrigeration. The rate of liquefaction x, in pounds per hour, is

x = {[H(expanded gas out) - H(compressed gas in)]w - q}/[H(compressed gas in) - H(liquid following valve)]

where the H's are enthalpies in Btu per pound for the final heat exchanger, w is the flow rate in the same units as x, and q is the heat leak in Btu per hour from outside into the heat exchanger. For an ideal heat exchanger the expanded gas leaves at the temperature of the entering compressed gas. In practice, there is a small difference in temperature, which represents a small loss of refrigeration. For ideal gases, H is independent of P for a given T, and hence no liquefaction of an ideal gas results. The H's of air, O2, N2, A, CO2, the hydrocarbons, and the normal refrigerants decrease with increasing pressure at ambient T, except at very high P's, and these gases are liquefiable by this kind of isenthalpic (Joule-Thomson) expansion. The H's of H2 and He at ambient T, however, increase with increasing P, even at low P's, and hence an auxiliary mechanism is required for the liquefaction of these gases. In one method, the stream of compressed gas is split in the liquefier, and a part goes to an engine in which it is cooled by an isentropic expansion with the performance of work. This part of the flow, thus cooled, is sent to a heat exchanger, where it flows counter to the other fraction of compressed gas, cooling it to a temperature below the inversion temperature (see Sec. 4). This flow of compressed gas, thus precooled, is run through another and final heat exchanger with a throttling valve at its lower end, with the result that the quantity x is liquefied. Another method of precooling H2 and He below their inversion temperatures makes use of a boiling liquid cryogen through which the compressed gas flows in a heat exchanger. LN2 is used for precooling H2, and LH2 for He.
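A minimal sketch of the liquefaction-rate balance above follows. The enthalpy values in the example are placeholders, not property data; in practice they would be read from the fluid's enthalpy charts (see Sec. 4).

```python
# Minimal sketch of the liquefaction-rate balance quoted above:
#   x = {[H(expanded gas out) - H(compressed gas in)]*w - q}
#       / [H(compressed gas in) - H(liquid following valve)]
# All enthalpies in Btu/lb; w and x in lb/h; q in Btu/h.  The numbers below are
# illustrative placeholders only.
def liquefaction_rate(h_exp_out, h_comp_in, h_liquid, w, q):
    """Liquid produced (same flow units as w) for the final heat exchanger."""
    return ((h_exp_out - h_comp_in) * w - q) / (h_comp_in - h_liquid)

if __name__ == "__main__":
    # Assumed values: 10 Btu/lb Joule-Thomson cooling effect, 85 Btu/lb between
    # compressed gas and liquid, 100 lb/h flow, 50 Btu/h heat leak.
    x = liquefaction_rate(h_exp_out=200.0, h_comp_in=190.0, h_liquid=105.0,
                          w=100.0, q=50.0)
    print(f"liquid yield: {x:.2f} lb/h")
```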

APPLICATIONS OF CRYOGENICS

Gases, such as O2, N2, natural gas, H2, and He, are liquefied for transportation because of the gain in density at low pressure. Shipments of LH2 have been made by trailer trucks in quantities of 13 × 10^3 gal (50 m3) for thousands of miles and by railway tank cars holding as much as 28 × 10^3 gal (100 m3). LHe is regularly shipped in quantities from 25 gal (0.1 m3) to 10,000 gal (40 m3). Liquid cryogens are used as liquid refrigerants. The aerospace, steel, and bottled-gas industries are the principal users of liquefied cryogens. Important applications as coolants can be found in vacuum technology, electronics, biology, medicine, and metal forming.

In recent years, large superconducting magnets have been undergoing development for a variety of applications including fusion reactors, energy storage, electric-power transmission, high-energy physics experiments, and magnetically levitated trains. Magnets of Nb-Ti are available with fields up to 80 kG (8 T) and large magnetic-field volumes (1 m3). Higher-field magnets [up to 150 kG (15 T)] of Nb3Sn are available with smaller magnetic-field volumes (10^-4 m3). The Los Alamos National Laboratory installed a 30-MJ superconducting magnetic energy-storage system in the Bonneville Power Administration's Tacoma, Washington, substation to provide stabilization of utility long-line power-transmission systems. The response characteristics of the unit have been demonstrated on the utility's power grid, and the unit has been run successfully for extended periods. Two 100-m-long superconducting ac transmission cables have been successfully tested at Brookhaven National Laboratory at 4,100 A and the equivalent of 138 kV three-phase. At the other end of the size spectrum, Josephson-junction devices are being considered for applications such as radiation detectors, highly precise and accurate electrical measurements, geophysical instrumentation, medical instrumentation, and computer memory cells.

Cryogenics is also becoming increasingly important in the storage and transport of energy. Large quantities of natural gas are routinely transported as a liquid (LNG). Peak shaving in municipal gas systems is accomplished by storage of LNG. If, due to depletion of fossil fuels, hydrogen becomes an important synthetic portable fuel, cryogenic hydrogen will play an important role, as it already does in space applications including the shuttle. Another potential application is aircraft fuel. This may be extended to surface transportation applications.

PROPERTIES OF SOLIDS AT LOW TEMPERATURES
(See also Sec. 4.)

Specific heats of solids in general decrease with decreasing T, becoming zero at 0R (Fig. 19.2.3). The downward approach to Cp = 0 at 0R is interrupted for some substances (principally compounds) by "bumps" on the curve (excess Cp that rises to a maximum and then decreases). Paramagnetic salts undergoing transitions to either a ferromagnetic or an antiferromagnetic state are examples. This excess specific heat of a paramagnetic salt is connected with the effectiveness of the salt for reaching low temperatures by the method of adiabatic demagnetization. There are also transitions in solids from a more orderly to a less orderly arrangement of atoms and molecules in the lattice that give rise to excess Cp. The transition in solid ortho- and normal H2 below 20R (11 K) is an example. In Fig. 19.2.3, Cp's are plotted for various materials.

Fig. 19.2.3 Specific heats of solids at low temperatures.

Heat is transferred in dielectrics by lattice vibrations, or waves. In good electrical conductors, heat is transferred principally by the conduction electrons, and the thermal and electrical conductivities are related by the Wiedemann-Franz law: k/(σT) = const. This means that the ratio of the thermal and electrical conductivities at a given T is approximately the same for the good conductors. In the poor electrical conductors, alloys for example, both lattice waves and conduction electrons play important parts in the transfer of heat. The k's of "pure" dielectrics and "pure" metals rise with increasing T from k = 0 at 0R (proportional to T for metals and to T^3 for dielectrics), reach a maximum, normally between 10 and 100R (5 and 55 K), and then decrease to a value approximately independent of T (Fig. 19.2.5 and ice in Fig. 19.2.4). The k's of alloys (Fig. 19.2.4) are an order of magnitude smaller than those of "pure" metals and do not exhibit the maxima characteristic of the "pure" metals at low T. Lattice disorder introduced by alloying, even in small amounts, or by working a metal, even a pure metal, reduces k. Annealing, in general, raises k.
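As a rough numerical illustration of the Wiedemann-Franz relation quoted above, the sketch below estimates the thermal conductivity of a good conductor from its electrical resistivity. The Lorenz constant and the room-temperature resistivity of copper used here are assumed handbook-level values, not data from this section.

```python
# Rough illustration of the Wiedemann-Franz relation k/(sigma*T) = const.
# The constant is not given in the text; the theoretical free-electron value
# L0 = 2.44e-8 W*ohm/K^2 is assumed here.
L0 = 2.44e-8   # W*ohm/K^2, assumed Lorenz number

def thermal_conductivity(rho_ohm_m, T_kelvin):
    """Estimate k (W/m*K) of a good conductor from its electrical resistivity."""
    return L0 * T_kelvin / rho_ohm_m

if __name__ == "__main__":
    # Copper near room temperature: resistivity ~1.7e-8 ohm*m (assumed value);
    # the estimate (~430 W/m*K) is close to the measured ~400 W/m*K.
    print(f"k(Cu, 300 K) ~ {thermal_conductivity(1.7e-8, 300.0):.0f} W/(m*K)")
```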

Fig. 19.2.4 Thermal conductivity of solids at low temperatures, part 1.
Fig. 19.2.5 Thermal conductivity of solids at low temperatures, part 2.

At 0R, an absolutely pure and perfect single crystal of metal would have zero electrical resistance, Re = 0. Electrical resistance arises from the scattering of the conduction electrons as they move through the lattice of metal ions under the influence of an externally applied electric field. Scattering arises for two reasons: (1) the thermal vibrations of the lattice, whose amplitudes increase with T at a rate proportional to the specific heat C of the metal, and (2) the imperfections in the regularity of the lattice as caused by impurity atoms (solid-solution alloys included), lattice vacancies, dislocations, and grain boundaries. Resistances for "pure" metals increase with T and are roughly proportional to CT, except at very small T's, where they are proportional to T^5 for absolutely pure crystals. The resistance due to impurities and imperfections is very roughly independent of T (Matthiessen's rule). For very pure metals, Re is approximately constant below 20R (11 K). Magnetic impurities can give rise to a minimum in resistivity [usually below 36R (20 K)] called the Kondo effect. Resistivity in Ω·cm (Fig. 19.2.6) is the Re of a 1-cm cube.

Fig. 19.2.6 Electric resistivity of solids at low temperatures.

Some metals, including elements (none from column 1 of the Mendeleeff table or the ferromagnetic elements), intermetallic compounds, metal alloys, and a number of oxides exhibit the phenomenon of superconductivity, characterized by zero dc electrical resistance from 0R up to a transition temperature Tc, at which the normal resistance (the value extrapolated from higher T) appears. In pure single crystals the transition range ΔT, from Re = 0 to the normal value, may be as small as a few millidegrees. Until the discovery of the so-called high-temperature superconductors (HTSCs), the highest Tc in a practical superconductor was that of the compound Nb3Ge, at 41.9R (23.3 K). The search for higher-Tc materials continues, with Tc's already reported as high as 239R (133 K). Closed circuits consisting entirely of superconductors can support a persistent, resistanceless current without an external source of voltage. A superconducting circuit therefore maintains constant the value of the total magnetic flux that was enclosed by the circuit at the time it entered the superconducting state. Hence, superconducting materials influence the magnetic fields in their environment. For this reason, their use in the construction of equipment and apparatus must sometimes be avoided when expected temperatures could be below their Tc. Lead brasses and some solders, in particular Pb-Sn alloys, become superconductors. Superconductivity in materials designated Type I superconductors is characterized by (1) perfect electric conductivity (Re = 0) and (2) zero internal magnetic induction (B = 0) when H (external) is not zero (the Meissner effect). Persistent currents at the surface of the specimen shield the interior from H (external). H parallel to the surface of a specimen is continuous at the surface and falls off exponentially below the surface. The penetration depth of H for perfect (chemically and mechanically) specimens is of the order of 5 × 10^-8 m. The penetration depth is larger for alloys, for specimens with lattice imperfections, and for all superconducting specimens near their superconducting transition temperatures Tc.


The normal-to-superconducting transition temperature Tc is also a function of H (external) as well as of the current being carried by the superconductor. For a Type I superconductor, the effect of H (external) is expressed by Tc = Tc0[1 - (Hc/H0)]^1/2, where Tc0 is the value of Tc at essentially zero H and H0 is the value of Hc at 0 K. For every H < Hc, a further decrease in Tc occurs when the superconductor is carrying a current, the maximum current density for maintaining the superconducting state being denoted Jc. Thus the superconducting state exists when T < Tc, H < Hc, and J < Jc, and the normal, or resistive, state exists when either T > Tc, H > Hc, or J > Jc. Because Tc decreases with increases in H and J, the practical application of a superconductor requires that it be maintained at a temperature considerably below its Tc. Tables 19.2.1 and 19.2.2 are only representative. The number of superconductors, including compounds and alloys, runs into the hundreds. Of the chemical elements, 38 are known to become superconductors. Some that are not superconductors at normal pressures have high-pressure allotropic modifications that are superconductors, e.g., Bi, Si, and Ge at pressures of ~25, 130, and 120 × 10^8 Pa, respectively. Many of the superconducting alloys have nonsuperconducting constituents, and there are compounds all of whose constituent elements in their pure state are nonsuperconductors (see Table 19.2.2). It is interesting that the good conductors Cu, Ag, and Au do not become superconductors down to the lowest temperatures at which they have been tested (~0.01 K).

Table 19.2.1 Transition Temperature T0 and Critical Magnetic Fields H0 of Some Type I Superconductors

Superconductor      T0, K       H0, A/m*
Nb                  9.25        1.57 × 10^5
Pb                  7.23        6.4 × 10^4
V                   5.31        8.8 × 10^4
Hg                  4.154       3.3 × 10^4
Sn                  3.722       2.4 × 10^4
In                  3.405       2.2 × 10^4
Al                  1.175       8.4 × 10^3
Mo                  0.916       7.2 × 10^3
Zr                  0.53        3.7 × 10^3
Ti                  0.39        8.0 × 10^3
W                   0.0154      9.2 × 10

NaBi                2.2
Au2Bi               1.7
CuS                 1.6
(The constituent elements, in the pure state at normal pressure, are not superconductors.)

* To convert to oersteds, divide by 79.57.
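The Type I relation quoted above, Tc = Tc0[1 - (Hc/H0)]^1/2, can be rearranged to give the critical field as a function of temperature, Hc(T) = H0[1 - (T/Tc0)^2]. A minimal sketch using the Pb entry of Table 19.2.1 follows.

```python
# Minimal sketch of the Type I critical-field relation rearranged as
# Hc(T) = H0*[1 - (T/Tc0)**2], using the Pb entry of Table 19.2.1.
def critical_field(T, Tc0, H0):
    """Critical field Hc (A/m) at temperature T (K); zero at and above Tc0."""
    if T >= Tc0:
        return 0.0
    return H0 * (1.0 - (T / Tc0) ** 2)

if __name__ == "__main__":
    Tc0_Pb, H0_Pb = 7.23, 6.4e4          # from Table 19.2.1
    for T in (0.0, 4.2, 7.0):
        print(f"Pb: Hc({T} K) = {critical_field(T, Tc0_Pb, H0_Pb):.3g} A/m")
```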

Table 19.2.2 Transition Temperatures T0 and Upper Critical Fields Hc2 of Some High-Field, Type II Superconductors

Superconductor          T0, K      Hc2, A/m*       T, K, for Hc2
Al0.75Ge0.25Nb3         18.5       3.4 × 10^7      4.2
Nb3Sn                   18.0       1.9 × 10^7      4.2
N0.93Nb                 15.9       1.3 × 10^7      0
GaV3                    14.8       1.9 × 10^7      0
NbZr                    10.8       7.4 × 10^6      0
Nb0.2Ti0.8              7.5        6.4 × 10^6      4.2
Ti0.6V0.4               7.0        8.8 × 10^6      2

* To convert to oersteds, divide by 79.57.

Type I superconductor is the designation given to those superconductors that exhibit a complete Meissner effect for 0 < H < Hc; i.e., B = 0 until Hc is reached and penetration of the specimen by B becomes complete. The superconducting elemental metals in the pure and mechanically perfect state are Type I superconductors. Superconducting alloys (intermetallic compounds, solid solutions, and mixed phases) and work-hardened (unannealed) metals are Type II superconductors. For Type II superconductors, the Meissner effect is complete only for 0 < H < Hc1, where Hc1 is smaller than Hc. For Hc1 < H < Hc2, the magnetic field penetrates the body of the specimen in the form of current vortices, called fluxons, each of which carries a fixed quantized amount of magnetic flux. Each fluxon is a cylindrical region of circulating current centered on a resistive core of material in which the superconductivity is


suppressed. Hence the region defined by Hc1 < H < Hc2 is called the mixed state of a Type II superconductor. The number of fluxons increases with H, from zero at Hc1 to a density at which complete overlap of the cores occurs, along with the restoration of the normal state, at Hc2. Thus Hc2 is an upper limit to the superconducting state, and it increases as T decreases. The alloys commonly used for high-field superconducting magnets (Nb3Sn, NbTi, and NbZr; see Table 19.2.2) are Type II superconductors. Hc's for Type I superconductors are small, whereas magnet alloy wires have Hc2 values that are 100 to 1,000 times higher.

The fluxons in the mixed state of a Type II superconductor transporting a current are acted on by a Lorentz force that is proportional to i × B and acts in a direction perpendicular to B and to i (the current intensity). Unless the fluxons are pinned to crystal lattice sites, they are propelled across the superconductor under the influence of the Lorentz force. This movement involves the performance of work by the current and results in (1) production of heat within the superconductor and (2) the appearance of electrical resistance to the flow of current and, eventually, the destruction of the superconducting state. Fluxons are pinned by lattice imperfections (chemical and mechanical), and their displacement is resisted until the Lorentz force exceeds a breakaway value, freeing the fluxons to move. Lattice imperfections are introduced in the wires for high-field superconducting magnets to pin fluxons as well as to increase Hc2. Currents avoid the normal core of a pinned fluxon and thus retain their resistanceless character. A current exceeding the critical value destroys the superconducting state. This critical current is determined by the critical Lorentz force (the product of current and magnetic field) at which the pinning forces are exceeded. Hence, the critical value of an applied field depends on the current transported by the superconductor, and vice versa. In general, the larger the applied field, the smaller is the critical current, and vice versa. For superconducting magnets, it is essential that the values of the limiting magnetic field and the limiting current be simultaneously large.

For the Type I and Type II superconductors discussed above, practical application requires operation in the temperature region of 12 K or lower. Thus the lower limit (ideal refrigeration) for the work-to-heat-removal ratio is seen to be 24 for operation between ambient temperature and 12 K. Raising the operating temperature of a superconducting system to 20, 40, or 77 K (the temperature of liquid nitrogen boiling at normal atmospheric pressure) reduces this ratio to 14, 6.5, and 2.9, respectively. Although an actual refrigerator may take 2.5 or more times this ratio, the saving in input energy that can be effected by raising the operating temperature is obvious. The equipment for producing refrigeration at these higher temperatures is also less complicated and less expensive than that needed for producing refrigeration at the lower temperatures required for the operation of the low-temperature superconductors.

In 1986 a true breakthrough was achieved in producing superconductors able to enter the superconducting state at much higher temperatures. The high-temperature superconductors (HTSCs) are oxide compounds consisting of several elements. They are more like ceramics and have a complicated crystal structure consisting of layers of Cu-O separated by other oxides.
All of them exhibit highly anisotropic superconducting properties, and their performance is greatly influenced by the oxygen content of the compound. If the crystals are not properly oriented, so that there is too great an angle between the crystal axes of adjacent crystals (e.g., greater than about 5°), intergranular supercurrent flow is suppressed and the HTSCs will have sections of superconductor connected by rather poorly conducting junctions. This has been referred to as "having weak links." Therefore, it is desirable to prepare the HTSC material in such a way that it is biaxially textured, in order to minimize weak links. Also, the ability of these superconductors to conduct useful quantities of current can be severely limited by an imposed magnetic field at temperatures near the high end of their superconducting range. The magnetic field enters the superconductor in fluxons, or vortices. When these vortices move through the superconductor, electrical resistance is generated and the superconductor behaves much like a normal metal. Therefore it is necessary to "pin" the vortices so that they cannot move. As with low-temperature superconductors, vortices are pinned by lattice imperfections, either chemical or mechanical.
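Each vortex carries one quantum of magnetic flux, so the areal density of vortices that must be pinned grows in proportion to the imposed field. A minimal sketch follows; the value of the flux quantum is a standard constant assumed here, not given in the text.

```python
# Minimal sketch of the areal density of fluxons (vortices) in the mixed state.
# Each fluxon carries one flux quantum; phi0 = h/(2e) ~ 2.07e-15 Wb is an
# assumed standard value, not taken from this section.
PHI0 = 2.0678e-15   # Wb, magnetic flux quantum (assumed)

def fluxon_density_per_m2(B_tesla):
    """Number of fluxons per square metre threading the superconductor."""
    return B_tesla / PHI0

if __name__ == "__main__":
    for B in (0.1, 1.0, 10.0):
        print(f"B = {B:>4} T -> {fluxon_density_per_m2(B):.2e} fluxons/m^2")
```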


At temperatures below 30 K, the extraordinarily high Hc2 of the HTSCs allows them to carry high current densities at much higher magnetic fields than is possible with the conventional, low-temperature superconductors. At higher temperatures a new phenomenon has been observed, the "melting" of the vortex lattice, which limits the ability of crystal imperfections to pin vortices at magnetic fields far below Hc2. The HTSCs are fabricated as thin films and as multifilamentary wires. The necessity for optimum chemistry of the constituents and proper crystal orientation, and the fact that the ceramiclike nature of the superconductor makes it brittle, complicate both the manufacture of the wires in useful lengths and the subsequent fabrication of equipment such as coils for motors and generators. There are two classes of compounds that have received the most attention in these research efforts. One class is referred to as BSCCO because it consists of the oxides of bismuth, strontium, calcium, and copper in varying ratios. The other class is referred to as YBCO because it consists of oxides of yttrium, barium, and copper. A great deal of research is continuing in the attempt to improve the commercial usefulness of the HTSCs. This research has made enormous strides in finding production methods for practical superconductors and ways to produce them more economically. There are already fabrication methods that enable the production of kilometer lengths of superconductor capable of carrying very high currents (about 10^5 A/cm2) in self-field at 77 K. These efforts should make possible the large-scale utilization of HTSCs. Table 19.2.3 lists several of the HTSCs, along with their pertinent properties.

Table 19.2.3 Properties and Status of Some HTSCs

HTSC*                              Tc, K     Comments
La2-xBaxCuO4                       35        First HTSC discovered
YBa2Cu3O7 (YBCO or 1-2-3)          95        First HTSC with Tc above the normal boiling point of liquid nitrogen; Hc2 ≈ 30 T at 77 K
HgBa2Ca2Cu3O8+δ                    133       Highest Tc to date
Bi2Sr2Ca2Cu3O8 (2223 BSCCO)        110       Hc2 ≈ 30 T at 77 K; Jc ≈ 8 × 10^4 A/cm2 at 77 K and Hext = 0; industrial pilot-plant-scale production
Tl2Ba2CaCu2O8 (2212 TBCCO)         ~115      Jc ≈ 7 × 10^6 A/cm2 at 75 K and Hext = 0

* These HTSCs are representative of classes of superconductors.
The discovery in 2001 of superconductivity in the relatively common intermetallic compound magnesium diboride (MgB2), with a transition temperature of about 40 K, opens new possibilities for the application of low-temperature superconductors (LTSCs). MgB2 is easier than the HTSCs to produce, handle, and fabricate into wires, and MgB2 reduces the energy requirement for refrigeration by about an order of magnitude compared to that of the previously known LTSCs. MgB2 can be used for the manufacture of superconducting magnets, and its use should make more feasible the utilization of superconducting electric power transmission lines.

The coefficient of linear expansion is a = (1/L)(dL/dT). Thermal expansion of asymmetric crystals differs along different crystal axes. For an isotropic solid, the volume, or cubical-expansion, coefficient is (1/V)(dV/dT) = 3a. Expansion coefficients do not vary appreciably in the temperature region near ambient temperatures, but they all approach zero at 0R, and the approach to zero is tangential to the T axis (da/dT = 0 at 0R). Some expansion coefficients are negative at low temperatures, e.g., stainless steel, some Invars, and fused quartz. The expansivities of some crystals are negative in some directions even though their volume coefficients are positive. Cold-working may produce differences in expansivity in different directions. Annealing restores isotropy. Figure 19.2.7 shows the relative change in length, the integral of a from T to 528R (293 K). Because the cold interiors of cryogenic equipment must at times be warmed to ambient temperature, provisions have to be made for the changes in dimensions of the interior that result from large changes in T. Even for equipment of ordinary size, these changes can be too large for accommodation within the elastic limits of the materials of construction or of the structure. In such situations, flexibility has to be designed into

Fig. 19.2.7 Relative change in length from T to 528R (293 K), equal to the integral of a dT from T to 528R. Bracketed materials are within ~5 percent of the curve shown.

the structure. For example, bellows and U bends are commonly employed to obtain this flexibility in insulated transfer lines for liquid cryogens. The mechanical properties important for the design and construction of cryogenic equipment are the same as for other temperatures (see also Sec. 5). Frequently, the choice of materials for the construction of cryogenic equipment will depend upon other considerations besides mechanical strength, e.g., lightness (density or weight), thermal conductivity (heat transfer along structural support members), and thermal expansivity (change of dimensions when cycling between ambient and low temperatures). Frequently, mechanical properties at low temperature are significantly different from properties at ambient temperature. This makes room-temperature data unreliable for engineering use at low temperatures. It is not possible to make generalizations that would not have numerous exceptions about the temperature variations of the mechanical properties. In this discussion, Figs. 19.2.8 to 19.2.10 are only illustrative guides. There is no substitute for test data on a truly representative specimen when designing for the limit of effectiveness of a cryogenic material or structure. Just as the mechanical properties at ambient temperatures are dependent upon the impurities (metallic and nonmetallic), their chemical nature and concentration, the thermal history of the specimen, the amount and kind of working (microstructure and dislocations of the lattice), and the rate of loading and type of stress (uni-, bi-, and triaxial), so also are the changes, quantitative and qualitative, in the mechanical properties when changing the temperature from ambient to low. Metals, nonmetallic solids (glass), plastics, and elastomers are discussed in order. The metals are classed by their lattice (crystal) symmetry. The f.c.c. metals and their alloys are most commonly used for the construction of cryogenic equipment. Al, Cu, Ni, their alloys, and the austenitic stainless steels of the 18–8 type (300 series) are f.c.c. They do not exhibit an impact (or a notched-tensile) ductile-to-brittle transition at low temperatures. As a general rule, which has some exceptions, the mechanical properties of these metals improve as T is reduced: (1) Young’s modulus at 40R (22 K) is 5 to 20 percent larger than at 530R (294 K), (2) the yield strength at 40R (22 K) is considerably greater than the strength at 530R (294 K). (Cu is an exception; see Fig. 19.2.9), and (3) the fatigue properties at low T are improved. There is a large


Fig. 19.2.8 Ultimate tensile strength of solids at low temperatures.
Fig. 19.2.9 Yield strength of solids at low temperatures.

difference between the yield strength and the ultimate tensile strength of these metals and alloys, especially when they have been annealed. Pb (f.c.c.) and In (face centered tetragonal) are used for low-T deformable gaskets because of their creep properties. Low-T creep data are meager, but the rate of creep decreases with T. Beta brass (f.c.c.) is ductile down to 7R, though it is like all alloys of Cu in being less ductile than Cu itself. Thin brass is sometimes porous, and this limits its usefulness for high-vacuum enclosures at low temperatures. Free-machining brass normally contains Pb, which even in low concentrations can render the brass superconducting at LHe temperatures. In the superconducting state, it may then affect the magnetic field in its vicinity (see electrical properties, above).

The b.c.c. metals and alloys are normally classed as undesirable. These include Fe, the martensitic steels (low carbon and the 400 series of stainless steels), Mo, and Nb. If not brittle at room temperature, they have a ductile-to-brittle transition at low T. Working can induce the austenite-to-martensite transition in some steels. AISI 301 austenitic stainless is an example. It remains moderately “tough” at very low T and is a valuable material for construction, but cold-drawing, in excess of 70 percent strain, induces a partial transformation of structure and reduces the elongation and notched-strength ratio to nearly zero. This same type of reduction in toughness is observed in the “heat-treatable” stainless steels. Improved tensile properties can be obtained by

Fig. 19.2.10 Impact energy for solids at low temperatures (Charpy V unless noted). For Kel-F, nylon, and Teflon, the impact energy units are foot-pound per inch of notch width.


cold-working and by heat-treating. These alloys usually have decreased toughness at low temperatures. Alloys of V, Nb and Ta, although not f.c.c., behave well as regards brittleness at low temperatures. These alloys have the advantage of being suitable for use at high T. Carbon steels, though brittle, have found special uses at low temperatures, e.g., in the construction of the expansion engines of the Collins He liquefier and cryostat. The h.c.p. metals exhibit properties intermediate between those of the f.c.c. and b.c.c. metals; e.g., Zn undergoes a brittle-to-ductile transition, whereas Zr and pure Ti do not. Ti and some Ti alloys, having a h.c.p. structure, remain moderately ductile at low T and are excellent for many applications. They have high strength-to-weight and strength-to-thermalconductivity ratios. The low-T properties of Ti and its alloys are extremely sensitive to even small amounts of O, N, H, and C. Brittle materials, when in very thin sheets, can have a high degree of flexibility, and this makes them useful for some cryogenic applications. Flexibility cannot be used as a criterion for ductility. Ordinarily, normal welding practices are observed in the construction of cryogenic equipment. These practices, however, are modified in accordance with available knowledge of the performance of the metals at low T. Nonmetal materials for construction are in many cases brittle, or they are susceptible to brittle fracture. The strength of glass, measured at a constant rate of loading, increases on going to low T. Failure occurs at a lower stress when the glass surface contains cracks and abrasions. The strength of glass can be improved by tempering the surface, i.e., by putting the surface under compression. The plastics increase in strength as T is decreased, but this is accompanied by a rapid decrease in elongation in a tensile test and a decrease in impact resistance. Teflon and the glass-reinforced plastics (e.g., glass-reinforced epoxy resin) retain appreciable impact resistance as T is lowered. Teflon, which is polytetrafluoroethylene, can be deformed plastically at T’s as low as 7R (3.9 K). The amount is considerably less than at room temperature, but it is enough to make Teflon very useful for some cryogenic applications. The glass-reinforced epoxies, besides having appreciable impact resistance at low T’s, also have high strength-to-weight and strength-to-thermal-conductivity ratios. All the elastomers become brittle at low T. However, elastomers like natural rubber, nitrile rubber, Viton A, and plastics such as Mylar and nylon that become brittle at low T can be used for static seal gaskets when highly compressed at room temperature, prior to cooling. PROPERTIES OF CRYOGENIC FLUIDS (CRYOGENS)

Table 19.2.4 indicates the temperature ranges accessible with liquid cryogen baths. There are two inaccessible ranges: (1) between helium and hydrogen [9.36 to 24.9R (5.2 to 13.8 K)] and (2) between neon and


oxygen [80 to 97.9°R (44.4 to 54.4 K)]. The limiting temperatures are set by the critical and triple points. Pumping on a cryogen bath to lower the pressure results in lower temperatures, ultimately reaching the triple point; except for helium, further pumping leads to solidification and, in practice, to poor heat transfer. Raising the bath pressure results in higher boiling temperatures, with the limit set by the critical temperature, above which liquid and vapor phases cannot coexist. Cryogen baths are most frequently operated near atmospheric pressure. The algebraic sign of the Joule-Thomson coefficient determines whether a gas cools or warms upon free expansion. Figure 19.2.11, based on the law of corresponding states, allows rough calculations of inversion temperatures and pressures. Free (Joule-Thomson) expansion inside the "cooling" region results in cooling, i.e., refrigeration; outside this region, heating results. Upper inversion temperatures at 14.7 psia (0.101 MPa) are given in Table 19.2.4.
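The practical screening test implied above is simply whether the starting temperature lies below the upper inversion temperature. A minimal sketch of that check is shown below; the function name and the two illustrative inversion values (taken from the inversion-temperature column of Table 19.2.4) are my own choices, not part of the handbook text.

# Quick check of whether Joule-Thomson (free) expansion from a given temperature can cool.
# Illustrative upper inversion temperatures, °R, at 14.7 lb/in² abs (see Table 19.2.4).
UPPER_INVERSION_R = {"H2": 368.0, "N2": 1115.0}

def jt_expansion_cools(gas, T_R):
    """True only if the starting temperature is below the upper inversion temperature."""
    return T_R < UPPER_INVERSION_R[gas]

print(jt_expansion_cools("N2", 540.0))   # True: nitrogen cools from room temperature
print(jt_expansion_cools("H2", 540.0))   # False: hydrogen must first be precooled below 368°R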

Fig. 19.2.11 Reduced inversion temperature versus reduced pressure.

Volumetric latent heats (Table 19.2.4) decrease with the lower-boiling cryogens and emphasize the importance of the insulation in storing and handling LH2 and LHe. Two forms of hydrogen and deuterium exist: ortho and para. The ortho and para molecules differ in the relative orientation of the nuclear spins of the two atoms composing the diatomic molecule. In the ortho form of H2 the nuclear spins of the two atoms in the molecule are parallel (in the same direction), whereas in the para form the spins are antiparallel. The relative orientations for ortho and para D2 differ from those for H2. The thermodynamic equilibrium composition of ortho and para varieties is temperature dependent, as shown in Fig. 19.2.12. In liquid hydrogen, the uncatalyzed ortho-para reaction is second order (proportional to the square of the o-H2 concentration) with a rate

Table 19.2.4  Common Cryogen Properties

Cryogen         NBP,* °R   TP temp., °R   TP press., lb/in² abs   Crit. temp., °R   Crit. press., lb/in² abs   Upper inversion temp.,* °R   Heat of vap.,* Btu/lbm   Heat of vap.,* Btu/ft³   Liquid density,† lbm/ft³   Vapor density,† lbm/ft³   Gas density,‡ lbm/ft³
He3             5.7        . . .          . . .                   6                 17                         72                           3.6                      13.4                     3.72                       1.50                      0.0084
He4             7.6        . . .          . . .                   9.36              33.2                       92                           8.8                      68.6                     7.80                       1.04                      0.0111
H2 (equilib.)   36.5       24.9           1.02                    59.4              187.7                      368                          192                      853                      4.42                       0.0837                    0.00561
D2 (normal)     42.6       33.7           2.49                    69.0              240                        . . .                        131                      1,343                    10.25                      0.14                      0.0112
Ne              48.7       44.2           6.26                    80.0              395.3                      486                          37.1                     2,790                    75.35                      0.58                      0.056
N2              139.2      113.7          1.86                    227.27            492.3                      1,115                        85.9                     4,294                    50.4                       0.28                      0.078
Air             (141.8)    . . .          . . .                   . . .             . . .                      . . .                        88.22                    4,813                    54.56                      0.28                      0.0807
A               157.1      150.8          9.97                    271.3             709.8                      1,300                        70.2                     6,135                    87.4                       0.36                      0.111
F2              151        (96.4)§        . . .                   259.3             808                        . . .                        74.1                     6,965                    94                         0.50                      0.106
O2              162.3      97.9           0.022                   278.59            736.3                      1,605                        91.5                     6,538                    71.24                      0.28                      0.089
CH4             201.1      163.2          1.69                    343.27            673                        . . .                        219.2                    5,804                    26.5                       0.115                     0.0448

NOTE: To convert °R to K, multiply °R by 0.556; to convert lb/in² to MPa, multiply by 0.00689; to convert Btu/lbm to J/g, multiply by 2.326; to convert Btu/ft³ to J/m³, multiply by 3.73 × 10⁴; to convert lbm/ft³ to kg/m³, multiply by 16.01.
* At 14.7 lb/in² abs.  † At normal boiling point.  ‡ At 14.7 lb/in² abs and 492°R.  § Melting point, 14.7 lb/in² abs.



constant of 0.0114/h. Thus, in the course of uncatalyzed liquefaction, normal LH2 (75 percent ortho) is produced. Since the heat of conversion of o-H2 to p-H2 at the normal boiling point is 302 Btu/lbm (702 J/g) (greater than the heat of vaporization; see Table 19.2.4), long-term storage of normal LH2 is impractical. Therefore, in the commercial production of LH2 a catalyst is used to produce LH2 with more than 95 percent para which for practical purposes is equivalent to the equilibrium composition (99.8 percent para).
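Because the uncatalyzed conversion is second order, the ortho fraction of freshly liquefied normal hydrogen decays as x(t) = x0/(1 + k·x0·t). The short sketch below is a rough illustration only, not part of the handbook; it assumes the 0.0114/h rate constant quoted above applies to the ortho mole fraction and that all of the conversion heat evaporates liquid at the normal boiling point.

# Rough estimate of ortho-para self-conversion losses in stored normal LH2.
K = 0.0114          # 1/h, second-order rate constant (ortho mole-fraction basis)
X0 = 0.75           # initial ortho fraction of normal hydrogen
H_CONV = 302.0      # Btu/lbm of converted ortho-H2, conversion heat at the NBP
H_VAP = 192.0       # Btu/lbm, heat of vaporization of H2 (Table 19.2.4)

def ortho_fraction(t_hours):
    """Ortho fraction after t hours of uncatalyzed second-order decay."""
    return X0 / (1.0 + K * X0 * t_hours)

def boiloff_fraction(t_hours):
    """Approximate fraction of the stored liquid evaporated by the conversion heat."""
    converted = X0 - ortho_fraction(t_hours)      # lbm ortho converted per lbm of liquid
    return converted * H_CONV / H_VAP

for t in (24, 100, 1000):                         # storage times, hours
    print(f"{t:5d} h: ortho = {ortho_fraction(t):.3f}, "
          f"boil-off ~ {100 * boiloff_fraction(t):.1f}% of initial liquid")

For a 24-h hold the sketch gives a loss of roughly 20 percent, which is why catalyzed conversion to para during liquefaction is standard practice.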

Fig. 19.2.12 Thermodynamic equilibrium composition of ortho and para varieties of H2 and D2; percent ortho = 100 − percent para.

For pressures of no more than 150 lb/in² (1.03 MPa) and temperatures at least twice the critical temperature, the ideal gas law (PV = nRT) enables one to calculate PVT data with sufficient accuracy for many engineering purposes. For cases in which the ideal gas law is not adequate or in which experimental data are not available, the procedures outlined in Sec. 4 may be used. The theoretical prediction of PVT data for liquids is considerably more difficult than for gases. Consequently, no universal equation analogous to the perfect gas law exists for extensive ranges of P and T. Reduced quantum mechanical correlations of saturated liquid densities, viscosities, and thermal conductivities for several cryogens are shown in Figs. 19.2.13 to 19.2.15. Dashed lines represent areas void of experimental data.
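A short numerical sketch of the rule of thumb above is given below. It is my own illustration, not a handbook procedure; the 150-lb/in² and 2Tc limits are simply those quoted in the text, and the function name is hypothetical.

# Gas density from the ideal gas law, with the quoted range-of-validity check.
R_UNIV = 10.7316    # lb/in² abs · ft³ / (lb-mol · °R), universal gas constant

def ideal_gas_density(P_psia, T_R, mol_wt, T_crit_R):
    """Density in lbm/ft³ from PV = nRT; warn outside P <= 150 psia or T < 2*Tc."""
    if P_psia > 150.0 or T_R < 2.0 * T_crit_R:
        print("warning: outside the quoted range of validity of the ideal gas law")
    return P_psia * mol_wt / (R_UNIV * T_R)

# Nitrogen at 14.7 lb/in² abs and 492°R (Tc = 227.27°R from Table 19.2.4):
print(ideal_gas_density(14.7, 492.0, 28.013, 227.27))   # ~0.078 lbm/ft³, matching Table 19.2.4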

Fig. 19.2.13 Reduced density versus reduced temperature along the saturation curve for cryogenic liquids. Values of T* are obtained by dividing the desired temperature (°R) by the value of T/T* given in Table 19.2.5. Reduced density ρ* must be multiplied by ρ/ρ* given in Table 19.2.5 to obtain density in lbm/ft³. Multiply lbm/ft³ by 16.01 to obtain kg/m³ and °R by 0.556 to obtain K.

Fig. 19.2.14 Reduced viscosity versus reduced temperature along the saturation curve for cryogenic liquids. Values of T* are obtained by dividing the desired temperature (°R) by the value of T/T* given in Table 19.2.5. Reduced viscosity η* must be multiplied by η/η* given in Table 19.2.5 to obtain viscosity in lbm/(ft·h). Multiply lbm/(ft·h) by 0.000413 to obtain Pa·s.

Fig. 19.2.15 Reduced thermal conductivity versus reduced temperature along the saturation curve for cryogenic liquids. Values of T* are obtained by dividing the desired temperature (°R) by the value of T/T* given in Table 19.2.5. Reduced thermal conductivity k* must be multiplied by k/k* given in Table 19.2.5 to obtain thermal conductivity in Btu/(h·ft·°R). Multiply by 1.73 to obtain W/(m·K).



Utilization of Figs. 19.2.13 to 19.2.15 requires knowledge of the constants ε and σ for the Lennard-Jones function for the intermolecular potential energy w(r) of a pair of molecules separated by a distance r, so that w(r) = 4ε[(σ/r)¹² − (σ/r)⁶]. Here ε is the depth of the minimum in the potential energy of a pair of molecules, and σ is the separation of the pair at which w = 0. To facilitate use of Figs. 19.2.13 to 19.2.15, all required constants and conversion factors have been combined and are listed in Table 19.2.5.

Table 19.2.5  Multiplication Factors for Use with Figs. 19.2.13 to 19.2.15

Substance        T/T*, °R   ρ/ρ*, lbm/ft³   η/η*, lbm/(ft·h)   k/k*, Btu/(h·ft·°R)
Helium 3         18.4       18.71           0.0311             0.0204
Helium 4         18.4       24.84           0.0359             0.0178
Para-hydrogen    66.1       8.06            0.0360             0.0363
Deuterium        63.4       16.22           0.0500             0.0257
Tritium          62.1       24.37           0.0607             0.0210
Neon             64.1       100.66          0.1300             0.0128
Nitrogen         171.1      57.39           0.1382             0.0098
Argon            215.6      104.84          0.2183             0.0109
Krypton          297.4      177.1           . . .              0.0080
Xenon            397.8      197.4           . . .              . . .
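The factors in Table 19.2.5 are used by simple multiplication: the abscissa of the figures is entered at T* = T divided by the tabulated T/T*, and the reduced ordinate read from the curve is multiplied by the corresponding factor. A minimal sketch of that bookkeeping follows; the dictionary, function names, and the two substances chosen are my own, and the reduced values themselves must still be read from the figures.

# Converting reduced saturation properties (Figs. 19.2.13-19.2.15) to engineering units.
FACTORS = {
    # substance: (T/T*, °R; rho/rho*, lbm/ft³; eta/eta*, lbm/(ft·h); k/k*, Btu/(h·ft·°R))
    "nitrogen": (171.1, 57.39, 0.1382, 0.0098),
    "para-hydrogen": (66.1, 8.06, 0.0360, 0.0363),
}

def reduced_temperature(substance, T_R):
    """Abscissa T* at which to enter Figs. 19.2.13 to 19.2.15."""
    return T_R / FACTORS[substance][0]

def density_from_reduced(substance, rho_star):
    """Convert a reduced density read from Fig. 19.2.13 to lbm/ft³."""
    return rho_star * FACTORS[substance][1]

# Saturated nitrogen at its normal boiling point, 139.2°R:
print(reduced_temperature("nitrogen", 139.2))        # enter the figure at T* ~ 0.81
print(density_from_reduced("nitrogen", 0.88))        # a reduced density of 0.88 -> ~50 lbm/ft³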

Fig. 19.2.16 Viscosity of selected gases at low temperatures and 14.7 lb/in2 abs (0.1013 MPa).

Thermal-expansion coefficient (1/V)(∂V/∂T)P and compressibility −(1/V)(∂V/∂P)T data for the cryogenic liquids are obtained as derivatives of PVT data. Liquid hydrogen and helium have large thermal-expansion and compressibility coefficients (see Table 19.2.6).

Table 19.2.6  Compressibility and Thermal-Expansion Coefficients for Liquid Nitrogen, Hydrogen, and Helium at 14.7 lb/in² abs (0.101 MPa)

Cryogen          T, °R (K)    Compressibility, −(1/V)(∂V/∂P)T, 1/(lb/in² abs) (1/MPa)    Thermal expansion, (1/V)(∂V/∂T)P, 1/°R (1/K)
Helium 4         7 (4)        0.0028 (0.4)                                               0.085 (0.153)
Nitrogen         139 (77)     0.000020 (0.0029)                                          0.0032 (0.0058)
Para-hydrogen    36 (20)      0.000134 (0.0195)                                          0.0090 (0.0162)
Water            540 (300)    0.0000036 (0.000526)                                       0.00015 (0.00027)
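One practical consequence of the large expansion coefficients is the sizable volume change of a stored cryogen for even a small temperature rise. The sketch below is illustrative only; it treats the Table 19.2.6 coefficients as constant over the temperature step, and the names and numbers are my own.

# Approximate fractional volume change of a liquid cryogen for a small warm-up.
BETA_PER_R = {"helium 4": 0.085, "para-hydrogen": 0.0090, "nitrogen": 0.0032}  # 1/°R, Table 19.2.6

def fractional_volume_change(cryogen, dT_R):
    """dV/V for a small temperature rise dT_R, assuming a constant expansion coefficient."""
    return BETA_PER_R[cryogen] * dT_R

# A 2°R warm-up of liquid para-hydrogen swells the liquid by roughly 1.8 percent:
print(fractional_volume_change("para-hydrogen", 2.0))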

Fig. 19.2.18 Vapor pressures of the common cryogens.

Fig. 19.2.17 Thermal conductivity of gases at low temperatures and 14.7 lb/in2 abs (0.1013 MPa).


For gases, kinetic theory predicts an increase of viscosity with rising temperature, as shown in Fig. 19.2.16 (η ∝ √T). Normally, the thermal conductivities of liquids decrease with increasing temperature, whereas the thermal conductivities of gases increase. Figure 19.2.17 presents thermal conductivity for several gases as a function of temperature at 14.7 psia (0.1013 MPa). Figure 19.2.18 is a graph of vapor pressures of the common cryogens. Figure 19.2.19 presents specific heats at constant pressure for several of the cryogenic liquids, while Fig. 19.2.20 presents the specific heats for liquid hydrogen and helium along the saturation curve. Specific heats of gases are given in Fig. 19.2.21.

Fig. 19.2.19 Specific heat at constant pressure (Cp) for liquid cryogens at saturation pressure. Asterisk means Cp evaluated along an isobar at 14.7 lb/in2 abs (0.1013 MPa) instead of saturation pressure.


INSTRUMENTATION (See also Secs. 15 and 16.)

The usual instrumentation problems of engineering practice are further complicated by low-temperature problems which require special calibration procedures. Pressures are measured with normally used apparatus, e.g., bourdon gages, transducers, and manometers. If the measuring device is located outside the cryogenic environment and insulated from it by a thermal barrier, no special problems are introduced. Occasionally, response-time requirements necessitate placing the transducer in the cryogen space. Then the problems of temperature compensation and calibration must be considered.

Level measurements currently utilize several methods. The differential-pressure method can in principle be used with all cryogens. With cryogens of low density and low heat of vaporization (e.g., H2 and He), pressure oscillations occur in the liquid leg. In the case of LH2 these can be eliminated by the proper design of the liquid leg and the introduction of He gas, which does not condense. Direct weighing is also used to determine "levels," although difficulties are encountered owing to the large tare-to-cryogen weight ratios (as large as 10 to 1 for a full container in the case of H2) and extraneous loadings introduced by permanently connected piping, frost, and wind. Weighing systems have nevertheless been successfully used on 50,000-gal (190-m³) LH2 dewars with an accuracy in level changes of 250 gal (1 m³). With the capacitance gage, the sensing element consists of a concentric-tube capacitor which can be calibrated to give liquid-level measurements accurate to a few tenths of a percent.

An inexpensive, accurate point-level sensor is easily fabricated from a carbon resistor, typically 1⁄10 W, 1,500 Ω. Its use depends upon the fact that its resistance is a strong function of temperature (Fig. 19.2.22). A small current (50 to 80 mA) is passed through the resistor to heat it. Since heat from the resistor is dissipated more readily in the liquid than in the gas, the temperature of the resistor

Fig. 19.2.20 Specific heats at saturation Cs along the saturation curve versus temperature for 4He and para hydrogen.

Fig. 19.2.21 Specific heat Cp of cryogen gases at 14.7 lb/in2 abs (0.1013 MPa) versus temperature.

Fig. 19.2.22 Ratio of resistance R to the resistance R0 at 492R (273 K) versus temperature for several resistance thermometers. (Values for germanium and carbon are representative only.)



undergoes a step increase when the resistor environment changes from liquid to gas, and the resistance correspondingly decreases stepwise.

Flow measurements utilize orifices, venturis, and turbine-type meters. Various devices for direct mass-flow measurement exist, but none are widely used. "Quality" meters have been built to determine the percentage of liquid in two-phase flow. The determination of quality is complicated by nonequilibrium of temperature in the gas phase.

Temperature measurements are usually made with thermocouples, resistance thermometers, and vapor-pressure thermometers. The vapor-pressure thermometer depends upon a vapor-pressure-versus-temperature relationship (see Fig. 19.2.18). In general, a small cavity is filled with a gas which has a condensation temperature in the vicinity of the temperature to be measured. If sufficient gas is present, the pressure will be that of the liquid vapor pressure at the coldest part of the measuring system. Maximum speed of response is obtained if the total quantity of gas is minimized to permit only a thin film of liquid to form. In the case of H2, a catalyst (e.g., iron hydroxide) to promote ortho-para conversion should be included in the bulb for an accurate measurement, since the vapor pressure of H2 is dependent upon its ortho-para composition (see above). The pressure at the surface of a liquid cryogen in a dewar may, under conditions of thermal equilibrium, be used to measure the temperature of the cryogen and of apparatus immersed in it. Corrections for the pressure of the hydrostatic head of liquid may be required. If using this method, it should be realized that large vertical temperature gradients can exist in an unstirred liquid cryogen in a vessel with good insulation.

Thermocouples are favored for the measurement of temperatures because of their low cost, ease of application, and rapid response. The thermoelectric powers (temperature sensitivities) of the thermocouples commonly used at higher T's decrease with decreasing temperature, and spurious emfs generated in wires of nonuniform composition are troublesome. The proper use of a reference junction at a known fixed temperature close to the temperature being measured is often advantageous for accuracy. Variability in composition of thermocouple alloys makes individual calibrations necessary for accurate results. Figure 19.2.23

Fig. 19.2.23 EMF versus temperature for several thermocouples.

gives typical thermocouple emf-versus-temperature curves for some common thermocouples. A thermocouple consisting of gold with 0.07 at % iron versus chromel has been found useful over the entire temperature range from 2 to 300 K. Its sensitivity increases from 11 μV/K at 4 K to 17 μV/K at 77 K to 22 μV/K at 300 K.

Radio carbon resistors (Allen Bradley for 2 to 100 K and Speer Carbon for 0.01 to 4.2 K) are much used as temperature sensors because of their low cost and high sensitivity. However, the permanence and stability of their calibrations are not very good, and germanium (doped) resistance thermometers are preferred (0.01 to 100 K) for their good reproducibility of calibration, although they are considerably more expensive. The carbon and germanium resistors (thermometers) in magnetic fields H exhibit a magnetoresistance (Hall) effect, proportional to H², which complicates the determination of temperatures. Recent developments include: (1) a carbon-glass electrical resistor which has a stable calibration; and (2) the electrical capacitance thermometer (dielectric: SrTiO3, a perovskite in solution in a solid SiO2 or Al2O3 glass) with very small (approaching zero) magnetic effect and a stable calibration. The capacitance thermometer is preferred when magnetic fields are present. The Rh–0.5 at % Fe wire-coil thermometers, usable in the range of 0.1 to 300 K, have a high sensitivity, especially below 10 K.

INSULATION (See also Sec. 6.)

The degree of thermal isolation required at low T's is normally greater than at elevated T's because it is more costly to remove heat leaking into a low-temperature system than to replace heat lost at elevated T's, as demonstrated by the Carnot cycle. Other differences between insulating for low and elevated temperatures are:
1. Condensation of moisture in low-temperature insulation is possible. When this occurs, the conductance of the insulation is significantly increased. This problem is avoided by a vapor barrier on the outside of the insulation.
2. Condensation of the atmosphere in the insulation and surface washing by the condensed liquid are possible when the insulating surfaces are below 150°R (83 K). This can result in a large added transfer of heat. It is avoided by using an outer cover or surface impermeable to air, and evacuating the insulating space or replacing the atmosphere in it with a noncondensable gas.

At low temperatures, as at other temperatures, the fundamental modes of heat transfer are conduction and radiation, and it is against these that insulation is used. Convection of heat is practically eliminated by the insulation. Low-T insulation categories are (1) high vacuum, with or without multiple radiation shields, (2) powders, (3) rigid foams, and (4) low-conductivity solids, such as balsa wood and corkboard.

1. (i) Heat is transferred across an evacuated space by radiation and by conduction through the residual gas. The conductivity of a gas is almost independent of its density (pressure) as the pressure is reduced in an evacuated space until the free paths of the molecules are increased to an appreciable fraction of the separation of the containing walls. The molecular mean free path at 1 atm (0.1013 MPa) and 492°R (273 K) in air is 4 × 10⁻⁶ in (10⁻⁷ m); in H2, 6 × 10⁻⁶ in (1.5 × 10⁻⁷ m); and in He, 10 × 10⁻⁶ in (2.5 × 10⁻⁷ m). At pressures lower than about 10⁻³ torr (0.133 Pa), the rate of heat transfer Q by residual gas between parallel surfaces at T1 and T2 less than 1 in (0.0254 m) apart is approximately Q/A ≈ (const)aP(T2 − T1), where P is pressure measured in torr with a gage at ambient T and a is an overall accommodation coefficient of gas molecules in collision with the walls. Thus, a = a1a2/[a2 + a1(1 − a2)], where a1 and a2 are accommodation coefficients of gas molecules at the two walls (see Table 19.2.7). Table 19.2.8 gives the values of the (const) in the equation for Q/A.
The rate of radiative heat transfer between parallel surfaces at T1 and T2 > T1, where the emissivities are e1 and e2, respectively, is Q = e1e2σ/[e2 + (1 − e2)e1] × (T2⁴ − T1⁴), where σ is the Stefan-Boltzmann constant, whose value is 1.712 × 10⁻⁹ Btu/(ft²·h·°R⁴) [5.67 × 10⁻⁸ W/(m²·K⁴)] (see also Sec. 4).

Table 19.2.7  Approximate Values of Accommodation Coefficients, a

Temp., °R (K)    He     H2     Air
540 (300)        0.3    0.3    0.8–0.9
140 (78)         0.6    0.5    1
35 (19)          0.6    1      1

Table 19.2.8  Constant in the Gas-Conduction Equation Q/A ≈ (const)aP(T2 − T1)

Gas    T2 and T1, °R (K)                 Const, Btu/(h·ft²·torr·°R) [J/(N·s·K)]
N2     700 (389)                         28 (1.19)
O2     540 (300)                         26 (1.11)
H2     540 and 140 (300 and 78)          93 (3.96)
H2     140 and 36 (78 and 20)            70 (2.98)
He     Any                               49 (2.09)

NOTE: T1 = inner wall (cold); T2 = outer wall (hot).

Table 19.2.9 contains emissivities for 540°R (300 K) thermal radiation; these, for most engineering purposes, may be considered equivalent to minimal values of the e's for very carefully prepared and cleaned surfaces. Organic coatings have high emissivities approaching unity (≈0.9). Mechanically polished surfaces have higher e's than the minimal values. Handling a surface transfers an absorbing (high-e) coating to it.

Table 19.2.9  Minimal Values of Emissivity of Metal Surfaces at Various Temperatures for 540°R (300 K) Thermal Radiation

                              Surface temp., °R (K)
Surfaces                      7 (3.9)    140 (78)    540 (300)
Copper                        0.005      0.008       0.018
Silver                        0.0044     0.008       0.02
Aluminum                      0.011      0.018       0.03
Chromium                      . . .      0.08        0.08
Nickel                        . . .      0.022       0.04
Brass                         . . .      0.018       0.035
Stainless steel 18-8          . . .      0.048       0.08
50 Pb-50 Sn solder            . . .      0.032       . . .
Glass, paints, carbon         . . .      . . .       0.9
Nickel plate on copper        . . .      0.033       . . .
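The two mechanisms above can be combined into a quick heat-leak estimate for an evacuated insulating space. The sketch below is illustrative only and is not a handbook procedure; it assumes parallel walls, the helium gas-conduction constant of Table 19.2.8, the Table 19.2.7 accommodation coefficients, and the 140°R and 540°R copper emissivities of Table 19.2.9, with both contributions expressed per square foot of surface.

# Residual-gas conduction plus radiation across an evacuated space (per ft² of wall).
SIGMA = 1.712e-9     # Btu/(h·ft²·°R⁴), Stefan-Boltzmann constant

def gas_conduction(const, a1, a2, P_torr, T_hot, T_cold):
    """Free-molecule conduction, Q/A = const * a * P * (T2 - T1), Btu/(h·ft²)."""
    a = a1 * a2 / (a2 + a1 * (1.0 - a2))          # overall accommodation coefficient
    return const * a * P_torr * (T_hot - T_cold)

def radiation(e_cold, e_hot, T_hot, T_cold):
    """Radiative exchange between parallel gray walls, Btu/(h·ft²)."""
    factor = e_cold * e_hot / (e_hot + (1.0 - e_hot) * e_cold)
    return factor * SIGMA * (T_hot**4 - T_cold**4)

# LN2-temperature inner wall (140°R) and room-temperature outer wall (540°R),
# residual He at 1e-4 torr, copper surfaces:
q_gas = gas_conduction(49.0, 0.6, 0.3, 1e-4, 540.0, 140.0)
q_rad = radiation(0.008, 0.018, 540.0, 140.0)
print(f"gas conduction ~ {q_gas:.2f} Btu/(h·ft²), radiation ~ {q_rad:.2f} Btu/(h·ft²)")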

(ii) Refrigerated Radiation Shield  An LN2-cooled [140°R (78 K)] surface interposed in the insulating vacuum space of an LH2 or an LHe container reduces the heat transferred by radiation to the LH2 or LHe by a factor of about 250 when compared with that transferred by a surface of the same emissivity at room temperature without the interposition of the refrigerated shield. Nearly all the heat radiated by the surface at room temperature is absorbed by the LN2, and this results in evaporation of the relatively inexpensive LN2.
(iii) A floating radiation shield is an opaque layer (e.g., a metal sheet) having surfaces of low emissivity, suspended in the vacuum space with a minimum of thermal contacts with surfaces that are warmer or cooler. If the emissivity of the surfaces of the floating shield is the same as the emissivity of the vacuum-space walls, the floating shield decreases the radiative heat transfer by a factor of 2. For m floating shields arranged in series between the inner cold and outer warm surfaces, the rate of radiative heat transfer becomes Qm (m floating shields) = [1/(m + 1)]Q0, where Q0 is the heat transfer for m = 0.
(iv) Multilayer (Super) Insulation  The principle of multiple floating radiation shields has been extended to the use of many thin metal layers (up to 75 layers per inch or 3,000 layers per metre) separated by thin thermal insulation. Best results have been obtained with Al foil separated by glass-fiber paper. Other materials have been used successfully, e.g.,


nylon nets as separators and aluminized Mylar with no spacer material. The apparent thermal conductivity of Al foil multilayer insulation spaced with glass-fiber paper is variable, depending on the number of layers of foil, the thickness of the paper, and the thickness and compacting of the insulating layer. For T = 540 to 36°R (300 to 20 K), apparent mean conductivities range from 2.0 to 4.0 × 10⁻⁵ Btu/(h·ft·°R) [3.5 to 7 × 10⁻⁵ J/(s·m·K)]. For T = 540 to 140°R (300 to 78 K), the values are about one-third larger. Using nylon nets, glass fabric, or glass fibers for separating the Al foil increases the conductance from three to seven times. The above conductivities are for the direction normal to the foil surfaces. Lateral conductivities are many thousands of times larger. An advantage of aluminized Mylar is its reduced lateral conductivity. Another advantage is its relatively low density (one-third to one-half of other multilayer insulations). If the residual gas pressure in multilayer insulation is less than 10⁻⁴ torr, the conductance is practically independent of the residual gas pressure. At 10⁻³ torr (0.133 Pa), the conductance is increased by the order of 50 percent.

2. (i) Powder insulation consists of finely divided materials. Conductances of evacuated powders may vary by as much as 100 percent with variations of particle size and apparent density (packing) of the powder. The most commonly used powders are perlite, expanded SiO2 (aerogel), calcium silicate, diatomaceous earth, and carbon black. The conductance of powder insulation is practically independent of gas pressure down to 10 torr (1.333 kPa), at which pressure it begins to decrease rapidly with decreasing pressure as the free path of the gas molecules becomes comparable with the space between powder particles. The most rapid decrease of conductance occurs between 1 and 10⁻¹ torr (133 and 13.3 Pa). At 10⁻³ torr (0.133 Pa) the conductance reaches practically the lower limit. Even at 10⁻² torr (1.33 Pa) the conductance is quite low. For this reason, good mechanical pumps are adequate for the evacuation of powder insulation (fine-mesh filters are needed to protect the pump from being damaged by the powder). When evacuated to 10⁻⁴ torr (1.33 × 10⁻² Pa) or less, the apparent mean thermal conductivity for perlite [540°R (300 K) to 140°R (78 K)] ranges from 0.6 to 1.2 mW/(m·K). For nonevacuated perlite filled with He or H2, an apparent mean thermal conductivity of 115 mW/(m·K) can be used. With N2 a typical value is 30 mW/(m·K). The principal function of the powder is to impede the transfer of heat by radiation. Powders are used when radiation would constitute an important heat leak. If an insulating vacuum is bounded by highly reflecting walls, such as clean Cu, Ag, or Al, adding powder may result in an increase of heat transfer because of the paths of solid conduction through the powder.
(ii) Al powder or flakes added to a powder insulation reduce the radiative transfer through the insulation. Powdered Al is more effective than flaked Al. The conductance of highly evacuated perlite insulation [T = 540 to 140°R (300 to 78 K)] may be reduced 40 percent by the addition of 25 percent of Al powder. A similar addition of Al powder to Santocel and Cab-O-Sil (diatomaceous earth) reduces the conductance by 70 percent. Metal powder additions are most effective with those powders that are partially transparent to infrared radiation of wavelengths longer than 3 μm.
3. Rigid-Foam Insulation  The foams most used in cryogenic applications are the more or less closed-cell foams. Table 19.2.10 gives thermal conductivities of selected samples of insulating foams. The conductivity of a foam is dependent on the conduction through the intracellular gas, on the transfer by thermal radiation, and on solid conduction. If the foam temperature falls below the condensing temperature of the encased gas, the gas condenses and the insulating value improves. Gases encased when polymer foams are blown gradually diffuse out of the cells and are replaced by ambient gases. The conductivities of many Freon- and CO2-blown foams increase in time by as much as 30 percent because of the diffusion of air into the foam. Conductivities may increase by a factor of 3 or 4 when cells are permeated with H2 or He. Glass and silica foams appear to be the only ones having fully closed cells. The structural rigidity of foams may be used to eliminate the need for mechanical supports. It is possible to use rigid foams without inner liners and outer casings.



Table 19.2.10  Apparent Mean Thermal Conductivities of Selected Foams, Balsa Wood, and Corkboard

Material                        Density, lbm/ft³ (kg/m³)    Thermal conductivity,* Btu/(h·ft·°R) [W/(K·m)]
Polystyrene foam†               2.4 (38)                    0.019 (0.033)
Polystyrene foam†               2.9 (46)                    0.015 (0.026)
Epoxy resin foam†               5.0 (80)                    0.019 (0.033)
Polyurethane foam†              5.0–8.7 (80–139)            0.019 (0.033)
Rubber foam†                    5.0 (80)                    0.021 (0.036)
Silica foam†                    10.0 (160)                  0.032 (0.055)
Glass foam†                     8.7 (139)                   0.020 (0.035)
Balsa wood, across grain        7.3 (117)                   0.027 (0.047)
                                8.8 (141)                   0.032 (0.055)
                                20.0 (320)                  0.048 (0.083)
Corkboard, no added binder      5.4 (86)                    0.021 (0.036)
                                7.0 (112)                   0.0225 (0.039)
                                10.6 (170)                  0.025 (0.043)
                                14.0 (224)                  0.028 (0.048)
Corkboard with asphalt binder   14.5 (232)                  0.027 (0.047)

* Test space pressure = 14.7 lb/in² abs (0.101 MPa).
† T = 540 to 140°R (300 to 78 K).
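As an order-of-magnitude illustration of how these conductivities are used, the sketch below (my own example, not from the handbook) estimates the steady heat leak through a flat foam panel from q = kA(T2 − T1)/t and the corresponding LN2 boil-off from the latent heat in Table 19.2.4.

# Heat leak through a flat foam panel and the resulting LN2 boil-off rate.
K_FOAM = 0.019      # Btu/(h·ft·°R), polyurethane foam (Table 19.2.10)
H_FG_N2 = 85.9      # Btu/lbm, heat of vaporization of N2 (Table 19.2.4)

def heat_leak(area_ft2, thickness_ft, T_warm_R, T_cold_R, k=K_FOAM):
    """Steady one-dimensional conduction through a flat insulation panel, Btu/h."""
    return k * area_ft2 * (T_warm_R - T_cold_R) / thickness_ft

# 100 ft² of 4-in (0.333-ft) foam between 540°R ambient and 139°R LN2:
q = heat_leak(100.0, 4.0 / 12.0, 540.0, 139.0)
print(f"heat leak ~ {q:.0f} Btu/h, boil-off ~ {q / H_FG_N2:.1f} lbm/h of LN2")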

Polymer foams have relatively high thermal expansions, even greater than the same polymer without voids. This makes cracking of the insulation a problem when it is bonded to metal. Sliding expansion joints have been used to overcome this. Mylar film bags (0.001 in or 0.025 mm thick) have been used to hold cryogenic liquids inside rigid-foam insulation, and plastic films outside to keep the ambient atmosphere from the insulation.

4. Balsa wood and corkboard have been used for insulating cryogens that are not very cold, e.g., liquid methane (201°R or 112 K) and higher-boiling cryogens. The conductivities of these insulators (Table 19.2.10) are considerably higher than for the other types of low-T insulation discussed above. Their use, therefore, is restricted to special applications, as where their ability to support internal structures mechanically is important.

Conduction of heat along structural supports passing through the insulation merits special attention for an adequate realization of the benefits of superior insulation. Ideal materials of construction would have high mechanical strength and low thermal conductivity. In Table 19.2.11, the yield strengths, in tension, of construction materials are compared with their thermal conductivities. This kind of comparison favors the nonmetallic materials in Table 19.2.11.


When large quantities of cryogens are handled in the field, metals are commonly used for structural members because glass and plastics (Teflon excepted) are brittle at low T's. Minimizing the transfer of heat through the supporting structure offers opportunities for ingenuity and inventiveness in design. The transfer is minimized by lengthening the supporting members. A stack of thin metal disks or washers, utilizing the contact resistance between adjacent disks, will increase thermal resistance without increase in length of support. A stack of 0.0008-in (0.02-mm) stainless steel disks under a compressive stress of 1,000 lb/in² (6.895 MPa), in vacuum, has the same thermal resistance as a solid rod 50 times the length of the stack.

Insulation Selection  Because of their smaller heats of vaporization and greater cost of production, the lowest-T cryogens (LHe and LH2) ordinarily merit more effective insulation than the higher-boiling cryogens like LO2 and LN2, and these in turn are ordinarily better insulated than other cryogens boiling at still more elevated T's. The choice of insulation ordinarily involves the conditions of use of the equipment, as well as considerations of the tolerable loss of refrigeration and the cost of the insulation and its installation. If equipment is to have intermittent, rather than long, uninterrupted operating periods, the insulation heat capacity and time lag in reaching steady state are important. An insulation with large heat capacity (e.g., powders) will evaporate large quantities of refrigerant and take a long time to reach steady state. Powders are compacted by mechanical shocks and vibrations. Continued mechanical loading will compact both multilayer insulation and powders. Weight and bulkiness of the insulation can be important.

The heat transported by the mechanical supports and vents connecting the cold interior with the warm exterior becomes the minimum refrigeration loss, attainable only with perfect insulation of the rest of the system. A large loss of refrigeration through supports and vents ordinarily reduces the attractiveness of costly insulation having the lowest conductance.

SAFETY

Precautions are needed for the safe handling and storage of cryogens because (1) air can contaminate the cryogen, creating a potential explosion hazard, either directly, as when air mixes with H2 or CH4, or indirectly, by transforming an inert cryogen like LN2 into an oxidant (a potential hazard for combustible insulation), and (2) moisture and air can freeze on cold surfaces, clogging vents and preventing normal operation of valves. Vent systems should be sized for sufficient exhaust velocities to prevent back-diffusion of air into the cryogen space.

Table 19.2.11  Comparison of Materials for Support Members

Material                                  Ey,*¶ kpsi (MPa)    k,* Btu/(h·ft·°R) [W/(K·m)]    Ey/k, kpsi·h·ft·°R/Btu [MN·K/(W·m)]
Aluminum 2024                             55 (379)            47 (81)                        1.17 (4.68)
Aluminum 7075                             70 (483)            50 (87)                        1.4 (5.55)
Copper, ann                               12 (83)             274 (474)                      0.044 (0.18)
Hastelloy† B                              65 (448)            5.4 (9.3)                      12 (48)
Hastelloy† C                              48 (331)            5.9 (10.2)                     8.1 (32)
K Monel‡                                  100 (690)           9.9 (17.1)                     10.1 (40)
Stainless steel 304, ann                  35 (241)            5.9 (10.2)                     5.9 (24)
Stainless steel (drawn 210,000 lb/in²)    150 (1034)          5.2 (9.0)                      29 (115)
Titanium, pure                            85 (586)            21 (36.3)                      4.0 (16)
Titanium alloy (4Al–4Mn)                  145 (1000)          3.5 (6.1)                      41.4 (164)
Dacron§                                   20 (138)            0.088 (0.15)¶                  227 (920)
Mylar§                                    10 (69)             0.088 (0.15)¶                  113 (460)
Nylon§                                    20 (138)            0.18 (0.31)                    111 (445)
Teflon§                                   2 (14)              0.14 (0.24)                    14.3 (58)

* Ey is the yield stress, and k is the average thermal conductivity between 36 and 540°R (20 and 300 K).
† Haynes Stellite Co.
‡ International Nickel Co.
§ E. I. du Pont de Nemours & Company.
¶ Room-temperature value.
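The Ey/k ratio in Table 19.2.11 can be turned into a concrete comparison: for supports sized to carry the same load at the same fraction of yield, the conducted heat scales as k/Ey. The sketch below is illustrative only (a simple rod in pure tension with an arbitrary load, length, and safety factor of my own choosing), comparing stainless steel 304 with Dacron using Q = kA(T2 − T1)/L.

# Heat conducted by a tension support sized so that stress = yield strength / safety factor.
MATERIALS = {
    # name: (yield strength, kpsi; mean conductivity 36-540°R, Btu/(h·ft·°R)), from Table 19.2.11
    "stainless steel 304": (35.0, 5.9),
    "Dacron": (20.0, 0.088),
}

def support_heat_leak(name, load_lbf, length_ft, T_warm_R, T_cold_R, safety=4.0):
    """Conduction along a rod sized for the given tensile load, Btu/h."""
    ey_kpsi, k = MATERIALS[name]
    area_in2 = load_lbf / (ey_kpsi * 1000.0 / safety)   # required cross section, in²
    area_ft2 = area_in2 / 144.0
    return k * area_ft2 * (T_warm_R - T_cold_R) / length_ft

for name in MATERIALS:
    q = support_heat_leak(name, 2000.0, 1.0, 540.0, 140.0)
    print(f"{name}: ~{q:.2f} Btu/h")

For this case the metal support conducts roughly 40 times more heat than the Dacron member, consistent with the ratio of the tabulated Ey/k values.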


Storage at slightly elevated pressure is preferable to prevent in-leakage of contaminants. Condensation can also be a problem on poorly insulated lines carrying low-boiling cryogens; frost is not a hazard, but the oxygen-enriched condensed air is hazardous in the presence of combustibles. The insulation of liquid vent lines is a necessary precaution. Where this is not possible, appropriately placed guttering can prevent the contact of any condensed air with combustible objects.

Any space, either containing a cryogen or being refrigerated by a cryogen, should be protected with proper safety-relief mechanisms, i.e., relief valves or rupture disks. Such spaces include less obvious ones such as the insulating vacuum space surrounding a cryogenic fluid or a section of line which can trap liquid between two valves. A heat leak to trapped liquid confined without a vent for escape of vapor may develop pressures sufficient to burst the containing vessel. Gas pressures necessary to maintain liquid density at ambient temperature are, for helium, 18,000 lb/in² (124 MPa); for hydrogen, 28,000 lb/in² (193 MPa); and for nitrogen, 43,000 lb/in² (296 MPa).

The selection of structural materials calls for consideration of the effects of low temperatures on the properties of those materials. A number of otherwise suitable materials become brittle at low temperature (Fig. 19.2.10). Materials must perform satisfactorily over the complete range from cryogenic to room temperature, with hydrogen embrittlement particularly significant. While hydrogen embrittlement is not believed to be a problem at liquid-hydrogen temperatures, it has been shown to occur at room temperature. The 300-series stainless steels, aluminum alloys, and BeCu, among other alloys, are more resistant to hydrogen embrittlement.



In calculating allowable stresses, room temperature yield and tensile strengths should be used because (1) pressure testing of containers at ambient temperature is frequently necessary, and (2) in the case of large vessels, large unknown temperature gradients can exist above the liquid level both in the ullage space and in the walls of the vessel. The 300series stainless steels are normally austenitic (f.c.c.). However, in some 300-series stainless steels austenite is partially transformed by coldworking and possibly by temperature cycling to martensite. As martensite is brittle at low T ’s, the ductility of these steels is reduced by this kind of treatment. For this reason, it is recommended as an added safety factor that room temperature strength of the 300 series stainless steels be used for the design of structural members. Careful stress analysis is essential and must include stresses due to (1) thermal contraction of equipment, and (2) radial, axial, and circumferential temperature gradients caused by non-uniform cooling rates. The latter are frequently encountered in the cool-down of cryogenic transfer lines. Cryogenic equipment should be purged before being placed in service to remove unwanted condensables and gases that could form explosive mixtures. Commonly used methods are (1) evacuating and back-filling with purge gas and (2) flowing of purge gas through the system. The cryogen can be a direct hazard to personnel. Cryogen “burns” can result from direct contact with either the cryogen or uninsulated equipment containing the cryogen. The large evolution of gases associated with cryogenic spills can result in asphyxiation. Asphyxiation is a hazard in entering a warmed cryogenic vessel. Further safety details are found in the references.

19.3 OPTICS

by Patrick C. Mock

REFERENCES: "AIP Handbook," McGraw-Hill. Born and Wolf, "Principles of Optics," Pergamon Press. "Photonics Handbook," Laurin Publishing Co. Feynman, "QED: The Strange Theory of Light and Matter," Princeton University Press. Smith, "Modern Optical Engineering," McGraw-Hill. Miller and Friedman, "Photonics Rules of Thumb," McGraw-Hill. "Diffraction Grating Handbook," Richardson Grating Laboratory.

FUNDAMENTAL THEORIES OF LIGHT

Optics is the science of light, and light is photons. The fundamental theory of photons is quantum electrodynamics, and Richard Feynman explains it with great clarity in his book for the layman, "QED: The Strange Theory of Light and Matter." Photons are the particles that mediate the electromagnetic force, and they have three fundamental properties: energy, polarization, and constant velocity in a vacuum. The speed of light in a vacuum, c, is a fundamental physical constant with the defined value of 299,792,458 m/s (often approximated as 3 × 10⁸ m/s). Every photon has an energy, which determines its frequency and wavelength. These are related by E = hf and λ = c/f, where E is the photon energy, h is Planck's constant, f is the frequency, and λ is the wavelength. Polarization is fundamentally related to the quantum spin of the photon. The classical interpretation is that a photon is an electromagnetic wave packet with perpendicular electric and magnetic fields that are also perpendicular to the direction of propagation. The polarization is the direction of the electric field component. Light is photons with wavelengths shorter than the resolution of the human eye and long enough that their behavior is primarily wavelike (roughly 0.1 to 300 μm). It is a small portion of the electromagnetic spectrum (Fig. 19.3.1). Visible light is the small band with wavelengths of 400 to 760 nm. Longer wavelength light is infrared radiation, and shorter wavelength light is ultraviolet radiation. The human eye is most

sensitive near 560 nm. The interactions that light can have with matter are reflection, refraction, interference, diffraction, polarization, absorption, and scattering; however, the last three are beyond the scope of this section.
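As a quick numerical illustration of E = hf = hc/λ, the sketch below converts a wavelength to photon energy. The function name and the electronvolt conversion are my own additions, not part of the handbook text.

# Photon energy for a given vacuum wavelength.
H_PLANCK = 6.626e-34     # J·s, Planck's constant
C_LIGHT = 2.99792458e8   # m/s, speed of light in vacuum

def photon_energy_ev(wavelength_m):
    """Photon energy in electronvolts, E = hc/lambda."""
    energy_joules = H_PLANCK * C_LIGHT / wavelength_m
    return energy_joules / 1.602e-19

# Green light near the peak sensitivity of the eye (560 nm):
print(f"{photon_energy_ev(560e-9):.2f} eV")   # ~2.2 eV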

GEOMETRIC OPTICS: REFLECTION AND REFRACTION

Reflection and refraction govern geometric optics. The speed of light in an optical material, c′, is always smaller than the speed of light in a vacuum. The index of refraction, n = c/c′, is an important optical property of any material. The refractive index for most optical materials depends on the wavelength of the light, and it is typically between 1 and 4. This wavelength dependence is called dispersion. A beautiful example of the effects of refraction and dispersion is a rainbow. The reflection and refraction of a light ray from a flat interface between two media is shown in Fig. 19.3.2. The plane of incidence is defined by the incident light ray and the surface normal at the point of incidence. The law of reflection states that the reflected ray lies in the plane of incidence, and that the angle of reflection is equal to the angle of incidence, I″ = I, as measured from the surface normal. The law of refraction (also known as Snell's law) states that the refracted ray lies in the plane of incidence, and the angle of refraction is related to the angle of incidence by n′ sin I′ = n sin I. Total internal reflection occurs when n > n′ and (n/n′) sin I ≥ 1. The incident ray is reflected with no loss and the transmitted ray vanishes. The critical angle, Ic = arcsin (n′/n), is the minimum angle of incidence at which this occurs. The refraction of a light ray at a spherical interface between two media with radius of curvature R is shown in Fig. 19.3.3. The optical axis passes through the center of curvature. The image distance l′ is related to the object distance l by n′/l′ − n/l = (n′ − n)/R, when the



Fig. 19.3.1 Electromagnetic spectrum.

rays are close to the optical axis (the paraxial approximation). The magnification m is given by m = h′/h = nl′/(n′l). If the image is inverted, the magnification is negative. A typical optical assembly contains several lenses, and each lens has two surfaces. The optical properties of the assembly can be found by starting with the object and determining the image formed by the first surface. The object for each subsequent surface is the image formed by the previous surface.

Fig. 19.3.2 Reflection and refraction at a plane surface.

Fig. 19.3.3 Refraction at a spherical surface; all parameters are shown in their positive sense.

A thin lens has a thickness much less than its diameter. The image distance lI for a thin lens is related to the object distance lO by 1/lO + 1/lI = (1/r1 − 1/r2)(n′ − n)/n, where r1 and r2 are the radii of curvature of the surfaces of the lens, and n and n′ are the indices of refraction of the medium and the lens, respectively. It is derived by applying the refraction equation above to the two surfaces of the thin lens. The focal length f of a lens is the image distance when the object is at infinity. This leads to the lens maker's equation 1/f = (1/r1 − 1/r2)(n′ − n)/n and the gaussian lens equation 1/f = 1/lO + 1/lI.

The optical invariant (also known as the Lagrange invariant) is a quantity that is conserved at every surface of an optical system. In its simplest form, the product of the image size and ray angle is a constant. It implies that the field of view and useful aperture are inversely proportional; i.e., a large telescope has a very small field of view, and a wide-angle camera has a small aperture.

The relative illumination of the image from the object is proportional to the area of the aperture and inversely proportional to the size of the image. This relationship is characterized by two optical parameters, the f number and the numerical aperture NA. The f number is the ratio of the focal length to the diameter of the clear aperture of a lens system, f/# = f/D, and it typically is used to characterize systems with a large image or object distance. Since the image size is proportional to the square of the focal length, the square of the f number is inversely proportional to the relative intensity. For example, a telescope with a 1-m aperture and a 10-m focal length has an f number of 10, customarily written as f/10. Numerical aperture is defined as NA = n sin φ, where φ is the maximum ray angle that passes through the optical system, and it is typically used to characterize systems with small image and object distances. When this angle is small, f/# ≈ 1/(2 NA).

THE WAVE NATURE OF LIGHT: INTERFERENCE AND DIFFRACTION

Interference is the interaction of waves. The interference is constructive

when the waves are in phase and destructive when they are not. Diffraction is the interaction of a wave with an aperture, which often results in the bending of the wave at the edge of the aperture. Both phenomena depend on the wavelength of the wave.

A common example of optical interference is the colors observed in a thin film of oil on water. Light reflected from the oil/water boundary interferes with light reflected from the air/oil boundary. The apparent color depends on the thickness of the oil layer and the viewing angle. As these change, different wavelengths interfere constructively and destructively along the path to the eye. When the light source is monochromatic, the constructive and destructive interference leads to the formation of light and dark regions known as fringes. If the fringe pattern is caused by the interference of two beams following different paths from the same source, the fringe pattern can be analyzed to determine path-length differences with an accuracy smaller than the wavelength of the light (see "Photonics Handbook"). For example, the flatness of an unknown surface can be compared with the flatness of a reference surface. Instruments that make these measurements are known as interferometers. The most common types of interferometers are the Fizeau, Michelson, Twyman-Green, and Fabry-Perot.

Diffraction limits the resolution of any optical imaging system. The limit depends on the size of the aperture and the wavelength. The angular resolution of a circular aperture is given by φ = 2.44λ/D, where D is the diameter of the aperture. For an aperture with an arbitrary shape, the far-field diffraction limit is approximately φ ≈ λP/(π²A), where P and A are the perimeter and area of the aperture, respectively.

A diffraction grating is a set of finely spaced narrow apertures with a spacing comparable to the wavelength of the light to be measured. The grating can be imposed on a reflective or transparent surface. The light diffracted by each aperture in the grating interferes with light from nearby apertures. If the surface in Fig. 19.3.2 is a reflective diffraction grating, constructive interference occurs when mλ = d(sin I + sin I″), where d is the spacing and m is an integer which indicates the diffraction order. Physically realizable diffraction orders exist for |mλ/d| < 2.
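The relations in this section lend themselves to quick numeric checks. The sketch below is my own illustration (the assumed glass index of 1.5 and the function names are not from the handbook); it evaluates Snell's law, the critical angle for total internal reflection, and the diffraction-limited angular resolution of a circular aperture.

import math

def snell_refraction_angle(n_in, n_out, incidence_deg):
    """Refraction angle from n' sin I' = n sin I; returns None past the critical angle."""
    s = n_in * math.sin(math.radians(incidence_deg)) / n_out
    return None if abs(s) > 1.0 else math.degrees(math.asin(s))

def critical_angle(n_dense, n_rare):
    """Minimum incidence angle (degrees) for total internal reflection, Ic = arcsin(n'/n)."""
    return math.degrees(math.asin(n_rare / n_dense))

def airy_angular_size(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution of a circular aperture, phi = 2.44*lambda/D, rad."""
    return 2.44 * wavelength_m / aperture_m

print(snell_refraction_angle(1.0, 1.5, 30.0))    # ~19.5 deg, air into glass
print(critical_angle(1.5, 1.0))                  # ~41.8 deg, glass-air interface
print(airy_angular_size(560e-9, 1.0))            # ~1.4e-6 rad for a 1-m aperture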

SPECIAL TOPICS: LASERS, FIBERS, AND OPTICAL DESIGN

Lasers have become a ubiquitous optical tool. The term laser is an

acronym for light amplification by the stimulated emission of radiation. Stimulated emission is the underlying physical process that all lasers exploit. The result is an intense, monochromatic optical source that is nearly parallel and spatially coherent. Lasers are used in a broad array of applications, including CD and DVD players, fiber-optic and free-space telecommunications, photolithography, machining, photomedicine, interferometry, and surveying.

An optical fiber exploits total internal reflection to transmit light with very little attenuation. A simple bare glass fiber suspended in air will transmit light, but surface contamination and surface defects cause high losses. An optical fiber achieves extremely low attenuation by surrounding a high-index core with a low-index cladding, which protects


the surface of total internal reflection. Single mode optical fibers, used in telecommunications, achieve losses as low as 0.2 dB/km. Large optical fibers are effective light pipes that can illuminate hard-to-reach areas. Coherent bundles of optical fibers are used to transmit images from hard-to-reach areas. Optical design is the science of precisely manipulating light. Simple optical problems can be solved with the formulas in this section or the references. However, these ideal formulas ignore distortions, aberrations, and tolerances that affect all real optical systems. Sophisticated optical computer-aided design tools use precise ray tracing to accurately predict the performance of an optical design. They include libraries of design templates that can be adapted to meet the requirements of common optical systems and powerful optimizers to maximize the performance of the optical system. The “Photonics Handbook” has more information on optical CAD tools.

Section

20

Emerging Technologies and Miscellany BY

LUC G. FRÉCHETTE Professor, Mechanical Engineering Department, Université de Sherbrooke, Canada
ALEXANDER COUZIS Professor, Chemical Engineering, The City College of the City University of New York
JACKIE JIE LI Professor, Mechanical Engineering, The City College of the City University of New York
FARID M. AMIROUCHE Professor, Department of Mechanical and Industrial Engineering, University of Illinois at Chicago
DOYLE KNIGHT Professor, Department of Mechanical and Aerospace Engineering, Rutgers University
YIANNIS ANDREOPOULOS Professor, Department of Mechanical Engineering, The City College of the City University of New York
ALI M. SADEGH Professor, Department of Mechanical Engineering, The City College of the City University of New York
SRIRANGAM KUMARESAN Biomechanics Institute, Santa Barbara, Calif.
ANTHONY SANCES, JR. Biomechanics Institute, Santa Barbara, Calif.
PAUL CAVALLARO Department of Navy, NUWC, Newport, Rhode Island
ANDREW GOLDENBERG Professor, Mechanical and Industrial Engineering, University of Toronto, Canada; President and CEO of Engineering Services Inc. (ESI), Toronto

20.1 AN INTRODUCTION TO MICROELECTROMECHANICAL SYSTEMS (MEMS) by Luc G. Fréchette Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-3 Microfabrication Technology and Materials . . . . . . . . . . . . . . . . . . . . . . . . . 20-3 Processing for MEMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-6 Engineering Science and Practice for MEMS Design. . . . . . . . . . . . . . . . . . 20-8 Physics and Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-11 Beyond Sensors and Actuators: Integrated Microsystems . . . . . . . . . . . . . . 20-12 Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-12 20.2 INTRODUCTION TO NANOTECHNOLOGY by Alexander Couzis A Brief History of Nanotechnology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-13 Semiconductor Fabrication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-14 Alternative Architectures for Nanocomputing . . . . . . . . . . . . . . . . . . . . . . 20-14 Medical Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-14 Molecular Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-15 Carbon Nanotubes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-15 20.3 FERROELECTRICS/PIEZOELECTRICS AND SHAPE MEMORY ALLOYS by Jacqueline J. Li Ferroelectrics/Piezoelectrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-20 Shape Memory Alloys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-25

20.4 INTRODUCTION TO THE FINITE-ELEMENT METHOD by Farid Amirouche Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-28 Basic Concepts in the Finite-Element Method . . . . . . . . . . . . . . . . . . . . . . 20-29 Potential Energy Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-30 Closed-Form Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-30 Weighted Residual Method (WRM). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-31 Galerkin Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-31 Introduction to Truss Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-32 Heat Conduction Analysis and FEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-37 Formulation of Global Stiffness Matrix for N Elements . . . . . . . . . . . . . . . 20-40 2-D Heat Conduction Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-41 20.5 COMPUTER-AIDED DESIGN, COMPUTER-AIDED ENGINEERING, AND VARIATIONAL DESIGN by Farid Amirouche Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-44 Parametric and Variational Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-45 Engineering Analysis and CAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-46 Computer-Aided Engineering (CAE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-47 CAE Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-47 Virtual Reality and Its Applications to Product Design and Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-47 Prototyping Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-49 The Use of Virtual Reality in Interactive Finite-Element Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-50 20-1



20.6 INTRODUCTION TO COMPUTATIONAL FLUID DYNAMICS by Doyle Knight Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-51 Governing Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-51 Vorticity and Kelvin’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-51 Irrotational Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-52 Boundary Layer Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-56 Navier-Stokes Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-59 20.7 EXPERIMENTAL FLUID DYNAMICS by Yiannis Andreopoulis Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-63 Background Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-63 Classification of Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-64 General Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-65 Measurements of Pressure and Temperature . . . . . . . . . . . . . . . . . . . . . . . 20-66 Measurements of Velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-67 Measurements of Vorticity, Rate of Strain, and Dissipation . . . . . . . . . . . . 20-72 Micro- and Nanosensors and Associated Techniques . . . . . . . . . . . . . . . . . 20-75 20.8 INTRODUCTION TO BIOMECHANICS by Ali M. Sadegh Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-79 Introduction—What Is Biomechanics? . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-79 Biomechanics of Bone. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-79 Biomechanics of Articular Cartilage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-82 Biomechanics of Tendons and Ligaments . . . . . . . . . . . . . . . . . . . . . . . . . 20-83 Biomechanics of Muscles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-83 Biomechanics of Joints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-84 Biomechanics of the Spine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-90 Biomechanics of the Shoulder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-98 The Elbow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-98 Biomaterials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-99 Occupational Biomechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-99 Prostheses and Implants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-99 20.9 HUMAN INJURY TOLERANCE AND ANTHROPOMETRIC TEST DEVICES by Srirangam Kumaresan and Anthony Sances, Jr. Human Injury Tolerance Related to Automotive Safety . . . . . . . . . . . . . . 20-104 Biomechanical Basis to Develop Anthropometric Test Dummies for Automobile Crash Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-106

20.10 AIR-INFLATED FABRIC STRUCTURES by Paul V. Cavallaro and Ali M. Sadegh Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-108 Description of Air-Inflated Fabric Structures . . . . . . . . . . . . . . . . . . . . . . 20-108 Fiber Materials and Yarn Constructions . . . . . . . . . . . . . . . . . . . . . . . . . . 20-108 Effects of Fabric Construction on Structural Behavior . . . . . . . . . . . . . . . 20-109 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-110 Inflation and Pressure Relief Valves . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-110 Continuous Manufacturing and Seamless Fabrics . . . . . . . . . . . . . . . . . . 20-110 Protective Coatings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-110 Rigidification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-110 Air Beams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-111 Drop-Stitch Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-113 Effects of Air Compressibility on Structural Stiffness . . . . . . . . . . . . . . . 20-113 Experiments on Plain-Woven Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-114 A Strain Energy-Based Deflection Solution for Bending of Air Beams with Shear Deformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-115 Analytical and Numerical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-116

20.11 ROBOTICS, MECHATRONICS, AND INTELLIGENT AUTOMATION by Andrew A. Goldenberg Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-118 Typical Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-120 Technological Update on Some Emerging Applications . . . . . . . . . . . . . . 20-131 Trends and Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-132

20.12 RAPID PROTOTYPING by Ali M. Sadegh Rapid Prototype Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-132 Rapid Prototyping Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-133 Applications of Rapid Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-134

20.13 MISCELLANY by E. A. Avallone Preferred Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-135 Professional Engineering Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-135 Publishing and Editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-137

20.1 AN INTRODUCTION TO MICROELECTROMECHANICAL SYSTEMS (MEMS)
by Luc G. Fréchette

INTRODUCTION

Trend toward Small Scales

The twentieth century was marked by mankind’s accomplishments at ever increasing scales, from civil works such as bridges that span rivers to sophisticated machinery such as airplanes and spacecraft that span our planet and beyond. Although this quest for immensity will remain vibrant, the twenty-first century will likely be marked by achievements at smaller scales. This trend is best demonstrated by the information revolution experienced by our society over the past decades, which was a result of the remarkable development of integrated circuits (ICs), itself enabled by progress in semiconductor manufacturing. The microfabrication approach developed for ICs is based on parallel processing of large arrays of devices on a planar substrate, using photolithography and thin-film processing in a clean room environment. The semiconductor industry leverages this approach to make devices with impressive levels of performance, reliability, and complexity. Today, microprocessors integrating over 100 million transistors per chip can be batch-fabricated in large volumes and at low cost. Such capabilities are expanding beyond microelectronics to allow the microfabrication of sensors, actuators, and microsystems that implement functions from other physical domains, such as mechanical, thermal, fluidic, magnetic, electrical, optical, chemical, or biological domains. The field of microelectromechanical systems, or MEMS, focuses on the miniaturization of such a diverse set of systems through the use of microfabrication technology. It is also referred to as microsystems technology, or MST, in Europe and micromachines in Asia.

Microsystems and Their Applications

The scale of a device or machine is set by the task it must accomplish as well as the fabrication technology available to implement it. For example, information storage devices, such as hard disk drives and solid-state memory, increase in storage density as technology enables smaller bit size, since bits of information do not have a physical scale associated with them. Similarly, devices pertaining to other physical domains could also be miniaturized to the extent that technology allows, if commercially beneficial. A prime example is sensors, i.e., devices that measure a physical quantity, often by converting it to an electrical signal. Most measurement applications would benefit from local, compact, nonintrusive, and inexpensive sensors, suggesting that batch microfabrication may be appropriate. Pressure sensors (Fig. 20.1.1a) and accelerometers (Fig. 20.1.7a) are common examples. Actuators that act on the environment by converting an electrical signal into a mechanical displacement or an effect in another physical domain can also potentially be miniaturized, as long as the power transmitted to the environment is small and the action is highly localized. Examples include inkjet printer heads, micromirror arrays for image projection (Fig. 20.1.1b), and microvalves for flow control (Fig. 20.1.7c). Microfabrication and MEMS technology can also enable microsystems that would otherwise be impossible or impractical, such as microfluidic labs-on-a-chip that allow interaction with molecules or cells at their length scales (0.1–10 μm).

The annual market for MEMS is estimated at US$8.2 billion worldwide for 2007,1 in the following areas: microfluidics (27 percent), inertial sensors (22 percent), optical MEMS (22 percent), pressure sensors (11 percent), RF MEMS (3 percent), and other sensors (10 percent) and actuators (5 percent). These devices are used in the automotive industry, in consumer electronics and computers, for telecommunications, as well as in instrumentation for biochemistry and medicine. The field has experienced an average growth of 16 percent annually since 2000, as a result of novel applications of MEMS and insertion of this disruptive technology into existing markets.

MICROFABRICATION TECHNOLOGY AND MATERIALS

The key enabling technology responsible for the rapid development of microsystems over the past decade is microfabrication. Most of today's computer chips are manufactured by the CMOS (complementary metal-oxide semiconductor) process, which is a sequence of steps that repeatedly deposit and pattern thin films to form an integrated circuit with transistors. Silicon, typically used in microelectronics, has become the dominant material for MEMS because of its material properties (semiconductor, mechanical, thermal) and the extensive processing experience with the silicon material set pioneered by the semiconductor industry. This section describes the underlying microfabrication technologies and materials borrowed from this industry that are the basis for silicon MEMS processing. To gain further insight on microfabrication science and technology, beyond what will be presented here, the reader is referred to classic texts on the subject.2,3

Fig. 20.1.1 Commercial examples of microelectromechanical systems (MEMS). (a) Automotive piezoresistive pressure sensor and (b) micromirror array for display applications. The pressure sensor (a) measures the deflection of a thin silicon diaphragm due to an applied pressure by monitoring a strain-sensitive resistor fabricated on the diaphragm through on-chip circuitry. (Photo: Bosch.) The micromirror array (b) is an electrostatic actuator that redirects reflected light over 1 million independent pixels to digitally project an image on a screen. (Photo: Texas Instruments.)

Silicon and Its Properties

Most microfabrication processes create devices on the surface or through the thickness of thin, planar, circular substrates, or wafers. Single-crystal silicon wafers are the most common starting material, although glass, quartz (single-crystal and fused), and compound semiconductor wafers (GaAs, InP) are also of interest. As provided by the semiconductor industry, single-crystal silicon wafers offer unique properties, such as: (1) stable, defect-free bulk material of high chemical purity; (2) precise substrates with low wafer bow (<25 μm), minimal taper (<15 μm), and mirror-polished surfaces (one or both sides); (3) semiconductor properties and the possibility of integrating MEMS with electronics. Wafers are characterized by the orientation of the crystal's cubic structure with respect to the wafer surface. As illustrated in Fig. 20.1.2, principal planes and directions in single-crystal silicon can be identified by Miller indices.2 Specific planes are denoted by (h, k, l), where the indices are components of the direction vector [h, k, l] that is normal to the plane. Since the lattice of silicon is cubic, hence symmetric, the three principal directions are identical and interchangeable. Therefore, we commonly refer to families of directions, written ⟨100⟩ to identify the [100], [010], and [001] directions, and equivalently to families of planes, written {100}, for the planes normal to these directions. Wafers are classified by the direction vector that is normal to the wafer surface, so, for example, ⟨100⟩ wafers will have the cubic lattice parallel to the wafer surface. As will be discussed later, the crystalline structure of silicon has been leveraged to create useful structures such as the pressure sensor membrane in Fig. 20.1.1a by anisotropic wet etching. Doping, by introducing impurities into the silicon lattice, controls the substrate's electrical resistivity (which can range from 0.001 to 10,000 Ω·cm) and also defines the wafer type (Group III atoms such as boron produce p-type material, Group V atoms such as phosphorus produce n-type material). Mechanically, single-crystal silicon is a strong and light material (ρ = 2.33 g/cm³, E ≈ 130 to 187 GPa, ν ≈ 0.06 to 0.28, and G ≈ 57 to 79 GPa depending on orientation2), but it exhibits a brittle behavior with a fracture strength that is process and size dependent (typically 1.2 GPa, but it can vary from 0.5 to 7 GPa).4 Traditional machining of silicon is therefore not common, since it introduces surface cracks that can significantly reduce the fracture strength; etching techniques are used instead. Thermally, silicon is a good conductor (k ≈ 150 W/(m·K)) and exhibits less thermal expansion and higher specific heat than most metals (CTE = 2.6 × 10⁻⁶/K, cp ≈ 0.713 kJ/(kg·K) at 300 K), which tends to reduce thermal stresses. It can sustain relatively high temperatures (brittle-to-ductile transition above 550°C and melting at 1421°C), although the electronic properties of silicon are best conserved below 150°C. These electronic properties are interesting for electromechanical transduction in sensors and will be discussed later.


Thin-Film Processing

A broad range of thin films can be deposited or grown on the surface of a substrate to form mechanical or electrical elements, such as membranes, mechanisms, or circuits. Materials include silicon or silicon-based compounds, metals, and polymers. The most common thin films are summarized in Table 20.1.1 with their characteristics and typical microfabrication approaches. Conductive layers can be formed of metals or doped semiconductors, such as polycrystalline silicon (referred to as polysilicon) diffused with boron or phosphorus dopants. Electrical insulation is typically created with silicon dioxide (SiO2) or silicon nitride (Si3N4) films. Materials can be deposited in thin films by various techniques, such as physical vapor deposition, chemical vapor deposition, electrochemical deposition, or casting.


Fig. 20.1.2 Crystal lattice and Miller indices for silicon. The unit cell (a) has the structure of diamond, with each silicon atom covalently bonded to its four adjacent neighbors. The major planes, directions, and corresponding Miller indices are illustrated in (b).

Table 20.1.1 Common Substrate and Thin-Film Materials Used for MEMS

Material | Typical application | Fabrication process and characteristics | Material characteristics
Silicon (Si) (substrate) | Most common substrate; compliant or rigid structures; electrical and electronic components | Single-crystal substrate; 300–700 μm thick; many micromachining techniques | Defect-free, high-strength, brittle, thermally conductive semiconductor
Quartz (substrate) | Bulk structural materials for active components | Crystalline or fused silica substrate; 400–700 μm thick; limited micromachining | Piezoelectric properties; visually transparent
Borosilicate glass (Pyrex) (substrate) | Insulating substrate; bulk structures for microfluidics | Amorphous glass substrate; 400–700 μm-thick wafers; anodic bonding to Si; limited micromachining | Thermal expansion matched with Si; visually transparent
Polycrystalline silicon (polySi) | Thin-film structures; piezoresistors; conductors | Polycrystalline; LPCVD; 0.2–2 μm-thick films | Process-dependent properties; semiconductor
Silicon dioxide (SiO2) | Sacrificial thin film; insulator (thermal and electric) | Amorphous; thermal growth or PECVD; 0.1–10 μm-thick films | Typically high compressive strength; thermal insulator; dielectric
Silicon nitride (Si3N4) | Membrane; insulator (electric) | LPCVD or PECVD; 0.1–1 μm-thick films | High tensile strength; dielectric
Metals | Electrical conductors; structural layers | Evaporation or sputtering (Al, Au, Cr, Ti, Pt, W); 20 nm to 2 μm-thick films | Ductile; thermal and electrical conductors
Metals (electroplated) | High-aspect-ratio structures | Electrodeposited (Ni); 1–1000 μm-thick films | Ductile; thermal and electrical conductors
PDMS | Microfluidics; molded microstructures | Flexible polymer; cast; 10–1000 μm-thick films | Low stiffness, dielectric, biocompatible
SU-8 | Mold for electroplating; thick structures | Photosensitive epoxy; spin cast; 5–500 μm-thick films | Moderate stiffness, dielectric

Since thin-film materials are synthesized as they are deposited or grown, their properties are strongly related to the specific processing conditions. Material properties can be tailored within a certain range, which is useful to match design requirements but requires stringent process control for repeatability. Film thicknesses depend on the process and can vary from a few nanometres (nm) up to tens of micrometres (μm) [1 m = 10⁶ μm = 10⁹ nm = 10¹⁰ Å (angstroms)]. As a reference, a human hair is approximately 70 μm in diameter.

In physical vapor deposition, a solid piece of the material to be deposited is either vaporized or sputtered in line of sight with the substrate, such that atoms of the material will deposit uniformly on the substrate to form a thin film. Pure metals are easily evaporated, while sputtering can also deposit a wide range of metallic and nonmetallic compounds. Chemical vapor deposition (CVD) involves a chemical reaction to synthesize a material, such as polycrystalline silicon, as it is deposited over the substrate. This process is often carried out in batches in high-temperature furnaces with a controlled flow of reactants and at low or atmospheric pressure (LPCVD or APCVD, respectively). In a similar environment, the silicon substrate can also be directly oxidized by reacting with a flow of oxygen to form SiO2 on the surface (the addition of H2 enhances the oxidation rate and is referred to as wet oxidation, as opposed to dry oxidation when only oxygen is used). The chemical reactions for CVD can also be plasma enhanced (PECVD), which allows deposition at lower temperatures, good control of the film properties, and high deposition rates, but requires expensive serial processing. An alternative approach for metal films, especially thick films (from micrometres to millimetres), is electrochemical deposition, or electroplating. The substrate acts as the cathode, where electrons combine with metallic ions in the liquid electrolyte, to cover exposed areas of the wafer with the metal (commonly Ni, but also Cu, Au, NiFe, or NiW). The substrate can also be covered with solvent-based solutions by directly pouring the solution on the wafer and using gravity or centrifugal forces (spin casting) to form a uniform thin film. Spin-on-glass and photosensitive polymers are commonly applied by spin casting and then heated to evaporate the solvents and solidify.

Photolithography

The photolithographic process is used to transfer a pattern onto a planar substrate, using light. As illustrated in Fig. 20.1.3, a thin photosensitive polymer film, or photoresist, is first spin-cast on the surface of the wafer. The pattern is then exposed over the film by shining UV light through a photolithographic mask. The mask consists of a glass plate coated with a patterned chrome layer to locally expose the photoresist: the mask blocks the light where chrome is present and allows light to shine through where chrome has been removed. The coated wafer is then developed in a liquid solution, physically dissolving the photoresist in the exposed areas, for positive photoresist (for negative resist, dark or unexposed areas would be dissolved instead). This process allows replication of complex, micrometre-scale features over the entire wafer in a single or a few repeated exposures. This parallel fabrication approach is central to the low-cost, high-volume advantage of microfabrication. The feature size is limited to fractions of the wavelength of light used for exposure, but alternative nanolithography techniques, such as electron-beam and nanoimprint lithography, have been developed to reach tens of nanometres.

Etching and Other Patterning Approaches

The pattern created in the photoresist layer is typically transferred to the underlying thin film or into the substrate by etching.


Fig. 20.1.4 Cross-section profiles of etched structures: (a) reactive-ion-etched features with vertical sidewalls and (b) chemical-etch-dominated process (plasma or wet etching) that results in mask undercut and rounded features.

Fig. 20.1.3 Photolithography process: (a) spin-coat a layer of photoresist over the wafer and prebake the film; (b) expose certain areas to UV light; (c) develop the photoresist, leaving a physical mask in the nonexposed areas.

The wafer is immersed in an etchant, which reacts with and removes the exposed material, while the photoresist acts as a physical mask that protects the unexposed areas of the wafer. In wet etching, a chemical in liquid or vapor phase comes in direct contact with the wafer, preferentially reacting with the exposed material. In dry etching, a gas is ionized to create a plasma of free radicals and ions that chemically react with the exposed material or physically ablate the surface through ion bombardment, by driving the ions toward the wafer with an electrical field. This process is referred to as reactive ion etching (RIE) and is most commonly used in microelectronics to form fine patterns in thin films. Given the directionality created by the electrical field applied between the plasma and substrate, anisotropic features with relatively vertical side walls are possible (Fig. 20.1.4a). Chemically dominated etching, such as wet etching, is, however, characterized by isotropic removal of material that effectively undercuts the mask and widens the exposed features (Fig. 20.1.4b). In addition to etch profiles, etch rate and selectivity are important parameters that characterize an etch process. The etch rate is simply defined as the thickness of material removed per unit time, while selectivity is the ratio of the etch rate of the material intentionally removed to that of a secondary material such as the photoresist mask or the underlying substrate (a short worked example is sketched below). Reactive ion etching exhibits poor selectivity to photoresist (near 1:1), while chemically dominated etching can show selectivity above 100:1. Table 20.1.2 summarizes some of the most common combinations of materials and etchants, along with their properties. After etching, the photoresist is commonly removed in an oxygen plasma (ashing) or in a solution (acid or solvent). The wafer is then ready for subsequent processing.

Lift-off is an alternative patterning approach that consists of depositing and patterning the photoresist layer first, then covering the wafer surface with a thin film, commonly a metal. The photoresist is then dissolved to effectively lift off the metal that covers it, while leaving the metal that was deposited in the exposed areas of the mask.
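The interplay of etch rate and selectivity sets a practical budget for any masked etch. The short Python sketch below is an illustrative example (not part of the handbook): it estimates the etch time for a target depth and the minimum mask thickness that survives the etch for an assumed selectivity. The numbers are placeholders drawn loosely from Table 20.1.2 and the DRIE rate quoted later in the text, not recommended process values.

```python
# Hedged example: etch-time and mask-thickness budget for a masked etch.
# All numerical values are illustrative assumptions, not process recommendations.

def etch_budget(depth_um, etch_rate_um_per_min, selectivity, safety_factor=1.5):
    """Return (etch time in minutes, minimum mask thickness in micrometres).

    selectivity = etch rate of the target material / etch rate of the mask.
    A safety factor covers over-etch and rate nonuniformity across the wafer.
    """
    etch_time = depth_um / etch_rate_um_per_min
    mask_loss = depth_um / selectivity          # mask consumed during the etch
    mask_thickness = safety_factor * mask_loss  # margin for over-etch
    return etch_time, mask_thickness

if __name__ == "__main__":
    # Example: DRIE-like etch, 100 um deep, assumed 3 um/min rate and
    # assumed photoresist selectivity of 70:1 (order of magnitude from Table 20.1.2).
    t_min, t_mask = etch_budget(depth_um=100.0,
                                etch_rate_um_per_min=3.0,
                                selectivity=70.0)
    print(f"Etch time: {t_min:.1f} min")
    print(f"Minimum photoresist thickness: {t_mask:.2f} um")
```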

PROCESSING FOR MEMS

The basic microelectronics processing techniques introduced above have been extended to allow the fabrication of movable structures for sensors, actuators, and multiphysical microsystems. The most common approaches are presented next.

Surface Micromachining

One approach consists of depositing and patterning a structural material over a sacrificial thin film. The sacrificial material is then selectively etched to release the structures formed in the top layer. This approach creates thin planar structures that can move with respect to the substrate, normal or parallel to it. Figure 20.1.5 illustrates a surface micromachining process in which alternating layers of polysilicon (to form the structure) and silicon dioxide (as the sacrificial material) are deposited and patterned. Etched openings in the sacrificial layers allow mechanical and electrical connections between the polysilicon layers and attachment to the substrate. A hydrofluoric acid solution can be used to remove the silicon dioxide and release the structures, since it does not etch silicon significantly. This surface micromachining approach is the basis for successful commercial devices, such as automotive accelerometers,5 and foundry services, such as polyMUMPs6 and SUMMIT.7 Other material combinations can be used, such as aluminum and photoresist8 or silicon carbide and polysilicon/silicon dioxide.9 An alternative approach consists of forming the electromechanical structures into the thin films of a CMOS wafer,10 leveraging the substantial CMOS manufacturing resources and allowing natural integration with electronics. The structures can also extend into the silicon substrate by using bulk micromachining techniques.

Table 20.1.2 Common Combinations of Materials and Etching Approaches in Micromachining; Many Other Choices Are Available2

Material to pattern | Common patterning approaches | Characteristics | Masking materials and selectivity
Si, polySi | RIE (Cl-based) | Slow, directional | PR (5:1)
Si, polySi | DRIE (SF6) | Fast, vertical profile | PR (70:1); SiO2 (150:1)
Si, polySi | Wet (KOH, TMAH) | Anisotropic (54.74°) | SiN (100:1)
SiO2, glass, quartz | RIE (CF4) | Directional | PR (10:1)
SiO2, glass, quartz | HF (diluted, BOE) | Isotropic (quartz: anisotropic) | PR (20:1)
Si3N4 | RIE (CF4) | Directional | PR (10:1); SiO2 (50:1)
Si3N4 | Phosphoric acid | Isotropic, wet | SiO2 (10:1); Si (30:1)
Metals | Lift-off (for Au) | Wet, thin film (<1 μm) | —
Metals | RIE (Cl, F, Ar) | Directional, slow | PR (1:1)
Metals | Etchant solutions | Isotropic, wet | PR (100:1)


Fig. 20.1.5 Basic principles of surface micromachining: (a) thin silicon nitride (Si3N4) and silicon dioxide (SiO2) films are deposited, and openings are etched in the SiO2, using lithography and etching; (b) a polysilicon film is deposited (LPCVD) and (c) patterned; (d) the polysilicon structures are released by wet etching of the sacrificial SiO2. Steps (a)–(c) can be repeated to create multilayer structures such as the gear train mechanism (Photo: Sandia National Lab.) shown in (e). Rotary structures are formed with a hub covering part of the rotor (f), separated by SiO2 before release.

Bulk Micromachining

Alternatively, the structures can be formed directly into the substrate, which is referred to as bulk micromachining. Three types of processing methods are available, described by the etchant state: wet, vapor, or plasma (the last two are referred to as dry etching). Although wet etching typically creates isotropic features (rounded etching in all directions), unique formulations can leverage the microstructure of single-crystal silicon to form anisotropic features. Hydroxide-based etchants (such as KOH and TMAH) etch much more slowly in one direction than in the others, exposing the {111} planes of silicon. Depending on the crystallographic orientation of the wafer, {111} planes are either at 54.74° to the wafer surface (⟨100⟩ silicon wafers) or perpendicular to it (⟨110⟩ silicon wafers). Furthermore, the etch rate can also depend on dopant type and concentration, such that areas of the wafer that were previously diffused with a dopant can remain after wet etching. These characteristics allow the fabrication of a wide range of geometries, such as pyramidal pits, channels, cantilevers, or membranes as illustrated in Fig. 20.1.6a, but with geometric constraints tied to the crystal planes.


Fig. 20.1.6 Bulk micromachining of Si: (a) Anisotropic wet etching of Si. Typical anisotropic features that leverage the crystalline structure of Si. The {111} planes and boron-doped regions etch at significantly lower rates than the rest during wet etching with KOH, EDP, or TMAH; (b) deep reactive ion etch (DRIE) structures, showing that unconstrained 2-D shapes are possible, but only with vertical sidewalls.


Commercial pressure sensors are commonly fabricated as single-crystal silicon membranes by anisotropic wet etching, as illustrated in Fig. 20.1.1a. Alternatively, plasma processing can allow deep reactive ion etching (DRIE) into silicon substrates of practically any lithographically defined geometry, as illustrated in Fig. 20.1.6b. The DRIE process combines isotropic plasma etching with sidewall passivation in order to generate deep trenches with vertical side walls,11 reaching aspect ratios above 30:1 and etch rates fast enough (≈3 μm/min) for through-wafer etching. Compared to wet etching, DRIE allows greater design freedom but requires expensive equipment and serial processing.

Wafer Bonding

In contrast to thin structures created by surface micromachining, thick multilayer devices can be formed by joining patterned or blank substrates together in a process referred to as wafer bonding. Three main bonding techniques are available: anodic bonding (glass to silicon), direct fusion bonding (silicon to silicon), and intermediate-layer bonding. The planar substrates are first cleaned and brought into contact in a controlled ambient. In anodic bonding, a potential (500 to 1,000 V) is applied across a contacted pair of sodium-bearing glass and silicon wafers, elevated to a moderate temperature (300 to 450°C). Migration of the sodium ions generates strong attraction forces, and a permanent chemical bond is formed between the substrates. This approach is commonly used to create optically accessible flow channels in microfluidic devices and to thermally and mechanically isolate sensors from their package and environment. Direct fusion bonding relies on molecular diffusion at the interface of two substrates brought into intimate contact. Silicon wafers can be fusion bonded by aligning, pressing, and heating (800 to 1000°C) them. The bond strength can reach that of the crystalline substrate, but the process requires clean and polished surfaces (roughness < 1 nm) and precludes materials that would not withstand the high-temperature process, such as most metals. Alternatively, an intermediate layer can be used to join two substrates, such as eutectic bonds (Au layer), polymers (resists, polyimide), solders, low-melting-temperature glass (frits), or thermocompression bonds. Some intermediate-layer bonds require only low temperatures, but they may not allow long-term vacuum cavities because of outgassing (for glasses, polymers, and solders) and they yield lower bond strengths. Wafer bonding is commonly combined with bulk micromachining or thin-film processes to encapsulate sensors in vacuum or controlled environments (wafer-level packaging), form closed cavities for absolute pressure sensors, or create complex microfluidic devices (such as microturbines12).

Nonsilicon Materials and Processing

In order to meet the requirements for a growing range of applications such as biomedicine, chemistry, and wireless communications, nontraditional microfabrication materials and processes have been developed. Polymers and metal electroplating are highlighted here.

Polymers The need for low-rigidity materials in MEMS actuators and polymers for bioMEMS has led to the development of various thin- and thick-film polymers. For example, vapor-deposited parylene thin films can serve as flexible membranes, joints, and pneumatic actuators.13 Casting of thick polymer layers, such as polydimethylsiloxane (PDMS), allows rapid fabrication of microchannels for biochemical MEMS. In a process referred to as soft lithography,14 PDMS is cast over a micromachined substrate that acts as a mold. After curing, the cast polymer is peeled from the mold and pressed against a glass substrate or another polymer layer to form microfluidic geometries. Microchannels and other thick structures can also be directly formed in standard photoresists or photosensitive epoxy, such as SU-8.15

Electroplating and LIGA In addition to the thin-film metal processing described earlier, thick metal layers can be formed by electroplating. The most common approach to create thick metal microstructures is selective electroplating, in which a seed layer is covered by a patterned thick photoresist or SU-8 mask, such that the metal (e.g., Ni) is plated only in the exposed areas of the mask. The LIGA process (German acronym for lithography, electroplating, and molding) implements this approach by using X-ray lithography to expose very thick photoresist (several millimetres) with vertical sidewalls.16 This yields very-high-aspect-ratio metal parts (up to 70:1) for MEMS and other micromachines.

ENGINEERING SCIENCE AND PRACTICE FOR MEMS DESIGN

Many of the engineering topics covered throughout this handbook can be applied to the design of microsystems. Engineering practice, however, differs as a result of miniaturization and convergence with the semiconductor world. Here we discuss the design approach for MEMS, the effects of scaling on engineering science, and the most common transduction and operating principles, along with examples of MEMS devices. The reader is referred to the literature17 for an extended discussion of MEMS design.

MEMS Design Process

Depending on the device complexity and scope of the application, development of MEMS may entail:
1. Definition of market needs and product specifications. An assessment must be made to determine if MEMS technology is appropriate for the application, by answering the following questions:
• Does a MEMS and microfabrication approach bring unique capabilities and performance levels due to: fundamental scaling, unique properties of micromachined materials, potential integration with microelectronics, or large array integration?
• Is there an economic benefit at the component or system level for using MEMS and microfabrication, such as low unit cost due to batch fabrication, on-chip functionality, or miniaturization?
2. Choice of operating principle and fabrication process, system-level design, and detailed design. The conceptual device is defined by using previously demonstrated technology and creative thinking. The principles of operation and the microfabrication approach are intimately related since the implementation is constrained by available fabrication technologies. Analytical and numerical modeling of increasing complexity along with detailed process flow definition completes the device design. Traditional or specialized CAD software can facilitate this effort.18 Packaging considerations are critical at the design phase to ensure appropriate mechanical, thermal, fluidic, biological, chemical, electrical, magnetic, and/or optical interfaces between the device and the environment.
3. Process development, fabrication, packaging, and testing. Lithographic masks are made with the designed patterns and used for device microfabrication. Each fabrication process is developed or improved to achieve the desired microstructures. Iterations with the design phase may be required, ranging from slight mask modifications to adopting alternative process flows. The process accuracy is characterized between fabrication steps. Basic device operation can be tested before dicing the wafer into individual chips. Each device is then packaged and tested. In many commercial MEMS products, the final packaging and qualification steps represent the major production cost, often more than the cost of making the MEMS device itself.

Principles and Implementation of MEMS Sensors and Actuators

Many MEMS devices are sensors that monitor physical, biological, or chemical aspects of their environment to generate a useful signal, often electrical, that is correlated to the measured quantity. An actuator will provide the inverse function by acting on its environment on the basis of an input signal, also usually electrical. Various transduction principles convert a signal, or energy, from one physical domain to another, and can be implemented via microfabrication. They are of two types, based on either a displacement or a change in physicochemical properties. Most devices serially combine multiple transduction principles on chip to achieve high levels of functionality and sensitivity. The most common approaches are presented here with device examples. The reader is referred to the literature19 for a comprehensive discussion of transduction mechanisms and their MEMS implementations.

Mechanical Elements: Flexures, Mechanisms, and Other Microstructures

Most MEMS consist of micromechanical components and mechanisms that provide a displacement, in combination with other transduction principles. Some of the most common mechanical elements are depicted in Fig. 20.1.7, and they can be classified as flexible structures, mechanisms, or static structures. These include: beams that act as cantilever beams or springs; plates that form membranes, bridges, mirrors, or masses; hub-and-pin joints for rotary gears and motors (see Fig. 20.1.5e–f); hinges for large deflections; and microchannels and other internal flow components to guide fluid flow (see Fig. 20.1.6b). Continuum elasticity modeling is generally applicable for such components, although the geometries can achieve extreme aspect ratios (up to 1000:1) and the material properties can be process-dependent and anisotropic, as discussed previously. These components can be used statically or for a desired dynamic response. Micromachined spring-mass systems can be excited at resonance and used as gyroscopes through the Coriolis effect or as chemical sensors by monitoring changes in stiffness or mass. Dynamic behavior, such as resonance and time response, is also affected by fluid film damping.17 Such sensors are typically packaged under vacuum to reduce this effect, and therefore achieve a large quality factor Q, which is defined as the amplitude at resonance normalized by the displacement at low frequency.
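To make the spring-mass picture concrete, the following Python sketch is an illustrative example with assumed numbers (not values from the handbook): it estimates the natural frequency of a micromachined proof mass on folded-beam springs, its static deflection under 1 g, and the amplitude gain at resonance implied by an assumed quality factor Q.

```python
import math

# Hedged example: natural frequency and resonant gain of a micromachined
# spring-mass resonator. All numerical values are assumptions for illustration.

def natural_frequency_hz(k_n_per_m, mass_kg):
    """Undamped natural frequency f = (1/(2*pi)) * sqrt(k/m)."""
    return math.sqrt(k_n_per_m / mass_kg) / (2.0 * math.pi)

mass = 1.0e-9        # kg, assumed ~1 microgram proof mass
stiffness = 1.0      # N/m, assumed folded-beam spring stiffness
Q = 100              # assumed quality factor for a vacuum-packaged device

f_n = natural_frequency_hz(stiffness, mass)
static_defl_per_g = mass * 9.81 / stiffness   # static deflection per 1 g acceleration

print(f"Natural frequency: {f_n/1e3:.1f} kHz")
print(f"Static deflection per g: {static_defl_per_g*1e9:.2f} nm")
# Per the definition of Q in the text, the displacement amplitude at resonance
# is roughly Q times the low-frequency (static) response.
print(f"Amplitude at resonance (approx.): {Q*static_defl_per_g*1e9:.0f} nm per g")
```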

Capacitive Sensing and Electrostatic Actuation

Displacements can be measured or induced with electrostatic charges. Capacitive sensing consists of measuring the change in capacitance between two electrodes (i.e., conductive structures), as for the commercial accelerometer illustrated in Fig. 20.1.7a and b. Neglecting edge effects, the capacitance between parallel plates is simply expressed as C = εA/g, where a change in capacitance ΔC can result from a change in: overlapping plate area A, due to parallel motion of the plates (electrodes); gap g, due to normal motion of the plates; or even a change in permittivity ε, by insertion of a different material in the gap. When the sensor is combined with on-chip circuits for capacitance measurement, extreme sensitivity can be achieved since parasitic capacitance is reduced, especially when a differential change in capacitance is measured.

Electrostatic actuation is accomplished by applying an electrostatic field between structures attached to flexures or mechanisms that allow relative motion. For simple parallel plates, attraction forces are experienced normal to the plates and parallel to them (when they are offset), as illustrated in Fig. 20.1.8a. Since electrostatic forces are proportional to V² and vary inversely with the spacing (as 1/d or 1/d²), high voltages and small gaps are desirable. The maximum force is limited by breakdown across the air gap, which occurs above 300 V for micrometre-scale gaps at atmospheric pressure. Comb drive actuators provide in-plane motion (Fig. 20.1.8b), while micromirror devices for display applications use electrostatic actuation to tilt a mirror out of plane and redirect light that reflects on an array of individually actuated micromirrors (see Ref. 8 and Fig. 20.1.1b).
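As a quick numerical illustration of these relations, the hedged Python sketch below evaluates the parallel-plate capacitance C = εA/g and the corresponding electrostatic forces, ½εAV²/g² normal to the plates and ½εwV²/g parallel to them (the components annotated in Fig. 20.1.8), for assumed electrode dimensions, gap, and drive voltage; none of the numbers are taken from a specific commercial device.

```python
# Hedged example: parallel-plate capacitance and electrostatic forces.
# Geometry, gap, and voltage below are illustrative assumptions only.

EPS0 = 8.854e-12  # vacuum permittivity, F/m (air is essentially the same)

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """C = eps * A / g, neglecting fringing fields."""
    return eps_r * EPS0 * area_m2 / gap_m

def normal_force(area_m2, gap_m, volts, eps_r=1.0):
    """Attractive force pulling the plates together: F = 0.5*eps*A*V^2/g^2."""
    return 0.5 * eps_r * EPS0 * area_m2 * volts**2 / gap_m**2

def parallel_force(width_m, gap_m, volts, eps_r=1.0):
    """Lateral (comb-drive-like) force along the plate overlap: F = 0.5*eps*w*V^2/g."""
    return 0.5 * eps_r * EPS0 * width_m * volts**2 / gap_m

# Assumed electrode: 100 um x 100 um plates, 2 um gap, 10 V drive.
A = 100e-6 * 100e-6
g = 2e-6
V = 10.0

print(f"C      = {plate_capacitance(A, g)*1e15:.1f} fF")
print(f"F_norm = {normal_force(A, g, V)*1e6:.2f} uN")
print(f"F_par  = {parallel_force(100e-6, g, V)*1e9:.2f} nN")
```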

Piezoresistive Sensing

Strain gages are commonly used in mechanical testing to measure deformations and the related stress in structures. In MEMS, the same principle is used to measure inertial or pressure forces, by transmitting the force to a silicon structure and measuring the induced strain. Crystalline silicon offers the benefit of a large gage factor (GF ≈ 100) compared to metals (GF ≈ 2).


Fig. 20.1.7 Microstructures commonly used in MEMS. (a) Capacitive accelerometer. (Photo: Analog Devices.) The surface micromachined accelerometer consists of a mass supported on folded beams that act as springs. (b) Components and operation. Displacement is measured from the change in capacitance between the stationary and movable electrodes. (c) Thermopneumatic valve. Flexible membranes can be used in valves to control the flow through microchannels, as well as in pressure sensors.


Fig. 20.1.8 (a) Electrostatic attraction forces between electrodes, with components normal and parallel to the structures; for a gap g, plate area A, plate width w, and applied voltage V, the annotated components are F⊥ = εAV²/(2g²) and F∥ = εwV²/(2g). For example, the parallel force is used in (b) comb drive actuators to move an electrode structure held by a spring. This configuration is similar to capacitive accelerometers but it is used for actuation instead of sensing. (Photo: Sandia National Laboratory.)

The gage factor is defined as the relative change in resistance per unit strain, GF = (ΔR/R)/(ΔL/L) = ΔR/(εR). Typical implementations for pressure sensors consist of resistors that are diffused on the surface of a bulk-micromachined silicon membrane (Fig. 20.1.1a). The piezoresistors are connected in a Wheatstone bridge configuration to provide a voltage measurement proportional to the applied pressure difference across the membrane. Piezoresistive accelerometers can operate on a similar principle, by locating a resistor on a silicon beam that supports a proof mass etched from the substrate. Commercial devices can include on-chip temperature compensation, stress isolation from the package, and mechanical stops to prevent excessive stresses.20
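A hedged numerical sketch of this signal chain follows: it converts an assumed membrane strain into a resistance change through the gage factor, then into a differential output voltage for an idealized Wheatstone bridge (V_out = V_supply · ΔR/R for a four-active-arm bridge, roughly a quarter of that for one active arm). The strain level, gage factors, and supply voltage are illustrative assumptions, not data for any particular sensor.

```python
# Hedged example: piezoresistive readout via gage factor and a Wheatstone bridge.
# Strain, gage factors, and supply voltage are illustrative assumptions.

def delta_r_over_r(gage_factor, strain):
    """Relative resistance change: dR/R = GF * strain."""
    return gage_factor * strain

def bridge_output(v_supply, dr_over_r, active_arms=4):
    """Idealized Wheatstone bridge output.

    Full bridge (4 arms, alternating +/- dR): V_out = V_s * dR/R
    Quarter bridge (1 active arm, small dR/R): V_out ~ V_s * dR/R / 4
    """
    if active_arms == 4:
        return v_supply * dr_over_r
    return v_supply * dr_over_r / 4.0

GF_SILICON = 100.0   # order of magnitude for doped single-crystal Si (from the text)
GF_METAL = 2.0       # typical metal gage (from the text)
strain = 200e-6      # assumed 200 microstrain on the diaphragm
v_s = 3.3            # assumed bridge supply, volts

for name, gf in (("silicon piezoresistor", GF_SILICON), ("metal gage", GF_METAL)):
    drr = delta_r_over_r(gf, strain)
    print(f"{name}: dR/R = {drr*100:.3f} %, "
          f"full-bridge output = {bridge_output(v_s, drr)*1e3:.2f} mV")
```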

Piezoelectric Sensing and Actuation

Certain crystalline materials, such as quartz, lead zirconate titanate (PZT), and zinc oxide (ZnO), exhibit an electrical polarization under mechanical strain due to a relative displacement of the negative and positive charge centers in the material's microstructure, a phenomenon called piezoelectricity.2 This behavior can be leveraged for force sensing, but also for actuation, since an applied electric field will in turn induce a mechanical deformation. Traditionally, piezoelectric materials can provide large forces or pressures (>10 MPa) and small displacements (<0.1 μm) at high voltages (>100 V), but piezoelectric thin films applied on flexible beams and membranes in MEMS devices allow larger deflections (up to 10 μm), although at proportionally lower forces. Some commercial ink-jet print heads integrate piezoelectric actuators to generate microdroplets at high frequencies.
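The direct and converse effects can be put in rough numbers with the sketch below. It assumes a generic PZT-like film with a piezoelectric coefficient d33 of a few hundred pC/N, along with illustrative force, capacitance, and voltage inputs; these are assumed values for illustration, not data from the handbook or any specific material datasheet.

```python
# Hedged example: first-order piezoelectric transduction estimates.
# d33, force, capacitance, and voltage below are generic assumed values.

D33 = 300e-12   # C/N (equivalently m/V); assumed PZT-like coefficient

def charge_from_force(force_n, d33=D33):
    """Direct effect: generated charge Q = d33 * F (stress along the poling axis)."""
    return d33 * force_n

def voltage_from_force(force_n, capacitance_f, d33=D33):
    """Open-circuit voltage across the element: V = Q / C."""
    return charge_from_force(force_n, d33) / capacitance_f

def free_displacement(volts, d33=D33):
    """Converse effect: unconstrained thickness change x = d33 * V."""
    return d33 * volts

F = 1e-3        # assumed 1 mN applied force
C = 10e-12      # assumed 10 pF element capacitance
V = 100.0       # assumed drive voltage

print(f"Charge from 1 mN:          {charge_from_force(F)*1e12:.2f} pC")
print(f"Open-circuit voltage:      {voltage_from_force(F, C)*1e3:.1f} mV")
print(f"Free displacement at 100V: {free_displacement(V)*1e9:.1f} nm")
```

The last line is consistent with the text's observation that a plain piezoelectric element yields displacements well below 0.1 μm even at high voltage, which is why MEMS devices use thin films on flexible beams and membranes to amplify the motion.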

Thermal Transduction and Applications

Microsystems can also integrate temperature measurement capabilities, use heat for actuation, or act on their thermal environment. Traditional approaches to measure temperature, such as thermocouples and thermistors, can be implemented via thin-film deposition and patterning to create a solid-state temperature sensor for integration within MEMS devices. For example, resistors can be formed by depositing a thin film of metal or doped polysilicon, characterized by its sheet resistivity Rsheet = ρ/t, where ρ is the material's electrical resistivity and t is the film thickness. Using photolithography and etching (or through lift-off), the resistor can be patterned into a line of total length l and width w, for a resistance of R = Rsheet·l/w (Fig. 20.1.9a). Since the material resistivity is a function of temperature, monitoring the electrical resistance provides a measurement of temperature, according to ΔR/R0 = αΔT, where α is the material's temperature coefficient of resistance (TCR). Thermocouple junctions have also been microfabricated by depositing and patterning multiple thin metal films. Heat flux can be measured by monitoring the temperature of two overlaid resistors or thermocouple junctions separated by a thermally and electrically insulating film, such as SiO2, Si3N4, or polyimide.


Fig. 20.1.9 Thermal sensing and actuation using thin films. A patterned thin-film resistor (a) can act as a temperature sensor by monitoring the change of resistance with temperature, or as a heat source through ohmic heating. A thermal bimorph cantilever (b) will curve when heated, so it can act as a thermal sensor or an actuator when electrically heated.

A resistor can also be used as a heat source, converting electrical energy into heat at a rate of Q̇ = Pelec = Vi = Ri². This allows the integration of electrically controlled heaters in MEMS and microfluidic devices. Heat can be a driving mechanism for actuation by raising the local temperature of microstructures or trapped gases to induce thermal expansion, which then results in a membrane or beam deformation. Micropumps, microvalves, and switches can be based on this operating principle, as illustrated in Fig. 20.1.7c. Here, an enclosed fluid tends to expand because of resistive heating, increasing the pressure in the cavity and pushing the membrane upward to close the valve. Alternatively, out-of-plane deflections of thin-film cantilevers can be achieved by layering materials with different coefficients of thermal expansion (CTE), referred to as bimorph actuators, as illustrated in Fig. 20.1.9b. To minimize the actuation power and provide accurate temperature measurements, thermal isolation can be achieved through the use of high-aspect-ratio structures and low-thermal-conductivity materials, such as SiO2 or Si3N4 films and glass substrates. Microsystems for cooling are less advanced, but potential approaches include thermoelectrics and microchannel heat exchangers, with single- or two-phase flow.
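The resistor relations above lend themselves to a quick design check. The hedged Python sketch below sizes a serpentine thin-film resistor from an assumed sheet resistivity, estimates the temperature rise implied by a measured resistance change through the TCR, and computes the ohmic heating power at a given drive current; all numerical values are assumptions for illustration.

```python
# Hedged example: thin-film resistor used as a temperature sensor and heater.
# Sheet resistivity, TCR, geometry, and drive current are assumed values.

def film_resistance(sheet_res_ohm_sq, length_m, width_m):
    """R = R_sheet * l / w for a patterned resistor line (Fig. 20.1.9a)."""
    return sheet_res_ohm_sq * length_m / width_m

def temperature_rise(r_measured, r_reference, tcr_per_k):
    """Invert dR/R0 = alpha * dT to estimate the temperature change."""
    return (r_measured - r_reference) / (r_reference * tcr_per_k)

def ohmic_power(resistance_ohm, current_a):
    """Heating rate Q = R * i^2."""
    return resistance_ohm * current_a**2

R_SHEET = 10.0        # ohm/square, assumed for a doped polysilicon film
TCR = 3.0e-3          # 1/K, assumed temperature coefficient of resistance
L, W = 2.0e-3, 5e-6   # assumed 2 mm long, 5 um wide serpentine line

R0 = film_resistance(R_SHEET, L, W)
print(f"Nominal resistance R0: {R0:.0f} ohm")

# Sensing: a 3 percent resistance increase implies dT = 0.03 / TCR = 10 K here.
print(f"dT for +3% resistance: {temperature_rise(1.03*R0, R0, TCR):.1f} K")

# Heating: power dissipated at an assumed 1 mA drive current.
print(f"Heater power at 1 mA:  {ohmic_power(R0, 1e-3)*1e3:.1f} mW")
```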

Optical Transduction and Applications

Semiconductor technology was also leveraged to create optoelectronic components such as photodiodes and light-emitting diodes. Silicon-based MEMS can therefore integrate not only microelectronics but also optical elements for a broader range of transduction principles. When combined with mechanical and/or fluidic elements, optoelectronics can measure the displacement of a shutter that partly covers photodiodes, or detect biological species on-chip with an emitter-detector pair along a microfluidic channel. Light manipulation can also be accomplished by reflecting incident light onto discrete, controllable, movable micromirrors. Commercial devices include actuated micromirror arrays that redirect light onto a display8 (Fig. 20.1.1b) and variable diffraction gratings that separate the wavelengths of light for analytical or telecommunication applications.21

Other Transduction Principles and Applications

The range of transduction principles extends beyond the most common ones presented above. It includes principles from material science, such as shape memory alloys; from physics, such as electron tunneling; from chemistry, such as electrochemical spectroscopy; and from biology. Novel approaches are generally spurred by specific needs for sensing, actuation, or miniaturization requirements, but are often limited by the microfabrication technologies and capabilities. The reader is referred to scientific journals for the latest advancements in MEMS science and technology (see references).

PHYSICS AND SCALING

To assess the potential benefits of MEMS technology for a given application or compare various operating principles, it is useful to consider the effects of miniaturization through scaling laws. Two main aspects are to be considered: the length-scale dependence of various physical phenomena and the effect of scale on physical properties and behavior.

Length-Scale Dependence

Most physical phenomena can be classified as volumetric, such as gravity or heat capacity, or surface-dependent, such as viscous forces and heat transfer. When a physical system is miniaturized, volume scales as [l³], whereas surface scales as [l²], where l is a characteristic dimension of the system. The ratio of volume to surface therefore scales as [l³/l²] = [l], such that surface phenomena tend to prevail at small scales; this is referred to as the cube-square law. This first-order analysis is applied for various physical domains in Table 20.1.3, showing, for example, that electrostatic forces, viscous forces, capillary forces, and heat transfer become prevalent upon miniaturization. In particular, at macroscale, electrostatic transduction is not common because of its low energy density compared to magnetic devices. As they are miniaturized, however, electrostatic forces scale as [l²] or remain constant, assuming constant field strength or constant voltage, respectively. In comparison, magnetic forces are more affected by scaling, and their three-dimensional windings are challenging to microfabricate. True scaling may not follow these trends exactly, since the capabilities of micromachining allow aspect ratios and material properties that are significantly different from those possible with traditional machining, as discussed next.
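The cube-square argument can be checked numerically. The short Python sketch below is an illustrative example (not from the handbook) that applies the scaling exponents listed in Table 20.1.3 to show how a few representative forces change when every dimension of a system is shrunk by a common factor.

```python
# Hedged example: first-order scaling of forces when all lengths shrink by a
# common factor s < 1, using exponents of the kind listed in Table 20.1.3.

SCALING_EXPONENTS = {
    "gravity (~l^3)":                      3,
    "inertial force (~l^2)":               2,
    "electrostatic, constant field (~l^2)": 2,
    "viscous force (~l^1)":                1,
    "surface tension (~l^1)":              1,
    "electrostatic, constant voltage (l^0)": 0,
}

def force_ratio(scale_factor, exponent):
    """Ratio of the miniaturized force to the original force: s**exponent."""
    return scale_factor ** exponent

s = 1e-3  # shrink every dimension by 1000x
print(f"Scale factor s = {s:g}")
for name, n in SCALING_EXPONENTS.items():
    print(f"  {name:40s} scales by {force_ratio(s, n):.1e}")
# Gravity falls by a factor of 1e9 while surface-tension and viscous forces
# fall only by 1e3, which is why surface effects dominate at small scales.
```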

Table 20.1.3 Scaling of Forces and Fluxes

Field, forces, fluxes | Basic relation | Scaling proportional to | Note
Fluid mechanics:
  Inertial forces | ½ρAU² | [l²] | Reynolds number: Re = ρUL/μ = inertial/viscous ~ [l¹]
  Viscous forces | τA = μA(∂U/∂y) | [l¹] |
Heat transfer:
  Conduction | q_cond = −kA(∂T/∂y) | [l¹] | Biot number: Bi = convection/conduction ~ [l^1.5]
  Convection | q_conv = hA(Ts − Te), with h ~ l^0.5 | [l^2.5] | Stanton number: St = convection/transport ~ [l^0.5]
  Energy transport | ṁcpT = ρAUcpT | [l²] |
Mechanics:
  Strain energy | ½σεAL = ½(σ²/E)AL | [l³] |
  Beam bending | F = kΔx, k = Ewt³/L³ | [l¹] |
  Natural frequency | ω = √(k/m) | [l⁻¹] |
Electromagnetism:
  Electrostatic forces | F = ½εV²A/d² | [l⁰] | For constant voltage V
  Electrostatic forces | F = ½εV²A/d² | [l²] | For constant field (V/d)
  Magnetic forces | F = B²A/(2μ₀) | [l²] |
  Piezoelectric forces | F = eVA/t | [l¹] |
Physicochemistry:
  Van der Waals | — | [l^1/4] |
  Surface tension | F = γ2πr cos(θc) | [l¹] | Bond number: Bo = surface/gravity ~ [l⁻¹]
  Gravity | F = mg | [l³] |

Physical Properties and Behavior

Not only can commonly neglected forces and phenomena become important at small scales, but physical behavior can shift dramatically. In traditional engineering, the smallest device features are typically much greater than the atomic or molecular scales of the device's constituents, such that the material behavior can be represented by continuum mechanics. As device scales are reduced, this representation may not hold and discrete molecular interactions become important.


In microfluidics, for example, gas flows become rarefied when the mean free path between molecules, λ, is comparable to the channel width d, as represented by the Knudsen number, Kn = λ/d. For Kn < 0.01, the flow can be considered a continuum with zero relative velocity along surfaces. In the range 0.01 < Kn < 0.1, slip flow may occur at the surfaces, which can be accounted for in continuum modeling through appropriate boundary conditions.22 For air at atmospheric conditions, this range corresponds to 0.5- to 5-μm gaps, which are relatively common in MEMS. For Kn > 0.1, however, the flow is best represented by statistical molecular interactions. Similar considerations lead to quantum effects for heat transfer through thin films (less than 0.1 μm) or through nanostructured materials, such as superlattices. Modeling of heat conduction as phonon transport along or through these structures becomes necessary.23

Even in the continuum domain, physical phenomena can differ with scale. In microfluidics, flows are typically laminar because of low Reynolds numbers, Re = ρUd/μ < 1000–2000, where ρ is the fluid density, U is a characteristic velocity, and μ is the fluid viscosity. Such flows are well represented by laminar flow theory, such as Poiseuille flow, Couette flow, and more generally, Stokes flow and the Reynolds equation for lubrication theory.24 The absence of turbulence increases the challenges of mixing in biochemical microsystems, but it can also be leveraged to form layered sheets of fluids with well-defined diffusion-based concentration profiles. For liquids, such as water and electrolytes, electrical surface phenomena can become noticeable in microchannels. A charged double layer can form at the surfaces, such that an electrical field applied along the channel will tend to drive the flow along the wall. Because of the small channel diameter, the core flow is entrained by viscous shear to create a plug-flow, uniform velocity profile. This approach requires high electrical fields, which are achievable in microsystems with reasonably high voltages (up to 1000 V) over short channel lengths (millimetres). This approach is commonly used to drive fluids in micro total analysis and lab-on-a-chip devices, and is referred to as electro-osmosis.
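A quick regime check along these lines is sketched below; it is an illustrative Python example with assumed channel dimensions and approximate gas properties (not values from the handbook) that evaluates the Knudsen number to classify rarefaction and the Reynolds number to confirm laminar flow.

```python
# Hedged example: flow-regime checks for a gas-filled microchannel.
# Channel size, velocity, and gas properties are assumed/approximate values.

MEAN_FREE_PATH_AIR = 68e-9   # m, approximate for air at 1 atm and room temperature
RHO_AIR = 1.2                # kg/m^3, approximate
MU_AIR = 1.8e-5              # Pa*s, approximate

def knudsen(mean_free_path_m, gap_m):
    """Kn = lambda / d."""
    return mean_free_path_m / gap_m

def rarefaction_regime(kn):
    if kn < 0.01:
        return "continuum (no-slip)"
    if kn < 0.1:
        return "slip flow (continuum with slip boundary conditions)"
    return "transitional/free-molecular (statistical treatment needed)"

def reynolds(rho, velocity_m_s, gap_m, mu):
    """Re = rho * U * d / mu."""
    return rho * velocity_m_s * gap_m / mu

d = 2e-6      # assumed 2 um channel gap
U = 0.1       # assumed 0.1 m/s mean gas velocity

kn = knudsen(MEAN_FREE_PATH_AIR, d)
re = reynolds(RHO_AIR, U, d, MU_AIR)
print(f"Kn = {kn:.3f} -> {rarefaction_regime(kn)}")
print(f"Re = {re:.2e} -> {'laminar' if re < 1000 else 'check for turbulence'}")
```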
In electrostatics, the maximum field strength (V/d) at which breakdown occurs is a function of scale. At small gaps (d ≈ 5 μm, atmospheric pressure), the voltage limit stops decreasing and reaches a minimum near 300 V; it even increases at submicrometre gaps. This results from the lack of molecules in the air gap to create the avalanche of collisions that leads to arcing. This benefits electrostatic actuation by increasing its maximum energy density and forces.

In micromechanics, the crystalline microstructure of silicon components formed by bulk micromachining introduces the need for anisotropic continuum mechanics to accurately predict the stress and strain.25 Surface-micromachined devices or other MEMS devices formed of thin films must be designed with consideration for their residual in-plane stress and stress gradient across the film thickness, which are highly dependent on the processing conditions. Material characterization and process control become critical aspects for the performance and reliability of such devices. In addition, the brittle nature of silicon-based materials excludes plastic deformation, and their fracture strength is defined by the maximum flaw size in the part. Fortunately, the probability of having a significant flaw in a brittle part increases with its volume and surface area; hence strength and reliability are favored in small-scale structures.

BEYOND SENSORS AND ACTUATORS: INTEGRATED MICROSYSTEMS

Although most commercial MEMS can be classified as sensors or actuators, there is an increasing trend toward microsystems that integrate multiple components to perform specific applications. For example:
• Photonic and microoptoelectromechanical systems, or MOEMS, that direct, filter, and characterize light for optical telecommunication applications
• Radio frequency (RF) MEMS that generate and transmit high-frequency signals for wireless communication applications
• BioMEMS and lab-on-a-chip devices that perform high-throughput biological assays and revolutionize medical instrumentation
• Micro total analysis systems (μTAS) for synthesis and characterization of chemical and biochemical products
• Power MEMS to provide miniature power sources, cooling, and propulsion for consumer electronics, people, and small robots

CONCLUSIONS

MEMS technology can be considered a toolbox to create miniature sensors, actuators, and integrated microsystems. The main tools are the materials and processes used in microfabrication, the engineering science with emphasis on small-scale phenomena, as well as the sensing and actuating principles. The parallel nature of microfabrication processes such as photolithography, thin-film deposition, and etching allows batch fabrication of devices with high accuracy and in large quantities. In-plane device complexity (in terms of number of components in a system) comes for free, allowing large arrays or complex networks to be implemented with micrometre-scale resolution. The devices can naturally integrate electrical and mechanical elements, as well as optical, fluidic, thermal, chemical, or biological functionality on chip.

The microfabrication infrastructure, equipment, process development, and control are, however, expensive, and are best amortized over large production quantities. To this cost must be added the costs of packaging and testing, which can exceed the cost of the chip itself, and the extensive engineering effort to achieve reproducibility and reliability, as required in commercial products. To be commercially successful, MEMS devices must typically offer increased functionality or performance at a lower cost than competing technologies, and in a high-volume market. Lower-volume applications can also be successful when the use of MEMS adds high value to the final product and the market share is protected by intellectual property or unique know-how. Whether commercially successful or not, an ever-increasing range of MEMS devices and technologies is being explored. This will contribute to our MEMS toolbox and enable novel solutions to our greatest challenges, ranging from nano to macro scales and spanning the physical, chemical, and biological worlds.

REFERENCES

1. Got MEMS? 2003 Industry Overview and Forecast, In-Stat/MDR, July 2003.
2. M. Madou, Fundamentals of Microfabrication, CRC Press, New York, 1997.
3. S. Wolf and R. N. Tauber, Silicon Processing for the VLSI Era—Vol. 1: Process Technology, 2nd ed., Lattice Press, Sunset Beach, CA, 2000.
4. O. M. Jadaan, N. N. Nemeth, J. Bagdahn, and W. N. Sharp Jr., "Probabilistic Weibull behavior and mechanical properties of MEMS brittle materials," J. Materials Sci., 38:4087–4113 (2003).
5. ADXL capacitive accelerometer, Analog Devices Inc., Norwood, MA.
6. polyMUMPs multiuser polysilicon surface micromachined process, MEMSCap, Bernin, France.
7. SUMMIT V polysilicon surface micromachined process, MEMX, Albuquerque, New Mexico.
8. L. Hornbeck, "Digital Light Processing and MEMS: Timely Convergence for a Bright Future," Proc. SPIE, 2639:2, Micromachining and Microfabrication Process Technology (1995) (www.dlp.com).
9. A. A. Yasseen, C. A. Zorman, and M. Mehregany, "Surface Micromachining of Polycrystalline SiC Films," IEEE/ASME J. Microelectromechanical Systems, 8(3):237–242 (1999).
10. G. Fedder et al., "Laminated High-Aspect-Ratio Microstructures in a Conventional CMOS Process," Sensors & Actuators A, 57(2):103–110 (1996).
11. A. A. Ayón et al., "Characterization of a time multiplexed inductively coupled plasma etcher," J. Electrochem. Soc., 146(1) (1999).
12. L. G. Fréchette et al., "High-Speed Microfabricated Silicon Turbomachinery and Fluid Film Bearings," J. Microelectromechanical Systems, 14(1):141–152 (2005).
13. Y. Lu and C. J. Kim, "Micro-finger Articulation by Pneumatic Parylene Balloons," Proc. 12th Int'l Conf. on Solid State Sensors, Actuators and Microsystems, Boston, June 8–12, 2003.
14. Y. Xia and G. M. Whitesides, "Soft Lithography," Annu. Rev. Mater. Sci., 28:153–184 (1998).
15. H. Lorenz, M. Despont, P. Vettiger, and P. Renaud, "Fabrication of photoplastic high-aspect ratio microparts and micromolds using SU-8 UV resist," Microsystem Technologies, 4:143–146 (1998).

16. H. Guckel, "High-Aspect-Ratio Micromachining Via Deep X-Ray Lithography," Proc. IEEE, 86(8):1586–1593 (1998).
17. S. D. Senturia, Microsystem Design, Kluwer Academic Publishers, Norwell, 2001.
18. S. D. Senturia, "CAD Challenges for Microsensors, Microactuators, and Microsystems," Proc. IEEE, 86(8):1611–1626 (1998).
19. G. T. A. Kovacs, Micromachined Transducers Sourcebook, WCB McGraw-Hill, New York, 1998.
20. Kulite Pressure Transducer Handbook, Kulite Semiconductor Products, Inc., Leonia, NJ.
21. O. Solgaard, F. S. A. Sandejas, and D. M. Bloom, "Deformable grating optical modulator," Optics Letters, 17:688–690 (1992).
22. G. E. Karniadakis and A. Beskok, Micro Flows: Fundamentals and Simulation, Springer, New York, 2002.
23. D. G. Cahill, W. K. Ford, K. E. Goodson, G. D. Mahan, A. Majumdar, H. J. Maris, R. Merlin, and S. R. Phillpot, "Nanoscale Thermal Transport," J. Applied Physics, 93:793–818 (2003).
24. V. N. Constantinescu, Laminar Viscous Flow, Springer, New York, 1995.
25. W. M. Lai, D. Rubin, and E. Krempl, Introduction to Continuum Mechanics, Butterworth–Heinemann, New York, 1996.

Scientific Journals
Journal of Microelectromechanical Systems, ASME/IEEE.
Sensors and Actuators, Elsevier.
Lab on a Chip, RSC Publishing.
Journal of Micromechanics and Microengineering, Institute of Physics.
Sensors Journal, IEEE.

20.2 INTRODUCTION TO NANOTECHNOLOGY
by Alexander Couzis

Nanotechnology focuses on the fabrication of devices with atomic- or molecular-scale precision. Molecular nanotechnology is defined as the three-dimensional positional control of molecular structure to create materials and devices to molecular precision. Devices with minimum feature sizes less than 100 nanometres (nm) are considered to be products of nanotechnology. A nanometre is one billionth of a metre (10⁻⁹ m), that is, about 1/80,000 of the diameter of a human hair, or 10 times the diameter of a hydrogen atom. The nanometre is the unit of length that is generally most appropriate for describing the size of single molecules. The nanoscale marks the interface between the classical and quantum-mechanical worlds. Nanoscience is an interdisciplinary field that encompasses physics, biology, engineering, chemistry, and computer science, among others, and the prefix nano appears with increasing frequency in scientific journals and the news. As we increase our ability to fabricate computer chips with smaller features and to cure disease at the molecular level, nanotechnology becomes increasingly relevant. Scientists and engineers believe that the fabrication of nanomachines, nanoelectronics, and other nanodevices will help solve numerous problems faced by mankind today related to energy, health, and materials development. Nanotechnology, currently in its early stages, has already provided us with the ability to organize matter on the atomic scale, and there are already numerous products available as a direct result of our rapidly increasing ability to fabricate and characterize feature sizes less than 100 nm. Transparent nanoparticulate titania-based ultraviolet-absorbing coatings,1–3 particle-based methods for delivery of highly potent anticancer drugs,4 mirrors that don’t fog, biomimetic* coatings with a water contact angle near 180°,5,6 and gene and protein chips7,8 are some of the first manifestations of nanotechnology. Breakthroughs in computer science and medicine will be where the real potential of nanotechnology is first realized, and some of those breakthroughs are not that far off.

A BRIEF HISTORY OF NANOTECHNOLOGY

The amount of space available to us for information storage (or other uses) is enormous. Richard P. Feynman,9 in a 1959 lecture titled “There’s Plenty of Room at the Bottom,” stated that there is nothing besides our clumsy size that keeps us from using this space. At the time of that lecture, it was not possible to manipulate single atoms or molecules because they were far too small for the existing tools, yet Feynman correctly predicted that the time would come when atomically precise manipulation of matter would become possible.

* Systems that mimic biological equivalents.

Feynman also described such atomic-scale fabrication as a bottom-up approach, as opposed to the top-down approach that we are accustomed to. The current top-down method for manufacturing involves the construction of parts through methods such as cutting, carving, and molding. Using these methods, we have been able to fabricate a remarkable variety of machinery and electronic devices. However, the sizes at which we can make these devices are severely limited by our ability to cut, carve, and mold. The current set of top-down tools that allow us to achieve nanoscale-sized devices includes chemical vapor deposition, reactive ion etching, optical or deep ultraviolet (UV) or X-ray lithography, electron-beam (e-beam) lithography, ion implantation, etc. All these tools are well developed and have been used for a number of years in the microelectronics industry for the fabrication of central processing units (CPUs), memory, and photonic devices.10

Bottom-up manufacturing, on the other hand, would provide components made of single molecules, which are held together by covalent forces that are far stronger than the forces that hold together macroscale components. Furthermore, the amount of information that could be stored in devices built from the bottom up would be enormous. The key obstacle in these approaches is the degree to which we can exercise positional control over molecules and atoms. At the macroscopic scale, the idea that we can hold parts in our hands and assemble them by properly positioning them with respect to each other goes back to prehistory. At the molecular scale, the idea of holding and positioning molecules is new.

One method that provides positional control is referred to as directed self-assembly. In a self-assembly-based process, the intermolecular interactions and the experimental conditions are manipulated precisely so that the parts themselves can spontaneously assemble into the desired structure. This is a well-established and powerful method of synthesizing complex molecular structures. A basic principle in self-assembly is selective affinity: if two molecular parts have complementary shapes and charge patterns—one part has a hollow where the other has a bump, and one part has a positive charge where the other has a negative charge—then they will tend to stick together in one particular way. By stirring these parts, aided by Brownian motion especially if the parts are in solution, the parts will eventually, purely by chance, be brought together in just the right way and combine into a bigger part. This bigger part can combine in the same way with other parts, letting us gradually build a complex whole from molecular pieces by stirring them together.

Even though self-assembly is a path to the formation of nanostructures, by itself it would not be able to make the very wide range of products promised by nanotechnology. During self-assembly the molecular components move around and collide with each other in all kinds of ways, and if they aggregate in an undesirable manner, unwanted random structures will result. Many molecular components exhibit this problem, so self-assembly alone won’t work for them.


We can avoid these issues if we can hold and position the parts; this leads to the concept of directed self-assembly. When two sticky parts do come into contact with each other, they’ll do so in the right orientation because they are held in the right orientation. In short, positional control at the molecular scale should let us make things that would otherwise be difficult or impossible. Thus, in a competing approach, we develop positioning tools. One such tool is the scanning probe microscope (SPM).11–13 In SPMs, a very sharp tip is brought down to the surface of the sample being scanned. By monitoring the interaction of the tip with the surface, we can tell that the tip is approaching the surface and so can “feel” the outlines of the surface in front of us. Many different types of physical interactions with the surface are used to detect its presence (see earlier references). Some scanning probe microscopes literally push on the surface and note how hard the surface pushes back. Others connect the surface and probe to a voltage source, and measure the current flow when the probe gets close to the surface. A host of other probe-surface interactions can be measured, and are used to make different types of SPMs. In all of them, the basic idea is the same: when the sharp tip of the probe approaches the surface, a signal is generated, and that signal allows us to map out the surface being probed. The SPM can not only map a surface; in many cases the probe-surface interaction can also be used to modify the surface. This has already been used experimentally to spell out molecular words, and opportunities to modify surfaces in a controlled way are being investigated both experimentally and theoretically.14–16

Since that initial preview of nanotechnology, several methods have been developed which prove that Feynman was correct. The most notable methods are scanning probe microscopy1,14,17–22 and the corresponding advancements in supramolecular chemistry.3,23,24 Scanning probe microscopy gives us the ability to position single atoms and/or molecules in the desired place exactly as Feynman had predicted. Feynman was critical of traditional chemistry’s tedious procedures and the random nature of its reactions; recent advances in supramolecular chemistry have shown improved potential for use in nanotechnology.


SEMICONDUCTOR FABRICATION25

Computer chips (and the silicon-based transistors within them) are rapidly shrinking according to a predictable formula: transistor density increases by a factor of 4 about every 3 years (Moore’s law). According to the Semiconductor Industry Association’s extrapolation of formulas such as this one (the SIA roadmap), it is expected that the smallest features within chips will reach the size of only a few atoms in about 20 years. Since almost all of our modern computers are built from silicon semiconductor transistors patterned and carved by light (photolithography), the shrinking of circuits predicted by the SIA may not be economical in the future, despite the enormous investment made to constantly shrink and improve semiconductor electronics. Smaller circuits require less energy, operate more quickly, and, of course, take up less space; these advantages have kept Moore’s law on track since computers first became commercially available. However, simple shrinking of components cannot continue much longer. As a transistor such as the metal-oxide-semiconductor field-effect transistor (MOSFET; one of the primary components used in integrated circuits) is made smaller, both its properties and its manufacturing cost change with scale. Currently, ultraviolet light is used to create silicon circuits with a lateral resolution around 200 nm (the wavelength of the ultraviolet light). As circuits shrink below 100 nm, new fabrication methods must be created, but at increasing cost. Furthermore, once the circuit size reaches only a few nanometres, quantum effects such as tunneling must be reckoned with; this drastically affects the circuit’s ability to function normally. New methods for computer chip fabrication are constantly being sought by microchip manufacturers.
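The extrapolation above can be made concrete with a back-of-envelope sketch: if transistor density quadruples every 3 years, linear feature sizes halve every 3 years. The 200-nm starting point (the lithographic resolution mentioned above) and the 20-year horizon are illustrative assumptions, not roadmap data.

```python
# Back-of-envelope sketch of the scaling quoted above: a 4x increase in
# transistor density every 3 years implies linear feature sizes halve every
# 3 years.  Starting size and time span are illustrative assumptions.

def feature_size_nm(start_nm, years, halving_period_years=3.0):
    """Projected linear feature size after `years` of steady scaling."""
    return start_nm * 0.5 ** (years / halving_period_years)

if __name__ == "__main__":
    for t in (0, 5, 10, 15, 20):
        print(f"after {t:2d} years: {feature_size_nm(200.0, t):6.1f} nm")
    # After about 20 years the projection is roughly 2 nm, i.e., only a
    # handful of atomic spacings, which is the point of the SIA extrapolation.
```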

ALTERNATIVE ARCHITECTURES FOR NANOCOMPUTING26

In addition to single-electron transistors, two promising alternatives to traditional computers are molecular computing and quantum computing. These two methods are intimately related, yet deal with information on two different levels. Progress made in these areas during recent years has shown both to be feasible replacements for semiconductor chips.

Quantum computing seeks to write, process, and read information on the quantum level. It is at the nanoscale that quantum mechanical effects (such as wave-particle duality) become apparent. Numerous scientists are seeking ways to store information within the quantum mechanical realm. This is not a simple task. Since the laws of quantum mechanics involve unintuitive principles such as superposition and entanglement, a quantum computer would not be bound by some of the rules that limit our current computers. For instance, taking advantage of superposition allows a quantum bit of information, termed a qubit, to be used in several computations at the same time. Taking advantage of entanglement would allow information to be processed over long distances without the conventional requirement for wires.

Molecular computation is another method, complementary to quantum computing, that seeks to write, process, and read information within single molecules. One molecule that has proved most promising for molecular computation is deoxyribonucleic acid (DNA). DNA is a long polymer made of four different nucleotides that can be represented by the letters A, T, C, and G. The order or sequence of these nucleotides within DNA provides the information for making proteins, the main components of the molecular-scale machinery used by living organisms to carry out life-sustaining functions. Mathematicians have devised numerous ways to use DNA and the various proteins that come with it to carry out numerical computations that are notoriously difficult for silicon computers, namely NP-complete problems. The advantage that molecular computing using DNA has over conventional computing is that it is massively parallel: each DNA molecule can function as a single processor, which greatly improves the speed of computation for complex problems.

MEDICAL APPLICATIONS

Molecular-scale manipulation of matter is receiving abundant attention in medicine. Since all living organisms are composed of molecules, molecular biology has become the primary focus of biotechnology. Countless diseases have been cured by our ability to synthesize small molecules, commonly known as “drugs,” that interact with the protein molecules that make up the molecular machinery that keeps us alive. Our understanding of how proteins interact with DNA, phospholipids, and other biological molecules allows such progress. Living systems exist because of the vast amount of highly ordered molecular machinery from which they are built. The central principle of molecular biology states that the information required to build a living cell or organism is stored in DNA (which was described above for its use in molecular computation). This information is transferred from the DNA to the proteins by transcription and translation. These processes are all executed by various biomolecular components, mostly proteins and nucleic acids. Molecular biology and the study of these interactions have led to the discovery of numerous pharmaceuticals that have been enormously effective in combating disease. Understanding of molecular mechanisms including substrate recognition, energy expenditure, electron transport, membrane activity, and much more has greatly improved our medical technology. This is related to nanotechnology for several reasons. First of all, it illustrates the capabilities of molecular-scale machinery. The goal of nanotechnology is molecular and atomic precision; thus nanotechnology has much to learn from nature. Copying, borrowing, and learning from nature is one of the primary techniques used by nanotechnology and has been termed biomimetics.15,27–35 Secondly, our ability to design synthetic, semisynthetic, and natural molecular machinery gives us an enormous potential for curing disease and preserving life. See Refs. 36 and 37. The human body is composed of molecules, hence the availability of molecular nanotechnology will permit dramatic progress in human medical services. More than just an extension of “molecular medicine,” nanomedicine will employ molecular machine systems to


address medical problems, and will use molecular knowledge to maintain and improve human health at the molecular scale. Nanomedicine will have extraordinary and far-reaching implications for the medical profession, for the definition of disease, for the diagnosis and treatment of medical conditions including aging, and ultimately for the improvement and extension of natural human biological structure and function. MOLECULAR SIMULATION38,39

A branch of computer science that is allowing rapid progress to be made in nanotechnology is the computer simulation of molecular-scale events. Molecular simulation is able to provide and predict data about molecular systems that would normally require enormous effort to obtain physically. By organizing virtual atoms in a molecular simulation environment, one can effectively model nanoscale systems. The current limitations of molecular simulation are the accuracy of the simulation algorithms and the computation time required for complex systems. Force-field (classical) algorithms are quite efficient and are widely used today; however, such models neglect the electronic properties of the system. In order to calculate electron density, quantum mechanical models are required, and as the number of atoms and electrons increases, the computational complexity of such models quickly reaches the limits of our most modern supercomputers. Thus, as the computational abilities of our computers are improved (often with help from nanoscience), increasingly complex systems will be within the reach of molecular simulation.
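A minimal sketch of what a classical force-field calculation looks like is given below: a Lennard-Jones pair potential summed over all atom pairs. The parameter values (argon-like epsilon and sigma) and the coordinates are illustrative assumptions; real force fields add bonded terms, electrostatics, cutoffs, and neighbor lists, but the O(N²) pair loop already shows why cost grows quickly with system size.

```python
# Minimal classical force-field sketch: Lennard-Jones energy over all pairs.
# EPSILON and SIGMA are argon-like placeholder parameters (assumed values).
import itertools
import math

EPSILON = 0.0104   # eV, well depth (assumed)
SIGMA = 0.340      # nm, zero-crossing distance (assumed)

def lj_pair(r):
    """Lennard-Jones energy (eV) of one atom pair at separation r (nm)."""
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPSILON * (sr6 * sr6 - sr6)

def total_energy(coords):
    """Sum over all pairs: O(N^2) work, the source of the cost for large systems."""
    return sum(lj_pair(math.dist(p, q)) for p, q in itertools.combinations(coords, 2))

# Three argon-like atoms, coordinates in nanometres.
atoms = [(0.0, 0.0, 0.0), (0.38, 0.0, 0.0), (0.0, 0.38, 0.0)]
print(f"total potential energy: {total_energy(atoms):.4f} eV")
```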

CARBON NANOTUBES

Carbon nanotubes are one of nanotechnology’s most mentioned building blocks. With one hundred times the tensile strength of steel, thermal conductivity better than all but the purest diamond, and electrical conductivity similar to copper but with the ability to carry much higher currents, they seem to be a wonder material. Carbon nanotubes come in a variety of flavors: long, short, single-walled, multiwalled, open, closed, with different types of spiral structure, etc. Each type has specific production costs and applications. Some have been produced in large quantities for years, while others are only now being produced commercially with decent purity and in quantities greater than a few grams. Carbon nanotubes were “discovered” in 1991 by Sumio Iijima of NEC and are effectively long, thin cylinders of graphite. Graphite is made up of layers of carbon atoms arranged in a hexagonal lattice, like chicken wire (see Fig. 20.2.1). Though the chicken-wire structure itself is very strong, the layers themselves are not chemically bonded to each other but are held together by weak van der Waals forces. Taking one of these sheets of chicken wire, rolling it up into a cylinder, and joining the loose wire ends will result in a carbon nanotube. In addition to their remarkable tensile strength (usually quoted as

Fig. 20.2.1 Layer structure of graphite.

100 times that of steel at one-sixth the weight), carbon nanotubes have shown other surprising properties. They can conduct heat as efficiently as most diamond (only diamond grown by deposition from a vapor is better), conduct electricity as efficiently as copper, yet can also be semiconducting. They can produce streams of electrons very efficiently (field emission), which can be used to create light in displays for televisions or computers, or even in domestic lighting, and they can enhance the fluorescence of materials they are close to. Their electrical properties can be made to change in the presence of certain substances or as a result of mechanical stress. Nanotubes within nanotubes can act like miniature springs, and they can even be stuffed with other materials. Nanotubes and their variants hold promise for storing fuels such as hydrogen or methanol for use in fuel cells, and they make good supports for catalysts. One of the major classifications of carbon nanotubes is into singlewalled varieties (SWNTs), which have a single cylindrical wall, and multiwalled varieties (MWNTs), which have cylinders within cylinders (see Fig. 20.2.2). The lengths of both types vary greatly, depending on the way they are made, and are generally microscopic rather than nanoscopic, i.e., greater than 100 nanometres. The aspect ratio (length divided by diameter) is typically greater than 100 and can be up to 10,000. Recent developments continue apace.

Fig. 20.2.2 Computational image of single- and multiwalled nanotubes.


Although it is easier to produce significant quantities of MWNTs than SWNTs, their structures are less well understood than single-wall nanotubes because of their greater complexity and variety. Multitudes of exotic shapes and arrangements, often with imaginative names such as bamboo trunks, sea urchins, necklaces, and coils, have also been observed under different processing conditions. The variety of forms may be interesting but also has a negative side—MWNTs always (so far) have more defects than SWNTs, and these diminish their desirable properties. Many of the nanotube applications now being considered or put into practice involve multiwalled nanotubes, because they are easier to produce in large quantities at a reasonable price and have been available in decent amounts for much longer than SWNTs. In fact, one of the current major manufacturers of MWNTs, Hyperion Catalysis, does not even sell the nanotubes directly, but only premixed with polymers for composite applications. The tubes typically have 8 to 15 walls and are around 10 nm wide and 10 µm long.

Fig. 20.2.3 Representation of a multiwalled carbon nanotube.

Single-Walled Carbon Nanotubes (SWNTs)

These are the stars of the nanotube world, and are much harder to make than the multiwalled variety. The oft-quoted amazing properties generally refer to SWNTs. As previously described, they are basically tubes of graphite and are normally capped at the ends (see Fig. 20.2.2), although the caps can be removed. The caps are made by mixing in some pentagons with the hexagons and are the reason that nanotubes are considered close cousins to buckminsterfullerene, a roughly spherical molecule made of 60 carbon atoms that looks like a soccer ball and is named after the architect Buckminster Fuller. (The word fullerene is used to refer to the variety of such molecular cages, some with more carbon atoms than buckminsterfullerene, and some with fewer.) The theoretical minimum diameter of a carbon nanotube is around 0.4 nm, which is about as wide as two silicon atoms side by side; nanotubes of this size have been made. Average diameters tend to be around 1.2 nm, depending on the process used to create them. SWNTs are more pliable than their multiwalled counterparts and can be twisted, flattened, and bent into small circles or around sharp bends without breaking. Discussions of the electrical behavior of carbon nanotubes usually relate to experiments on the single-walled variety. They can be conducting, like metal (such nanotubes are often referred to as metallic nanotubes), or semiconducting. The latter property has given rise to dreams of using nanotubes to make extremely dense electronic circuitry, and the year 2005 saw major advances in creating basic electronic structures from nanotubes in the lab, from transistors up to simple logic elements. The gap between these experiments and commercial nanotube electronics is extremely wide. Multiwalled Carbon Nanotubes (MWNTs)

Multiwalled carbon nanotubes are basically concentric SWNTs— cylindrical graphitic tubes. In these more complex structures, the different SWNTs that form the MWNT may have quite different structures (length and chirality). MWNTs are typically 100 times longer than they are wide and have outer diameters mostly in the tens of nanometres.

Focus on Structure

A few key studies have explored the structure of carbon nanotubes by using high-resolution microscopy techniques. These experiments have confirmed that nanotubes are cylindrical structures based on the hexagonal lattice of carbon atoms that forms crystalline graphite. Three types of nanotubes are possible, called armchair, zigzag, and chiral nanotubes, depending on how the two-dimensional graphene sheet is “rolled up.” The different types are most easily explained in terms of the unit cell of a carbon nanotube—in other words, the smallest group of atoms that defines its structure (see Fig. 20.2.4a). The so-called chiral vector of the nanotube, Ch, is defined by Ch = na1 + ma2, where a1 and a2 are unit vectors in the two-dimensional hexagonal lattice and n and m are integers. Another important parameter is the chiral angle, which is the angle between Ch and a1. When the graphene sheet is rolled up to form the cylindrical part of the nanotube, the ends of the chiral vector meet each other. The chiral vector thus forms the circumference of the nanotube’s circular cross section, and different values of n and m lead to different nanotube structures (see Fig. 20.2.5). Armchair nanotubes are formed when n = m and the chiral angle is 30°. Zigzag nanotubes are formed when either n or m is zero and the chiral angle θ, as shown in Fig. 20.2.4a, is 0°. All other nanotubes, with chiral angles intermediate between 0° and 30°, are known as chiral nanotubes.

The properties of nanotubes are determined by their diameter and chiral angle, both of which depend on n and m. The diameter dt is simply the length of the chiral vector divided by π, and we find that dt = (√3/π)aC–C(m² + mn + n²)^1/2, where aC–C is the distance between neighboring carbon atoms in the flat sheet. In turn, the chiral angle is given by θ = tan⁻¹[√3m/(2n + m)]. Measurements of the nanotube diameter and the chiral angle have been made with scanning tunneling microscopy and transmission electron microscopy. However, it remains a major challenge to determine dt and θ at the same time as measuring a physical property such as resistivity. This is partly because the nanotubes are so small, and partly because the carbon atoms are in constant thermal motion. Also, the nanotubes can be damaged by the electron beam in the microscope.

Fig. 20.2.4 Basics of structure: (a) chiral vector Ch, unit vectors a1 and a2, and chiral angle θ on the graphene lattice; (b) lattice-point map of (n, m) values, with the zigzag and armchair directions indicated.
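The two relations above are easy to evaluate for any (n, m) pair. The sketch below does so and classifies the tube type; the carbon-carbon spacing aC–C = 0.142 nm is an assumed value (not quoted in the text) that simply sets the absolute scale of the diameter.

```python
# Diameter, chiral angle, and type of an (n, m) nanotube from the relations above.
import math

A_CC = 0.142  # nm, graphene C-C bond length (assumed)

def nanotube_geometry(n, m):
    """Return (diameter in nm, chiral angle in degrees, structural type)."""
    diameter = math.sqrt(3.0) / math.pi * A_CC * math.sqrt(m * m + m * n + n * n)
    theta = math.degrees(math.atan2(math.sqrt(3.0) * m, 2.0 * n + m))
    if n == m:
        kind = "armchair"
    elif n == 0 or m == 0:
        kind = "zigzag"
    else:
        kind = "chiral"
    return diameter, theta, kind

for n, m in [(10, 10), (17, 0), (12, 6)]:
    d, theta, kind = nanotube_geometry(n, m)
    print(f"({n:2d},{m:2d}): d = {d:.2f} nm, theta = {theta:4.1f} deg, {kind}")
```

For example, the (10, 10) armchair tube comes out at about 1.36 nm in diameter with a 30° chiral angle, consistent with the ideal (10, 10) diameter quoted later in this section.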

How to Make Nanotubes

When a Rice University group found a relatively efficient way to produce bundles of ordered SWNTs in 1996, they opened new opportunities for quantitative experimental studies on carbon nanotubes. These ordered nanotubes are prepared by the laser vaporization of a carbon target in a furnace at 1,200°C. A cobalt-nickel catalyst helps the growth of the nanotubes, presumably because it prevents the ends from being “capped” during synthesis, and about 70 to 90 percent of the carbon target can be converted to SWNTs. By using two laser pulses 50 ns apart, growth conditions can be maintained over a larger volume and for a longer time. This scheme provides more uniform vaporization and better


control of the growth conditions. Flowing argon gas sweeps the nanotubes from the furnace to a water-cooled copper collector just outside of the furnace. At the University of Montpellier (France), Catherine Journet, Patrick Bernier, and colleagues later developed a carbon-arc method to grow similar arrays of SWNTs. In this case, ordered nanotubes were also produced from ionized carbon plasma, and joule heating from the discharge generated the plasma. Several other groups are now making bundles of SWNTs, using variants of these two methods. However, the Rice group has had the largest impact on the field, largely because it was the first to develop an efficient synthesis method and has formed many international collaborations to measure the properties of SWNTs. In a scanning electron microscope, the nanotube material produced by either of these methods looks like a mat of carbon ropes. The ropes are between 10 and 20 nm across and up to 100 µm long. When examined in a transmission electron microscope, each rope is found to consist of a bundle of SWNTs aligned along a single direction. X-ray diffraction, which views many ropes at once, also shows that the diameters of the SWNTs have a narrow distribution with a strong peak. For the synthesis conditions used by the Rice and Montpellier groups, the diameter distribution peaked at 1.38 ± 0.02 nm, very close

to the diameter of an ideal (10, 10) nanotube. X-ray diffraction measurements by John Fischer and coworkers at the University of Pennsylvania showed that bundles of SWNTs form a two-dimensional triangular lattice. The lattice constant is 1.7 nm and the tubes are separated by 0.315 nm at closest approach, which agrees with prior theoretical modeling by Jean-Christophe Charlier and others at the University of Louvain-la-Neuve (Belgium). Chemical vapor deposition (CVD) can be used to make either SWNTs or MWNTs, and produces them by heating a precursor gas and flowing it over a metallic or oxide surface with a prepared catalyst (typically a nickel, iron, molybdenum, or cobalt catalyst). In several gas-phase processes, such as a high-pressure carbon monoxide process, both SWNTs and MWNTs can be produced by using a reaction that takes place on a catalyst flowing in a stream, rather than bonded to a surface. While MWNTs do not need a catalyst for growth, SWNTs can be grown only with a catalyst. However, the detailed mechanisms responsible for growth are not yet well understood. Experiments show that the width and peak of the diameter distribution depend on the composition of the catalyst, the growth temperature, and various other growth conditions. Great efforts are now being made to produce narrower


diameter distributions with different mean diameters, and to gain better control of the growth process. From an applications point of view, the emphasis will be on methods that produce high yields of nanotubes at low cost, and some sort of continuous process will probably be needed to grow carbon nanotubes on a commercial scale.

Fig. 20.2.5 Schematic representation of rolling graphite to create a carbon nanotube.

Elastic Nanotube Properties

Determining the elastic properties of SWNTs has been one of the most disputed areas of nanotube study in recent years. On the whole, SWNTs are stiffer than steel and are resistant to damage from physical forces. Pressing on the tip of a nanotube will cause it to bend without damage; when the force is removed, the tip recovers its original state. Quantifying these effects, however, is difficult, and an exact numerical value has not been agreed on. The Young’s modulus (elastic modulus) of SWNTs lies close to 1 TPa, and the maximum tensile strength is close to 30 GPa, but the various studies over the years have shown a large variation in the values reported. In 1996, researchers at NEC in Princeton and the University of Illinois measured the average modulus to be 1.8 TPa. This was measured by first allowing a tube to stand freely and then taking a microscopic image of its tip; the modulus is calculated from the amount of blur seen in the image at different temperatures. In 1997, G. Gao, T. Cagin, and W. Goddard III reported three values of the Young’s modulus, to five significant figures, that depended on the chiral vector: a (10, 10) armchair tube had a modulus of 640.30 GPa, a (17, 0) zigzag tube 648.43 GPa, and a (12, 6) tube 673.94 GPa. These values were calculated from the second derivatives of the potential energy. In 1998, Treacy reported an elastic modulus of 1.25 TPa using the same basic method used 2 years earlier. This compared well with the modulus of MWNTs (1.28 TPa) found by Wong in 1997; using an atomic force microscope (AFM), they pushed the unanchored end of a free-standing nanotube out of its equilibrium position and recorded the force that the nanotube exerted back onto the tip. In 1999, E. Hernández and Angel Rubio showed, using tight-binding calculations, that the Young’s modulus depends on the size and chirality of the SWNT, ranging from 1.22 TPa for the (10, 0) and (6, 6) tubes to 1.26 TPa for the large (20, 0) SWNTs. However, using first-principles calculations, they obtained a value of 1.09 TPa for a generic tube.

This evidence would lead one to assume that the diameter and shape of the nanotube are the determining factors for its elastic modulus. However, working with different MWNTs in 1999, Forró and coworkers noted that AFM measurements of the modulus did not depend strongly on diameter, contrary to what had been suggested. Instead, they argued that the modulus of MWNTs correlates with the amount of disorder in the nanotube walls. The evidence did show that the value for SWNTs depends on diameter: an individual tube had a modulus of about 1 TPa, while bundles (or ropes) 15 to 20 nm in diameter had a modulus of about 100 GPa. It has been suggested that the controversy over the value of the modulus is due to the interpretation of the thickness of the nanotube wall. If the tube is considered to be a solid cylinder, it has a lower Young’s modulus; if it is considered to be hollow, the modulus is higher, and the thinner the wall is assumed to be, the higher the modulus becomes.
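The wall-thickness point can be made concrete: what an experiment essentially measures is an axial stiffness, and the modulus reported depends on the cross-sectional area assumed when that stiffness is converted to a stress. The numbers in the sketch below (tube diameter, measured stiffness, and the two wall thicknesses) are illustrative assumptions, not data from the studies cited above.

```python
# Reported modulus vs. assumed cross section, for a fixed "measured" axial
# stiffness E*A.  All numerical values here are illustrative assumptions.
import math

d = 1.4e-9          # m, assumed tube diameter
EA = 1.5e-6         # N, assumed measured axial stiffness (modulus times area)

areas = {
    "solid cylinder":             math.pi * d ** 2 / 4.0,
    "hollow shell, t = 0.34 nm":  math.pi * d * 0.34e-9,   # graphite interlayer spacing
    "hollow shell, t = 0.066 nm": math.pi * d * 0.066e-9,  # much thinner assumed wall
}

for model, area in areas.items():
    modulus = EA / area  # Pa
    print(f"{model:28s} A = {area:.2e} m^2   E = {modulus / 1e12:.2f} TPa")

# For this diameter the 0.34-nm shell has nearly the same area as the solid
# cylinder, so the large differences in reported modulus come mainly from
# assuming a thinner wall, as discussed in the text.
```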

Nanotube Applications

The materials markets are already seeing applications for composites based on multi-walled carbon nanotubes and nanofibers. In many ways this is an old market; carbon fibers have been around commercially for a couple of decades. The benefits of the new materials are the same as those of carbon fibers, but better, given that the main properties considered are strength and conductivity. Carbon fibers are quite large, typically about a tenth of a millimetre in diameter, and blacken the material to which they are added. MWNTs can offer the same improvements in strength to a polymer composite without the blackening and often with a smaller amount of added material (a filler). The greater aspect ratio (length compared to diameter) of the newer materials can make plastics conducting with a smaller filler load, one significant application being electrostatic painting of composites in products such as car parts. Additionally, the surface of the composite is smoother, which benefits more refined structures such as platens for computer disk drives. When thinking about structural applications such as these, it should be remembered that, in general, as the fibers get smaller, the number of defects decreases, in a progression toward the perfection of the single-walled nanotube. The inverse progression is seen in terms of ease of manufacturing—the more perfect, and thus more structurally valuable, the material, the harder it is to produce in quantity at a good price. This relationship is not written in stone—there is no reason that near-perfect SWNTs should not be producible cheaply and in large quantities. When this happens, and it might not be too far off, the improvements seen in the strength-to-weight ratios of composite materials could soar, impacting a wide variety of industries from sports equipment to furniture, from the construction industry to kitchenware, and from automobiles to airplanes and spacecraft (the aerospace industry is probably the one that stands to reap the greatest rewards). In fact, a carbon nanotube composite has recently been reported that is 6 times stronger than conventional carbon fiber composites. Just because the perfect nanotube is 100 times stronger than steel at a sixth of the weight doesn’t mean one is going to be able to achieve those properties in a bulk material containing them. The chicken wire arrangement that makes up the layers in graphite does not adhere well to other materials, which is why graphite is used in lubricants and pencils. The same holds true of nanotubes—they are quite insular in nature, preferring not to interact with other materials. To capitalize on their strength in a composite, they need to latch on to the surrounding polymer, which they are not inclined to do (blending a filler in a polymer is difficult even without these issues—it took a decade to perfect this for the new nanoclay polymers now hitting the markets). One way of making a nanotube interact with something else, such as a surrounding polymer, is to modify it chemically. This is called functionalization and is being explored not just for composite applications but also for a variety of others, such as biosensors. For structural applications, the problem is that functionalization can reduce the valuable qualities of the nanotubes that one is trying to capitalize on.


Of course, in theory there is no need to mix the nanotubes with another material. To make superstrong cables, for example, the best solution would be to use bundles of sufficiently long nanotubes with no other material added. For this reason, one of the dreams of nanotube production is to be able to spin them, like thread, to indefinite lengths. Such a technology would have applications from textiles (the U.S. military is in fact investigating the use of nanotubes for bulletproof vests) to the “space elevator.” The space elevator concept, which sounds like something straight out of science fiction, involves anchoring one end of a huge cable to the earth and another to an object in space. The taut cable so produced could then support an elevator that would take passengers and cargo into orbit for a fraction of the cost of the rockets used today. Sounds too far out? In fact, it has been established by NASA as feasible in principle, given a material as strong as SWNTs.

The materials market is a big one, and there are other large markets, which we will come to, but smaller ones exist too. Nanotubes are already being shipped on the tips of atomic-force microscope probes to enhance atomic-resolution imaging. Nanotube-based chemical sensors and biosensors should be on the market soon (they face stiff competition from other areas of nanotechnology). The thermal conductivity of nanotubes shows promise in applications from cooling integrated circuits to aerospace materials. Electron beam lithography, which is a method of producing nanoscale patterns in materials, may become considerably cheaper thanks to the field-emission properties of carbon nanotubes. Recent developments show promise of the first significant change in X-ray technology in a century. Entering more speculative territory, nanotubes may one day be used as nanoscale needles that can inject substances into, or sample substances from, individual cells, or they could be used as appendages for miniature machines (the tubes in multiwalled nanotubes slide over each other like graphite, but have preferred locations that they tend to spring back to).

Big markets, apart from materials, in which nanotubes may make an impact include flat panel displays (near-term commercialization is promised here), lighting, fuel cells, and electronics. This last is one of the most talked-about areas but one of the farthest from commercialization, with one exception, this being the promise of huge computer memories (more than 1,000 times greater in capacity than what you probably have in your machine now) that could, in theory, put a lot of the $40 billion magnetic disk industry out of business. Companies like to make grand claims, however, and in this area there is not just the technological hurdle to face but the even more daunting economic one, a challenge made harder by a host of competing technologies. Despite an inevitable element of hype, the versatility of nanotubes does suggest that they might one day rank as one of the most important materials ever discovered. In years to come they could find their way into myriad materials and devices around us and quite probably make some of the leaders in this game quite rich.

REFERENCES

1. M. Bartz et al., “Stamping of monomeric SAMs as a route to structured crystallization templates: Patterned titania films,” Chemistry—A European Journal, 6(22):4149–4153 (2000). 2. B. T.
Holland et al., “Synthesis of highly ordered, three-dimensional, macroporous structures of amorphous or crystalline inorganic oxides, phosphates, and hybrid composites,” Chemistry of Materials, 11(3):795–805 (1999). 3. J. D. Brennan, R. S. Brown, and U. J. Krull, “Transduction of analytical signals by supramolecular assemblies of amphiphiles containing heterogeneously distributed fluorophores,” Analytical Sciences, 14(1):141–149 (1998). 4. A. Miwa et al., “Development of novel chitosan derivatives as micellar carriers of taxol,” Pharmaceutical Research, 15(12):1844–1850 (1998). 5. T. Onda et al., “Super-Water-Repellent Fractal Surfaces,” Langmuir, 12(9): 2125–2127 (1996). 6. E. A. Vogler, “Structure and reactivity of water at biomaterial surfaces.” Advances in Colloid and Interface Science, 74(1–3):69–117 (1998). 7. M. M. Ngundi et al., “Array Biosensor for Detection of Ochratoxin A in Cereals and Beverages,” Analytical Chemistry, 77(1):148–154 (2005). 8. F. X. Zhou, J. Bonin, and P. F. Predki, “Development of functional protein microarrays for drug discovery: Progress and challenges,” Combinatorial Chemistry & High Throughput Screening, 7(6):539–546 (2004).


9. R. P. Feynman, “There’s Plenty of Room at the Bottom,” in American Physical Society. December 29, 1959, California Institute of Technology. 10. S. G. Wolf and R. N. Tauber, Silicon Processing for the VLSI Era, Vol. 1. Lattice Press, New York, 1987. 11. E. Occhiello, G. Mara, and F. Garbassi, “STM and AFM microscopies: new techniques for the study of polymer surfaces,” Polymer News, 14(7):198–200 (1989). 12. C. A. J. Putman et al., “Tapping Mode Atomic Force Microscopy in Liquids,” Applied Physics Letters, 64:2454–2456 (1994). 13. J. Schneider et al., “Atomic force microscope image contrast mechanisms on supported lipid bilayers,” Biophysical Journal, 79(2):1107–1118 (2000). 14. N. A. Amro, S. Xu, and G. Y. Liu, “Patterning surfaces using tip-directed displacement and self-assembly,” Langmuir, 16(7):3006–3009 (2000). 15. H. Colfen and L. M. Qi, “A systematic examination of the morphogenesis of calcium carbonate in the presence of a double-hydrophilic block copolymer,” Chemistry—A European Journal, 7(1):106–116 (2001). 16. G. Liu, S. Xu, and Y. Qian, “Nanofabrication of Self Assembled Monolayers Using Scanning Probe Lithography,” Acc. Chem. Res., 33:457–466 (2000). 17. A. R. Burns et al., “Molecular Level Friction As Revealed with a Novel Scanning Probe,” Langmuir, 15: 2922–2930 (1999). 18. P. Cacciafesta et al., “Human plasma fibrinogen adsorption on ultraflat titanium oxide surfaces studied with atomic force microscopy,” Langmuir, 16(21):8167–8175 (2000). 19. M. Callaway, D. J. Tildesley, and N. Quirke, “Recording of Surface Phases during Atomic Force Microscopy: A Simulation Study,” Langmuir, 10: 3350–3356 (1994). 20. J. J. Davis, H. A. O. Hill, and A. M. Bond, “The application of electrochemical scanning probe microscopy to the interpretation of metalloprotein voltammetry,” Coordination Chemistry Reviews, 200:411–442 (2000). 21. J. A. DeRose et al., “Electrochemical deposition of nucleic acid polymers for scanning probe microscopy,” J. Vac. Sci. Technol. B, 9(2, Pt. 2):1166–1170. 22. A. C. Dumaual, L. J. Jenski, and W. Stillwell, “Liquid crystalline/gel state phase separation in docosahexaenoic acid-containing bilayers and monolayers,” Biochimica et Biophysica Acta—Biomembranes, 1463(2): 395–406 (2000). 23. P. L. Anelli et al., “Towards synthetic supramolecular polymers,” Polym. Prepr. (Am. Chem. Soc., Div. Polym. Chem.), 32(1):405–406 (1991). 24. S. I. Stupp et al., “Supramolecular Materials: Self-Organized Nanostructures,” Science, 276(5311): 384–389 (1997). 25. R. C. Jaeger, Introduction to Microelectronic Fabrication, 2nd ed, Prentice Hall, Upper Saddle River, N.J., 2002. 26. S. K. Shukla and R. I. Bahar, Nano, Quantum, and Molecular Computing: Implications to High Level Design and Validation, Kluwer Academic Publishers, Boston, 2004. 27. P. Calvert and P. Rieke, “Biomimetic Mineralization in and on Polymers,” Chemistry of Materials, 8(8):1715–1727 (1996). 28. N. Costa and P. M. Maquis, “Biomimetic processing of calcium phosphate coating,” Medical Engineering & Physics, 20(8):602–606 (1998). 29. N. U. Dan, “Synthesis of hierarchical materials,” Trends in Biotechnology, 18(9):370–374 (2000). 30. A. J. Golumbfskie, V. S. Pande, and A. K. Chakraborty, “Simulation of Biomimetic Recognition between Polymers and Surfaces,” Proceedings of the National Academy of Sciences of the United States of America, 96(21): 11707–11712 (1999). 31. A. H. Heuer et al., “Innovative materials processing strategies: A biomimetic approach,” Science, 255:1098–1105 (1992). 32. T. Hirai, S. Hariguchi, and I.
Komasawa, “Biomimetic Synthesis of Calcium Carbonate Particles in a Pseudovesicular Double Emulsion,” Langmuir, 13: 6650–6653 (1997). 33. G. F. Xu et al., “Biomimetic synthesis of macroscopic-scale calcium carbonate thin films. Evidence for a multistep assembly process,” Journal of the American Chemical Society, 120(46):11977–11985 (1998). 34. M. Yada et al., “Biomimetic Surface Patterns of Layered Aluminum Oxide Mesophases Templated by Mixed Surfactant Assemblies,” Langmuir, 13(20): 5252–5257 (1997). 35. J. A. Zasadzinski, E. Kisak, and C. U. Evans, “Complex vesicle-based structures,” Current Opinion in Colloid & Interface Science, 6(1):85–90 (2001). 36. R. A. Freitas, Nanomedicine, Vol. 1, Landes Bioscience, Austin, TX, 1999. 37. J. S. Hall, Nanofuture: What’s Next for Nanotechnology, Prometheus Books, Amherst, NY, 2005. 38. S. Bandyopadhyay, et al., “Surfactant aggregation at a hydrophobic surface,” Journal of Physical Chemistry B, 102(33):6318–6322 (1998). 39. M. R. Zachariah and M. J. Carrier, “Molecular dynamics computation of gasphase nanoparticle sintering: A comparison with phenomenological models,” Journal of Aerosol Science, 30(9):1139–1151 (1999).

20.3 FERROELECTRICS/PIEZOELECTRICS AND SHAPE MEMORY ALLOYS
by Jacqueline J. Li

FERROELECTRICS/PIEZOELECTRICS

Ferroelectrics are materials exhibiting a spontaneous dielectric polarization and hysteresis effects in the relation between dielectric displacement and electric field. This ferroelectric behavior is observed only below a transition temperature Tc, the so-called Curie temperature. Above the Curie temperature, the materials are no longer ferroelectric and show normal dielectric behavior. Ferroelectric phenomena were first observed in 1920 by Valasek in Rochelle salt [304-59-6], KNaC4H4O6·4H2O. In his description, Valasek called attention to the analogy with ferromagnetic phenomena: although ferro is a prefix associated with iron-containing materials, and ferromagnetic materials generally contain iron, the hysteresis in the relation between dielectric displacement and electric field is remarkably similar to the induction versus magnetic field relation for ferromagnetic materials. Ferroelectrics are thus named after the similar hysteresis loop and rarely contain iron as a significant constituent. Table 20.3.1 (Table I-1 in “Ferroelectric Crystals,” Franco Jona and G. Shirane, Dover, 1993) provides a partial list of ferroelectric crystals arranged approximately in chronological order of their discovery.

Table 20.3.1  Partial List of Ferroelectric Crystals

Name and chemical formula | Curie temperature, °C | Spontaneous polarization, 10⁻⁶ C/cm² | Year in which reported
Rochelle salt, NaKC4H4O6·4H2O | 23 | 0.25 | 1921
Lithium ammonium tartrate, Li(NH4)C4H4O6·H2O | -170 | 0.20 | 1951
Potassium dihydrogen phosphate, KH2PO4 | -150 | 4.0 | 1935
Potassium dideuterium phosphate, KD2PO4 | -60 | 5.5 | 1942
Potassium dihydrogen arsenate, KH2AsO4 | -177 | 5.0 | 1938
Barium titanate, BaTiO3 | 120 | 26.0 | 1945
Lead titanate, PbTiO3 | 490 | >50 | 1950
Potassium niobate, KNbO3 | 415 | 30 | 1951
Potassium tantalate, KTaO3 | -260 | ? | 1951
Cadmium (pyro) niobate, Cd2Nb2O7 | -85 | ~10 | 1952
Lead (meta) niobate, PbNb2O6 | 570 | ? | 1953
Guanidinium aluminium sulfate hexahydrate, C(NH2)3Al(SO4)2·6H2O | None | 0.35 | 1955
Methylammonium aluminum alum, CH3NH3Al(SO4)2·12H2O | -96 | 1.0 | 1956
Ammonium sulfate, (NH4)2SO4 | -50 | 0.25 | 1956
Triglycine sulfate, (NH2CH2COOH)3·H2SO4 | 49 | 2.8 | 1956
Colemanite, CaB3O4(OH)3·H2O | -7 | 0.65 | 1956
Dicalcium strontium propionate, Ca2Sr(CH3CH2COO)6 | 8 | 0.3 | 1957
Lithium acid selenite, LiH3(SeO3)2 | None | 10.0 | 1959
Sodium nitrite, NaNO2 | 160 | 7.0 | 1958

Most ferroelectric materials are ceramics; some polymers also exhibit ferroelectric behavior. At present, the number of known pure compounds with ferroelectric properties is about 200; they are classified into about 40 families. Examples of typical ferroelectrics are: potassium dihydrogen phosphate, KH2PO4 (commonly abbreviated as KDP), and a number of isomorphous phosphates and arsenates; barium titanate, BaTiO3, and other isomorphous double oxides; Rochelle salt (sodium potassium tartrate tetrahydrate, NaKC4H4O6·4H2O); and some other isomorphous crystals. The most widely used ferroelectrics now are lead zirconate titanate solid solutions, Pb(Zr-Ti)O3 (commonly known as PZT). Before we introduce the general properties and structures of ferroelectric materials, we focus on barium titanate, which is one of the simplest ferroelectric materials.

Barium Titanate

In order to better understand the ferroelectric properties, let’s concentrate on barium titanate (BaTiO3), which is the most extensively investigated ferroelectric material and whose structure is far simpler than that of any other ferroelectric. Its crystal structure is of the perovskite type.


Fig. 20.3.1 (a) Cubic structure of a BaTiO3 unit cell above the Curie point of 120°C. (b) Tetragonal structure below 120°C. (c) 90° reorientation under a compressive stress or a perpendicular electric field.

This structure is common to a large family of compounds with the general formula ABO3, whose representative in nature is the mineral CaTiO3, called perovskite. Figure 20.3.1a shows a unit cell of the cubic perovskite-type lattice of BaTiO3. The Curie temperature of BaTiO3 is 120°C. Above 120°C, BaTiO3 remains in the centrosymmetric cubic structure and nonpolar phase (nonpiezoelectric, or paraelectric). Upon cooling just below 120°C, BaTiO3 undergoes a phase transformation to a tetragonal modification, as shown in Fig. 20.3.1b. In this process, the positively charged titanium ion moves from its original central position to a position aligned along the polar axis (or c axis) and, together with the negatively charged oxygen ions, forms a dipole moment. The total dipole moment per unit volume of material defines the spontaneous polarization P. Between 5°C and 120°C, the crystal has a stable tetragonal structure and the lattice parameters are a = 0.3992 nm and c = 0.4032 nm. Such a poled crystal can undergo a 180° domain switch under a reversed electric field, or a 90° switch under a compressive stress or under an electric field perpendicular to its original polar direction, as shown in Fig. 20.3.1c.
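As a quick check on the quoted lattice parameters, the spontaneous tetragonal distortion of the unit cell can be computed directly; the sketch below is simple arithmetic on the values above and is included only to give a feel for the size of the lattice strain.

```python
# Tetragonal distortion implied by the BaTiO3 lattice parameters quoted above
# (a = 0.3992 nm, c = 0.4032 nm between 5 and 120 degC).
a, c = 0.3992, 0.4032                                   # nm
print(f"c/a ratio        : {c / a:.4f}")                # about 1.010
print(f"axial distortion : {(c - a) / a * 100:.2f} %")  # about 1 percent
```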

Hysteresis Loop

The ferroelectric material can have zero overall polarization under zero applied electric field, due to a random orientation of microscopic-scale domains, regions in which the c axes of adjacent unit cells have a common direction. If we first apply a small electric field, we will have only a linear relationship between polarization P and the electric field E, because the field is not large enough to switch any of the domains and the crystal will behave like a normal paraelectric. Figure 20.3.2 shows a schematic plot of P versus E. When the applied electric field increases, dipole orientations parallel to the applied field direction are favored. In this case, domains with such orientations “grow” at the expense of other, less favorably oriented, ones. As the applied field increases further, more domains switch over to the favored direction and the polarization increases rapidly, until it reaches a state in which all the domains are aligned with the applied electric field. This is a state of saturation, and the crystal now consists of a single domain. If we now decrease the applied field to zero, the polarization will generally not return to zero and some of the domains will remain aligned in

the applied field direction. The polarization at this point is called the remanent polarization Pr. The extrapolation of the linear portion of the top curve back to the polarization axis represents the value of the spontaneous polarization Ps. To reduce the overall polarization of the ferroelectric material to zero, an electric field in the opposite direction must be applied. The value of the electric field required to reduce the polarization P to zero is called the coercive field Ec. Further increase of the field in the opposite direction will, of course, cause complete alignment of the dipoles in this direction, and the cycle can be completed by reversing the field direction once again. From Fig. 20.3.2, the hysteresis loop of the polarization P versus the applied field E is obtained, which is the most important characteristic of a ferroelectric material. The essential feature of a ferroelectric is not that it has a spontaneous polarization, but rather that this spontaneous polarization can be reversed by means of an electric field. Electromechanical Coupling Behavior

Because of the nature of polarization and the reorientation or domain switching of a ferroelectric under specific electric and/or mechanical loading, a coupling between electrical and mechanical behavior can be observed. When a ferroelectric material is subjected to an electric field, it can develop strains of up to 0.2 percent. Figure 20.3.3 shows the relation between the axial strain and the electric field of a PLZT, which gives a typical butterfly-shaped strain versus electric field curve. On the other hand, under applied compressive stress, a ferroelectric produces a measurable voltage change. Piezoelectricity

Although ferroelectricity is an intriguing phenomenon, it does not have the practical importance that ferromagnetic materials display (in areas such as magnetic storage of information). The most common applications of ferroelectrics stem from a closely related phenomenon, piezoelectricity. The prefix piezo comes from the Greek word for pressure. Piezoelectricity is the ability of certain crystalline materials to develop an electric charge proportional to a mechanical stress. The piezoelectric phenomenon was first discovered by J. Curie and P. Curie in 1880. The direct piezoelectric effect is that electric polarization is produced by mechanical stress. Closely related to it is the converse effect, in which


Fig. 20.3.2 Schematic diagram of ferroelectric hysteresis loop.


Fig. 20.3.3 The butterfly-shaped strain versus electric field relations of a PLZT.

a crystal becomes strained when an electric field is applied. We can write the two effects that occur in piezoelectrics as

Direct piezoelectric effect:      E = gσ      (20.3.1)

Converse piezoelectric effect:    ε = dE      (20.3.2)

where E is the electric field (V/m), σ is the applied stress (Pa), ε is the strain, and g and d are piezoelectric constants. The g constant is related to the d constant by the permittivity: g = d/εr = d/Kε0, where εr is the permittivity and K is the relative dielectric constant, which equals the ratio between the charge stored on an electroded slab of material brought to a given voltage and the charge stored on a set of identical electrodes separated by vacuum (K = εr/ε0). Some piezoelectric materials and their typical values for d are listed in Table 20.3.2.
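Equations (20.3.1) and (20.3.2) are easy to apply numerically. The sketch below uses a d constant of the order listed for PZT in Table 20.3.2; the relative dielectric constant K and the applied stress and field are assumed illustrative values, so the resulting numbers are examples only.

```python
# Applying Eqs. (20.3.1) and (20.3.2) with order-of-magnitude PZT values.
# d is of the order of the PZT entries in Table 20.3.2; K is an assumed value.
EPS0 = 8.854e-12           # F/m, permittivity of free space
d = 300e-12                # m/V (= C/N), of the order of the PZT range in Table 20.3.2
K = 1500                   # assumed relative dielectric constant for a PZT ceramic
g = d / (K * EPS0)         # V*m/N, from g = d/(K*eps0)

stress = 10e6              # Pa, example applied stress
field_out = g * stress     # V/m, direct effect, Eq. (20.3.1)

field_in = 1e6             # V/m, example applied field
strain = d * field_in      # converse effect, Eq. (20.3.2)

print(f"g = {g:.3e} V*m/N")
print(f"direct effect:   E = {field_out:.3e} V/m for a 10-MPa stress")
print(f"converse effect: strain = {strain:.3e} for a 1-MV/m field")
```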

All ferroelectric materials exhibit piezoelectric behavior within a limited elastic range, but not all pyroelectrics that possess a spontaneous polarization are ferroelectric in the sense of possessing a reorientable electric moment. While ferroelectric materials exhibit nonlinear coupling behavior between electrical and mechanical systems and hysteresis loops as discussed above, piezoelectrics have only a linear electrical-mechanical coupling effect. The principal reason that ferroelectrics were discovered so much later is that the formation of domains of differently oriented polarization within virgin single crystals leads to a lack of any net polarization and a very small pyroelectric and piezoelectric response.

Piezoelectric Polymers

So far we have seen plenty of examples of ceramics showing piezoelectricity. It is known that some natural organic substances, e.g., rubber, wood, and hair, are piezoelectric; bone also shows a piezoelectricity of the same order as wood. The symmetry of rubber and wood is the same as that for piezoelectric ceramics. In 1969, Kawai discovered that the piezoelectricity in polyvinylidene fluoride (PVDF or PVF2) is several times larger than that of quartz. A thin PVDF film was extended and poled at 100°C under a high field, such as 300 kV/cm. Some improvements were made in the poling procedure, and a high d constant, about 30 pC/N, was obtained. PVDF is a polymer with a crystallinity of 40 to 50 percent. The PVDF crystal is dimorphic, with two phases (the α and β phases); the β phase is polar and piezoelectric. To improve the piezoelectricity, some copolymers have been developed. The copolymer VDF/TrFE consists mostly of the piezoelectric β phase of PVDF with high crystallinity, about 90 percent. The magnitude of polarization in TrFE is only half of that in VDF, but the excellent crystallinity improves the piezoelectric coupling in VDF polymers. Because of its flexibility, a piezoelectric polymer usually is supplied in the form of a thin film. This feature makes it suitable for applications in headphones and speakers; furthermore, the thin-film transducer is applicable to a high-frequency ultrasonic generator. The performance of these materials depends greatly on the thermal and poling conditions. Typical characteristics of piezoelectric polymers are given in Table 20.3.3 (Table 9.4 in Ikeda’s book).

Table 20.3.2  The Piezoelectric Constant d for Selected Materials

Material                                              Piezoelectric constant d × 10¹², C/(Pa·m²) = m/V

Piezoelectric crystals
  Quartz (SiO2)                                       2.3
  Rochelle salt (NaKC4H4O6·4H2O)                      11
  EDT (C6H12N2O6)                                     10–20
  DKT (N2C4H4O6·H2O)                                  10–20
  Ammonium dihydrogen phosphate—ADP (NH4H2PO4)        23
  BaTiO3                                              85.6
  Lithium niobate (LiNbO3)                            6
  Lithium tantalate (LiTaO3)                          8

Piezoelectric semiconductors
  BeO                                                 0.24
  ZnO                                                 12.4
  CdS                                                 10.3
  CdSe                                                7.8
  AlN                                                 5

Piezoelectric ceramics
  PZTs (PbZrO3 and PbTiO3)                            250–365
  PLZT                                                1188
  BaTiO3                                              191
  PbNb2O6                                             80

Application of Piezoelectrics

Because of their coupled electrical and mechanical behavior, piezoelectric materials can be used as transducers between electrical and mechanical energy. They are also active materials (or "smart" materials), which have the ability to perform both sensing and responding functions. This ability traditionally is associated only with living systems, as opposed to inanimate matter; the distinguishing feature of smart materials is, then, their ability to imitate this fundamental aspect of life. In practice, piezoelectric transducers are limited to devices involving only very small mechanical displacements and small amounts of electric charge per cycle. They are also ideal candidates for use as transducers, actuators, or sensors in microelectromechanical systems (MEMS). A list of representative applications of piezoelectric materials is given below.
1. Phonograph cartridges. Phonograph pickups have used more piezoelectric ceramic elements than all other applications combined.

In the period from 1935 to 1950, the phonograph pickup field was dominated by the first ferroelectric single crystal, Rochelle salt. A schematic sketch of phonograph pickup elements in the conventional configuration of a cemented sandwich of flat plates of ceramic on a brass vane is shown in Fig. 20.3.4. The forces available from the phonograph record groove are only 1 to 5 g. Typical output voltage for the ceramic is from 0.1 to 0.5 V. Modified lead titanate zirconates (PZTs) have largely replaced barium titanate, as they give approximately twice as much output.
2. Ultrasonic transducers. A common application of piezoelectric materials is in ultrasonic transducers. Figure 20.3.5 shows a typical cross section of an ultrasonic transducer. In this cutaway view, the piezoelectric crystal or element is encased in a convenient housing. The constraint of the backing material causes the converse piezoelectric effect to generate a pressure when the faceplate is pressed against a solid material to be inspected. When the transducer is operated in this way (as an ultrasonic transmitter), an ac electrical signal produces an ultrasonic signal of the same frequency (usually in the megahertz range).

Table 20.3.3  Characteristics of Piezoelectric Polymers

                            Polymer      Copolymer                                       Composite
Property                    PVDF         P(VDF-TrFE)*   P(VDF-TeFE)†   P(VDCN-VAc)‡      PZT/PVDF

ρ [10³ kg/m³]               1.78         1.88
ε33/ε0                      6.2          6.0
tan δ                       0.25         0.15
d31 [pC/N]                  28           12.5
e33 [C/m²]                  0.16         0.23
g33 [10⁻³ V·m/N]            320          380
kt                          0.20         0.30
c33^D [10⁹ N/m²]            9.1          11.3
v31 [km/s]                  2.26         2.40
Qm                          10           20
Z0 [10⁶ kg/(m²·s)]          4.0          4.5
TC [°C]                     170          130
Pt [C/m²]                   0.055        0.082
Ec [kV/mm]                  45           38
p [μC/(m²·K)]§              35           50

* VDF 0.75/TrFE 0.25.  † VDF 0.8/TeFE 0.2.  ‡ VDCN 0.5/VAc 0.5.  § Pyroelectric coefficient.
VDF: vinylidene fluoride; TrFE: trifluoroethylene; TeFE: tetrafluoroethylene; VDCN: vinylidene cyanide; VAc: vinyl acetate.



Fig. 20.3.4 Ceramic phonograph pickup, series- and parallel-connected. (From Jaffe, Cook, and Jaffe, Fig. 12.1.)

Fig. 20.3.5 A cross-sectional view of an ultrasonic transducer. (From Metals Handbook, 8th ed., Vol. 11, American Society for Metals, Metals Park, Ohio, 1976.)

When the transducer is operated as an ultrasonic receiver, the direct piezoelectric effect is employed. In that case, the high-frequency elastic wave striking the faceplate generates a measurable voltage oscillation of the same frequency.
3. Instrument transducers. PZT ceramics have become the dominant material for piezoelectric accelerometers. In addition to their use in generating ultrasonic waves with frequencies on the order of 1 to 5 MHz to detect flaws and other inhomogeneities in solids or in joints between solid bodies, miniature ceramic transducers have been inserted into blood vessels to record periodic pressure changes connected with the heartbeat. Special-purpose transducers such as blast gages and noise-sensing elements to detect boiling of cooling water in nuclear reactors have been served well by piezoelectrics. Much ingenuity has been expended on other schemes that utilize piezoelectrics for measurement of forces that can be related to input variables of aerospace, medical, and industrial instrumentation.
4. Air transducers. Air transducers have been widely used in microphones, especially in the hearing-aid field, but with the advent of the transistor the market became dominated by electrodynamic microphones of low impedance. Progress in techniques for manufacturing thin ceramic bending elements has resulted in recouping some of the microphone market for piezoelectricity. Piezoelectrics also work well as tuned ultrasonic microphones in the remote control of television sets.
5. Actuators. One example of piezoelectric materials as smart materials is the piezoelectric actuator used in an impact dot-matrix printer. A schematic illustration of such a ceramic actuator is shown in Fig. 20.3.6. In this case, of course, the signal comes in the form of a computer text or image to be printed. Each character formed in a typical printer is a 24 × 24 dot matrix. A printing ribbon is impacted by the multiwire array. The printing element is composed of a multilayer piezoelectric device involving 100 thin ceramic sheets, each 100 μm thick. The hinged lever provides a displacement magnification of 30 times, resulting in a net motion at the ink ribbon of 0.5 mm with an energy transfer efficiency of 50 percent. The advantages of the piezoelectric design compared with conventional electromagnetic types are order-of-magnitude higher printing speed, order-of-magnitude lower energy consumption, and reduced printing noise. The reason for the last advantage is that the low level of heat generation by the piezoelectric allows greater sound shielding.
There are many other applications of piezoelectrics, such as delay line transducers, wave filters, high-voltage sources, and sensors. Good sources of information on piezoelectric devices and their applications are Katz's book (1959), which emphasizes circuit applications, Mason's book (1958), and other materials science handbooks.


Fig. 20.3.6 A schematic illustration of a piezoelectric actuator in an impact dot-matrix printer. (a) Overall structure of the printer head. (b) Close-up of the multilayer piezoelectric printer-head element. (From K. Uchino, MRS Bulletin, 18; 42, 1993.)
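As a rough order-of-magnitude check of the printer-head numbers quoted in application 5 above, the short sketch below works backward from the 0.5-mm ribbon motion and the 30× lever amplification to the stroke and strain demanded of the multilayer stack. The sheet count and the 100-μm sheet thickness are taken from the description above; the result is only an estimate.

```python
# Back-of-envelope check of the dot-matrix printer-head actuator described above.
# All numbers (ribbon motion, lever amplification, sheet count, sheet thickness)
# come from the text; the result is only an order-of-magnitude estimate.
ribbon_motion = 0.5e-3      # m, net motion at the ink ribbon
amplification = 30          # displacement magnification of the hinged lever
n_sheets = 100              # ceramic sheets in the multilayer stack
sheet_thickness = 100e-6    # m

stack_stroke = ribbon_motion / amplification          # about 17 um
strain = stack_stroke / (n_sheets * sheet_thickness)  # about 0.17 percent

print(f"stack stroke ~ {stack_stroke*1e6:.1f} um")
print(f"required stack strain ~ {strain*100:.2f} %")
```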


Glossary

Converse piezoelectric effect: A crystal becomes strained when an electric field is applied.
Curie temperature: The temperature at which the transition from the polar into the nonpolar state occurs.
Direct piezoelectric effect: Electric polarization is produced by mechanical stress.
Domains: Regions with uniform alignment of the electric dipoles or uniform polarization.
Ferroelectric crystal: Pyroelectric crystal with reversible polarization.
Hysteresis loop: The loop traced out by the polarization as the electric field is cycled. A similar loop occurs in magnetic materials.
Paraelectric: A symmetric unit cell material; only a small polarization is possible, as the applied electric field causes a small induced dipole.
Piezoelectricity: A linear interaction between electrical and mechanical systems.
Polarization: Alignment of dipoles so that a charge can be permanently stored.
Pyroelectric: A crystal having a spontaneous polarization.
Pyroelectricity: An interaction between electrical and thermal systems.
Spontaneous polarization: Measured in terms of dipole moment per unit volume or, with reference to the charges induced on the surfaces perpendicular to the polarization, in terms of charge per unit area.

SHAPE MEMORY ALLOYS

Shape memory alloys (SMAs) are metals that, after being strained, revert to their original shape at a certain temperature. A change in their crystal structure above their transformation temperature causes them to return to their original shape. They exhibit two unique properties: pseudo-elasticity and the shape memory effect. The shape memory effect was first discovered by the Swedish physicist Arne Olander in 1932, who used an alloy of gold and cadmium, but not until the 1960s were any serious research advances made in the field of shape memory alloys. In the early 1960s, researchers at the U.S. Naval Ordnance Laboratory discovered the shape memory effect in a nickel-titanium alloy and named it Nitinol (an acronym for nickel titanium naval ordnance laboratory). Still, not until the early 1970s did several commercially available products appear that implemented shape memory alloys. The most effective and widely used SM alloys include NiTi (nickel-titanium), CuZnAl, and CuAlNi.
Most shape memory alloys also exhibit pseudo-elastic behavior, which is a generic name for both rubberlike behavior and superelasticity. Both the shape memory effect and pseudo-elasticity are mainly due to the solid-state phase transformation of the material. There are two stable phases in shape memory alloys: martensite at low temperature and austenite at high temperature. Martensite (after the German scientist Martens) was originally the name used to designate the hard microconstituent formed in quenched steels. It is now used in a wider context to apply to the products of all martensitic transformations. Such transformations take place in crystalline solids by coordinated displacements of atoms or molecules over distances smaller than the interatomic distances in the parent phase; for this reason the transformations are described as being diffusionless. More descriptions of martensite can be found in other sections. Austenite was the name originally given to the face-centered-cubic (FCC) crystal structure of iron. It is now used in a wider context for other metals and alloys with cubic structures.
In general, a shape memory alloy starts its martensitic transformation at a temperature Ms during cooling; the transformation is complete when a lower temperature Mf is reached, where the material is said to be in the martensitic state. Above a certain temperature As, the material starts to transform back to austenite, and the transformation is completed at a higher temperature Af. A schematic illustration of the two phases of nitinol, a typical shape memory alloy containing a nearly equal mixture of nickel (55 wt.%) and titanium, is shown in Fig. 20.3.7. If no forces are applied during heating or cooling, the microstructure of the alloy will change without a macroscopically noticeable shape change. In the martensitic condition, an alloy can be easily deformed into a new shape by nonconventional atomistic mechanisms. The alloy will remain in the new deformed shape as long as the temperature is kept constant. Heating will cause the material to transform back to the austenitic phase and restore its original shape. This restoring effect is called the shape memory effect (Fig. 20.3.8).

Fig. 20.3.7 Schematic sketch of NiTi microstructure. (a) Austenite; (b) martensite (twinned).
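The transformation temperatures Ms, Mf, As, and Af defined above are often used in simple engineering estimates of how much of the microstructure is martensitic at a given temperature. The sketch below assumes a purely linear variation of the martensite fraction between those temperatures; the default temperature values are illustrative placeholders, not data for any particular alloy.

```python
def martensite_fraction(T, Ms=60.0, Mf=40.0, As=70.0, Af=90.0, cooling=True):
    """Crude linear estimate of martensite volume fraction versus temperature.

    Ms, Mf, As, Af are the transformation temperatures defined in the text
    (degrees C); the defaults are purely illustrative.  A real alloy follows a
    hysteresis loop, so the answer depends on whether it is being cooled or heated.
    """
    if cooling:                      # austenite -> martensite between Ms and Mf
        if T >= Ms:
            return 0.0
        if T <= Mf:
            return 1.0
        return (Ms - T) / (Ms - Mf)
    else:                            # martensite -> austenite between As and Af
        if T <= As:
            return 1.0
        if T >= Af:
            return 0.0
        return (Af - T) / (Af - As)

# On cooling, the sample is half transformed midway between Ms and Mf:
print(martensite_fraction(50.0, cooling=True))    # 0.5
# On heating, the same temperature still lies below As, so it remains fully martensitic:
print(martensite_fraction(50.0, cooling=False))   # 1.0
```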

Fig. 20.3.8 Schematic microstructure diagram of the shape memory effect.

In most cases, the memory effect is one-way; that is, upon cooling, a shape memory alloy does not undergo any shape change, even though the structure changes back to martensite. When the martensite is strained up to several percent, however, that strain is retained until the material is heated again, at which time shape recovery occurs. Upon recooling, the material does not spontaneously change shape, but must be deliberately strained if shape recovery is again desired. It is possible in some of the shape memory alloys to produce two-way shape memory, in which shape change occurs upon both heating and cooling. The amount of this shape change is always significantly less than that obtained with the one-way shape memory effect, and very little stress can be exerted by the alloy as it tries to assume its low-temperature shape. The heating shape change can still exert very high forces, as with the one-way memory.
Shape memory alloys also exhibit pseudo-elastic behavior. Unlike the shape memory effect, pseudo-elasticity occurs without a change of temperature. In the austenitic condition, an SMA undergoes a phase transformation from austenite to deformed martensite (or stress-induced martensite) when a stress loading is applied. Once the stress is released, the material returns to the austenitic state, which means its shape is restored. This is called superelasticity. A typical stress-strain curve is shown in Fig. 20.3.9, and the load-temperature diagram in Fig. 20.3.10 shows the transformation process during a loading and unloading cycle.
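A curve of the kind shown in Fig. 20.3.9 is often idealized, for first estimates, as an elastic austenite branch, a flat stress plateau while stress-induced martensite forms, and a stiffer branch once the transformation is exhausted. The sketch below implements only the loading branch of such an idealization; the moduli, plateau stress, and transformation strain are assumed, order-of-magnitude values for a NiTi-like alloy, not measured data.

```python
def superelastic_stress(strain, E_a=40e9, E_m=25e9, sigma_f=400e6, eps_t=0.05):
    """Very simplified loading branch of a superelastic stress-strain curve.

    E_a, E_m  : elastic moduli of austenite and stress-induced martensite (Pa)
    sigma_f   : plateau stress at which stress-induced martensite forms (Pa)
    eps_t     : transformation strain accommodated along the plateau
    All numbers are assumed, order-of-magnitude values, not measured data.
    """
    eps_1 = sigma_f / E_a                 # end of the elastic austenite branch
    if strain <= eps_1:
        return E_a * strain               # linear elastic austenite
    if strain <= eps_1 + eps_t:
        return sigma_f                    # flat plateau while SIM forms
    return sigma_f + E_m * (strain - eps_1 - eps_t)   # elastic deformed martensite

for eps in (0.002, 0.03, 0.07):
    print(eps, superelastic_stress(eps) / 1e6, "MPa")
```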


Fig. 20.3.9 Schematic stress-strain curve of a NiTi alloy loaded above Ms temperature and then unloaded. Stress-induced martensite (SIM) is formed during loading, which disappears upon unloading.

Rubberlike behavior was first observed in 1932 by Olander, who studied an Au-Cd alloy. This phenomenon has also been found in several other SMA systems, such as Cu-Al-Ni, Cu-Au-Zn, and Cu-Zn. Considering the Au-Cd alloy, it is interesting to note that the rubberlike behavior is found only after the martensite is aged at room temperature for at least a few hours.

Fig. 20.3.10 Load diagram of the superelastic effect above the Af temperature.

Freshly transformed specimens exhibit the shape memory effect, and bent rods will not spring back to their original shape without aging. Rubberlike behavior also constitutes a mechanical type of shape memory, as does the process of superelasticity, and, in fact, these two phenomena cannot be distinguished on the basis of stress-strain curves alone. But rubberlike behavior is characteristic of a fully martensitic structure, whereas superelastic behavior is associated with formation of martensite from the parent phase under stress. These two types of behavior collectively fall in the category of pseudo-elasticity.
The most common commercially available SMAs are the NiTi alloys and the copper-base alloys. Properties of the two systems (see Table 20.3.4) are quite different. The NiTi alloys have greater shape memory strain (up to 10 percent versus 4 to 5 percent for the copper-base alloys), tend to be much more thermally stable, have excellent corrosion resistance, and have much higher ductility. On the other hand, the copper-base alloys are much less expensive, can be melted and extruded in air with ease, and have a wider range of potential transformation temperatures. The two alloy systems thus have advantages and disadvantages that must be considered in a particular application.

Applications of Shape Memory Alloys

Some examples of applications using the shape memory effect are listed below.
• Coffeepots: A nickel-titanium spring in coffeepots marketed in Japan (Fig. 20.3.11) is trained to open a valve and release hot water at the proper temperature to brew a perfect pot of coffee.

Fig. 20.3.11 Coffeepot.

• Linear motor for remote control: Locks, valves, screens, and other devices can be remotely controlled by a shape memory linear motor. A self-limiting electric heating element (PTC) is fixed onto a shape memory bending element. Applied heat causes shape recovery of the bending element, which is transformed into linear motion of the actuator. As the electric current drops, the temperature falls and the actuator returns to its starting position. Energy consumption is low, and heating elements that operate at any desired voltage are easily available. The main advantage of this application is the direct linear movement, without the need to convert a rotary movement into a linear one. Another advantage is its noiseless operation.
• Hubble space telescope: In April 1990 the Hubble space telescope was launched into space aboard the shuttle Discovery.

Table 20.3.4  Some Selected Properties of Shape Memory Alloys

Alloy            Composition, wt.%                          Transformation         Modulus of         Recovery
                                                            temperature Af, °C     elasticity, GPa    strain, %
Nitinol*
  SE508 tubing   Ni: 55.8; Ti: balance;                     15                     41–75              10
                 O (max): 0.05; C (max): 0.02
  SE508 wiring   Same as tubing                             5–18                   41–75              10
  SM495 wire     Ni: 54.5; Ti: balance;                     60                     28–41              10
                 O (max): 0.05; C (max): 0.02
Ni-Ti            Ni: 49–51 (at.%)                           −200 to 110            28–83              8.5
Cu-Al-Ni         Al: 14–14.5; Ni: 3–4.5                     200                    80–85              4
Cu-Zn-Al         Zn: 38.5–41.5; Al: a few wt.%              120                    70–72              4

* From Nitinol.com.


The telescope was put into orbit around the earth for observing planets and stars, with the aim of obtaining pictures 50 times sharper than observations from the earth. Several systems of the Hubble space telescope were driven by solar energy. The solar panels, which were folded during the launch, were unfolded and positioned after the Hubble space telescope was brought into position. The mechanism, designed by Dornier, is reliable and space- and energy-efficient. It used a specially bent Ni-Ti shape memory element to keep the solar panels in the locked position during launch. At a temperature of 115°F, the shape memory element remembered its original straight form and unlocked the panels.
• Vascular stents: A vascular stent is a device that is implanted in a blood vessel (vein or artery) to provide structural reinforcement of the vessel wall (Fig. 20.3.12). The introduction of SMAs to stent manufacture allows greater recovery strains for use in wider vessels, or stronger recovery force in narrower vessels.

Fig. 20.3.12 Stent.

• Antiscald devices: These are basically an extension that fits between a shower head and the water pipe, containing a Ti-based SMA element that expands if the water becomes too hot, choking off the flow. They are a very cheap, easy, and effective alternative to more expensive technologies.
• Connecting rings: Rings and cylinders made of an SMA that is austenitic at ambient temperature can provide a significant constraining force around the inner circumference, ensuring a strong joint that can be released by cooling the coupling. For instance, NiTi rings on electrical connectors (Fig. 20.3.13) provide a secure joint at ambient temperatures, yet they can easily be released by chilling.

Fig. 20.3.13 Electric connector.

Some examples of applications in which pseudo-elasticity is used are:
• Eyeglass frames: Nitinol is extensively used in the manufacture of eyeglass frames. It is used in the superelastic state or in the combined shape memory and superelastic states. The frames hold prescription lenses in the proper position so that the wearer gets the maximum benefit from them.
• Endodontic wire (high-fatigue superelastic wire): Nickel-titanium alloy has entered the world of endodontics. The flexibility of nickel titanium makes it an ideal material for use in the manufacture of endodontic instruments. Endodontic files using NiTi alloys, both hand and rotary types, have the potential to greatly improve a clinician's ability to negotiate curved root canal systems. Current research is in progress to better define the limitations and strengths of NiTi and to determine the file designs and techniques that take best advantage of this unique material.
• Orthodontic arches (low-deformation-resistance superelastic wire): Superelastic shape memory wires have found wide use as orthodontic wires. The difference between these shape memory wires and a normal wire is the large elastic deformation combined with a low plateau stress obtainable in the shape memory wire. The advantages for the patient are twofold: fewer visits to the orthodontist, because of the larger elastic stroke, and more comfort, because of lower stress levels.


Glossary

As: The temperature at which austenite starts to form.
Af: The temperature at which the material is completely transformed into austenite.
Ms: The temperature at which the martensitic transformation starts upon cooling.
Mf: The temperature at which the martensitic transformation is completed, where the material is said to be in the martensitic state.
Austenite: The name originally given to the FCC crystal structure of iron. It is the stronger phase, with a cubic structure, present in shape memory alloys at high temperature.
Martensite: The name originally used to designate the hard microconstituent formed in quenched steels. It is now applied to any phase that forms as the result of a diffusionless solid-state transformation. It is the more deformable phase, present in shape memory alloys at lower temperature.
Pseudo-elasticity: A generic name for both rubberlike flexibility and superelasticity.
Rubberlike behavior: Some SME alloys show rubberlike flexibility; when bars are bent, they spontaneously unbend upon release of the stress. In order to obtain this behavior, the martensite, after its initial formation, usually must be aged for a period of time; unaged martensites show typical SME behavior. The rubberlike behavior, unlike superelasticity, is a characteristic of the martensite phase, not the parent phase.
Shape memory effect (SME): The ability of certain materials to develop microstructures that, after being deformed, can return the material to its initial shape when heated. Upon cooling, the specimen does not return to its deformed shape.
Superelasticity (SE): The result of a stress-induced phase transformation from austenite directly into deformed martensite under external loading. When the deforming stress is released, the material transforms back to its stable austenitic state and shape. The martensite disappears (reverses) when the stress is released, giving rise to a superelastic stress-strain loop with some stress hysteresis.

REFERENCES

Ferroelectrics/Piezoelectrics
Cady (1964). "Piezoelectricity," Dover.
"Ferroelectrics," in "Ullmann's Encyclopedia of Industrial Chemistry," Wiley, 2003.
"Ferroelectrics" Journal.
IEEE Standard on Piezoelectricity, 1987.
Ikeda (1996). "Fundamentals of Piezoelectricity," Oxford.
Jaffe, Cook, and Jaffe (1971). "Piezoelectric Ceramics," Academic Press.
Jona and Shirane (1993). "Ferroelectric Crystals," Dover.
Katz (1959). "Solid State Magnetic and Dielectric Devices," John Wiley, New York.
Lines and Glass (1997). "Principles and Applications of Ferroelectrics and Related Materials," Oxford.
Mason (1958). "Physical Acoustics and the Properties of Solids," Van Nostrand, Princeton, NJ.
Nye (1979). "Physical Properties of Crystals," Oxford.
Shackelford (2000). "Introduction to Materials Science for Engineers," Sec. 15.4, Prentice Hall.
Valasek, Phys. Rev., 15 (1920): 537–538; 17 (1921): 475–481.

Shape Memory Alloys
Bever, "Encyclopedia of Materials Science and Engineering," Vol. 4, Pergamon, MIT Press, 1986.
Chang and Read, Trans. AIME, 191: 47, 1951.
http://www.nitinol.com; http://www.amtbe.com
Wayman, "Shape Memory and Related Phenomena," Progress in Materials Science, 36: 203–224, 1992.

20.4 INTRODUCTION TO THE FINITE-ELEMENT METHOD
by Farid Amirouche

INTRODUCTION

The finite-element method (FEM) is essentially a technique for obtaining a solution to a complex problem by discretizing a given physical or mathematical problem into smaller fundamental parts called elements. An analysis of each element is then conducted using the required principles or laws of mechanics. Finally, the solution to the problem as a whole is obtained through an assembly procedure applied to the individual solutions of the elements. Finite-element techniques have been used as numerical tools, implemented on digital computers, for the solution of problems in many fields of engineering and science. For example, FEM can be used in computational fluid dynamics (CFD), where the most fundamental consideration is how one treats a continuous fluid in a discretized fashion on a computer. One method is to discretize the spatial domain into small cells to form a volume mesh or grid, and then apply a suitable algorithm to solve the equations of motion. Such a mesh can be either irregular (for instance, consisting of triangles in 2-D or pyramidal solids in 3-D) or regular; the distinguishing characteristic of the former is that each cell must be stored separately in memory. Lastly, if the problem is highly dynamic and occupies a wide range of scales, the grid itself can be dynamically modified in time, as in adaptive mesh refinement methods.
In numerical analysis, the finite-element method is used for solving partial differential equations (PDEs) approximately. Solutions are approximated by either eliminating the differential equation completely (steady-state problems) or rendering the PDE into an equivalent ordinary differential equation, which is then solved by using standard techniques such as finite differences. The use of the finite-element method in engineering for the analysis of physical systems is commonly known as finite-element analysis. Finite-element methods have also been developed to approximately solve integral equations such as the heat transport equation.
Finite-element analysis (FEA) is a numerical technique for the solution of boundary-value problems. It was first developed for use in problems related to trusses and structural analysis in general. The system is represented by a similar model consisting of multiple linked discrete elements. From the equations of equilibrium, as will be seen in this section, in conjunction with the appropriate boundary conditions applicable to each element and to the system as a whole, a set of simultaneous equations is constructed. The system of equations is then solved for the unknown values by using the techniques of linear algebra. Although it is an approximate method, the accuracy of the FEA method can be improved by refining the mesh in the model by using more elements and nodes.

Fig. 20.4.1 Car crash simulation.

Fig. 20.4.2 Car crash simulation results.

A common use of FEA is for the determination of stresses and displacements in mechanical objects and systems. However, it is also routinely used in the analysis of many other types of problems. Figures 20.4.1 and 20.4.2 show a finite-element analysis used to study a car crash simulation.
The last decade has witnessed an unprecedented increase in the speed and power of personal computers. Because of this revolution in personal computing power, PC-based FEM programs have become very popular and can now perform designs in minutes that not long ago would have required hours or even days of processing time. As computer technology continues to advance, even more sophisticated programs will become available to the designer/engineer at an affordable cost.

Fig. 20.4.3 FEM of total knee joint.

BASIC CONCEPTS IN THE FINITE-ELEMENT METHOD

Three-dimensional FEM can handle greater detail and more complex characterizations of construction materials. Three-dimensional FEM can also incorporate nonlinear and nonelastic material models not available in linear elastic analysis (LEA). In finite-element modeling, the structure is divided into a large number of smaller parts, or "elements." Elements may be of different sizes and may be assigned different properties, depending on their location within the structure. By breaking down the structure into elements, the structural problem is transformed into a finite set of equations that can be solved on a computer. Because a structure can be broken down into finite elements, or "discretized," in any number of ways, it is essential that any design standard incorporating finite elements include rules for systematic three-dimensional meshing.
Finite-element modeling is also a tool used extensively in biomechanics for estimating the mechanical behavior of complex adaptive materials such as trabecular bone. Figure 20.4.3 shows the finite-element model of a knee with a tibial insert. The intrinsic complexities captured by FEM include the volume fraction, connectivity, and anisotropy of bone. In the analysis of total knee arthroplasty (TKA), balancing of the soft tissue can be aided by developing FEM models that simulate the behavior of the knee during flexion-extension and, in particular, the contact pressure at the condyles. The objective of such a study is to optimize the design of the stabilizing post and to understand the mechanism of rollback.

Fig. 20.4.4 Cubic continuum element.

There are three phases in either using an FE code or simply developing the procedures for solving problems by FEM. Each phase is described below.
1. Preprocessing phase
   a. Create and discretize the solution domain into finite elements.
   b. Assume a shape function to represent the physical behavior of an element; that is, an approximate continuous function is assumed to represent the solution of an element.
   c. Develop relationships between input and output equations for an element.
   d. Assemble the elements to represent the entire problem. Construct the global stiffness matrix.
   e. Apply boundary conditions, initial conditions, and loading conditions.
2. Solution phase. Solve a set of linear or nonlinear algebraic equations simultaneously to obtain nodal results, such as displacement values or temperature values at different nodes.
3. Postprocessor phase. Obtain other important information such as stresses, energy, and any other information needed to understand the problem solution.
In the preprocessing phase the FEM takes the object to be analyzed and discretizes the body into a limited number of elements; the elements selected are usually those best suited to the problem at hand. On the basis of the equation for a single element, the entire problem is formulated by an assembly of these element equations. Before a solution is attempted, the boundary conditions, initial conditions, and loading conditions are introduced. The solution phase then solves a set of linear or nonlinear equations simultaneously, yielding values such as displacements or temperatures at the nodes. The final phase of the FEM is the postprocessor, where one chooses the type of output to be displayed; this might require additional analysis steps to extract the information needed.
Consider an element of a continuum as shown in Fig. 20.4.4; let the element be represented by a small cube. The nodal points are defined as the end points of the edges of the cube, which are eight in this case. All the elements are connected through the nodal points. Any deformation of the body caused by external loads or temperatures induces certain displacements at the nodes. In general, the displacements are the unknowns and are related to the external loads or temperatures through a mathematical relationship of the form

f^e = k^e u^e + f^e_add        (20.4.1)

where k^e is the local element stiffness (the superscript e denoting the element), f^e is the vector of external forces applied at the element, u^e represents the nodal displacements for the element, and f^e_add represents the additional forces. These additional forces could be a result of surface traction or of the material being initially under stress. For the moment, we will assume them to be zero. Hence, Eq. (20.4.1) reduces to

f^e = k^e u^e        (20.4.2)

It is important at times to express this equation by using the relationship between stresses and strains:

σ^e = s^e ε^e        (20.4.3)

where σ^e is the element stress matrix and s^e is the constitutive relationship connecting the strain ε^e and the stress σ^e. From Eq. (20.4.2), we can see that if f^e and k^e are known, u^e can be evaluated. Similarly, if u^e and k^e are known, we can compute f^e. The finite-element method's most basic function is to automatically generate the local stiffness matrix, knowing the element type and the properties of the material of the element. Several types of elements that are available in the finite-element library are given in Fig. 20.4.5. The well-known general-purpose FEM packages provide an element library that helps the user in the solution of particular problems. These codes can select any of these elements with the proper number of nodes. Thus, it is important at this stage to understand the basic mathematics involved in finite-element formulations.
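As a concrete illustration of Eq. (20.4.2) and of how element stiffness matrices are assembled into a global system, the following sketch builds and solves a chain of two-node axial bar elements in Python/NumPy. It is a minimal pedagogical example, not the procedure of any particular commercial FEM package; the cross sections, modulus, lengths, and load are assumed values.

```python
import numpy as np

# Sketch: assemble Eq. (20.4.2) for a chain of two-node axial bar elements and
# solve the global system K u = f.  The local stiffness of each bar is
# k_e = (A*E/L) * [[1, -1], [-1, 1]]; geometry, modulus, and load are assumed
# example values.
elements = [               # (node_i, node_j, A [m^2], E [Pa], L [m])
    (0, 1, 1.0e-4, 200e9, 0.5),
    (1, 2, 1.0e-4, 200e9, 0.5),
    (2, 3, 0.5e-4, 200e9, 0.5),
]
n_nodes = 4
K = np.zeros((n_nodes, n_nodes))        # global stiffness matrix

for i, j, A, E, L in elements:
    k_e = (A * E / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for a, p in enumerate((i, j)):      # scatter the 2x2 element matrix into K
        for b, q in enumerate((i, j)):
            K[p, q] += k_e[a, b]

f = np.zeros(n_nodes)
f[3] = 10e3                             # assumed 10-kN axial load at the free end

fixed = [0]                             # boundary condition: node 0 is clamped, u0 = 0
free = [n for n in range(n_nodes) if n not in fixed]

u = np.zeros(n_nodes)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print(u)                                # nodal displacements, metres (zero at the clamp)
```

Refining the mesh simply means adding more entries to the element list; the assembly loop and the solve are unchanged, which is the essential point of the method.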

Fig. 20.4.5 Types of elements in the finite-element library.


POTENTIAL ENERGY FORMULATION

The finite-element method is based on the minimization of the total potential energy. When a closed-form solution is not possible, approximation methods such as the finite-element method are the most commonly used in solid mechanics. To illustrate how the finite-element method is formulated, we will consider an elastic body such as the one shown in Fig. 20.4.6, where the body is subjected to a loading force that causes it to deform.

For a linear elastic element the stress-strain relation is

σ = Eε        (20.4.10)

and the strain energy stored in the element is

Λ^(e) = ∫_V (Eε²/2) dV        (20.4.11)

The total energy, which consists of the strain energy due to the deformation of the body and the work performed by the external forces, can be expressed as a function of the total potential energy Π:

Π = Σ_{e=1}^{n} Λ^(e) − Σ_i F_i u_i        (20.4.12)

where u_i denotes the displacement along the force F_i. It follows that a stable system requires that the potential energy be a minimum at equilibrium:

∂Π/∂u_i = ∂/∂u_i [ Σ_{e=1}^{n} Λ^(e) − Σ_i F_i u_i ] = 0        (20.4.13)

For a two-node axial element of length l, the strain is ε = (u_{i+1} − u_i)/l. Then, with A_avg denoting the average cross-sectional area of the element,

∂Λ^(e)/∂u_i = (A_avg E/l)(u_i − u_{i+1})
∂Λ^(e)/∂u_{i+1} = (A_avg E/l)(u_{i+1} − u_i)
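The differentiation above can be checked symbolically. The following is a minimal sketch, assuming a prismatic two-node axial element of area A and length l (so the element volume is simply A·l); it reproduces the derivatives above and, by differentiating once more, the resulting element stiffness matrix.

```python
import sympy as sp

# Symbolic check of the derivatives of the element strain energy, Eq. (20.4.11),
# for a prismatic two-node axial element (assumed area A, length l).
u_i, u_j, A, E, l = sp.symbols('u_i u_j A E l', positive=True)

eps = (u_j - u_i) / l                               # element strain
Lam = sp.Rational(1, 2) * E * eps**2 * A * l        # strain energy over volume A*l

dLam_du_i = sp.simplify(sp.diff(Lam, u_i))          # equals (A*E/l)*(u_i - u_j)
dLam_du_j = sp.simplify(sp.diff(Lam, u_j))          # equals (A*E/l)*(u_j - u_i)

# Differentiating once more collects the coefficients of the nodal displacements,
# giving the familiar element stiffness matrix k_e = (A*E/l)*[[1, -1], [-1, 1]].
k_e = sp.Matrix([[sp.diff(dLam_du_i, u_i), sp.diff(dLam_du_i, u_j)],
                 [sp.diff(dLam_du_j, u_i), sp.diff(dLam_du_j, u_j)]])
print(dLam_du_i, dLam_du_j)
print(k_e)
```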