Information Security Management Handbook, Fourth Edition, Volume 4


INFORMATION SECURITY MANAGEMENT HANDBOOK, 4TH EDITION, VOLUME 4

OTHER AUERBACH PUBLICATIONS The ABCs of IP Addressing Gilbert Held ISBN: 0-8493-1144-6 The ABCs of TCP/IP Gilbert Held ISBN: 0-8493-1463-1

Information Security Management Handbook, 4th Edition, Volume 4 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-1518-2

Building an Information Security Awareness Program Mark B. Desman ISBN: 0-8493-0116-5

Information Security Policies, Procedures, and Standards: Guidelines for Effective Information Security Management Thomas R. Peltier ISBN: 0-8493-1137-3

Building a Wireless Office Gilbert Held ISBN: 0-8493-1271-X

Information Security Risk Analysis Thomas R. Peltier ISBN: 0-8493-0880-1

The Complete Book of Middleware Judith Myerson ISBN: 0-8493-1272-8

A Practical Guide to Security Engineering and Information Assurance Debra Herrmann ISBN: 0-8493-1163-2

Computer Telephony Integration, 2nd Edition William A. Yarberry, Jr. ISBN: 0-8493-1438-0 Cyber Crime Investigator’s Field Guide Bruce Middleton ISBN: 0-8493-1192-6 Cyber Forensics: A Field Manual for Collecting, Examining, and Preserving Evidence of Computer Crimes Albert J. Marcella and Robert S. Greenfield, Editors ISBN: 0-8493-0955-7 Global Information Warfare: How Businesses, Governments, and Others Achieve Objectives and Attain Competitive Advantages Andy Jones, Gerald L. Kovacich, and Perry G. Luzwick ISBN: 0-8493-1114-4 Information Security Architecture Jan Killmeyer Tudor ISBN: 0-8493-9988-2 Information Security Management Handbook, 4th Edition, Volume 1 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-9829-0

The Privacy Papers: Managing Technology and Consumers, Employee, and Legislative Action Rebecca Herold ISBN: 0-8493-1248-5 Secure Internet Practices: Best Practices for Securing Systems in the Internet and e-Business Age Patrick McBride, Jody Patilla, Craig Robinson, Peter Thermos, and Edward P. Moser ISBN: 0-8493-1239-6 Securing and Controlling Cisco Routers Peter T. Davis ISBN: 0-8493-1290-6 Securing E-Business Applications and Communications Jonathan S. Held and John R. Bowers ISBN: 0-8493-0963-8 Securing Windows NT/2000: From Policies to Firewalls Michael A. Simonyi ISBN: 0-8493-1261-2 Six Sigma Software Development Christine B. Tayntor ISBN: 0-8493-1193-4

Information Security Management Handbook, 4th Edition, Volume 2 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-0800-3

A Technical Guide to IPSec Virtual Private Networks James S. Tiller ISBN: 0-8493-0876-3

Information Security Management Handbook, 4th Edition, Volume 3 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-1127-6

Telecommunications Cost Management Brian DiMarsico, Thomas Phelps IV, and William A. Yarberry, Jr. ISBN: 0-8493-1101-2

AUERBACH PUBLICATIONS www.auerbach-publications.com To Order Call: 1-800-272-7737 • Fax: 1-800-374-3401 E-mail: [email protected]

INFORMATION SECURITY MANAGEMENT HANDBOOK, 4TH EDITION, VOLUME 4

Harold F. Tipton Micki Krause EDITORS

AUERBACH PUBLICATIONS A CRC Press Company Boca Raton London New York Washington, D.C.


Chapter 21, “Security Assessment,” © 2003. INTEGRITY. All rights reserved. Chapter 23, “How to Work with a Managed Security Service Provider,” © 2003. Laurie Hill McQuillan. All rights reserved. Chapter 44, “Liability for Lax Computer Security in DDoS Attacks,” © 2003. Dorsey Morrow. All rights reserved.

Library of Congress Cataloging-in-Publication Data Information security management handbook / Harold F. Tipton, Micki Krause, editors.—4th ed. p. cm. Revised edition of: Handbook of information security management 1999. Includes bibliographical references and index. ISBN 0-8493-1518-2 (alk. paper) 1. Computer security — Management — Handbooks, manuals, etc. 2. Data protection— Handbooks, manuals, etc. I. Tipton, Harold F. II. Krause, Micki. III. Title: Handbook of information security management 1999. QA76.9.A25H36 1999a 658′.0558—dc21 99-42823 CIP

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the authors and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use. Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher. All rights reserved. Authorization to photocopy items for internal or personal use, or the personal or internal use of specific clients, may be granted by CRC Press LLC, provided that $1.50 per page photocopied is paid directly to Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA. The fee code for users of the Transactional Reporting Service is ISBN 0-8493-15182/02/$0.00+$1.50. The fee is subject to change without notice. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

Visit the Auerbach Publications Web site at www.auerbach-publications.com © 2003 by CRC Press LLC Auerbach is an imprint of CRC Press LLC No claim to original U.S. Government works International Standard Book Number 0-8493-1518-2 Library of Congress Card Number 99-42823 Printed in the United States of America 1 2 3 4 5 6 7 8 9 0 Printed on acid-free paper


Contributors

THOMAS AKIN, CISSP, Founding Director, Southeast Cybercrime Institute, Marietta, Georgia
ALLEN BRUSEWITZ, CISSP, CBCP, Consultant, Huntington Beach, California
CARL BURNEY, CISSP, IBM, Internet Security Analyst, Salt Lake City, Utah
KEN BUSZTA, CISSP, Consultant, Lakewood, Ohio
MICHAEL J. CORBY, President, QinetiQ Trusted Information Management, Inc., Worcester, Massachusetts
KEVIN J. DAVIDSON, CISSP, Senior Staff Systems Engineer, Lockheed Martin Mission Systems, Gaithersburg, Maryland
DAVID DECKTER, CISSP, Manager, Enterprise Risk Services, Deloitte & Touche LLP, Chicago, Illinois
MARK EDMEAD, CISSP, SSCP, TICSA, President, MTE Software, Inc., Escondido, California
JEFFREY H. FENTON, CBCP, CISSP, Senior Staff Computer System Security Analyst, Corporate Information Security Office, Lockheed Martin Corporation, Sunnyvale, California
ED GABRYS, CISSP, Information Security Manager, People’s Bank, Bridgeport, Connecticut
BRIAN GEFFERT, CISSP, CISA, Senior Manager, Security Services Practice, Deloitte & Touche LLP, San Francisco, California
ALEX GOLOD, CISSP, Infrastructure Specialist, EDS, Troy, Michigan
CHRIS HARE, CISSP, CISA, Information Security and Control Consultant, Nortel Networks, Dallas, Texas
GILBERT HELD, Director, 4-Degree Consulting, Macon, Georgia
KEVIN HENRY, CISA, CISSP, Information Systems Auditor, Oregon Judicial Department, Salem, Oregon
PAUL A. HENRY, CISSP, Vice President, CyberGuard Corporation, Fort Lauderdale, Florida
REBECCA HEROLD, CISSP, CISA, FLMI, Senior Security Consultant, QinetiQ Trusted Information Management, Van Meter, Iowa
DEBRA S. HERRMANN, Manager of Security Engineering, FAA Telecommunications Infrastructure, ITT Advanced Engineering Sciences, Washington, D.C.


RALPH HOEFELMEYER, CISSP, Senior Engineer, WorldCom, Colorado Springs, Colorado
PATRICK D. HOWARD, Senior Information Security Architect, QinetiQ Trusted Information Management, Worcester, Massachusetts
JAVED IKBAL, CISSP, Director, IT Security, Major Financial Services Company, Reading, Massachusetts
CARL B. JACKSON, CISSP, Vice President, Continuity Planning, QinetiQ Trusted Information Management, Houston, Texas
SUDHANSHU KAIRAB, CISSP, CISA, Information Security Consultant, East Brunswick, New Jersey
WALTER S. KOBUS, Jr., CISSP, Vice President, Security Consulting Services, Total Enterprise Solutions, Raleigh, North Carolina
MOLLIE E. KREHNKE, CISSP, Principal Information Security Analyst, Northrop Grumman, Raleigh, North Carolina
DAVID C. KREHNKE, CISSP, Principal Information Security Analyst, Northrop Grumman, Raleigh, North Carolina
DAVID LITZAU, Teacher, San Diego, California
JEFFREY LOWDER, CISSP, GSEC, Independent Information Security Consultant, Paoli, Pennsylvania
DAVID MACLEOD, Ph.D., CISSP, Chief Information Security Officer, The Regence Group, Portland, Oregon
LAURIE HILL MCQUILLAN, CISSP, Vice President, KeyCrest Enterprises, Manassas, Virginia
DORSEY MORROW, CISSP, JD, Operations Manager and General Counsel, International Information Systems Security Certification Consortium, Inc. [(ISC)2], Framingham, Massachusetts
WILLIAM HUGH MURRAY, CISSP, Executive Consultant, IS Security, Deloitte & Touche, New Canaan, Connecticut
DR. K. NARAYANASWAMY, Chief Technology Officer, Cs3, Incorporated, Los Angeles, California
KEITH PASLEY, CISSP, CNE, Senior Security Technologist, Ciphertrust, Atlanta, Georgia
THERESA E. PHILLIPS, CISSP, Senior Engineer, WorldCom, Colorado Springs, Colorado
STEVE A. RODGERS, CISSP, Co-founder, Security Professional Services, Leawood, Kansas
TY R. SAGALOW, Executive Vice President and Chief Operating Officer, eBusiness Risk Solutions, American International Group (AIG), New York, New York
CRAIG A. SCHILLER, CISSP, Information Security Consultant, Hawkeye Security, Wichita, Kansas
BRIAN R. SCHULTZ, CISSP, CISA, Chairman of the Board, INTEGRITY, Centreville, Virginia
PAUL SERRITELLA, Security Architect, American International Group (AIG), New York, New York


KEN SHAURETTE, CISSP, CISA, Information Systems Security Staff Advisor, American Family Institute, Madison, Wisconsin
CAROL A. SIEGEL, CISSP, Chief Security Officer, American International Group (AIG), New York, New York
VALENE SKERPAC, CISSP, President, iBiometrics, Inc., Millwood, New York
EDWARD SKOUDIS, Vice President, Security Strategy, Predictive Systems, New York, New York
ROBERT SLADE, CISSP, Security Consultant and Educator, Vancouver, British Columbia, Canada
ALAN B. STERNECKERT, CISA, CISSP, CFE, COCI, Owner and General Manager, Risk Management Associates, Salt Lake City, Utah
JAMES S. TILLER, CISSP, Global Portfolio and Practice Manager, International Network Services, Tampa, Florida
JAMES TRULOVE, Network Engineer, Austin, Texas
MICHAEL VANGELOS, Information Security Officer, Federal Reserve Bank of Cleveland, Cleveland, Ohio
JAYMES WILLIAMS, CISSP, Security Analyst, PG&E National Energy Group, Portland, Oregon
JAMES M. WOLFE, MSM, Senior Virus Researcher, Enterprise Virus Management Group, Lockheed Martin Corporation, Orlando, Florida


Contents

DOMAIN 1 ACCESS CONTROL SYSTEMS AND METHODOLOGY . . . 1

Section 1.1 Access Control Techniques

Chapter 1 It Is All about Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Chris Hare

Chapter 2 Controlling FTP: Providing Secured Data Transfers . . . . . 21 Chris Hare Section 1.2 Access Control Administration Chapter 3 The Case for Privacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 Michael J. Corby Section 1.3 Methods of Attack Chapter 4 Breaking News: The Latest Hacker Attacks and Defenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Edward Skoudis Chapter 5 Counter-Economic Espionage . . . . . . . . . . . . . . . . . . . . . . . 67 Craig A. Schiller DOMAIN 2 TELECOMMUNICATIONS AND NETWORK SECURITY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Section 2.1 Communications and Network Security Chapter 6 What’s Not So Simple about SNMP? . . . . . . . . . . . . . . . . . . 93 Chris Hare Section 2.2 Internet, Intranet, and Extranet Security Chapter 7 Security for Broadband Internet Access Users . . . . . . . . . 107 James Trulove Chapter 8 New Perspectives on VPNs. . . . . . . . . . . . . . . . . . . . . . . . . . 119 Keith Pasley Chapter 9 An Examination of Firewall Architectures . . . . . . . . . . . . . 129 Paul A. Henry ix


Contents Chapter 10 Deploying Host-Based Firewalls across the Enterprise: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 Jeffery Lowder Chapter 11 Overcoming Wireless LAN Security Vulnerabilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 Gilbert Held Section 2.3 Secure Voice Communication Chapter 12 Voice Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 Chris Hare Chapter 13 Secure Voice Communications (VoI) . . . . . . . . . . . . . . . . . . 191 Valene Skerpac Section 2.4 Network Attacks and Countermeasures Chapter 14 Packet Sniffers: Use and Misuse. . . . . . . . . . . . . . . . . . . . . . 211 Steve A. Rodgers Chapter 15 ISPs and Denial-of-Service Attacks. . . . . . . . . . . . . . . . . . . . 225 Dr. K. Narayanaswamy DOMAIN 3 SECURITY MANAGEMENT PRACTICES. . . . . . . . . . . . . . . 237 Section 3.1 Security Management Concepts and Principles Chapter 16 The Human Side of Information Security . . . . . . . . . . . . . . 239 Kevin Henry Chapter 17 Security Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 Ken Buszta Section 3.2 Policies, Standards, Procedures, and Guidelines Chapter 18 The Common Criteria for IT Security Evaluation. . . . . . . . 275 Debra S. Herrmann Chapter 19 The Security Policy Life Cycle: Functions and Responsibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297 Patrick Howard Section 3.3 Risk Management Chapter 20 Security Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313 Sudhanshu Kairab Chapter 21 Evaluating the Security Posture of an Information Technology Environment: The Challenges of Balancing Risk, Cost, and Frequency of Evaluating Safeguards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 Brian R. Schultz x


Contents Chapter 22 Cyber-Risk Management: Technical and Insurance Controls for Enterprise-Level Security . . . . . . . . . . . . . . . . 341 Carol A. Siegel, Ty R. Sagalow, and Paul Serritella Section 3.4 Security Management Planning Chapter 23 How to Work with a Managed Security Service Provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365 Laurie Hill McQuillan Chapter 24 Considerations for Outsourcing Security . . . . . . . . . . . . . . 383 Michael J. Corby Section 3.5 Employment Policies and Practices Chapter 25 Roles and Responsibilities of the Information Systems Security Officer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 Carl Burney Chapter 26 Information Protection: Organization, Roles and Separation of Duties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415 Rebecca Herold Chapter 27 Organizing for Success: Human Resources Issues in Information Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441 Jeffrey H. Fenton and James M. Wolfe Chapter 28 Ownership and Custody of Data . . . . . . . . . . . . . . . . . . . . . 461 William Hugh Murray DOMAIN 4 APPLICATION PROGRAM SECURITY . . . . . . . . . . . . . . . . 473 Section 4.1 Application Issues Chapter 29 Application Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475 Walter S. Kobus, Jr. Section 4.2 Systems Development Controls Chapter 30 Certification and Accreditation Methodology . . . . . . . . . . 485 Mollie Krehnke and David Krehnke Chapter 31 A Framework for Certification Testing . . . . . . . . . . . . . . . . 509 Kevin J. Davidson Section 4.3 Malicious Code Chapter 32 Malicious Code: The Threat, Detection, and Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541 Ralph Hoefelmeyer and Theresa E. Phillips Chapter 33 Malware and Computer Viruses. . . . . . . . . . . . . . . . . . . . . . 565 Robert Slade xi


Contents DOMAIN 5 CRYPTOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617 Section 5.1 Crypto Concepts, Methodologies, and Practices Chapter 34 Steganography: The Art of Hiding Messages . . . . . . . . . . . 619 Mark Edmead Chapter 35 An Introduction to Cryptography . . . . . . . . . . . . . . . . . . . . 627 Javek Ikbel Chapter 36 Hash Algorithms: From Message Digests to Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653 Keith Pasley Section 5.2 Public Key Infrastructure (PKI) Chapter 37 PKI Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665 Alex Golod DOMAIN 6 COMPUTER, SYSTEM, AND SECURITY ARCHITECTURE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681 Section 6.1 Principles of Computer and Network Organizations, Architectures, and Designs Chapter 38 Security Infrastructure: Basics of Intrusion Detection Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683 Ken Shaurette Chapter 39 Firewalls, Ten Percent of the Solution: A Security Architecture Primer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699 Chris Hare Chapter 40 The Reality of Virtual Computing. . . . . . . . . . . . . . . . . . . . . 719 Chris Hare DOMAIN 7 OPERATIONS SECURITY . . . . . . . . . . . . . . . . . . . . . . . . . . . 745 Section 7.1 Operations Controls Chapter 41 Directory Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747 Ken Buszta DOMAIN 8 BUSINESS CONTINUITY PLANNING. . . . . . . . . . . . . . . . . . 759 Chapter 42 The Changing Face of Continuity Planning . . . . . . . . . . . . . 761 Carl Jackson Chapter 43 Business Continuity Planning: A Collaborative Approach. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775 Kevin Henry xii


Contents DOMAIN 9 LAW, INVESTIGATION, AND ETHICS . . . . . . . . . . . . . . . . 789 Section 9.1 Information Law Chapter 44 Liability for Lax Computer Security in DDoS Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791 Dorsey Morrow Chapter 45 HIPAA 201: A Framework Approach to HIPAA Security Readiness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799 David MacLeod, Brian Geffert, and David Deckter Section 9.2 Major Categories of Computer Crime Chapter 46 The International Dimensions of Cyber-Crime . . . . . . . . . 815 Ed Gabrys Section 9.3 Incident Handling Chapter 47 Reporting Security Breaches . . . . . . . . . . . . . . . . . . . . . . . . 841 James S. Tiller Chapter 48 Incident Response Management . . . . . . . . . . . . . . . . . . . . . 855 Alan B. Sterneckert Chapter 49 Managing the Response to a Computer Security Incident . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873 Michael Vangelos Chapter 50 Cyber-Crime: Response, Investigation, and Prosecution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889 Thomas Akin DOMAIN 10 PHYSICAL SECURITY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899 Section 10.1 Elements of Physical Security Chapter 51 Computing Facility Physical Security . . . . . . . . . . . . . . . . . 901 Allen Brusewitz Chapter 52 Closed-Circuit Television and Video Surveillance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915 David Litzau Section 10.2 Environment and Life Safety Chapter 53 Physical Security: The Threat after September 11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927 Jaymes Williams INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957


Introduction

This past year has brought an increasing focus on the need for information security at all levels of public- and private-sector organizations. The continuous growth of technology, distributed denial-of-service attacks, a significant (13 percent) increase in virus and worm attacks over the prior year, and, of course, the anticipated aftermath of September 11 — terrorism over the Internet — all have worked to increase concerns about how well we are protecting our information processing assets.

This Volume 4, in combination with the previous volumes of the 4th Edition of the Information Security Management Handbook (ISMH), is designed to cover more of the topics in the Common Body of Knowledge as well as address items resulting from new technology. As such, it should be a valuable reference for those preparing to take the CISSP examination as well as for those who are working in the field. Those CISSP candidates who take the (ISC)2 CBK Review Seminar and use the volumes of the 4th Edition of the ISMH to study those areas that they have not covered in their work experience have achieved an exceptionally high pass rate for the examination. On the other hand, those who have already attained CISSP status comment frequently that the ISMH books are a very useful reference in the workplace. These comments are especially heartwarming because they underscore our success in obtaining the most proficient authors for the ISMH chapters.

The environment in which information processing is required to perform these days is very challenging from an information security viewpoint. Consequently, it is more and more imperative that organizations employ the most qualified information security personnel available. Although qualifications can be reflected in several different ways, one of the best is through the process of professional certification, and the CISSP examination is considered the worldwide leader among such certifications. There are currently over 9000 CISSPs internationally.

With this in mind, we have again formatted the Table of Contents for this volume to be consistent with the ten domains of the Common Body of Knowledge for the field of information security. This makes it easier for the reader to select chapters for study in preparation for the CISSP examination and for professionals to find the chapters they need to refer to in order to solve specific problems in the workplace. None of the chapters in the 4th Edition Volumes 1 through 4 is repeated. All represent new material, and the several volumes supplement each other.

HAL TIPTON
MICKI KRAUSE
October 2002


Domain 1

Access Control Systems and Methodology


There is ample justification for beginning this volume of the Handbook with the fundamental concept of controlling access to critical resources. Absent access controls, there is little if any assurance that information will be used or disclosed in an authorized manner. In this domain, one of our authors aptly points to Sawyer’s Internal Auditing, Fourth Edition, to offer a comprehensive definition of control and clearly conveys that control can take on many diverse forms while achieving similar results. The definition follows:

“Control is the employment of all the means in an enterprise to promote, direct, restrain, govern, and check upon its various activities for the purpose of seeing that the enterprise objectives are met. These means of control include, but are not limited to, form of organization, policies, systems, procedures, instructions, standards, committees, charts of account, forecasts, budgets, schedules, reports, checklists, records, devices, and internal auditing.”

Paradoxically, we often employ computer technology to counter and control the threats posed by evolving computer technologies. While this Handbook is being written, wireless networking is coming of age as prices continue to decrease and usability and interoperability continue to increase. As attractive as wireless networks are, however, wide deployment is still hampered by the acknowledged lack of security. As we read in this domain, computer attackers continue to gain unauthorized system access by exploiting insecure technologies. The good news offered herein is that there are numerous controls available to be implemented for wireless local area networks that will minimize or mitigate risk.

The terrorist attacks of September 11, 2001, still live clearly in our minds and hearts as ever-living proof that we live in a constant state of world war. Although the resultant losses from economic espionage are clearly not of the magnitude suffered by the loss of lives from the World Trade Center catastrophe, they are sufficient to be reckoned with. In this domain, we feature a chapter that details the history of economic espionage, many of the players, stories of organizations affected, and some of the ways in which we can counter the economic espionage threat.


Chapter 1

It Is All about Control

Chris Hare, CISSP, CISA

The security professional and the auditor come together around one topic: control. The two professionals may not agree with the methods used to establish control, but their concerns are related. The security professional is there to evaluate the situation, identify the risks and exposures, recommend solutions, and implement corrective actions to reduce the risk. The auditor also evaluates risk, but the primary role is to evaluate the controls implemented by the security professional. This role often puts the security professional and the auditor at odds, but this does not need to be the case.

This chapter discusses controls in the context of the Common Body of Knowledge of the Certified Information Systems Security Professional (CISSP), but it also introduces the language and definitions used by the audit profession. This approach will ease some of the misconceptions and terminology differences between the security and audit professions. Because both professions are concerned with control, albeit from different perspectives, the security and audit communities should have close interaction and cooperate extensively.

Before discussing controls, it is necessary to define some parameters. Audit does not mean security. Think of it this way: the security professional does not often think in control terms. Rather, the security professional is focused on what measures or controls should be put into operation to protect the organization from a variety of threats. The goal of the auditor is not to secure the organization but to evaluate the controls to ensure risk is managed to the satisfaction of management. Two perspectives of the same thing — control.

WHAT IS CONTROL?

According to Webster’s Dictionary, control is a method “to exercise restraining or directing influence over.” An organization uses controls to regulate or define the limits of behavior for its employees or its operations for processes and systems. For example, an organization may have a process for defining widgets and uses controls within the process to maintain quality or production standards. Many manufacturing facilities use controls to limit or regulate production of their finished goods. Professions such as medicine use controls to establish limits on acceptable conduct for their members. For example, the actions of a medical student or intern are monitored, reviewed, and evaluated — hence controlled — until the applicable authority licenses the medical student.

Regardless of the application, controls establish the boundaries and limits of operation. The security professional establishes controls to limit access to a facility or system or privileges granted to a user. Auditors evaluate the effectiveness of the controls. There are five principal objectives for controls:

1. Propriety of information
2. Compliance with established rules
3. Safeguarding of assets
4. Efficient use of resources
5. Accomplishment of established objectives and goals

Propriety of information is concerned with the appropriateness and accuracy of information. The security profession uses integrity or data integrity in this context, as the primary focus is to ensure the information is accurate and has not been inappropriately modified.

Compliance with established rules defines the limits or boundaries within which people or systems must work. For example, one method of compliance is to evaluate a process against a defined standard to verify correct implementation of that process.

Safeguarding the organization’s assets is of concern for management, the security professional, and the auditor alike. The term asset is used to describe any object, tangible or intangible, that has value to the organization.

The efficient use of resources is of critical concern in the current market. Organizations and management must concern themselves with the appropriate and controlled use of all resources, including but not limited to cash, people, and time.

Most importantly, however, organizations are assembled to achieve a series of goals and objectives. Without goals to establish the course and desired outcomes, there is little reason for an organization to exist.

To complete our definition of controls, Sawyer’s Internal Auditing, 4th Edition, provides an excellent definition:

Control is the employment of all the means and devices in an enterprise to promote, direct, restrain, govern, and check upon its various activities for the purpose of seeing that enterprise objectives are met. These means of control include, but are not limited to, form of organization, policies, systems, procedures, instructions, standards, committees, charts of account, forecasts, budgets, schedules, reports, checklists, records, methods, devices, and internal auditing.

— Lawrence Sawyer, Internal Auditing, 4th Edition, The Institute of Internal Auditors

Careful examination of this definition demonstrates that security professionals use many of these same methods to establish control within the organization.

COMPONENTS USED TO ESTABLISH CONTROL

A series of components is used to establish controls, specifically:

• The control environment
• Risk assessment
• Control activities
• Information and communication
• Monitoring

The control environment is a term more often used in the audit profession, but it refers to all levels of the organization. It includes the integrity, ethical values, and competency of the people and management. The organizational structure, including decision making, philosophy, and authority assignments, is critical to the control environment. Decisions such as the type of organizational structure, where decision-making authority is located, and how responsibilities are assigned all contribute to the control environment. Indeed, these areas can also be used as the basis for directive or administrative controls as discussed later in the chapter.

Consider an organization where all decision-making authority is at the top of the organization. Decisions and progress are slower because all information must be focused upward. The resulting pace at which the organization changes is lower, and customers may become frustrated due to the lack of employee empowerment. However, if management abdicates its responsibility and allows anyone to make any decision they wish, anarchy results, along with differing decisions made by various employees. Additionally, the external audit organization responsible for reviewing the financial statements may have less confidence due to the increased likelihood that poor decisions are being made.

Risk assessments are used in many situations to assess the potential problems that may arise from poor decisions. Project managers use risk assessments to determine the activities potentially impacting the schedule or budget associated with the project. Security professionals use risk assessments to define the threats and exposures and to establish appropriate controls to reduce the risk of their occurrence and impact. Auditors also use risk assessments to make similar decisions, but more commonly use risk assessment to determine the areas requiring analysis in their review.

Control activities revolve around authorizations and approvals for specific responsibilities and tasks, verification and review of those activities, and promoting job separation and segregation of duties within activities. The control activities are used by the security professional to assist in the design of security controls within a process or system. For example, SAP associates a transaction — an activity — with a specific role. The security professional assists in the review of the role to ensure no unauthorized activity can occur and to establish proper segregation of duties.

The information and communication conveyed within an organization provide people with the data they need to fulfill their job responsibilities. Changes to organizational policies or management direction must be effectively communicated to allow people to know about the changes and adjust their behavior accordingly. However, communications with customers, vendors, government, and stockholders are also of importance. The security professional must approach communications with care. Most commonly, the issue is with the security of the communication itself. Was the communication authorized? Can the source be trusted, and has the information been modified inappropriately since its transmission to the intended recipients? Is the communication considered sensitive by the organization, and was the confidentiality of the communication maintained?

Monitoring of the internal control systems, including security, is of major importance. For example, there is little value gained from the installation of intrusion detection systems if there is no one to monitor the systems and react to possible intrusions. Monitoring also provides a sense of learning or continuous improvement. There is a need to monitor performance, challenge assumptions, and reassess information needs and information systems in order to take corrective action or even take advantage of opportunities for enhanced operations. Without monitoring or action resulting from the monitoring, there is no evolution in an organization. Organizations are not closed static systems and, hence, must adapt their processes to changes, including controls. Monitoring is a key control process to aid the evolution of the organization.

CONTROL CHARACTERISTICS

Several characteristics available to assess the effectiveness of the implemented controls are commonly used in the audit profession. Security professionals should consider these characteristics when selecting or designing the control structure. The characteristics are:


• Timeliness
• Economy
• Accountability
• Placement
• Flexibility
• Cause identification
• Appropriateness
• Completeness

Ideally, controls should prevent and detect potential deviations or undesirable behavior early enough to take appropriate action. The timeliness of the identification and response can reduce or even eliminate any serious cost impact to the organization. Consider anti-virus software: organizations deploying this control must also concern themselves with the delivery method and timeliness of updates from the anti-virus vendor. However, having updated virus definitions available is only part of the control because the new definitions must be installed in the systems as quickly as possible.

Security professionals regularly see solutions provided by vendors that are not economical due to the cost or lack of scalability in large environments. Consequently, the control should be economical and cost-effective for the benefit it brings. There is little economic benefit for a control costing $100,000 per year to manage a risk with an annual impact of $1000.

The control should be designed to hold people accountable for their actions. The user who regularly attempts to download restricted material and is blocked by the implemented controls must be held accountable for such attempts. Similarly, financial users who attempt to circumvent the controls in financial processes or systems must also be held accountable. In some situations, users may not be aware of the limits of their responsibilities and thus may require training. Other users knowingly attempt to circumvent the controls. Only an investigation into the situation can tell the difference.

The effectiveness of the control is often determined by its placement. Accepted placements for controls include:

• Before an expensive part of a process. For example, before entering the manufacturing phase of a project, the controls must be in place to prevent building the incorrect components.
• Before points of difficulty or no return. Some processes or systems have a point where starting over introduces new problems. Consequently, these systems must include controls to ensure all the information is accurate before proceeding to the next phase.
• Between discrete operations. As one operation is completed, a control must be in place to separate and validate the previous operation. For example, authentication and authorization are linked but discrete operations.
• Where measurement is most convenient. The control must provide the desired measurement in the most appropriate place. For example, to measure the amount and type of traffic running through a firewall, the measurement control would not be placed at the core of the network.
• Corrective action response time. The control must alert appropriate individuals and initiate corrective action either automatically or through human intervention within a defined time period.
• After the completion of an error-prone activity. Activities such as data entry are prone to errors due to keying the data incorrectly.
• Where accountability changes. Moving employee data from a human resources system to a finance system may involve different accountabilities. Consequently, controls should be established to provide both accountable parties confidence in the data export and import processes.

As circumstances or situations change, so too must the controls. Flexibility of controls is partially a function of the overall security architecture. The firewall with a set of hard-coded and inflexible rules is of little value as organizational needs change. Consequently, controls should ideally be modular in a systems environment and easily replaced when new methods or systems are developed.

The ability to respond and correct a problem when it occurs is made easier when the control can establish the cause of the problem. Knowing the cause of the problem makes it easier for the appropriate corrective action to be taken.

Controls must provide management with the appropriate responses and actions. If the control impedes the organization’s operations or does not address management’s concerns, it is not appropriate. As is always evident to the security professional, a delicate balance exists between the two; and often the objectives of business operations are at odds with other management concerns such as security. For example, the security professional recommending system configuration changes may affect the operation of a critical business system. Without careful planning and analysis of the controls, the change may be implemented and a critical business function paralyzed.

Finally, the control must be complete. Implementing controls in only one part of the system or process is no better than ignoring controls altogether. This is often very important in information systems. We can control the access of users and limit their ability to perform specific activities within an application. However, if we allow the administrator or programmer a backdoor into the system, we have defeated the controls already established.
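The economy characteristic above lends itself to a simple arithmetic test. The sketch below (Python; the annualized framing and the second pair of figures are illustrative assumptions — only the $100,000 control cost and $1000 annual impact come from the text) compares the yearly cost of a control with the yearly impact of the risk it manages:

```python
def control_is_economical(annual_control_cost: float,
                          annual_loss_expectancy: float) -> bool:
    """Economy check: a control should not cost more per year than the loss it manages.

    annual_loss_expectancy is the estimated yearly impact of the risk
    (for example, single-loss impact multiplied by expected yearly frequency).
    """
    return annual_control_cost < annual_loss_expectancy


# The example from the text: a $100,000-per-year control for a $1000-per-year risk.
print(control_is_economical(100_000, 1_000))   # False - not economical
print(control_is_economical(5_000, 50_000))    # True - illustrative figures only
```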


There are many factors affecting the design, selection, and implementation of controls. This theme runs throughout this chapter and is one the security professional and auditor must each handle on a daily basis.

TYPES OF CONTROLS

There are many types of controls found within an organization to achieve its objectives. Some are specific to particular areas within the organization but are nonetheless worthy of mention. The security professional should be aware of the various controls because he will often be called upon to assist in their design or implementation.

Internal

Internal controls are those used primarily to manage and coordinate the methods used to safeguard an organization’s assets. This process includes verifying the accuracy and reliability of accounting data, promoting operational efficiency, and adhering to managerial policies. We can expand upon this statement by saying internal controls provide the ability to:

• Promote an effective and efficient operation of the organization, including quality products and services
• Reduce the possibility of loss or destruction of assets through waste, abuse, mismanagement, or fraud
• Adhere to laws and external regulations
• Develop and maintain accurate financial and managerial data and report the same information to the appropriate parties on a timely basis

The term internal control is primarily used within the audit profession and is meant to extend beyond the limits of the organization’s accounting and financial departments.

Directive/Administrative

Directive and administrative controls are often used interchangeably to identify the collection of organizational plans, policies, and records. These are commonly used to establish the limits of behavior for employees and processes. Consider the organizational conflict of interest policy. Such a policy establishes the limits of what the organization’s employees can do without violating their responsibilities to the organization. For example, if the organization states employees cannot operate a business on their own time and an employee does so, the organization may implement the appropriate repercussions for violating the administrative control.

Using this example, we can more clearly see why these mechanisms are called administrative or directive controls — they are not easily enforced in automated systems. Consequently, the employee or user must be made aware of limits and stay within the boundaries imposed by the control.

One directive control is legislation. Organizations and employees are bound to specific conduct based upon the general legislation of the country where they work, in addition to any specific legislation regarding the organization’s industry or reporting requirements. Every organization must adhere to revenue, tax collection, and reporting legislation. Additionally, a publicly traded company must adhere to legislation defining reporting requirements, senior management, and the responsibilities and liabilities of the board of directors. Organizations that operate in the healthcare sector must adhere to legislation specific to the protection of medical information, confidentiality, patient care, and drug handling. Adherence to this legislation is a requirement for the ongoing existence of the organization and avoidance of criminal or civil liabilities.

The organizational structure is an important element in establishing decision-making and functional responsibilities. The division of functional responsibilities provides the framework for segregation of duties controls. Through segregation of duties, no single person or department is responsible for an entire process. This control is often implemented within the systems used by organizations.

Aside from the division of functional responsibilities, organizations with a centralized decision-making authority have all decisions made by a centralized group or person. This places a high degree of control over the organization’s decisions, albeit potentially reducing the organization’s effectiveness and responsiveness to change and customer requirements. Decentralized organizations place decision making and authority at various levels in the company with a decreasing range of approval. For example, the president of the company can approve a $1 million expenditure, but a first-level manager cannot. Limiting the range and authority of decision making and approvals gives the company control while allowing the decisions to be made at the correct level. However, there are also many examples in the news of how managers abuse or overstep their authority levels. The intent in this chapter is not to present one as better than the other but rather to illustrate the potential repercussions of choosing either. The organization must make the decision regarding which model is appropriate at which time.

The organization also establishes internal policies to control the behavior of its employees. These policies typically are implemented by procedures, standards, and guidelines. Policies describe senior management’s decisions. They limit employee behavior by typically adding sanctions for noncompliance, often affecting an employee’s position within the organization. Policies may also include codes of conduct and ethics in addition to the finance, audit, HR, and systems policies normally seen in an organization. The collective body of documentation described here instructs employees on what the organization considers acceptable behavior, where and how decisions are made, how specific tasks are completed, and what standards are used in measuring organizational or personal performance.

Accounting

Accounting controls are an area of great concern for the accounting and audit departments of an organization. These controls are concerned with safeguarding the organization’s financial assets and accounting records. Specifically, these controls are designed to ensure that:

• Only authorized transactions are performed, recorded correctly, and executed according to management’s directions.
• Transactions are recorded to allow for preparation of financial statements using generally accepted accounting principles.
• Access to assets, including systems, processes, and information, is obtained and permitted according to management’s direction.
• Assets are periodically verified against transactions to verify accuracy and resolve inconsistencies.

While these are obviously accounting functions, they establish many controls implemented within automated systems. For example, an organization that allows any employee to make entries into the general ledger or accounting system will quickly find itself financially insolvent and questioning its operational decisions. Financial decision making is based upon the data collected and reported from the organization’s financial systems. Management wants to know and demonstrate that only authorized transactions have been entered into the system. Failing to demonstrate this or establish the correct controls within the accounting functions impacts the financial resources of the organization. Additionally, internal or external auditors cannot validate the authenticity of the transactions; they will not only indicate this in their reports but may refuse to sign the organization’s financial reports. For publicly traded companies, failing to demonstrate appropriate controls can be disastrous. The recent events regarding mishandling of information and audit documentation in the Enron case (United States, 2001–2002) demonstrate poor compliance with legislation, accepted standards, accounting, and auditing principles.
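The first accounting objective above — only authorized transactions, recorded correctly — lends itself to an automated check. The following sketch (Python; the role names, approval limits, and entry layout are hypothetical illustrations, not taken from the chapter) shows how an application might reject a general ledger entry that is unauthorized or does not balance:

```python
# Illustrative only: role names, limits, and the entry format are assumptions.
AUTHORIZED_POSTING_ROLES = {
    "gl_clerk": 10_000,        # may post entries up to $10,000
    "gl_supervisor": 250_000,  # may post entries up to $250,000
}


def validate_journal_entry(entry: dict, poster_role: str) -> list[str]:
    """Return a list of control violations for a proposed general-ledger entry."""
    violations = []

    # Only authorized transactions: the poster must hold a recognized posting role.
    if poster_role not in AUTHORIZED_POSTING_ROLES:
        violations.append(f"role '{poster_role}' is not authorized to post entries")
    else:
        limit = AUTHORIZED_POSTING_ROLES[poster_role]
        if entry["amount"] > limit:
            violations.append(f"amount {entry['amount']} exceeds approval limit {limit}")

    # Recorded correctly: debits and credits must balance before the entry is accepted.
    if sum(entry["debits"]) != sum(entry["credits"]):
        violations.append("entry does not balance (debits != credits)")

    return violations


if __name__ == "__main__":
    entry = {"amount": 15_000, "debits": [15_000], "credits": [15_000]}
    print(validate_journal_entry(entry, "gl_clerk"))
    # ['amount 15000 exceeds approval limit 10000']
```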


Preventive

As presented thus far, controls may exist for the entire organization or for subsets of specific groups or departments. However, some controls are implemented to prevent undesirable behavior before it occurs. Other controls are designed to detect the behaviors when they occur, to correct them, and improve the process so that a similar behavior will not recur. This suite of controls is analogous to the prevent–detect–correct cycle used within the information security community.

Preventive controls establish mechanisms to prevent the undesirable activity from occurring. Preventive controls are considered the most cost-effective approach of the preventive–detective–corrective cycle. When a preventive control is embedded into a system, the control prevents errors and minimizes the use of detective and corrective techniques. Preventive controls include trustworthy, trained people, segregation of duties, proper authorization, adequate documents, proper record keeping, and physical controls. For example, an application developer who includes an edit check in the zip or postal code field of an online system has implemented a preventive control. The edit check validates the data entered as conforming to the zip or postal code standards for the applicable country. If the data entered does not conform to the expected standards, the check generates an error for the user to correct.

Detective

Detective controls find errors when the preventive system does not catch them. Consequently, detective controls are more expensive to design and implement because they not only evaluate the effectiveness of the preventive control but must also be used to identify potentially erroneous data that cannot be effectively controlled through prevention. Detective controls include reviews and comparisons, audits, bank and other account reconciliation, inventory counts, passwords, biometrics, input edit checks, checksums, and message digests.

A situation in which data is transferred from one system to another is a good example of detective controls. While the target system may have very strong preventive controls when data is entered directly, it must accept data from other systems. When the data is transferred, it must be processed by the receiving system to detect errors. The detection is necessary to ensure that valid, accurate data is received and to identify potential control failures in the source system.

Corrective

The corrective control is the most expensive of the three to implement and establishes what must be done when undesirable events occur. No matter how much effort or resources are placed into the detective controls, they provide little value to the organization if the problem is not corrected and is allowed to recur. Once the event occurs and is detected, appropriate management and other resources must respond to review the situation and determine why the event occurred, what could have been done to prevent it, and implement the appropriate controls. The corrective controls terminate the loop and feed back the new requirements to the beginning of the cycle for implementation. From a systems security perspective, we can demonstrate these three controls:

• An organization is concerned with connecting the organization to the Internet. Consequently, it implements firewalls to limit (prevent) unauthorized connections to its network. The firewall rules are designed according to the requirements established by senior management in consultation with technical and security teams.
• Recognizing the need to ensure the firewall is working as expected and to capture events not prevented by the firewall, the security teams establish an intrusion detection system (IDS) and a log analysis system for the firewall logs. The IDS is configured to detect network behaviors and anomalies the firewall is expected to prevent. Additionally, the log analysis system accepts the firewall logs and performs additional analysis for undesirable behavior. These are the detective controls.
• Finally, the security team advises management that the ability to review and respond to issues found by the detective controls requires a computer incident response team (CIRT). The role of the CIRT is to accept the anomalies from the detective systems, review them, and determine what action is required to correct the problem. The CIRT also recommends changes to the existing controls or the addition of new ones to close the loop and prevent the same behavior from recurring.

Deterrent

The deterrent control is used to discourage violations. As a control itself, it cannot prevent them. Examples of deterrent controls are sanctions built into organizational policies or punishments imposed by legislation.

Recovery

Recovery controls include all practices, procedures, and methods to restore the operations of the business in the event of a disaster, attack, or system failure. These include business continuity planning, disaster recovery plans, and backups. All of these mechanisms enable the enterprise to recover information, systems, and business processes, thereby restoring normal operations.
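To make the preventive edit check described above concrete, the following minimal sketch (Python; the patterns shown are simplified assumptions and do not capture the full postal-code rules of any country) rejects malformed input before it is stored and generates an error for the user to correct:

```python
import re

# Simplified, illustrative patterns only; real postal-code rules are more involved.
POSTAL_CODE_PATTERNS = {
    "US": re.compile(r"^\d{5}(-\d{4})?$"),                    # 12345 or 12345-6789
    "CA": re.compile(r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$"),  # A1A 1A1
}


class ValidationError(ValueError):
    """Raised so the application can prompt the user to correct the field."""


def check_postal_code(country: str, value: str) -> str:
    """Preventive edit check: reject malformed postal codes before they are stored."""
    pattern = POSTAL_CODE_PATTERNS.get(country)
    if pattern is None:
        raise ValidationError(f"no postal-code rule defined for country '{country}'")
    if not pattern.match(value.strip()):
        raise ValidationError(f"'{value}' is not a valid postal code for {country}")
    return value.strip().upper()


# Example: check_postal_code("CA", "k1a 0b1") returns "K1A 0B1";
# check_postal_code("US", "1234") raises ValidationError.
```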


Compensating

If the control objectives are not achieved, or are only partially achieved, an increased risk of irregularities in the business operation exists. Additionally, in some situations, a desired control may be missing or cannot be implemented. Consequently, management must evaluate the cost–benefit of implementing additional controls, called compensating controls, to reduce the risk.

Compensating controls may include other technologies, procedures, or manual activities to further reduce risk. For example, it is accepted practice to prevent application developers from accessing a production environment, thereby limiting the risk associated with insertion of improperly tested or unauthorized program code changes. However, in many enterprises, the application developer may be part of the application support team. In this situation, a compensating control could be used to allow the developer restricted (monitored and/or limited) access to the production system, only when access is required.

CONTROL STANDARDS

With this understanding of controls, we must examine the control standards and objectives of security professionals, application developers, and system managers. Control standards provide developers and administrators with the knowledge to make appropriate decisions regarding key elements within the security and control framework. The standards are closely related to the elements discussed thus far. Standards are used to implement the control objectives, namely:

• Data validation
• Data completeness
• Error handling
• Data management
• Data distribution
• System documentation

Application developers who understand these objectives can build applications capable of meeting or exceeding the security requirements of many organizations. Additionally, the applications will be more likely to satisfy the requirements established by the audit profession. Data accuracy standards ensure the correctness of the information as entered, processed, and reported. Security professionals consider this an element of data integrity. Associated with data accuracy is data completeness. Similar to ensuring the accuracy of the data, the security professional 14


must also be concerned with ensuring that all information is recorded. Data completeness includes ensuring that only authorized transactions are recorded and none are omitted. Timeliness relates to processing and recording the transactions in a timely fashion. This includes service levels for addressing and resolving error conditions. Critical errors may require that processing halts until the error is identified and corrected.

Audit trails and logs are useful in determining what took place after the fact. There is a fundamental difference between audit trails and logs. The audit trail is used to record the status and processing of individual transactions. Recording the state of the transaction throughout the processing cycle allows for the identification of errors and corrective actions. Log files are primarily used to record access to information by individuals and what actions they performed with the information.

Aligned with audit trails and logs is system monitoring. System administrators implement controls to warn of excessive processor utilization, low disk space, and other conditions. Developers should insert controls in their applications to advise of potential or real error conditions. Management is interested in information such as the error condition, when it was recorded, the resolution, and the elapsed time to determine and implement the correction. Through techniques including edit controls, control totals, log files, checksums, and automated comparisons, developers can address traditional security concerns.

CONTROL IMPLEMENTATION

The practical implementations of many of the control elements discussed in this chapter are visible in today’s computing environments. Both operating system and application-level implementations are found, often working together to protect access and integrity of the enterprise information. The following examples illustrate and explain various control techniques available to the security professional and application developer.

Transmission Controls

The movement of data from the origin to the final processing point is of importance to security professionals, auditors, management, and the actual information user. Implementation of transmission controls can be established through the communications protocol itself, hardware, or within an application.


For example, TCP/IP implementations handle transmission control through the retransmission of information received in error. The ability of TCP/IP to perform this service is based upon error controls built into the protocol or service. When a TCP packet is received and the checksum calculated for the packet is incorrect, TCP requests retransmission of the packet. However, UDP packets must have their error controls implemented at the application layer, such as with NFS.

Sequence

Sequence controls are used to evaluate the accuracy and completeness of the transmission. These controls rely upon the source system generating a sequence number, which is tested by the receiving system. If the data is received out of sequence or a transmission is missing, the receiving system can request retransmission of the missing data or refuse to accept or process any of it. Regardless of the receiving system’s response, the sequence controls ensure data is received and processed in order.

Hash

Hash controls are stored in the record before it is transmitted. These controls identify errors or omissions in the data. Both the transmitting and receiving systems must use the same algorithm to compute and verify the computed hash. The source system generates a hash value and transmits both the data and the hash value. The receiving system accepts both values, computes the hash, and verifies it against the value sent by the source system. If the values do not match, the data is rejected. The strength of the hash control can be improved through strong algorithms that are difficult to fake and by using different algorithms for various data types.

Batch Totals

Batch totals are the precursors to hashes and are still used in many financial systems. Batch controls are sums of information in the transmitted data. For example, in a financial system, batch totals are used to record the number of records and the total amounts in the transmitted transactions. If the totals are incorrect on the receiving system, the data is not processed.

Logging

A transaction is often logged on both the sending and receiving systems to ensure continuity. The logs are used to record information about the


transmission or received data, including date, time, type, origin, and other information. The log records provide a history of the transactions, useful for resolving problems or verifying that transmissions were received. If both ends of the transaction keep log records, their system clocks must be synchronized with an external time source to maintain traceability and consistency in the log records.

Edit

Edit controls provide data accuracy and consistency for the application. With edit activities such as inserting or modifying a record, the application performs a series of checks to validate the consistency of the information provided. For example, if the field is for a zip code, the data entered by the user can be verified to conform to the data standards for a zip code. Likewise, the same can be done for telephone numbers, etc. Edit controls must be defined and inserted into the application code as it is developed. This is the most cost-efficient implementation of the control; however, it is possible to add the appropriate code later. The lack of edit controls affects the integrity and quality of the data, with possible repercussions later.

PHYSICAL

The implementation of physical controls in the enterprise reduces the risk of theft and destruction of assets. The application of physical controls can decrease the risk of an attacker bypassing the logical controls built into the systems. Physical controls include alarms, window and door construction, and environmental protection systems. The proper application of fire, water, electrical, temperature, and air controls reduces the risk of asset loss or damage.

DATA ACCESS

Data access controls determine who can access data, when, and under what circumstances. Common forms of data access control implemented in computer systems are file permissions. There are two primary control methods — discretionary access control and mandatory access control.

Discretionary access control, or DAC, is typically implemented through system services such as file permissions. In the DAC implementation, the user chooses who can access a file or program based upon the file permissions established by the owner. The key element here is that the ability to access the data is decided by the owner and is, in turn, enforced by the system.
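As a minimal sketch of discretionary access control in practice (an illustration added here, not from the original text), the owner of a file on a UNIX-style system chooses the permissions and the operating system enforces that choice on every later access attempt. The filename report.txt is hypothetical.

import os
import stat

# The owner grants read/write access to himself, read access to the group,
# and removes all access for others; the operating system enforces this
# discretionary decision whenever another user tries to open the file.
def restrict_to_owner_and_group(path):
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

restrict_to_owner_and_group("report.txt")   # equivalent to: chmod 640 report.txt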


Mandatory access control, also known as MAC, removes the ability of the data owner alone to decide who can access the data. In the MAC model, both the data and the user are assigned a classification and clearance. If the clearance assigned to the user meets or exceeds the classification of the data and the owner permits the access, the system grants access to the data. With MAC, the owner and the system determine access based upon owner authorization, clearance, and classification. Both DAC and MAC models are available in many operating system and application implementations.

WHY CONTROLS DO NOT WORK

While everything presented in this chapter makes good sense, implementing controls can be problematic. Overcontrolling an environment or implementing confusing and redundant controls results in excessive human/monetary expense. Unclear controls might bring confusion to the work environment and leave people wondering what they are supposed to do, delaying and impacting the ability of the organization to achieve its goals. Similarly, controls might decrease effectiveness or entail an implementation that is costlier than the risk (potential loss) they are designed to mitigate. In some situations, the control may become obsolete and effectively useless. This is often evident in organizations whose policies have not been updated to reflect changes in legislation, economic conditions, and systems.

Remember: people will resist attempts to control their behaviors. This is human nature and very common in situations in which the affected individuals were not consulted or involved in the development of the control. Resistance is highly evident in organizations in which the controls are so rigid or overemphasized as to cause mental or organizational rigidity. The rigidity causes a loss of flexibility to accommodate certain situations and can lead to strict adherence to procedures when common sense and rationality should be employed.

Personnel can and will accept controls. Most people are more willing to accept them if they understand what the control is intended to do and why. This means the control must be a means to an end and not the end itself. Alternatively, the control may simply not achieve the desired goal. There are four primary reactions to controls the security professional should consider when evaluating and selecting the control infrastructure:

1. The control is a game. Employees consider the control as a challenge, and they spend their efforts in finding unique methods to circumvent the control.
2. Sabotage. Employees attempt to damage, defeat, or ignore the control system and demonstrate, as a result, that the control is worthless.


3. Inaccurate information. Information may be deliberately managed to demonstrate the control as ineffective or to promote a department as more efficient than it really is.
4. Control illusion. While the control system is in force and working, employees ignore or misinterpret results. The system is credited when the results are positive and blamed when results are less favorable.

The previous four are fairly complex reactions. Far more simplistic reactions leading to the failure of control systems have been identified:

• Apathy. Employees have no interest in the success of the system, leading to mistakes and carelessness.
• Fatigue. Highly complex operations result in fatigue of systems and people. Simplification may be required to address the problem.
• Executive override. The executives in the organization provide a “get out of jail free” card for ignoring the control system. Unfortunately, the executives involved may give permission to employees to ignore all the established control systems.
• Complexity. The system is so complex that people cannot cope with it.
• Communication. The control operation has not been well communicated to the affected employees, resulting in confusion and differing interpretations.
• Efficiency. People often see the control as impeding their abilities to achieve goals.

Despite the reasons why controls fail, many organizations operate in very controlled environments due to business competitiveness, handling of national interest or secure information, privacy, legislation, and other reasons. People can accept controls and assist in their design, development, and implementation. Involving the correct people at the correct time results in a better control system.

SUMMARY

This chapter has examined the language of controls, including definitions and composition. It has looked at the different types of controls, some examples, and why controls fail. The objective for the auditor and the security professional alike is to understand the risk the control is designed to address, and to implement or evaluate the control as their role requires. Good controls do depend on good people to design, implement, and use the control. However, the balance between the good and the bad control can be as simple as the cost to implement or the negative impact to business operations. For a control to be effective, it must achieve management’s objectives, be relevant to the situation, be cost effective to implement, and easy for the affected employees to use.


Acknowledgments

Many thanks to my colleague and good friend, Mignona Cote. She continues to share her vast audit experience daily, having a positive effect on information systems security and audit. Her mentorship and leadership have contributed greatly to my continued success.

References

Gallegos, Frederick. Information Technology Control and Audit. Auerbach Publications, Boca Raton, FL, 1999.
Sawyer, Lawrence. Internal Auditing. The Institute of Internal Auditors, 1996.

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.



Chapter 2

Controlling FTP: Providing Secured Data Transfers

Chris Hare, CISSP, CISA

Several scenarios exist that must be considered when looking for a solution:

• The user with a log-in account who requires FTP access to upload or download reports generated by an application. The user does not have access to a shell; rather, his default connection to the box will connect him directly to an application. He requires access to only his home directory to retrieve and delete files.
• The user who uses an application as his shell but does not require FTP access to the system.
• An application that automatically transfers data to a remote system for processing by a second application.

It is necessary to find an elegant solution to each of these problems before that solution can be considered viable by an organization.

Scenario A

A user named Bob accesses a UNIX system through an application that is a replacement for his normal UNIX log-in shell. Bob has no need for, and does not have, direct UNIX command-line access. While using the application, Bob creates reports or other output that he must upload or download for analysis or processing. The application saves this data in either Bob’s home directory or a common directory for all application users. Bob may or may not require the ability to put files onto the application server. The requirements break down as follows:




• Bob requires FTP access to the target server.
• Bob requires access to a restricted number of directories, possibly one or two.
• Bob may or may not require the ability to upload files to the server.

Scenario B

Other application users in the environment illustrated in Scenario A require no FTP access whatsoever. Therefore, it is necessary to prevent them from connecting to the application server using FTP.

Scenario C

The same application used by the users in Scenarios A and B regularly dumps data to move to another system. The use of hard-coded passwords in scripts is not advisable because the scripts must be readable for them to be executed properly. This may expose the passwords to unauthorized users and allow them to access the target system. Additionally, the use of hard-coded passwords makes it difficult to change the password on a regular basis because all scripts using this password must be changed.

A further requirement is to protect the data once stored on the remote system to limit the possibility of unauthorized access, retrieval, and modification of the data. While there are a large number of options and directives for the /etc/ftpaccess file, the focus here is on those that provide secured access to meet the requirements in the scenarios described.

CONTROLLING FTP ACCESS

Advanced FTP servers such as wu-ftpd provide extensive controls for controlling FTP access to the target system. This access does not extend to the IP layer, as the typical FTP client does not offer encryption of the data stream. Rather, FTP relies on the properties inherent in the IP (Internet Protocol) to recover from malformed or lost packets in the data stream. This means one still has no control over the network component of the data transfer. This may allow for the exposure of the data if the network is compromised. However, that is outside the scope of the immediate discussion.

wu-ftpd uses two control files: /etc/ftpusers and /etc/ftpaccess. The /etc/ftpusers file is used to list the users who do not have FTP access rights on the remote system. For example, if the /etc/ftpusers file is empty, then all users, including root, have FTP rights on the system. This is not the desired operation typically, because access to system accounts such as root is to be controlled. Typically, the /etc/ftpusers file contains the following entries:


Exhibit 2-1. Denying FTP access.

C:\WINDOWS>ftp 192.168.0.2
Connected to 192.168.0.2.
220 poweredge.home.com FTP server (Version wu-2.6.1(1) Wed Aug 9 05:54:50 EDT 2000) ready.
User (192.168.0.2:(none)): root
331 Password required for root.
Password:
530 Login incorrect.
Login failed.
ftp>

• root
• bin
• daemon
• adm
• lp
• sync
• shutdown
• halt
• mail
• news
• uucp
• operator
• games
• nobody

When a user in this list, root for example, attempts to access the remote system using FTP, they are denied access because their account is listed in the /etc/ftpusers file. This is illustrated in Exhibit 2-1. By adding additional users to this list, one can control who has FTP access to this server. This does, however, create an additional step in the creation of a user account, but it is a related process and could be added as a step in the script used to create a user. Should a user with FTP privileges no longer require this access, the user’s name can be added to the /etc/ftpusers list at any time. Similarly, if a denied user requires this access in the future, that user can be removed from the list and FTP access restored. Recall the requirements of Scenario B: the user has a log-in on the system to access his application but does not have FTP privileges. This scenario has been addressed through the use of /etc/ftpusers. The user


Exhibit 2-2. Sample /etc/ftpaccess file.

class   all   real,guest,anonymous   *

email root@localhost

loginfails 5

readme  README*   login
readme  README*   cwd=*

message /var/ftp/welcome.msg   login
message .message               cwd=*

compress    yes   all
tar         yes   all
chmod       no    guest,anonymous
delete      no    guest,anonymous
overwrite   no    guest,anonymous
rename      no    guest,anonymous

log transfers anonymous,real inbound,outbound

shutdown /etc/shutmsg

passwd-check rfc822 warn

can still have UNIX shell access or access to a UNIX-based application through the normal UNIX log-in process. However, using /etc/ftpusers prevents access to the FTP server and eliminates the problem of unauthorized data movement to or from the FTP server. Most current FTP server implementations offer the /etc/ftpusers feature.

EXTENDING CONTROL

Scenarios A and C require additional configuration because reliance on the extended features of the wu-ftpd server is required. These control extensions are provided in the file /etc/ftpaccess. A sample /etc/ftpaccess file is shown in Exhibit 2-2. This is the default /etc/ftpaccess file distributed with wu-ftpd. Before one can proceed to the problem at hand, one must examine the statements in the /etc/ftpaccess file. Additional explanations for other statements not found in this example, but required for the completion of our scenarios, are also presented later in the chapter.

The class statement in /etc/ftpaccess defines a class of users, in the sample file a user class named all, with members of the class being real, guest, and anonymous. The syntax for the class definition is:

class <class> <typelist> <addrglob> [<addrglob> ...]


Typelist is one of real, guest, and anonymous. The real keyword matches users to their real user accounts. Anonymous matches users who are using anonymous FTP access, while guest matches guest account access. Each of these classes can be further defined using other options in this file. Finally, the class statement can also identify the list of allowable addresses, hosts, or domains that connections will be accepted from. There can be multiple class statements in the file; the first one matching the connection will be used.

Defining the hosts requires additional explanation. The host definition is a domain name, a numeric address, or the name of a file, beginning with a slash (‘/’), that specifies additional address definitions. Additionally, the address specification may also contain an IP address:netmask or IP address/CIDR definition. (CIDR, or Classless Inter-Domain Routing, uses a value after the IP address to indicate the number of bits used for the network. A Class C address would be written as 192.168.0/24, indicating 24 bits are used for the network.)

It is also possible to exclude users from a particular class using a ‘!’ to negate the test. Care should be taken in using this feature. The results of each of the class statements are OR’d together with the others, so it is possible to exclude an allowed user in this manner. However, there are other mechanisms available to deny connections from specific hosts or domains. The primary purpose of the class statement is to assign connections from specific domains or types of users to a class. With this in mind, one can interpret the class statement in Exhibit 2-2, shown here as:

class all real,guest,anonymous *

This statement defines a class named all, which includes user types real, anonymous, and guest. Connections from any host are applicable to this class.

The email clause specifies the e-mail address of the FTP archive maintainer. It is printed at various times by the FTP server.

The message clause defines a file to be displayed when the user logs in or when they change to a directory. The statement

message /var/ftp/welcome.msg login

causes wu-ftpd to display the contents of the file /var/ftp/welcome.msg when a user logs in to the FTP server. It is important for this file to be somewhere accessible to the FTP server so that anonymous users will also be greeted by the message. NOTE: Some FTP clients have problems with multiline responses, which is how the file is displayed.


When accessing the test FTP server constructed for this chapter, the message file contains:

***** WARNING *****
This is a private FTP server. If you do not have
an account, you are not welcome here.
*******************
It is currently %T local time in Ottawa, Canada.
You are %U@%R accessing %L.
for help, contact %E.

The % strings are converted to the actual text when the message is displayed by the server. The result is:

331 Password required for chare.
Password:
230-***** WARNING *****
230-This is a private FTP server. If you do not have an account,
230-you are not welcome here.
230-*******************
230-It is currently Sun Jan 28 18:28:01 2001 local time in Ottawa, Canada.
230-You are chare@chris accessing poweredge.home.com.
230-for help, contact root@localhost.
230-
230-
230 User chare logged in.
ftp>

The % tags available for inclusion in the message file are listed in Exhibit 2-3. It is allowable to define a class and attach a specific message to that class of users. For example:

class   real   real        *
class   anon   anonymous   *
message /var/ftp/welcome.msg   login   real

Now, the message is only displayed when a real user logs in. It is not displayed for either anonymous or guest users. Through this definition, one can provide additional information using other tags listed in Exhibit 2-3. The ability to display class-specific message files can be extended on a


Exhibit 2-3. %char definitions.

Tag    Description
%T     Local time (form Thu Nov 15 17:12:42 1990)
%F     Free space in partition of CWD (kbytes)
%C     Current working directory
%E     The maintainer’s e-mail address as defined in ftpaccess
%R     Remote host name
%L     Local host name
%u     Username as determined via RFC931 authentication
%U     Username given at log-in time
%M     Maximum allowed number of users in this class
%N     Current number of users in this class
%B     Absolute limit on disk blocks allocated
%b     Preferred limit on disk blocks
%Q     Current block count
%I     Maximum number of allocated inodes (+1)
%i     Preferred inode limit
%q     Current number of allocated inodes
%H     Time limit for excessive disk use
%h     Time limit for excessive files
%xu    Uploaded bytes
%xd    Downloaded bytes
%xR    Upload/download ratio (1:n)
%xc    Credit bytes
%xT    Time limit (minutes)
%xE    Elapsed time since log-in (minutes)
%xL    Time left
%xU    Upload limit
%xD    Download limit

user-by-user basis by creating a class for each user. This is important because individual limits can be defined for each user. The message command can also be used to display information when a user enters a directory. For example, using the statement message /var/ftp/etc/.message CWD=*

causes the FTP server to display the specified file when the user enters the directory. This is illustrated in Exhibit 2-4 for the anonymous user. The message itself is displayed only once to prevent annoying the user. The noretrieve directive establishes specific files no user is permitted to retrieve through the FTP server. If the path specification for the file


Exhibit 2-4. Directory-specific messages.

User (192.168.0.2:(none)): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> cd etc
250-***** WARNING *****
250-There is no data of any interest in the /etc directory.
250-
250 CWD command successful.
ftp>

begins with a ‘/’, then only those files are marked as nonretrievable. If the file specification does not include the leading ‘/’, then any file with that name cannot be retrieved. For example, there is a great deal of sensitivity with the password file on most UNIX systems, particularly if that system does not make use of a shadow file. Aside from the password file, there is a long list of other files that should not be retrievable from the system, even if their use is discouraged. The files that should be marked for nonretrieval are files containing the names:

• passwd
• shadow
• .profile
• .netrc
• .rhosts
• .cshrc
• profile
• core
• .htaccess
• /etc
• /bin
• /sbin

This is not a complete list, as the applications running on the system will likely contain other files that should be specifically identified. Using the noretrieve directive follows the syntax:

noretrieve [absolute|relative] [class=<classname>] ... [-] <filename> [<filename> ...]

For example,

noretrieve passwd

prevents any user from downloading any file on the system named passwd.
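Putting the earlier list into practice, a sketch of the corresponding entries might look like the following (illustrative only; adjust the filenames to the applications actually installed on the server):

noretrieve passwd shadow .profile .netrc .rhosts .cshrc profile core .htaccess
noretrieve /etc /bin /sbin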


When specifying files, it is also possible to name a directory. In this situation, all files in that directory are marked as nonretrievable. The optional absolute or relative keywords identify whether the file or directory is an absolute or relative path from the current environment. The default operation is to consider any file starting with a ‘/’ as an absolute path. Using the optional class keyword on the noretrieve directive allows this restriction to apply to only certain users. If the class keyword is not used, the restriction is placed against all users on the FTP server.

Denying Connections

Connections can be denied based on the IP address or domain of the remote system. Connections can also be denied based on how the user enters his password at log-in. NOTE: This password check applies only to anonymous FTP users. It has no effect on real users because they authenticate with their standard UNIX password. The password-check directive informs the FTP server to conduct checks against the password entered. The syntax for the password-check directive is:

passwd-check <none|trivial|rfc822> (<enforce|warn>)

It is not recommended to use password-check with the none argument because this disables analysis of the entered password and allows meaningless information to be entered. The trivial argument performs only checking to see if there is an ‘@’ in the password. Using the rfc822 argument is the recommended action and ensures the password is compliant with the RFC822 e-mail address standard.

If the password is not compliant with the trivial or rfc822 options, the FTP server can take two actions. The warn argument instructs the server to warn the user that his password is not compliant but still allows access. If the enforce argument is used, the user is warned and the connection terminated if a noncompliant password is entered.

Use of the deny clause is an effective method of preventing access from specific systems or domains. When a user attempts to connect from the specified system or domain, the message contained in the specified file is displayed. The syntax for the deny clause is:

deny <addrglob> <message_file>

The file location must begin with a slash (‘/’). The same rules described in the class section apply to the addrglob definition for the deny command. In addition, the use of the keyword !nameservd is allowed to deny connections from sites without a working nameserver.
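As an additional illustration (not from the original text), the deny clause can also name a specific domain or address block; the domain, network, and message file below are examples only:

deny *.badguys.example.com /var/ftp/.deny
deny 192.168.200.0:255.255.255.0 /var/ftp/.deny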


Consider adding a deny clause to this file; for example, adding

deny !nameservd /var/ftp/.deny

to /etc/ftpaccess. When testing the deny clause, the denied connection receives the message contained in the file. Using the !nameservd definition means that any host not found in a reverse DNS query to get a host name from an IP address is denied access.

Connected to 192.168.0.2.
220 poweredge.home.com FTP server (Version wu-2.6.1(1) Wed Aug 9 05:54:50 EDT 2000) ready.
User (192.168.0.2:(none)): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
530-**** ACCESS DENIED ****
530-
530-Access to this FTP server from your domain has been denied by the administrator.
530-
530 Login incorrect.
Login failed.
ftp>

The denial of the connection is based on where the connection is coming from, not the user who authenticated to the server.

Connection Management

With specific connections denied, this discussion must focus on how to control the connection when it is permitted. A number of options for the server allow this and establish restrictions from throughput to access to specific files or directories. Preventing anonymous access to the FTP server is best accomplished by removing the ftp user from the /etc/passwd file. This instructs the FTP server to deny all anonymous connection requests.

The guestgroup and guestuser commands work in a similar fashion. In both cases, the session is set up exactly as with anonymous FTP. In other words, a chroot() is done and the user is no longer permitted to issue the USER and PASS commands. If using guestgroup, the groupname must be defined in the /etc/group file; or in the case of guestuser, a valid entry in /etc/passwd.

guestgroup <groupname> [<groupname> ...]
guestuser <username> [<username> ...]
realgroup <groupname> [<groupname> ...]
realuser <username> [<username> ...]


In both cases, the user’s home directory must be correctly set up. This is accomplished by splitting the home directory entry into two components separated by the characters ‘/./’. The first component is the base directory for the FTP server and the second component is the directory the user is to be placed in. The user can enter the base FTP directory but cannot see any files above this in the file system because the FTP server establishes a restricted environment. Consider the /etc/passwd entry:

systemx::503:503:FTP Only Access from systemx:/var/ftp/./systemx:/etc/ftponly

When systemx successfully logs in, the FTP server will chroot(“/var/ftp”) and then chdir(“/systemx”). The guest user will only be able to access the directory structure under /var/ftp (which will look and act as / to systemx), just as an anonymous FTP user would.

Either an actual name or numeric ID specifies the group name. To use a numeric group ID, place a ‘%’ before the number. Ranges may be given and the use of an asterisk means all groups. guestuser works like guestgroup except uses the username (or numeric ID). realuser and realgroup have the same syntax but reverse the effect of guestuser and guestgroup. They allow real user access when the remote user would otherwise be determined a guest. For example:

guestuser *
realuser chare

causes all nonanonymous users to be treated as guest, with the sole exception of user chare, who is permitted real user access. Bear in mind, however, that the use of /etc/ftpusers overrides this directive. If the user is listed in /etc/ftpusers, he is denied access to the FTP server.

It is also advisable to set timeouts for the FTP server to control the connection and terminate it appropriately. The timeout directives are listed in Exhibit 2-5. The accept timeout establishes how long the FTP server will wait for an incoming connection. The default is 120 seconds. The connect value establishes how long the FTP server will wait to establish an outgoing connection. The FTP server generally makes several attempts and will give up after the defined period if a successful connection cannot be established. The data timeout determines how long the FTP server will wait for some activity on the data connection. This should be kept relatively long because the remote client may have a low-speed link and there may be a lot of data queued for transmission. The idle timer establishes how long the


Exhibit 2-5. Timeout directives.

Timeout Value      Default   Recommended
Timeout accept     120       120
Timeout connect    120       120
Timeout data       1200      1200
Timeout idle       900       900
Timeout maxidle    7200      1200
Timeout RFC931     10        10

server will wait for the next command from the client. This can be overridden with the –a option to the server. Using the access clause overrides both the command line parameter if used and the default. The user can also use the SITE IDLE command to establish a higher value for the idle timeout. The maxidle value establishes the maximum value that can be established by the FTP client. The default is 7200 seconds. Like the idle timeout, the default can be overridden using the –A command line option to the FTP server. Defining this parameter overrides the default and the command line. The last timeout value allows the maximum time for the RFC931 ident/AUTH conversation to occur. The information recorded from the RFC931 conversation is recorded in the system logs and used for any authentication requests.

Controlling File Permissions

File permissions in the UNIX environment are generally the only method available to control who has access to a specific file and what they are permitted to do with that file. It may be a requirement of a specific implementation to restrict the file permissions on the system to match the requirements for a specific class of users. The defumask directive allows the administrator to define the umask, or default permissions, on a per-class or systemwide basis. Using the defumask command as

defumask 077

causes the server to remove all permissions except for the owner of the file. If running a general access FTP server, the use of a 077 umask may be extreme. However, umask should be at least 022 to prevent modification of the files by other than the owner. By specifying a class of user following the umask, as in

defumask 077 real

all permissions are removed. Using these parameters prevents world-writable files from being transferred to your FTP server. If required, it is possible


to set additional controls to allow or disallow the use of other commands on the FTP server to change file permissions or affect the files. By default, users are allowed to change file permissions and delete, rename, and overwrite files. They are also allowed to change the umask applied to files they upload. These commands allow or restrict users from performing these activities:

chmod
delete
overwrite
rename
umask

To restrict all users from using these commands, apply the directives as:

chmod       no   all
delete      no   all
overwrite   no   all
rename      no   all
umask       no   all

Setting these directives means no one can execute commands on the FTP server that require these privileges. This means the FTP server and the files therein are under the full control of the administrator.

ADDITIONAL SECURITY FEATURES

There is a wealth of additional security features that should be considered when configuring the server. These control how much information about the server users are shown when they log in, and print banner messages, among other capabilities. The greeting directive informs the FTP server to change the level of information printed when the user logs in. The default is full, which prints all information about the server. A full message is:

220 poweredge.home.com FTP server (Version wu-2.6.1(1) Wed Aug 9 05:54:50 EDT 2000) ready.

A brief message on connection prints the server name as:

220 poweredge.home.com FTP server ready.

Finally, the terse message, which is the preferred choice, prints only:

220 FTP server ready.

The full greeting is the default unless the greeting directive is defined. This provides the most information about the FTP server. The terse greeting is the preferred choice because it provides no information about the server that an attacker could use to identify potential attacks against it.


The greeting is controlled with the directive:

greeting <full|brief|terse>

An additional safeguard is the banner directive using the format:

banner <path>

This causes the text contained in the named file to be presented when the users connect to the server prior to entering their username and password. The path of the file is relative from the real root directory, not from the anonymous FTP directory. If one has a corporate log-in banner that is displayed when connecting to a system using Telnet, it would also be available to use here to indicate that the FTP server is for authorized users only. NOTE: Use of this command can completely prevent noncompliant FTP clients from establishing a connection. This is because not all clients can correctly handle multiline responses, which is how the banner is displayed.

Connected to 192.168.0.2.
220-*************************************************************
220-*                                                           *
220-*                  * * W A R N I N G * *                    *
220-*                                                           *
220-* ACCESS TO THIS FTP SERVER IS FOR AUTHORIZED USERS ONLY.   *
220-* ALL ACCESS IS LOGGED AND MONITORED. IF YOU ARE NOT AN     *
220-* AUTHORIZED USER, OR DO NOT AGREE TO OUR MONITORING POLICY,*
220-* DISCONNECT NOW.                                           *
220-*                                                           *
220-* NO ABUSE OR UNAUTHORIZED ACCESS IS TOLERATED.             *
220-*                                                           *
220-*************************************************************
220-
220 FTP server ready.
User (192.168.0.2:(none)):

At this point, one has controlled how the remote user gains access to the FTP server, and restricted the commands they can execute and the permissions assigned to their files. Additionally, certain steps have been taken to ensure they are aware that access to this FTP server is for authorized use only. However, one must also take steps to record the connections and transfers made by users to fully establish what is being done on the FTP server.

LOGGING CAPABILITIES

Recording information in the system logs is a requirement for proper monitoring of transfers and activities conducted on the FTP server. There are a number of commands that affect logging, and each is presented in this section. Normally, only connections to the FTP server are logged. However, using the log commands directive, each command executed by the


user can be captured. This may create a high level of output on a busy FTP server and may not be required. However, it may be advisable to capture traffic for anonymous and guest users specifically. The directive syntax is:

log commands <typelist>

As with other directives, typelist is a combination of real, anonymous, and guest. If the real keyword is used, logging is done for users accessing FTP using their real accounts. Anonymous logs all commands performed by anonymous users, while guest matches users identified using the guestgroup or guestuser directives. Consider the line:

log commands guest,anonymous

which results in all commands performed by anonymous and guest users being logged. This can be useful for later analysis to see if automated jobs are being properly performed and what files are uploaded or downloaded. Like the log commands directive, log transfers performs a similar function, except that it records all file transfers for a given class of users. The directive is stated as:

log transfers <typelist> <directions>

The directions argument is inbound or outbound. Both arguments can be used to specify logging of transfers in both directions. For clarity, inbound are files transferred to the server, or uploads, and outbound are transfers from the server, or downloads. The typelist argument again consists of real, anonymous, and guest.

It is not only essential to log all of the authorized functions, but also to record the various commands and requests made by the user that are denied due to security requirements. For example, if there are restrictions placed on retrieving the password file, it is desirable to record the security events. This is accomplished for real, anonymous, and guest users using the log security directive, as in:

log security <typelist>

If rename is a restricted command on the FTP server, the log security directive results in the following entries:

Feb 11 20:44:02 poweredge ftpd[23516]: RNFR dayo.wav
Feb 11 20:44:02 poweredge ftpd[23516]: RNTO day-o.wav
Feb 11 20:44:02 poweredge ftpd[23516]: systemx of localhost.home.com [127.0.0.1] tried to rename /var/ftp/systemx/dayo.wav to /var/ftp/systemx/day-o.wav

This identifies the user who tried to rename the file, the host that the user connected from, and the original and desired filenames. With this information, the


system administrator or systems security personnel can investigate the situation. Downloading information from the FTP server is controlled with the noretrieve clause in the /etc/ftpaccess file. It is also possible to limit uploads to specific directories. This may not be required, depending on the system configuration. A separate entry for each directory one wishes to allow uploads to is highly recommended. The syntax is:

upload [absolute|relative] [class=<classname>]... [-] <root-dir> <dirglob> <yes|no> <owner> <group> <mode> ["dirs"|"nodirs"] [<d_mode>]

This looks overly complicated, but it is in fact relatively simple. Define a directory, named by <dirglob>, that permits or denies uploads. Consider the following entry:

upload /var/ftp /incoming yes ftpadmin ftpadmin 0440 nodirs

This means that for a user with the home directory of /var/ftp, allow uploads to the incoming directory. Change the owner and group to be ftpadmin and change the permissions to read-only. Finally, do not allow the creation of directories. In this manner, users can be restricted to the directories to which they can upload files. Directory creation is allowed by default, so one must disable it if required.

For example, if one has a user on the system with the following password file entry:

chare:x:500:500:Chris Hare:/home/chare:/bin/bash

and if one wants to prevent the person with this userid from being able to upload files to his home directory, simply add the line:

upload /home/chare no

to the /etc/ftpaccess file. This prevents the user chare from being able to upload files to his home directory. However, bear in mind that this has little effect if this is a real user, because real users will be able to upload files to any directory they have write permission to. The upload clause is best used with anonymous and guest users. Note: The wu-ftpd server denies anonymous uploads by default. To see the full effect of the upload clause, one must combine its use with a guest account, as illustrated with the systemx account shown here:

systemx:x:503:503:FTP access from System X:/home/systemx/./:/bin/false


Note in this password file entry the home directory path. This entry cannot be made when the user account is created. The ‘/./’ is used by wu-ftpd to establish the chroot environment. In this case, the user is placed into his home directory, /home/systemx, which is then used as the base for his chroot file system. At this point, the guest user can see nothing on the system other than what is in his home directory. Using the upload clause of

upload /home/chare yes

means the user can upload files to his home directory. When coupled with the noretrieve clause discussed earlier, it is possible to put a high degree of control around the user.

THE COMPLETE /etc/ftpaccess FILE

The discussion thus far has focused on a number of control directives available in the wu-ftpd FTP server. It is not necessary that these directives appear in any particular order. However, to further demonstrate the directives and relationships between those directives, the /etc/ftpaccess file is illustrated in Exhibit 2-6.

REVISITING THE SCENARIOS

Recall the scenarios from the beginning of this chapter. This section reviews each scenario and defines an example configuration to achieve it.

Scenario A

A user named Bob accesses a UNIX system through an application that is a replacement for his normal UNIX log-in shell. Bob has no need for, and does not have, direct UNIX command-line access. While using the application, Bob creates reports or other output that he must retrieve for analysis. The application saves this data in either Bob’s home directory or a common directory for all application users. Bob may or may not require the ability to put files onto the application server. The requirements break down as follows:

• Bob requires FTP access to the target server.
• Bob requires access to a restricted number of directories, possibly one or two.
• Bob may or may not require the ability to upload files to the server.

Bob requires the ability to log in to the FTP server and access several directories to retrieve files. The easiest way to do this is to deny retrieval for the entire system by adding a line to /etc/ftpaccess as:

noretrieve /


Exhibit 2-6. The /etc/ftpaccess file.

#
# Define the user classes
#
class all real,guest *
class anonymous anonymous *
class real real *
#
# Deny connections from systems with no reverse DNS
#
deny !nameservd /var/ftp/.deny
#
# What is the email address of the server administrator.
# Make sure someone reads this from time to time.
email root@localhost
#
# How many login attempts can be made before logging an
# error message and terminating the connection?
#
loginfails 5
greeting terse
readme README* login
readme README* cwd=*
#
# Display the following message at login
#
message /var/ftp/welcome.msg login
banner /var/ftp/warning.msg
#
# display the following message when entering the directory
#
message .message cwd=*
#
# ACCESS CONTROLS
#
# What is the default umask to apply if no other matching
# directive exists
#
defumask 022
chmod no guest,anonymous
delete no guest,anonymous
overwrite no guest,anonymous
rename no guest,anonymous
# remove all permissions except for the owner if the user
# is a member of the real class
#
defumask 077 real
guestuser systemx
realuser chare
#
# establish timeouts
#



Exhibit 2-6. The /etc/ftpaccess file (Continued).

timeout accept 120
timeout connect 120
timeout data 1200
timeout idle 900
timeout maxidle 1200

#
# establish non-retrieval
#
# noretrieve passwd
# noretrieve shadow
# noretrieve .profile
# noretrieve .netrc
# noretrieve .rhosts
# noretrieve .cshrc
# noretrieve profile
# noretrieve core
# noretrieve .htaccess
# noretrieve /etc
# noretrieve /bin
# noretrieve /sbin
noretrieve /
allow-retrieve /tmp
upload /home/systemx / no
#
# Logging
#
log commands anonymous,guest,real
log transfers anonymous,guest,real inbound,outbound
log security anonymous,real,guest
compress yes all
tar yes all
shutdown /etc/shutmsg
passwd-check rfc822 warn

This marks every file and directory as nonretrievable. To allow Bob to get the files he needs, one must set those files or directories as retrievable. This is done using the allow-retrieve directive. It has exactly the same syntax as the noretrieve directive, except that the file or directory is now retrievable. Assume that Bob needs to retrieve files from the /tmp directory. Allow this using the directive:

allow-retrieve /tmp

When Bob connects to the FTP server and authenticates himself, he cannot get files from his home directory.



ftp> pwd
257 “/home/bob” is current directory.
ftp> get .xauth xauth
200 PORT command successful.
550 /home/chare/.xauth is marked unretrievable

However, Bob can retrieve files from the /tmp directory.

ftp> cd /tmp
250 CWD command successful.
ftp> pwd
257 “/tmp” is current directory.
ftp> get .X0-lock X0lock
200 PORT command successful.
150 Opening ASCII mode data connection for .X0-lock (11 bytes).
226 Transfer complete.
ftp: 12 bytes received in 0.00Seconds 12000.00Kbytes/sec.
ftp>

If Bob must be able to retrieve files from his home directory, an additional allow-retrieve directive is required:

class real real *
allow-retrieve /home/bob class=real

When Bob tries to retrieve a file from anywhere other than /tmp or his home directory, access is denied. Additionally, it may be necessary to limit Bob’s ability to upload files. If a user requires the ability to upload files, no additional configuration is required, as the default action for the FTP server is to allow uploads for real users. If one wants to prohibit uploads to Bob’s home directory, use the upload directive:

upload /home/bob / no

This directive prevents uploads to Bob’s home directory on the FTP server. The objective of Scenario A has been achieved.

Scenario B

Other application users in the environment illustrated in Scenario A require no FTP access whatsoever. Therefore, it is necessary to prevent them from connecting to the application server using FTP.


This is done by adding those users to the /etc/ftpusers file. Recall that this file lists a single user per line, which is checked. Additionally, it may be advisable to deny anonymous FTP access.

Scenario C

The same application used by the users in Scenarios A and B regularly dumps data to move to another system. The use of hard-coded passwords in scripts is not advisable because the scripts must be readable for them to be executed properly. This may expose the passwords to unauthorized users and allow them to access the target system. Additionally, the use of hard-coded passwords makes it difficult to change the password on a regular basis because all scripts using this password must be changed. A further requirement is to protect the data once stored on the remote system to limit the possibility of unauthorized access, retrieval, and modification of the data.

Accomplishing this requires the creation of a guest user account on the system. This account will not support a log-in and will be restricted in its FTP abilities. For example, create a UNIX account on the FTP server using the source hostname, such as systemx. The password is established as a complex string but with the other compensating controls, the protection on the password itself does not need to be as stringent. Recall from an earlier discussion that the account resembles:

systemx:x:503:503:FTP access from System X:/home/systemx/./:/bin/false

Also recall that the home directory establishes the real user home directory, and the ftp chroot directory. Using the upload command

upload /home/systemx / no

means that the systemx user cannot upload files to the home directory. However, this is not the desired function in this case. In this scenario, one wants to allow the remote system to transfer files to the FTP server. However, one does not want to allow for downloads from the FTP server. To do this, the pair of directives

noretrieve /
upload /home/systemx / yes

prevents downloads and allows uploads to the FTP server. One can further restrict access by controlling the ability to rename, overwrite, change permissions, and delete a file using the appropriate directives in the /etc/ftpaccess file:


chmod       no   guest,anonymous
delete      no   guest,anonymous
overwrite   no   guest,anonymous
rename      no   guest,anonymous

Because the user account has no interactive privileges on the system and has restricted privileges on the FTP server, there is little risk involved with using a hard-coded password. While using a hard-coded password is not considered advisable, there are sufficient controls in place to compensate for this. Consider the following controls protecting the access:

• The user cannot retrieve files from the system.
• The user can upload files.
• The user cannot see what files are on the system and thus cannot determine the names of the files to block the system from putting the correct data on the server.
• The user cannot change file permissions.
• The user cannot delete files.
• The user cannot overwrite existing files.
• The user cannot rename files.
• The user cannot establish an interactive session.
• FTP access is logged.

With these compensating controls to address the final possibility of access to the system and the data using a password attack or by guessing the password, it will be sufficiently difficult to compromise the integrity of the data. The requirements defined in the scenario have been fulfilled.

SUMMARY

This discussion has shown how one can control access to an FTP server and allow controlled access for downloads or uploads to permit the safe exchange of information for interactive and automated FTP sessions. The extended functionality offered by the wu-ftpd FTP server provides extensive access, preventive, and detective controls to limit who can access the FTP server, what they can do when they can connect, and the recording of their actions.

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.


Chapter 3

The Case for Privacy

Michael J. Corby

Any revelation of a secret happens by the mistake of [someone] who shared it in confidence. — La Bruyere, 1645–1696

It is probably safe to say that since the beginning of communication, back in prehistoric times, there have been things that were meant to be kept private. From the location of the best fishing to the secret passage into the cave next door, certain facts were reserved only for a few knowledgeable friends. Maybe some of these facts were so private that only one person in the world knew them. We have made "societal rules" around a variety of things that we want to keep private or share only among a few, and the concept of privacy expectations has become part of our unwritten social code. And wherever there has been a code of privacy, there has been concern over its violation.

Have computers brought this on? Certainly not! Maintaining privacy has always been important, and even more important have been the methods used to keep private data secret. Today, in our wired society, we still face the same primary threat to privacy that has existed for centuries: the mistakes and carelessness of the individuals who have been entrusted to preserve privacy — maybe even the "owner" of the data.

In the past few years, and heightened within the past few months, we have become more attuned to the cry — no, the public outcry — regarding the "loss of privacy" that has been forced upon us because of the information age. Resolving this thorny problem requires that we re-examine the way we design and operate our networked systems and, most importantly, that we re-think the way we allocate control to the rightful owners of the information that we communicate and store. Finally, we need to be careful about how we view the data that we provide and for which we are custodians.

PRIVACY AND CONTROL

The fact that data is being sent, printed, recorded, and shared is not the real concern of privacy. The real concern is that some data has been




implied, by social judgment, to be private, for sharing only by and with the approval of its owner. If a bank balance is U.S.$1240, that is an interesting fact. If it happens to be my account, that is private information. I have, by virtue of my agreement with the bank, given them the right to keep track of my balance and to provide it to me for the purpose of keeping me informed and maintaining a control point with which I can judge their accuracy. I did not give them permission to share that balance with other people indiscriminately, nor did I give them permission to use that balance even subtly to communicate my standing in relation to others (i.e., publish a list of account holders sorted by balance). The focal points of the issue of privacy are twofold:

• How is the data classified as private?
• What can be done to preserve the owner's (my) expectations of privacy?

Neither of these is significantly more challenging than, for example, sending digital pictures and sound over a telephone line. Why, then, has this subject caused such a stir in the technology community? This chapter sheds some light on this issue and then lays out an organized approach to resolving the procedural challenges of maintaining data privacy.

RUDIMENTS OF PRIVACY

One place to start examining this issue is with a key subset of the first point on classifying data as private: What, exactly, is the data we are talking about? Start with the obvious: private data includes those facts that I can recognize as belonging to me and which, I have decided, reveal more about myself or my behavior than I would care to reveal. This includes three types of data loosely included in the privacy concerns of information technology (IT). These three types of data, shown in Exhibit 3-1, are static, dynamic, and derived data.

Static Data

Static data is pretty easy to describe. It kind of sits there in front of us. It does not move. It does not change (very often). Information that describes who we are, significant property identifiers, and other tangible elements are generally static. This information can, of course, take any form. It can be entered into a computer by a keyboard; it can be handwritten on a piece of paper or on a form; it can be photographed or created as a result of using a biological interface such as a fingerprint pad, retina scanner, voice or facial image recorder, or pretty much any way that information can be retained. It does not need to describe an animate object. It can also identify something we have. Account numbers, birth certificates, passport numbers, and employee numbers are all concepts that can be recorded and would generally be considered static data.


Exhibit 3-1. Types of private data.

1. Static data:
   a. Who we are:
      i. Bio-identity (fingerprints, race, gender, height, weight)
      ii. Financial identity (bank accounts, credit card numbers)
      iii. Legal identity (Social Security number, driver's license, birth certificate, passport)
      iv. Social identity (church, auto clubs, ethnicity)
   b. What we have:
      i. Property (buildings, automobiles, boats, etc.)
      ii. Non-real property (insurance policies, employee agreements)
2. Dynamic data:
   a. Transactions (financial, travel, activities)
   b. How we live (restaurants, sporting events)
   c. Where we are (toll cards, cell phone records)
3. Derived data:
   a. Financial behavior (market analysis):
      i. Trends and changes (month-to-month variance against baseline)
      ii. Perceived response to new offerings (match with experience)
   b. Social behavior (profiling):
      i. Behavior statistics (drug use, violations of law, family traits)

In most instances, we get to control the initial creation of static data. Because we are the ones identifying ourselves by name, account number, address, or driver's license number, or by speaking into a voice recorder or having our retina or face scanned or photographed, we usually will know when a new record is being made of our static data. As we will see later, we need to be concerned about the privacy of this data under three conditions: when we participate in its creation, when it is copied from its original form to a duplicate form, and when it is covertly created (created without our knowledge), such as in secretly recorded conversations or by hidden cameras.

Dynamic Data

Dynamic data is also easy to identify and describe, but somewhat more difficult to control. Records of transactions we initiate constitute the bulk of dynamic data. It is usually being created much more frequently than static data. Every charge card transaction, telephone call, and bank transaction adds to the collection of dynamic data. Even when we drive on toll roads or watch television programs, information can be recorded without our doing anything special. These types of transactions are more difficult for us to control. We may know that a computerized recording of the event is being made, but we often do not know what that information contains, nor whether it contains more information than we suspect. Take, for example, purchasing a pair of shoes. You walk into a shoe store, try on various styles and sizes, make your selection, pay for the shoes, and walk out with your


purchase in hand. You may have the copy of your charge card transaction, and you know that somewhere in the store's data files, one pair of shoes has been removed from their inventory and the price you just paid has been added to their cash balance. But what else might have been recorded? Did the sales clerk, for example, record your approximate age or ethnic or racial profile, or make a judgment as to your income level? Did you have children with you? Were you wearing a wedding band? What other general observations were made about you when the shoes were purchased? These items are of great importance in helping the shoe store replenish its supply of shoes, determine whether it has attracted the type of customer it intended to attract, and analyze whether it is, in general, serving a growing or shrinking segment of the population. Without your even knowing it, information that you may consider private may have been used simply by the act of buying a new pair of shoes.

Derived Data

Finally, derived data is created by analyzing groups of dynamic transactions over time to build a profile of your behavior. Your standard way of living out your day, week, and month may be known by others even better than you may know it yourself. For example, you may, without even planning it, have dinner at a restaurant 22 Thursdays during the year. The other six days of the week, you may dine out only eight times in total. If you and others in your area fall into a given pattern, the restaurant community may begin to offer "specials" on Tuesday, or raise their prices slightly on Thursdays to accommodate the increased demand. In this case, your behavior is being recorded and used by your transaction partners in ways you do not even know of or approve. If you use an electronic toll recorder, as has become popular in many U.S. states, do you know whether they are also computing the time it took you to enter and exit the highway, and consequently your average speed? Most often, this derived data is being collected without even a hint to us, and certainly without our expressed permission.

PRESERVING PRIVACY

One place to start examining this issue is with a key subset of the first point on classifying data as private: What, exactly, is the data we are talking about? Start with the obvious: private data includes those items that we believe belong to us exclusively and that are not necessary to reveal in order to receive the product or service we wish to receive. To examine privacy in the context of computer technology today, we need to examine the following four questions:

1. Who owns the private data?
2. Who is responsible for security and accuracy?


3. Who decides how it can be used?
4. Does the owner need to be told when it is used or compromised?

You already have zero privacy. Get over it.
— Scott McNealy, Chairman, Sun Microsystems, 1999

Start with the first question about ownership. Cyber-consumers love to get offers tailored to them. Over 63 percent of the buying public in the United States bought from direct mail in 1998. Companies invest heavily in personalizing their marketing approach because it works. So what makes it so successful? By allowing the seller to know some pretty personal data about your preferences, a trust relationship is implied. (Remember that word "trust"; it will surface later.) The "real deal" is this: vendors do not know about your interests because they are your friend and want to make you happy. They want to take your trust and put together something private that will result in their product winding up in your home or office. Plain and simple: economics. And what does this cost them? If they have their way, practically nothing. You have given up your own private information, and they have used it to exploit your buying habits or personal preferences. Once you give up ownership, you have let the cat out of the bag. Now they have the opportunity to do whatever they want with it.

"Are there any controls?" That brings us to the second question. The most basic control is to ask you clearly whether you want to give up something you own. That design method of having you "opt in" to their data collection gives you the opportunity to look further into their privacy protection methods, their stated or implied process for sharing (or not sharing) your information with other organizations, and how your private information is to be removed. With this simple verification of your agreement, 85 percent of surveyed consumers would approve of having their profiles used for marketing. Not only do they ask, but they also become responsible for protecting your privacy. You must do some work to verify that they can keep their promise, but at least you know they have accepted some responsibility (their privacy policy should tell you how much). Their very mission will ensure accuracy. No product vendor wants to build its sales campaign on inaccurate data — at least not a second time.

Who decides use? If done right, both you and the marketer can decide based on the policy. If you are not sure whether they are going to misuse your data, you can test them. Use a nickname or some identifying initial to track where your profile is being used. I once tested an online information service by using my full middle name instead of an initial. Lo and behold, I discovered that my "new" name ended up on over 30 different mailing lists, and it took me several months to be removed from most of them. Some still are using my name, despite my repeated attempts to stop the vendors from


doing so. Your method for deciding whom to trust (there is that word again) depends on your preferences and the genre of services and products you are interested in buying. Vendors also tend to reflect the preferences of their customers. Those who sell cheap, ultra-low-cost commodities have a different approach than those who sell big-ticket luxuries to a well-educated executive clientele. Be aware and recognize the risks. Special privacy concerns have been raised in three areas: data on children, medical information, and financial information (including credit/debit cards). Be especially aware if these categories of data are collected, and hold the collector to a more stringent set of protection standards. You, the public, are the judge.

If your data is compromised, it is doubtful that the collector will know. This situation is unfortunate. Even if it is known, disclosing it could cost them their business. Now the question of ethics comes into play. I actually know of a company that had its customer credit card files "stolen" by hackers. Rather than notify the affected customers and potentially cause a mass exodus to other vendors, the company decided to keep quiet. That company may only be buying some time. It is a far greater mistake to know that a customer is at risk and not inform them that they should check their records carefully than it is to have missed a technical component that, as a result, allowed the system to be compromised. The bottom line is that you are expected to report errors, inconsistencies, and suspected privacy violations to them. If you do, you have a right to expect immediate correction.

WHERE IS THE DATA TO BE PROTECTED?

Much ado has been made about the encryption of data while connected to the Internet. This is a concern; but to be really responsive to privacy directives, more than transmitting encrypted data is required. For a real privacy policy to be developed, the data must be protected when it is:

• Captured
• Transmitted
• Stored
• Processed
• Archived

That means more than using SSL or sending data over a VPN. It also goes beyond strong authentication using biometrics or public/private keys. It means developing a privacy architecture that protects data when it is sent, even internally; while stored in databases, with access isolated from those who can see other data in the same database; and while it is being stored in program work areas. All these issues can be solved with technology and should be discussed with the appropriate network, systems development, or data center managers. Despite all best efforts to make technology


respond to the issues of privacy, the most effective use of resources and effort is in developing work habits that facilitate data privacy protection.

GOOD WORK HABITS

Privacy does not just happen. Everyone has certain responsibilities when it comes to protecting the privacy of one's own data or the data that belongs to others. In some cases, the technology exists to make that responsibility easier to carry out. Vendor innovations continue to make this technology more responsive, for both data "handlers" and data "owners."

For the owners, smart cards carry a record of personal activity that never leaves the wallet-sized token itself. For example, smart cards can be used to record selection of services (video, phone, etc.) without divulging preferences. They can maintain complex medical information (e.g., health, drug interactions) and can store technical information in the form of x-rays, nuclear exposure time (for those working in the nuclear industry), and tanning time (for those who do not).

For the handlers, smart cards can record electronic courier activities when data is moved from one place to another. They can enforce protection of secret data and provide proper authentication, either using a biometric such as a fingerprint or a traditional personal identification number (PIN). There are even cards that can scan a person's facial image and compare it to a digitized photo stored on the card. They are valuable in providing a digital signature that does not reside on one's office PC, where it would be subject to theft or compromise by office procedures that are less than effective.

In addition to technology, privacy can be afforded through diligent use of traditional data protection methods. Policies can develop into habits that force employees to understand the sensitivity of what they have access to on their desktops and in their personal storage areas. Common behavior, such as securing one's work area before leaving it and checking it upon returning, is as important as protecting privacy while working in that area.

Stories about privacy, the compromise of personal data, and the legislation (both U.S. and international) being enacted or drafted are appearing daily. Some are redundant and some are downright scary. One's mission is to avoid becoming one of those stories.

RECOMMENDATIONS

For all 21st-century organizations (and all people who work in those organizations), a privacy policy is a must and adherence to it is expected. Here are several closing tips:


1. If your organization has a privacy coordinator (or chief privacy officer), contact that person or a compliance person if you have questions. Keep their numbers handy.
2. Be aware of the world around you. Monitor national and international developments, as well as all local laws.
3. Be proactive; anticipate privacy issues before they become a crisis.
4. Much money can be made or lost by being ahead of the demands for privacy or being victimized by those who capitalize on your shortcomings.
5. Preserve your reputation and that of your organization. As with all bad news, violations of privacy will spread like wildfire. Everyone is best served by collective attention to maintaining an atmosphere of respect for the data being handled.
6. Communicate privacy throughout all areas of your organization.
7. Imbed privacy in existing processes — even older legacy applications.
8. Provide notification and allow your customers/clients/constituents to opt out or opt in.
9. Conduct audits and consumer inquiries.
10. Create a positive personalization image of what you are doing (how does this really benefit the data owner?).
11. Use your excellent privacy policies and behavior as a competitive edge.

ABOUT THE AUTHOR

Michael Corby is president of QinetiQ Trusted Information Management, Inc. He was most recently vice president of the Netigy Global Security Practice, CIO for Bain & Company and the Riley Stoker division of Ashland Oil, and founder of M Corby & Associates, Inc., a regional consulting firm in continuous operation since 1989. He has more than 30 years of experience in the information security field and has been a senior executive in several leading IT and security consulting organizations. He was a founding officer of (ISC)2 Inc., developer of the CISSP program, and was named the first recipient of the CSI Lifetime Achievement Award. A frequent speaker and prolific author, Corby graduated from WPI in 1972 with a degree in electrical engineering.



Chapter 4

Breaking News: The Latest Hacker Attacks and Defenses

Edward Skoudis

Computer attackers continue to hone their techniques, getting ever better at undermining our systems and networks. As the computer technologies we use advance, these attackers find new and nastier ways to achieve their goals — unauthorized system access, theft of sensitive data, and alteration of information. This chapter explores some of the recent trends in computer attacks and presents tips for securing your systems. To create effective defenses, we need to understand the latest tools and techniques our adversaries are throwing at our networks. With that in mind, we will analyze four areas of computer attack that have received significant attention in the past 12 months: wireless LAN attacks, active and passive operating system fingerprinting, worms, and sniffing backdoors.

WIRELESS LAN ATTACKS (WAR DRIVING)

In the past year, a very large number of companies have deployed wireless LANs, using technology based on the IEEE 802.11b protocol, informally known as Wi-Fi. Wireless LANs offer tremendous benefits from a usability and productivity perspective: a user can access the network from a conference room, while sitting in an associate's cubicle, or while wandering the halls. Unfortunately, wireless LANs are often one of the least secure methods of accessing an organization's network. The technology is becoming very inexpensive, with a decent access point costing less than U.S.$200 and wireless cards for a laptop or PC costing below U.S.$100. In addition to affordability, setting up an access point is remarkably simple (if security is ignored, that is). Most access points can be plugged into the corporate network and configured in a minute by a completely inexperienced user. Because of their low cost and ease of (insecure) use, wireless LANs are in rapid deployment in most networks today, whether upper management or



even IT personnel realize or admit it. These wireless LANs are usually completely unsecure because the inexperienced employees setting them up have no idea of, or interest in, activating the security features of their wireless LANs.

In our consulting services, we often meet with CIOs or Information Security Officers to discuss issues associated with information security. Given the widespread use of wireless LANs, we usually ask these upper-level managers what their organization is doing to secure its wireless infrastructure. We are often given the answer, "We don't have to worry about it because we haven't yet deployed a wireless infrastructure." After hearing that stock answer, we conduct a simple wireless LAN assessment (with the CIO's permission, of course). We walk down a hall with a wireless card, laptop, and wireless LAN detection software. Almost always, we find renegade, completely unsecure wireless networks in use that were set up by employees outside of formal IT roles. The situation is similar to what we saw with Internet technology a decade ago. Back then, we would ask corporate officers what their organizations were doing to secure their Internet gateways. They would say that they did not have one, but we would quickly discover that the organization was laced with homegrown Internet connectivity without regard to security.

Network Stumbling, War Driving, and War Walking

Attackers have taken to the streets in their search for convenient ways to gain access to organizations' wireless networks. By getting within a few hundred yards of a wireless access point, an attacker can detect its presence and, if the access point has not been properly secured, possibly gain access to the target network. The process of searching for wireless access points is known in some circles as network stumbling. Alternatively, using an automobile to drive around town looking for wireless access points is known as war driving. As you might guess, the phrases war walking and even war biking have been coined to describe the search for wireless access points using other modes of transportation. I suppose it is only a matter of time before someone attempts war hang gliding.

When network stumbling, attackers set up a rig consisting of a laptop PC, wireless card, and antenna for discovering wireless access points. Additionally, a global positioning system (GPS) unit can help record the geographic location of discovered access points for later attack. Numerous software tools are available for this task as well. One of the most popular is NetStumbler (available at www.netstumbler.com), an easy-to-use GUI-based tool written by Marius Milner. NetStumbler runs on Windows systems, including Win95, 98, and 2000, and a PocketPC version called MiniStumbler has been released. For UNIX, several war-driving scripts have


been released, with Wi-scan (available at www.dis.org/wl/) among the most popular. This wireless LAN discovery process works because most access points respond, indicating their presence and their service set identifier (SSID), to a broadcast request from a wireless card. The SSID acts like a name for the wireless access point so that users can differentiate between different wireless LANs in close proximity. However, the SSID provides no real security. Some users think that a difficult-to-guess SSID will get them extra security. They are wrong. Even if the access point is configured not to respond to a broadcast request for an SSID, the SSIDs are sent in cleartext and can be intercepted. In a recent war-driving trip in a taxi in Manhattan, an attacker discovered 455 access points in one hour. Some of these access points had their SSIDs set to the name of the company using the access point, gaining the attention of attackers focusing on juicy targets.

After discovering target networks, many attackers will attempt to get an IP address on the network, using the Dynamic Host Configuration Protocol (DHCP). Most wireless LANs freely give out addresses to anyone asking for them. After getting an address via DHCP, the attacker will attempt to access the LAN itself. Some LANs use the Wired Equivalent Privacy (WEP) protocol to provide cryptographic authentication and confidentiality. While WEP greatly improves the security of a wireless LAN, it has some significant vulnerabilities that could allow an attacker to determine an access point's keys. An attacker can crack WEP keys by gathering a significant amount of traffic (usually over 500 MB) using a tool such as Airsnort (available at airsnort.shmoo.com/).

Defending against Wireless LAN Attacks

So, how do you defend against wireless LAN attacks in your environment? There are several levels of security that you could implement for your wireless LAN, ranging from totally unsecure to a strong level of protection. Techniques for securing your wireless LAN include:

• Set the SSID to an obscure value. As described above, SSIDs are not a security feature and should not be treated as such. Setting the SSID to an obscure value adds very little from a security perspective. However, some access points can be configured to prohibit responses to SSID broadcast requests. If your access point offers that capability, you should activate it.
• Use MAC address filtering. Each wireless card has a unique hardware-level address called the media access control (MAC) address. A wireless access point can be configured so that it will allow traffic only from specific MAC addresses. While this MAC filtering does improve


security a bit, it is important to note that an attacker can spoof wireless card MAC addresses.
• Use WEP, with periodic rekeying. While WEP keys can be broken using Airsnort, the technology significantly improves the security of a wireless LAN. Some vendors even support periodic generation of new WEP keys after a given timeout. If an attacker does crack a WEP key, it is likely that they will have broken only an old key while a newer key is already in use on the network. If your access points support dynamic rotation of WEP keys, such as Cisco's Aironet security solution, activate this feature.
• Use a virtual private network (VPN). Because SSID, MAC, and even WEP solutions have various vulnerabilities, as highlighted above, the best method for securing wireless LANs is to use a VPN. VPNs provide end-to-end security without regard to the unsecured wireless network used for transporting the communication. The VPN client encrypts all data sent from the PC before it gets sent into the air. The wireless access point simply collects encrypted streams of bits and forwards them to a VPN gateway before they can get access to the internal network. In this way, the VPN ensures that all data is strongly encrypted and authenticated before entering the internal network.

Of course, before implementing these technical solutions, you should establish specific policies for the use of wireless LANs in your environment. The particular wireless LAN security policies followed by an organization depend heavily on the need for security in that organization. The following list, which I wrote with John Burgess of Predictive Systems, contains recommended security policies that could apply in many organizations. This list can be used as a starting point, and pared down or built up to meet specific needs.

• All wireless access points/base stations connected to the corporate network must be registered and approved by the organization's computer security team. These access points/base stations are subject to periodic penetration tests and audits. Unregistered access points/base stations on the corporate network are strictly forbidden.
• All wireless network interface cards (i.e., PC cards) used in corporate laptop or desktop computers must be registered with the corporate security team.
• All wireless LAN access must use corporate-approved vendor products and security configurations.
• All computers with wireless LAN devices must utilize a corporate-approved virtual private network (VPN) for communication across the wireless link. The VPN will authenticate users and encrypt all network traffic.
• Wireless access points/base stations must be deployed so that all wireless traffic is directed through a VPN device before entering the


corporate network. The VPN device should be configured to drop all unauthenticated and unencrypted traffic.

While the policies listed above fit the majority of organizations, the policies listed below may or may not fit, depending on the technical level of employees and how detailed an organization's security policy and guidelines are:

• The wireless SSID provides no security and should not be used as a password. Furthermore, wireless card MAC addresses can be easily gathered and spoofed by an attacker. Therefore, security schemes should not be based solely on filtering wireless MAC addresses because they do not provide adequate protection for most uses.
• WEP keys can be broken. WEP may be used to identify users, but only together with a VPN solution.
• The transmit power for access points/base stations near a building's perimeter (such as near exterior walls or top floors) should be turned down. Alternatively, wireless systems in these areas could use directional antennas to control signal bleed out of the building.

With these types of policies in place and a suitable VPN solution securing all traffic, the security of an organization's wireless infrastructure can be vastly increased.
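As a final illustration of why the policies above treat SSID obscurity and MAC filtering as weak controls on their own: on many systems, changing a wireless card's hardware address takes only a moment. The following is a hedged sketch using the old Linux net-tools syntax; the interface name and address are placeholders, and the exact commands vary by platform and driver.

# Illustrative only: spoofing a wireless card's MAC address on a Linux host.
ifconfig eth0 down
ifconfig eth0 hw ether 00:0A:0B:0C:0D:0E
ifconfig eth0 up

An attacker who has sniffed a permitted MAC address from the air can assume it in seconds, which is why MAC filtering should never be the only control protecting a wireless LAN.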


ACTIVE AND PASSIVE OPERATING SYSTEM FINGERPRINTING

Once access is gained to a network (through network stumbling, a renegade unsecured modem, or a weakness in an application or firewall), attackers usually attempt to learn about the target environment so they can hone their attacks. In particular, attackers often focus on discovering the operating system (OS) type of their targets. Armed with the OS type, attackers can search for specific vulnerabilities of those operating systems to maximize the effectiveness of their attacks. To determine OS types across a network, attackers use two techniques: (1) the familiar, time-tested approach called active OS fingerprinting, and (2) a technique with new-found popularity, passive OS fingerprinting. We will explore each technique in more detail.

Active OS Fingerprinting

The Internet Engineering Task Force (IETF) defines how TCP/IP and related protocols should work. In an ever-growing list of Requests for Comment (RFCs), this group specifies how systems should respond when specific types of packets are sent to them. For example, if someone sends a TCP SYN packet to a listening port, the IETF says that a SYN ACK packet should be sent in response. While the IETF has done an amazing job of defining how the protocols we use every day should work, it has not thoroughly defined every case of how the protocols should fail. In other words, the RFCs defining TCP/IP do not handle all of the meaningless or perverse cases of packets that can be sent in TCP/IP. For example, what should a system do if it receives a TCP packet with the code bits SYN-FIN-URG-PUSH all set? I presume such a packet means to SYNchronize a new connection, FINish the connection, do this URGently, and PUSH it quickly through the TCP stack. That is nonsense, and a standard response to such a packet has not been devised. Because there is no standard response to this and other malformed packets, different vendors have built their OSs to respond differently to such bizarre cases. For example, a Cisco router will likely send a different response than a Windows NT server for some of these unexpected packets. By sending a variety of malformed packets to a target system and carefully analyzing the responses, an attacker can determine which OS it is running.

An active OS fingerprinting capability has been built into the Nmap port scanner (available at www.insecure.org/nmap). If the OS detection capability is activated, Nmap will send a barrage of unusual packets to the target to see how it responds. Based on this response, Nmap checks a user-customizable database of known signatures to determine the target OS type. Currently, this database houses over 500 known system types.

A more recent addition to the active OS fingerprinting realm is the Xprobe tool by Fyodor Yarochkin and Ofir Arkin. Rather than manipulating the TCP code bit options like Nmap, Xprobe focuses exclusively on the Internet Control Message Protocol (ICMP). ICMP is used to send information associated with an IP-based network, such as ping requests and responses, port unreachable messages, and instructions to quench the rate of packets sent. Xprobe sends between one and four specially crafted ICMP messages to the target system. Based on a very carefully constructed logic tree on the sending side, Xprobe can determine the OS type. Xprobe is stealthier than the Nmap active OS fingerprinting capability because it sends far fewer packets.
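To make the active technique concrete, a hedged example of invoking Nmap's OS detection against a single host follows (the address is a placeholder; -O enables OS detection and -v adds verbose output):

nmap -O -v 192.0.2.10

Nmap makes its most reliable guesses when the accompanying port scan finds at least one open and one closed TCP port on the target, so OS detection is normally run together with a regular scan.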


Passive OS Fingerprinting

While active OS fingerprinting involves sending packets to a target and analyzing the response, passive OS fingerprinting does not send any traffic while determining a target's OS type. Instead, passive OS fingerprinting tools include a sniffer to gather data from a network. Then, by analyzing the particular packet settings captured from the network and consulting a local database, the tool can determine what OS type sent that traffic. This technique is far stealthier than active OS fingerprinting because the attacker sends no data to the target machine. However, the attacker must be in a position to analyze traffic sent from the target system, such as on the same LAN or on a network where the target frequently sends packets.

One of the best passive OS fingerprinting tools is p0f (available at www.stearns.org/p0f/), originally written by Michal Zalewski and now maintained by William Stearns. P0f determines the OS type by analyzing several fields sent in TCP and IP traffic, including the rounded-up initial time-to-live (TTL), window size, maximum segment size, don't fragment flag, window scaling option, and initial packet size. Because different OSs set these initial values to varying levels, p0f can differentiate between 149 different system types.

Defending against Operating System Fingerprinting

To minimize the impact an attacker can have using knowledge of your OS types, you should have a defined program for notification, testing, and implementation of system patches. If you keep your systems patched with the latest security fixes, an attacker will be far less likely to compromise your machines even if they know which OS you are running. One or more people in your organization should have the assigned task of monitoring vendor bulletins and security lists to determine when new patches are released. Furthermore, once patches are identified, they should be thoroughly but quickly tested in a quality assurance environment. After the full functionality of the tested system is verified, the patches should be rolled into production.

While a solid patching process is a must for defending your systems, you may also want to analyze some of the work in progress to defeat active OS fingerprinting. Gaël Roualland and Jean-Marc Saffroy wrote the IP personality patch for Linux systems, available at ippersonality.sourceforge.net/. This tool allows a system administrator to configure a Linux system running kernel version 2.4 so that it will have any response of the administrator's choosing for Nmap OS detection. Using this patch, you could make your Linux machine look like a Solaris system, a Macintosh, or even an old Windows machine during an Nmap scan. Although you may not want to put such a patch onto your production systems due to potential interference with critical processes, the technique is certainly worth investigating.

To foil passive OS fingerprinting, you may want to consider the use of a proxy-style firewall. Proxy firewalls do not route packets, so all information about the OS type transmitted in the packet headers is destroyed by the proxy. Proxy firewalls accept a connection from a client and then start a new connection to the server on behalf of that client. All packets on the outside of the firewall will have the OS fingerprints of the firewall itself. Therefore, the OS type of all systems inside the firewall will be masked. Note that this technique does not work for most packet filter firewalls


because packet filters route packets and, therefore, transmit the fingerprint information stored in the packet headers.

RECENT WORM ADVANCES

A computer worm is a self-replicating computer attack tool that propagates across a network, spreading from vulnerable system to vulnerable system. Because they use one set of victim machines to scan for and exploit new victims, worms spread on an exponential basis. In recent times, we have seen a veritable zoo of computer worms with names like Ramen, L10n, Cheese, Code Red, and Nimda. New worms are being released at a dizzying rate, with a new generation of worm hitting the Internet every two to six months. Worm developers are learning lessons from the successes of each generation of worms and expanding upon them in subsequent attacks. With this evolutionary loop, we are rapidly approaching an era of super worms. Based on recent advances in worm functions and predictions for the future, we will analyze the characteristics of the coming super worms we will likely see in the next six months.

Rapidly Spreading Worms

Many of the worms released in the past decade have spread fairly quickly throughout the Internet. In July 2001, Code Red was estimated to have spread to 250,000 systems in about six hours. Fortunately, recent worms have had rather inefficient targeting mechanisms, a weakness that actually impeded their speeds. By randomly generating addresses and not taking into account the accurate distribution of systems in the Internet address space, these worms often wasted time looking for nonexistent systems or scanning machines that were already conquered.

After Code Red, several articles appeared on the Internet describing more efficient techniques for rapid worm distribution. These articles, by Nicholas C. Weaver and the team of Stuart Staniford, Gary Grim, and Roelof Jonkman, described the hypothetical Warhol and Flash worms, which theoretically could take over all vulnerable systems on the Internet in 15 minutes or even less. Warhol and Flash, which are only mathematical models and not actual worms (yet), are based on the idea of fast-forwarding through an exponential spread. Looking at a graph of infected victims over time for a conventional worm, a hockey-stick pattern appears. Things start out slowly as the initial victims succumb to the worm. Only after a critical mass of victims succumbs to the attack does the worm rapidly spread. Warhol and Flash jump past this initial slow spread by prescanning the Internet for vulnerable systems. Through automated scanning techniques from static machines, an attacker can find 100,000 or more vulnerable systems before ever releasing the worm. The attacker then loads these known vulnerable addresses into the worm. As the worm spreads, the addresses


of these prescanned vulnerable systems would be split up among the segments of the worm propagating across the network. By using this initial set of vulnerable systems, an attacker could easily infect 99 percent of vulnerable systems on the Internet in less than an hour. Such a worm could conquer the Internet before most people have even heard of the problem.

Multi-Platform Worms

The vast majority of worms we have seen to date focused on a single platform, often Windows or Linux. For example, Nimda simply ripped apart as many Microsoft products as it could, exploiting Internet Explorer, the IIS Web server, Outlook, and Windows file sharing. While it certainly was challenging, Nimda's Windows-centric approach actually limited its spread. The security community implemented defenses by focusing on repairing Windows systems. While single-platform worms can cause trouble, be on the lookout for worms that are far less discriminating from a platform perspective. New worms will contain exploits for Windows, Solaris, Linux, BSD, HP-UX, AIX, and other operating systems, all built into a single worm. Such worms are even more difficult to eradicate because security personnel and system administrators will have to apply patches in a coordinated fashion to many types of machines. The defense job will be more complex and require more time, allowing the worm to cause more damage.

Morphing and Disguised Worms

Recent worms have been relatively easy to detect. Once spotted, the computer security community has been able to quickly determine their functionalities. Once a worm has been isolated in the lab, some brilliant folks have been able to rapidly reverse-engineer each worm's operation to determine how best to defend against it. In the very near future, we will face new worms that are far stealthier and more difficult to analyze. We will see polymorphic worms, which change their patterns every time they run and spread to a new system. Detection becomes more difficult because the worm essentially recodes itself each time it runs. Additionally, these new worms will encrypt or otherwise obscure much of their own payloads, hiding their functionalities until a later time. Reverse-engineering to determine the worm's true functions and purpose will become more difficult because investigators will have to extract the crypto keys or overcome the obfuscation mechanisms before they can really figure out what the worm can do. This time lag for the analysis will allow the worm to conquer more systems before adequate defenses are devised.


Zero-Day Exploit Worms

The vast majority of worms encountered so far are based on old, off-the-shelf exploits to attack systems. Because they have used old attacks, a patch has been readily available for administrators to fix their machines quickly after infection or to prevent infection in the first place. Using our familiar example, Code Red exploited systems using a flaw in Microsoft's IIS Web server that had been known for over a month and for which a patch had already been published. In the near future, we are likely to see a worm that uses brand-new exploits for which no patch exists. Because they are brand new, such attacks are sometimes referred to as Zero-Day Exploits. New vulnerabilities are discovered practically every day. Oftentimes, these problems are communicated to a vendor, who releases a patch. Unfortunately, these vulnerabilities are all too easy to discover, and it is only a matter of time before a worm writer discovers a major hole and is the first to devise a worm that exploits it. Only after the worm has propagated across the Internet will the computer security community be capable of analyzing how it spreads so that a patch can be developed.

More Damaging Attacks

So far, worms have caused damage by consuming resources and creating nuisances. The worms we have seen to date have not really had a malicious payload. Once they take over hundreds of thousands of systems, they simply continue to spread without actually doing something nasty. Do not get me wrong; fighting Code Red and Nimda consumed much time and many resources. However, these attacks did not really do anything beyond simply consuming resources. Soon, we may see worms that carry out some plan once they have spread. Such a malicious worm may be released in conjunction with a terrorist attack or other plot. Consider a worm that rapidly spreads using a zero-day exploit and then deletes the hard drives of ten million victim machines. Or, perhaps worse, a worm could spread and then transfer the financial records of millions of victims to a country's adversaries. Such scenarios are not very far-fetched, and even nastier ones could be easily devised.

Worm Defenses

All of the pieces are available for a moderately skilled attacker to create a truly devastating worm. We may soon see rapidly spreading, multi-platform, morphing worms using zero-day exploits to conduct very damaging attacks. So, what can you do to get ready? You need to establish both reactive and proactive defenses.


Incident Response Preparation. From a reactive perspective, your organization must establish a capability for determining when new vulnerabilities are discovered, as well as for rapidly testing patches and moving them into production. As described above, your security team should subscribe to various security mailing lists, such as Bugtraq (available at www.securityfocus.com), to help alert you to such vulnerabilities and the release of patches. Furthermore, you must create an incident response team with the skills and resources necessary to discover and contain a worm attack.

Vigorously Patch and Harden Your Systems. From the proactive side, your

organization must carefully harden your systems to prevent attacks. For each platform type, your organization should have documentation describing to system administrators how to build and secure the machine. Furthermore, you should periodically test your systems to ensure they are secure.

Block Unnecessary Outbound Connections. Once a worm takes over a system, it attempts to spread by making outgoing connections to scan for other potential victims. You should help stop worms in their tracks by severely limiting all outgoing connections on your publicly available systems (such as your Web, DNS, e-mail, and FTP servers). You should use a border router or external firewall to block all outgoing connections from such servers, unless there is a specific business need for outgoing connections. If you do need some outgoing connections, allow them only to those IP addresses that are absolutely critical. For example, your Web server needs to send responses to users requesting Web pages, of course. But does your Web server ever need to initiate connections to the Internet? Likely, the answer is no. So, do yourself and the rest of the Internet a favor by blocking such outgoing connections from your Internet servers.
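As a hedged sketch of that kind of egress filtering, assuming a Linux 2.4 packet filter sitting in front of the DMZ and an illustrative Web server address of 192.0.2.80 (rule ordering and additional rules would depend on the existing policy):

# Let replies to established sessions leave the DMZ Web server,
# then drop any new outbound connection the server tries to initiate.
iptables -A FORWARD -s 192.0.2.80 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 192.0.2.80 -j DROP

The same intent can be expressed as access lists on a border router; the point is simply that a compromised server should not be able to open new connections outward.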


Nonexecutable System Stack Can Help Stop Some Worms. In addition to overall system hardening, one particular step can help stop many worms. A large number of worms utilize buffer overflow exploits to compromise their victims. By sending more data than the program developer allocated space for, a buffer overflow attack allows an attacker to get code entered as user input to run on the target system. Most operating systems can be inoculated against simple stack-based buffer overflow exploits by being configured with nonexecutable system stacks. Keep in mind that nonexecutable stacks can break some programs (so test these fixes before implementing them), and they do not provide a bulletproof shield against all buffer overflow attacks. Still, preventing the execution of code from the stack will stop a huge number of both known and as-yet-undiscovered vulnerabilities in their tracks. Up to 90 percent of buffer overflows can be prevented using this technique. To create a nonexecutable stack on a Linux system, you can use the free kernel patch at www.openwall.com/linux. On a Solaris machine, you can configure the system to stop execution of code from the stack by adding the following lines to the /etc/system file:

set noexec_user_stack = 1
set noexec_user_stack_log = 1

On a Windows NT/2000 machine, you can achieve the same goal by deploying the commercial program SecureStack, available at www.securewave.com.

SNIFFING BACKDOORS

Once attackers compromise a system, they usually install a backdoor tool to allow them to access the machine repeatedly. A backdoor is a program that lets attackers access the machine on their own terms. Normal users are required to type in a password or use a cryptographic token; attackers use a backdoor to bypass these normal security controls. Traditionally, backdoors have listened on a TCP or UDP port, silently waiting in the background for a connection from the attacker. The attacker uses a client tool to connect to these backdoor servers on the proper TCP or UDP port to issue commands.

These traditional backdoors can be discovered by looking at the listening ports on a system. From the command prompt of a UNIX or Windows NT/2000/XP machine, a user can type "netstat -na" to see which TCP and UDP ports on the local machine have programs listening on them. Of course, normal usage of a machine will cause some TCP and UDP ports to be listening, such as TCP port 80 for Web servers, TCP port 25 for mail servers, and UDP port 53 for DNS servers. Beyond these expected ports based on specific server types, a suspicious port turned up by the netstat command could indicate a backdoor listener. Alternatively, a system or security administrator could remotely scan the ports of the system, using a port-scanning tool such as Nmap (available at www.insecure.org/nmap). If Nmap's output indicates an unexpected listening port, an attacker may have installed a backdoor.
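For example, on a Linux host, filtering that output for listening TCP sockets might look like the following (the output is abbreviated and illustrative, and port 31337 stands in for any listener that cannot be accounted for):

netstat -na | grep LISTEN
tcp    0    0 0.0.0.0:25       0.0.0.0:*       LISTEN
tcp    0    0 0.0.0.0:80       0.0.0.0:*       LISTEN
tcp    0    0 0.0.0.0:31337    0.0.0.0:*       LISTEN

The first two listeners correspond to the expected mail and Web services; the third maps to no known service on the host and would warrant immediate investigation.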


Because attackers know that we are looking for their illicit backdoors listening on ports, a major trend in the attacker community is to avoid listening ports altogether for backdoors. You may ask, "How can they communicate with their backdoors if they aren't listening on a port?" To accomplish this, attackers are integrating sniffing technology into their backdoors to create sniffing backdoors. Rather than configuring a process to listen on a port, a sniffing backdoor uses a sniffer to grab traffic from the network. The sniffer then analyzes the traffic to determine which packets are supposed to go to the backdoor. Instead of listening on a port, the sniffer employs pattern matching on the network traffic to determine what to scoop up and pass to the backdoor. The backdoor then executes the commands and sends responses to the attacker. An excellent example of a sniffing backdoor is the Cd00r program written by FX. Cd00r is available at http://www.phenoelit.de/stuff/cd00r.c.

There are two general ways of running a sniffing backdoor, based on the mode used by the sniffer program to gather traffic: the so-called nonpromiscuous and promiscuous modes. A sniffer that puts an Ethernet interface in promiscuous mode gathers all data from the LAN without regard to the actual destination address of the traffic. If the traffic passes by the interface, the Ethernet card in promiscuous mode will suck in the traffic and pass it to the backdoor. Alternatively, a nonpromiscuous sniffer gathers traffic destined only for the machine on which the sniffer runs. Because these differences in sniffer types have significant implications on how attackers can use sniffing backdoors, we will explore nonpromiscuous and promiscuous backdoors separately below.

Nonpromiscuous Sniffing Backdoors

As their name implies, nonpromiscuous sniffing backdoors do not put the Ethernet interface into promiscuous mode. The sniffer sees only traffic going to and from the single machine where the sniffing backdoor is installed. When attackers use a nonpromiscuous sniffing backdoor, they do not have to worry about a system administrator detecting the interface in promiscuous mode. In operation, the nonpromiscuous backdoor scours the traffic going to the victim machine looking for specific ports or other fields (such as a cryptographically derived value) included in the traffic. When the special traffic is detected, the backdoor wakes up and interacts with the attacker.

Promiscuous Sniffing Backdoors

By putting the Ethernet interface into promiscuous mode to gather all traffic from the LAN, promiscuous sniffing backdoors can make an investigation even more difficult. To understand why, consider the scenario shown in Exhibit 4-1. This network uses a tri-homed firewall to separate the DMZ and internal network from the Internet. Suppose an attacker takes over the Domain Name System (DNS) server on the DMZ and installs a promiscuous sniffing backdoor. Because this backdoor uses a sniffer in promiscuous mode, it can gather all traffic from the LAN. The attacker configures the sniffing backdoor to listen in on all traffic with a destination address of the Web server (not the DNS server) to retrieve commands from the attacker to execute. In our scenario, the attacker does not install a backdoor or any other software on the Web server. Only the DNS server is compromised.

Now the attacker formulates packets with commands for the backdoor. These packets are all sent with a destination address of the Web server (not the DNS server). The Web server does not know what to do with these commands, so it will either discard them or send a RESET or related


Exhibit 4-1. A promiscuous sniffing backdoor. (The diagram shows a black-hat attacker on the Internet reaching a tri-homed firewall; on the DMZ behind it sit a World Wide Web server and a DNS server whose sniffer listens for traffic destined for the WWW server.)

The Web server does not know what to do with these commands, so it will either discard them or send a RESET or related message to the attacker. However, the DNS server with the sniffing backdoor will see the commands on the LAN. The sniffer will gather these commands and forward them to the backdoor where they will be executed. To further obfuscate the situation, the attacker can send all responses from the backdoor using the spoofed source address of the Web server.

Given this scenario, consider the dilemma faced by the investigator. The system administrator or an intrusion detection system complains that there is suspicious traffic going to and from the Web server. The investigator conducts a detailed and thorough analysis of the Web server. After a painstaking process to verify the integrity of the applications, operating system programs, and kernel on the Web server machine, the investigator determines that this system is intact. Yet backdoor commands continue to be sent to this machine. The investigator would only discover what is really going on by analyzing other systems connected to the LAN, such as the DNS server. The investigative process is significantly slowed down by the promiscuous sniffing backdoor.

Defending against Sniffing Backdoor Attacks

It is important to note that the use of a switch on the DMZ network between the Web server and DNS server does not eliminate this dilemma. As described in Chapter 3, Volume 3 of Information Security Management Handbook, attackers can use active sniffers to conduct ARP cache poisoning attacks and successfully sniff a switched environment. An active sniffer such as Dsniff (available at http://www.monkey.org/~dugsong/dsniff/) married to a sniffing backdoor can implement this type of attack in a switched environment.

So if a switch does not eliminate this problem, how can you defend against this kind of attack? First, as with most backdoors, system and security administrators must know what is supposed to be running on their systems, especially processes running with root or system-level privileges.
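One way to make that knowledge operational is to keep a baseline of expected processes per host and diff against it. The sketch below is a hedged illustration in Python that compares the current process table (UNIX-style ps output assumed) with a hypothetical baseline file of expected user/command pairs; adjust the command, columns, and file format for your environment:

```python
#!/usr/bin/env python3
"""Hedged sketch: compare the running process list against a saved baseline
of expected (user, command) pairs, to help spot an unexpected process.
Assumes a UNIX-style ps command; the baseline file name/format is an example."""

import subprocess

BASELINE_FILE = "expected_processes.txt"   # lines of the form: "root sshd"

def current_processes():
    """Return a set of (user, command) pairs from `ps -eo user,comm`."""
    out = subprocess.run(["ps", "-eo", "user,comm"],
                         capture_output=True, text=True).stdout
    procs = set()
    for line in out.splitlines()[1:]:          # skip the header row
        fields = line.strip().split(None, 1)
        if len(fields) == 2:
            procs.add((fields[0], fields[1]))
    return procs

def baseline_processes():
    """Load the expected (user, command) pairs from the baseline file."""
    pairs = set()
    with open(BASELINE_FILE) as f:
        for line in f:
            fields = line.strip().split(None, 1)
            if len(fields) == 2:
                pairs.add((fields[0], fields[1]))
    return pairs

if __name__ == "__main__":
    for user, comm in sorted(current_processes() - baseline_processes()):
        print(f"Unexpected process: {comm} (running as {user})")
```

A mismatch is not proof of compromise, but it turns an open-ended question into a short list of processes that deserve a closer look.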


Keeping up with this information is not a trivial task, but it is especially important for all publicly available servers such as systems on a DMZ. If a security or system administrator notices a new process running with escalated privileges, the process should be investigated immediately. Tools such as lsof for UNIX (available at ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/) or Inzider for Windows NT/2000 (available at http://ntsecurity.nu/toolbox/inzider/) can help to indicate the files and ports used by any process. Keep in mind that most attackers will not name their backdoors “cd00r” or “backdoor,” but instead will use less obvious names to camouflage their activities. In my experience, attackers like to name their backdoors “SCSI” or “UPS” to prevent a curious system administrator from questioning or shutting off the attackers’ processes.

Also, while switches do not eliminate attacks with sniffers, a switched environment can help to limit an attacker’s options, especially if it is carefully configured. For your DMZs and other critical networks, you should use a switch and hard-code all ARP entries in each host on the LAN. Each system on your LAN has an ARP cache holding information about the IP and MAC addresses of other machines on the LAN. By hard-coding all ARP entries on your sensitive LANs so that they are static, you minimize the possibility of ARP cache poisoning. Additionally, implement port-level security on your switch so that only specific Ethernet MAC addresses can communicate with the switch.

CONCLUSIONS

The computer underground and information security research fields remain highly active in refining existing methods and defining completely new ways to attack and compromise computer systems. Advances in our networking infrastructures, especially wireless LANs, are not only giving attackers new avenues into our systems, but they are also often riddled with security vulnerabilities. With this dynamic environment, defending against attacks is certainly a challenge. However, these constantly evolving attacks can be frustrating and exciting at the same time, while certainly providing job security to solid information security practitioners. While we need to work diligently in securing our systems, our reward is a significant intellectual challenge and decent employment in a challenging economy.

ABOUT THE AUTHOR

Edward Skoudis is the vice president of security strategy for Predictive Systems’ Global Integrity consulting practice. His expertise includes hacker attacks and defenses, the information security industry, and computer privacy issues. Skoudis is a frequent speaker on issues associated with hacker tools and defenses. He has published the book Counter Hack (Prentice Hall) and the interactive CD-ROM, Hack–Counter Hack.



Chapter 5

Counter-Economic Espionage

Craig A. Schiller, CISSP

Today’s economic competition is global. The conquest of markets and technologies has replaced former territorial and colonial conquests. We are living in a state of world economic war, and this is not just a military metaphor — the companies are training the armies, and the unemployed are the casualties. — Bernard Esambert, President of the French Pasteur Institute at a Paris Conference on Economic Espionage

The Attorney General of the United States defined economic espionage as “the unlawful or clandestine targeting or acquisition of sensitive financial, trade, or economic policy information; proprietary economic information; or critical technologies.” Note that this definition excludes the collection of open and legally available information that makes up the majority of economic collection. This means that aggressive intelligence collection that is entirely open and legal may harm U.S. companies but is not considered espionage, economic or otherwise. The FBI has extended this definition to include the unlawful or clandestine targeting or influencing of sensitive economic policy decisions.

Intelligence consists of two broad categories — open source and espionage. Open-source intelligence collection is the name given to legal intelligence activities. Espionage is divided into the categories of economic and military/political/governmental; the distinction is the targets involved. A common term, industrial espionage, was used (and is still used to some degree) to indicate espionage between two competitors. As global competitors began to conduct these activities with possible assistance from their governments, the competitor-versus-competitor nature of industrial espionage became less of a discriminator. As the activities expanded to include sabotage and interference with commerce and proposal competitions, the term economic espionage was coined for the broader scope.



While the examples and cases discussed in this chapter focus mainly on the United States, the issues are universal. The recommendations and types of information gathered can and should be translated for any country.

BRIEF HISTORY

The prosperity and success of this country is due in no small measure to economic espionage committed by Francis Cabot Lowell during the Industrial Revolution. Britain replaced costly, skilled hand labor with water-driven looms that were simple and reliable. The looms were so simple that they could be operated by a few unskilled women and children. The British government passed strict patent laws and prohibited the export of technology related to the making of cotton. A law was passed making it illegal to hire skilled textile workers for work abroad. Those workers who went abroad had their property confiscated. It was against the law to make and export drawings of the mills.

So Lowell memorized and stole the plans to a Cartwright loom, a water-driven weaving machine. It is believed that Lowell perfected the art of spying by driving around. Working from Edinburgh, he and his wife traveled daily throughout the countryside, including Lancashire and Derbyshire, the hearts of the industrial revolution. Returning home, he built a scale model of the loom. His company built its first loom in Waltham. Soon, his factories were capable of producing up to 30 miles of cloth a day.1 This marked America’s entry into the Industrial Revolution.

By the early 20th century, we had become “civilized” to the point that Henry L. Stimson, our Secretary of State, said for the record that “Gentlemen do not read other gentlemen’s mail” while refusing to endorse a codebreaking operation. For a short time the U.S. Government was the only government that believed this fantasy. At the beginning of World War II, the United States found itself almost completely blind to activities inside Germany and totally dependent on other countries’ intelligence services for information. In 1941 the United States recognized that espionage was necessary to reduce its losses and efficiently engage Germany. To meet this need, first the COI and then the OSS were created under the leadership of General “Wild Bill” Donovan. It would take tremendous forces to broaden this awakening to include economic espionage.

WATERSHED: END OF COLD WAR, BEGINNING OF INFORMATION AGE

In the late 1990s, two events occurred that radically changed information security for many companies. The end of the Cold War — marked by the collapse of the former Soviet Union — created a pool of highly trained intelligence officers without targets.


In Russia, some continued to work for the government, some began to work in the newly created private sector, and some provided their services for the criminal element. Some did all three. The world’s intelligence agencies began to focus their attentions on economic targets and information war, just in time for watershed event number-two — the beginning of the information age.

John Lienhard, M.D. Anderson Professor of Mechanical Engineering and History at the University of Houston, is the voice and driving force behind the “Engines of Our Ingenuity,” a syndicated program for public radio. He has said that the change of our world into an information society is not like the Industrial Revolution. No; this change is more like the change from a hunter-gatherer society to an agrarian society. A change of this magnitude happened only once or twice in all of history. Those who were powerful in the previous society may have no power in the new society. In the hunter-gatherer society, the strongest man and best hunter rules. But where is he in an agrarian society? There, the best hunter holds little or no power. During the transition to an information society, those with power in the old ways will not give it up easily.

Now couple the turmoil caused by this shift with the timing of the “end” of the Cold War. The currency of the new age is information. The power struggle in the new age is the struggle to gather, use, and control information. It is at the beginning of this struggle that the Cold War ended, making available a host of highly trained information gatherers to countries and companies trying to cope with the new economy. Official U.S. acknowledgment of the threat of economic espionage came in 1996 with the passage of the Economic Espionage Act.

For the information security professional, the world has fundamentally changed. Until 1990, a common practice had been to make the cost of an attack prohibitively expensive. How do you make an attack prohibitively expensive when your adversaries have the resources of governments behind them? Most information security professionals have not been trained and are not equipped to handle professional intelligence agents with deep pockets. Today, most business managers are incapable of fathoming that such a threat exists.

ROLE OF INFORMATION TECHNOLOGY IN ECONOMIC ESPIONAGE

In the 1930s, the German secret police divided the world of espionage into five roles.2 Exhibit 5-1 illustrates some of the ways that information technology today performs these five divisions of espionage functionality. In addition to these roles, information technology may be exploited as a target, used as a tool, used for storage (for good or bad), used as protection for critical assets, as a weapon, used as a transport mechanism, or used as an agent to carry out tasks when activated.


Exhibit 5-1. Five divisions of espionage functionality.

Collectors
  WWII description: Located and gathered desired information
  IT equivalent: People or IT (hardware or software) agents, designer viruses that transmit data to the Internet

Transmitters
  WWII description: Forwarded the data to Germany, by coded mail or shortwave radio
  IT equivalent: E-mail, browsers with convenient 128-bit encryption, FTP, applications with built-in collection and transmission capabilities (e.g., comet cursors, Real Player, Media Player, or other spyware), covert channel applications

Couriers
  WWII description: Worked on steamship lines and transatlantic clippers, and carried special messages to and from Germany
  IT equivalent: Visiting country delegations, partners/suppliers, temporary workers, and employees that rotate in and out of companies with CD-R/CD-RW, Zip disks, tapes, drawings, digital camera images, etc.

Drops
  WWII description: Innocent-seeming addresses of businesses or private individuals, usually in South American or neutral European ports; reports were sent to these addresses for forwarding to Germany
  IT equivalent: E-mail relays, e-mail anonymizers, Web anonymizers, specially designed software that spreads information to multiple sites (the reverse of distributed DoS) to avoid detection

Specialists
  WWII description: Expert saboteurs
  IT equivalent: Viruses, worms, DDoS, Trojan horses, chain e-mail, hoaxes, using e-mail to spread dissension, public posting of sensitive information about salaries, logic bombs, insiders sabotaging products, benchmarks, etc.

• Target. Information and information technology can be the target of interest. The goal of the exploitation may be to discover new information assets (breach of confidentiality), deprive one of exclusive ownership, acquire a form of the asset that would permit or facilitate reverse-engineering, corrupt the integrity of the asset — either to diminish the reputation of the asset or to make the asset become an agent — or to deny the availability of the asset to those who rely on it (denial of service).
• Tool. Information technology can be the tool to monitor and detect traces of espionage or to recover information assets. These tools include intrusion detection systems, log analysis programs, content monitoring programs, etc. For the bad guys, these tools would include probes, enumeration programs, viruses that search for PGP keys, etc.
• Storage. Information technology can store stolen or illegal information. IT can store sleeper agents for later activation.


• Protection. Information technology may have the responsibility to protect the information assets. The protection may be in the form of applications such as firewalls, intrusion detection systems, encryption tools, etc., or elements of the operating system such as file permissions, network configurations, etc.
• Transport. Information technology can be the means by which stolen or critical information is moved, whether burned to CDs, e-mailed, FTP’d, hidden in a legitimate http stream, or encoded in images or music files.
• Agent. Information technology can be used as an agent of the adversary, planted to extract significant sensitive information, to launch an attack when given the appropriate signal, or to receive or initiate a covert channel through a firewall.

IMPLICATIONS FOR INFORMATION SECURITY

Implication 1

A major tenet of our profession has been that, because we cannot always afford to prevent information system-related losses, we should make it prohibitively expensive to compromise those systems. How does one do that when the adversary has the resources of a government behind him? Frankly, this tenet only worked on adversaries who were limited by time, money, or patience. Hackers with unlimited time on their hands — and a bevy of unpaid researchers who consider a difficult system to be a trophy waiting to be collected — turn this tenet into Swiss cheese.

This reality has placed emphasis on the onion model of information security. In the onion model you assume that all other layers will fail. You build prevention measures, but you also include detection measures that will tell you that those measures have failed. You plan for the recovery of critical information, assuming that your prevention and detection measures will miss some events.

Implication 2

Information security professionals must now be able to determine if their industry or their company is a target for economic espionage. If their company/industry is a target, then the information security professionals should adjust their perceptions of their potential adversaries and their limits. One of the best-known quotes from The Art of War by Sun Tzu says, “Know your enemy.” Become familiar with the list of countries actively engaging in economic espionage against your country or within your industry. Determine if any of your vendors, contractors, partners, suppliers, or customers come from these countries. In today’s global economy, it may not be easy to determine the country of origin. Many companies move their global headquarters to the United States and keep only their main R&D offices in the country of origin.


Research the company and its founders. Learn where and how they gained their expertise. Research any publicized accounts regarding economic espionage/intellectual property theft attributed to the company, the country, or other companies from the country. Pay particular attention to the methods used and the nature of the known targets. Contact the FBI or its equivalent and see if they can provide additional information. Do not forget to check your own organization’s history with each company. With this information you can work with your business leaders to determine what may be a target within your company and what measures (if any) may be prudent.

He who protects everything, protects nothing.
— Napoleon

Applying the wisdom of Napoleon implies that, within the semipermeable external boundary, we should determine which information assets truly need protection, to what degree, and from what threats. Sun Tzu speaks to this need as well. It is not enough to only know your enemy.

Therefore I say, “Know the enemy and know yourself; in a hundred battles you will never be in peril.” When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and yourself, you are certain in every battle to be in peril.
— Sun Tzu, The Art of War (III.31–33)

A company can “know itself” using a variation from the business continuity concept of a business impact assessment (BIA). The information security professional can use the information valuation data collected during the BIA and extend it to produce information protection guides for sensitive and critical information assets. The information protection guides tell users which information should be protected, from what threats, and what to do if an asset is found unprotected. They should tell the technical staff about threats to each information asset and about any required and recommended safeguards. A side benefit gained from gathering the information valuation data is that, in order to gather the value information, the business leaders must internalize questions of how the data is valuable and the degrees of loss that would occur in various scenarios. This is the most effective security awareness that money can buy.

After the information protection guides have been prepared, you should meet with senior management again to discuss the overall posture the company wants to take regarding information security and counter-economic espionage.


Note that it is significant that you wait until after the information valuation exercise is complete before addressing the security posture. If management has not accepted the need for security, the question about desired posture will yield damaging results. Here are some potential postures that you can describe to management:

• Prevent all. In this posture, only a few protocols are permitted to cross your external boundary.
• City wall. A layered approach in which prevention, detection, mitigation, and recovery strategies are all in effect, similar to the walled city in the Middle Ages. Traffic is examined, but more is permitted in and out. Because more is permitted, detection, mitigation, and recovery strategies are needed internally because the risk of something bad getting through is greater.
• Aggressive. A layered approach, but embracing new technology is given a higher priority than protecting the company. New technology is selected, and then security is asked how they will deal with it.
• Edge racer. Only general protections are provided. The company banks on running faster than the competition. “We’ll be on the next technology before they catch up with our current release.” This is a common position before any awareness has been effective.

Implication 3

Another aspect of knowing your enemy is required. As security professionals we are not taught about spycraft. It is not necessary that we become trained as spies. However, the FBI, in its annual report to Congress on economic espionage, gives a summary of techniques observed in cases involving economic espionage. Much can be learned about modern techniques in three books written about the Mossad — Gideon’s Spies by Gordon Thomas, and By Way of Deception and The Other Side of Deception, both by Victor Ostrovsky and Claire Hoy. These describe the Mossad as an early adopter of technology as a tool in espionage, including their use of Trojan code in software sold commercially. The books describe software known as Promis that was sold to intelligence agencies to assist in tracking terrorists; and the authors allege that the software had a Trojan that permitted the Mossad to gather information about the terrorists tracked by its customers. By Way of Deception describes the training process as seen by Ostrovsky.

Implication 4

Think Globally, Act Locally. The Chinese government recently announced that the United States had placed numerous bugging devices on a plane for President Jiang Zemin.


During the customization by a U.S. company of the interior of the plane for its use as the Chinese equivalent of Air Force One, bugs were allegedly placed in the upholstery of the president’s chair, in his bedroom, and even in the toilet. When the United States built a new embassy in Moscow, the then-extant Soviet Union insisted it be built using Russian workers. The United States called a halt to its construction in 1985 when it discovered it was too heavily bugged for diplomatic purposes. The building remained unoccupied for a decade following the discovery.

The 1998 Annual Report to Congress on Foreign Economic Collection and Industrial Espionage concluded with the following statement:

...foreign software manufacturers solicited products to cleared U.S. companies that had been embedded with spawned processes and multithreaded tasks.

This means that foreign software companies sold products with Trojans and backdoors to targeted U.S. companies. In response to fears about the Echelon project, in 2001 the European Union announced recommendations that member nations use open-source software to ensure that Echelon software agents are not present. Security teams would benefit by using open-source software tools if they could be staffed sufficiently to maintain and continually improve the products. Failing that, security in companies in targeted industries should consider the origins of the security products they use. If your company knows it is a target for economic espionage, it would be wise to avoid using security products from countries actively engaged in economic espionage against your country. If unable to follow this strategy, the security team should include tools in the architecture (from other countries) that could detect extraneous traffic or anomalous behavior of the other security tools.

In this strategy you should follow the effort all the way through implementation. In one company, the corporate standard for firewalls was a product of one of the most active countries engaging in economic espionage. Management was unwilling to depart from the standard. Security proposed the use of an intrusion detection system (IDS) to guard against the possibility of the firewall being used to permit undetected, unfiltered, and unreported access. The IDS was approved; but when procurement received the order, they discovered that the firewall vendor sold a special, optimized version of the same product and — without informing the security team — ordered the IDS from the vendor that the team was trying to guard against.
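As a concrete illustration of the "watch the watcher" idea, the sketch below checks a flow log for connections initiated by the security device itself to destinations outside an allow-list. The CSV log format, the firewall address, and the allowed destinations are all hypothetical placeholders; in practice you would read whatever flow or IDS export your monitoring tool produces:

```python
#!/usr/bin/env python3
"""Hedged sketch: flag 'phone home' behavior by a security device itself.
The CSV flow-log format (src_ip,dst_ip,dst_port), the firewall address,
and the allow-list below are example placeholders, not real values."""

import csv
import sys

FIREWALL_IP = "192.0.2.1"                      # example management address
ALLOWED_DESTINATIONS = {"192.0.2.50",          # example: internal log server
                        "192.0.2.60"}          # example: internal NTP server

def suspicious_flows(flow_log_path):
    """Yield (src, dst, port) for flows the firewall initiated to
    destinations that are not on the allow-list."""
    with open(flow_log_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 3:
                continue            # skip malformed or header rows
            src, dst, port = row
            if src == FIREWALL_IP and dst not in ALLOWED_DESTINATIONS:
                yield src, dst, port

if __name__ == "__main__":
    for src, dst, port in suspicious_flows(sys.argv[1]):
        print(f"ALERT: firewall {src} initiated connection to {dst}:{port}")
```

The specific script matters less than the architectural point made above: the tool performing this check should not come from the same vendor, or the same country, as the device it is watching.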


Implication 5

The system of rating computers for levels of security protection is incapable of providing useful information regarding products that might have malicious code that is included intentionally. In fact, companies that have intentions of producing code with these Trojans are able to use the system of ratings to gain credibility without merit. It appears that the first real discovery by one of the ratings systems caused the demise of the ratings system and a cover-up of the findings. I refer to the MISSI ratings system’s discovery of a potential backdoor in Checkpoint Firewall-1 in 1997. After this discovery, the unclassified X31 report3 for this product and all previous reports were pulled from availability. The Internet site that provided them was shut down, and requestors were told that the report had been classified. The federal government had begun pulling Checkpoint Firewall-1 from military installations and replacing them with other companies’ products.

While publicly denying that these actions were happening, Checkpoint began a correspondence with the NSA, owners of the MISSI process, to answer the findings of that study. The NSA provided a list of findings and preferred corrective actions to resolve the issue. In Checkpoint’s response4 to the NSA, they denied that the code in question, which involved SNMP and which referenced files containing IP addresses in Israel, was a backdoor. According to the NSA, two files with IP addresses in Israel “could provide access to the firewall via SNMPv2 mechanisms.” Checkpoint’s reply indicated that the code was dead code from Carnegie Mellon University and that the files were QA testing data that was left in the final released configuration files.

The X31 report, which I obtained through an FOIA request, contains no mention of the incident and no indication that any censorship had occurred. This fact is particularly disturbing because a report of this nature should publish all issues and their resolutions to ensure that there is no complicity between testers and the test subjects. However, the letter also reveals two other vulnerabilities that I regard as backdoors, although the report classes them as software errors to be corrected. The Checkpoint response to some of these “errors” is to defend aspects of them as desirable. One specific reference claims that most of Checkpoint’s customers prefer maximum connectivity to maximum security, a curious claim that I have not seen in their marketing material. This referred to the lack of an ability to change the implicit rules in light of the vulnerability of stateful inspection’s handling of DNS using UDP, which existed in Version 3 and earlier.

Checkpoint agreed to most of the changes requested by the NSA; however, the exception is notable in that it would have required Checkpoint to use digital signatures to sign the software and data electronically to prevent someone from altering the product in a way that would go undetected. These changes would have provided licensees of the software with the ability to know that, at least initially, the software they were running was indeed the software and data that had been tested during the security review.
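Even without vendor-supplied signatures, a licensee can approximate part of that assurance locally. Below is a hedged Python sketch that verifies installed files against a manifest of SHA-256 hashes recorded when the product was first evaluated and installed. The manifest path and contents are hypothetical, and a local hash baseline only detects changes made after the baseline was taken; it cannot vouch for what the vendor shipped in the first place:

```python
#!/usr/bin/env python3
"""Hedged sketch: verify installed software files against a known-good
manifest of SHA-256 hashes recorded at evaluation/install time.
The manifest path and file entries are hypothetical examples."""

import hashlib
import json
import sys

def sha256_of(path, chunk=65536):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def verify(manifest_path):
    """Return a list of (path, reason) entries that fail verification."""
    with open(manifest_path) as f:
        manifest = json.load(f)   # e.g., {"/opt/fw/bin/fwd": "ab12...", ...}
    failures = []
    for path, expected in manifest.items():
        try:
            actual = sha256_of(path)
        except OSError as exc:
            failures.append((path, f"unreadable: {exc}"))
            continue
        if actual != expected:
            failures.append((path, "hash mismatch"))
    return failures

if __name__ == "__main__":
    problems = verify(sys.argv[1] if len(sys.argv) > 1 else "manifest.json")
    for path, reason in problems:
        print(f"INTEGRITY FAILURE: {path} ({reason})")
    sys.exit(1 if problems else 0)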


It is interesting to note that, nine months prior to the letter responding to the NSA claims, Checkpoint had released an internal memo in which it claimed nothing had ever happened.5

Both the ITSEC and Common Criteria security rating systems are fatally flawed when it comes to protection against software with intentional malicious code. Security companies are able to submit the software for rating and claim the rating even when the entire system has not been submitted. For example, a company can submit the assurance processes and documentation for a targeted rating. When it achieves the rating on just that portion, it can advertise the rating although the full software functionality has not been tested. For marketing types, they gain the benefit of claiming the rating without the expense of full testing. Even if the rating has an asterisk, the damage is done because many that authorize the purchase of these products only look for the rating. When security reports back to management that the rating only included a portion of the software functionality, it is portrayed as sour grapes by those who negotiated the “great deal” they were going to get. The fact is that there is no commercial push to require critical software such as operating systems and security software to undergo exhaustive code reviews and covert channel analysis, and to only award a rating when it is fully earned. To make matters worse, if it appears that a company is going to get a poor rating from a test facility, the vendor can stop the process and start over at a different facility, perhaps in another country, with no penalty and no carry-over.

WHAT ARE THE TARGETS?

The U.S. Government publishes a list of military critical technologies (MCTs). A summary of the list is published annually by the FBI (see Exhibit 5-2). There is no equivalent list for nonmilitary critical technologies. However, the government has added “targeting the national information infrastructure” to the National Security Threat List (NSTL). Targeting the national information infrastructure speaks primarily to the infrastructure as an object of potential disruption, whereas the MCT list contains technologies that foreign governments may want to acquire illegally.

The NSTL consists of two tables. One is a list of issues (see Exhibit 5-3); the other is a classified list of countries engaged in collection activities against the United States. This is not the same list captured in Exhibit 5-4. Exhibit 5-4 contains the names of countries engaged in economic espionage and, as such, contains the names of countries that are otherwise friendly trading partners. You will note that the entire subject of economic espionage is listed as one of the threat list issues.


Exhibit 5-2. Military Critical Technologies (MCTs).

• Information systems
• Sensors and lasers
• Electronics
• Aeronautics systems technology
• Armaments and energetic materials
• Marine systems
• Guidance, navigation and vehicle signature control
• Space systems
• Materials
• Manufacturing and fabrication
• Information warfare
• Nuclear systems technology
• Power systems
• Chemical/biological systems
• Weapons effects and counter-measures
• Ground systems
• Directed and kinetic energy systems

Exhibit 5-3. National security threat list issues.

• Terrorism
• Espionage
• Proliferation
• Economic espionage
• Targeting the national information infrastructure
• Targeting the U.S. Government
• Perception management
• Foreign intelligence activities

Exhibit 5-4. Most active collectors of economic intelligence.

• China
• Japan
• Israel
• France
• Korea
• Taiwan
• India

According to the FBI, the collection of information by foreign agencies continues to focus on U.S. trade secrets and science and technology products, particularly dual-use technologies and technologies that provide high profitability.


Examining the cases that have been made public, you can find intellectual property theft, theft of proposal information (bid amounts, key concepts), and requiring companies to participate in joint ventures to gain access to new country markets — then either stealing the IP or awarding the contract to an internal company with an identical proposal. Recently, a case involving HP found a planted employee sabotaging key benchmarking tests to HP’s detriment. The message from the HP case is that economic espionage also includes efforts beyond the collection of information, such as sabotage of the production line to cause the company to miss key delivery dates, deliver faulty parts, fail key tests, etc.

You should consider yourself a target if your company works in any of the technology areas on the MCT list, is a part of the national information infrastructure, or works in a highly competitive international business.

WHO ARE THE PLAYERS?

Countries

This section is written from the published perspective of the U.S. Government. Readers from other countries should attempt to locate a similar list from their government’s perspective. It is likely that two lists will exist: a “real” list and a “diplomatically correct” edition. For the first time since its original publication in 1998, the Annual Report to Congress on Foreign Economic Collection and Industrial Espionage 2000 lists the most active collectors of economic intelligence. The delay in providing this list publicly is due to the nature of economic espionage. To have economic espionage you must have trade. Our biggest trading partners are our best friends in the world. Therefore, a list of those engaged in economic espionage will include countries that are otherwise friends and allies. Thus the poignancy of Bernard Esambert’s quote used to open this chapter.

Companies

Stories of companies affected by economic espionage are hard to come by. Public companies fear the effect on stock prices. Invoking the economic espionage law has proven very expensive — a high risk for a favorable outcome — and even the favorable outcomes have been inadequate considering the time, money, and commitment of company resources beyond their primary business. The most visible companies are those that have been prosecuted under the Economic Espionage Act, but there have only been 20 of those, including:

• Four Pillars Company, Taiwan, stole intellectual property and trade secrets from Avery Dennison.


• Laser Devices, Inc., attempted to illegally ship laser gun sights to Taiwan without Department of Commerce authorization.
• Gilbert & Jones, Inc., New Britain, exported potassium cyanide to Taiwan without the required licenses.
• Yuen Foong Paper Manufacturing Company, Taiwan, attempted to steal the formula for Taxol, a cancer drug patented and licensed by the Bristol-Myers Squibb (BMS) Company.
• Steven Louis Davis attempted to disclose trade secrets of the Gillette Company to competitors Warner-Lambert Co., Bic, and American Safety Razor Co. The disclosures were made by fax and e-mail. Davis worked for Wright Industries, a subcontractor of the Gillette Company.
• Duplo Manufacturing Corporation, Japan, used a disgruntled former employee of Standard Duplicating Machines Corporation to gain unauthorized access into a voicemail system. The data was used to compete against Standard. Standard learned of the issue through an unsolicited phone call from a customer.
• Harold Worden attempted to sell Kodak trade secrets and proprietary information to Kodak rivals, including corporations in the People’s Republic of China. He had formerly worked for Kodak. He established his own consulting firm upon retirement and subsequently hired many former Kodak employees. He was convicted on one felony count of violating the Interstate Transportation of Stolen Property law.
• In 1977, Mitsubishi Electric bought one of Fusion Systems Corporation’s microwave lamps, took it apart, then filed 257 patent actions on its components. Fusion Systems had submitted the lamp for a patent in Japan two years earlier. After 25 years of wrangling with Mitsubishi, the Japanese patent system, Congress, and the press, Fusion’s board fired the company’s president (who had spearheaded the fight) and settled the patent dispute with Mitsubishi a year later.
• The French are known to have targeted IBM, Corning Glass, Boeing, Bell Helicopter, Northrop, and Texas Instruments (TI). In 1991, a guard in Houston noticed two well-dressed men taking garbage bags from the home of an executive of a large defense contractor. The guard ran the license number of the van and found it belonged to the French Consul General in Houston, Bernard Guillet. Two years earlier, the FBI had helped TI remove a French sleeper agent. According to Cyber Wars6 by Jean Guisnel, the French intelligence agency (the DGSE) had begun to plant young French engineers in various French subsidiaries of well-known American firms. Over the years they became integral members of the companies they had entered, some achieving positions of power in the corporate hierarchy. Guisnel claims that the primary beneficiary of these efforts was the French giant electronics firm, Bull.


WHAT HAS BEEN DONE? REAL-WORLD EXAMPLES

Partnering with a Company and Then Hacking the Systems Internally

In one case, very senior management took a bold step. In the spirit of the global community, they committed the company to use international partners for major aspects of a new product. Unfortunately, in selecting the partners, they chose companies from three countries listed as actively conducting economic espionage against their country. In the course of developing new products, the employees of one company were caught hacking sensitive systems. Security measures were increased but the employees hacked through them as well. The company of the offending partners was confronted. Its senior management claimed that the employees had acted alone and that their actions were not sanctioned. Procurement, now satisfied that their fragile quilt of partners was okay, awarded the accused partner company a lucrative new product partnership. Additionally, they erased all database entries regarding the issues and chastised internal employees who continued to voice suspicions. No formal investigation was launched. Security had no record of the incident.

There was no information security function at the time of the incident. When the information security function was established, it stumbled upon rumors that these events had occurred. In investigating, they found an internal employee who had witnessed the stolen information in use at the suspect partner’s home site. They also determined that the offending partner had a history of economic espionage, perhaps the most widely known in the world. Despite the corroboration of the partner’s complicity, line management and procurement did nothing. Procurement knew that the repercussions within their own senior management and line management would be severe because they had pressured the damaged business unit to accept the suspected partner’s earlier explanation. Additionally, it would have underscored the poor choice of partners that had occurred under their care and the fatal flaw in the partnering concept of very senior management. It was impossible to extricate the company from this relationship without causing the company to collapse. IT line management would not embrace this issue because they had dealt with it before and had been stung, although they were right all along.

Using Language to Hide in Plain Sight

Israeli Air Force officers assigned to the Recon/Optical Company passed on technical information beyond the state-of-the-art optics to a competing Israeli company, El Op Electro-Optics Industries Ltd. Information was written in Hebrew and faxed. The officers tried to carry 14 boxes out of the plant when the contract was terminated. The officers were punished upon return to Israel — for getting caught.7


In today’s multinational partnerships, language can be a significant issue for information security and for technical support. Imagine the difficulty in monitoring and supporting computers for five partners, each in a different language.

The Annual Report to Congress 20008 reveals that the techniques used to steal trade secrets and intellectual property are limitless. The insider threat, briefcase and laptop computer thefts, and searching hotel rooms have all been used in recent cases. The information collectors are using a wide range of redundant and complementary approaches to gather their target data. At border crossings, foreign officials have conducted excessive attempts at elicitation. Many U.S. citizens unwittingly serve as third-party brokers to arrange visits or circumvent official visitation procedures. Some foreign collectors have invited U.S. experts to present papers overseas to gain access to their expertise in export-controlled technologies. There have been recent solicitations to security professionals asking for research proposals for security ideas as a competition for awarding grants to conduct studies on security topics. The solicitation came from one of the most active countries engaging in economic espionage. Traditional clandestine espionage methods (such as agent recruitment, U.S. volunteers, and cooptees) are still employed. Other techniques include:

• Breaking away from tour groups
• Attempting access after normal working hours
• Swapping out personnel at the last minute
• Customs holding laptops for an extended period of time
• Requests for technical information
• Elicitation attempts at social gatherings, conferences, trade shows, and symposia
• Dumpster diving (searching a company’s trash for corporate proprietary data)
• Using unencrypted Internet messages

To these I would add holding out the prospect of lucrative sales or contracts, but requiring the surrender or sharing of intellectual property as a condition of partnering or participation.

WHAT CAN WE, AS INFORMATION SECURITY PROFESSIONALS, DO?

We must add new skills and improve our proficiency in others to meet the challenge of government funded/supported espionage. Our investigative and forensic skills need improvement over the level required for nonespionage cases. We need to be aware of the techniques that have been and may be used against us. We need to add the ability to elicit information without raising suspicion. We need to recognize when elicitation is attempted and be able to teach our sales, marketing, contracting, and executive personnel to recognize such attempts.


We need sources that tell us where elicitation is likely to occur. For example, at this time, the Paris Air Show is considered the number-one economic espionage event in the world. We need to be able to raise the awareness of our companies regarding the perceived threat and real examples from industry that support those perceptions.

Ensure that you brief the procurement department. Establish preferences for products from countries not active in economic espionage. When you must use a product from a country active in economic espionage, attempt to negotiate an indemnification against loss. Have procurement add requirements that partners/suppliers provide proof of background investigations, particularly if individuals will be on site. Management and procurement should be advised that those partners with intent to commit economic espionage are likely to complain to management that the controls are too restrictive, that they cannot do their jobs, or that their contract requires extraordinary access. You should counter these objections before they occur by fully informing management and procurement about awareness, concerns, and measures to be taken. The measures should be applied to all suppliers/partners. Ensure that these complaints and issues will be handed over to you for an official response. Treat each one individually and ask for specifics rather than generalities.

If procurement has negotiated a contract that commits the company to extraordinary access, your challenge is greater. Procurement may insist that you honor their contract. At this time you will discover where security stands in the company’s pecking order. A stance you can take is, “Your negotiated contract does not and cannot relieve me of my obligation to protect the information assets of this corporation.” It may mean that the company has to pay penalties or go back to the negotiating table. You should not have to sacrifice the security of the company’s information assets to save procurement some embarrassment.

We need to develop sources to follow developments in economic espionage in industries and businesses similar to ours. Because we are unlikely to have access to definitive sources about this kind of information, we need to develop methods to vet the information we find in open sources. The FBI provides advanced warning to security professionals through ANSIR (Awareness of National Security Issues and Responses) systems. Interested security professionals for U.S. corporations should provide their e-mail addresses, positions, company names and addresses, and telephone and fax numbers to [email protected]. A representative of the nearest field division office will contact you. The FBI has also created InfraGard (http://www.infragard.net/fieldoffice.htm) chapters for law enforcement and corporate security professionals to share experiences and advice.9


InfraGard is dedicated to increasing the security of the critical infrastructures of the United States. All InfraGard participants are committed to the proposition that a robust exchange of information about threats to and actual attacks on these infrastructures is an essential element in successful infrastructure protection efforts. The goal of InfraGard is to enable information flow so that the owners and operators of infrastructures can better protect themselves and so that the U.S. Government can better discharge its law enforcement and national security responsibilities.

BARRIERS ENCOUNTERED IN ATTEMPTS TO ADDRESS ECONOMIC ESPIONAGE

A country is made up of many opposing and cooperating forces. Related to economic espionage, for information security, there are two significant forces. One force champions the businesses of that country. Another force champions the relationships of that country to other countries. Your efforts to protect your company may be hindered by the effect of the opposition of those two forces. This was evident in the first few reports to Congress by the FBI on economic espionage. The FBI was prohibited from listing even the countries that were most active in conducting economic espionage. There is no place in the U.S. Government that you can call to determine if a partner you are considering has a history of economic espionage, or if a software developer has been caught with backdoors, placing Trojans, etc.

You may find that, in many cases, the FBI interprets the phrase information sharing to mean that you share information with them. In one instance, a corporate investigator gave an internal e-mail that was written in Chinese to the FBI, asking that they translate it. This was done to keep the number of individuals involved in the case to a minimum. Unless you know the translator and his background well, you run the risk of asking someone that might have ties to the Chinese to perform the translation. Once the translation was performed, the FBI classified the document as secret and would not give the investigator the translated version until the investigator reasoned with them that he would have to translate the document with an outside source unless the FBI relented.

Part of the problem facing the FBI is that there is no equivalent to a DoD or DoE security clearance for corporate information security personnel. There are significant issues that complicate any attempt to create such a clearance. A typical security clearance background check looks at criminal records. Background investigations may go a step further and check references, interview old neighbors, schoolmates, colleagues, etc. The most rigorous clearance checks include viewing bank records, credit records, and other signs of fiscal responsibility. They may include a psychological evaluation.


They are not permitted to include issues of national origin or religion unless the United States is at war with a particular country. In those cases, the DoD has granted the clearance but placed the individuals in positions that would not create a conflict of interest. In practice, this becomes impossible. Do you share information about all countries and religious groups engaging in economic espionage, except for those to which the security officer may have ties? Companies today cannot ask those questions of their employees. Unfortunately, unless a system of clearances is devised, the FBI will always be reluctant to share information, and rightfully so.

Another aspect of the problem facing the FBI is the multinational nature of today’s corporations. What exactly is a U.S. corporation? Many companies today were conceived in foreign countries but established their corporate headquarters in the United States, ostensibly to improve their competitiveness in the huge U.S. marketplace. What of U.S. corporations that are wholly owned by foreign corporations? Should they be entitled to assistance, to limited assistance, or to no assistance? If limited assistance, how are the limits determined?

Within your corporation there are also opposing and cooperating forces. One of the most obvious is the conflict between marketing/sales and information security. In many companies, sales and marketing personnel are the most highly paid and influential people in the company. They are, in most cases, paid largely by commission. This means that if they do not make the sale, they do not get paid. They are sometimes tempted to give the potential customer anything they want, in-depth tours of the plant, details on the manufacturing process, etc., in order to make the sale. Unless you have a well-established and accepted information protection guide that clearly states what can and cannot be shared with these potential customers, you will have little support when you try to protect the company.

The marketing department may have such influence that they cause your procurement personnel to abandon reason and logic in the selection of critical systems and services. A Canadian company went through a lengthy procurement process for a massive wide area network contract. An RFP was released. Companies responded. A selection committee met and identified those companies that did not meet the RFP requirements. Only those companies that met the RFP requirements were carried over into the final phase of the selection process. At this point, marketing intervened and required that procurement re-add two companies to the final selection process — companies that had not met the requirements of the RFP. These two companies purchased high product volumes from this plant. Miracle of miracles, one of the two unqualified companies won the contract.


It is one thing for the marketing department to request that existing customers be given some preference from the list of qualified finalists. It is quite another to require that unqualified respondents be given any consideration.

A product was developed in a country that conducts economic espionage operations against U.S. companies in your industry sector. This product was widely used throughout your company, leaving you potentially vulnerable to exploitation or exposed to a major liability. When the issue was raised, management asked if this particular product had a Trojan or evidence of malicious code. The security officer responded, “No, but due to the nature of this product, if it did contain a Trojan or other malicious code, it could be devastating to our company. Because there are many companies that make this kind of product in countries that do not conduct economic espionage in our industry sector, we should choose one of those to replace this one and thus avoid the risk.” Management’s response was surprising. “Thank you very much, but we are going to stay with this product and spread it throughout the corporation — but do let us know if you find evidence of current backdoors and the like.”

One day the security team learned that, just as feared, there had indeed been a backdoor, in fact several. The news was reported to management. Their response was unbelievable. “Well, have they fixed it?” The vendor claimed to have fixed it, but that was not the point. The point was that they had placed the code in the software to begin with, and there was no way to tell if they had replaced the backdoor with another. Management responded, “If they have fixed the problem, we are going to stay with the product, and that is the end of it. Do not bring this subject up again.” In security you must raise every security concern that occurs with a product, even after management has made up its mind. To fail to do so would set the company up for charges of negligence should a loss occur that relates to that product. “Doesn’t matter, do not raise this subject again.”

So why would management make a decision like this? One possible answer has to do with pressure from marketing and potential sales to that country. Another has to do with embarrassment. Some vice president or director somewhere made a decision to use the product to begin with. They may even have had to fall on a sword or two to get the product they wanted. Perhaps it is because a more powerful director had already chosen this product for his site. This director may have forced the product’s selection as the corporate standard so that staff would not be impacted. One rumor has it that the product was selected as a corporate standard because the individual choosing the standard was being paid a kickback by a relative working for a third-party vendor of the product. If your IT department raises the issue, it runs the risk of embarrassing one or more of these senior managers and incurring their wrath. Your director may feel intimidated enough that he will not even raise the issue.


Even closer to home is the fact that the issue was raised to your management in time to prevent the spread of the questionable product throughout the corporation. Now if the flag is raised, someone may question why it was not raised earlier. That blame would fall squarely on your director’s shoulders. Does it matter that both the vice president and the director have fiduciary responsibility for losses related to these decisions should they occur? Does it matter that their decisions would not pass the prudent man test and thus place them one step closer to being found negligent? No, it does not. The director is accepting the risk — not the risk to the corporation, but the risk that damage might occur during his watch. The vice president probably does not know about the issue or the risks involved but could still be implicated via the concept of respondeat superior. The director may think he is protecting the vice president by keeping him out of the loop — the concept of plausible deniability — but the courts have already tackled that one. Senior management is responsible for the actions of those below them, regardless of whether they know about the actions.

Neither of these cases exists if the information security officer reports to the CEO. There is only a small opportunity for it to exist if the information security officer reports to the CIO. As the position sinks in the management structure, the opportunity for this type of situation increases.

The first time you raise the specter of economic espionage, you may encounter resistance from employees and management. “Our company isn’t like that. We don’t do anything important. No one I know has ever heard of anything like that happening here. People in this community trust one another.” Some of those who have been given evidence that such a threat does exist have preferred to ignore the threat, for to acknowledge it would require them to divert resources (people, equipment, or money) from their own initiatives and goals. They would prefer to “bet the company” that it would not occur while they are there. After they are gone it no longer matters to them.

When you raise these issues as the information security officer, you are threatening the careers of many people — from the people who went along with it because they felt powerless to do anything, to the senior management who proposed it, to the people in between who protected the concept and decisions of upper management in good faith to the company. Without a communication path to the CEO and other officers representing the stockholders, you do not have a chance of fulfilling your fiduciary responsibility to them.


The spy of the future is less likely to resemble James Bond, whose chief assets were his fists, than the Line X engineer who lives quietly down the street and never does anything more violent than turn a page of a manual or flick on his computer.

— Alvin Toffler, Power Shift: Knowledge, Wealth and Violence at the Edge of the 21st Century


ABOUT THE AUTHOR

Craig Schiller, CISSP, an information security consultant for Hawkeye Security, is the principal author of the first published edition of Generally Accepted System Security Principles.




Domain 2 Telecommunications and Network Security


This domain is certainly the most technical as well as the most volatile of the ten. It is also the one that attracts the most questions on the CISSP examination. As before, we devote a major amount of effort to assembling pertinent chapters that can enable readers to keep up with the security issues involved with this rapidly evolving area.

Section 2.1 deals with communications and network security. Chapter 6 addresses SNMP security. The Simple Network Management Protocol (SNMP) provides for monitoring network and computing devices everywhere. The chapter defines SNMP and discusses its operation. It then explains the inherent security issues, most resulting from system and network administrators’ failure to change default values — which could lead to denial-of-service attacks or other availability issues.

Section 2.2 focuses on Internet, intranet, and extranet security. Chapter 7 addresses the security issues resulting from the advent of high-speed, broadband Internet access. Broadband access methods are thoroughly discussed and the related security risks described. Achieving broadband security in view of its rapidly increasing popularity is explained as difficult but not impossible. Chapter 8 provides new perspectives on the use of VPNs. With the growth of broadband, more companies are using VPNs for remote access and telecommuting, and they already are widely used to protect data transiting insecure networks. Several new mechanisms are identified that add to the feasibility of increased use of VPN technology. Following that, Chapter 9 examines firewall architectures, complete with a review of the fundamentals of firewalls, the basic types, and their pros and cons. This chapter explains in detail the various kinds of firewalls available today and comes to some excellent conclusions. Chapter 10 presents a case study of the use of personal firewalls as host-based firewalls to provide layered protection against the wide spectrum of attacks mounted against hosts from networks. The conclusions from the case study contain some surprising advantages discovered for the use of personal firewall technology in a host environment. Chapter 11 deals with wireless security vulnerabilities — probably the most frequently discussed issue we face these days. The author describes the three IEEE wireless LAN standards and their common security issues. The security mechanisms available (network name, authentication, and encryption) all have security problems. This chapter is a must for those using or intending to use wireless LANs.

Section 2.3 covers secure voice communication, an area to which we have not paid much attention previously, but one that is nevertheless quite important in the field of information security. Chapter 12 points out that, although we spend most of our security resources on the protection of electronic information, we are losing millions of dollars annually to voice and telecommunications fraud. The terminology related to voice communication is clarified, and the security issues are discussed in detail. It is also pointed out that the next set of security challenges is Voice-over-IP. Chapter 13 talks about secure voice communications. Events are driving a progressive move toward convergence of voice over some combination of ATM, IP, and MPLS. New security mechanisms will be necessary that include encryption and security services. This chapter reviews architectures, protocols, features, quality-of-service, and security issues related to both landline and wireless voice communication and then examines the convergence aspects.

The final section in this rather large domain is probably the most interesting to information security professionals because it addresses network attacks and countermeasures — our bread-and-butter concerns. There are two chapters in this section. The first deals with packet sniffers. The use and misuse of packet sniffers has been a big security concern for many years. Here we have a complete description of what they are, how they work, an example, and their legitimate uses. Then we go on to describe their misuse and why they are such a security concern. Ways to reduce this serious risk are described. You must be aware of the inherent dangers associated with sniffers and the methods to mitigate their threat. The second chapter discusses the various types of denial-of-service attacks and why they matter to us in the security world; much of the answer lies in their relationship to the growth and success of ISPs. We focus on what ISPs can do about these attacks, particularly with respect to the newest rage — distributed denial-of-service attacks.

The chapters in the telecommunications and network security domain contain extremely important information for any organization involved in this technology.




Chapter 6

What’s Not So Simple about SNMP?

Chris Hare, CISSP, CISA

The Simple Network Management Protocol, or SNMP, is a defined Internet standard from the Internet Engineering Task Force, as documented in Request for Comment (RFC) 1157. This chapter discusses what SNMP is, how it is used, and the challenges facing network management and security professionals regarding its use.

While several SNMP applications are mentioned in this chapter, no support or recommendation of these applications is made or implied. As with any application, the enterprise must select its SNMP application based upon its individual requirements.

SNMP DEFINED

SNMP is used to monitor network and computer devices around the globe. Simply stated, network managers use SNMP to communicate management information, both status and configuration, between the network management station and the SNMP agents in the network devices. The protocol is aptly named because, despite the intricacies of a network, SNMP itself is very simple.

Before examining the architecture, a review of the terminology used is required:

• Network element: any device connected to the network, including hosts, gateways, servers, terminal servers, firewalls, routers, switches, and active hubs
• Network management station (or management station): a computing platform with SNMP management software to monitor and control the network elements; examples of common management stations are HP OpenView and CA Unicenter
• SNMP agent: a software management agent responsible for performing the network management functions received from the management station



[Exhibit 6-1. The SNMP network manager: the management station sends requests to, and receives traps from, the network devices running SNMP agents.]

• SNMP request: a message sent from the management station to the SNMP agent on the network device
• SNMP trap receiver: the software on the management station that receives event notification messages from the SNMP agent on the network device
• Management information base (MIB): a standard method of identifying the elements in the SNMP database

A network configured to use SNMP for the management of network devices consists of at least one SNMP agent and one management station. The management station is used to configure the network elements and receive SNMP traps from those elements. Through SNMP, the network manager can monitor the status of the various network elements, make appropriate configuration changes, and respond to alerts received from the network elements (see Exhibit 6-1).

As networks increase in size and complexity, a centralized method of monitoring and management is essential. Multiple management stations may exist and be used to compartmentalize the network structure or to regionalize operations of the network. SNMP can retrieve the configuration information for a given network element in addition to device errors or alerts. Error conditions will vary from one SNMP agent to another but would include network interface failures, system failures, disk space warnings, etc. When the device issues an alert to the management station, network management personnel can investigate to resolve the problem.

Access to systems is controlled through knowledge of a community string, which can be compared to a password. Community strings are discussed in more detail later in the chapter, but by themselves should not be considered a form of authentication.
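As a simple illustration of this exchange, the command-line tools shipped with most SNMP agent packages can query a single value from a device. The sketch below uses the net-snmp (formerly ucd-snmp) snmpget utility; the device address and community string are placeholders, and older tool versions accept the host and community as positional arguments rather than the -v/-c options shown here:

    # Ask the device for its system description, using SNMP version 1
    # and the read-only community string configured on the agent
    snmpget -v 1 -c public 192.168.0.1 sysDescr.0

If the community string is wrong, the agent simply does not answer (and may log an authentication-failure trap), which is why the community string behaves more like a shared password than true authentication.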


From time to time it is necessary for the management station to send configuration requests to the device. If the correct community string is provided, the device configuration is changed appropriately. Even this simple explanation shows the value gained from SNMP: an organization can monitor the status of all its equipment and perform remote troubleshooting and configuration management.

THE MANAGEMENT INFORMATION BASE (MIB)

The MIB defines the scope of information available for retrieval or configuration on the network element. There is a standard MIB all devices should support. The manufacturer of the device can also define custom extensions to support additional configuration parameters. The definition of MIB extensions must follow a defined convention for the management stations to understand and interpret the MIB correctly. The MIB is expressed using the ASN.1 language; and, while important to be aware of, it is not a major concern unless you are specifically designing new elements for the MIB.

All MIB objects are defined explicitly in the Internet standard MIB or through a defined naming convention. The defined naming convention constrains how product vendors create individual instances of an MIB element for a particular network device. This is important, given the wide number of SNMP-capable devices and the relatively small range of monitoring station equipment. An understanding of the MIB beyond this point is only necessary for network designers who must concern themselves with the actual MIB structure and representations. Suffice it to say that, for this discussion, the MIB components are represented using English identifiers.

SNMP OPERATIONS

All SNMP agents must support both inspection and alteration of the MIB variables. These operations are referred to as SNMP get (retrieval and inspection) and SNMP set (alteration). The developers of SNMP established only these two operations to minimize the number of essential management functions to support and to avoid the introduction of other imperative management commands. Most network protocols have evolved to support a vast array of potential commands, which must be available in both the client and the server. The File Transfer Protocol (FTP) is a good example of a simple command set that has evolved to include more than 74 commands.

The SNMP management philosophy uses the management station to poll the network elements for appropriate information. SNMP uses traps to send messages from the agent running on the monitored system to the monitoring station, which are then used to control the polling.


Limiting the number of messages between the agent and the monitoring station achieves the goal of simplicity and minimizes the amount of traffic associated with the network management functions. As mentioned, limiting the number of commands makes implementing the protocol easier: it is not necessary to develop an operating system interface for each imperative command. For example, rather than issuing a reboot command, the management station can set a variable whose value forces a restart after a defined time period has elapsed.

The interaction between the SNMP agent and management station occurs through the exchange of protocol messages. Each message has been designed to fit within a single User Datagram Protocol (UDP) packet, thereby minimizing the impact of the management structure on the network.

ADMINISTRATIVE RELATIONSHIPS

The management of network elements requires an SNMP agent on the element itself and a management station. The grouping of SNMP agents to a management station is called a community. The community string is the identifier used to distinguish among communities in the same network. The SNMP RFC specifies an authentic message as one in which the correct community string is provided to the network device from the management station. The authentication scheme consists of the community string and a set of rules to determine whether the message is in fact authentic. Finally, the SNMP authentication service describes a function identifying an authentic SNMP message according to the established authentication schemes.

Administrative relationships, called communities, pair monitored devices with a management station. Through this scheme, administrative relationships can be separated among devices. The agent and management station defined within a community establish the SNMP access policy. Management stations can communicate directly with the agent or, depending on the network design, through an SNMP proxy agent. The proxy agent relays communications between the monitored device and the management station. The use of proxy agents allows communication with all network elements, including modems, multiplexors, and other devices that support different management frameworks. Additional benefits of the proxy agent design include shielding network elements from access policies, which might be complex.

The community string establishes the access policy community to use, and it can be compared to a password. The community string establishes the password to access the agent in either read-only mode, commonly referred to as the public community, or read-write mode, known as the private community.


SNMP REQUESTS

There are two access modes within SNMP: read-only and read-write. The command used, the variable, and the community string determine the access mode. Corresponding with the access modes are two community strings, one for each mode. Access to a variable and the associated action is controlled by the following rules:

• If the variable is defined with an access type of none, the variable is not available under any circumstances.
• If the variable is defined with an access type of read-write or read-only, the variable is accessible for the appropriate get, set, or trap commands.
• If the variable does not have an access type defined, it is available for get and trap operations.

However, these rules only establish what actions can be performed on the MIB variable. The actual communication between the SNMP agent and the monitoring station follows a defined protocol for message exchange. Each message includes the:

• SNMP version identifier
• Community string
• Protocol data unit (PDU)

The SNMP version identifier establishes the version of SNMP in use — Version 1, 2, or 3. As mentioned previously, the community string determines which community is accessed, either public or private. The PDU contains the actual SNMP trap or request. With the exception of traps, which are reported on UDP port 162, all SNMP requests are received on UDP port 161. RFC 1157 specifies that protocol implementations need not accept messages more than 484 bytes in length, although in practice a longer message length is typically supported.

There are five PDUs supported within SNMP:

1. GetRequest-PDU
2. GetNextRequest-PDU
3. GetResponse-PDU
4. SetRequest-PDU
5. Trap-PDU

When transmitting a valid SNMP request, the PDU is constructed from the requested operation and the MIB variable expressed in ASN.1 notation. The ASN.1-encoded request, the source and destination IP addresses, and the UDP ports are included along with the community string. Once processed, the resulting request is sent to the receiving system.
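The English identifiers used throughout this chapter are shorthand for numeric ASN.1 object identifiers (OIDs), and the mapping can be inspected with the snmptranslate utility from the net-snmp (formerly ucd-snmp) tool set. A brief sketch follows; the MIB module prefix shown is the one used by current net-snmp releases and may differ on older installations:

    # Show the numeric OID behind the familiar sysDescr.0 identifier
    snmptranslate -On SNMPv2-MIB::sysDescr.0
    # Translate in the other direction, from numeric OID to the full symbolic path
    snmptranslate -Of .1.3.6.1.2.1.1.1.0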


[Exhibit 6-2. The SNMP transmission process: the management station sends an SNMP request containing the version, variable, and community string; the receiving system returns the requested information, or a trap with an error.]

As shown in Exhibit 6-2, the receiving system accepts the request and assembles an ASN.1 object. The message is discarded if the decoding fails. If implemented correctly, this discard function should cause the receiving system to ignore malformed SNMP requests. Similarly, the SNMP version is checked; and if there is a mismatch, the packet is also dropped. The request is then authenticated using the community string. If the authentication fails, a trap may be generated indicating an authentication failure, and the packet is dropped. If the message is accepted, the object is again parsed to assemble the actual request. If the parse fails, the message is dropped. If the parse is successful, the appropriate SNMP profile is selected using the named community, and the message is processed. Any resulting data is returned to the source address of the request.

THE PROTOCOL DATA UNIT

As mentioned, there are five protocol data units supported. Each is used to implement a specific request within the SNMP agent and management station. Each will be briefly examined to review purpose and functionality.

The GetRequest PDU requests information to be retrieved from the remote device. The management station uses the GetRequest PDU to make queries of the various network elements. If the MIB variable specified is matched exactly in the network element MIB, the value is returned using the GetResponse PDU. We can see the direct results of the GetRequest and GetResponse messages using the snmpwalk command commonly found on Linux systems:

    [chare@linux chare]$ for host in 1 2 3 4 5
    > do
    > snmpwalk 192.168.0.$host public system.sysDescr.0
    > done
    system.sysDescr.0 = Instant Internet version 7.11.2
    Timeout: No Response from 192.168.0.2
    system.sysDescr.0 = Linux linux 2.4.9-31 #1 Tue Feb 26 07:11:02 EST 2002 i686
    Timeout: No Response from 192.168.0.4
    Timeout: No Response from 192.168.0.5
    [chare@linux chare]$

Despite the existence of a device at all five IP addresses in the above range, only two are configured to provide a response; or perhaps the SNMP community string provided was incorrect. Note that, on those systems where snmpwalk is not installed, the command is available in the ucd-snmp (now net-snmp) source code available from many network repositories.

The GetResponse PDU is the protocol type containing the response to the request issued by the management station. Each GetRequest PDU results in a response using GetResponse, regardless of the validity of the request.

The GetNextRequest PDU is identical in form to the GetRequest PDU, except that it is used to get additional information from a previous request. Table traversals through the MIB are typically done using the GetNextRequest PDU. For example, using the snmpwalk command, we can traverse the entire table using the command:

    # snmpwalk localhost public
    system.sysDescr.0 = Linux linux 2.4.9-31 #1 Tue Feb 26 07:11:02 EST 2002 i686
    system.sysObjectID.0 = OID: enterprises.ucdavis.ucdSnmpAgent.linux
    system.sysUpTime.0 = Timeticks: (4092830521) 473 days, 16:58:25.21
    system.sysContact.0 = root@localhost
    system.sysName.0 = linux
    system.sysLocation.0 = Unknown
    system.sysORLastChange.0 = Timeticks: (4) 0:00:00.04
    …

In our example, no specific MIB variable is requested, which causes all MIB variables and their associated values to be printed. This generates a large amount of output from snmpwalk. Each variable is retrieved until there is no additional information to be received.

Aside from the requests to retrieve information, the management station also can set selected variables to new values. This is done using the SetRequest PDU. When receiving the SetRequest PDU, the receiving station has several valid responses:


• If the named variable cannot be changed, the receiving station returns a GetResponse PDU with an error code.
• If the value does not match the named variable type, the receiving station returns a GetResponse PDU with a bad value indication.
• If the request exceeds a local size limitation, the receiving station responds with a GetResponse PDU with an indication of “too big.”
• If the named variable cannot be altered and is not covered by the preceding rules, a general error message is returned by the receiving station using the GetResponse PDU.

If there are no errors in the request, the receiving station updates the value for the named variable. The typical read-write community is called private, and the correct community string must be provided for this access. If the value is changed, the receiving station returns a GetResponse PDU with a “No error” indication. As discussed later in this chapter, if the SNMP read-write community string is the default or set to another well-known value, any user can change MIB parameters and thereby affect the operation of the system.

SNMP TRAPS

SNMP traps are used to send an event back to the monitoring station. The trap is transmitted at the initiative of the agent and sent to the device specified in the SNMP configuration files. While the use of traps is universal across SNMP implementations, the means by which the SNMP agent determines where to send the trap differs among SNMP agent implementations. There are several traps available to send to the monitoring station:

• coldStart
• warmStart
• linkDown
• linkUp
• authenticationFailure
• egpNeighborLoss
• enterpriseSpecific

Traps are sent using the Trap-PDU, which is similar in structure to the other message types previously discussed. The coldStart trap is sent when the system is initialized from a powered-off state and the agent is reinitializing. This trap indicates to the monitoring station that the SNMP implementation may have been or may be altered. The warmStart trap is sent when the system restarts, causing the agent to reinitialize. In a warmStart trap event, neither the SNMP agent’s implementation nor its configuration is altered.
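When building or testing a trap receiver, it can be useful to generate a trap on demand rather than waiting for a real event. The net-snmp snmptrap utility can do this; the sketch below uses the simpler SNMPv2c syntax, the management station address and community string are placeholders, and the empty argument tells the tool to fill in the agent's current uptime:

    # Send a coldStart notification to the trap receiver listening on UDP port 162
    snmptrap -v 2c -c public 192.168.0.10 '' SNMPv2-MIB::coldStart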



Exhibit 6-3. Router with multiple network interfaces.

Most network management personnel are familiar with the linkDown and linkUp traps. The linkDown trap is generated when the SNMP agent recognizes a failure of one or more of the network links in the SNMP agent’s configuration. Similarly, when a communication link is restored, the linkUp trap is sent to the monitoring station. In both cases, the trap indicates the network link where the failure or restoration has occurred. Exhibit 6-3 shows a device, in this case a router, with multiple network interfaces, as seen in a network management station. The failure of the red interface (shown here in black) caused the router to send a linkDown trap to the management station, resulting in the change in color for the object. The green objects (shown in white) represent currently operational interfaces.

The authenticationFailure trap is generated when the SNMP agent receives a message with an incorrect community string, meaning the attempt to access the SNMP community has failed. When the SNMP agent communicates in an Exterior Gateway Protocol (EGP) relationship and the peer is no longer reachable, an egpNeighborLoss trap is generated to the management station. This trap means routing information available from the EGP peer is no longer available, which may affect other network connectivity. Finally, the enterpriseSpecific trap is generated when the SNMP agent recognizes that an enterprise-specific event has occurred. This is implementation dependent and includes the specific trap information in the message sent back to the monitoring station.

SNMP SECURITY ISSUES

The preceding brief introduction to SNMP should raise a few issues for the security professional.


As mentioned, the default SNMP community strings are public for read-only access and private for read-write. Most system and network administrators do not change these values. Consequently, any user, authorized or not, can obtain information through SNMP about the device and potentially change or reset values. For example, if the read-write community string is the default, any user can change the device’s IP address and take it off the network. This can have significant consequences, most notably surrounding the availability of the device. It is not typically possible to access enterprise information or system passwords, or to gain command line or terminal access, using SNMP; the principal impact of unauthorized changes is that the monitoring station identifies the device as unavailable, forcing corrective action to restore service. The common SNMP security issues are:

• Well-known default community strings
• Ability to change the configuration information on the system where the SNMP agent is running
• Multiple management stations managing the same device
• Denial-of-service attacks

Many security and network professionals are undoubtedly familiar with the Computer Emergency Response Team (CERT) Advisory CA-2002-03 published in February 2002. While this is of particular interest to the network and security communities today, it should not overshadow the other issues listed above, because many of the problems described in CA-2002-03 are exploitable as a result of those same issues.

Well-Known Community Strings

As mentioned previously, there are two SNMP access policies, read-only and read-write, using the default community strings of public and private, respectively. Many organizations do not change the default community strings. Failing to change the default values means it is possible for an unauthorized person to change the configuration parameters associated with the device. Consequently, SNMP community strings should be treated as passwords. The better the quality of the password, the less likely an unauthorized person could guess the community string and change the configuration.

Ability to Change SNMP Configuration

On many systems, users who have administrative privileges can change the configuration of their system, even if they have no authority to do so. This ability to change the local SNMP agent configuration can affect the operation of the system, cause network management problems, or affect the operation of the device.
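To make the risk concrete, the sketch below shows how a single net-snmp snmpset command alters a device setting when the read-write community string is known; the target address and contact value are placeholders, and sysContact is chosen only because it is a writable object in the standard MIB-II system group:

    # Change the device's contact information using the read-write community.
    # If the default community string "private" still works, anyone on the
    # network can issue the same kind of command against more sensitive objects.
    snmpset -v 1 -c private 192.168.0.1 sysContact.0 s "helpdesk@example.com"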


Consequently, SNMP configuration files should be controlled and, if possible, centrally managed to identify and correct configuration changes. This can be done in a variety of ways, including tools such as Tripwire.

Multiple Management Stations

While this is not a security problem per se, multiple management stations polling the same device can cause problems ranging from poor performance, to differing SNMP configuration information, to the apparent loss of service. If your network is large enough to require multiple management stations, separate communities should be established to prevent these events from taking place. Remember, there is no constraint on the number of SNMP communities that can be used in the network; it is only the network engineer who imposes the limits.

Denial-of-Service Attacks

Denial of service is defined as the loss of service availability through either authorized or unauthorized configuration changes. It is important to be clear about authorized and unauthorized changes: the system or application administrator who makes a configuration change as part of his job and causes a loss of service has the same impact as the attacker who executes a program to cause the loss of service remotely. A key problem with SNMP is the ability to change the configuration of the system, causing the service outage, or to change the SNMP configuration and imitate a denial of service as reported by the monitoring station. In either situation, someone has to review and possibly correct the configuration problem, regardless of the cause. This has a cost to the company, even if an authorized person made the change.

The Impact of CERT CA-2002-03

Most equipment manufacturers, enterprises, and individuals felt the impact of the CERT advisory issued by the Carnegie Mellon Software Engineering Institute (CM-SEI) Computer Emergency Response Team Coordination Center (CERT-CC). The advisory was issued after the Oulu University Secure Programming Group conducted a very thorough analysis of the message-handling capabilities of SNMP Version 1. While the advisory is specifically for SNMP Version 1, most SNMP implementations use the same program code for decoding the PDU, potentially affecting all SNMP versions. The primary issues noted in the advisory involve the potential for unauthorized privileged access, denial-of-service attacks, or other unstable behavior.


Specifically, the work performed by Oulu University found problems with decoding trap messages received by the SNMP management station or requests received by the SNMP agent on the network device. It was also identified that some of the vulnerabilities found in the SNMP implementations did not require the correct community string. Consequently, vendors have been issuing patches for their SNMP implementations; but more importantly, enterprises have been testing for vulnerabilities within their networks. The vulnerabilities in code that has been in use for decades will cost developers millions of dollars in new development activities to remove the vulnerabilities, verify the fixes, and release patches. The users of those products will also spend millions of dollars on patching and implementing other controls to limit the potential exposures.

Many of the recommendations provided by CERT for addressing the problem are solutions for the common security problems encountered when using SNMP. The recommendations can be considered common sense, because SNMP should be treated like any other network service:

• Disable SNMP. If the device in question is not monitored using SNMP, it is likely safe to disable the service. Remember, if you are monitoring the device and disable SNMP in error, your management station will report the device as down.
• Implement perimeter network filtering. Most enterprises should filter inbound SNMP requests from external networks to prevent unauthorized individuals or organizations from retrieving SNMP information about your network devices. Sufficient information exists in the SNMP data to provide a good view of how to attack your enterprise. In addition, outbound filtering should be applied to prevent SNMP requests from leaving your network and being directed at another enterprise. The obvious exceptions here are if you are monitoring another network outside yours, or if an external organization is providing SNMP-based monitoring systems for your network.
• Implement authorized SNMP host filtering. Not every user who wants to should be able to issue SNMP queries to the network devices. Consequently, filters can be installed in network devices such as routers and switches to limit the source and destination addresses for SNMP requests. Additionally, the SNMP configuration of the agent should include the appropriate details to limit the authorized SNMP management and trap stations.
• Change default community strings. A major problem in most enterprises, the default community strings of public and private should be changed to a complex string, and knowledge of that string should be limited to as few people as possible.
• Create a separate management network. This can be a long, involved, and expensive process that many enterprises do not undertake.


A separate management network keeps connectivity to the network devices even when there is a failure on the production network. However, it requires a completely separate infrastructure, making it expensive to implement and difficult to retrofit. If you are building a new network, or have an existing network with critical operational requirements, a separate management network is highly advisable.

The recommendations identified here should be implemented by many enterprises, even if all their network devices have the latest patches installed. Implementing these techniques for other network protocols and services in addition to SNMP can greatly reduce the risk of unauthorized network access and data loss.

SUMMARY

The goal of SNMP is to provide a simple yet powerful mechanism to change the configuration and monitor the state and availability of systems and network devices. However, the nature of SNMP, as with other network protocols, also exposes it to attack and improper use by network managers, system administrators, and security personnel. Understanding the basics of SNMP and the major security issues affecting its use, as discussed here, helps the security manager communicate concerns about network design and implementation with the network manager or network engineer.

Acknowledgments

The author thanks Cathy Buchanan of Nortel Networks’ Internet Engineering team for her editorial and technical clarifications. And thanks to Mignona Cote, my friend and colleague, for her continued support and ideas. Her assistance continues to expand my vision and provides challenges on a daily basis.

References

Internet Engineering Task Force (IETF) Request for Comments (RFC) documents:

RFC-1089 SNMP over Ethernet
RFC-1157 A Simple Network Management Protocol (SNMP)
RFC-1187 Bulk Table Retrieval with the SNMP
RFC-1215 Convention for Defining Traps for Use with the SNMP
RFC-1227 SNMP MUX Protocol and MIB
RFC-1228 SNMP-DPI: Simple Network Management Protocol Distributed Program
RFC-1270 SNMP Communications Services
RFC-1303 A Convention for Describing SNMP-Based Agents



RFC-1351 SNMP Administrative Model
RFC-1352 SNMP Security Protocols
RFC-1353 Definitions of Managed Objects for Administration of SNMP
RFC-1381 SNMP MIB Extension for X.25 LAPB
RFC-1382 SNMP MIB Extension for the X.25 Packet Layer
RFC-1418 SNMP over OSI
RFC-1419 SNMP over AppleTalk
RFC-1420 SNMP over IPX
RFC-1461 SNMP MIB Extension for Multiprotocol Interconnect over X.25
RFC-1503 Algorithms for Automating Administration in SNMPv2 Managers
RFC-1901 Introduction to Community-Based SNMPv2
RFC-1909 An Administrative Infrastructure for SNMPv2
RFC-1910 User-Based Security Model for SNMPv2
RFC-2011 SNMPv2 Management Information Base for the Internet Protocol
RFC-2012 SNMPv2 Management Information Base for the Transmission Control Protocol
RFC-2013 SNMPv2 Management Information Base for the User Datagram Protocol
RFC-2089 V2ToV1 Mapping SNMPv2 onto SNMPv1 within a Bi-Lingual SNMP Agent
RFC-2273 SNMPv3 Applications
RFC-2571 An Architecture for Describing SNMP Management Frameworks
RFC-2573 SNMP Applications
RFC-2742 Definitions of Managed Objects for Extensible SNMP Agents
RFC-2962 An SNMP Application-Level Gateway for Payload Address
CERT Advisory CA-2002-03

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.



Chapter 7

Security for Broadband Internet Access Users

James Trulove

High-speed access is becoming increasingly popular for connecting to the Internet and to corporate networks. The term “high-speed” is generally taken to mean transfer speeds above the 56 kbps of analog modems, or the 64 to 128 kbps speeds of ISDN. There are a number of technologies that provide transfer rates from 256 kbps to 1.544 Mbps and beyond. Some offer asymmetrical uplink and downlink speeds that may go as high as 6 Mbps. These high-speed access methods include DSL, cable modems, and wireless point-to-multipoint access.

DSL services include all of the so-called “digital subscriber line” access methods that utilize conventional copper telephone cabling for the physical link from the customer premises to the central office (CO). The most popular of these methods is ADSL, or asymmetrical digital subscriber line, where an existing POTS (plain old telephone service) dial-up line does double duty by having a higher frequency digital signal multiplexed over the same pair. Filters at the user premises and at the central office tap off the digital signal and send it to the user’s PC and the CO router, respectively. The actual transport of the ADSL data is via ATM, a factor invisible to the user, who is generally using TCP/IP over Ethernet.

A key security feature of DSL service is that the transport medium (one or two pairs) is exclusive to a single user. In a typical neighborhood of homes or businesses, individual pairs from each premises are, in turn, consolidated into larger cables of many pairs that run eventually to the service provider’s CO. As with a conventional telephone line, each user is isolated from other users in the neighborhood. This is inherently more secure than competing high-speed technologies. The logical structure of an ADSL distribution within a neighborhood is shown in Exhibit 7-1A.



Cable modems (CMs) allow a form of high-speed shared access over the media used for cable television (CATV) delivery. Standard CATV video channels are delivered over a frequency range from 54 MHz to several hundred megahertz. Cable modems simply use a relatively narrow band of those frequencies that is unused for TV signal delivery. CATV signals are normally delivered through a series of in-line amplifiers and signal splitters to a typical neighborhood cable segment. Along each of these final segments, additional signal splitters (or taps) distribute the CATV signals to users. Adding two-way data distribution to the segment is relatively easy because splitters are inherently two-way devices and no amplifiers are within the segment. However, the uplink signal from users in each segment must be retrieved at the head of the segment and either repeated into the next up-line segment or converted and transported separately.

As shown in Exhibit 7-1B, each neighborhood segment is along a tapped coaxial cable (in most cases) that terminates in a common-equipment cabinet (similar in design to the subscriber-line interface cabinets used in telephone line multiplexing). This cabinet contains the equipment to filter off the data signal from the neighborhood coax segment and transport it back to the cable head end. Alternative data routing may be provided between the common-equipment cabinets and the NOC (network operations center), often over fiber-optic cables. As a matter of fact, these neighborhood distribution cabinets are often used as a transition point for all CATV signals between fiber-optic transmission links and the installed coaxial cable to the users. Several neighborhood segments may terminate in each cabinet. When a neighborhood has been rewired for fiber distribution and cable modem services, the most obvious outward sign is the appearance of a four-foot-high green or gray metal enclosure. These big green (or gray) boxes are metered and draw electrical power from a local power pole and often have an annoying little light to warn away would-be villains.

Many areas do not have ready availability of cable modem circuits or DSL. Both technologies require the user to be relatively near the corresponding distribution point, and both need a certain amount of infrastructure expansion by the service provider. A wireless Internet option exists for high-speed access for users who are in areas that are otherwise unserved. The term “wireless Internet” refers to a variety of noncellular radio services that interconnect users to a central access point, generally with a very high antenna location on a tall building, a broadcast tower, or even a mountaintop. Speeds can be quite comparable to the lower ranges of DSL and CM (i.e., 128 to 512 kbps). Subscriber fees are somewhat higher, but still a great value to someone who would otherwise have to deal with low-speed analog dial access. Wireless Internet is often described as point-to-multipoint operation.


[Exhibit 7-1. Broadband and wireless Internet access methods: (A) DSL distribution, with individual pairs running to the central office; (B) a cable modem segment, with a neighborhood interface cabinet linked to the cable head end; (C) wireless Internet distribution.]

This refers to the coverage of several remote sites from a central site, as opposed to point-to-point links that are intended to serve a pair of sites exclusively. As shown in Exhibit 7-1C, remote user sites at homes or businesses are connected by a radio link to a central site. In general, the central site has an omnidirectional antenna (one that covers equally in all radial directions), while remote sites have directional antennas that point at the central antenna.

Wireless Internet users share the frequency spectrum among all the users of a particular service frequency. This means that these remote users must share the available bandwidth as well. As a result, as with the cable modem situation, the actual data throughput depends on how many users are online and active. In addition, all the transmissions are essentially broadcast into the air and can be monitored or intercepted with the proper equipment. Fortunately, the user’s remote antenna is fairly directional and is not at the great height of the central tower, but someone who is along the path between the two can still pick up the user’s signal. Some wireless links include a measure of encryption, but the key may still be known to all subscribers to the service.


There are several types of wireless systems permitted in the United States, as in the European Union, Asia, and the rest of the world. Some of these systems permit a single provider to control the rights to a particular frequency allocation. These exclusively licensed systems protect users from unwanted interference from other users and protect the large investment required of the service provider. Other systems utilize a frequency spectrum that is shared and available to all. For example, the 802.11 systems at 2.4 GHz and 5.2 GHz are shared-frequency, nonlicensed systems that can be adapted to point-to-multipoint distribution. Wireless, or radio-frequency (RF), distribution is subject to all of the same distance limitations, antenna designs, antenna siting, and interference considerations of any RF link. However, in good circumstances, wireless Internet provides a very satisfactory level of performance, one that is comparable to its wired competitors.

BROADBAND SECURITY RISKS

Traditional remote access methods, by their very nature, provide a fair measure of link security. Dial-up analog and dial-on-demand ISDN links have relatively good protection along the path between the user’s computer and the access service provider (SP). Likewise, dedicated links to an Internet service provider (ISP) are inherently safe as well, barring any intentional (and unauthorized/illegal) tapping. However, this is not necessarily the case with broadband access methods. Of the common broadband access methods, cable modems and wireless Internet have inherent security risks because they use shared media for transport. DSL, on the other hand, does utilize an exclusive path to the CO but has some more subtle security issues that are shared with the other two methods.

The access-security issue with cable modems is probably the most significant. Most PC users run a version of the Microsoft Windows® operating system, popularly referred to just as Windows. All versions of Windows since Windows 95® have included a feature called peer-to-peer networking. This feature is in addition to the TCP/IP protocol stack that supports Internet-oriented traffic. Microsoft Windows NT® and Windows 2000® clients also support peer-to-peer networking. These personal operating systems share disk, printer, and other resources in a network neighborhood utilizing the NetBIOS protocol. NetBIOS is inherently nonroutable, although it can be encapsulated within the TCP/IP and IPX protocols. A particular network neighborhood is identified by a Workgroup name and, theoretically, devices with different Workgroup names cannot converse. A standard cable modem is essentially a two-way repeater connected between a user’s PC (or local network) and the cable segment.


As such, it repeats everything along your segment to your local PC network, and everything on your network back out to the cable segment. Thus, all the “private” conversations one might have with one’s network-connected printer or other local PCs are available to everyone on the segment. In addition, every TCP/IP packet that goes between one’s PC and the Internet is also available for eavesdropping along the cable segment. This is a very serious security risk, at least among those connected to a particular segment. It makes an entire group of cable modem users vulnerable to monitoring, or even intrusion. Specific actions to mitigate this risk are discussed later.

Wireless Internet acts essentially as a shared Ethernet segment, where the segment exists purely in space rather than within a copper medium. It is “ethereal,” so to speak. What this means in practice is that every transmission to one user also goes to every authorized (and unauthorized) station within reception range of the central tower. Likewise, a user’s transmissions back to the central station are available to anyone who is capable of receiving that user’s signal. Many wireless Internet systems also operate as a bridge rather than a TCP/IP router, and can pass the NetBIOS protocol used for file and printer sharing. Thus, they may be susceptible to the same type of eavesdropping and intrusion problems as the cable modem, unless they are protected by link encryption.

In addition to the shared-media security issue, broadband security problems are more serious because of the vast communication bandwidth that is available. More than anything else, this makes the broadband user valuable as a potential target. An enormous amount of data can be transferred in a relatively short period of time. If the broadband user operates mail systems or servers, these may be more attractive to someone wanting to use such resources surreptitiously. Another aspect of broadband service is that it is “always on,” rather than being connected on demand as with dial-up service. This also makes the user a more accessible target. How can a user minimize exposure to these and other broadband security weaknesses?

INCREASING BROADBAND SECURITY

The first security issue to deal with is visibility. Users should immediately take steps to minimize exposure on a shared network. Disabling or hiding processes that advertise services or automatically respond to inquiries effectively shields the user’s computer from intruding eyes.
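To appreciate how visible an unshielded Windows PC is to neighbors on a shared segment, two commands included with Windows can be run from any other machine on the same network; the IP address below is a placeholder for a neighboring host:

    REM List the NetBIOS name table of a remote machine by IP address
    nbtstat -A 192.168.0.25
    REM List the shared folders and printers that machine is advertising
    net view \\192.168.0.25

If either command returns useful information to you, it returns the same information to everyone else on the segment.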


Shielding the computer will be of benefit whether the user is on an inherently shared broadband access, such as cable modem or wireless, or has DSL or dial-up service. Also, remember that the user might be on a shared Ethernet at work or on the road. Hotel systems that offer high-speed access through an Ethernet connection are generally shared networks and thus are subject to all of the potential problems of any shared broadband access. Shared networks clearly present a greater danger of unauthorized access because the Windows Networking protocols can be used to detect and access other computers on the shared medium. However, that does not mean that users are unconditionally safe in using other access methods such as DSL or dial-up. The hidden danger in DSL or dial-up is the fact that the popular peer-to-peer networking protocol, NetBIOS, can be transported over TCP/IP. In fact, a common attack is a probe to the IP port that supports this. There are some specific steps single-PC users can take to disable peer networking. Even if there is more than one PC in the local network behind a broadband modem, users can take action to protect their resources.

Check Vulnerability

Before taking any local-PC security steps, users might want to check on their vulnerabilities to attacks over the Web. This is easy to do and serves as both a motivation to take action and a check on security steps. Two sites are recommended: www.grc.com and www.symantec.com/securitycheck. GRC.com is the site created by Steve Gibson for his company, Gibson Research Corp. Users should look for the “Shields Up” icon to begin the testing. GRC is free to use and does a thorough job of scanning for open ports and hidden servers. The Symantec URL listed should take the user directly to the testing page. Symantec can also test for vulnerabilities in Microsoft Internet Explorer that result from ActiveX controls. Potentially harmful ActiveX controls can be inadvertently downloaded in the process of viewing a Web page. The controls generally have full access to the computer’s file system, and can thus contain viruses or even hidden servers. As is probably known, the Netscape browser does not have these vulnerabilities, although both types of browsers are somewhat vulnerable to Java and JavaScript attacks. According to information on the Symantec site, the online free version does not have all the test features of the retail version, so users must purchase the tool to get a full test. These sites will probably convince users to take action. It is truly amazing how a little demonstration can get users serious about security.
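A locally run check complements these Web-based scanners. The netstat utility, present on both Windows and UNIX-style systems, lists every port on which the machine is currently accepting connections, which is a quick way to spot servers the user did not know were running:

    REM On Windows, show listening TCP ports
    netstat -an | find "LISTENING"

    # On Linux or other UNIX-style systems
    netstat -an | grep LISTEN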


Remember that this eye-opening experience will not decrease security in any way … it will just decrease a user’s false sense of security!

Start by Plugging Holes in Windows

To protect a PC against potential attacks that might compromise personal data or even harm the PC, users will need to change the Windows Networking default configurations. Start by disabling file and printer sharing, or by password-protecting them if these features must be used. If specific directories must be shared with other users on the local network, share just that particular directory rather than the entire drive. Protect each resource with a unique password. Longer passwords, and passwords that combine upper- and lowercase letters, numbers, and punctuation, are more secure.

Windows Networking is transported over the NetBIOS protocol, which is inherently unroutable. The advantage of this feature is that any NetBIOS traffic, such as that for printer or file sharing, is blocked at any WAN router. Unfortunately, Windows has the flexibility of encapsulating NetBIOS within TCP/IP packets, which are quite routable. When using IP networking, users may be inadvertently enabling this behavior. As a matter of fact, it is a little difficult to block. However, there are some steps users can take to isolate their NetBIOS traffic from being routed out over the Internet.

The first step is to block NetBIOS over TCP/IP. To do this in Windows, simply go to the Property dialog for TCP/IP and disable “NetBIOS over TCP/IP.” Likewise, disable “Set this protocol to be the default.” Now go to bindings and uncheck all of the Windows-oriented applications, such as Microsoft Networking or Microsoft Family Networking.

The next step is to give local networking features an alternate path. Do this by adding the IPX/SPX-compatible protocol from the list in the Network dialog. After adding the IPX/SPX protocol, configure its properties to take up the slack created with TCP/IP. Set it to be the default protocol; check the “enable NetBIOS over IPX/SPX” option; and check the Windows-oriented bindings that were unchecked for TCP/IP. On exiting the dialog by clicking OK, notice that a new protocol has been added, called “NetBIOS support for IPX/SPX compatible Protocol.” This added feature allows NetBIOS to be encapsulated over IPX, isolating the protocol from its native mode and from unwanted encapsulation over TCP/IP.

This action provides some additional isolation of the local network’s NetBIOS communication because IPX is generally not routed over the user’s access device. Be sure that IPX routing, if available, is disabled on the router. This will not usually be a problem with cable modems (which do not route) or with DSL connections because both are primarily used in IP-only networks. At the first IP router link, the IPX will be blocked. If the simple NAT firewall described in the next section is used, IPX will likewise be blocked. However, if ISDN is used for access, or some type of T1 router, check that IPX routing is off.
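As a quick audit of the file- and printer-sharing advice above, the built-in net command on Windows NT/2000 systems lists exactly what the machine is currently offering to the network and lets the user withdraw anything unnecessary (the share name shown is hypothetical):

    REM Show every folder and printer currently shared from this PC
    net share
    REM Stop sharing one of them
    net share MYDOCS /delete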



[Exhibit 7-2. Addition of a NAT firewall for broadband Internet access: (A) typical broadband access, in which the cable or DSL modem gives the PC a public address visible to the service provider and the Internet; (B) broadband access with a simple IP NAT router, which gives the PC a private address (such as 192.168.1.100) behind the cable or DSL modem’s public address.]

Now Add a NAT Firewall

Most people do not have the need for a full-fledged firewall. However, a simple routing device that provides network address translation (NAT) can shield internal IP addresses from the outside world while still providing complete access to Internet services. Exhibit 7-2A shows the normal connection provided by a cable or DSL modem. The user PC is assigned a public IP address from the service provider’s pool. This address is totally visible to the Internet and available for direct access and, therefore, for direct attacks on all IP ports.

A great deal of security can be provided by masking internal addresses inside a NAT router. This device is truly a router because it connects two IP subnets, the internal “private” network and the external “public” network. A private network is one with a known private subnet address, such as 192.168.x.x or 10.x.x.x. These private addresses are not routed on the public Internet because Internet convention allows them to be duplicated at will by anyone who wants to use them. In the example shown in Exhibit 7-2B, the NAT router is inserted between the user’s PC (or internal network of PCs) and the existing cable or DSL modem. The NAT router can act as a DHCP (Dynamic Host Configuration Protocol) server to the internal private network, and it can act as a DHCP client to the service provider’s DHCP server.
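For readers who prefer to build rather than buy, a spare Linux machine with two network interfaces can perform the same translation. The sketch below assumes eth0 faces the cable or DSL modem and eth1 faces the internal network; the interface names and policy details are assumptions and will vary by installation:

    # Allow the box to forward packets between its two interfaces
    echo 1 > /proc/sys/net/ipv4/ip_forward
    # Rewrite outbound traffic so it appears to come from the router's public address
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    # Let replies to internally initiated connections back in, but drop
    # connections initiated from the Internet side
    iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i eth0 -j DROP

A dedicated consumer NAT router accomplishes the same thing with far less maintenance, which is why an off-the-shelf device is the simpler choice for most users.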


A NAT router is often called a simple firewall because it performs the address-translation function of a full-featured firewall. Thus, the NAT router provides a first level of defense. A common attack uses the source IP address of a user's PC and steps through the known and upper IP ports to probe for a response. Certain of these ports can be used to make an unauthorized access to the user's PC. Although the NAT router hides the PC user's IP address, the router itself has a valid public IP address that may now be the target of attacks. NAT routers will often respond to port 23 Telnet or port 80 HTTP requests because these ports are used for the router's configuration. At a minimum, the user must change the default passwords on the router and, if allowable, disable any access to these ports from the Internet side.

Several companies offer simple NAT firewalls for this purpose. In addition, some products are available that combine the NAT function with the cable or DSL modem. For example, Linksys provides a choice of NAT routers with a single local Ethernet port or with four switched Ethernet ports. List prices for these devices are less than $200, with much lower street prices.

Install a Personal Firewall

The final step in securing a user's personal environment is to install a personal firewall. The current software environment includes countless user programs and processes that access the Internet. Many of the programs that connect to the Internet are obvious: the e-mail and Web browsers that everyone uses. However, one may be surprised to learn that a vast array of other software also makes transmissions over the Internet connection whenever it is active. And if using a cable modem or DSL modem (or router), one's connection is always active whenever the PC is on.

For example, Windows 98 has an update feature that regularly connects to Microsoft to check for updates. A virus checker, personal firewall, and even personal finance programs can also regularly check for updates or, in some cases, for advertising material. The Windows update is particularly persistent and can check every five or ten minutes if it is enabled. Advertisements can annoyingly pop up a browser mini-window, even when the browser is not active. However, the most serious problems arise from unauthorized access or responses from hidden servers. Chances are that a user has one or more Web server processes running right now. Even the music download services (e.g., MP3) plant servers on PCs. Surprisingly, these are often either hidden or ignored, although they represent a significant security risk.
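For readers who want to see such hidden servers for themselves, a short Python sketch along the following lines lists every TCP socket a PC is listening on and the process that owns it. It assumes the third-party psutil package is installed and may require elevated privileges on some operating systems.

```python
import psutil  # third-party package (pip install psutil)

def listening_servers():
    """Print every TCP socket in the LISTEN state and the process that owns it."""
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN:
            owner = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            print(f"{conn.laddr.ip}:{conn.laddr.port:<6} {owner}")

if __name__ == "__main__":
    listening_servers()
```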


These servers can provide a backdoor into a PC that can be opened without the user's knowledge. In addition, certain viruses operate by planting a stealth server that can later be accessed by an intruder.

A personal firewall gives the user essential control over all of the Internet accesses that occur to or from his PC. Several products are on the market to provide this function. Two of these are Zone Alarm from Zone Labs (www.zonelabs.com) and Black Ice Defender from Network Ice (www.networkice.com). Other products are available from Symantec and Network Associates. The use of a personal firewall will alert the user to all traffic to or from his broadband modem and allow the user to choose whether he wants that access to occur. After an initial setup period, Internet access will appear perfectly normal, except that unwanted traffic, probes, and accesses will be blocked.

Some of the products alert the user to unwanted attempts to connect to his PC. Zone Alarm, for example, will pop up a small window to advise the user of the attempt, the port and protocol, and the IP address of the attacker. The user can also observe and approve the ability of his applications to access the Internet. After becoming familiar with the behavior of these programs, the user can direct the firewall to always block or allow access. In addition, the user can explicitly block server behavior from particular programs. A log is kept of actions so that the user can review the firewall activities later, whether or not he disables the pop-up alert window.

Thus far, this chapter has concentrated on security for broadband access users. However, after seeing what the personal firewall detects and blocks, users will certainly want to put it on all their computers. Even dial-up connections are at great risk from direct port scanning and NetBIOS/IP attacks. After installation of a personal firewall, it is not unusual to notice probes beginning within the first 30 seconds after connecting. And if one monitors these alerts, one will continue to see such probes blocked over the course of a session. Do not be alarmed: these probes were happening before the firewall was installed, just without the user's knowledge. The personal firewall is now blocking all these attempts before they can do any harm. Broadband users with a consistent public IP address will actually see a dramatic decrease over time in these probes. The intruders do not waste time going where they are unwelcome.

SUMMARY

Broadband access adds significant security risks to a network or a personal computer. The cable modem or DSL connection is normally always active, and the bandwidth is very high compared to slower dial-up or ISDN methods. Consequently, these connections make easy targets for intrusion and disruption. Wireless Internet users have similar vulnerabilities, in addition to possible eavesdropping through the airwaves. Cable modem users suffer additional exposure to nonroutable workgroup protocols, such as Windows-native NetBIOS.


Steps should be taken in three areas to help secure PC resources from unwanted intrusions:

1. Eliminate or protect Windows workgroup functions such as file and printer sharing. Change the default passwords and enable IPX encapsulation if these functions are absolutely necessary.
2. Add a simple NAT firewall/router between the access device and the PCs. This will screen internal addresses from outside view and eliminate most direct port scans.
3. Install and configure a personal firewall on each connected PC. This will provide control over which applications and programs have access to Internet resources.

ABOUT THE AUTHOR

James Trulove has more than 25 years of experience in data networking with companies such as Lucent, Ascend, AT&T, Motorola, and Intel. He has a background in designing, configuring, and implementing multimedia communications systems for local and wide area networks, using a variety of technologies. He writes on networking topics and is the author of LAN Wiring, An Illustrated Guide to Network Cabling and A Guide to Fractional T1, and the editor of Broadband Networking, as well as the author of numerous articles on networking.


Chapter 8

New Perspectives on VPNs
Keith Pasley, CISSP

Wide acceptance of security standards in IP and the deployment of quality-of-service (QoS) mechanisms like Differentiated Services (DiffServ) and Resource Reservation Protocol (RSVP) within multi-protocol label switching (MPLS) are increasing the feasibility of virtual private networks (VPNs). VPNs are now considered mainstream; most service providers include some type of VPN service in their offerings, and IT professionals have grown familiar with the technology. Also, with the growth of broadband, more companies are using VPNs for remote access and telecommuting. Specifically, the small office/home-office market has the largest growth projections, according to industry analysts.

However, where once lay the promise of IPSec-based VPNs, it is now accepted that IPSec does not solve all remote access VPN problems. As user experience with VPNs has grown, so have user expectations. Important user experience issues such as latency, delay, legacy application support, and service availability are now effectively dealt with through the use of standard protocols such as MPLS and improved network design. VPN management tools that allow improved control and views of VPN components and users are now being deployed, resulting in increased scalability and lower ongoing operational costs of VPNs.

At one time, it was accepted that deploying a VPN meant installing "fat"-client software on user desktops, manual configuration of encrypted tunnels, arcane configuration entry into server-side text-based configuration files, intrusive network firewall reconfigurations, minimal access control capability, and a state of mutual mystification due to vendor hype and user confusion over exactly what the VPN could provide in the way of scalability and manageability.

New approaches to delivering on the objective of secure yet remote access are evolving, as shown by the adoption of alternatives to the pure layer 3 tunneling VPN protocol, IPSec. User feedback on vendor technology, the high cost of deploying and managing large-scale VPNs, and opportunity cost analysis are helping to evolve these new approaches to encrypting, authenticating, and authorizing remote access into enterprise applications.


WEB-BASED IP VPN

A granular focus on Web-enabling business applications by user organizations has led to a rethinking of the problem and solution by VPN vendors. The ubiquitous Web browser is now frequently the "client" of choice for many network security products. The Web-browser-as-client approach solves a lot of the old problems but also introduces new ones. For example, what happens to any residual data left over from a Web VPN session? How is strong authentication performed? How can the remote computer be protected from subversion as an entry point to the internal network while the VPN tunnel is active? Until these questions are answered, Web browser-based VPNs will be limited from completely obsolescing client/server VPNs.

Most Web-based VPN solutions claim to deliver applications, files, and data to authorized users through any standard Web browser. How that is accomplished differs by vendor. A trend toward turnkey appliances is influencing the development of single-purpose, highly optimized and scalable solutions based on both proprietary and open-source software preinstalled on hardware. A three-tiered architecture is used by most of these vendors. This architecture consists of a Web browser, Web server/middleware, and back-end application. The Web browser serves as the user interface to the target application. The Web server/middleware is the core component that translates the LAN application protocol and application requests into a Web browser-presentable format. Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are the common tunneling protocols used. Authentication options include user name and password across TLS/SSL, two-factor tokens such as RSA SecurID, and (rarely) Web browser-based digital certificates.

Due to the high business value assigned to e-mail access, resilient hardware design and performance tuning of software to specific hardware are part of the appeal of the appliance approach. Redundant I/O, RAID 1 disk subsystems, redundant power supplies, hot-swappable cooling fans and disk drives, failover/clustering modes, dual processors, and flash memory-based operating systems are features that help ensure access availability. Access control is implemented using common industry-standard authentication protocols such as Remote Authentication Dial-In User Service (RADIUS, RFC 2138) and Lightweight Directory Access Protocol (LDAP, RFCs 2251–2256).
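To illustrate the browser side of this model, the short Python sketch below opens a certificate-validated TLS connection to a hypothetical Web VPN gateway (vpn.example.com is a placeholder). This is roughly what a browser does before any user name, password, or token is sent across the tunnel.

```python
import socket
import ssl

def open_tls_session(gateway, port=443):
    """Open a certificate-validated TLS connection to a (hypothetical) Web VPN gateway."""
    context = ssl.create_default_context()            # verifies the server certificate chain
    with socket.create_connection((gateway, port)) as raw:
        with context.wrap_socket(raw, server_hostname=gateway) as tls:
            print("Protocol/cipher:", tls.version(), tls.cipher())
            print("Gateway certificate subject:", tls.getpeercert().get("subject"))

if __name__ == "__main__":
    open_tls_session("vpn.example.com")   # hypothetical gateway name
```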


APPLICATIONS

E-mail access is the number-one back-end application for this class of VPN. E-mail has become the lifeblood of enterprise operations. Imagine how a business could survive for very long if its e-mail infrastructure were not available. However, most Web-based e-mail systems allow cleartext transmission of authentication and mail messages by default. A popular Web mail solution is to install a server-side digital certificate and enable TLS/SSL between the user browsers and the Web mail server. The Web mail server then proxies mail messages to the internal mail server. Variations on this include using a mail security appliance (Mail-VPN) that runs a hardened operating system and a Web mail reverse proxy. Another alternative is to install the Web mail server on a firewall DMZ. The firewall would handle Web mail authentication and message proxying to and from the Web server on the DMZ. A firewall rule would be configured to allow only the DMZ Web server to connect to the internal mail server, using an encrypted tunnel from the DMZ.

E-mail gateways such as the McAfee series of e-mail security appliances focus on anti-virus and content inspection with no emphasis on securing the appliance itself from attack. Depending on how the network firewall is configured, this type of solution may be acceptable in certain environments. On the other end of the spectrum, e-mail infrastructure vendors such as Mirapoint focus on e-mail components such as the message store and LDAP directory server, but they offer very little integrated security for the appliance platform or the internal e-mail server. In the middle is the in-house solution, cobbled together from open-source components and cheap hardware, with an emphasis on low cost over resiliency, security, and manageability.

Another class of Web mail security is offered by remote access VPN generalists such as Netilla, Neoteris, and Whale Communications. These vendors reason that the issue with IPSec VPNs is not that you cannot build an IPSec VPN tunnel between two IPSec gateways; rather, the issue is in trying to convince the peer IT security group to allow an encrypted tunnel through their firewall. Therefore, these vendors have designed their product architectures to use common Web protocols such as TLS/SSL and PPTP to tunnel to perimeter firewalls, the DMZ, or directly to applications on internal networks.

VPN AS A SERVICE: MPLS-BASED VPNS

Multi-Protocol Label Switching (MPLS) defines a data-link layer service (see Exhibit 8-1) based on an Internet Engineering Task Force specification (RFC 3031). The MPLS specification does not define encryption or authentication. However, IPSec is a commonly used security protocol to encrypt IP data carried across an MPLS-based network. Similarly, various existing mechanisms can be used for authenticating users of MPLS-based networks.


Exhibit 8-1. MPLS topologies.
• Intranet/closed group: Simplest; each site has routing knowledge of all other VPN sites; BGP updates are propagated between provider edge routers
• Extranet/overlapping: Access control to prevent unwanted access; strong authentication; centralized firewall and Internet access; use network address translation
• Inter-provider: BGP4 updates exchange; sub-interface for VPNs; sub-interface for routing updates
• Dial-up: Establish L2TP tunnel to virtual network gateway; authenticate using RADIUS; virtual routing and forwarding info downloaded as part of authentication/authorization
• Hub-and-spoke Internet access: Use a sub-interface for Internet; use a different sub-interface for VPN

The MPLS specification defines a network architecture and routing protocol that efficiently forwards and allows prioritization of packets containing higher-layer protocol data. Its essence is in the use of so-called labels. An MPLS label is a short identifier used to identify a group of packets that is forwarded in the same manner, such as along the same path, or given the same treatment. The MPLS label is inserted into existing protocol headers or can be shimmed between protocol headers, depending on the type of device used to forward packets and the overall network implementation. For example, labels can be shimmed between the data-link and network layer headers, or they can be encoded in layer 2 headers. The label is then used to route the so-called labeled packets between MPLS nodes.

A network node that participates in MPLS network architectures is called a label switch router (LSR). The particular treatment of a labeled packet by an LSR is defined through the use of protocols that assign and distribute labels. Existing protocols have been extended to allow them to distribute MPLS LSP information, such as label distribution using BGP (MPLS-BGP). Also, new protocols have been defined explicitly to distribute LSP information between MPLS peer nodes. For example, one such newly defined protocol is the Label Distribution Protocol (LDP, RFC 3036). The route that a labeled packet traverses is termed a label switched path (LSP). In general, the MPLS architecture supports LSPs with different label stack encodings used on different hops. Label stacking defines the hierarchy of labels defining packet treatment for a packet as it traverses an MPLS internetwork.


Exhibit 8-2. Sample MPLS equipment criteria.
• Hot standby load-sharing of MPLS tunnels
• Authentication via RADIUS, TACACS+, AAA
• Secure Shell access (SSH)
• Secure Copy (SCP)
• Multi-level access modes (EXEC, standard, etc.)
• ACL support to protect against DoS attacks
• Traffic engineering support via RSVP-TE, OSPF-TE, ISIS-TE
• Scalability via offering a range of links: 10/100 Mbps Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, to OC-3c ATM, OC-3c SONET, OC-12c SONET, and OC-48c SONET
• Redundant, hot-swappable interface modules
• Rapid fault detection and failover
• Network layer route redundancy protocols for resiliency: Virtual Router Redundancy Protocol (VRRP, RFC 2338) for layer 3 MPLS-VPN; Virtual Switch Redundancy Protocol (VSRP) and RSTP for layer 2 MPLS-VPN
• Multiple queuing methods (e.g., weighted fair queuing, strict priority, etc.)
• Rate limiting
• Single port can support tens of thousands of tunnels

Label stacking occurs when more than one label is used, within a packet, to forward traffic across an MPLS architecture that employs various MPLS node types. For example, a group of network providers can agree to allow MPLS labeled packets to travel between their individual networks and still provide consistent treatment of the packets (i.e., maintain prioritization and the LSP). This level of interoperability gives network service providers the ability to deliver true end-to-end service-level guarantees across different network providers and network domains.

By using labels, a service provider and organizations can create closed paths that are isolated from other traffic within the service provider's network, providing the same level of security as other private virtual circuit (PVC)-style services such as Frame Relay or ATM. Because MPLS-VPNs require modifications to a service provider's or organization's network, they are considered network-based VPNs (see Exhibit 8-2). Although there are topology options for deploying MPLS-VPNs down to end users, generally speaking, MPLS-VPNs do not require inclusion of client devices, and tunnels usually terminate at the service provider edge router.

From a design perspective, most organizations and service providers want to set up bandwidth commitments through RSVP and use that bandwidth to run VPN tunnels, with MPLS operating within the tunnel. This design allows MPLS-based VPNs to provide guaranteed bandwidth and application quality-of-service features within that guaranteed bandwidth tunnel. In real terms, it is now possible to run not only VPNs but also enterprise resource planning applications, legacy production systems, and company e-mail, video, and voice telephone traffic over a single MPLS-based network infrastructure.
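As a rough illustration of label stacking, the following Python sketch models a packet carrying a two-level label stack of the kind used for layer 2 MPLS-VPNs; the label values are invented for the example and do not correspond to any real deployment.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LabeledPacket:
    """Toy model of a packet carrying an MPLS label stack ahead of its payload."""
    payload: bytes
    label_stack: List[int] = field(default_factory=list)  # index 0 is the top of the stack

    def push(self, label: int) -> None:
        self.label_stack.insert(0, label)

    def pop(self) -> int:
        return self.label_stack.pop(0)

pkt = LabeledPacket(payload=b"customer frame")
pkt.push(10021)  # inner label: identifies the point-to-point virtual circuit
pkt.push(16)     # outer label: identifies the tunnel (LSP) across the provider core
print(pkt.label_stack)          # [16, 10021]
outer = pkt.pop()               # an LSR pops or swaps the outer label hop by hop
print(outer, pkt.label_stack)   # 16 [10021]
```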


Through the use of prioritization schemes within MPLS, such as the Resource Reservation Protocol (RSVP), bandwidth can be reserved for specific data flows and applications. For example, the highest prioritization can be given to performance-sensitive traffic that has to be delivered with minimal latency and packet loss and requires confirmation of receipt. Examples include voice and live video streaming, videoconferencing, and financial transactions. A second priority level could then be defined for traffic that is mission critical yet only requires an enhanced level of performance. Examples include FTP (e.g., CAD files, video clips) and ERP applications. The next priority can be assigned to traffic that does not require specific prioritization, such as e-mail and general Web browsing.

A heightened focus on core competencies by companies, now more concerned with improving customer service and reducing cost, has led to an increase in outsourcing of VPN deployment and management. Service providers have responded by offering VPNs as a service, using MPLS as a competitive differentiator. Service providers and large enterprises are typically deploying two VPN alternatives to traditional WAN offerings such as Frame Relay, ATM, or leased lines: IPSec-encrypted tunnel VPNs and MPLS-VPNs. Additional flexibility is an added benefit because MPLS-based VPNs come in two flavors: layer 2 and layer 3. This new breed of VPN, based on Multi-Protocol Label Switching (RFC 3031), is emerging as the most marketed alternative to traditional pure IP-based VPNs. Both flavors support multicast routing via the Internet Group Management Protocol (IGMP, RFC 2236), which forwards only a single copy of a transmission to only the requesting port.

The appeal of MPLS-based VPNs includes their inherent any-to-any reachability across a common data link. Availability of network access is also a concern of secure VPN design. This objective is achieved through the use of route redundancy along with routing protocols that enhance network availability, such as BGP. MPLS-VPNs give users greater control, allowing them to customize the service to accommodate their specific traffic patterns and business requirements. As a result, they can lower their costs by consolidating all of their data communications onto a single WAN platform and prioritizing traffic for specific users and applications. The resulting simplicity of architecture, the efficiencies gained by consolidation of network components, and the ability to prioritize traffic make MPLS-VPNs a very attractive and scalable option.

LAYER 2 MPLS-VPN

Layer 2 MPLS-VPNs, based on the Internet Engineering Task Force's (IETF) Martini draft or Kompella draft, simply emulate layer 2 services such as Frame Relay, ATM, or Ethernet. With the Martini approach, a customer's layer 2 traffic is encapsulated when it reaches the edge of the service provider network, mapped onto a label-switched path, and carried across a network.


The Martini draft describes point-to-point VPN services across virtual leased lines (VLLs), transparently connecting multiple subscriber sites together, independent of the protocols used. This technique takes advantage of MPLS label stacking, whereby more than one label is used to forward traffic across an MPLS architecture. Specifically, two labels are used to support layer 2 MPLS-VPNs. One label represents a point-to-point virtual circuit, while the second label represents the tunnel across the network. The current Martini drafts define encapsulations for Ethernet, ATM, Frame Relay, Point-to-Point Protocol, and High-level Data Link Control protocols.

The Kompella draft describes another method for simplifying MPLS-VPN setup and management by combining the auto-discovery capability of BGP (to locate VPN sites) with the signaling protocols that use the MPLS labels. The Kompella draft describes how to provide multi-point-to-multi-point VPN services across VLLs, transparently connecting multiple subscriber sites independent of the protocols used. This approach simplifies provisioning of new VPNs. Because the packets contain their own forwarding information (e.g., attributes contained in the packet's label), the amount of forwarding state information maintained by core routers is independent of the number of layer 2 MPLS-VPNs provisioned over the network. Scalability is thereby enhanced because adding a site to an existing VPN in most cases requires reconfiguring only the service provider edge router connected to the new site.

Layer 2 MPLS-VPNs are transparent, from a user perspective, much in the same way the underlying ATM infrastructure is invisible to Frame Relay users. The customer is still buying Frame Relay or ATM, regardless of how the provider configures the service. Because layer 2 MPLS-VPNs are virtual circuit-based, they are as secure as other virtual circuit- or connection-oriented technologies such as ATM. Because layer 2 traffic is carried transparently across an MPLS backbone, information in the original traffic, such as class-of-service markings and VLAN IDs, remains unchanged. Companies that need to transport non-IP traffic (such as legacy IPX or other protocols) may find layer 2 MPLS-VPNs the best solution. Layer 2 MPLS-VPNs also may appeal to corporations that have private addressing schemes or prefer not to share their addressing information with service providers.

In a layer 2 MPLS-VPN, the service provider is responsible only for layer 2 connectivity; the customer is responsible for layer 3 connectivity, which includes routing. Privacy of layer 3 routing is implicitly ensured. Once the service provider edge router (PE) provides layer 2 connectivity to its connected customer edge (CE) router in an MPLS-VPN environment, the service provider's job is done. In the case of troubleshooting, the service provider need only prove that connectivity exists between the PE and the CE. From a customer perspective, traditional, pure layer 2 VPNs function in the same way. Therefore, there are few migration issues to deal with on the customer side. Configuring a layer 2 MPLS-VPN is similar in process to configuring a traditional layer 2 VPN.


The "last mile" connectivity (Frame Relay, HDLC, or PPP) must be provisioned. In a layer 2 MPLS-VPN environment, the customer can run any layer 3 protocol they would like, because the service provider is delivering only layer 2 connectivity. Most metropolitan area networks using MPLS-VPNs provision these services in layer 2 of the network and offer them over a high-bandwidth pipe. An MPLS-VPN using the layer 3 BGP approach is quite a complex implementation and management task for the average service provider; the layer 2 approach is much simpler and easier to provision.

LAYER 3

Layer 3 MPLS-VPNs are also known as IP-enabled or Private-IP VPNs. The difference between layer 2 and layer 3 MPLS-VPNs is that, in layer 3 MPLS-VPNs, the labels are assigned to layer 3 IP traffic flows, whereas layer 2 MPLS-VPNs encode or shim labels between the layer 2 and layer 3 protocol headers. A traffic flow is a portion of traffic, delimited by a start and stop time, that is generated by a particular source or destination networking device. The traffic flow concept is roughly equivalent to the attributes that make up a call or connection. Data associated with traffic flows are aggregate quantities reflecting events that take place in the duration between the start and stop times of the flow. These labels represent unique identifiers and allow for the creation of label switched paths (LSPs) within a layer 3 MPLS-VPN.

Layer 3 VPNs offer a good solution when the customer traffic is wholly IP, customer routing is reasonably simple, and the customer sites are connected to the service provider with a variety of layer 2 technologies. In a layer 3 MPLS-VPN environment, internetworking depends on both the service provider and the customer using the same routing and layer 3 protocols. Because pure IPSec VPNs require each end of the tunnel to have a unique address, special care must be taken when implementing IPSec VPNs in environments using private IP addressing based on network address translation. Although several vendors provide solutions to this problem, it adds more management complexity in pure IPSec VPNs.

One limitation of layer 2 MPLS-VPNs is the requirement that all connected VPN sites, using the same provider, use the same data-link connectivity. On the other hand, the various sites of a layer 3 MPLS-VPN can connect to the service provider with any supported data-link connectivity. For example, some sites may connect with Frame Relay circuits and others with Ethernet. Because the service provider in a layer 3 MPLS-VPN can also handle IP routing for the customer, the customer edge router need only participate with the provider edge router. This is in contrast to layer 2 MPLS-VPNs, wherein the customer edge router must deal with an unknown number of router peers.
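The traffic-flow concept described above can be pictured as a simple aggregate record. The following Python sketch is illustrative only; the field names and values are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class TrafficFlow:
    """Aggregate record for one traffic flow, delimited by its start and stop times."""
    src_ip: str
    dst_ip: str
    protocol: int       # e.g., 6 = TCP, 17 = UDP
    start_time: float   # epoch seconds when the flow began
    stop_time: float    # epoch seconds when the flow ended
    packets: int = 0    # aggregate counters accumulated between start and stop
    octets: int = 0

    def duration(self) -> float:
        return self.stop_time - self.start_time

flow = TrafficFlow("172.16.4.10", "10.1.1.5", 6, 1036900000.0, 1036900042.5,
                   packets=310, octets=188400)
print(f"{flow.duration():.1f} s, {flow.packets} packets, {flow.octets} octets")
```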


The traditional layer 2 problem of n*(n – 1)/2 circuits inherent to mesh topologies carries through to layer 2 MPLS-VPNs as well. Prioritization via class of service is available in layer 3 MPLS-VPNs because the provider edge router has visibility into the actual IP data layer. As such, customers can assign priorities to traffic flows, and service providers can then provide a guaranteed service level for those IP traffic flows. Despite the complexities, service providers can take advantage of layer 3 IP MPLS-VPNs to offer secure differentiated services. For example, due to the use of prioritization protocols such as DiffServ and RSVP, service providers are no longer hindered by business models based on flat-rate pricing or time and distance. MPLS allows them to meet the challenges of improving customer service interaction, offer new differentiated premium services, and establish new revenue streams.

SUMMARY

VPN technology has come a long way since its early beginnings. IPSec is no longer the only standardized option for creating and managing enterprise and service provider VPNs. The Web-based application interface is being leveraged to provide simple, easily deployable, and easily manageable remote access and extranet VPNs. The strategy for use is as a complementary — not replacement — remote access VPN for strategic applications that benefit from Web browser user interfaces. So-called clientless or Web browser-based VPNs are targeted to users who frequently log onto their corporate servers several times a day for e-mail, calendar updates, shared folders, and other collaborative information sharing. Most of these new Web browser-based VPNs use hardware platforms built on a three-tiered architecture consisting of a Web browser user interface, a reverse proxy function, and reference monitor-like middleware that transforms back-end application protocols into a browser-readable format for presentation to end users. Benefits of this new approach include ease of training remote users and elimination of compatibility issues when installing software on remote systems. Drawbacks include lack of support for legacy applications and limited throughput and scalability for large-scale and carrier-class VPNs.

The promise of any-to-any carrier-class and large-enterprise VPNs is being realized as MPLS-VPN standards develop and technology matures. Inter-service-provider capability allows for the enforcement of true end-to-end quality-of-service (QoS) guarantees across different provider networks. Multi-Protocol Label Switching can be accomplished at two levels: layer 2 for maximum flexibility and low-impact migrations from legacy layer 2 connectivity, and layer 3 for granular service offerings and management of IP VPNs. MPLS allows a service provider to deliver many services using only one network infrastructure. Benefits for service providers include reduced operational costs, greater scalability, faster provisioning of services, and competitive advantage in a commodity-perceived market.


Large enterprises benefit from more efficient use of available bandwidth, increased security, and extensible use of existing well-known networking protocols. Users benefit from the increased interoperability among multiple service providers and consistent end-to-end service guarantees as MPLS products improve.

In MPLS-based VPNs, confidentiality, or data privacy, is enhanced by the use of labels that provide virtual tunnel separation. Note that encryption is not accounted for in the MPLS specifications. Availability is provided through various routing techniques allowed by the specifications. MPLS only provides for layer 2 data-link integrity; higher-layer controls should be applied accordingly.

Further Reading

http://www.mplsforum.org/
www.mplsworld.com
http://www.juniper.net/techcenter/techpapers/200012.html
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t5/vpn.htm
http://www.nortelnetworks.com/corporate/technology/mpls/doclib.html
http://advanced.comms.agilent.com/insight/2001–08/
http://www.ericsson.com/datacom/emedia/qoswhite_paper_317.pdf
http://www.riverstonenet.com/technology/whitepapers.shtml
http://www.equipecom.com/whitepapers.html
http://www.convergedigest.com/Bandwidth/mpls.htm

ABOUT THE AUTHOR

Keith Pasley, CISSP, CNE, is a senior security technologist with Ciphertrust in Atlanta, Georgia.


Chapter 9

An Examination of Firewall Architectures
Paul A. Henry, CISSP

Today, the number-one and number-two (in sales) firewalls use a technique known as stateful packet filtering, or SPF. SPF has the dual advantages of being fast and flexible and this is why it has become so popular. Notice that I didn't even mention security, as this is not the number-one reason people choose these firewalls. Instead, SPF is popular because it is easy to install and doesn't get in the way of business as usual. It is as if you hired a guard for the entry to your building who stood there waving people through as fast as possible.

— Rik Farrow, world-renowned independent security consultant, July 2000, Foreword to Tangled Web: Tales of Digital Crime from the Shadows of Cyberspace

Firewall customers once had a vote, and voted in favor of transparency, performance and convenience instead of security; nobody should be surprised by the results.

— From an e-mail conversation with Marcus J. Ranum, "Grandfather of Firewalls," Firewall Wizard Mailing List, October 2000

FIREWALL FUNDAMENTALS: A REVIEW

The current state of insecurity in which we find ourselves today calls for a careful review of the basics of firewall architectures. The level of protection that any firewall is able to provide in securing a private network when connected to the public Internet is directly related to the architectures chosen for the firewall by the respective vendor. Generally speaking, most commercially available firewalls utilize one or more of the following firewall architectures:



• Static packet filter
• Dynamic (stateful) packet filter
• Circuit-level gateway
• Application-level gateway (proxy)
• Stateful inspection
• Cutoff proxy
• Air gap

NETWORK SECURITY: A MATTER OF BALANCE

Network security is simply the proper balance of trust and performance. All firewalls rely on the inspection of information generated by protocols that function at various layers of the OSI (Open Systems Interconnection) model. Knowing the OSI layer at which a firewall operates is one of the keys to understanding the different types of firewall architectures.

• Generally speaking, the higher up in the OSI model the architecture goes to examine the information within the packet, the more processor cycles the architecture consumes.
• The higher up in the OSI model at which an architecture examines packets, the greater the level of protection the architecture provides, because more information is available upon which to base decisions.

Historically, there had always been a recognized trade-off in firewalls between the level of trust afforded and speed (throughput). Faster processors and the performance advantages of symmetric multi-processing (SMP) have narrowed the performance gap between the traditional fast packet filters and high-overhead proxy firewalls. One of the most important factors in any successful firewall deployment is who makes the trust/performance decisions: (1) the firewall vendor, by limiting the administrator's choices of architectures, or (2) the administrator, in a robust firewall product that provides for multiple firewall architectures.

In examining the firewall architectures in Exhibit 9-1 and looking within the IP packet, the most important fields are (see Exhibits 9-2 and 9-3):

• IP Header
• TCP Header
• Application-Level Header
• Data/payload
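For readers who want to see these fields concretely, the following Python sketch (illustrative only; the addresses and ports are invented) uses the standard struct module to pull the key IP and TCP header fields out of a hand-built IPv4/TCP packet.

```python
import socket
import struct

def key_fields(packet):
    """Return (source IP, source port, destination IP, destination port) of an IPv4/TCP packet."""
    ihl = (packet[0] & 0x0F) * 4                    # IP header length in bytes
    src_ip, dst_ip = packet[12:16], packet[16:20]   # fixed offsets within the IP header
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    return socket.inet_ntoa(src_ip), src_port, socket.inet_ntoa(dst_ip), dst_port

# A hand-built 20-byte IP header (protocol 6 = TCP) followed by a minimal 20-byte TCP header.
ip_hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.1.100"), socket.inet_aton("10.0.0.80"))
tcp_hdr = struct.pack("!HHLLBBHHH", 3187, 80, 0, 0, 5 << 4, 0x02, 8192, 0, 0)

print(key_fields(ip_hdr + tcp_hdr))   # ('192.168.1.100', 3187, '10.0.0.80', 80)
```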

STATIC PACKET FILTER

The packet-filtering firewall is one of the oldest firewall architectures. A static packet filter operates at the network layer or OSI layer 3 (see Exhibit 9-4).


Exhibit 9-1. Firewall architectures (application proxy, circuit gateway, and packet filter/SPF mapped against the layers of the OSI and TCP/IP models).

Exhibit 9-2. IP packet structure (IP Header, TCP Header, Application-Level Header, and Data, carrying the source/destination addresses and ports, application state and data flow, and payload).

Exhibit 9-3. IP header segment versus TCP header segment (the IP header carries Version, IHL, Type of Service, Total Length, Identification, Flags, Fragmentation Offset, Time to Live, Protocol, Header Checksum, Source Address, Destination Address, and Options; the TCP header carries Source Port, Destination Port, Sequence Number, Acknowledgment Number, Offset, Reserved, Flags, Window, Checksum, Urgent Pointer, and Options).


Exhibit 9-4. Static packet filter operating at the network layer (the filter sits at the network layer of the stack, between the external and internal network interfaces).

Exhibit 9-5. Static packet filter IP packet structure (only the IP Header and TCP Header fields, i.e., the source/destination addresses and ports, are examined; the application-level header and data are not).

The decision to accept or deny a packet is based upon an examination of specific fields within the packet's IP and protocol headers (see Exhibit 9-5):

• Source address
• Destination address
• Application or protocol


• Source port number
• Destination port number

Before forwarding a packet, the firewall compares the IP Header and TCP Header against a user-defined table — the rule base — containing the rules that dictate whether the firewall should deny or permit packets to pass. The rules are scanned in sequential order until the packet filter finds a specific rule that matches the criteria specified in the packet-filtering rule. If the packet filter does not find a rule that matches the packet, then it imposes a default rule. The default rule explicitly defined in the firewall's table typically instructs the firewall to drop a packet that meets none of the other rules.

There are two schools of thought on the default rule used with the packet filter: (1) ease of use and (2) security first. Ease-of-use proponents prefer a default "allow all" rule that permits all traffic unless it is explicitly denied by a prior rule. Security-first proponents prefer a default "deny all" rule that denies all traffic unless explicitly allowed by a prior rule.

Within the static packet-filter rules database, the administrator can define rules that determine which packets are accepted and which packets are denied. The IP Header information allows the administrator to write rules that can deny or permit packets to and from a specific IP address or range of IP addresses. The TCP Header information allows the administrator to write service-specific rules, that is, rules that allow or deny packets to or from ports related to specific services. The administrator can write rules that allow certain services, such as HTTP, from any IP address to view the Web pages on the protected Web server. The administrator can also write rules that block a certain IP address, or entire ranges of addresses, from using the HTTP service and viewing the Web pages on the protected server. In the same respect, the administrator can write rules that allow certain services, such as SMTP, from a trusted IP address or range of IP addresses to access files on the protected mail server. The administrator could also write rules that block certain IP addresses, or entire ranges of addresses, from accessing the protected FTP server.

The configuration of packet-filter rules can be difficult because the rules are examined in sequential order, so great care must be taken with the order in which packet-filtering rules are entered into the rule base. Even if the administrator manages to create effective rules in the proper order of precedence, a packet filter has one inherent limitation: a packet filter only examines data in the IP Header and TCP Header; it cannot know the difference between a real and a forged address. If an address is present and meets the packet-filter rules along with the other rule criteria, the packet will be allowed to pass.
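The sequential rule scan and the default rule can be sketched in a few lines of Python. The rule base below is hypothetical and follows the "security first" default-deny school; the addresses are documentation examples only.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str                      # "permit" or "deny"
    src: str                         # source network in CIDR form
    dst: str                         # destination network in CIDR form
    dst_port: Optional[int] = None   # None matches any destination port

    def matches(self, src_ip, dst_ip, dst_port):
        return (ip_address(src_ip) in ip_network(self.src)
                and ip_address(dst_ip) in ip_network(self.dst)
                and self.dst_port in (None, dst_port))

# Rules are scanned in sequential order; the last rule is the default deny.
RULE_BASE = [
    Rule("permit", "0.0.0.0/0", "203.0.113.10/32", 80),   # anyone may reach the Web server
    Rule("deny",   "0.0.0.0/0", "0.0.0.0/0"),             # default: deny everything else
]

def filter_packet(src_ip, dst_ip, dst_port):
    for rule in RULE_BASE:
        if rule.matches(src_ip, dst_ip, dst_port):
            return rule.action
    return "deny"   # fall-through if no rule matched

print(filter_packet("198.51.100.7", "203.0.113.10", 80))   # permit
print(filter_packet("198.51.100.7", "203.0.113.10", 23))   # deny
```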


Suppose the administrator took the precaution of creating a rule that instructed the packet filter to drop any incoming packets with unknown source addresses. This packet-filtering rule would make it more difficult, but not impossible, for a hacker to access at least some trusted servers by IP address. The hacker could simply substitute the actual source address on a malicious packet with the source address of a known trusted client. This common form of attack is called IP address spoofing, and it is very effective against a packet filter. The CERT Coordination Center has received numerous reports of IP spoofing attacks, many of which resulted in successful network intrusions. Although the performance of a packet filter can be attractive, this architecture alone is generally not secure enough to keep out hackers determined to gain access to the protected network.

Equally important is what the static packet filter does not examine. Remember that in the static packet filter, only specific protocol headers are examined: (1) source and destination IP addresses and (2) source and destination port numbers (services). Hence, a hacker can hide malicious commands or data in unexamined headers. Further, because the static packet filter does not inspect the packet payload, the hacker has the opportunity to hide malicious commands or data within the packet's payload. This attack methodology is often referred to as a covert channel attack and is becoming more popular.

Finally, the static packet filter is not state aware. Simply put, the administrator must configure rules for both sides of the conversation to a protected server. To allow access to a protected Web server, the administrator must create rules that allow both the inbound request from the remote client and the outbound response from the protected Web server. Of further consideration is that many services, such as FTP and e-mail servers, in operation today require the use of dynamically allocated ports for responses, so an administrator of a static packet-filtering-based firewall has little choice but to open up an entire range of ports with static packet-filtering rules.

Static packet filter considerations include:

• Pros:
  — Low impact on network performance
  — Low cost; now included with many operating systems
• Cons:
  — Operates only at the network layer and therefore only examines IP and TCP Headers
  — Unaware of packet payload; offers a low level of security
  — Lacks state awareness; may require numerous ports be left open to facilitate services that use dynamically allocated ports
  — Susceptible to IP spoofing
  — Difficult to create rules (order of precedence)
  — Only provides for a low level of protection


Exhibit 9-6. Advanced dynamic packet filter operating at the transport layer (the filter can reach up to the transport layer of the stack, between the external and internal network interfaces).

DYNAMIC (STATEFUL) PACKET FILTER

The dynamic (stateful) packet filter is the next step in the evolution of the static packet filter. As such, it shares many of the inherent limitations of the static packet filter, with one important difference: state awareness.

The typical dynamic packet filter, like the static packet filter, operates at the network layer or OSI layer 3. An advanced dynamic packet filter may operate up into the transport layer — OSI layer 4 (see Exhibit 9-6) — to collect additional state information. Most often, the decision to accept or deny a packet is based on examination of the packet's IP and protocol headers:

• Source address
• Destination address
• Application or protocol


• Source port number
• Destination port number

In simplest terms, the typical dynamic packet filter is aware of the difference between a new and an established connection. Once a connection is established, it is entered into a table that typically resides in RAM. Subsequent packets are compared to this table in RAM, most often by software running at the operating system (OS) kernel level. When the packet is found to belong to an existing connection, it is allowed to pass without any further inspection. By avoiding having to parse the packet-filter rule base for each and every packet that enters the firewall, and by performing this already-established connection table test at the kernel level in RAM, the dynamic packet filter enables a measurable performance increase over a static packet filter.

There are two primary differences in dynamic packet filters found among firewall vendors:

1. Support of SMP
2. Connection establishment

In writing the firewall application to fully support SMP, the firewall vendor is afforded up to a 30 percent increase in dynamic packet filter performance for each additional processor in operation. Unfortunately, many implementations of dynamic packet filters in current firewall offerings operate as a single-threaded process, which simply cannot take advantage of the benefits of SMP. Most often, to overcome the performance limitation of their single-threaded process, these vendors require powerful and expensive RISC processor-based servers to attain acceptable levels of performance. As available processor power has increased and multi-processor servers have become widely utilized, this single-threaded limitation has become much more visible. For example, vendor A running on an expensive RISC-based server offers only 150 Mbps of dynamic packet filter throughput, while vendor B running on an inexpensive off-the-shelf Intel multi-processor server can attain dynamic packet filtering throughputs above 600 Mbps.

Almost every vendor has its own proprietary methodology for building the connection table; but beyond the issues discussed above, the basic operation of the dynamic packet filter is for the most part essentially the same. In an effort to overcome the performance limitations imposed by their single-threaded, process-based dynamic packet filters, some vendors have taken dangerous shortcuts when establishing connections at the firewall. RFC guidelines recommend following the three-way handshake to establish a connection at the firewall. One popular vendor will open a new connection upon receipt of a single SYN packet, totally ignoring RFC recommendations.


In effect, this exposes the servers behind the firewall to single-packet attacks from spoofed IP addresses. Hackers gain great advantage from anonymity; a hacker can be much more aggressive in mounting attacks if he can remain hidden. Similar to the example in the examination of a static packet filter, suppose the administrator took the precaution of creating a rule that instructed the packet filter to drop any incoming packets with unknown source addresses. This packet-filtering rule would make it more difficult, but, again, not impossible for a hacker to access at least some trusted servers by IP address. The hacker could simply substitute the actual source address on a malicious packet with the source address of a known trusted client. In this attack methodology, the hacker assumes the IP address of the trusted host and must communicate through the three-way handshake to establish the connection before mounting an assault. This provides additional traffic that can be used to trace back to the hacker.

When the firewall vendor fails to follow RFC recommendations in the establishment of the connection and opens a connection without the three-way handshake, the hacker can simply spoof the trusted host address and fire any of the many well-known single-packet attacks at the firewall, or at servers protected by the firewall, while maintaining complete anonymity. One presumes that administrators are unaware that their popular firewall products operate in this manner; otherwise, it would be surprising that so many have found this practice acceptable following the many historical well-known single-packet attacks like LAND, Ping of Death, and Teardrop that have plagued administrators in the past.

Dynamic packet filter considerations include:

• Pros:
  — Lowest impact of all examined architectures on network performance when designed to be fully SMP-compliant
  — Low cost; now included with some operating systems
  — State awareness provides a measurable performance benefit
• Cons:
  — Operates only at the network layer, and therefore only examines IP and TCP Headers
  — Unaware of packet payload; offers a low level of security
  — Susceptible to IP spoofing
  — Difficult to create rules (order of precedence)
  — Can introduce additional risk if connections can be established without following the RFC-recommended three-way handshake
  — Only provides for a low level of protection
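A highly simplified Python sketch of the state table discussed above follows. It tracks only the client side of the three-way handshake and invents its own tuple and flag representation, so it is an illustration of the concept rather than any vendor's implementation.

```python
# Connection table keyed on the classic 5-tuple. Entries are created only when the
# client side of the three-way handshake is observed (greatly simplified).
connection_table = {}   # (src_ip, src_port, dst_ip, dst_port, proto) -> state

def on_packet(five_tuple, flags):
    state = connection_table.get(five_tuple)
    if state is None and flags == {"SYN"}:
        connection_table[five_tuple] = "HANDSHAKE"
        return "new connection: inspect against the rule base"
    if state == "HANDSHAKE" and "ACK" in flags:
        connection_table[five_tuple] = "ESTABLISHED"
        return "handshake complete: connection entered in table"
    if state == "ESTABLISHED":
        return "existing connection: pass without re-parsing the rule base"
    return "drop"

t = ("198.51.100.7", 3187, "203.0.113.10", 80, "tcp")
print(on_packet(t, {"SYN"}))
print(on_packet(t, {"ACK"}))
print(on_packet(t, {"PSH", "ACK"}))
```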


Exhibit 9-7. Circuit-level gateway operating at the session layer (the gateway sits at the session layer of the stack, between the external and internal network interfaces).

CIRCUIT-LEVEL GATEWAY

The circuit-level gateway operates at the session layer — OSI layer 5 (see Exhibit 9-7). In many respects, a circuit-level gateway is simply an extension of a packet filter in that it typically performs basic packet filter operations and then adds verification of proper handshaking and of the legitimacy of the sequence numbers used to establish the connection.

The circuit-level gateway examines and validates TCP and User Datagram Protocol (UDP) sessions before opening a connection, or circuit, through the firewall. Hence, the circuit-level gateway has more data to act upon than a standard static or dynamic packet filter. Most often, the decision to accept or deny a packet is based upon examining the packet's IP and TCP Headers (see Exhibit 9-8):


Exhibit 9-8. Circuit-level gateway IP packet structure (the IP Header and TCP Header are examined, including handshaking and sequence numbers; the application-level header and data are not).


• Source address
• Destination address
• Application or protocol
• Source port number
• Destination port number
• Handshaking and sequence numbers

Similar to a packet filter, before forwarding the packet, a circuit-level gateway compares the IP Header and TCP Header against a user-defined table containing the rules that dictate whether the firewall should deny or permit packets to pass. The circuit-level gateway then determines that a requested session is legitimate only if the SYN flags, ACK flags, and sequence numbers involved in the TCP handshaking between the trusted client and the untrusted host are logical. If the session is legitimate, the packet-filter rules are scanned until one is found that agrees with the information in the packet's full association. If the packet filter does not find a rule that applies to the packet, then it imposes a default rule. The default rule explicitly defined in the firewall's table typically instructs the firewall to drop a packet that meets none of the other rules.

The circuit-level gateway is literally a step up from a packet filter in the level of security it provides. Further, like a packet filter operating at a low level in the OSI model, it has little impact on network performance. However, once a circuit-level gateway establishes a connection, any application can run across that connection because a circuit-level gateway filters packets only at the session and network layers of the OSI model. In other words, a circuit-level gateway cannot examine the data content of the packets it relays between a trusted network and an untrusted network. The potential exists to slip harmful packets through a circuit-level gateway to a server behind the firewall.

Circuit-level gateway considerations include:

• Pros:
  — Low to moderate impact on network performance
  — Breaks direct connection to the server behind the firewall
  — Higher level of security than a static or dynamic (stateful) packet filter


• Cons:
  — Shares many of the same negative issues associated with packet filters
  — Allows any data to simply pass through the connection
  — Only provides for a low to moderate level of security

APPLICATION-LEVEL GATEWAY

Like a circuit-level gateway, an application-level gateway intercepts incoming and outgoing packets, runs proxies that copy and forward information across the gateway, and functions as a proxy server, preventing any direct connection between a trusted server or client and an untrusted host. The proxies that an application-level gateway runs often differ in two important ways from the circuit-level gateway:

1. The proxies are application specific.
2. The proxies examine the entire packet and can filter packets at the application layer of the OSI model (see Exhibit 9-9).

Unlike the circuit-level gateway, the application-level gateway accepts only packets generated by services it is designed to copy, forward, and filter. For example, only an HTTP proxy can copy, forward, and filter HTTP traffic. If a network relies only on an application-level gateway, incoming and outgoing packets cannot access services for which there is no proxy. For example, if an application-level gateway ran FTP and HTTP proxies, only packets generated by these services could pass through the firewall. All other services would be blocked.

The application-level gateway runs proxies that examine and filter individual packets, rather than simply copying them and recklessly forwarding them across the gateway. Application-specific proxies check each packet that passes through the gateway, verifying the contents of the packet up through the application layer (layer 7) of the OSI model. These proxies can filter on particular information or specific individual commands in the application protocols they are designed to copy, forward, and filter. As an example, an FTP application-level gateway can filter on dozens of commands to allow a high degree of granularity in the permissions of specific users of the protected FTP service.

Current-technology application-level gateways are often referred to as strong application proxies. A strong application proxy extends the level of security afforded by the application-level gateway. Instead of copying the entire datagram on behalf of the user, a strong application proxy actually creates a brand-new, empty datagram inside the firewall. Only those commands and data found acceptable to the strong application proxy are copied from the original datagram outside the firewall to the new datagram inside the firewall. Then, and only then, is this new datagram forwarded to the protected server behind the firewall.


Exhibit 9-9. Proxies filtering packets at the application layer. (Figure: the OSI stack, application layer down through the physical layer, shown between the external and internal network interfaces.)

By employing this methodology, the strong application proxy can mitigate the risk of an entire class of covert channel attacks. An application-level gateway filters information at a higher OSI layer than the common static or dynamic packet filter, and most application-level gateways automatically create any necessary packet-filtering rules, usually making them easier to configure than traditional packet filters. By facilitating the inspection of the complete packet, the application-level gateway is one of the most secure firewall architectures available. However, historically some vendors (usually those that market stateful inspection firewalls) and users have claimed that the security an application-level gateway offers has an inherent drawback — a lack of transparency.


In moving software from older 16-bit code to current technology's 32-bit environment, and with the advent of SMP, many of today's application-level gateways are just as transparent as they are secure. Users on the public or trusted network in most cases do not notice that they are accessing Internet services through a firewall. Application-level gateway considerations include:

• Pros:
— An application gateway with SMP affords a moderate impact on network performance.
— Breaks direct connection to server behind firewall, eliminating the risk of an entire class of covert channel attacks.
— A strong application proxy that inspects protocol header lengths can eliminate an entire class of buffer overrun attacks.
— Highest level of security.

• Cons:
— Poor implementation can have a high impact on network performance.
— Must be written securely. Historically, some vendors have introduced buffer overruns within the application gateway.
— Vendors must keep up with new protocols. A common complaint of application-level gateway users is lack of timely vendor support for new protocols.
— A poor implementation that relies on the underlying OS Inetd daemon will suffer from a severe limitation to the number of allowed connections in today's demanding high simultaneous session environment.

STATEFUL INSPECTION

Stateful inspection combines many aspects of dynamic packet filtering and circuit-level and application-level gateways. While stateful inspection has the inherent ability to examine all seven layers of the OSI model (see Exhibit 9-10), in the majority of applications observed by the author, stateful inspection was operated only at the network layer of the OSI model and used only as a dynamic packet filter for filtering all incoming and outgoing packets based on source and destination IP addresses and port numbers. While vendors claim this is the fault of the administrator's configuration, many administrators claim that the operating overhead associated with the stateful inspection process prohibits its full utilization. In short, while stateful inspection has the inherent ability to inspect all seven layers of the OSI model, most installations operate it only as a dynamic packet filter at the network layer of the model.


Exhibit 9-10. Stateful inspection examining all seven layers of the OSI model. (Figure: the OSI stack, application layer down through the physical layer, shown between the external and internal network interfaces.)

As indicated, stateful inspection can also function as a circuit-level gateway, determining whether the packets in a session are appropriate. For example, stateful inspection can verify that inbound SYN and ACK flags and sequence numbers are logical. However, in most implementations the stateful inspection-based firewall operates only as a dynamic packet filter and, dangerously, allows new connections to be established with a single SYN packet. A unique limitation of one popular stateful inspection implementation is that it does not provide the ability to inspect sequence numbers on outbound packets from users behind the firewall. This leads to a flaw whereby internal users can easily spoof the IP addresses of other internal users to open holes through the associated firewall for inbound connections. Finally, stateful inspection can mimic an application-level gateway. Stateful inspection can evaluate the contents of each packet up through the application layer and ensure that these contents match the rules in the administrator's network security policy.
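The following Python sketch, provided only as an illustration and not drawn from any product, contrasts a state engine that requires a logical handshake with the permissive behavior criticized above, in which state is created from a single SYN. It also shows the kind of application-layer check, such as refusing an HTTP PUT, discussed in the next subsection. The state names and packet representation are assumptions made for the example; for brevity, reply packets are assumed to be looked up under the same key.

# Illustrative connection table for stateful inspection. In strict mode the
# entry is marked established only after a logical SYN / SYN+ACK / ACK
# exchange; in permissive mode a single SYN is enough.
state_table = {}   # (src, sport, dst, dport) -> connection state

def track(pkt, strict=True):
    key = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
    state = state_table.get(key)
    flags = pkt["flags"]

    if state is None:
        if flags == {"SYN"}:
            state_table[key] = "SYN_SEEN" if strict else "ESTABLISHED"
            return "permit"
        return "deny"                    # no existing state and not a new SYN

    if strict and state == "SYN_SEEN":
        if flags == {"SYN", "ACK"} or flags == {"ACK"}:
            state_table[key] = "ESTABLISHED"
            return "permit"
        return "deny"

    return "permit" if state == "ESTABLISHED" else "deny"

def inspect_http(payload):
    # Optional application-layer rule: drop requests that use the PUT method.
    return "deny" if payload.startswith(b"PUT ") else "permit"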


Better Performance, But What about Security?

Like an application-level gateway, stateful inspection can be configured to drop packets that contain specific commands within the application header. For example, the administrator could configure a stateful inspection firewall to drop HTTP packets containing a PUT command. However, historically the performance impact of application-level filtering by the single-threaded process of stateful inspection has caused many administrators to abandon its use and simply opt for dynamic packet filtering so that the firewall can keep up with network load requirements. In fact, the default configuration of a popular stateful inspection firewall utilizes dynamic packet filtering, and not stateful inspection, for the most popular protocol on today's Internet — HTTP traffic.

Do Current Stateful Inspection Implementations Expose the User to Additional Risks?

Unlike an application-level gateway, stateful inspection does not break the client/server model to analyze application-layer data. An application-level gateway creates two connections: one between the trusted client and the gateway, and another between the gateway and the untrusted host. The gateway then copies information between these two connections. This is the core of the well-known proxy versus stateful inspection debate. Some administrators insist that this configuration ensures the highest degree of security; other administrators argue that this configuration slows performance unnecessarily. In an effort to provide a secure connection, a stateful inspection-based firewall has the ability to intercept and examine each packet up through the application layer of the OSI model. Unfortunately, because of the associated performance impact of the single-threaded stateful inspection process, this configuration is not the one typically deployed.

Looking beyond marketing hype and engineering theory, stateful inspection relies on algorithms within an inspection engine to recognize and process application-layer data. These algorithms compare packets against known bit patterns of authorized packets. Vendors have claimed that, theoretically, they are able to filter packets more efficiently than application-specific proxies. However, most stateful inspection engines represent a single-threaded process. With current-technology, SMP-based application-level gateways operating on multi-processor servers, the gap has dramatically narrowed. As an example, one vendor's SMP-capable multi-architecture firewall that does not use stateful inspection outperforms a popular stateful inspection-based firewall by up to 4:1 on throughput and up to 12:1 on simultaneous sessions. Further, due to limitations in the inspection language used in stateful inspection engines, application gateways are now commonly used to fill in the gaps. Stateful inspection considerations include:


• Pros:
— Offers the ability to inspect all seven layers of the OSI model and is user configurable to customize specific filter constructs.
— Does not break the client/server model.
— Provides an integral dynamic (stateful) packet filter.
— Fast when operated as a dynamic packet filter; however, many SMP-compliant dynamic packet filters are actually faster.

• Cons:
— The single-threaded process of the stateful inspection engine has a dramatic impact on performance, so many users operate the stateful inspection-based firewall as nothing more than a dynamic packet filter.
— Many believe the failure to break the client/server model creates an unacceptable security risk because the hacker has a direct connection to the protected server.
— A poor implementation that relies on the underlying OS Inetd daemon will suffer from a severe limitation to the number of allowed connections in today's demanding high simultaneous session environment.
— Low level of security. No stateful inspection-based firewall has achieved higher than a Common Criteria EAL 2. Per the Common Criteria EAL 2 certification documents, EAL 2 products are not intended for use in protecting private networks when connecting to the public Internet.

CUTOFF PROXY

The cutoff proxy is a hybrid combination of a dynamic (stateful) packet filter and a circuit-level proxy. In the most common implementations, the cutoff proxy first acts as a circuit-level proxy in verifying the RFC-recommended three-way handshake and then switches over to a dynamic packet-filtering mode of operation. Hence, it initially works at the session layer (OSI layer 5) and then switches to a dynamic packet filter working at the network layer (OSI layer 3) after the connection is completed (see Exhibit 9-11). In short, the cutoff proxy verifies the RFC-recommended three-way handshake and then switches to a dynamic packet filter mode of operation. Some vendors have expanded the capability of the basic cutoff proxy to reach all the way up into the application layer to handle limited authentication requirements (FTP type) before switching back to a basic dynamic packet-filtering mode of operation.

We have pointed out what the cutoff proxy does; now, more importantly, we need to discuss what it does not do. The cutoff proxy is not a traditional circuit-level proxy that breaks the client/server model for the duration of the connection. There is a direct connection established between the remote client and the protected server behind the firewall.


Exhibit 9-11. Cutoff proxy filtering packets. (Figure: two OSI stacks between the external and internal network interfaces, one labeled "Beginning of Transmission" and one labeled "End of Transmission.")

This is not to say that a cutoff proxy does not provide a useful balance between security and performance. At issue with respect to the cutoff proxy are vendors who exaggerate by claiming that their cutoff proxy offers a level of security equivalent to a traditional circuit-level gateway with the added benefit of the performance of a dynamic packet filter. In clarification, this author believes that all firewall architectures have their place in Internet security. If your security policy requires authentication of basic services and examination of the three-way handshake and does not require breaking of the client/server model, the cutoff proxy is a good fit. However, administrators must fully understand that a cutoff proxy is clearly not equivalent to a circuit-level proxy, because the client/server model is not broken for the duration of the connection.
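The division of labor just described can be sketched in a few lines. The Python fragment below is an illustration only, not a description of any vendor's product; the state names, packet representation, and placeholder rule set are assumptions made for the example.

# Illustrative cutoff proxy behavior: the three-way handshake is verified at
# the session level, after which the firewall drops back to dynamic packet
# filtering and never examines the payload of the established connection.
def cutoff_proxy(pkt, conn_state):
    """conn_state is None, 'SYN', 'SYN-ACK', or 'OPEN'; returns the new
    state and the verdict for this packet."""
    flags = pkt["flags"]

    if conn_state is None:
        return ("SYN", "permit") if flags == {"SYN"} else (None, "deny")
    if conn_state == "SYN":
        return ("SYN-ACK", "permit") if flags == {"SYN", "ACK"} else (conn_state, "deny")
    if conn_state == "SYN-ACK":
        if flags == {"ACK"}:
            # Handshake verified; a direct client/server connection now exists.
            return ("OPEN", "permit")
        return (conn_state, "deny")

    # Dynamic packet-filter mode for the remainder of the connection.
    return ("OPEN", "permit" if packet_filter_allows(pkt) else "deny")

def packet_filter_allows(pkt):
    return pkt["dport"] in (80, 443)     # placeholder rule set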


Exhibit 9-12. Air gap architecture. (Figure: the external host on the Internet and the internal host are connected only through a content inspection control station, a SCSI-based memory bank, and a secure data switch.)

Cutoff proxy considerations include:

• Pros:
— There is less impact on network performance than in a traditional circuit gateway.
— The IP spoofing issue is minimized because the three-way handshake is verified.

• Cons:
— Simply put, it is not a circuit gateway.
— It still has many of the remaining issues of a dynamic packet filter.
— It is unaware of packet payload and thus offers a low level of security.
— It is difficult to create rules (order of precedence).
— It can offer a false sense of security because vendors incorrectly claim it is equivalent to a traditional circuit gateway.

AIR GAP

The latest entry into the array of available firewall architectures is the air gap. At the time of this writing, the merits of air gap technology remain hotly debated among the security-related Usenet newsgroups. With air gap technology, the external client connection causes the connection data to be written to a SCSI e-disk (see Exhibit 9-12). The internal connection then reads this data from the SCSI e-disk. By breaking the direct connection between the client and the server and independently writing to and reading from the SCSI e-disk, the respective vendors believe they have provided a higher level of security and a resultant "air gap." Air gap vendors claim that, while the operation of air gap technology resembles that of the application-level gateway (see Exhibit 9-13), an important difference is the separation of the content inspection from the "front end" by the isolation provided by the air gap.


Exhibit 9-13. Air gap operating at the application layer. (Figure: the OSI stack, application layer down through the physical layer, shown between the external and internal network interfaces.)

This may very well be true for those firewall vendors that implement their firewall on top of a standard commercial operating system. But with a current-technology firewall operating on a kernel-hardened operating system, there is little distinction. Simply put, those vendors that chose to implement kernel-level hardening of the underlying operating system utilizing multilevel security (MLS) or containerization methodologies provide no less security than current air gap technologies. The author finds it difficult to distinguish air gap technology from application-level gateway technology. The primary difference appears to be that air gap technology shares a common SCSI e-disk, while application-level technology shares common RAM. One must also consider the performance limitations of establishing the air gap in an external process (a SCSI drive) versus the high performance of establishing the same level of separation in a secure, kernel-hardened operating system running in kernel memory space.


Any measurable benefit of air gap technology has yet to be verified by any recognized third-party testing authority. Further, the current performance of most air gap-like products falls well behind that obtainable by traditional application-level gateway-based products. Without a verifiable benefit to the level of security provided, the necessary performance costs are prohibitive for many system administrators. Air gap considerations include:

• Pros:
— It breaks the direct connection to the server behind the firewall, eliminating the risk of an entire class of covert channel attacks.
— A strong application proxy that inspects protocol header lengths can eliminate an entire class of buffer overrun attacks.
— As with an application-level gateway, an air gap can potentially offer a high level of security.

• Cons:
— It can have a high negative impact on network performance.
— Vendors must keep up with new protocols. A common complaint of application-level gateway users is the lack of timely response from a vendor to provide application-level gateway support for a new protocol.
— It is currently not verified by any recognized third-party testing authority.

OTHER CONSIDERATIONS

ASIC-Based Firewalls

Looking at typical ASIC-based offerings, the author finds that virtually all are VPN/firewall hybrids. These hybrids provide fast VPN capabilities but most often are complemented with only a limited, single-architecture stateful firewall capability. Today's security standards are in flux, so ASIC designs must be left programmable or "soft"; as a result, the full speed of the ASIC simply cannot be unleashed. ASIC technology most certainly brings a new level of performance to VPN operations. IPSec VPN encryption and decryption run inarguably better in hardware than in software. However, in most accompanying firewall implementations, a simple string comparison (packet to rule base) is the only functionality that is provided within the ASIC. Hence, the term "ASIC-based firewall" is misleading at best. The majority of firewall operations in ASIC-based firewalls are performed in software operating on microprocessors. These firewall functions often include NAT, routing, cutoff proxy, authentication, alerting, and logging.


When you commit to an ASIC, you eliminate the flexibility necessary to deal with future Internet security issues. Network security clearly remains in flux. While an ASIC can be built to be good enough for a particular purpose or situation, is good enough today really good enough for tomorrow's threats?

Hardware-Based Firewalls

The term hardware-based firewall is another point of confusion in today's firewall market. For clarification, most hardware-based firewalls are products that have simply eliminated the spinning media (hard disk drive) associated with typical server- or appliance-based firewalls. Most hardware firewalls are either provided with some form of solid-state disk, or they simply boot from ROM, load the OS and application from firmware to RAM, and then operate in a manner similar to a conventional firewall. The elimination of the spinning media is both a strength and a weakness of a hardware-based firewall. The strength derives from limited improvements in MTBF and environmental performance gained by eliminating the spinning media. The weakness lies in severe limitations to the local alerting and logging capability, which most often requires a separate logging server to achieve any usable historical data retention.

OTHER CONSIDERATIONS: A BRIEF DISCUSSION OF OS HARDENING

One of the most misunderstood terms in network security with respect to firewalls today is OS hardening, or hardened OS. Many vendors claim their network security products are provided with a hardened OS. What you will find in virtually all cases is that the vendor simply turned off or removed unnecessary services and patched the operating system for known vulnerabilities. Clearly, this is not a hardened OS but really a patched OS.

What Is a Hardened OS?

A hardened OS (see Exhibit 9-14) is one in which the vendor has modified the kernel source code to provide a mechanism that clearly establishes a security perimeter among the non-secure application software, the secure application software, and the network stack. This eliminates the risk that the exploitation of a service running on the hardened OS could provide root-level privilege to the hacker. In a hardened OS, the security perimeter is established using one of two popular methodologies:


Exhibit 9-14. Hardened OS. (Figure: non-secure application software, the secure firewall application software, and the evaluated, secure network stack sit above the computer hardware within an evaluated, secure OS; a kernel-level security perimeter separates them, stopping security attacks against applications and attacks from the network.)

1. Multi-Level Security (MLS): establishes a perimeter through the use of labels assigned to each packet and applies rules for the acceptance of said packets at various levels of the OS and services.
2. Compartmentalization: provides a sandbox approach whereby an application effectively runs in a dedicated kernel space with no path to another object within the kernel.

Other security-related enhancements typically common in kernel-level hardening methodologies include:

• Separation of event logging from root
• Mandatory access controls
• File system security enhancements
• Log EVERYTHING from all running processes
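As a conceptual illustration of the MLS and compartmentalization ideas, the following Python fragment shows label-based acceptance rules and a sandbox-style compartment check. It is a simplified sketch, not the behavior of any particular hardened operating system; the level names and helper functions are assumptions made for the example.

# Conceptual sketch: every subject (process) and object (packet, file,
# socket) carries a label, and the kernel accepts an operation only if the
# labels satisfy the policy.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def dominates(label_a, label_b):
    return LEVELS[label_a] >= LEVELS[label_b]

def mls_allows(subject_label, object_label, operation):
    if operation == "read":        # a subject may not read objects above its level
        return dominates(subject_label, object_label)
    if operation == "write":       # a subject may not write objects below its level
        return dominates(object_label, subject_label)
    return False

# The compartmentalized (sandbox) approach is even simpler: a subject may
# only touch objects assigned to its own compartment.
def compartment_allows(subject_compartment, object_compartment):
    return subject_compartment == object_compartment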

What Is a Patched OS?

A patched OS is typically a commercial OS from which the administrator turns off or removes all unnecessary services and on which the latest security patches from the OS vendor are installed. A patched OS has had no modifications made to the kernel source code to enhance security.

Is a Patched OS as Secure as a Hardened OS?

Simply put, no. A patched OS is only secure until the next vulnerability in the underlying OS or allowed services is discovered. An administrator may argue that, when he has completed installing his patches and turning off services, his OS is, in fact, secure. The bottom-line question is: with more than 100 new vulnerabilities being posted to Bugtraq each month, how long will it remain secure?


How Do You Determine if a Product Is Provided with a Hardened OS?

If the product was supplied with a commercial OS, you can rest assured that it is not a hardened OS. The principal element here is that, to harden an OS, you must own the source code to the OS so you can make the necessary kernel modifications. If you really want to be sure, ask the vendor to provide third-party validation that the OS is, in fact, hardened at the kernel level (e.g., http://www.radium.ncsc.mil/tpep/epl/historical.html).

Why Is OS Hardening Such an Important Issue?

Too many in the security industry have been lulled into a false sense of security. Decisions on security products are based primarily on popularity and price, with little regard for the actual security the product can provide.

Where Can You Find Additional Information about OS Vulnerabilities?

• www.securiteam.com
• www.xforce.iss.net
• www.rootshell.com
• www.packetstorm.securify.com
• www.insecure.org/sploits.html

Where Can You Find Additional Information about Patching an OS?

More than 40 experts in the SANS community have worked together over a full year to create the following elegant and effective scripts:

• For Solaris, http://yassp.parc.xerox.com/
• For Red Hat Linux, http://www.sans.org/newlook/projects/bastille_linux.htm

Lance Spitzner (http://www.enteract.com/~lspitz/pubs.html) has written a number of excellent technical documents, including:

• Armoring Linux
• Armoring Solaris
• Armoring NT

Stanford University (http://www.stanford.edu/group/itss-ccs/security/Bestuse/Systems/) has also released a number of informative technical documents:

• Red Hat Linux
• Solaris
• SunOS
• AIX 4.x
• HPUX
• NT


CONCLUSION

Despite claims by various vendors, no single firewall architecture is the "holy grail" in network security. It has been said many times and in many ways by network security experts: if you believe any one technology is going to solve the Internet security problem, you do not understand the technology and you do not understand the problem. Unfortunately for the Internet community at large, many administrators today design the security policy for their organizations around the limited capabilities of a specific vendor's product. The author firmly believes all firewall architectures have their respective place or role in network security. Selection of any specific firewall architecture should be a function of the organization's security policy and should not be based solely on the limitations of the vendor's proposed solution. The proper application of multiple firewall architectures to support the organization's security policy, providing an acceptable balance of trust and performance, is the only viable methodology for securing a private network when connecting to the public Internet.

One of the most misunderstood terms in network security with respect to firewalls today is OS hardening, or hardened OS. Simply put, turning off or removing a few unnecessary services and patching for known product vulnerabilities does not build a hardened OS. Hardening an OS begins with modifying the OS software at the kernel level to facilitate building a security perimeter. This security perimeter prevents services and applications from providing root access in the event of an application- or OS-provided service compromise. Effectively, only a properly implemented hardened OS with a barrier at the kernel level will provide an impenetrable firewall platform.

References

This text is based on numerous books, white papers, presentations, vendor literature, and various Usenet newsgroup discussions I have read or participated in throughout my career. Any failure to cite any individual for anything that in any way resembles a previous work is unintentional. ABOUT THE AUTHOR Paul Henry, CISSP, an information security expert who has worked in the security field for more than 20 years, has provided analysis and research support on numerous complex network security projects in Asia, the Middle East, and North America, including several multimillion dollar 153


TELECOMMUNICATIONS AND NETWORK SECURITY network security projects such as Saudi Arabia’s National Banking System and the DoD Satellite Data Project USA. Henry has given keynote speeches at security seminars and conferences worldwide on topics including DDoS attack risk mitigation, firewall architectures, intrusion methodology, enterprise security, and security policy development. An accomplished author, Henry has also published numerous articles and white papers on firewall architectures, covert channel attacks, distributed denial-of-service (DDoS) attacks, and buffer overruns. Henry has also been interviewed by ZD Net, the San Francisco Chronicle, the Miami Herald, NBC Nightly News, CNBC Asia, and many other media outlets.



Chapter 10

Deploying Host-Based Firewalls across the Enterprise: A Case Study Jeffery Lowder, CISSP

Because hosts are exposed to a variety of threats, there is a growing need for organizations to deploy host-based firewalls across the enterprise. This chapter outlines the ideal features of a host-based firewall — features that are typically not needed or present in a purely personal firewall software implementation on a privately owned PC. In addition, the author describes his own experiences with, and lessons learned from, deploying agentbased, host-based firewalls across an enterprise. The author concludes that host-based firewalls provide a valuable additional layer of security. A SEMANTIC INTRODUCTION Personal firewalls are often associated with (and were originally designed for) home PCs connected to “always-on” broadband Internet connections. Indeed, the term personal firewall is itself a vestige of the product’s history: originally distinguished from enterprise firewalls, personal firewalls were initially viewed as a way to protect home PCs.1 Over time, it was recognized that personal firewalls had other uses. The security community began to talk about using personal firewalls to protect notebooks that connect to the enterprise LAN via the Internet and eventually protecting notebooks that physically reside on the enterprise LAN.


TELECOMMUNICATIONS AND NETWORK SECURITY Consistent with that trend — and consistent with the principle of defense-in-depth — it can be argued that the time has come for the potential usage of personal firewalls to be broadened once again. Personal firewalls should really be viewed as host-based firewalls. As soon as one makes the distinction between host-based and network-based firewalls, the additional use of a host-based firewall becomes obvious. Just as organizations deploy host-based intrusion detection systems (IDS) to provide an additional detection capability for critical servers, organizations should consider deploying host-based firewalls to provide an additional layer of access control for critical servers (e.g., exchange servers, domain controllers, print servers, etc.). Indeed, given that many host-based firewalls have an IDS capability built in, it is conceivable that, at least for some small organizations, host-based firewalls could even replace specialized host-based IDS software. The idea of placing one firewall behind another is not new. For years, security professionals have talked about using so-called internal firewalls to protect especially sensitive back-office systems.2 However, internal firewalls, like network-based firewalls in general, are still dedicated devices. (This applies to both firewall appliances such as Cisco’s PIX and softwarebased firewalls such as Symantec’s Raptor.) In contrast, host-based firewalls require no extra equipment. A host-based firewall is a firewall software package that runs on a preexisting server or client machine. Given that a host-based firewall runs on a server or client machine (and is responsible for protecting only that machine), host-based firewalls offer greater functionality than network-based firewalls, even including internal firewalls that are dedicated to protecting a single machine. Whereas both network- and host-based firewalls have the ability to filter inbound and outbound network connections, only host-based firewalls possess the additional capabilities of blocking network connections linked to specific programs and preventing the execution of mail attachments. To put this into proper perspective, consider the network worm and Trojan horse program QAZ, widely suspected to be the exploit used in the November 2000 attack on Microsoft’s internal network. QAZ works by hijacking the NOTEPAD.EXE program. From the end user’s perspective, Notepad still appears to run normally; but each time Notepad is launched, QAZ sends an e-mail message (containing the IP address of the infected machine) to some address in China.3 Meanwhile, in the background, the Trojan patiently waits for a connection on TCP port 7597, through which an intruder can upload and execute any applications.4 Suppose QAZ were modified to run over TCP port 80 instead.5 While all firewalls can block outbound connections on TCP port 80, implementing such a configuration would interfere with legitimate traffic. Only a host-based firewall can block an outbound connection on TCP port 80 associated with NOTEPAD.EXE and notify the user of the event. As Steve Riley notes, “Personal firewalls 156


Deploying Host-Based Firewalls across the Enterprise: A Case Study that monitor outbound connections will raise an alert; seeing a dialog with the notice ‘Notepad is attempting to connect to the Internet’ should arouse anyone’s suspicions.”6 STAND-ALONE VERSUS AGENT-BASED FIREWALLS Host-based firewalls can be divided into two categories: stand-alone and agent-based.7 Stand-alone firewalls are independent of other network devices in the sense that their configuration is managed (and their logs are stored) on the machine itself. Examples of stand-alone firewalls include ZoneAlarm, Sygate Personal Firewall Pro, Network Associates’ PGP Desktop Security, McAfee Personal Firewall,8 Norton Internet Security 2000, and Symantec Desktop Firewall. In contrast, agent-based firewalls are not locally configured or monitored. Agent-based firewalls are configured from (and their logs are copied to) a centralized enterprise server. Examples of agent-based firewalls include ISS RealSecure Desktop Protector (formerly Network ICE’s Black ICE Defender) and InfoExpress’s CyberArmor Personal Firewall. We chose to implement agent-based firewall software on our hosts. While stand-alone firewalls are often deployed as an enterprise solution, we wanted the agent-based ability to centrally administer and enforce a consistent access control list (ACL) across the enterprise. And as best practice dictates that the logs of network-based firewalls be reviewed on a regular basis, we wanted the ability to aggregate logs from host-based firewalls across the enterprise into a single source for regular review and analysis. OUR PRODUCT SELECTION CRITERIA Once we adopted an agent-based firewall model, our next step was to select a product. Again, as of the time this chapter was written, our choices were RealSecure Desktop Protector or CyberArmor. We used the following criteria to select a product:9 • Effectiveness in blocking attacks. The host-based firewall should effectively deny malicious inbound traffic. It should also at least be capable of effectively filtering outbound connections. As Steve Gibson argues, “Not only must our Internet connections be fortified to prevent external intrusion, they also [must] provide secure management of internal extrusion.”10 By internal extrusion, Gibson is referring to outbound connections initiated by Trojan horses, viruses, and spyware. To effectively filter outbound connections, the host-based firewall must use cryptographic sums. The host-based firewall must first generate cryptographic sums for each authorized application and then regenerate and compare that sum to the one stored in the database before any program (no matter what the filename) is allowed access. If the application 157


does not maintain a database of cryptographic sums for all authorized applications (and instead only checks filenames or file paths), the host-based firewall may give an organization a false sense of security.
• Centralized configuration. Not only did we need the ability to centrally define the configuration of the host-based firewall, we also required the ability to enforce that configuration. In other words, we wanted the option to prevent end users from making security decisions about which applications or traffic to allow.
• Transparency to end users. Because the end users would not be making any configuration decisions, we wanted the product to be as transparent to them as possible. For example, we did not want users to have to 'tell' the firewall how their laptops were connected (e.g., corporate LAN, home Internet connection, VPN, extranet, etc.) in order to get the right policy applied. In the absence of an attack, we wanted the firewall to run silently in the background without noticeably degrading performance. (Of course, in the event of an attack, we would want the user to receive an alert.)
• Multiple platform support. If we were only interested in personal firewalls, this would not have been a concern. (While Linux notebooks arguably might need personal firewall protection, we do not have such machines in our environment.) However, because we are interested in implementing host-based firewalls on our servers as well as our client PCs, support for multiple operating systems is a requirement.
• Application support. The firewall must be compatible with all authorized applications and the protocols used by those applications.
• VPN support. The host-based firewall must support our VPN implementation and client software. In addition, it must be able to detect and transparently adapt to VPN connections.
• Firewall architecture. There are many options for host-based firewalls, including packet filtering, application-level proxying, and stateful inspection.
• IDS technology. Likewise, there are several different approaches to IDS technology, each with its own strengths and weaknesses. The number of attacks detectable by a host-based firewall will clearly be relevant here.
• Ease of use and installation. As an enterprisewide solution, the product should support remote deployment and installation. In addition, the central administrative server should be (relatively) easy to use and configure.
• Technical support. Quality and availability are our prime concerns.
• Scalability. Although we are a small company, we do expect to grow. We need a robust product that can support a large number of agents.
• Disk space. We were concerned about the amount of disk space required on end-user machines as well as the centralized policy and logging server. For example, does the firewall count the number of times an attack occurs rather than log a single event for every occurrence of an attack?
• Multiple policy groups. Because we have diverse groups of end users, each with unique needs, we wanted the flexibility to enforce different policies on different groups. For example, we might want to allow SQLNet traffic from our development desktops while denying such traffic for the rest of our employees.
• Reporting. As with similar enterprise solutions, an ideal reporting feature would include built-in reports for top intruders, targets, and attack methods over a given period of time (e.g., monthly, weekly, etc.).
• Cost. As a relatively small organization, we were especially concerned about the cost of selecting a high-end enterprise solution.

OUR TESTING METHODOLOGY

We eventually plan to install and evaluate both CyberArmor and RealSecure Desktop Protector by conducting a pilot study on each product with a small, representative sample of users. (At the time this chapter was written, we were nearly finished with our evaluation of CyberArmor and about to begin our pilot study of ISS RealSecure.) While the method for evaluating both products according to most of our criteria is obvious, our method for testing one criterion deserves a detailed explanation: effectiveness in blocking attacks. We tested the effectiveness of each product in blocking unauthorized connections in several ways:

• Remote Quick Scan from HackYourself.com.11 From a dial-up connection, we used HackYourself.com's Quick Scan to execute a simple and remote TCP and UDP port scan against a single IP address.
• Nmap scan. We used nmap to conduct two different scans. First, we performed an ACK scan to determine whether the firewall was performing stateful inspection or a simple packet filter. Second, we used nmap's operating system fingerprinting feature to determine whether the host-based firewall effectively blocked attempts to fingerprint target machines.
• Gibson Research Corporation's LeakTest. LeakTest determines a firewall product's ability to effectively filter outbound connections initiated by Trojans, viruses, and spyware.12 This tool can test a firewall's ability to block LeakTest when it masquerades as a trusted program (OUTLOOK.EXE).
• Steve Gibson's TooLeaky. TooLeaky determines whether the firewall blocks unauthorized programs from controlling trusted programs. The TooLeaky executable tests whether this ability exists by spawning Internet Explorer to send a short, innocuous string to Steve Gibson's Web site, and then receiving a reply.13
• Firehole. Firehole relies on a modified dynamic link library (DLL) that is used by a trusted application (Internet Explorer). The test is whether


the firewall allows the trusted application, under the influence of the malicious DLL, to send a small text message to a remote machine. The message contains the currently logged-on user's name, the name of the computer, and a message claiming victory over the firewall and the time the message was sent.14

CONFIGURATION

One of our reasons for deploying host-based firewalls was to provide an additional layer of protection against Trojan horses, spyware, and other programs that initiate outbound network connections. While host-based firewalls are not designed to interfere with Trojan horses that do not send or receive network connections, they can be quite effective in blocking network traffic to or from an unauthorized application when configured properly. Indeed, in one sense, host-based firewalls have an advantage over anti-virus software. Whereas anti-virus software can only detect Trojan horses that match a known signature, host-based firewalls can detect Trojan horses based on their network behavior. Host-based firewalls can detect, block, and even terminate any unauthorized application that attempts to initiate an outbound connection, even if that connection is on a well-known port like TCP 80 or even if the application causing that connection appears legitimate (NOTEPAD.EXE).

However, there are two well-known caveats to configuring a host-based firewall to block Trojan horses. First, the firewall must block all connections initiated by new applications by default. Second, the firewall must not be circumvented by end users who, for whatever reason, click "yes" whenever asked by the firewall if it should allow a new application to initiate outbound traffic. Taken together, these two caveats can cause the cost of ownership of host-based firewalls to quickly escalate. Indeed, other companies that have already implemented both caveats report large numbers of help desk calls from users wanting to get a specific application authorized.15 Given that we do not have a standard desktop image and given that we have a very small help desk staff, we decided to divide our pilot users into two different policy groups: pilot-tech (technical) and pilot-normal (regular); see Exhibit 10-1.

The first configuration enabled users to decide whether to allow an application to initiate an outbound connection. This configuration was implemented only on the desktops of our IT staff. The user must choose whether to allow or deny the network connection requested by the application. Once the user makes that choice, the host-based firewall generates a checksum and creates a rule reflecting the user's decision. (See Exhibit 10-2 for a sample rule set in CyberArmor.)
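The checksum-based rule creation just described can be sketched as follows. This Python fragment is purely illustrative of the idea, not CyberArmor's or any other vendor's logic; the hash choice, rule store, and callback are assumptions made for the example.

import hashlib

# Illustrative checksum-based outbound control: the executable that owns the
# connection is hashed and looked up in the rule base, so a Trojan that
# merely reuses a trusted filename (e.g., NOTEPAD.EXE) will not match.
rules = {}   # digest of the executable -> "permit" or "deny"

def digest(exe_path):
    with open(exe_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def outbound_allowed(exe_path, ask_user=None):
    checksum = digest(exe_path)
    if checksum in rules:
        return rules[checksum] == "permit"
    if ask_user is None:
        return False                     # enforced policy group: deny by default
    # Permissive policy group: record the user's decision as a new rule.
    decision = "permit" if ask_user(exe_path) else "deny"
    rules[checksum] = decision
    return decision == "permit"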



Exhibit 10-1. CyberArmor policy groups.

Exhibit 10-2. Sample user-defined rules in CyberArmor.

The second configuration denied all applications by default and only allowed applications that had been specifically authorized. We applied this configuration on all laptops outside our IT organization, because we did not want to allow nontechnical users to make decisions about the configuration of their host-based firewall.

LESSONS LEARNED

Although at the time this chapter was finished we had not yet completed our pilot studies on both host-based firewall products, we had already learned several lessons about deploying agent-based, host-based firewalls across the enterprise. These lessons may be summarized as follows.


1. Our pilot study identified one laptop with a nonstandard and, indeed, unauthorized network configuration. For small organizations that do not enforce a standard desktop image, this should not be a surprise.
2. The ability to enforce different policies on different machines is paramount. This was evident from our experience with using the host-based firewall to restrict outbound network connections. By having the ability to divide our users into two groups, those we would allow to make configuration decisions and those we would not, we were able to get both flexibility and security.
3. As is the case with network-based intrusion detection systems, our experience validated the need for well-crafted rule sets. Our configuration includes a rule that blocks inbound NetBIOS traffic. Given the amount of NetBIOS traffic present on both our internal network and external networks, this generated a significant number of alerts. This, in turn, underscored the need for finely tuned alerting rules.
4. As the author has found when implementing network-based firewalls, the process of constructing and then fine-tuning a host-based firewall rule set is time consuming. This is especially true if one decides to implement restrictions on outbound traffic (and not allow users, or a portion of users, to make configuration decisions of their own), because one then has to identify and locate the exact file path of each authorized application that has to initiate an outbound connection. While this is by no means an insurmountable problem, there was a definite investment of time in achieving that configuration.
5. We did not observe any significant performance degradation on end-user machines caused by the firewall software. At the time this chapter was written, however, we had not yet tested deploying host-based firewall software on critical servers.
6. Our sixth observation is product specific. We discovered that the built-in reporting tool provided by CyberArmor is primitive. There is no built-in support for graphical reports, and it is difficult to find information using the text reporting. For example, using the built-in text-reporting feature, one can obtain an "alarms" report. That report, presented in spreadsheet format, merely lists alarm messages and the number of occurrences. Source IP addresses, date, and time information are not included in the report. Moreover, the alarm messages are somewhat cryptic. (See Exhibit 10-3 for a sample CyberArmor alarm report.) While CyberArmor is compatible with Crystal Reports, using Crystal Reports to produce useful reports requires extra software and time.
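The kind of alert tuning called for in lesson 3 can be expressed very simply. The following Python sketch is an assumption-laden illustration, not our production configuration: the internal address ranges, ports, and logging callback are all placeholders.

import ipaddress

# Illustrative tuning for lesson 3: inbound NetBIOS/SMB traffic is always
# blocked, but an alert is raised only when the source is external, keeping
# the alert volume manageable on a NetBIOS-heavy internal network.
NETBIOS_PORTS = {137, 138, 139, 445}
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),        # placeholder ranges
                 ipaddress.ip_network("192.168.0.0/16")]

def handle_inbound(pkt, alert):
    if pkt["dport"] not in NETBIOS_PORTS:
        return "permit"
    src = ipaddress.ip_address(pkt["src"])
    if not any(src in net for net in INTERNAL_NETS):
        alert("blocked external NetBIOS/SMB probe from " + pkt["src"])
    return "deny"                        # blocked either way; alert only if external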



Exhibit 10-3. Sample CyberArmor alarm report.

HOST-BASED FIREWALLS FOR UNIX?

Host-based firewalls are often associated with Windows platforms, given the history and evolution of personal firewall software. However, there is no reason in theory why host-based firewalls cannot (or should not) be implemented on UNIX systems as well. To be sure, some UNIX packet filters already exist, including ipchains, iptables, and ipfw.16 Given that UNIX platforms have not been widely integrated into commercial host-based firewall products, these utilities may be very useful in an enterprisewide host-based firewall deployment. However, such tools generally have two limitations worth noting. First, unlike personal firewalls, those utilities are packet filters. As such, they do not have the capability to evaluate an outbound network connection according to the application that generated the connection. Second, the utilities are not agent based. Thus, as an enterprise solution, those tools might not be easily scalable. The lack of an agent-based architecture in such tools might also make it difficult to provide centralized reporting on events detected on UNIX systems.
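For illustration, a minimal host-based iptables policy of the sort these utilities support might look like the following; the address range and port are placeholders, and the rules are an example rather than a recommended configuration. Note that the rules match only on addresses, ports, and connection state; nothing ties them to the program that owns the socket, which is the first limitation noted above.

# Default-deny inbound policy with a few explicit exceptions (run as root).
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 10.0.0.0/8 --dport 22 -j ACCEPT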


CONCLUSIONS

While host-based firewalls are traditionally thought of as a way to protect corporate laptops and privately owned PCs, host-based firewalls can also provide a valuable layer of additional protection for servers. Similarly, while host-based firewalls are typically associated with Windows platforms, they can also be used to protect UNIX systems as well. Moreover, host-based firewalls can be an effective tool for interfering with the operation of Trojan horses and similar applications. Finally, using an agent-based architecture can provide centralized management and reporting capability over all host-based firewalls in the enterprise.

Acknowledgments

The author wishes to acknowledge Frank Aiello and Derek Conran for helpful suggestions. The author is also grateful to Lance Lahr, who proofread an earlier version of this chapter. References 1. Michael Cheek, Personal firewalls block the inside threat. Gov. Comp. News 19:3 (3 April 2000). Spotted electronically at , February 6, 2002. 2. William R. Cheswick and Steven M. Bellovin, Firewalls and Internet Security: Repelling the Wily Hacker (New York: Addison-Wesley, 1994), pp. 53–54. 3. F-Secure Computer Virus Information Pages: QAZ (, January 2001), spotted February 6, 2002. 4. TROJ_QAZ.A — Technical Details (, October 28, 2000), spotted February 6, 2002. 5. Steve Riley, Is Your Generic Port 80 Rule Safe Anymore? (, February 5, 2001), spotted February 6, 2002. 6. Steve Riley, Is Your Generic Port 80 Rule Safe Anymore? (, February 5, 2001), spotted February 6, 2002. 7. Michael Cheek, Personal firewalls block the inside threat. Gov. Comp. News 19:3 (3 April 2000). Spotted electronically at , February 6, 2002. 8. Although McAfee is (at the time this chapter was written) currently in Beta testing with its own agent-based product, Personal Firewall 7.5, that product is not scheduled to ship until late March 2002. See Douglas Hurd, The Evolving Threat (, February 8, 2002), spotted February 8, 2002. 9. Cf. my discussion of network-based firewall criteria in Firewall Management and Internet Attacks in Information Security Management Handbook (4th ed., New York: Auerbach, 2000), pp. 118–119. 10. Steve Gibson, LeakTest — Firewall Leakage Tester (, January 24, 2002), spotted February 7, 2002. 11. Hack Yourself Remote Computer Network Security Scan (, 2000), spotted February 7, 2002. 12. Leak Test — How to Use Version 1.x (, November 3, 2001), spotted February 7, 2002. 13. Steve Gibson, Why Your Firewall Sucks :-) (, November 5, 2001), spotted February 8, 2002. 14. By default, this message is sent over TCP port 80 but this can be customized. See Robin Keir, Firehole: How to Bypass Your Personal Firewall Outbound Detection (, November 6, 2001), spotted February 8, 2002. 15. See, for example, Barrie Brook and Anthony Flaviani, Case Study of the Implementation of Symantec’s Desktop Firewall Solution within a Large Enterprise (, February 8, 2002), spotted February 8, 2002.



16. See Rusty Russell, Linux IPCHAINS-HOWTO (, July 4, 2000), spotted March 29, 2002; Oskar Andreasson, Iptables Tutorial 1.1.9 (, 2001), spotted March 29, 2002; and Gary Palmer and Alex Nash, Firewalls (, 2001), spotted March 29, 2002. I am grateful to an anonymous reviewer for suggesting I discuss these utilities in this chapter.

ABOUT THE AUTHOR Jeffery Lowder, CISSP, GSEC, is currently working as an independent information security consultant. His interests include firewalls, intrusion detection systems, UNIX security, and incident response. Previously, he has served as the director, security and privacy, for Elemica, Inc.; senior security consultant for PricewaterhouseCoopers, Inc.; and director, network security, at the U.S. Air Force Academy.



Chapter 11

Overcoming Wireless LAN Security Vulnerabilities Gilbert Held

The IEEE 802.11b specification represents one of three wireless LAN standards developed by the Institute of Electrical and Electronics Engineers. The original standard, the 802.11 specification, defined wireless LANs using infrared, Frequency Hopping Spread Spectrum (FHSS), and Direct Sequence Spread Spectrum (DSSS) communications at data rates of 1 and 2 Mbps. The relatively low operating rate associated with the original IEEE 802.11 standard precluded its widespread adoption. The IEEE 802.11b standard is actually an annex to the 802.11 standard. This annex specifies the use of DSSS communications to provide operating rates of 1, 2, 5.5, and 11 Mbps. A third IEEE wireless LAN standard, IEEE 802.11a, represents another annex to the original standard. Although 802.11- and 802.11b-compatible equipment operates in the 2.4-GHz unlicensed frequency band, the need for additional bandwidth to support higher data rates resulted in the 802.11a standard using the 5-GHz frequency band. Although 802.11a equipment can transfer data at rates up to 54 Mbps, because higher frequencies attenuate more rapidly than lower frequencies, approximately four times the number of access points are required to service a given geographic area than if 802.11b equipment is used. Due to this, as well as the fact that 802.11b equipment reached the market prior to 802.11a devices, the vast majority of wireless LANs are based on the use of 802.11b-compatible equipment.

SECURITY

Under all three IEEE 802.11 specifications, security is handled in a similar manner. The three mechanisms that affect wireless LAN security under the troika of 802.11 specifications include the specification of the network name, authentication, and encryption.


Network Name

To understand the role of the network name requires a small diversion to discuss a few wireless LAN network terms. Each device in a wireless LAN is referred to as a station, to include both clients and access points. Client stations can communicate directly with one another, which is referred to as ad hoc networking. Client stations can also communicate with other clients, both wireless and wired, through the services of an access point; the latter type of networking is referred to as infrastructure networking. In an infrastructure networking environment, the group of wireless stations, to include the access point, forms what is referred to as a basic service set (BSS). The basic service set is identified by a name. That name, which is formally referred to as the service set identifier (SSID), is also referred to as the network name.

One can view the network name as a password. Each access point normally is manufactured with a set network name that can be changed. To be able to access an access point, a client station must be configured with the same network name as that configured on the access point. Unfortunately, there are three key reasons why the network name is almost valueless as a password. First, most vendors use a well-known default setting that can be easily learned by surfing to the vendor's Web site and accessing the online manual for the access point. For example, Netgear uses the network name "Wireless." Second, access points periodically transmit beacon frames that define their presence and operational characteristics, to include their network name. Thus, a wireless protocol analyzer, such as WildPackets' Airopeek or Sniffer Technologies' Wireless Sniffer, could be used to record beacon frames as a mechanism to learn the network name. A third problem associated with the use of the network name as a password for access to an access point is the fact that there are two client settings that can be used to override most access point network name settings. The configuration of a client station to a network name of "ANY," or its setting to a blank, can normally override the network name setting on an access point. Exhibit 11-1 illustrates the use of the SMC Networks' EZ Connect Wireless LAN Configuration Utility program to set the SSID to a value of "ANY." Once this action was accomplished, this author was able to access a Netgear wireless router/access point whose SSID was by default set to a value of "Wireless." Thus, the use of the SSID or network name as a password to control access to a wireless LAN must be considered a facility that is easily compromised and one that offers very limited protection.
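To illustrate how little protection the network name offers, the following Python sketch extracts the SSID from the body of a captured beacon frame. It assumes the capture tool has already stripped the radiotap and 802.11 MAC headers, leaving only the management frame body; the function name and the handling of a blank SSID are choices made for the example.

# The beacon frame body begins with 12 bytes of fixed fields (timestamp,
# beacon interval, capability information); tagged parameters follow, and
# tag number 0 carries the SSID in the clear.
def ssid_from_beacon_body(body):
    offset = 12
    while offset + 2 <= len(body):
        tag, length = body[offset], body[offset + 1]
        value = body[offset + 2: offset + 2 + length]
        if tag == 0:                                  # SSID element
            name = value.decode("ascii", errors="replace")
            return name if name else "<blank or hidden SSID>"
        offset += 2 + length
    return "<no SSID element found>"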



Exhibit 11-1. Setting the value of the SSID or network name to “ANY”.

Authentication

A second security mechanism included within all three IEEE wireless LAN specifications is authentication. Authentication represents the process of verifying the identity of a wireless station. Under the IEEE 802.11 standard, to include the two addenda, authentication can be either open or shared key. Open authentication in effect means that the identity of a station is not checked. The second method of authentication, referred to as shared key, assumes that when encryption is used, each station that has the correct key and is operating in a secure mode represents a valid user. Unfortunately, as will soon be noted, shared key authentication is vulnerable because the WEP key can be learned by snooping on the radio frequency.

Encryption

The third security mechanism associated with IEEE 802.11 networks is encryption. The encryption used under the 802.11 series of specifications


Exhibit 11-2. WEP settings.

The encryption used under the 802.11 series of specifications is referred to as Wired Equivalent Privacy (WEP). The initial goal of WEP is reflected by its name: its use is designed to provide a level of privacy equivalent to that of a wired LAN. Thus, some of the vulnerabilities uncovered concerning WEP should not be shocking, because the goal of WEP is not to bulletproof a network; it is simply to make over-the-air transmission difficult for a third party to understand. However, as we will note, there are several problems associated with the use of WEP that make it relatively easy for a third party to determine the composition of traffic flowing on a network.

Exhibit 11-2 illustrates the pull-down menu of WEP settings from the SMC Networks' Wireless LAN Configuration Utility program. Note that the highlighted entry of "Disabled" represents the default setting. This means that, by default, WEP is disabled; and unless you alter the configuration on your client stations and access points, any third party within transmission range could use a wireless LAN protocol analyzer to easily record all network activity.


In fact, during the year 2001, several articles appeared in The New York Times and The Wall Street Journal concerning the travel of two men in a van from one parking lot to another in Silicon Valley. Using a directional antenna focused at each building from a parking lot and a notebook computer running a wireless protocol analyzer program, these men were able to easily read most network traffic because most networks were operating with WEP disabled.

Although enabling WEP makes it more difficult to decipher traffic, the manner in which WEP encryption occurs has several shortcomings. Returning to Exhibit 11-2, note that the two WEP settings are shown as "64 Bit" and "128 Bit." Although the use of 64- and 128-bit encryption keys may appear to represent a significant barrier to decryption, the manner in which WEP encryption occurs creates several vulnerabilities. An explanation follows.

WEP encryption occurs via the creation of a key that is used to generate a pseudo-random binary string that is modulo-2 added to plaintext to create ciphertext. The algorithm that uses the WEP key is a stream cipher, meaning it uses the key to create an effectively endless pseudo-random binary string. Exhibit 11-3 illustrates the use of SMC Networks' Wireless LAN Configuration Utility program to create a WEP key. SMC Networks simplifies the entry of a WEP key by allowing the user to enter a passphrase; other vendors may allow the entry of hex characters or alphanumeric characters. Regardless of the manner in which a WEP key is entered, the total key length consists of two elements: an initialization vector (IV) that is 24 bits in length and the entered WEP key. Because the IV is part of the key, a user constructing a 64-bit WEP key actually specifies 40 bits in the form of a passphrase or ten hex digits, while a user constructing a 128-bit WEP key specifies 104 bits in the form of a passphrase or 26 hex digits.

Because wireless LAN transmissions can easily be reflected off surfaces and moving objects, multiple signals can flow to a receiver. This is referred to as multipath transmission, and the receiver needs to select the best transmission and ignore the other signals. As one might expect, this can be a difficult task, resulting in a transmission error rate considerably higher than that encountered on wired LANs. Due to this higher error rate, it would not be practical to use the WEP key by itself to create a stream cipher that continues indefinitely, because a single bit received in error would adversely affect the decryption of all subsequent data. Recognizing this fact, the IV is used along with the digits of the WEP key to produce a new per-frame key on a frame-by-frame basis. While this is a technically sound action, the 24-bit length of the IV used in conjunction with a 40- or 104-bit fixed-length WEP key causes several vulnerabilities. First, the IV is transmitted in the clear, allowing anyone with appropriate equipment to record its composition along with the encrypted frame data.


Exhibit 11-3. Creating a WEP encryption key.
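The per-frame keying just described can be made concrete with a short sketch. WEP's stream cipher is RC4: the transmitter prepends the 24-bit IV to the secret key, runs the result through RC4 to obtain a keystream, and modulo-2 adds (XORs) that keystream to the plaintext. The code below is a toy illustration of that construction, not an interoperable WEP implementation — it omits the integrity check value and all 802.11 framing, and the key and IV values are made up for the example.

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    """Generate 'length' bytes of RC4 keystream from 'key' (toy illustration)."""
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def wep_style_encrypt(iv: bytes, secret_key: bytes, plaintext: bytes) -> bytes:
    """Per-frame key = IV (3 bytes, sent in the clear) || secret key (5 bytes for '64-bit' WEP)."""
    per_frame_key = iv + secret_key
    keystream = rc4_keystream(per_frame_key, len(plaintext))
    # Modulo-2 addition of keystream and plaintext is simply a byte-wise XOR
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

iv = bytes([0x01, 0x02, 0x03])            # 24-bit IV, different for each frame
secret = bytes.fromhex("0badc0ffee")      # 40-bit shared WEP key (example value only)
frame = b"payroll data for Q3"
ciphertext = wep_style_encrypt(iv, secret, frame)
print(ciphertext.hex())
# Decryption is the same operation, because XORing the keystream twice cancels it out.
print(wep_style_encrypt(iv, secret, ciphertext))
```

Note that everything an eavesdropper needs to regenerate the keystream, except the 40- or 104-bit secret itself, travels in the clear with every frame.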

Because the IV is only 24 bits in length, it will periodically repeat. Thus, capturing two or more frames with the same IV along with their encrypted text makes it possible to perform a frequency analysis of the encrypted text that can be used as a mechanism to decipher the captured data. For example, assume one has captured several frames that had the same IV. Because "e" is the most common letter used in the English language, followed by the letter "t," one would begin a frequency analysis by searching for the most common letter in the encrypted frames. If the letter "x" was found to be the most frequent, there would be a high probability that the plaintext letter "e" was encrypted as the letter "x." Thus, the IV represents a serious weakness that compromises encryption.

During mid-2001, researchers at Rice University and AT&T Laboratories discovered that by monitoring approximately five hours of wireless LAN traffic, it became possible to determine the WEP key through a series of mathematical manipulations, regardless of whether a 64-bit or 128-bit key was used.
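The danger of a repeated IV can be seen directly: two frames encrypted under the same IV use the same keystream, and XORing the two ciphertexts cancels the keystream entirely, leaving only the XOR of the two plaintexts for the analyst to work on. A small sketch, reusing the toy rc4_keystream and wep_style_encrypt functions from the earlier example, makes the point (the plaintexts and key values are invented for illustration):

```python
# Two different frames encrypted under the SAME IV and secret key
iv = bytes([0x0a, 0x0b, 0x0c])
secret = bytes.fromhex("0badc0ffee")

p1 = b"meet at the data center at 9"
p2 = b"wire transfer approved today"

c1 = wep_style_encrypt(iv, secret, p1)
c2 = wep_style_encrypt(iv, secret, p2)

# The keystream cancels out: c1 XOR c2 == p1 XOR p2
xor_of_ciphertexts = bytes(a ^ b for a, b in zip(c1, c2))
xor_of_plaintexts = bytes(a ^ b for a, b in zip(p1, p2))
print(xor_of_ciphertexts == xor_of_plaintexts)   # True

# From here, classic frequency analysis and known-plaintext guessing
# (e.g., that "e" and "t" dominate English text) can recover both messages
# without ever learning the WEP key itself.
```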


This research was used by several software developers to produce programs such as AirSnort, whose use enables a person to determine the WEP key in use and to become a participant on a wireless LAN. Thus, the weakness of the WEP key results in shared key authentication being compromised as a mechanism to validate the identity of wireless station operators.

Given an appreciation for the vulnerabilities associated with wireless LAN security, one can now focus on the tools and techniques that can be used to minimize or eliminate such vulnerabilities.

MAC ADDRESS CHECKING

One of the first methods used to overcome the vulnerabilities associated with the use of the network name or SSID, as well as shared key authentication, was MAC address checking. Under MAC address checking, the LAN manager programs the MAC address of each client station into an access point. The access point then only allows frames whose source address field contains an authorized MAC address to use its facilities. Although MAC address checking provides a significant improvement over the use of a network name for controlling access to an access point, by itself it does nothing to alter the previously mentioned WEP vulnerabilities. To attack the vulnerability of WEP, several wireless LAN equipment vendors introduced the use of dynamic WEP keys.

Dynamic WEP Keys

Because WEP becomes vulnerable when a third party accumulates a significant amount of traffic that flows over the air using the same key, it becomes possible to enhance security by dynamically changing the WEP key. Several vendors have recently introduced dynamic WEP key capabilities as a mechanism to enhance wireless security. Under a dynamic key capability, a LAN administrator, depending on the product used, may be able to configure equipment to exchange WEP keys either on a frame-by-frame basis or at predefined intervals. The end result is to limit the ability of a third party to monitor enough traffic to either perform a frequency analysis of encrypted data or determine the WEP key in use. While dynamic WEP keys eliminate the vulnerability of continued use of a single WEP key, readers should note that each vendor supporting this technology does so on a proprietary basis. This means that if one anticipates using products from multiple vendors, one may have to forego the use of dynamic WEP keys unless the vendors selected have cross-licensed their technology to provide compatibility between products.

Having an appreciation for the manner in which dynamic WEP keys can enhance encryption security, this discussion of methods to minimize wireless security vulnerabilities concludes with a brief look at the emerging IEEE 802.1x standard.


THE IEEE 802.1X STANDARD

The IEEE 802.1x standard is being developed to control access to both wired and wireless LANs. Although the standard was not officially completed during early 2002, Microsoft added support for the technology in its Windows XP operating system, released in October 2001. Under the 802.1x standard, a wireless client station attempting to access a wired infrastructure via an access point will be challenged by the access point to identify itself. The client then transmits its identification to the access point, which forwards the challenge response to an authentication server located on the wired network. Upon authentication, the server informs the access point that the wireless client can access the network, and the access point then allows frames generated by the client to flow onto the wired network.

While the 802.1x standard can be used to enhance authentication, by itself it does not enhance encryption. Thus, one must consider the use of dynamic WEP keys as well as proprietary MAC address checking or an 802.1x authentication method to fully address wireless LAN security vulnerabilities.

Additional Reading

Held, G., "Wireless Application Directions," Data Communications Management (April/May 2002).
Lee, D.S., "Wireless Internet Security," Data Communications Management (April/May 2002).

ABOUT THE AUTHOR

Gilbert Held is an award-winning author and lecturer. Gil is the author of over 40 books and 450 technical articles. Some of Gil's recent book titles include Building a Wireless Office and The ABCs of IP Addressing, published by Auerbach Publications. Gil can be reached via e-mail at [email protected].


Chapter 12

Voice Security

Chris Hare, CISSP, CISA

Most security professionals in today's enterprise spend much of their time working to secure access to corporate electronic information. However, voice and telecommunications fraud still costs the corporate business communities millions of dollars each year. Most losses in the telecommunications arena stem from toll fraud, which is perpetrated by many different methods.

Millions of people rely upon the telecommunication infrastructure for their voice and data needs on a daily basis. This dependence has resulted in the telecommunications system being classed as a critical infrastructure component. Without the telephone, many of our daily activities would be more difficult, if not almost impossible.

When many security professionals think of voice security, they automatically think of encrypted telephones, fax machines, and the like. However, voice security can be much simpler and start right at the device to which your telephone is connected. This chapter looks at how the telephone system works, toll fraud, voice communications security concerns, and applicable techniques for any enterprise to protect its telecommunication infrastructure. Explanations of commonly used telephony terms are found throughout the chapter.

POTS: PLAIN OLD TELEPHONE SERVICE

Most people refer to it as "the phone." They pick up the receiver, hear the dial tone, and make their calls. They use it to call their families, conduct business, purchase goods, and get help or emergency assistance. And they expect it to work all the time. The telephone service we use on a daily basis in our homes is known in the telephony industry as POTS, or plain old telephone service. POTS is delivered to the subscriber through several components (see Exhibit 12-1):

• The telephone handset
• Cabling
• A line card
• A switching device


Exhibit 12-1. Components of POTS.

The telephone handset, or station, is the component with which the public is most familiar. When the customer picks up the handset, the circuit is closed and established to the switch. The line card signals to the processor in the switch that the phone is off the hook, and a dial tone is generated. The switch collects the digits dialed by the subscriber, whether the subscriber is using a pulse phone or Touch-Tone®. A pulse phone alters the voltage on the phone line, which opens and closes a relay at the switch; this is the cause of the clicks or pulses heard on the line. With Touch-Tone dialing, the tones generated by the handset's keypad are decoded at the switch. The processor in the switch accepts the digits and determines the best way to route the call to the receiving subscriber. The receiving telephone set may be attached to the same switch or connected to another halfway around the world. Regardless, the routing of the call happens in a heartbeat due to a very complex network of switches, signaling, and routing. However, the process of connecting the telephone to the switching device, or of connecting switching devices together to increase calling capabilities, uses lines and trunks.

Connecting Things Together

The problem with most areas of technology is terminology, and the telephony industry is no different. Trunks and lines both refer to the same thing — the circuitry and wiring used to deliver the signal to the subscriber. The fundamental difference between them is where they are used. Both trunks and lines can be digital or analog. The line is primarily associated with the wiring from the telephone switch to the subscriber (see Exhibit 12-2).


Exhibit 12-2. Trunks and lines.

This can be either a residential or business subscriber connected directly to the telephone company's switch, or a PBX. Essentially, the line is typically associated with carrying the communications of a single subscriber to the switch. The trunk, on the other hand, is generally the connection from the PBX to the telephone carrier's switch, or from one switch to another. A trunk performs the same function as the line; the only difference is the amount of traffic or number of calls the two can carry. Because the trunk is used to connect switches together, it can carry much more traffic and many more calls than the line. The term circuit is often used to describe the connection from one device to the other, without regard for the type of connection (analog or digital) or the devices on either end.

Analog versus Digital

Both the trunk and the line can carry either analog or digital signals, although only one type at a time. Conceptually, the connection from origin to destination is called a circuit, and there are two principal circuit types. Analog circuits are used to carry voice traffic, as well as digital signals after conversion to sounds. While analog is traditionally associated with voice circuits, many voice calls are made and processed through digital equipment; however, the process of analog/digital conversion is an intense technical discussion and is not described here.

An analog circuit uses variations in amplitude (volume) and frequency to transmit information from one caller to the other. The circuit has an available bandwidth of 64 kbps, although 8 kbps of that bandwidth is used for signaling between the handset and the switch, leaving 56 kbps for the actual voice or data signals.
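The 64K and 56K figures quoted above can be reproduced with a little arithmetic. The sketch below simply restates the chapter's numbers — a 64-kbps channel with 8 kbps attributed to signaling — alongside the standard PCM derivation of 64 kbps (8000 samples per second at 8 bits per sample); treat it as an illustration of the chapter's figures rather than a full treatment of channel signaling.

```python
# Where the familiar 64K and 56K figures come from (illustrative arithmetic).
SAMPLE_RATE_HZ = 8000        # PCM voice is sampled 8000 times per second
BITS_PER_SAMPLE = 8          # each sample is encoded in 8 bits (mu-law or a-law)

channel_bps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE
print(channel_bps)           # 64000 -> the 64-kbps voice channel

signaling_bps = 8000         # portion the chapter attributes to handset/switch signaling
usable_bps = channel_bps - signaling_bps
print(usable_bps)            # 56000 -> why analog modems top out at 56 kbps
```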


Think about connecting a computer modem to a phone line. The maximum speed at which the modem can function is 56 kbps; the rationale for the 56K modem should be obvious now. However, most people know a modem connection is rarely made at 56K due to the quality of the circuit, line noise, and the distance from the subscriber to the telephone carrier's switch. Modems are discussed again later in the chapter.

Because analog lines carry the actual voice signals for the conversation, they can be easily intercepted. Anyone with more than one phone in his or her house has experienced the problem of eavesdropping: anyone who can access the phone circuit can listen to the conversation. A phone tap is not really required — only knowledge of which wires to attach to and a telephone handset. However, despite the problem associated with eavesdropping, many people do not concern themselves too much with the possibility that someone may be listening to their phone call.

The alternative to analog is digital. While the analog line uses sound to transmit information, the digital circuit uses digital signals to represent data. Consequently, digital circuit technologies are capable of carrying significantly higher speeds as the bandwidth increases on the circuit. Digital circuits offer a number of advantages. They can carry higher amounts of data traffic and more simultaneous telephone calls than an analog circuit, and they offer better protection from eavesdropping and wiretapping due to their design. However, despite the digital signal, any telephone station sharing the same circuit can still eavesdrop on the conversation without difficulty.

The circuits themselves are not the principal cause of security problems. Rather, the concern for most enterprises and individuals arises from the unauthorized and inappropriate use of those circuits. Lines and trunks can be used in many different ways and configurations to provide the required level of service. Typically, the line connected to a station offers both incoming and outgoing calls. However, this does not have to be the case in all situations.

Direct Inward Dial (DID)

If an outside caller must be connected with an operator before reaching their party in the enterprise, the system is generally called a key switch PBX. However, many PBX systems offer direct inward dial, or DID, where each telephone station is assigned a telephone number that connects the external caller directly to the call recipient. Direct inward dial makes reaching the intended recipient easier because no operator is involved. However, DID also has disadvantages.


Modems connected to DID services can be reached by authorized and unauthorized persons alike. DID also makes it easier for individuals to call and solicit information from the workforce without being screened through a central operator or attendant.

Direct Outward Dial (DOD)

Direct outward dial is exactly the opposite of DID. Some PBX installations require the user to select a free line on their phone or access an operator to place an outside call. With DOD, the caller picks up the phone, dials an access code, such as the digit 9, and then the external phone number. The call is routed to the telephone carrier and connected to the receiving party.

The telephone carrier assembles the components described here to provide service to its subscribers. The telephone carriers then interconnect their systems through gateways to provide the public switched telephone network.

THE PUBLIC SWITCHED TELEPHONE NETWORK (PSTN)

The public switched telephone network is a collection of telephone systems maintained by telephone carriers to provide a global communications infrastructure. It is called the public switched network because it is accessible to the general public and it uses circuit-switching technology to connect the caller to the recipient. The goal of the PSTN is to connect the two parties as quickly as possible, using the shortest possible route. However, because the PSTN is dynamic, it can often configure and route the call over a more complex path to achieve the call connection on the first attempt. While this is extremely complex on a national and global scale, enterprises use a smaller version of the telephone carrier switch called a PBX (or private branch exchange).

THE PRIVATE AREA BRANCH EXCHANGE (PABX)

The private area branch exchange, or PABX, is also commonly referred to as a PBX; you will see the terms used interchangeably. The PBX is effectively a telephone switch for an enterprise; and, like the enterprise, it comes in different sizes. The PBX provides the line card, call processor, and some basic routing. The principal difference is how the PBX connects to the telephone carrier's network. If we compare the PBX to a router in a data network connecting to the Internet, both devices know only one route to send information, or telephone calls, to points outside the network (see Exhibit 12-3).


Exhibit 12-3. PBX connection.

Exhibit 12-4. Network class-of-service levels.

Level  Internal  Local Seven-Digit Dialing  Local Ten-Digit Dialing  Domestic Long Distance  International Long Distance
  1       X
  2       X                X                          X
  3       X                X                          X
  4       X                X                          X                        X                        X

The PBX has many telephone stations connected to it, like the telephone carrier's switch. The PBX knows how to route calls to the stations connected directly to the same PBX. A call for an external telephone number is routed to the carrier's switch, which then processes the call and routes it to the receiving station. Both devices have similar security issues, although the telephone carrier has specific concerns: the telephone communications network is recognized as a critical infrastructure element, and there is liability associated with failing to provide service. The enterprise rarely has to deal with these issues; however, the enterprise that fails to provide sufficient controls to prevent the compromise of its PBX may also face specific liabilities.

Network Class of Service (NCOS)

Each station on the PBX can be configured with a network class of service, or NCOS. The NCOS defines the type of calls the station can make. Exhibit 12-4 illustrates different NCOS levels. When examining the table, we can see that each class of service offers new abilities for the user at the phone station. Typically, class of service is assigned to the station and not the individual, because few phone systems require user authentication before placing the call.
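The nesting of privileges in Exhibit 12-4 is easy to express as a configuration check. The sketch below is purely illustrative — the level numbers and call categories mirror the exhibit, but the data structure and function are invented for this example and do not correspond to any particular PBX's administration interface.

```python
# Illustrative model of the NCOS levels in Exhibit 12-4 (not a real PBX interface).
NCOS_PERMISSIONS = {
    1: {"internal"},
    2: {"internal", "local_7_digit", "local_10_digit"},
    3: {"internal", "local_7_digit", "local_10_digit"},
    4: {"internal", "local_7_digit", "local_10_digit",
        "domestic_long_distance", "international_long_distance"},
}

def call_allowed(station_ncos: int, call_type: str) -> bool:
    """Return True if a station with the given NCOS may place this type of call."""
    return call_type in NCOS_PERMISSIONS.get(station_ncos, set())

# A lobby phone assigned NCOS 1 cannot place toll calls:
print(call_allowed(1, "internal"))                     # True
print(call_allowed(1, "domestic_long_distance"))       # False
print(call_allowed(4, "international_long_distance"))  # True
```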


NOTE: Blocking specific phone numbers or area codes, such as 976, 900, or 809, is not done at the NCOS level but through other call-blocking methods available in the switch.

By assigning NCOS to various phones, some potential security problems can be avoided. For example, if your enterprise has a phone in the lobby, it should be configured with a class of service low enough to allow calls to internal extensions or local calls only. Long distance should not be permitted from any open-area phone due to the cost associated with those calls. In some situations, it may be desirable to limit the ability of a phone station to receive calls while still allowing outgoing calls. This can be defined as another network class of service, without affecting the capabilities of the other stations. However, not all PBX systems have this feature. If your enterprise systems have it, it should be configured to allow employees only the ability to make the calls that are required for their specific job responsibilities.

VOICEMAIL

Voicemail is ubiquitous in communications today. However, voicemail is often used as the path to the telephone system and free phone calls for the attacker — and toll fraud for the system owner. Voicemail is used for recording telephone messages for users who are not available to answer their phones. The user accesses messages by entering an identifier, which is typically their phone extension number, and a password.

Voicemail problems typically revolve around password management. Because voicemail must work with the phone, the password can only contain digits. This means attacking the password is relatively trivial from the attacker's perspective. Consequently, the traditional password and account management issues exist here as in other systems:

• Passwords the same as the account name
• No password complexity rules
• No password aging or expiry
• No account lockout
• Other voicemail configuration issues

A common configuration problem is through-dialing. With through-dialing, the voicemail system accepts a phone number and places the call. The feature can be restricted to allow only internal or local numbers, or it can be disabled. If through-dialing is allowed and not properly configured, the enterprise pays the bills for the long-distance or other toll calls made.


Attackers use stale mailboxes — those that have not been accessed in a while — to attempt to gain access to the mailbox. If the mailbox password is obtained and the voicemail system is configured to allow through-dialing, the attackers are now making free calls. The attacker first changes the greeting on the mailbox to a simple "yes." Now, any collect call made through an automated system expecting the word response "yes" is automatically accepted. The enterprise pays the cost of the call. The attacker enters the account identifier, typically the phone extension for the mailbox, and the password. Once authenticated by the voicemail system, the attacker then enters the appropriate code and phone number for the external through-call. If there are no restrictions on the digits available, the attacker can dial any phone number anywhere in the world.

The scenario depicted here can be avoided using simple techniques applicable to most systems:

• Change the administrator and attendant passwords.
• Do not use the extension number as the initial password.
• Disable through-dialing.
• Configure voicemail to use a minimum of six digits for the password.
• Enable password history options if available.
• Enable password expiration if available.
• Remove stale mailboxes.
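Several of these items are simple policy checks that an administrator could script against an export of mailbox settings. The following sketch shows the idea; the mailbox record format and field names are invented for the example, since the real data would come from whatever reporting facility a given voicemail platform provides.

```python
from datetime import date, timedelta

# Hypothetical mailbox records; a real list would come from the voicemail platform's reports.
mailboxes = [
    {"extension": "4312", "password": "4312",   "last_access": date(2002, 1, 5)},
    {"extension": "4410", "password": "771234", "last_access": date(2002, 10, 30)},
]

MIN_PASSWORD_DIGITS = 6
STALE_AFTER = timedelta(days=90)
today = date(2002, 11, 14)

def audit_mailbox(box):
    findings = []
    if box["password"] == box["extension"]:
        findings.append("password is the same as the extension")
    if len(box["password"]) < MIN_PASSWORD_DIGITS:
        findings.append("password shorter than six digits")
    if today - box["last_access"] > STALE_AFTER:
        findings.append("stale mailbox - not accessed in over 90 days")
    return findings

for box in mailboxes:
    for finding in audit_mailbox(box):
        print(f"Extension {box['extension']}: {finding}")
```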

Properly configured, voicemail is a powerful tool for the enterprise, as are the data network and voice conferencing.

VOICE CONFERENCING

Many enterprises use conference calls to regularly conduct business. In the current economic climate, many enterprises use conference calls as a cost-efficient alternative to travel for meetings across disparate locations. The conference call uses a "bridge," which accepts the calls and determines which conference the caller is to be routed to based upon the phone number and the conference call password.

The security options available on a conference call bridge are technology dependent. Regardless, participants on the conference call should be reminded not to discuss enterprise-sensitive information, because anyone who acquires or guesses the conference call information could join the call. Consequently, conference call participant information should be protected to limit participation.

Conference bridges are used for single-time, repetitive, and ad hoc calls using various technologies. Some conference call vendors provide services allowing anyone in the enterprise to have an on-demand conference bridge.


These conference bridges use a "host" or chairperson who must be present to start the conference call. The chairperson has a second passcode, used to initiate the call. Any user who learns the host or chairperson code can use the bridge at any time. Security issues regarding conference bridges include:

• Loss of the chairperson code
• Unauthorized use of the bridge
• Inappropriate access to the bridge
• Loss of sensitive information on the bridge

All of these issues are addressed through proper user awareness — which is fortunate because few enterprises actually operate their own conference bridge, relying instead upon the telephone carrier to maintain the configurations. If possible, the conference bridge should be configured with the following settings and capabilities:

• The conference call cannot start until the chairperson is present.
• All participants should be disconnected when the chairperson disconnects from the bridge.
• The chairperson should have the option of specifying a second security access code to enter the bridge.
• The chairperson should have commands available to manipulate the bridge, including counting the number of ports in use, muting or un-muting the callers, locking the bridge, and reaching the conference operator.

The chairperson's commands are important for the security of the conference call. Once all participants have joined, the chairperson should verify everyone is there and then lock the bridge. This prevents anyone else from joining the conference call.

SECURITY ISSUES

Throughout the chapter, we have discussed technologies and security issues. However, regardless of the specific configuration of the phone system your enterprise is using, there are some specific security concerns you should be knowledgeable of.

Toll Fraud

Toll fraud is a major concern for enterprises, individuals, and the telephone carriers. Toll fraud occurs when toll-based or chargeable telephone calls are fraudulently made. There are several methods of toll fraud, including inappropriate use by authorized users, theft of services, calling cards, and direct inward dialing to the enterprise's communications system.


According to a 1998 Consumer News report, about $4 billion is lost to toll fraud annually. The report is available online at http://www.fcc.gov/Bureaus/Common_Carrier/Factsheets/ttf&you.pdf. The cost of the fraud is eventually passed on to businesses and consumers through higher communications costs. In some cases, the telephone carrier holds the subscriber responsible for the charges, which can be devastating. Consequently, enterprises can pay for toll fraud insurance, which pays the telephone carrier after the enterprise pays the deductible. While toll fraud insurance sounds appealing, it is expensive and the deductibles are generally very high.

It is not impossible to identify toll fraud within your organization. If you have a small enterprise, simply monitoring the phone usage of the various people should be enough to identify calling patterns. For larger organizations, it may be necessary to get calling information from the PBX for analysis. For example, if you can capture the call records for each telephone call, it is possible to assign a cost to each call.

Inappropriate Use of Authorized Access

Every employee in an enterprise typically has a phone on the desk, or access to a company-provided telephone. Most employees have the ability to make long-distance toll calls from their desks. While most employees make long-distance calls on a daily basis as part of their jobs, many will not think twice about making personal long-distance calls at the enterprise's expense.

Monitoring this type of usage and preventing it is difficult for the enterprise. Calling patterns, frequently-called-number analysis, and advising employees of their monthly telecommunications costs are a few ways to combat this problem. Additionally, corporate policies regarding the use of corporate telephone services and penalties for inappropriate use should be established if your enterprise does not have them already. Finally, many organizations use billing or authorization codes when making long-distance phone calls to track the usage and bill the charges to specific departments or clients.

However, if your enterprise has its own PBX with conditional toll deny (CTD) as a feature, you should consider enabling it on phone stations where long-distance or toll calls are not permitted. For example, users should not be able to call specific phone numbers or area codes; alternatively, a phone station may be denied toll-call privileges altogether. In Europe, implementing CTD is more difficult because it is not uncommon to call many different countries in a single day, so management of the CTD parameters becomes very difficult. CTD can be configured as a specific option in an NCOS definition, as discussed earlier in the chapter.
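The call-record analysis mentioned above lends itself to simple automation. The sketch below flags calls that exceed a cost threshold or fall outside business hours; the record layout, rate table, and thresholds are all invented for the example — real call detail records vary by PBX vendor and would need their own parser.

```python
# Toll-fraud screening over call detail records (illustrative; record format is hypothetical).
RATE_PER_MINUTE = {"local": 0.00, "domestic_ld": 0.10, "international_ld": 1.25}
COST_ALERT = 25.00          # flag any single call costing more than this
BUSINESS_HOURS = range(7, 19)

call_records = [
    {"extension": "4312", "dialed": "011442071234567", "type": "international_ld",
     "minutes": 85, "hour": 2},
    {"extension": "4410", "dialed": "2145550100", "type": "domestic_ld",
     "minutes": 12, "hour": 14},
]

def screen(record):
    cost = RATE_PER_MINUTE[record["type"]] * record["minutes"]
    reasons = []
    if cost > COST_ALERT:
        reasons.append(f"high cost (${cost:.2f})")
    if record["hour"] not in BUSINESS_HOURS:
        reasons.append("outside business hours")
    return reasons

for rec in call_records:
    for reason in screen(rec):
        print(f"Extension {rec['extension']} -> {rec['dialed']}: {reason}")
```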


Calling Cards

Calling cards are the most common form of toll fraud. Calling-card numbers are stolen and sold on a daily basis around the world. Calling-card theft typically occurs when an individual observes the subscriber entering the number into a public phone. The card number is then recorded by the thief and sold to make other calls. Calling-card theft is a major problem for telephone carriers, who often have specific fraud units for tracking thieves, and calling software that monitors calling patterns and alerts the fraud investigators to unusual activity.

In some cases, hotels will print the calling-card number on the invoices provided to their guests, making the numbers available to a variety of people. Additionally, if the PBX is not configured correctly, the calling-card information is shown on the telephone display, making it easy for anyone nearby to see the digits and use the number. Other PBX-based problems include last number redial: if the PBX supports last number redial, any employee can recall the last number dialed and obtain the access and calling-card numbers.

Employees should be aware of the problems and costs associated with the illegitimate use of calling cards. Proper protection while using a calling card includes:

• Shielding the number with your hands when entering it
• Memorizing the number so you do not have a card visible when making the call
• Ensuring your company PBX does not store the digits for last number redial
• Ensuring your enterprise PBX does not display the digits on the phone for an extended period of time

Calling cards provide a method for enterprise employees to call any number from any location. However, some enterprises may decide this is not appropriate for their employees. Consequently, they may offer DISA access to the enterprise phone network as an alternative.

DISA

Direct inward system access, or DISA, is a service available on many PBX systems. DISA allows a user to dial an access number, enter an authorization code, and appear to the PBX as an extension. This allows callers to make calls as if they were in the office building, whether the calls are internal to the PBX or external to the enterprise. DISA offers some distinct advantages. For example, it removes the need to provide calling cards for your employees because they can call a number and be part of the enterprise voice network.


Additionally, long-distance calls placed through DISA services are billed at the corporate rate because the telephone carrier sees the calls as originating from the enterprise.

DISA's advantages also represent problems. If the DISA access number becomes known, an unauthorized user only needs to try random numbers to find an authorization code. Given enough time, they will eventually find one and start making what are, from their perspective, free calls. However, your enterprise pays the bill. DISA authorization codes, which must be considered passwords, are numeric only because there is no way to enter alphabetic characters on the telephone keypad. Consequently, even an eight-digit authorization code is easily defeated. If your organization does use DISA, there are some things you can do to assist in preventing fraudulent access to the service:

• Frequent analysis of calling patterns
• Monthly "invoices" to the DISA subscribers to keep them aware of the service they are using
• Using a minimum of eight-digit authorization codes
• Forcing changes of the authorization codes every 30 days
• Disabling inactive DISA authorization codes if they are not used for a prescribed period of time or a usage limit is reached
• Enabling authorization code alarms to indicate attempts to defeat or guess DISA authorization codes

The methods discussed are often used by attackers to gain access to the phone system and make unauthorized telephone calls. However, technical aspects aside, some of the more skillful attacks occur through social engineering techniques.

SOCIAL ENGINEERING

The most common ploy from a social engineering perspective is to call an unsuspecting person, indicate that the attacker is from the phone company, and request an outside line. The attacker then makes the phone call to the desired location, talks for as long as required, and hangs up. As long as they can find numbers to dial and do not have to go through a central operator, this can go on for months.

Another social engineering attack occurs when a caller claims to be a technical support person. The attacker will solicit confidential information, such as passwords, access numbers, or ID information, all under the guise of providing support or maintenance to ensure the user's service is not disrupted. In actuality, the attacker is gathering sensitive information to better understand the enterprise environment and enable an attack.


OTHER VOICE SERVICES

There are other voice services that also create issues for the enterprise, including modems, fax, and wireless services.

Modems

Modems are connected to the enterprise through traditional technologies using the public switched telephone network. Modems provide a method of connectivity through the PSTN to the enterprise data network. When installed on a DID circuit, the modem answers the phone when an incoming call is received. Attackers have regularly looked for these modems using war-dialing techniques. If your enterprise must provide modems to connect to the enterprise data network, these incoming lines should be outside the enterprise's normal dialing range, which makes them more difficult for the attacker to find. However, because many end stations are analog, a user could connect a modem to the desktop phone without anyone's knowledge. This is another advantage of digital circuits. While digital-to-analog converters exist to connect a modem to a digital circuit, this is not infallible technology. Should your enterprise use digital circuits to the desktop, you should implement a program to document and approve all incoming analog circuits and their purpose. This is very important for modems due to their connectivity to the data network.

Fax

The fax machine is still used in many enterprises to send information not easily communicated through other means. The fax transmission sends information such as scanned documents to the remote fax system. The principal concern with fax is the lack of control over the document at the receiving end. For example, if a document is sent to me using a fax in a shared area, anyone who checks the fax machine can read the message. If the information in the fax is sensitive, private, or otherwise classified, control of the information should be considered lost. A second common problem is misdirected faxes — that is, the fax is successfully transmitted, but to the wrong telephone number, so the intended recipient does not receive it. However, fax can be controlled through various means, such as dedicated fax machines in controlled areas. For example:


• Contact the receiver prior to sending the fax.
• Use a dedicated and physically secure fax if the information requires it.
• Use a cover page asking for immediate delivery to the recipient.
• Use a cover page asking for notification if the fax is misdirected.

Fax requires the use of analog lines because it uses a modem to establish the connection. Consequently, the inherent risks of the analog line are applicable here. If an attacker can monitor the line, he may be able to intercept the modem tones from the fax machine and read the fax. This problem can be addressed through encrypted fax if document confidentiality is an ultimate concern. Encrypted fax requires a common or shared key between the two fax machines. Once the connection is established, the document is sent using the shared encryption key and subsequently decoded and printed on the receiving fax machine. If the receiving fax machine does not have the shared key, it cannot decode the fax. Given the higher cost of the encrypted fax machine, it is only a requirement for the most highly classified documents.

Cellular and Wireless Access

Cellular and wireless access to the enterprise is also a problem due to the issues associated with cellular. Wireless access in this case does not refer to wireless access to the data network, but rather wireless access to the voice network. However, this type of access should concern the security professional because the phone user will employ services such as calling cards and DISA to access the enterprise's voice network. Because cellular and wireless access technologies are often subject to eavesdropping, the DISA access codes or calling-card number could potentially be retrieved from the wireless caller. The same is true for conversations — if the conversation between the wireless caller and the enterprise user is of a sensitive nature, it should not be conducted over wireless. Additionally, the chairperson for a conference call should find out if there is anyone on the call who is on a cell phone and determine if that level of access is appropriate for the topic to be discussed.

VOICE-OVER-IP: THE FUTURE

The next set of security challenges for the telecommunications industry is Voice-over-IP. The basis for the technology is to convert the voice signals to packets, which are then routed over the IP network. Unlike the traditional circuit-switched voice network, Voice-over-IP is a packet-switched network.


Consequently, the same types of problems found in a data network are found in Voice-over-IP technology. There are a series of problems in the Voice-over-IP technologies, on which the various vendors are collaborating to establish the appropriate standards to protect the privacy of the Voice-over-IP telephone call. Some of those issues include:

• No authentication of the person making the call
• No encryption of the voice data, allowing anyone who can intercept the packets to reassemble them and hear the voice data
• Quality of service, because the data network has not traditionally been designed to provide the quality-of-service levels associated with the voice network

The complexities in the Voice-over-IP arena, for both the technology and the related security issues, will continue to develop and resolve themselves over the next few years.

SUMMARY

This chapter introduced the basics of telephone systems and security issues. The interconnection of the telephone carriers to establish the public switched telephone network is a complex process. Every individual demands there be a dial tone when they pick up the handset of their telephone. Such is the nature of this critical infrastructure. However, enterprises often consider the telephone their critical infrastructure as well, whether they get their service directly from the telephone carrier or use a PBX, connected to the public network, to provide internal services. The exact configurations and security issues are generally very specific to the technology in use.

This chapter has presented some of the risks and prevention methods associated with traditional voice security. The telephone is the easiest way to obtain information from a company, and the fastest method of moving information around in a nondigital form. Aside from implementing the appropriate configurations for your technologies, the best defense is ensuring your users understand their role in limiting financial and information losses through the telephone network.

Acknowledgments

The author wishes to thank Beth Key, a telecommunications security and fraud investigator from Nortel Networks' voice service department. Ms. Key provided valuable expertise and support during the development of this chapter.


Mignona Cote of Nortel Networks' security vulnerabilities team provided her experiences as an auditor in a major U.S. telecommunications carrier prior to joining Nortel Networks.

The assistance of both of these remarkable women contributed to the content of this chapter; they are examples of the quality and capabilities of the women in our national telecommunications industry.

References

PBX Vulnerability Analysis: Finding Holes in Your PBX before Someone Else Does, U.S. Department of Commerce, NIST Special Pub. 800-24, http://csrc.nist.gov/publications/nistpubs/800-24/sp800-24pbx.pdf.
Security for Private Branch Exchange Systems, http://csrc.nist.gov/publications/nistbul/itl00-08.txt.

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.


Chapter 13

Secure Voice Communications (VoI)

Valene Skerpac, CISSP

Voice communications is in the midst of an evolution toward network convergence. Over the past several decades, the coalescence of voice and data through the circuit-based, voice-centric public switched telephone network (PSTN) has been limited. Interconnected networks exist today, each maintaining its own set of devices, services, service levels, skill sets, and security standards. These networks anticipate the inevitable and ongoing convergence onto packet- or cell-based, data-centric networks primarily built for the Internet. Recent deregulation changes and cost savings, as well as the potential for new media applications and services, are now driving a progressive move toward voice over some combination of ATM, IP, and MPLS. This new-generation network aims to include novel types of telephony services that utilize packet-switching technology to gain transmission efficiencies while also allowing voice to be packaged in more standard data applications. New security models that include encryption and security services are necessary in telecommunication devices and networks.

This chapter reviews architectures, protocols, features, quality-of-service (QoS), and security issues associated with traditional circuit-based landline and wireless voice communication. The chapter then examines convergence architectures, the effects of evolving standards-based protocols, new quality-of-service methods, and related security issues and solutions.

CIRCUIT-BASED PSTN VOICE NETWORK

The PSTN has existed in some form for over 100 years. It includes telephones, local and interexchange trunks, transport equipment, and exchanges; it represents the whole traditional public telephone system.


The foundation of the PSTN is dedicated 64-kbps circuits. Two kinds of 64-kbps pulse code modulation techniques are used to encode human analog voice signals into digital streams of 0s and 1s: mu-law, the North American standard, and a-law, the European standard. The PSTN consists of the local loop that physically connects buildings via landline copper wires to an end office switch called the central office or Class 5 switch. Communication between central offices connected via trunks is performed through a hierarchy of switches related to call patterns. Many signaling techniques are utilized to perform call control functions. For example, analog connections to the central office use dual-tone multifrequency (DTMF) signaling, an in-band signaling technique transmitted over the voice path. Central office connections through a T1/E1 or T3/E3 use in-band signaling techniques such as MF or robbed bit.

After World War II, the PSTN experienced high demand for greater capacity and increased function. This initiated new standards efforts, which eventually led to the organization in 1956 of the CCITT, the Comité Consultatif International Télégraphique et Téléphonique, known today as the ITU-T, the International Telecommunication Union Telecommunication Standardization Sector. Recommendations known as Signaling System 7 (SS7) were created, and in 1980 a version was completed for implementation. SS7 is a means of sending messages between switches for basic call control and for custom local area signaling services (CLASS). The move to SS7 represented a change to common-channel signaling versus its predecessor, per-trunk signaling. SS7 is fundamental to today's networks. There are two essential architectural aspects of SS7. First, a packet data network controls and operates on top of the underlying voice networks. Second, a completely separate transmission path is utilized for the signaling information of voice and data traffic. The signaling system is a packet network optimized to speedily manage many signaling messages over one channel; it supports required functions such as call establishment, billing, and routing.

Architecturally, the SS7 network consists of three components, as shown in Exhibit 13-1: service switch points (SSPs), service control points (SCPs), and signal transfer points (STPs). SSP switches originate and terminate calls, communicating with customer premise equipment (CPE) to process calls for the user. SCPs are centralized nodes that interface with the other components through the STP to perform functions such as digit translation, call routing, and verification of credit cards; SCPs manage the network configuration and call-completion database to perform the required service logic. STPs translate and route SS7 messages to the appropriate network nodes and databases. In addition to the SS7 signaling data link, there are a number of other SS7 links between the SS7 components, certain of which help to ensure a reliable SS7 network.


Exhibit 13-1. Diagram of SS7 key components and links.

Functional benefits of SS7 networks include reduced post-dialing delay, increased call completion, and connection to the intelligent network (IN). SS7 supports shared databases among switches, providing the groundwork for IN network-based services such as 800 services and advanced intelligent networks (AINs). SS7 enables interconnection and enhanced services, making the whole next generation and conversion possible.

The PSTN assigns a unique number to each telephone line. There are two numbering plans: the North American numbering plan (NANP) and the ITU-T international numbering plan. NANP is an 11-digit or 1+10 dialing plan, whereas the ITU-T plan is no more than 15 digits, depending on the needs of the country. Commonly available PSTN features are call waiting, call forwarding, and three-way calling. With SS7 end to end, CLASS features such as ANI, call blocking, calling line ID blocking, automatic callback, and call return (*69) are ready for use. Interexchange carriers (IXCs) sell business features including circuit-switched long distance, calling cards, 800/888/877 numbers, VPNs (where the telephone company manages a private dialing plan), private leased lines, and virtual circuits (Frame Relay or ATM). Security features may include line restrictions, employee authorization codes, virtual access to private networks, and detailed call records to track unusual activity. The PSTN is mandated to perform emergency services. The basic U.S. 911 service relays the calling party's telephone number to public safety answering points (PSAPs). Enhanced 911 requirements include the location of the calling party, with some mandates as stringent as location within 50 meters of the handset.

The traditional enterprise private branch exchange (PBX) is crucial to the delivery of high availability, quality voice, and associated features to the end user. It is a sophisticated proprietary computer-based switch that operates as a small, in-house phone company with many features and external access and control.


The PBX architecture separates switching and administrative functions, is designed for 99.999 percent reliability, and often integrates with a proprietary voicemail system. Documented PBX threats and baseline security methods are well known and can be referenced in the document PBX Vulnerability Analysis by NIST, special publication 800-24. Threats to the PBX include toll fraud, eavesdropping on conversations, unauthorized access to routing and address data, alteration of billing information and system tables to gain additional services, unauthorized access, denial-of-service attacks, and passive traffic analysis attacks. Voice messages are also prone to threats of eavesdropping and accidental or purposeful forwarding. Baseline security policies and control methods, which to a certain extent depend on the proprietary equipment, need to be implemented. Control methods include manual assurance of database integrity, physical security, operations security, management-initiated controls, PBX system control, and PBX system terminal access control such as password control. Many telephone and system configuration practices need to be developed and adhered to. These include blocking well-known non-call areas or numbers, restart procedures, software update protection using strong error detection based on cryptography, proper routing through the PBX, disabling open ports, and configuration of each of the many PBX features.

User quality-of-service (QoS) expectations of basic voice service are quite high in the area of availability. When people pick up the telephone, they expect a dial tone. Entire businesses are dependent on basic phone service, making availability of service critical. Human voice interaction requires delays of no more than 250 milliseconds.

Carriers experienced fraud prior to the proliferation of SS7 out-of-band signaling, which is utilized for the communication of call establishment and billing information between switches. Thieves attached a box that generated the appropriate signaling tones, permitting a perpetrator to take control of signaling between switches and defeat billing. SS7 enhanced security and prevented this unauthorized use. Within reasonable limitations, PSTN carriers have maintained closed circuit-based networks that are not open to public protocols except under legal agreements with specified companies. In the past, central offices depended on physical security, password-controlled system access, a relatively small set of trained individuals working with controlled network information, network redundancy, and deliberate change control. U.S. telephone carriers are subject to the Communications Assistance for Law Enforcement Act (CALEA) and need to provide access points and certain information when a warrant has been issued for authorized wiretapping.


The network architecture and central office controls described above minimized security exposures, ensuring that high availability and QoS expectations were essentially met. While it is not affordable to secure the entire PSTN, certain government and commercial users require such security. Encryption of the words spoken into a telephone and decryption of them as they come out of the other telephone is the only method to implement a secure path between two telephones at arbitrary locations. Such a secure path has never broadly manifested itself cost-effectively for commercial users. Historically, PSTN voice scramblers have existed since the 1930s, but the equipment was large, complicated, and costly. By the 1960s, the KY-3 came to market as one of the first practical voice encryption devices. The secure telephone unit, first generation (STU-I), was introduced in 1970, followed in 1975 by the STU-II, used by approximately 10,000 users. In 1987, the U.S. National Security Agency (NSA) approved STU-III and made secure telephone service available to defense contractors, with multiple vendors such as AT&T, GE, and Motorola offering user-friendly deskset telephones for less than U.S.$2000. During the 1990s, systems came to market such as an ISDN version of the STU called STE, offered by L3 Communications, the AT&T Clipper phone, the Australian Speakeasy, and the British Brent telephone. Also available today are commercial security telephones, or devices inserted between the handset and telephone, that provide encryption at costs ranging from U.S.$100 to $2000, depending on overall capability.

WIRELESS VOICE COMMUNICATION NETWORKS

Wireless technology in radio form is more than 100 years old. Radio transmission is the induction of an electrical current at a remote location, intended to communicate information, whereby the current is produced via the propagation of an electromagnetic wave through space. The wireless spectrum is a space that the world shares, and there are several methods for efficient spectrum reuse. First, the space is partitioned into smaller coverage areas, or cells, for the purpose of reuse. Second, a multiple access technique is used to allow the sharing of the spectrum among many users. After the space has been specified and multiple users can share a channel, spread spectrum, duplexing, and compression techniques are applied to utilize the bandwidth with even better efficiency.

In digital cellular systems, time division multiple access (TDMA) and code division multiple access (CDMA) techniques exist. TDMA first splits the frequency spectrum into a number of channels and then applies time division multiplexing to operate multiple users interleaved in time. TDMA standards include Global System for Mobile Communications (GSM), Universal Wireless Communications (UWC), and Japanese Digital Cellular (JDC). CDMA employs universal frequency reuse, whereby everybody utilizes the same frequency at the same time and each conversation is uniquely encoded, providing greater capacity over other techniques.
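The idea that every conversation can share the same frequency yet remain separable because each is uniquely encoded can be illustrated with a toy direct-sequence example. The orthogonal spreading codes and two-user scenario below are invented purely for illustration; real CDMA systems add power control, synchronization, and far longer codes.

```python
# Toy illustration of code division: two users share the channel, each with an orthogonal code.
CODE_A = [1,  1, 1,  1]      # user A's spreading code
CODE_B = [1, -1, 1, -1]      # user B's spreading code (orthogonal to A: dot product is 0)

def spread(bits, code):
    """Map each data bit (0/1) to -1/+1 and multiply it across the spreading code."""
    return [(1 if b else -1) * chip for b in bits for chip in code]

def despread(signal, code):
    """Correlate the combined signal against one code to recover that user's bits."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        correlation = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if correlation > 0 else 0)
    return bits

bits_a = [1, 0, 1]
bits_b = [0, 0, 1]

# Both users transmit at once; the channel simply adds their signals together.
combined = [a + b for a, b in zip(spread(bits_a, CODE_A), spread(bits_b, CODE_B))]

print(despread(combined, CODE_A))   # [1, 0, 1] -> user A recovered
print(despread(combined, CODE_B))   # [0, 0, 1] -> user B recovered
```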


Exhibit 13-2. Digital cellular architecture. (Figure elements: mobile station, base transceiver station (BTS), base station controller, mobile switching center, gateway mobile switching center, PSTN, and the visitor location register, home location register, authentication center, and equipment identity register.)

same frequency at the same time and each conversation is uniquely encoded, providing greater capacity over other techniques. First-generation CDMA standards and second-generation wideband CDMA (WCDMA) both use a unique code for each conversation and a spread spectrum method. WCDMA uses bigger channels, providing for greater call capacity and longer encoding strings than CDMA, increasing security and performance. Multiple generations of wireless WANs have evolved in a relatively short period of time. The first-generation network used analog transmission and was launched in Japan in 1979. By 1992, second-generation (2G) digital networks were operational at speeds primarily up to 19.2 kbps. Cellular networks are categorized as analog and digital cellular, whereas PCS, a shorter-range, low-power technology, was digital from its inception. Today, cellular networks have evolved to the 2.5G intermediate-generation network, which provides for enhanced data services on present 2G digital platforms. The third-generation (3G) network includes digital transmission. It also provides for an always-on per-user and terminal connection that supports multimedia broadband applications and data speeds of 144 kbps to 384 kbps, potentially up to 2 Mbps in certain cases. The 3G standards are being developed in Europe and Asia, but worldwide deployment has been slow due to large licensing and build costs. There are many competing cellular standards that are impeding the overall proliferation and interoperability of cellular networks. Digital cellular architecture, illustrated in Exhibit 13-2, resembles the quickly disappearing analog cellular network yet is expanded to provide for greater capacity, improved security, and roaming capability. A base 196


Secure Voice Communications (VoI) transceiver station (BTS), which services each cell, is the tower that transmits signals to and from the mobile unit. Given the large number of cells required to address today’s capacity needs, a base station controller (BSC) is used to control a set of base station transceivers. The base station controllers provide information to the mobile switching center (MSC), which accesses databases that enable roaming, billing, and interconnection. The mobile switching center interfaces with a gateway mobile switching center that interconnects with the PSTN. The databases that make roaming and security possible consist of a home location register, visitor location register, authentication center, and equipment identity register. The home location register maintains subscriber information, with more extensive management required for those registered to that mobile switching center area. The visitor location register logs and periodically forwards information about calls made by roaming subscribers for billing and other purposes. The authentication center is associated with the home location register; it protects the subscriber from unauthorized access, delivering security features including encryption, customer identification, etc. The equipment identity register manages a database of equipment, also keeping track of stolen or blacklisted equipment. Prior to digital cellular security techniques, there was a high amount of toll fraud. Thieves stood on busy street corners, intercepted electronic identification numbers and phone numbers, and then cloned chips. The digitization of identification information allowed for its encryption and enhanced security. Policies and control methods are required to further protect against cellular phone theft. Methods include the use of an encrypted PIN code to telephone access and blocking areas or numbers. Privacy across the air space is improved using digital cellular compression and encoding techniques; CDMA encoding offers the greatest protection of the techniques discussed. Despite security improvements in the commercial cellular networks, end-to-end security remains a challenge. Pioneering efforts for many of the digital communication, measurement, and data techniques available today were performed in a successful attempt to secure voice communication using FSK–FDM radio transmission during World War II. The SIGSALY system was first deployed in 1943 by Bell Telephone Laboratories, who began the investigation of encoding techniques in 1936 to change voice signals into digital signals and then reconstruct the signals into intelligible voice. The effort was spurred on by U.K. and U.S. allies who needed a solution to replace the vulnerable transatlantic high-frequency radio analog voice communications system called A-3. SIGSALY was a twelve-channel system; ten channels each measured the power of the voice signal in a portion of the whole voice frequency spectrum between 250 and 3000 Hz, and two channels provided information regarding the pitch of the speech and presence of 197


TELECOMMUNICATIONS AND NETWORK SECURITY unvoiced (hiss) energy. Encryption keys were generated from thermal noise information (output of mercury-vapor rectifier vacuum tubes) sampled every 20 milliseconds and quantized into six levels of equal probability. The level information was converted into channels of a frequency-shift-keyed audio tone signal, which represented the encryption key, and was then recorded on three hard vinyl phonograph records. The physical transportation and distribution of the records provided key distribution. In the 1970s, U.S. Government wireless analog solutions for high-grade end-to-end crypto and authentication became available, though still at a high cost compared to commercial offerings. Secure telephone solutions included STU-III compatible, Motorola, and CipherTac2K. STU-III experienced compatibility problems with 2G and 3G networks. This led to the future narrow-band digital terminal (FNBDT) — a digital secure voice protocol operating at the transport layer and above for most data/voice network configurations across multiple media — and mixed excitation linear prediction vocoder (MELP) — an interoperable 2400-bps vocoder specification. Most U.S. Government personnel utilize commercial off-the-shelf solutions for sensitive but unclassified methods that rely on the commercial wireless cellular infrastructure. NETWORK CONVERGENCE Architecture Large cost-saving potentials and the promise of future capabilities and services drive the move to voice over a next-generation network. New SS7 switching gateways are required to support legacy services and signaling features and to handle a variety of traffic over a data-centric infrastructure. In addition to performing popular IP services, the next-generation gateway switch needs to support interoperability between PSTN circuits and packet-switching networks such as IP backbones, ATM networks, Frame Relay networks, and emerging Multi-Protocol Label Switching (MPLS) networks. A number of overlapping multimedia standards exist, including H.323, Session Initiation Protocol (SIP), and Media Gateway Control Protocol (MGCP). In addition to the telephony-signaling protocols encompassed within these standards, network elements that facilitate VoIP include VoIP gateways, the Internet telephony directory, media gateways, and softswitches. An evolution and blending of protocols, and gateway and switch functions continues in response to vendors’ competitive searches for market dominance. Take an example of a standard voice call initiated by a user located in a building connected to the central office. The central office links to an SS7 media gateway switch that can utilize the intelligence within the SS7 network to add information required to place the requested call. The call then continues on a packet basis through switches or routers until it reaches a 198


Exhibit 13-3. VoIP network architecture. (Figure elements: IP phone, POTS phone, media gateways, SS7 softswitch, directories, back-office systems, trunks, cable network, and the public switched telephone network.)

destination media gateway switch, where the voice is depacketized, converted back from digital form, and sent to the called phone.

Voice-over-IP (VoIP) changes voice into packets for transmission over a TCP/IP network. VoIP gateways connect the PSTN and the packet-switched Internet and manage the addressing across networks so that PCs and phones can talk to each other. Exhibit 13-3 illustrates major VoIP network components. The VoIP gateway performs packetization and compression of the voice, voice enhancement processing, DTMF signaling, voice packet routing, user authentication, and call detail recording for billing purposes. Many solutions exist, such as enterprise VoIP gateway routers, IP PBXs, service-provider VoIP gateways, VoIP access concentrators, and SS7 gateways. The overlapping functionality of the different types of gateways will progress further as mergers and acquisitions continue to occur. When the user dials a number from a VoIP telephone, the VoIP gateway communicates the number to the server; the call-agent software (softswitch) determines the IP address for the destination call number and returns that address to the VoIP gateway. The gateway converts the voice signal to IP format, adds the address of the destination node, and sends the signal. The softswitch can be utilized again if enhanced services or additional functions are required.

Media gateways interconnect with the SS7 network, enabling interoperability between the PSTN and packet-switched domains. They handle IP services and support various telephony-signaling protocols and Class 4 and Class 5 services. Media servers include categories of VoIP trunking gateways, VoIP access gateways, and network access service devices.

Vocoders compress and transmit audio over the network; they are another evolving area of standards for Voice-over-the-Internet (VoI). Vocoders used for VoI such as G.711 (48, 56, and 64 kbps high-bit rate) and


G.723 (5.3 and 6.3 kbps low-bit rate) are based on existing standards created for digital telephony applications, limiting the telephony signal to a band of 200–3400 Hz with 8 kHz sampling. This toll-level audio quality is geared to the minimum a human ear needs to recognize speech and is not nearly that of face-to-face communication. With VoIP in a wideband IP end-to-end environment, better vocoders are possible that can achieve more transparent communication and better speaker recognition. New ITU vocoders, such as G.722.1 operating at 24 kbps and 32 kbps rates with a 16 kHz sampling rate, are now used in some IP phone applications. The third-generation partnership project (3GPP)/ETSI (for GSM and WCDMA) converged on the adaptive multi-rate wideband (AMR-WB) codec, with a 50–7000 Hz bandwidth, to form the newly approved ITU G.722.2 standard, which provides better voice quality at reduced bit rates and allows a seamless interface between VoIP systems and wireless base stations. This eliminates the normal degradation of voice quality between vocoders of different systems.

Numbering

The Internet telephony directory, an IETF RFC known as ENUM services, is an important piece in the evolving VoI solution. ENUM is a standard for mapping telephone numbers to IP addresses, a scheme wherein DNS maps PSTN phone numbers to appropriate URLs based on the E.164 standard. To enable a faster time to market, VoIP continues to evolve as new features and service models that support the PSTN and associated legacy standards are introduced. For example, in response to DTMF tone issues, the IETF RFC RTP Payload for DTMF Digits, Telephony Tones and Telephony Signals evolved, which specifies how to carry and format tones and events using RTP. In addition to incorporating traditional telephone features and new integrated media features, VoIP networks need to provide emergency services and comply with law enforcement surveillance requirements. The requirements, as well as various aspects of the technical standards and solutions, are evolving.

The move toward IP PBXs continues. Companies that cost-effectively integrate voice and data between locations can utilize IP PBXs on their IP networks, gaining additional advantages from simple moves and changes. Challenges exist regarding the reliability of nonproprietary servers (typically built for 99.99 percent reliability) and power distribution compared to traditional telephony-grade PBXs. Complete solutions related to voice quality, QoS, lack of features, and cabling distance limitations are still evolving. A cost-effective, phased approach to an IP converged system (for example, an IP card in a PBX) enables the enterprise to make IP migration choices, support new applications such as messaging, and maintain the traditional PBX investment where appropriate. The move toward computer telephony greatly


increases similar types of PBX security threats discussed previously and is explored further in the "VoI Security" section of this chapter.

Quality-of-Service (QoS)

Network performance requirements are dictated by both the ITU SS7/C7 standards and user expectations. The standard requires that the end-to-end call-setup delay not exceed 20 to 30 seconds after the ISDN User Part (ISUP) initial address message (IAM) is sent; users expect much faster response times. Human beings do not like delays when they communicate; acceptable end-to-end delays usually need to meet the recommended 150 milliseconds. QoS guarantees, at very granular levels of service, are a requirement of next-generation voice networks. QoS is the ability to deliver various levels of service to different kinds of traffic or traffic flows, providing the foundation for tiered pricing based on class-of-service (CoS) and QoS. QoS methods fall into three major categories: first is an architected approach such as ATM; second is a per-flow or session method such as the reservation protocol of the IETF IntServ definitions and the MPLS specifications; and third is a packet labeling approach utilizing a QoS priority mark as specified in 802.1p and IETF DiffServ.

ATM is a cell-based (small cell), wide area network (WAN) transport that came from the carrier environment for streaming applications. It is connection oriented, providing a way to set up a predetermined path between source and destination, and it allows for control of network resources in real time. ATM network resource allocation for CoS and QoS provisioning is well defined; there are four service classes based on traffic characteristics. Further options include the definition of QoS and traffic parameters at the cell level that establish service classes and levels. ATM transmission-path virtual circuits include virtual paths and their virtual channels. The ATM virtual path groups the virtual channels that share the same QoS definitions, easing network management and administration functions.

IP is a flexible, efficient, connectionless, packet-based network transport that extends all the way to the desktop. Packet-switching methods have certain insufficiencies, including delays due to store-and-forward packet-switching mechanisms, jitter, and packet loss. Jitter is the variation in delay experienced by packets traveling between two switches; it contributes both to end-to-end delay and to delay differences between packets that adversely affect certain applications. As congestion occurs at packet switches or routers, packets are lost, hampering real-time applications. Losses of 30 or 40 percent in the voice stream could result in speech with missing syllables that sounds like gibberish.
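Jitter can be quantified from packet timestamps. The Python sketch below is an added illustration (it is not part of the original chapter); it applies the smoothed interarrival-jitter estimator used in RTP/RTCP receiver reporting to a hypothetical voice stream paced at 20-millisecond intervals.

    # Illustrative only: smoothed interarrival jitter in the style of RTP/RTCP
    # receiver reports (gain of 1/16). Timestamps are hypothetical, in ms.
    sent     = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]    # packets paced every 20 ms
    received = [55.0, 76.0, 94.0, 118.0, 135.0, 157.0] # arrival times at receiver

    def interarrival_jitter(sent, received):
        jitter = 0.0
        for i in range(1, len(sent)):
            # Difference in transit time between consecutive packets
            d = (received[i] - received[i - 1]) - (sent[i] - sent[i - 1])
            jitter += (abs(d) - jitter) / 16.0
        return jitter

    print("estimated jitter: %.2f ms" % interarrival_jitter(sent, received))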


TELECOMMUNICATIONS AND NETWORK SECURITY IntServ and DiffServ are two IP schemes for QoS. IntServ broadens a best-efforts service model, enabling the management of end-to-end packet delays. IntServ reserves resources on a per-flow basis and requires Resource Reservation Protocol (RSVP) as a setup protocol that guarantees bandwidth and a limit to packet delay using router-to-router signaling schemes. Participating protocols include the Real-time Transport Protocol (RTP), which is the transport protocol in which receivers sequence information through packet headers. Real-Time Control Protocol (RTCP) gives feedback of status from senders to receivers. RTP and RTCP are ITU standards under H.225. Real-Time Streaming Protocol (RTSP) runs on top of IP Multicast, UDP, RTP, and RTCP. RSVP supports both IPv4 and IPv6, and is important to scalability and security; it provides a way to ensure that policy-based decisions are followed. DiffServ is a follow-on QoS approach to IntServ. DiffServ is based on a CoS model; it uses a specified set of building blocks from which many services can be built. DiffServ implements a prioritization scheme that differentiates traffic using certain bits in each packet (IPv4 type-of-service [ToS] byte or IPv6 traffic class byte) that designate how a packet is to be forwarded at each network node. The move to IPv6 is advantageous because the ToS field has limited functionality and there are various interpretations. DiffServ uses traffic classification to prioritize the allocation of resources. The IETF DiffServ draft specifies a management information base, which would allow for DiffServ products to be managed by Simple Network Management Protocol (SNMP). Multi-Protocol Label Switching (MPLS) is an evolving protocol with standards originally out of the IETF that designates static IP paths. It provides for the traffic engineering capability essential to QoS control and network optimization, and it forms a basis for VPNs. Unlike IP, MPLS can direct traffic through different paths to overcome IP congested route conditions that adversely affect network availability. To steer IPv4 or IPv6 packets over a particular route through the Internet, MPLS adds a label to the packet. To enable routers to direct classes of traffic, MPLS also labels the type of traffic, path, and destination information. A packet on an MPLS network is transmitted through a web of MPLS-enabled routers or ATM switches called label-switching routers (LSRs). At each hop in the MPLS network, the LSR uses the local label to index a forwarding table, which designates a new label to each packet, and sends the packet to an output port. Routes can be defined manually or via RSVP-TE (RSVP with traffic engineering extensions) or MPLS Label Distribution Protocol (LDP). MPLS supports the desired qualities of circuit-switching technology such as bandwidth reservation and delay variation as well as a best-efforts hop-by-hop routing. Using MPLS, service providers can build VPNs with the benefits of both ATM-like QoS and the flexibility of IP. The potential capabilities of the encapsulating label-based protocol continues to grow; however, there are 202


a number of issues between the IETF and the MPLS Forum that need full resolution, such as the transfer of ToS markings from IP headers to MPLS labels and standard LSR interpretation when using MPLS with DiffServ.

The management of voice availability and quality issues is performed through policy-based networking. Information about individual users and groups is associated with network services or classes of service. Network protocols, methods, and directories used to enable the granular, time-sensitive requirements of policy-based QoS are Common Open Policy Services (COPS), Directory Enabled Networking (DEN), and the Lightweight Directory Access Protocol (LDAP).

VoI Security

Threats to voice communication systems increase given the move to the inherently open Internet. The voice security policies, procedures, and methods discussed previously reflect the legacy closed voice network architecture; they are not adequate for IP telephony networks, which are essentially wide open and require little or no authentication to gain access. New-generation networks require protection from attacks across the legacy voice network, wireless network, WAN, and LAN. Should invalid signaling occur on the legacy network, trunk groups could be taken out of service, calls placed to invalid destinations, resources locked up without proper release, and switches directed to incorrectly reduce the flow of calls. As new IP telephony security standards and vendor functions continue to evolve, service providers and enterprises can make use of voice-oriented firewalls as well as many of the same data security techniques to increase voice security.

Inherent characteristics of Voice-over-IP protocols and multimedia security schemes are in conflict with many current methods used by firewalls or network address translation (NAT). Although no official standards exist, multiple security techniques are available to operate within firewall and NAT constraints. These methods typically use some form of dynamic mediation of ports and addresses, and each scheme has certain advantages given the configuration and overall requirements of the network. Security standards, issues, and solutions continue to evolve as security extensions to signaling protocols, related standards, and products likewise evolve and proliferate. The SIP, H.323, MGCP, and Megaco/H.248 signaling protocols use TCP as well as UDP for call setup and transport. Transport addresses are embedded in the protocol messages, which creates a conflict with firewall and NAT processing. Secure firewall rules that specify static ports for desirable traffic block H.323 because the signaling protocol uses dynamically allocated port numbers. Related issues trouble NAT devices. An SIP user on an internal network behind a NAT sends an INVITE message to another user outside the network. The outside user extracts the FROM address from the INVITE message and sends a 200 (OK)


TELECOMMUNICATIONS AND NETWORK SECURITY response back. Because the INVITE message comes from behind the NAT, the FROM address is not correct. The call never connects because the 200 response message does not succeed. H.323 and SIP security solution examples available today are described. H.323, an established ITU standard designed to handle real-time voice and videoconferencing, has been used successfully for VoIP. The standard is based on the IETF Real-Time Protocol (RTP) and Real-Time Control Protocol (RTCP) in addition to other protocols for call signaling and data and audiovisual communications. This standard is applied to peer-to-peer applications where the intelligence is distributed throughout the network. The network can be partitioned into zones, and each zone is under the control of an intelligent gatekeeper. One voice firewall solution in an H.323 environment makes use of the mediating element that intervenes in the logical process of call setup and tear-down, handles billing capabilities, and provides high-level policy control. In this solution, the mediating element is the H323 gatekeeper; it is call-state aware and trusted to make networkwide policy decisions. The data ports of the voice firewall device connect to the output of the H.323 gateway device. The gatekeeper incorporates firewall management capabilities via API calls; it controls connections to the voice firewall device that opens dynamic “pinholes,” which permit the relevant traffic through the voice firewall. Voice firewalls are configured with required pinholes and policy for the domain, and no other traffic can flow through the firewall. For each call setup, additional pinholes are configured dynamically to permit the precise traffic required to carry that call; and no other traffic is allowed. The voice firewall simplicity using stateless packet filtering can perform faster at lower costs compared to a traditional application firewall, with claims of 100 calls per second to drill and seal pinholes and a chassis that supports hundreds of simultaneous calls with less than one millisecond of latency SIP, an increasingly popular approach, operates at the application layer of the OSI model and is based on IETF RFC 2543. SIP is a peer-to-peer signaling protocol controlling the creation, modification, and termination of sessions with one or more participants. SIP establishes a temporary call to the server, which performs required, enhanced service logic. The SIP stack consists of SIP using Session Description Protocol (SDP), RTCP, and RTP. Recent announcements — a Windows XP® SIP telephony client and designation of SIP as the signaling and call control standard for IP 3G mobile networks — have accelerated service providers’ deployments of SIP infrastructures. Comprehensive firewall and NAT security solutions for SIP service providers include a combination of technologies, including an edge proxy, a firewall control proxy, and a media-enabled firewall. An edge proxy acts as a guard, serving the incoming and outgoing SIP signaling traffic. It performs 204


Secure Voice Communications (VoI) authentication and authorization of services through transport layer security (TLS) and hides the downstream proxies from the outside network. The edge proxy forwards calls from trusted peers to the next internal hop. The firewall control proxy works in conjunction with the edge proxy and firewall. For each authorized media stream, it dynamically opens and closes pinhole pairs in the firewall. The firewall control proxy also operates closely with the firewall to perform NAT and remotely manages firewall policy and message routing. Dynamic control and failover functions of these firewall control proxies provide the additional required reliability in the service provider network. The media-enabled firewall is a transparent, non-addressable VoIP firewall that does not allow access to the internal network except from the edge proxy. Carrier-class high-performance firewalls can limit entering traffic to the edge proxy and require a secure TLS connection for only media traffic for authorized calls. Enterprise IP Telephony Security Threats associated with conversation eavesdropping, call recording and modification, and voicemail forwarding or broadcasting are greater in a VoIP network, where voice files are stored on servers and control and media flows reside on the open network. Threats related to fraud increase given the availability of control information on the network such as billing and call routing. Given the minimal authentication functionality of voice systems, threats related to rogue devices or users increase and can also make it more difficult to track the hacker of a compromised system if an attack is initiated in a phone system. Protection needs to be provided against denial-of-service (DoS) conditions, malicious software to perform a remote boot, TCP SYN flooding, ping of death, UDP fragment flooding, and ICMP flooding attacks. Control and data flows are prone to eavesdropping and interception given the use of packet sniffers and tools to capture and reassemble generally unencrypted voice streams. Viruses and Trojan horse attacks are possible against PCbased phones that connect to the voice network. Other attacks include a caller identity attack on the IP phone system to gain access as a legitimate user or administrator. Attacks to user registration on the gatekeeper could result in redirected calls. IP spoofing attacks using trusted IP addresses could fool the network that a hacker conversation is that of a trusted computer such as the IP-PBX, resulting in a UDP flood of the voice network. Although attack mitigation is a primary consideration in VoIP designs, issues of QoS, reliability, performance, scalability, authentication of users and devices, availability, and management are crucial to security. VoIP security requirements are different than data security requirements for several reasons. VoIP applications are under no-downtime, high-availability requirements, operate in a badly behaved manner using dynamically 205


TELECOMMUNICATIONS AND NETWORK SECURITY negotiated ports, and are subject to extremely sensitive performance needs. VoIP security solutions are comprehensive; they include signaling protocols, operating systems, administration interface; and they need to fit into existing security environments consisting of firewalls, VPNs, and access servers. Security policies must be in place because they form a basis for an organization’s acceptance of benefits and risks associated with VoIP. Certain signaling protocol security recommendations exist and are evolving. For example, the ITU-T H.235 Recommendation under the umbrella of H.323 provides for authentication, privacy, and integrity within the current H-Series protocol framework. Vendor products, however, do not necessarily fully implement such protection. In the absence of widely adopted standards, today’s efforts rely on securing the surrounding network and its components. Enterprise VoIP security design makes use of segmentation and the switched infrastructure for QoS, scalability, manageability, and security. Today, layer 3 segmentation of IP voice from the traditional IP data network aids in the mitigation of attacks. A combination of virtual LANs (VLANs), access control, and stateful firewall provides for voice and data segmentation at the network access layer. Data devices on a separate segment from the voice segment cannot instigate call monitoring, and the use of a switched infrastructure baffles devices on the same segment sufficiently to prevent call monitoring and maintain confidentiality. Not all IP phones with data ports, however, support other than basic layer 2 connectivity that acts as a hub, combining the data and voice segments. Enhanced layer 2 support is required in the IP phone for VLAN technology (like 802.1q), which is one aspect needed to perform network segmentation today. The use of PC-based IP phones provides an avenue for attacks such as a UDP flood DoS attack on the voice segment making a stateful firewall that brokers the data–voice interaction required. PC-based IP phones are more susceptible to attacks than closed custom operating system IP phones because they are open and sit within the data network that is prone to network attacks such as worms or viruses. Controlling access between the data and voice segments uses a strategically located stateful firewall. The voice firewall provides host-based DoS protection against connection starvation and fragmentation attacks, dynamic per-port-granular access through the firewall, spoof mitigation, and general filtering. Typical authorized connections such as voicemail connections in the data segment, call establishment, voice browsing via the voice segment proxy server, IP phone configuration setting, and voice proxy server data resource access generally use well-known TCP ports or a combination of well-known TCP ports and UDP. The VoIP firewall handles known TCP traditionally and opens port-level-granular access for UDP between segments. If higher-risk PC-based IP phones are utilized, it is possible to implement a private address space for IP telephony devices as provided by RFC 1918. Separate 206


address spaces reduce potential traffic communication outside the network and keep hackers from being able to scan a properly configured voice segment for vulnerabilities.

The main mechanism for device authentication of IP phones is the MAC address. Assuming automatic configuration has been disabled, an IP phone that tries to download a network configuration from an IP-PBX needs to exhibit a MAC address known to the IP-PBX to proceed with the configuration process. This precludes the insertion of a rogue phone into the network and subsequent call placement unless a MAC address is spoofed. User log-on is supported on some IP phones for device setup as well as identification of the user to the IP-PBX, although this could be inconvenient in certain environments.

To prevent rogue device attacks, the traditional best practice of locking down switched ports, segments, and services still holds. In an IP telephony environment, several additional methods could be deployed to further guard against such attacks. Assignment of static IP addresses to known MAC addresses, rather than Dynamic Host Configuration Protocol (DHCP), could be used so that, if an unknown device is plugged into the network, it does not receive an address. Also, assuming segmentation, using separate voice and data DHCP servers means that a DoS attack on the data segment's DHCP server has little chance of affecting the voice segment. The commonly available automatic phone registration feature, which bootstraps an unknown phone with a temporary configuration, should be enabled only temporarily and only when needed. A MAC address monitoring tool on the voice network that tracks changes in MAC-to-IP address pairings could be helpful, given that voice MAC addresses are fairly static. Assuming network segmentation, filtering could be used to limit devices from unknown segments as well as to keep unknown devices within the segment from connecting to the IP-PBX.

Voice servers are prone to attacks similar to those against data servers and therefore could require tools such as an intrusion detection system (IDS) to alarm, log, and perhaps react to attack signatures found in the voice network. There are no voice control protocol attack signatures today, but an IDS could be used for the UDP DoS attacks and HTTP exploits that apply to a voice network. Protection of servers also includes best practices such as disabling unnecessary services, applying OS patches, turning off unused voice features, and limiting the number of applications running on the server. Traditional best practices should be followed for the variety of voice server management techniques, such as HTTP, SSL, and SNMP.

Wireless Convergence

Wireless carriers look to next-generation networks to cost-effectively accommodate increased traffic loads and to form a basis for a pure packet


network as they gradually move toward 3G networks. The MSCs in a circuit-switched wireless network, as described earlier in this chapter, interconnect in a meshed architecture that lacks easy scaling or cost-effective expansion; a common packet infrastructure to interconnect MSCs could overcome these limitations and aid in the move to 3G networks. In this architecture, the common packet framework uses packet tandems consisting of centralized MGCs, or softswitches, that control distributed MGs deployed and located with MSCs. TDM trunks from each MSC are terminated on an MG that performs IP or ATM conversion under the management of the softswitch. Because point-to-point connections no longer exist between MSCs, a less complicated network emerges that requires less bandwidth. Now MSCs can be added to the network with one softswitch connection instead of multiple MSC connections. Using media gateways negates the need to upgrade software at each MSC to deploy next-generation services, and it offloads precious switching center resources. Centrally located softswitches with gateway intelligence can perform lookups and route calls directly to the serving MSC, versus the extensive routing required among MSCs or gateway MSCs to perform lookups at the home location register. With the progression of this and other IP-centric models, the crucial registration, authentication, and equipment network databases need to be protected. Evolving new-generation services require real-time metering and the integration of session management with data transfer. Service providers look to support secure virtual private networks (VPNs) between subscribers and providers of content, services, and applications. While the emphasis of 2.5G and 3G mobile networks is on the delivery of data and new multimedia applications, current voice services must be sustained and new integrated voice capabilities exploited. Regardless of specific implementations, it is clear that voice networks and systems will continue to change along with new-generation networks.

References

Telecommunications Essentials, Addison-Wesley, 2002, Lillian Goleniewski.
Voice over IP Fundamentals, Cisco Press, 2002, Jonathan Davidson and James Peters.
SS7 Tutorial, Network History, 2001, SS8 Networks.
Securing future IP-based phone networks, ISSA Password, Sept./Oct. 2001, David K. Dumas, CISSP.
SAFE: IP Telephony Security in Depth, Cisco Press, 2002, Jason Halpern.
Security Analysis of IP-Telephony Scenarios, Darmstadt University of Technology, KOM — Industrial Process and System Communications, 2001, Utz Roedig.
Deploying a Dynamic Voice over IP Firewall with IP Telephony Applications, Aravox Technologies, 2001, Andrew Molitor.
Building a strong foundation for SIP-based networks, Internet Telephony, February 2002, Erik Giesa and Matt Lazaro.
Traversal of IP Voice and Video Data through Firewalls and NATs, RADVision, 2001.


PBX Vulnerability Analysis: Finding Holes in Your PBX Before Someone Else Does, U.S. Department of Commerce, National Institute of Standards and Technology, Special Publication 800-24.
The Start of the Digital Revolution: SIGSALY Secure Digital Voice Communications in World War II, The National Security Agency (NSA), J.V. Boone and R.R. Peterson.
Wireless carriers address network evolution with packet technology, Internet Telephony, November 2001, Ravi Ravishankar.

GLOSSARY OF TERMS AIN (Advanced Intelligent Network) — The second generation of intelligent networks, which was pioneered by Bellcore and later spun off as Telcordia. A common service-independent network architecture geared to quickly produce customizable telecommunication services. ATM (Asynchronous Transfer Mode) — A cell-based international packet-switching standard where each packet has a uniform cell size of 53 bytes. It is a high-bandwidth, fast packet-switching and multiplexing method that enables end-to-end communication of multimedia traffic. ATM is an architected quality-of-service solution that facilitates multi-service and multi-rate connections using a high-capacity, lowlatency switching method. CCITT (Comité Consultatif International de Télephonie et de Télégraphie) — Advisory committee to the ITU, now known as the ITU-T that influences engineers, manufacturers, and administrators. CoS (Class-of-Service) — Categories of subscribers or traffic corresponding to priority levels that form the basis for network resource allocation. CPE (Customer Premise Equipment) — Equipment owned and managed by the customer and located on the customer premise. DTMF (Dual-Tone Multi-Frequency Signaling) — A signaling technique for pushbutton telephone sets in which a matrix combination of two frequencies, each from a set of four, is used to send numerical address information. The two sets of four frequencies are (1) 697, 770, 852, and 941 Hz; and (2) 1209, 1336, 1477, and 1633 Hz. IP (Internet Protocol) — A protocol that specifies data format and performs routing functions and path selection through a TCP/IP network. These functions provide techniques for handling unreliable data and specifying the way network nodes process data, how to perform error processing, and when to throw out unreliable data. IN (Intelligent Network) — An advanced services architecture for telecommunications networks. ITU-T (International Telecommunication Union) — A telecommunications advisory committee to the ITU that influences engineers, manufacturers, and administrators MPLS (Multi-Protocol Label Switching) — An IETF effort designed to simplify and improve IP packet exchange and provide network operators with a flexible way to engineer traffic during link failures and congestion. MPLS integrates information 209


TELECOMMUNICATIONS AND NETWORK SECURITY about network links (layer 2) such as bandwidth, latency, and utilization with the IP (layer 3) into one system. NIST (National Institute of Standards and Technology) — A U.S. national group that was referred to as the National Bureau of Standards prior to 1988. PBX (Private Branch Exchange) — A telephone switch residing at the customer location that sets up and manages voice-grade circuits between telephone users and the switched telephone network. Customer premise switching is usually performed by the PBX as well as a number of additional enhanced features, such as least-cost routing and call-detail recording. PSTN (Public Switched Telephone Network) — The entire legacy public telephone network, which includes telephones, local and interexchange trunks, communication equipment, and exchanges. QoS (Quality-of-Service) — A network service methodology where network applications specify their requirements to the network prior to transmission, either implicitly by the application or explicitly by the network manager. RSVP (Reservation Resource Protocol) — An Internet protocol that enables QoS; an application can reserve resources along a path from source to destination. RSVP-enabled routers then schedule and prioritize packets in support of specified levels of QoS. RTP (Real-Time Transport Protocol) — A protocol that transmits real-time data on the Internet. Sending and receiving applications use RTP mechanisms to support streaming data such as audio and video. RTSP (Real-Time Streaming Protocol) — A protocol that runs on top of IP multicasting, UDP, RTP, and RTCP. SCP (Service Control Point) — A centralized node that holds service logic for call management. SSP (Service-Switching Point) — An origination or termination call switch. STP (Service Transfer Point) — A switch that translates SS7 messages and routes them to the appropriate network nodes and databases. SS7 (Signaling System 7) — An ITU-defined common signaling protocol that offloads PSTN data traffic congestion onto a wireless or wireline digital broadband network. SS7 signaling can occur between any SS7 node, and not only between switches that are immediately connected to one another.

ABOUT THE AUTHOR Valene Skerpac, CISSP, is past chairman of the IEEE Communications Society. Over the past 20 years, she has held positions at IBM and entrepreneurial security companies. Valene is currently president of iBiometrics, Inc.


Chapter 14

Packet Sniffers: Use and Misuse
Steve A. Rodgers, CISSP

A packet sniffer is a tool used to monitor and capture data traveling over a network. The packet sniffer is similar to a telephone wiretap; but instead of listening to phone conversations, it listens to network packets and conversations between hosts on the network. The word sniffer is generically used to describe packet capture tools, similar to the way crescent wrench is used to describe an adjustable wrench. The original Sniffer was a product created by Network General (now a division of Network Associates called Sniffer Technologies). Packet sniffers were originally designed to assist network administrators in troubleshooting their networks. Packet sniffers have many other legitimate uses, but they also have an equal number of sinister uses. This chapter discusses some legitimate uses for sniffers, as well as several ways an unauthorized user or hacker might use a sniffer to compromise the security of a network.

HOW DO PACKET SNIFFERS WORK?

The idea of sniffing or packet capturing may seem very high-tech. In reality it is a very simple technology. First, a quick primer on Ethernet. Ethernet operates on a principle called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). In essence, the network interface card (NIC) attempts to communicate on the wire (or Ethernet). Because Ethernet is a shared technology, the NIC must wait for an "opening" on the wire before communicating. If no other host is communicating, then the NIC simply sends the packet. If, however, another host is already communicating, the network card will wait for a random, short period of time and then try to retransmit. Normally, the host is only interested in packets destined for its address; but because Ethernet is a shared technology, all the packet sniffer needs to do is turn the NIC on in promiscuous mode and "listen" to the packets on


Exhibit 14-1. Summary window with statistics about the packets as they are being captured.

the wire. The network adapter can capture packets from the data-link layer all the way through the application layer of the OSI model. Once these packets have been captured, they can be summarized in reports or viewed individually. In addition, filters can be set up either before or after a capture session. A filter allows the capturing or displaying of only those protocols defined in the filter. ETHEREAL Several software packages exist for capturing and analyzing packets and network traffic. One of the most popular is Ethereal. This network protocol analyzer can be downloaded from http://www.ethereal.com/ and installed in a matter of minutes. Various operating systems are supported, including Sun Solaris, HP-UX, BSD (several distributions), Linux (several distributions), and Microsoft Windows (95/98/ME, NT4/2000/XP). At the time of this writing, Ethereal was open-source software licensed under the GNU General Public License. After download and installation, the security practitioner can simply click on “Capture” and then “Start,” choose the appropriate network adapter, and then click on “OK.” The capture session begins, and a summary window displays statistics about the packets as they are being captured (see Exhibit 14-1). 212
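The same capture-and-summarize step can also be scripted. The sketch below is a minimal illustration, not a substitute for Ethereal, and assumes the open-source Scapy library for Python is installed and that the script runs with administrative privileges so the adapter can be placed in promiscuous mode.

    # Minimal capture sketch using the Scapy library (an assumption here; any
    # capture library or tool such as Ethereal accomplishes the same thing).
    # Requires root/administrator rights to open the adapter promiscuously.
    from scapy.all import sniff

    def show(packet):
        # Print a one-line summary per packet: source, destination, protocol
        print(packet.summary())

    # Capture ten packets; the optional BPF filter plays the same role as an
    # Ethereal capture filter set before the session starts.
    sniff(count=10, prn=show, filter="tcp or udp")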


Exhibit 14-2. The Ethereal capture session.

Simply click on “Stop” to end the capture session. Exhibit 14-2 shows an example of what the Ethereal capture session looks like. The top window of the session displays the individual packets in the capture session. The information displayed includes the packet number, the time the packet arrived since the capture was started, the source address of the packet, the destination address of the packet, the protocol, and other information about the packet. The second window parses and displays the individual packet in an easily readable format, in this case packet number one. Further detail regarding the protocol and the source and destination addresses is displayed in summary format. The third window shows a data dump of the packet displaying both the hex and ASCII values of the entire packet. Further packet analysis can be done by clicking on the “Tools” menu. Clicking on “Protocol Hierarchy Statistics” will generate a summary report of the protocols captured during the session. Exhibit 14-3 shows an example of what the protocol hierarchy statistics would look like. 213


Exhibit 14-3. The protocol hierarchy statistics.
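A rough, scripted equivalent of the protocol hierarchy report shown in Exhibit 14-3 can be produced from a saved capture file. The sketch below is an added illustration, assuming the Scapy library and a hypothetical trace file named capture.pcap; it counts packets by their top-most protocol layer.

    # Illustrative only: summarize a saved capture by protocol, loosely
    # mirroring Ethereal's "Protocol Hierarchy Statistics" report.
    from collections import Counter
    from scapy.all import rdpcap

    packets = rdpcap("capture.pcap")   # hypothetical capture file
    counts = Counter(pkt.lastlayer().name for pkt in packets)

    total = len(packets)
    for protocol, count in counts.most_common():
        print("%-12s %6d packets (%.1f%%)" % (protocol, count, 100.0 * count / total))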

The security practitioner can also get overall statistics on the session, including total packets captured, elapsed time, average packets per second, and the number of dropped packets. Ethereal is a very powerful tool that is freely available over the Internet. While it may take an expert to fully understand the capture sessions, it does not take an expert to download and install the tool. Certainly the aspiring hacker would have no trouble with the installation and configuration. The security practitioner should understand the availability, features, and ease of use of packet sniffers like Ethereal. Having an awareness of these tools will allow the security practitioner to better understand how the packet sniffer could be used to exploit weaknesses and how to mitigate risk associated with them. LEGITIMATE USES Because the sniffer was invented to help network administrators, many legitimate uses exist for them. Troubleshooting was the first use for the sniffer, but performance analysis quickly followed. Now, many uses for sniffers exist, including those for intrusion detection. Troubleshooting The most obvious use for a sniffer is to troubleshoot a network or application problem. From a network troubleshooting perspective, capture tools can tell the network administrator how many computers are communicating on a network segment, what protocols are used, who is sending or receiving the most traffic, and many other details about the network and its hosts. For example, some network-centric applications are very complex 214


and have many components. Here is a list of some components that play a role in a typical client/server application:

• Client hardware
• Client software (OS and application)
• Server hardware
• Server software (OS and application)
• Routers
• Switches
• Hubs
• Ethernet network, T1s, T3s, etc.

This complexity often makes the application extremely difficult to troubleshoot from a network perspective. A packet sniffer can be placed anywhere along the path of the client/server application and can unravel the mystery of why an application is not functioning correctly. Is it the network? Is it the application? Perhaps it has to do with lookup issues in a database. The sniffer, in the hands of a skilled network analyst, can help determine the answers to these questions.

A packet sniffer is a powerful troubleshooting tool for several reasons. It can filter traffic based on many variables. For example, let us say the network administrator is trying to troubleshoot a slow client/server application. He knows the server name is slopoke.xyzcompany.com and the client host's name is impatient.xyzcompany.com. The administrator can set up a filter to watch only the traffic between the server and the client.

The placement of the packet sniffer is critical to the success of the troubleshooting. Because the sniffer only sees packets on the local network segment, the sniffer must be placed in the correct location. In addition, when analyzing the capture, the analyst must keep the location of the packet sniffer in mind in order to interpret the capture correctly. If the analyst suspects the server is responding slowly, the sniffer could be placed on the same network segment as the server to gather as much information about the server traffic as possible. Conversely, if the client is suspected of being the cause, the sniffer should be placed on the same network segment as the client. It may be necessary to place the tool somewhere between the two endpoints.

In addition to placement, the network administrator may need to set up a filter to watch only certain protocols. For instance, if a Web application using HTTP on port 80 is having problems, it may be beneficial to create a filter to capture only HTTP packets on port 80, as shown in the sketch below. This filter will significantly reduce the amount of data the troubleshooter will need to sift through to find the problem. Keep in mind, however, that setting this filter can cause the sniffer to miss important packets that could be the root cause of the problem.
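The filter described above can be written down concretely. The sketch below is an added illustration using the hostnames from the example; the filter expression is standard BPF syntax of the kind accepted by tcpdump, by Ethereal capture filters, and by the Scapy library's sniff() function (assumed here).

    # Illustrative only: a capture filter for the slow client/server example.
    from scapy.all import sniff

    # Only traffic between the client and server named in the example...
    conversation = "host slopoke.xyzcompany.com and host impatient.xyzcompany.com"

    # ...optionally narrowed to the suspect Web application on TCP port 80.
    # As noted above, an overly tight filter can also discard the packets
    # that contain the real root cause.
    http_only = conversation + " and tcp port 80"

    packets = sniff(filter=http_only, count=100)
    packets.summary()   # one-line summary of each captured packet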


TELECOMMUNICATIONS AND NETWORK SECURITY Performance and Network Analysis Another legitimate use of a packet sniffer is for network performance analysis. Many packet sniffer tools can also provide a basic level of network performance and analysis. They can display the general health of the network, network utilization, error rates, summary of protocols, etc. Specialized performance management tools use specialized packet sniffers called RMON probes to capture and forward information to a reporting console. These systems collect and store network performance and analysis information in a database so the information can be displayed on an operator console, or displayed in graphs or summary reports. Network-Based Intrusion Detection Network-based intrusion detection systems (IDSs) use a sniffer-like packet capture tool as the primary means of capturing data for analysis. A network IDS captures packets and compares the packet signatures to its database of attacks for known attack signatures. If it sees a match, it logs the appropriate information to the IDS logs. The security practitioner can then go back and review these logs to determine what happened. If in fact the attack was successful, this information can later be used to determine how to mitigate the attack or vulnerability to prevent it from happening in the future. Verifying Security Configurations Just as the network administrator may use the sniffer to troubleshoot a network problem, so too can the security practitioner use the sniffer to verify security configurations. A security practitioner may use a packet sniffer to review a VPN application to see if data is being transferred between gateways or hosts in encrypted format. The packet sniffer can also be used to verify a firewall configuration. For example, if a security practitioner has recently installed a new firewall, it would be prudent to test the firewall to make sure its configuration is stopping the protocols it has been configured to stop. The security practitioner can place a packet sniffer on the network behind the firewall and then use a separate host to scan ports of the firewall, or open up connections to hosts that sit behind the firewall. If the firewall is configured correctly, it will only allow ports and connections to be established based on its rule set. Any discrepancies could be reviewed to determine if the firewall is misconfigured or if there is simply an underlying problem with the firewall architecture. MISUSE Sniffing has long been one of the most popular forms of passive attacks by hackers. The ability to “listen” to network conversations is very powerful and 216


Packet Sniffers: Use and Misuse intriguing. A hacker can use the packet sniffer for a variety of attacks and information-gathering activities. They may be installed to capture usernames and passwords, gather information on other hosts attached to the same network, read e-mail, or capture other proprietary information or data. Hackers are notorious for installing root kits on their victim hosts. These root kits contain various programs designed to circumvent security on a host and allow a hacker to access a host without the administrator’s knowledge. Most modern root kits, or backdoor programs, include tools such as stealth backdoors, keystroke loggers, and often specialized packet sniffers that can capture sensitive information. The SubSeven backdoor for Windows even includes a remotely accessible GUI (graphical user interface) packet sniffer. The GUI makes the packet sniffer easily accessible and simple to use. The packet sniffer can be configured to collect network traffic, save this information into a log, and relay these logs. Network Discovery Information gathering is one of the first steps hackers must take when attacking a host. In this phase of the attack, they are trying to learn as much about a host or network as they can. If the attackers have already compromised a host and installed a packet sniffer, they can quickly learn more about the compromised host as well as other hosts with whom that host communicates. Hosts are often configured to trust one another. This trust can quickly be discovered using a packet sniffer. In addition, the attacker can quickly learn about other hosts on the same network by monitoring the network traffic and activity. Network topology information can also be gathered. By reviewing the IP addresses and subnets in the captures, the attacker can quickly get a feel for the layout of the network. What hosts exist on the network and are critical? What other subnets exist on the network? Are there extranet connections to other companies or vendors? All of these questions can be answered by analyzing the network traffic captured by the packet sniffer. Credential Sniffing Credential sniffing is the act of using a packet capture tool to specifically look for usernames and passwords. Several programs exist only for this specific purpose. One such UNIX program called Esniff.c only captures the first 300 bytes of all Telnet, FTP, and rlogin sessions. This particular program can capture username and password information very quickly and efficiently. In the Windows environment, L0phtcrack is a program that contains a sniffer that can capture hashed passwords used by Windows systems using LAN manager authentication. Once the hash has been captured, the 217

Exhibit 14-4. Protocols vulnerable to packet sniffing.
• Telnet and rlogin: Credentials and data are sent in cleartext.
• HTTP: Basic authentication sends credentials in a simple encoded form, not encrypted; easily readable if SSL or other encryption is not used.
• FTP: Credentials and data are sent in cleartext.
• POP3 and IMAP: Credentials and data are sent in cleartext.
• SNMP: Community strings for SNMPv1 (the most widely used) are sent in cleartext, including both public and private community strings.

L0phtcrack program runs a dictionary attack against the password. Depending on the length and complexity of the password, it can be cracked in a matter of minutes, hours, or days. Another popular and powerful password sniffing program is dsniff. This tool’s primary purpose is credential sniffing and can be used on a wide range of protocols including, but not limited to, HTTP, HTTPS, POP3, and SSH. Use of a specific program like Esniff.c, L0phtcrack, or dsniff is not even necessary, depending on the application or protocol. A simple packet sniffer tool in the hands of a skilled hacker can be very effective. This is due to the very insecure nature of the various protocols. Exhibit 14-4 lists some of the protocols that are susceptible to packet sniffing. E-Mail Sniffing How many network administrators or security practitioners have sent or received a password via e-mail? Most, if not all, have at some point in time. Very few e-mail systems are configured to use encryption and are therefore vulnerable to packet sniffers. Not only is the content of the e-mail vulnerable but the usernames and passwords are often vulnerable as well. POP3 (Post Office Protocol version 3) is a very popular way to access Internet e-mail. POP3 in its basic form uses usernames and passwords that are not encrypted. In addition, the data can be easily read. Security is always a balance of what is secure and what is convenient. Accessing e-mail via a POP3 client is very convenient. It is also very insecure. One of the risks security practitioners must be aware of is that, by allowing POP3 e-mail into their enterprise network, they may also be giving hackers both a username and password to access their internal network. Many systems within an enterprise are configured with the same usernames; and from the user’s standpoint, they often synchronize their passwords across multiple systems for simplicity’s sake or possibly use a single 218


Packet Sniffers: Use and Misuse sign-on system. For example, say John Smith has a username of “JSMITH” and has a password of “FvYQ-6d3.” His username would not be difficult to guess, but his password is fairly complex and contains a random string of characters and numbers. The enterprise network that John is accessing has decided to configure its e-mail server to accept POP3 connections because several users, including John, wanted to use a POP3 client to remotely access their e-mail. The enterprise also has a VPN device configured with the same username and password as the e-mail system. If attackers compromise John’s password via a packet sniffer watching the POP3 authentication sequence, they may quickly learn they now have access directly into the enterprise network using the same username and password on the Internet-accessible host called “VPN.” This example demonstrates the vulnerability associated with allowing certain insecure protocols and system configurations. Although the password may not have been accessible through brute force, the attackers were able to capture the password in the clear along with its associated username. In addition, they were able to capitalize on the vulnerability by applying the same username and password to a completely separate system. ADVANCED SNIFFING TOOLS Switched Ethernet Networks “No need to worry. I have a switched Ethernet network.” Wrong! It used to be common for network administrators to refer to a switched network as secure. While it is true they are more secure, several vulnerabilities and techniques have surfaced over the past several years that make them less secure. Reconfigure SPAN/Mirror Port. The most obvious way to capture packets

in a switched network is to reconfigure the switch to send all packets to the port into which the packet sniffer is plugged. This can be done with a single command line on a Cisco switch. Once configured, the switch will send all packets for a port, group of ports, or even an entire VLAN directly to the specified port. This emphasizes the need for increased switch security in today’s environments. A single switch without a password, or with a simple password, can allow an intruder access to a plethora of data and information. Incidentally, this is an excellent reason why a single Ethernet switch should not be used both inside and outside a firewall. Ideally, the outside, inside, and DMZ segments should each have their own physical switches. Also, use a stronger form of authentication than passwords alone on the network devices. If passwords must be used, make sure they are very complex, and do not use the same password for the outside, DMZ, and inside switches.
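
Security practitioners can also turn this around and verify, from an ordinary workstation, whether a port is exposing traffic it should not. The following is a minimal sketch using Python and the scapy library (the 30-second sample window and default-interface handling are illustrative assumptions, not a tool referenced in this chapter); it reports whether the local card observes unicast frames addressed to other stations, something a correctly configured, unmirrored switch port should not deliver.

```python
# Passive check: does this segment expose other stations' unicast traffic?
# Requires scapy and sufficient privileges to sniff on the interface.
from scapy.all import sniff, Ether, get_if_hwaddr, conf

IFACE = conf.iface                      # default interface; adjust as needed
MY_MAC = get_if_hwaddr(IFACE).lower()
BROADCAST = "ff:ff:ff:ff:ff:ff"
foreign_unicast = set()

def classify(pkt):
    if Ether not in pkt:
        return
    dst = pkt[Ether].dst.lower()
    if dst in (MY_MAC, BROADCAST):
        return
    if int(dst.split(":")[0], 16) & 1:   # multicast bit set; ignore
        return
    foreign_unicast.add(dst)             # unicast frame meant for another station

sniff(iface=IFACE, prn=classify, store=0, timeout=30)

if foreign_unicast:
    print("Saw unicast frames for %d other stations; this port may be "
          "mirrored, or the switch may be repeating traffic." % len(foreign_unicast))
else:
    print("No foreign unicast frames observed during the sample window.")
```

A clean result does not prove the switch is secure, but a noisy one is a strong hint that the port is mirrored or the switch has fallen back into repeating mode.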


Switch Jamming. Switch jamming involves overflowing the address table of a switch with a flood of false MAC addresses. For some switches this will cause the switch to change from “bridging” mode into “repeating” mode, where all frames are broadcast to all ports. When the switch is in repeating mode, it acts like a hub and allows an attacker to capture packets as if they were on the same local area network. ARP Redirect. An ARP redirect occurs when a host is configured to send a false ARP request to another host or router. This false request essentially tricks the target host or router into sending traffic destined for the victim host to the attacker’s host. Packets are then forwarded from the attacker’s computer back to the victim host, so the victim cannot tell the communication is being intercepted. Several programs exist that allow this to occur, such as ettercap, angst, and dsniff. ICMP Redirect. An ICMP redirect is similar to the ARP redirect, but in this case the victim’s host is told to send packets directly to an attacker’s host, regardless of how the switch thinks the information should be sent. This too would allow an attacker to capture packets to and from a remote host. Fake MAC Address. Switches forward information based on the MAC (Media Access Control) addresses of the various hosts to which they are connected. The MAC address is a hardware address that is supposed to uniquely identify each node of a network. This MAC address can be faked or forged, which can result in the switch forwarding packets (originally destined for the victim’s host) to the attacker’s host. It is possible to intercept this traffic and then forward the traffic back to the victim computer, so the victim host does not know the traffic is being intercepted. Other Switch Vulnerabilities. Several other vulnerabilities related to switched networks exist; but the important thing to remember is that, just because a network is built entirely of switches, it does not mean that the network is not vulnerable to packet sniffing. Even without exploiting a switch network vulnerability, an attacker could install a packet sniffer on a compromised host.
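
Because the redirect techniques above all work by changing which MAC address a victim associates with an IP address, a simple passive monitor can flag them. Below is a minimal sketch using Python and scapy; the in-memory table and the print-based alert are assumptions for illustration rather than features of any product named in this chapter.

```python
# Watch ARP replies and flag any IP address whose advertised MAC changes.
from scapy.all import sniff, ARP

ip_to_mac = {}

def watch_arp(pkt):
    if ARP in pkt and pkt[ARP].op == 2:          # op 2 = "is-at" (ARP reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc.lower()
        known = ip_to_mac.get(ip)
        if known is not None and known != mac:
            print("WARNING: %s moved from %s to %s; possible ARP redirect"
                  % (ip, known, mac))
        ip_to_mac[ip] = mac

sniff(filter="arp", prn=watch_arp, store=0)      # runs until interrupted
```

Legitimate events (a replaced network card, router failover) also change a MAC address, so treat an alert as a prompt to investigate rather than proof of an attack.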

Wireless Networks Wireless networks add a new dimension to packet sniffing. In the wired world, an attacker must either remotely compromise a system or gain physical access to the network in order to capture packets. The advent of the wireless network has allowed attackers to gain access to an enterprise without ever setting foot inside the premises. For example, with a simple setup including a laptop, a wireless network card, and software packages downloaded over the Internet, an attacker has the ability to detect, connect to, and monitor traffic on a victim’s network.


Exhibit 14-5. Suggestions for mitigating risk associated with insecure protocols.
• Telnet and rlogin: Replace Telnet or rlogin with Secure Shell (SSH).
• HTTP: Run the HTTP session over a Secure Sockets Layer (SSL) or Transport Layer Security (TLS) connection.
• FTP: Replace with secure copy (SCP) or create an IPSec VPN between the hosts.
• POP3 and IMAP: Replace with S/MIME or use PGP encryption.
• SNMP: Increase the security by using SNMPv2 or SNMPv3, or create a management IPSec VPN between the host and the network management server.
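
One quick way to see whether the insecure protocols in Exhibit 14-5 are still in use is simply to watch for their well-known ports. The sketch below uses Python and scapy; the port list and the 60-second live sample are illustrative assumptions, and a saved capture file can be examined instead.

```python
# Count traffic seen on the well-known ports of common cleartext protocols.
from collections import Counter
from scapy.all import sniff, IP, TCP, UDP

CLEARTEXT_PORTS = {23: "Telnet", 513: "rlogin", 21: "FTP", 80: "HTTP",
                   110: "POP3", 143: "IMAP", 161: "SNMP"}
seen = Counter()

def tally(pkt):
    if IP not in pkt:
        return
    layer = TCP if TCP in pkt else (UDP if UDP in pkt else None)
    if layer is None:
        return
    for port in (pkt[layer].sport, pkt[layer].dport):
        if port in CLEARTEXT_PORTS:
            seen[CLEARTEXT_PORTS[port]] += 1

sniff(prn=tally, store=0, timeout=60)   # or: sniff(offline="sample.pcap", prn=tally, store=0)

for proto, count in seen.most_common():
    print("%-7s %6d packets; consider the secure replacement above" % (proto, count))
```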

The increase in the popularity of wireless networks has also been followed by an increase in war-driving. War-driving is the act of driving around in a car searching for wireless access points and networks with wireless sniffer-like tools. The hacker can even configure a GPS device to log the exact location of the wireless network. Information on these wireless networks and their locations can be added to a database for future reference. Several sites on the Internet even compile information that people have gathered from around the world on wireless networks and their locations. REDUCING THE RISK There are many ways to reduce the risk associated with packet sniffers. Some of them are easy to implement, while others take complete reengineering of systems and processes. Use Encryption The best way to mitigate risk associated with packet sniffers is to use encryption. Encryption can be deployed at the network level, in the applications, and even at the host level. Exhibit 14-5 lists the “insecure” protocols discussed in the previous section, and suggests a “secure” solution that can be deployed. Security practitioners should be aware of the protocols in use on their networks. They should also be aware of the protocols used to connect to and transfer information outside their network (either over the Internet or via extranet connections). A quick way to determine if protocols vulnerable to sniffing are being used is to check the rule set on the Internet or extranet firewalls. If insecure protocols are found, the security practitioner should investigate each instance and determine exactly what information is being transferred and how sensitive the information is. If the information is sensitive and a more secure alternative exists, the practitioner should recommend and implement a secure alternative. Often, this requires the 221


security practitioner to educate the users on the issues associated with using insecure means to connect to and send information to external parties. IPSec VPNs. A properly configured IPSec VPN can significantly reduce the risk associated with insecure protocols as well. The VPN can be configured from host to host, host to gateway, or gateway to gateway, depending on the environment and its requirements. The VPN “tunnels” the traffic in a secure fashion that prevents an attacker from sniffing the traffic as it traverses the network. Keep in mind, however, that even if a VPN is installed, an attacker could still compromise the endpoints of the VPN and have access to the sensitive information directly on the host. This highlights the increased need for strong host security on the VPN endpoint, whether it is a Windows client connecting from a home network or a VPN router terminating multiple VPN connections.

Use Strong Authentication Because passwords are vulnerable to brute-force attack or outright sniffing over the network, an obvious risk mitigation would be to stop using passwords and use a stronger authentication mechanism. This could involve using Kerberos, token cards, smart cards, or even biometrics. The security practitioner must take into consideration the business requirements and the costs associated with each solution before determining which authentication method suits a particular system, application, or enterprise as a whole. By configuring a system to use a strong authentication method, the vulnerability of discovered passwords is no longer an issue. Patches and Updates To capture packets on the network, a hacker must first compromise a host (assuming the hacker does not have physical access). If all the latest patches have been applied to the hosts, the risk of someone compromising a host and installing a capture tool will be significantly reduced. Secure the Wiring Closets Because physical access is one way to access a network, make sure your wiring closets are locked. It is a very simple process to ensure the doors are secured to the wiring closets. A good attack and penetration test will often begin with a check of the physical security and of the security of the wiring closets. If access to a closet is gained and a packet sniffer is set up, a great deal of information can be obtained in short order. There is an obvious reason why an attack and penetration might begin this way. If the perimeter network and the remote access into a company 222


are strong, the physical security may well be the weak link in the chain. A hacker who is intent on gaining access to the network goes through the same thought process. Also, keep in mind that, with the majority of attacks originating from inside the network, you can mitigate the risk of an internal employee using a packet sniffer in a wiring closet by simply locking the doors. Detecting Packet Sniffers Another way to reduce the risk associated with packet sniffers is to monitor the monitors, so to speak. This involves running a tool that can detect a host’s network interface cards running in promiscuous mode. Several tools exist, from simple command-line utilities — which tell whether or not a NIC on the local host is running in promiscuous mode — to more elaborate programs such as Antisniff, which actively scans the network segment looking for other hosts with NICs running in promiscuous mode. SUMMARY The sniffer can be a powerful tool in the hands of the network administrator or security practitioner. Unfortunately, it can be equally powerful in the hands of the hacker. Not only are these tools powerful, they are relatively easy to download off the Internet, install, and use. Security practitioners must be aware of the dangers of packet sniffers and design and deploy security solutions that mitigate the risks associated with them. Keep in mind that credential information gathered with a packet sniffer on one system can often be used to access other, unrelated systems that share the same username and password. ABOUT THE AUTHOR Steve A. Rodgers, CISSP, has been assisting clients in securing their information assets for over six years. Rodgers specializes in attack and penetration testing, security policy and standards development, and security architecture design. He is the co-founder of Security Professional Services (www.securityps.com) and can be reached at [email protected].




Chapter 15

ISPs and Denial-of-Service Attacks K. Narayanaswamy, Ph.D.

A denial-of-service (DoS) attack is any malicious attempt to deprive legitimate customers of their ability to access services, such as a Web server. DoS attacks fall into two broad categories: 1. Server vulnerability DoS attacks: attacks that exploit known bugs in operating systems and servers. These attacks typically will use the bugs to crash programs that users routinely rely upon, thereby depriving those users of their normal access to the services provided by those programs. Examples of vulnerable systems include all operating systems, such as Windows NT or Linux, and various Internet-based services such as DNS, Microsoft’s IIS servers, Web servers, etc. All of these programs, which have important and useful purposes, also have bugs that hackers exploit to bring them down or hack into them. This kind of DoS attack usually comes from a single location and searches for a known vulnerability in one of the programs it is targeting. Once it finds such a program, the DoS attack will attempt to crash the program to deny service to other users. Such an attack does not require high bandwidth. 2. Packet flooding DoS attacks: attacks that exploit weaknesses in the Internet infrastructure and its protocols. Floods of seemingly normal packets are used to overwhelm the processing resources of programs, thereby denying users the ability to use those services. Unlike the previous category of DoS attacks, which exploit bugs, flood attacks require high bandwidth in order to succeed. Rather than use the attacker’s own infrastructure to mount the attack (which might be easier to detect), the attacker is increasingly likely




to carry out attacks through intermediary computers (called zombies) that the attacker has earlier broken into. Zombies are coordinated by the hacker at a later time to launch a distributed DoS (DDoS) attack on a victim. Such attacks are extremely difficult to trace and defend against on the present-day Internet. Most zombies come from home computers, universities, and other vulnerable infrastructures. Often, the owners of the computers are not even aware that their machines are being co-opted in such attacks. The hacker community has invented numerous scripts to make it convenient for those interested in mounting such attacks to set up and orchestrate the zombies. Many references are available on this topic.1–4 Throughout this chapter we will use the term DoS attacks to mean all denial-of-service attacks, and DDoS to mean flood attacks as described above. As with most things in life, there is good news and bad news in regard to DDoS attacks. The bad news is that there is no “silver bullet” in terms of technology that will make the problem disappear. The good news, however, is that with a combination of common-sense processes and practices and, in due course, appropriate technology, the impact of DDoS attacks can be greatly reduced. THE IMPORTANCE OF DDoS ATTACKS Many wonder why network security problems, and DDoS problems in particular, have seemingly increased suddenly in seriousness and importance. The main reason, ironically, is the unanticipated growth and success of ISPs. The rapid growth of affordable, high-bandwidth connection technologies (such as DSL, cable modem, etc.) offered by various ISPs has brought every imaginable type of customer into the fast Internet access arena: corporations, community colleges, small businesses, and the full gamut of home users. Unfortunately, people who upgrade their bandwidth do not necessarily upgrade their knowledge of network security at the same time; all they see is what they can accomplish with speed. Few foresee the potential security dangers until it is too late. As a result, the Internet has rapidly become a high-speed network with depressingly low per-site security expertise. Such a network is almost an ideal platform to exploit in various ways, including the mounting of DoS attacks. Architecturally, ISPs are ideally situated to play a crucial role in containing the problem, although they have traditionally not been proactive on security matters. A recent study by the University of California, San Diego estimates that there are over 4000 DDoS attacks every week.5 Financial damages from the infamous February 2000 attacks on Yahoo, CNN, and eBay were estimated to be around $1 billion.6 Microsoft, Internet security watchdog CERT, the Department of Defense, and even the White House have been targeted by attackers.


Of course, these are high-profile installations, with some options when it comes to responses. Steve Gibson documents how helpless the average enterprise might be to ward off DDoS attacks at www.grc.com. There is no doubt that DoS attacks are becoming more numerous and deadly. WHY IS DDoS AN ISP PROBLEM? When major corporations suffer the kind of financial losses just described, and given the fanatically deterministic American psyche that requires a scapegoat (if not a reasonable explanation) for every calamity and the litigious culture that has resulted from it, someone, rightly or wrongly, is eventually going to pay dearly. The day is not far off when, in the wake of a devastating DDoS attack, an enterprise will pursue litigation against the owner of the infrastructure that could (arguably) have prevented the attack with due diligence. A recent article explores this issue further from the legal perspective of an ISP.7 Our position is not so much that you need to handle DDoS problems proactively today; however, we do believe you would be negligent not to examine the issue immediately from a cost/benefit perspective. Even if you have undertaken such an assessment already, you may need to revisit the topic in light of new developments and the state of the computing world after September 11, 2001. The Internet has a much-ballyhooed, beloved, open, chaotic, laissez faire philosophical foundation. This principle permeates the underlying Internet architecture, which is optimized for speed and ease of growth and which, in turn, has facilitated the spectacular explosion and evolution of this infrastructure. For example, thus far, the market has prioritized issues of privacy, speed, and cost over other considerations such as security. However, changes may be afoot and ISPs should pay attention. Most security problems at various enterprise networks are beyond the reasonable scope of ISPs to fix. However, the DDoS problem is indeed technically different. Individual sites cannot effectively defend themselves against DDoS attacks without some help from their infrastructure providers. When under DDoS attack, the enterprise cannot block out the attack traffic or attempt to clear upstream congestion to allow some of its desirable traffic to get through. Thus, the very nature of the DDoS problem virtually compels the involvement of ISPs. The best possible outcome for ISPs is to jump in and shape the emerging DDoS solutions voluntarily with dignity and concern, rather than being perceived as having been dragged, kicking and screaming, into a dialog they do not want. Uncle Sam is weighing in heavily on DDoS as well. In December 2001, the U.S. Government held a DDoS technology conference in Arlington, Virginia, sponsored by the Defense Advanced Research Projects Agency (DARPA)


and the Joint Task Force–Computer Network Operations. Fourteen carefully screened companies were selected to present their specific DDoS solutions to the government. Newly designated cybersecurity czar Richard Clarke, who keynoted the conference, stressed the critical importance of DDoS, how the administration views this problem as a threat to the nation’s infrastructure, and that protecting the Internet infrastructure is indeed part of the larger problem of homeland security. The current Republican administration, one might safely assume, is disposed toward deregulation and letting the market sort out the DDoS problem. In the reality of post-September 11 thinking, however, it is entirely conceivable that ISPs will eventually be forced to contend with government regulations mandating what they should provide by way of DDoS protection. WHAT CAN ISPs DO ABOUT DDoS ATTACKS? When it comes to DDoS attacks, security becomes a two-way street. Not only must the ISP focus on providing as much protection as possible against incoming DDoS attacks against its customers, but it must also do as much as possible to prevent outgoing DDoS attacks from being launched from its own infrastructure against others. All of these measures are feasible and cost very little in today’s ISP environment. Minimal measures such as these can significantly reduce the impact of DDoS attacks on the infrastructure, perhaps staving off more draconian measures mandated by the government. An ISP today must have the ability to contend with the DDoS problem at different levels: • Understand and implement the best practices to defend against DDoS attacks. • Understand and implement necessary procedures to help customers during DDoS attacks. • Assess DDoS technologies to see if they can help. We address each of these major areas below. Defending against DDoS Attacks In discussing what an ISP can do, it is important to distinguish the ISP’s own infrastructure (its routers, hosts, servers, etc.), which it fully controls, from the infrastructure of the customers who lease its Internet connectivity, which the ISP cannot, and should not, control. Most of the measures we recommend for ISPs are also appropriate for their customers to carry out. The extent to which ISPs can encourage or enable their customers to follow these practices will have a direct bearing on the number of DDoS attacks they see.


ISPs and Denial-of-Service Attacks Step 1: Ensure the Integrity of the Infrastructure. An ISP plays a critical role in the Internet infrastructure. It is, therefore, very important for ISPs to ensure that their own routers and hosts are resistant to hacker compromise. This means following all the necessary best practices to protect these machines from break-ins and intrusions of any kind. Passwords for user and root accounts must be protected with extra care, and old accounts must be rendered null and void as soon as possible.

In addition, ISPs should ensure that their critical servers (DNS, Web, etc.) are always current on software patches, particularly if they are security related. These programs will typically have bugs that the vendor eliminates through new patches. When providing services such as Telnet, FTP, etc., ISPs should consider the secure versions of these protocols such as SSH, SCP, etc. The latter versions use encryption to set up secure connections, making it more difficult for hackers using packet sniffing tools to acquire usernames and passwords, for example. ISPs can do little to ensure that their users are as conscientious about these matters as they ought to be. However, providing users with the knowledge and tools necessary to follow good security practices themselves will be very helpful. Step 2: Resist Zombies in the Infrastructure. Zombies are created by hackers who break into computers. Although by no means a panacea, tools such as intrusion detection systems (IDSs) provide some amount of help in detecting when parts of an infrastructure have become compromised. These tools vary widely in functionality, capability, and cost. They have a lot of utility in securing computing assets beyond DDoS protection. (A good source on this topic is Note 8.) Certainly, larger customers of the ISP with significant computing assets should also consider such tools.

Where possible, the ISP should provide users (e.g., home users or small businesses) with the necessary software (e.g., downloadable firewalls) to help them. Many ISPs are already providing free firewalls, such as ZoneAlarm, with their access software. Such firewalls can be set up to maximize restrictions on the customers’ computers (e.g., blocking services that typical home computers are never likely to provide). Simple measures like these can greatly improve the ability of these computers to resist hackers. Most zombies can now be discovered and removed from a computer by the traditional virus scanning software from McAfee, Symantec, and other vendors. It is important to scan not just programs but also any documents with executable content (such as macros). In other words, everything on a disk requires scanning. The only major problem with all virus scanning regimes is that they currently use databases that have signatures


of known viruses, and these databases require frequent updates as new viruses get created. As with firewalls, at least in cases where users clearly can use the help, the ISP could try bundling its access software, if any, with appropriate virus scanning software and make it something the user has to contend with before getting on the Internet. Step 3: Implement Appropriate Router Filters. Many DDoS attacks (e.g., Trinoo, Tribal Flood, etc.) rely on source address spoofing, an underlying vulnerability of the Internet protocols whereby the sender of a packet can conjure up a source address other than his actual address. In fact, the protocols allow packets to have completely fabricated, nonexistent source addresses. Several attacks rely on this weakness. Spoofing makes attacks much more difficult to trace: one cannot determine the source of an attack just by examining the packet contents, because the attacker controls those contents.
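
The router filters described in the next paragraphs reduce to a membership test on the source address. The standard-library Python sketch below illustrates that logic with hypothetical prefix lists; in practice the same rules would be expressed in the border routers themselves rather than in a script.

```python
# Illustration of the address checks behind ingress/egress filtering.
import ipaddress

CUSTOMER_PREFIXES = [ipaddress.ip_network("192.0.2.0/24"),      # example prefixes only
                     ipaddress.ip_network("198.51.100.0/24")]
PRIVATE_PREFIXES = [ipaddress.ip_network("10.0.0.0/8"),         # RFC 1918 space
                    ipaddress.ip_network("172.16.0.0/12"),
                    ipaddress.ip_network("192.168.0.0/16")]

def forward_outbound(src_ip):
    """Toward the Internet: drop packets whose source is not one of ours."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in CUSTOMER_PREFIXES)

def accept_inbound(src_ip):
    """From the Internet: drop private sources and sources claiming our own space."""
    src = ipaddress.ip_address(src_ip)
    if any(src in net for net in PRIVATE_PREFIXES):
        return False
    return not any(src in net for net in CUSTOMER_PREFIXES)

print(forward_outbound("192.0.2.7"))     # True: legitimate customer source
print(forward_outbound("203.0.113.9"))   # False: spoofed source, would be dropped
print(accept_inbound("10.1.2.3"))        # False: RFC 1918 source arriving from outside
```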

There is no legitimate reason why an ISP should forward outgoing packets that do not have source addresses from its known legitimate range of addresses. It is relatively easy, given present-day routers, to filter outgoing packets at the border of an ISP that do not have valid source addresses. This is called ingress filtering, described in more detail in RFC 2267. Routers can also implement egress filtering at the point where traffic enters the ISP to ensure that source addresses are valid to the extent possible (e.g., source addresses cannot be from the ISP, packets from specific interfaces must match expected IP addresses, etc.). Note that such filters do not eliminate all DDoS attacks; however, they do force attackers to use methods that are more sophisticated and do not rely on ISPs forwarding packets with obviously forged source addresses. Many ISPs also have blocks of IP addresses set aside that will never be the source or destination of Internet traffic (see RFC 1918). These are addresses for traffic that will never reach the Internet. The ISP should not accept traffic with these destinations, nor should it allow outbound traffic from the IP addresses set aside in this manner. Step 4: Disable Facilities You May Not Need. Every port that you open (albeit to provide a legitimate service) is a potential gate for hackers to exploit. Therefore, ISPs, like all enterprises, should ensure they block any and all services for which there is no need. Customer sites should certainly be provided with the same recommendations.

You should evaluate the following features to see whether they are enabled in your network and what positive value, if any, you get from them:


• Directed broadcast. Some DDoS attacks rely on the ability to broadcast packets to many different addresses to amplify the impact of their handiwork. Directed broadcast is a feature that should not be needed for inbound traffic on border routers at the ISP. • Source routing. This is a feature that enables the sender of a packet to specify an ISP address through which the packet must be routed. Unless there is a compelling reason to keep it, this feature should be disabled because compromised computers within the ISP infrastructure can exploit it to become more difficult to locate during attacks. Step 5: Impose Rate Limits on ICMP and UDP Traffic. Many DDoS attacks exploit the vulnerability of the Internet where the entire bandwidth can be filled with undesirable packets of different descriptions. ICMP (Internet Control Message Protocol, used by ping) packets and User Datagram Protocol (UDP) packets are examples of this class. You cannot completely eliminate these kinds of packets, but neither should you allow the entire bandwidth to be filled with them.
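
The rate limits recommended in Step 5 are configured in the routers themselves, but the underlying idea is easy to picture as a token bucket: a class of traffic may burst briefly, yet its sustained rate is capped. The Python sketch below is purely illustrative; the numbers are assumptions, not recommended settings.

```python
# Toy token-bucket model of a per-class rate limit (e.g., for ICMP).
import time

class TokenBucket:
    def __init__(self, rate_pps, burst):
        self.rate = rate_pps              # tokens replenished per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

icmp_limit = TokenBucket(rate_pps=100, burst=200)   # hypothetical: 100 ICMP packets/s

def handle_icmp_packet(packet):
    return "forward" if icmp_limit.allow() else "drop"   # excess ICMP is discarded
```

Class-based queuing applies the same principle per traffic class, so a flood of ICMP or UDP can never crowd out the rest of the available bandwidth.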

The solution is to use your routers to specify rate limits for such packets. Most routers come with simple mechanisms called class-based queuing (CBQ), which you can use to specify the bandwidth allocation for different classes of packets. You can use these facilities to limit the rates allocated for ICMP, UDP, and other kinds of packets that do not have legitimate reasons to hog all available bandwidth. Assisting Customers during a DDoS Attack It is never wise to test a fire hydrant during a deadly blaze. In a similar manner, every ISP will do well to think through its plans should one of its customers become the target of DDoS attacks. In particular, this will entail full understanding and training of the ISP’s support personnel in as many (preferably all) of the following areas as possible: • Know which upstream providers forward traffic to the ISP. ISP personnel need to be familiar with the various providers with whom the ISP has Internet connections and the specific service level agreements (SLAs) with each, if any. During a DDoS attack, bad traffic will typically flow from one or more of these upstream providers, and the options of an ISP to help its customers will depend on the specifics of its agreements with its upstream providers. • Be able to identify and isolate traffic to a specific provider. Once the customer calls during a DDoS directed at his infrastructure, the ISP should be able to determine the source of the bad traffic. All personnel should be trained in the necessary diagnostics to do so. Customers will typically call with the ISP addresses they see on the attack traffic. While this might not be the actual source of the attack, because of 231


TELECOMMUNICATIONS AND NETWORK SECURITY source spoofing, it should help the ISP in locating which provider is forwarding the bad traffic. • Be able to filter or limit the rate of traffic from a given provider. Often, the ISP will be able to contact the upstream provider to either filter or limit the rate of attack traffic. If the SLA does not allow for this, the ISP can consider applying such a filter at its own router to block the attack traffic. • Have reliable points of contact with each provider. The DDoS response by an ISP is only as good as its personnel and their knowledge of what to do and whom to contact from their upstream providers. Once again, such contacts cannot be cultivated after an attack has occurred. It is better to have these pieces of information in advance. Holding DDoS attack exercises to ensure that people can carry out their duties during such attacks is the best way to make sure that everyone knows what to do to help the customer. Assessing DDoS Technologies Technological solutions to the DDoS problem are intrinsically complex. DDoS attacks are a symptom of the vulnerabilities of the Internet, and a single site is impossible to protect without cooperation from upstream infrastructure. New products are indeed emerging in this field; however, if you are looking to eliminate the problem by buying an affordable rack-mountable panacea that keeps you in a safe cocoon, you are fresh out of luck. Rather than give you a laundry list of all the vendors, I am going to categorize these products somewhat by the problems they solve, their features, and their functionality so that you can compare apples to apples. Still, the comparison can be a difficult one because various products do different things and more vendors are continually entering this emerging, niche market. Protection against Outgoing DDoS Attacks. Unlike virus protection tools, which are very general in focus, these tools are geared just to find DoS worms and scripts. There are basically two kinds of products that you can find here. Host-Based DDoS Protection. Such protection typically prevents hosts from

being taken over as zombies in a DDoS attack. These tools work in one of two major ways: (1) signature analysis, which, like traditional virus scanners, stores a database of known scripts and patterns and scans for known attack programs; and (2) behavior analysis, which monitors key system parameters for the behavior underlying the attacks (rather than the specific attack programs) and aborts the programs and processes that induce the underlying bad behavior.
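
A very small example makes the behavior-analysis idea concrete: rather than matching known attack scripts, watch the host’s own outbound ICMP and UDP rate and raise an alarm when it exceeds a threshold. The sketch below uses Python and scapy; the interface, the one-second window, and the threshold are all assumptions for illustration, not features of the products discussed here.

```python
# Alarm when this host emits an unusually high rate of ICMP/UDP packets.
import time
from scapy.all import sniff, IP, ICMP, UDP, conf, get_if_addr

IFACE = conf.iface
MY_IP = get_if_addr(IFACE)
THRESHOLD_PPS = 500                      # hypothetical alarm threshold

window_start = time.monotonic()
count = 0

def watch_outbound(pkt):
    global window_start, count
    if IP in pkt and pkt[IP].src == MY_IP and (ICMP in pkt or UDP in pkt):
        count += 1
    now = time.monotonic()
    if now - window_start >= 1.0:
        if count > THRESHOLD_PPS:
            print("ALARM: %d outbound ICMP/UDP packets in one second; "
                  "possible zombie activity" % count)
        window_start, count = now, 0

sniff(iface=IFACE, prn=watch_outbound, store=0)   # runs until interrupted
```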


Established vendors of virus scanning products, such as McAfee, Symantec, and others, have extended their purview to include DoS attacks. Other vendors provide behavior-analytic DDoS protection that essentially detects and prevents DDoS behavior emanating from a host. The major problem with host-based DDoS protection, from an ISP’s perspective, is that one cannot force the customers to use such tools or to scan their disks for zombies, etc. Damage-Control Devices. A few recent products (such as Captus’ Captio and Cs3, Inc.’s Reverse Firewall 9,10) focus on containing the harm that DDoS attacks can do in the outgoing direction. They restrict the damage from DDoS to the smallest possible network. These devices can be quite useful in conjunction with host-based scanning tools. Note that the damage-control devices do not actually prevent an infrastructure from becoming compromised; however, they do provide notification that there is bad traffic coming from your network and pinpoint its precise origin. Moreover, they give you time to act by throttling the attack at the perimeter of your network and sending you a notification. ISPs could consider using these devices as insurance to insulate themselves from the damage bad customers can do to them as infrastructure providers. Protection against Incoming Attacks. As we have mentioned before, defending against incoming attacks at a particular site requires cooperation from the upstream infrastructure. This makes DDoS protection products quite complex. Moreover, various vendors have tended to realize the necessary cooperation in very different ways. A full treatment of all of these products is well beyond the scope of this chapter. However, here are several issues you need to consider as an ISP when evaluating these products:

• Are the devices inline or offline? An inline device will add, however minimally, to the latency. Some of the devices are built using hardware in an effort to reduce latency. Offline devices, while they do not have that problem, do not have the full benefit of looking at all the traffic in real-time. This could affect their ability to defend effectively. • Do the devices require infrastructure changes and where do they reside? Some of the devices either replace or deploy alongside existing routers and firewalls. Other technologies require replacement of the existing infrastructure. Some of the devices need to be close to the core routers of the network, while most require placement along upstream paths from the site being protected. • How do the devices detect DDoS attacks and what is the likelihood of false positives? The degree of sophistication of the mechanism of detection and its effectiveness in indicating real attacks is all-important in any security technology. After all, a dog that barks the entire day does protect you from some burglars — but you just might stop listening to its warnings! Most of the techniques use comparisons of actual 233


TELECOMMUNICATIONS AND NETWORK SECURITY traffic to stored profiles of attacks, or “normal” traffic, etc. A variety of signature-based heuristics are applied to detect attacks. The jury is still out on how effective such techniques will be in the long run. • How do the devices know where the attack is coming from? A major problem in dealing effectively with DDoS attacks is to know, with any degree of certainty, the source of the attacks. Because of source address spoofing on the Internet, packets do not necessarily have to originate where they say they do. All the technologies have to figure out is from where in the upstream infrastructure the attack traffic is flowing. It is the routers along the attack path that must cooperate to defend against the attack. Some of the approaches require that their devices communicate in real-time to form an aggregate picture of where the attack is originating. • What is the range of responses the devices will take and are you comfortable with them? Any DDoS defense must minimally stop the attack from reaching the intended victim, thereby preventing the victim’s computing resources from deteriorating or crashing. However, the real challenge of any DDoS defense is to find ways for legitimate customers to get through while penalizing only the attackers. This turns out to be the major technical challenge in this area. The most common response includes trying to install appropriate filters and rate limits to push the attack traffic to the outer edge of the realm of control of these devices. At the present time, all the devices that provide DDoS defense fall into this category. How effective they will be remains to be seen. The products mentioned here are quite pricey even though the technologies are still being tested under fire. DDoS will have to be a very important threat in order for smaller ISPs to feel justified in investing their dollars in these devices. Finally, many of the approaches are proprietary in nature, so side-by-side technical comparisons are difficult to conduct. Some industry publications do seem to have tested some of these devices in various ways. A sampling of vendors and their offerings, applying the above yardsticks, is provided here: • Arbor Networks (www.arbornetworks.com): offline devices, near core routers, anomaly-based detection; source is tracked by communication between devices, and defense is typically the positioning of a filter at a router where the bad traffic enters the network • Asta Networks (www.astanetworks.com): offline devices that work alongside routers within a network and upstream, signature-based detection; source is tracked by upstream devices, and defense is to use filters at upstream routers • Captus Networks (www.captusnetworks.com): inline device used to throttle incoming 234


ISPs and Denial-of-Service Attacks or outgoing attacks; uses windowing to detect non-TCP traffic and does not provide ways for customers to get in; works as a damage-control device for outgoing attacks • Cs3, Inc. (www.cs3-inc.com): inline devices, modified routers, and firewalls; routers mark packets with path information to provide fair service, and firewalls throttle attacks; source of the attack provided by the path information, and upstream neighbors are used to limit attack traffic when requested; Reverse Firewall is a damage-control device for outgoing attacks • Mazu Networks (www.mazunetworks.com): inline devices at key points in network; deviations from stored historical traffic profile indicate attack; the source of the attack is pinpointed by communication between devices, and defense is provided by using filters to block out the bad traffic • Okena (www.okena.com): host-based system that has extended intrusion detection facilities to provide protection against zombies; it is a way to keep one’s infrastructure clean but is not intended to protect against incoming attacks IMPORTANT RESOURCES Finally, the world of DoS, as is indeed the world of Internet security, is dynamic. If your customers are important to you, you should have people that are on top of the latest threats and countermeasures. Excellent resources in the DoS security arena include: • Computer Emergency Response Team (CERT) (www.cert.org): a vast repository of wisdom about all security-related problems with a growing section on DoS attacks; you should monitor this site regularly to find out what you need to know about this area. This site has a very independent and academic flavor. Funded by the Department of Defense, this organization is likely to play an even bigger role in putting out alerts and other information on DDoS. • System Administration, Networking and Security (SANS) Institute (www.sans.org): a cooperative forum in which you can instantly access the expertise of over 90,000 professionals worldwide. It is an organization of industry professionals, unlike CERT. There is certainly a practical orientation to this organization. It offers courses, conferences, seminars, and White Papers on various topics that are well worth the investment. It also provides alerts and analyses on security incidents through incidents.org, a related facility. 235


TELECOMMUNICATIONS AND NETWORK SECURITY Notes 1. Houle, K. and Weaver, G., “Trends in Denial of Service Technology,” CERT Coordination Center, October 2001, http://www.cert.org/archive/pdf/DOS_trends.pdf. 2. Myers, M., “Securing against Distributed Denial of Service Attacks,” Client/Server Connection, Ltd., http://www.cscl.com/techsupp/techdocs/ddossamp.html. 3. Paul, B., “DDOS: Internet Weapons of Mass Destruction,” Network Computing, Jan. 1, 2001, http://www.networkcomputing.com/1201/1201f1c2.html. 4. Harris, S., “Denying Denial of Service,” Internet Security, Sept. 2001, http://www. infosecuritymag.com/articles/september01/cover.shtml. 5. Lemos, R., “DoS Attacks Underscore Net’s Vulnerability,” CNETnews.com, June 1, 2001, http://news.cnet.com/news/0-1003-200-6158264.html?tag=mn_hd. 6. Yankee Group News Releases, Feb. 10, 2000, http://www.yankeegroup.com/webfolder/ yg21a.nsf/press/384D3C49772576EF85256881007DC0EE?OpenDocument. 7. Radin, M.J. et al., “Distributed Denial of Service Attacks: Who Pays?,” Mazu Networks, http://www.mazunetworks.com/radin-es.html. 8. SANS Institute Resources, Intrusion Detection FAQ, Version 1.52, http://www.sans.org/ newlook/resources/IDFAQ/ID_FAQ.htm. 9. Savage, M., “Reverse Firewall Stymies DDOS Attacks,” Computer Reseller News, Dec. 28, 2001, http://www.crn.com/sections/BreakingNews/breakingnews.asp? ArticleID=32305. 10. Desmond, P., “Cs3 Mounts Defense against DDOS Attacks,” eComSecurity.com, Oct. 30, 2001, http://www.ecomsecurity.com/News_2001-10-30_DDos.cfm.

Further Reading Singer, A., “Eight Things that ISPs and Network Managers Can Do to Help Mitigate DDOS Attacks,” San Diego Supercomputer Center, http://security.sdsc.edu/publications/ddos.shtml.

ABOUT THE AUTHOR Dr. K. Narayanaswamy, Ph.D., Chief Technology Officer and co-founder, Cs3, Inc., is an accomplished technologist who has successfully led the company’s research division since inception. He was the principal investigator of several DARPA and NSF research projects that have resulted in the company’s initial software product suite, and leads the company’s current venture into DDoS and Internet infrastructure technology. He has a Ph.D. in Computer Science from the University of Southern California.



Domain 3

Security Management Practices


This domain is typically one of the larger domains in previous Handbooks, and this volume is not dissimilar. It is often said that information security is far more an infrastructure of people and process than technology. The chapters found here truly reflect this reality. In this domain, we find chapters that address the security function within an organization. Much, if not all, of the success of a security program can be attributed to organizational effectiveness, which spans the continuum from how much support executive management lends to the program to how well each employee acts on their individual accountability to carry out the program’s intent. However, we also see that there is no one-size-fits-all solution. Within this domain, we read various opinions on where the security function should report, strategies for partnering with other risk management functions, how to develop and protect an information security budget, methods for encouraging the adoption of security throughout the enterprise, and ways in which people — the most critical resource — can be leveraged to achieve security success. An effective security program must be grounded in clearly stated and communicated policy; however, as is pointed out here, policy development cannot be considered a “one time and you’re done” exercise. Policies have life cycles; and, as the center posts of high-quality security programs, they embody the fundamental principles on which the organization stands. Policies also set expectations for critical overarching issues such as ownership, custodianship, and classification of information; people issues such as setting the expectation of privacy for employees; what is the appropriate use of computing resources; and technical policies for virus protection and electronic mail security. This volume of the Handbook certainly reflects a sign of the times. Although in the past outsourcing security was considered to be taboo, many organizations currently acknowledge that good security professionals are rare, security is far from easy, economies of scale can be recognized, and synergies can be gained. Outsourcing some or all of security administration and security operations is doable — a successful strategy when done properly. Therefore, we feature several viewpoints on contracting with external organizations to manage all or parts of the security function.



Chapter 16

The Human Side of Information Security Kevin Henry, CISA, CISSP

We often hear that people are the weakest link in any security model. That statement brings to mind the old adage that a chain is only as strong as its weakest link. Both of these statements may very well be true; however, they can also be false and misleading. Throughout this chapter we are going to define the roles and responsibilities of people, especially in relation to information security. We are going to explore how people can become our strongest asset and even act as a compensating strength for areas where mechanical controls are ineffective. We will look briefly at the training and awareness programs that can give people the tools and knowledge to increase security effectiveness rather than be regarded as a liability and a necessary evil. THE ROLE OF PEOPLE IN INFORMATION SECURITY First, we must always remember that systems, applications, products, etc., were created for people — not the other way around. As marketing personnel know, the end of any marketing plan is when a product or service is purchased for, and by, a person. All of the intermediate steps are only support and development for the ultimate goal of providing a service that a person is willing, or needs, to purchase. Even though many systems in development are designed to reduce labor costs, streamline operations, automate repetitive processes, or monitor behavior, the system itself will still rely on effective management, maintenance upgrades, and proper use by individuals. Therefore, one of the most critical and useful shifts in perspective is to understand how to get people committed to and knowledgeable about their roles and responsibilities as well as the importance of creating, enforcing, and committing to a sound security program. Properly trained and diligent people can become the strongest link in an organization’s security infrastructure. Machines and policy tend to be static and limited by historical perspectives. People can respond quickly, 0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



SECURITY MANAGEMENT PRACTICES absorb new data and conditions, and react in innovative and emotional ways to new situations. However, while a machine will enforce a rule it does not understand, people will not support a rule they do not believe in. The key to strengthening the effectiveness of security programs lies in education, flexibility, fairness, and monitoring. THE ORGANIZATION CHART A good security program starts with a review of the organization chart. From this administrative tool, we learn hints about the structure, reporting relationships, segregation of duties, and politics of an organization. When we map out a network, it is relatively easy to slot each piece of equipment into its proper place, show how data flows from one place to another, show linkages, and expose vulnerabilities. It is the same with an organization chart. Here we can see the structure of an organization, who reports to whom, whether authority is distributed or centralized, and who has the ability or placement to make decisions — both locally and throughout the enterprise. Why is all of this important? In some cases, it is not. In rare cases, an ideal person in the right position is able to overcome some of the weaknesses of a poor structure through strength or personality. However, in nearly all cases, people fit into their relative places in the organizational structure and are constrained by the limitations and boundaries placed around them. For example, a security department or an emergency planning group may be buried deep within one silo or branch of an organization. Unable to speak directly with decision makers, financial approval teams, or to have influence over other branches, their efforts become more or less philosophical and ineffective. In such an environment the true experts often leave in frustration and are replaced by individuals who thrive on meetings and may have limited vision or goals. DO WE NEED MORE POLICY? Many recent discussions have centered on whether the information security community needs more policy or to simply get down to work. Is all of this talk about risk assessment, policy, roles and responsibilities, disaster recovery planning, and all of the other soft issues that are a part of an information security program only expending time and effort with few results? In most cases, this is probably true. Information security must be a cohesive, coordinated action, much like planning any other large project. A house can be built without a blueprint, but endless copies of blueprints and modifications will not build a house. However, proper planning and methodologies will usually result in a project that is on time, meets customer needs, has a clearly defined budget, stays within its budget, and is almost always run at a lower stress level. As when a home is built, the blueprints almost always change, modifications are done, and, together with 240


the physical work, the administrative effort keeps the project on track and schedules the various events and subcontractors properly.

Many firms have information security programs that are floundering for lack of vision, presentation, and coordination. For most senior managers, information security is a gaping dark hole into which vast amounts of cash are poured with few outcomes except further threats, fear-mongering, and unseen results. To build an effective program requires vision, delegation, training, technical skills, presentation skills, knowledge, and often a thick skin — not necessarily in that order.

The program starts with a vision. What do we want to accomplish? Where would we like to be? Who can lead and manage the program? How can we stay up to date, and how can we do it with limited resources and skills? A vision is the perception we have of the goal we want to reach. A vision is not a fairy tale but a realistic and attainable objective with clearly defined parameters. A vision is not necessarily a roadmap or a listing of each component and tool we want to use; rather, it is a strategy and picture of the functional benefits and results that would be provided by an effective implementation of the strategic vision.

How do we define our vision? This is a part of policy development, adherence to regulations, and risk assessment. Once we understand our security risks, objectives, and regulations, we can begin to define a practical approach to addressing these concerns.

A recent seminar was held with security managers and administrators from numerous agencies and organizations. The facilitator asked the group to define four major technical changes that were on the horizon that would affect their agencies. Even among this knowledgeable group, the response indicated that most were unaware of the emerging technologies. They were knowledgeable about current developments and new products but were unaware of dramatic changes to existing technologies that would certainly have a major impact on their operations and technical infrastructures within the next 18 months.

This is a weakness among many organizations. Strategic planning has been totally overwhelmed by the need to do operational and tactical planning. Operational or day-to-day planning is primarily a response mechanism — how to react to today's issues. This is kindly referred to as crisis management; however, in many cases the debate is whether the managers are managing the crisis or the crisis is managing the managers.

Tactical planning is short- to medium-term planning, sometimes defined as covering a period of up to six months. Tactical planning is
forecasting developments to existing strategies, upgrades, and operational process changes. Tactical planning involves understanding the growth, use, and risks of the environment. Good tactical plans prevent performance impacts from over-utilization of hardware resources, loss of key personnel, and market changes. Once tactical planning begins to falter, the impact is felt on operational activity and planning within a short time frame.

Strategic planning was once called long-term planning, but that is relative to the pace of change and volatility of the environment. Strategic planning is preparing for totally new approaches and technologies. New projects, marketing strategies, new risks, and economic conditions are all a part of a good strategic plan. Strategic planning is looking ahead to entirely new solutions for current and future challenges — seeing the future and how the company or organization can poise itself to be ready to adopt new technologies. A failure to have a strategic plan results in investment in technologies that are outdated, have a short life span, are ineffective, or do not meet the expectations of the users, and it often results in a lack of confidence by senior management (especially from the user groups) in the information technology or security department.

An information security program should not be merely a fire-fighting exercise; yet for many companies, that is exactly what they are busy with. Many system administrators are averaging more than five patch releases a week for the systems for which they are responsible. How can they possibly keep up and test each new patch to ensure that it does not introduce other problems? Numerous patches have been found to contain errors or weaknesses that affect other applications or systems. In October 2001, anti-virus companies were still reporting that the LoveLetter virus was accounting for 2.5 percent of all help desk calls — more than a year after patches were available to prevent infection.1

What has gone wrong? How did we end up in the position we are in today? The problem is that no one person can keep up with this rapidly growing and developing field. Here, therefore, is one of the most critical reasons for delegation: the establishment of the principles of responsibility and accountability in the correct departments and with the proper individuals.

Leadership and placement of the security function is an ongoing and never-to-be-resolved debate. There is not a one-size-fits-all answer; however, the core concern is whether the security function has the influence and authority it needs to fulfill its role in the organization. The role of security is to inform, monitor, lead, and enforce best practice. As we look further at each individual role and responsibility in this
chapter, we will define some methods of passing on information or awareness training.

SECURITY PLACEMENT

The great debate is where the security department should reside within an organization. There are several historical factors that apply to this question. Until recently, physical security was often either outsourced or considered a less-skilled department. That was suitable when security consisted primarily of locking doors and patrolling hallways. Should this older physical security function be merged into the technical and cyber-security group? To use our earlier analogy of security as a chain, with the risk that one weak link may have a serious impact on the entire chain, it is probable that combining the functions of physical and technical security is appropriate. Physical access to equipment presents a greater risk than almost any other vulnerability.

The trend to incorporate security, risk management, business continuity, and sometimes even audit under one group led by a chief risk officer is recognition both of the importance of these various functions and of the need for these groups to work collaboratively to be effective. The position of chief risk officer (CRO) is usually as a member of the senior management team. From this position, the CRO can ensure that all areas of the organization are included in risk management and disaster recovery planning. This position carries a high degree of accountability. The CRO must have a team of diligent and knowledgeable leaders who can identify, assess, analyze, and classify risks, data, legislation, and regulation. They must be able to convince, facilitate, coordinate, and plan so that results are obtained; workable strategies become tactical plans; and all areas and personnel are aware, informed, and motivated to adhere to ethics, best practices, policy, and emergency response.

As with so many positions of authority, and especially in an area where most of the work is administrative, such as audit, business continuity planning, and risk management, the risk of gathering a team of paper pushers and “yes men” is significant. The CRO must resist this risk by encouraging the leaders of the various departments to keep each other sharp, continue raising the bar, and strive for greater value and benefits.

THE SECURITY DIRECTOR

The security director should be able to coordinate the two areas of physical and technical security. This person has traditionally had a law enforcement background, but these days it is important that this person have a good understanding of information systems security. This person ideally should have a certification such as the CISSP (Certified Information
Systems Security Professional, administered by (ISC)2 [www.isc2.org]) and experience in investigation and interviewing techniques. Courses provided by companies like John E. Reid and Associates can be an asset for this position.

ROLES AND RESPONSIBILITIES

The security department must have a clearly defined mandate and reporting structure. All of their work should be coordinated with the legal and human resources departments. In extreme circumstances they should have direct access to the board of directors or another responsible position so that they can operate confidentially anywhere within the organization, including the executive management team. All work performed by security should be kept confidential in order to protect information about ongoing investigations and to avoid erroneously damaging the reputation of an individual or a department.

Security should also be a focal point to which all employees, customers, vendors, and the public can refer questions or threats. When an employee receives an e-mail that they suspect may contain a virus or that alleges a virus is on the loose, they should know to contact security for investigation — and not to send the e-mail to everyone they know to warn them of the perceived threat.

The security department enforces organizational policy and is often involved in the crafting and implementation of policy. As such, they need to ensure that policy is enforceable, understandable, comprehensive, up-to-date, and approved by senior management.

TRAINING AND AWARENESS

The security director has the responsibility of promoting education and awareness as well as staying abreast of new developments, threats, and countermeasures. Association with organizations such as SANS (www.sans.org), ISSA (www.issa.org), and CSI (www.gocsi.org) can be beneficial. There are many other groups and forums available, and the director must ensure that the most valued resources are used to provide alerts, trends, and product evaluation.

The security department must work together with the education and training departments of the organization to be able to target training programs in the most effective possible manner. Training needs to be relevant to the job functions and risks of the attendees. If the training can be imparted in such a way that the attendees are learning the concepts and principles without even realizing how much they have learned, then it is probably ideal. Training is not a “do not do this” activity — ideally, training does not need to only define rules and regulations; rather, training is an
activity designed to instill a concept of best practice and understanding in others. Once people realize the reasons behind a guideline or policy, they will be more inclined toward better standards of behavior than they would be if only pressured into a firm set of rules.

Training should be creative, varied, related to real life, and frequent. Incorporating security training into a ten-minute segment of existing management and staff meetings, and including it as a portion of the new employee orientation process, is often more effective than a day-long seminar once a year.

Using examples can be especially effective. The effectiveness of the training is increased when an actual incident known to the staff can be used as an example of the risks, actions, retribution, and reasoning associated with an action undertaken by the security department. This is often called dragging the wolf into the room. When a wolf has been taking advantage of the farmer, bringing the carcass of the wolf into the open can be a vivid demonstration of the effectiveness of the security program. When there has been an incident or employee misuse, bringing this into the open (in a tactful manner) can be a way to prevent others from making the same mistakes.

Training is not fear mongering. The attitude of the trainers should be to raise the awareness and behavior of the attendees to a higher level, not to explain the rules as if to criminals who had “better behave or else.” This is perhaps the greatest strength of the human side of information security. Machines can be programmed with a set of rules. The machine then enforces these rules mechanically. If someone is able to slightly modify their activity or use a totally new attack strategy, they may be able to circumvent the rules and attack the machine or network. Also — because machines are controlled by people — when employees feel unnecessarily constrained by a rule, they may well disable or find a way to bypass the constraint and leave a large hole in the rule base. Conversely, a security-conscious person may be able to detect an aberration in behavior or even attitude that could be a precursor to an attack that is well below the detection level of a machine.

REACTING TO INCIDENTS

Despite our best precautions and controls, incidents will arise that test the strength of our security programs. Many incidents may be false alarms that can be resolved quickly; however, one of the greatest fears with false alarms is the tendency to become immune to the alarms and turn off the alarm trigger. All alarms should be logged and resolved. This may be done electronically, but it should not be overlooked. Alarm rates can be critical indicators of trends or other types of attacks that may be emerging; they can also be indicators of additional training requirements or of employees attempting to circumvent security controls.

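The discipline just described, logging every alarm, resolving it explicitly, and watching alarm rates for emerging trends, can be illustrated with a short sketch. The Python fragment below is a simplified, hypothetical illustration rather than a description of any particular product; the class names and fields are assumptions made for the example.

    from collections import Counter
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Alarm:
        source: str            # e.g., "vpn-gateway" or "badge-reader-3" (invented names)
        description: str
        raised_at: datetime
        resolved: bool = False
        resolution: str = ""

    class AlarmLog:
        """Record every alarm and require an explicit resolution; nothing is
        silently discarded, even an obvious false alarm."""

        def __init__(self):
            self._alarms = []

        def record(self, source, description):
            alarm = Alarm(source, description, datetime.now())
            self._alarms.append(alarm)
            return alarm

        def resolve(self, alarm, resolution):
            alarm.resolved = True
            alarm.resolution = resolution

        def unresolved(self):
            return [a for a in self._alarms if not a.resolved]

        def weekly_counts_by_source(self):
            """Alarm counts per source per ISO week; a rising count can point to
            an emerging attack, a training gap, or attempts to bypass controls."""
            counts = Counter()
            for a in self._alarms:
                year, week, _ = a.raised_at.isocalendar()
                counts[(a.source, year, week)] += 1
            return counts

    # Example use
    log = AlarmLog()
    alarm = log.record("vpn-gateway", "Five failed logins for user jdoe")
    log.resolve(alarm, "User mistyped a new password; no further action required")
    print(len(log.unresolved()), "alarms still awaiting resolution")
    print(log.weekly_counts_by_source())

Even a log this simple supports the two habits the text calls for: no alarm disappears without a recorded resolution, and per-source counts give an early signal of trends worth investigating.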


One of the tools used by security departments to reduce nuisance or false alarms is the establishment of clipping levels, or thresholds, for alarm activation. The clipping level is the acceptable level of error before triggering the alarm. These are often used for password lockout thresholds and other low-level activity. The establishment of the correct clipping level depends on historical events, the sensitivity of the system, and the granularity of the system security components. Care must be exercised to ensure that clipping levels are not set so high that a low-level attack can be performed without triggering an alarm condition.

Many corporations use a tiered approach to incident response. The initial incident or alarm is recognized by a help desk or low-level technical person. This person logs the alarm and attempts to resolve the alarm condition. If the incident is too complex or risky to be resolved at this level, the technician refers the alarm to a higher-level technical expert or to management. It is important for the experts to routinely review the logs of the alarms captured at the initial point of contact so that they can be assured that the alarms are being handled correctly and so that they can detect relationships between alarms that may be an indication of further problems.

Part of good incident response is communication. To ensure that the incident is handled properly and risk to the corporation is minimized, a manner of distributing the information about the incident needs to be established. Pagers, cell phones, and e-mail can all be effective tools for alerting key personnel. Some of the personnel that need to be informed of an incident include senior management, public relations, legal, human resources, and security.

Incident handling is where a good security team demonstrates its expertise. Proper response will contain the damage; assure customers, employees, and shareholders of adequate preparation and response skills; and provide feedback to prevent future incidents. When investigating an incident, proper care must be taken to preserve the information and evidence collected. The victims or reporting persons should be advised that their report is under investigation.

The security team is also responsible for reviewing past incidents and making recommendations for improvements or better controls to prevent future damage. Whenever a business process is affected and the business continuity plan is enacted, security should ensure that all assets are protected and controls are in place to prevent disruption of recovery efforts.

Many corporations today are using managed security service providers (MSSPs) to monitor their systems. The MSSP accumulates the alarms and notifies the corporation when an alarm or event of significant seriousness occurs. When using an MSSP, the corporation should still have contracted measurement tools to evaluate the appropriateness and effectiveness of
the MSSP's response mechanisms. A competent internal resource must be designated as the contact for the MSSP.

If an incident occurs that requires external agencies or other companies to become involved, a procedure for contacting external parties should be followed. An individual should not contact outside groups without the approval and notification of senior management. Policy must also be developed and monitored regarding recent laws that require an employee to alert police forces to certain types of crimes.

THE IT DIRECTOR — THE CHIEF INFORMATION OFFICER (CIO)

The IT director is responsible for the strategic planning and structure of the IT department. Plans for future systems development, equipment purchase, technological direction, and budgets all start from the office of the IT director. In most cases, the help desk, system administrators, development departments, production support, operations, and sometimes even telecommunications departments are included in the IT director's jurisdiction. The security department should not report to the IT director because this can create a conflict between the need for secure processes and the push to develop new systems. Security can often be perceived as a roadblock for operations and development staff, and having both groups report to the same manager can cause conflict and jeopardize security provisioning.

The IT director usually requires a degree in electrical engineering or computer programming and extensive experience in project planning and implementation. This is important for an understanding of the complexities and challenges of new technologies, project management, and staffing concerns.

The IT director or CIO should sit on the senior management team and be a part of the strategic planning process for the organization. Facilitating business operations and requirements and understanding the direction and technology needs of the corporation are critical to ensuring that a gulf does not develop between IT and the sales, marketing, or production shops. In many cases, corporations have been limited in their flexibility due to the cumbersome nature of legacy systems or poor communications between IT development and other corporate areas.

THE IT STEERING COMMITTEE

Many corporations, agencies, and organizations spend millions of dollars per year on IT projects, tools, staff, and programs and yet do not realize adequate benefits or return on investment (ROI) for the amounts of money spent. In many cases this is related to poor project planning, lack of a structured development methodology, poor requirements definition, lack of foresight for future business needs, or lack of close interaction between
the IT area and the business units. The IT steering committee is composed of leaders from the various business units of the organization and the director of IT. The committee has the final approval for any IT expenditures and project prioritization. All proposed IT projects should be presented to the committee along with a thorough business case and forecast expenditure requirements. The committee then determines which projects are most critical to the organization according to risk, opportunities, staffing availability, costs, and alignment with business requirements. Approval for the projects is then granted.

One of the challenges for many organizations is that the IT steering committee does not follow up on ongoing projects to ensure that they meet their initial requirements, budget, time frames, and performance. IT steering committee members need to be aware of business strategies, technical issues, legal and administrative requirements, and economic conditions. They need the ability to overrule the IT director and cancel or suspend any project that does not provide the functionality required by the users or adequate security, or that is seriously over budget. In such cases the IT steering committee may require a detailed review of the status of the project and reevaluate whether the project is still feasible.

Especially in times of weakening IT budgets, all projects should undergo periodic review and rejustification. Projects that may have been started due to hype or the proverbial bandwagon — “everyone must be E-business or they are out of business” — and do not show a realistic return on investment should be cancelled. Projects that can save money must be accelerated — including in many cases a piecemeal approach to getting the most beneficial portions implemented rapidly. Projects that will result in future savings, better technology, and more market flexibility need to be continued, including projects to simplify and streamline IT infrastructure.

CHANGE MANAGEMENT — CERTIFICATION AND ACCREDITATION

Change management is one of the greatest concerns for many organizations today. In our fast-paced world of rapid development, short time to market, and technological change, change management is the key to ensuring that a “sober second thought” is taken before a change to a system goes into production. Many times, the pressure to make a change rapidly and without a formal review process has resulted in a critical system failure due to inadequate testing or unforeseen technical problems.

There are two sides to change management. The most common definition is that change management is concerned with the certification and accreditation process. This is a control set in place to ensure that all changes that are proposed to an existing system are properly tested, approved, and structured (logically and systematically planned and implemented).

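The certification and accreditation control just described is, at its core, a gate: a change may not be scheduled for production until there is evidence of testing, a user sign-off, and a management sign-off. The sketch below shows one minimal way such a gate could be expressed; the record layout and field names (including the backout plan, which is discussed later in this chapter) are illustrative assumptions, not a prescribed format.

    from dataclasses import dataclass

    @dataclass
    class ChangeRequest:
        change_id: str
        description: str
        test_results_attached: bool = False   # evidence that the change was tested
        certified_by_user: str = ""           # user or business analyst sign-off (certification)
        accredited_by_manager: str = ""       # management sign-off (accreditation)
        backout_plan: str = ""                # how to reverse the change if it fails

    def ready_for_production(change):
        """Return (approved, blockers). The gate simply refuses any change that is
        not tested, certified, accredited, and covered by a backout plan."""
        blockers = []
        if not change.test_results_attached:
            blockers.append("test results")
        if not change.certified_by_user:
            blockers.append("user certification")
        if not change.accredited_by_manager:
            blockers.append("management accreditation")
        if not change.backout_plan:
            blockers.append("backout plan")
        return (len(blockers) == 0, blockers)

    # Example use
    cr = ChangeRequest("CHG-1042", "Add a field to the customer statement table",
                       test_results_attached=True,
                       certified_by_user="j.smith (business analyst)")
    print(ready_for_production(cr))   # (False, ['management accreditation', 'backout plan'])

In practice this record would live in whatever change-tracking system the organization already uses; the point is simply that the gate is explicit and auditable.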


The other aspect of change management comes from the project management and systems development world. When an organization is preparing to purchase or deploy a new system, or modify an existing system, the organization will usually follow a project management framework to control the budget, training, timing, and staffing requirements of the project. It is common (and often expected, depending on the type of development life cycle employed) that such projects will undergo significant changes or decision points throughout the project lifetime. The decision points are times when evaluations of the project are made and a choice to either continue or halt the project may be required. Other changes may be made to a project due to external factors — economic climate, marketing forces, and availability of skilled personnel — or to internal factors such as identification of new user requirements. These changes will often affect the scope of the project (the amount of work required and the deliverables) or timing and budgeting. Changes made to a project in midstream may cause the project to become unwieldy, subject to large financial penalties — especially when dealing with an outsourced development company — or delayed to the point of impacting business operations. In this instance, change management is the team of personnel that will review proposed changes to a project and determine the cutoff for modifications to the project plan. Almost everything we do can be improved, and as the project develops, more ideas and opportunities arise. If uncontrolled, the organization may well be developing a perfect system that never gets implemented. The change control committee must ensure that a time comes when the project timeline and budget are set and followed, and it must refuse to allow further modifications to the project plan — often saving these ideas for a subsequent version or release.

Change management requires that all changes to hardware, software, documentation, and procedures are reviewed by a knowledgeable third party prior to implementation. Even the smallest change to a configuration table or the attachment of a new piece of equipment can cause catastrophic failures to a system. In some cases a change may open a security hole that goes unnoticed for an extended period of time. Changes to documentation should also be subject to change management so that all documents in use are the same version, the documentation is readable and complete, and all programs and systems have adequate documentation. Furthermore, copies of critical documentation need to be kept off site in order to be available in the event of a major disaster or loss of access to the primary location.

Certification

Certification is the review of the system from a user perspective. The users review the changes and ensure that the changes will meet the original business requirements outlined at the start of the project or that they will be compatible with existing policy, procedures, or business objectives.

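Because certification asks whether the change still meets the original business requirements, it helps to trace each requirement to the test evidence gathered for it. The following sketch is a hypothetical illustration of that comparison; the requirement identifiers, wording, and outcomes are invented for the example.

    def certification_gaps(business_requirements, test_results):
        """Compare the original business requirements against the test evidence
        gathered during certification. Returns the requirements that have no
        passing test, which the users would have to resolve before sign-off."""
        gaps = {}
        for req_id, requirement in business_requirements.items():
            outcome = test_results.get(req_id)
            if outcome != "pass":
                gaps[req_id] = (requirement, outcome or "not tested")
        return gaps

    # Example use
    requirements = {
        "R1": "Customer statements must show the closing balance",
        "R2": "Access to account notes is restricted to assigned staff",
    }
    results = {"R1": "pass"}   # R2 was never exercised
    print(certification_gaps(requirements, results))
    # {'R2': ('Access to account notes is restricted to assigned staff', 'not tested')}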


The other user group involved is the security department. They need to review the system to ensure that it is adequately secured from threats or risks. In this they will need to consider the sensitivity of the data within the system or that the system protects, the reliance of the business process on the system (availability), regulatory requirements such as data protection or storage (archival) time, and documentation and user training.

Accreditation

Once a system has been certified by the users, it must undergo accreditation. This is the final approval by management to permit the system, or the changes to a component, to move into production. Management must review the changes to the system in the context of its operational setting. They must evaluate the certification reports, the recommendations from security regarding whether the system is adequately secured and meets user requirements, and the proposed implementation timetable. This may include accepting the residual risks that could not be addressed in a cost-effective manner.

Change management is often handled by a committee of business analysts, business unit directors, and security and technical personnel. They meet regularly to approve implementation plans and schedules. Ideally, no change will go into production unless it has been thoroughly inspected and approved by this committee. The main exceptions to this, of course, are changes required to correct system failures. To repair a major failure, a process of emergency change management must be established. The greatest concern with emergency changes is ensuring that the correct follow-up is done to ensure that the changes are complete, documented, and working correctly. In the case of volatile information such as marketing programs, inventory, or newsflashes, the best approach is to keep the information stored in tables or other logically separated areas so that these changes (which may not be subject to change management procedures) do not affect the core system or critical functionality.

TECHNICAL STANDARDS COMMITTEE

Total cost of ownership (TCO) and keeping up with new or emerging tools and technologies are areas of major expenditure for most organizations today. New hardware and software are continuously marketed. In many cases a new operating system may be introduced before the organization has completed the rollout of the previous version. This often means supporting three versions of software simultaneously. Often this has resulted in the inability of personnel still using the older version of the software to read internal documents generated under the newer version. Configurations of desktops or other hardware can be different, making support
and maintenance complex. Decisions have to be made about which new products to purchase — laptops instead of desktops, the minimum standards for a new machine, or the type of router or network component. All of these decisions are expensive and require a long-term view of what is coming onto the horizon.

The technical standards committee is an advisory committee and should provide recommendations (usually to the IT steering committee or another executive-level committee) for the purchase, strategy, and deployment of new equipment, software, and training. The members of the technical standards committee must be aware of the products currently available as well as the emerging technologies that may affect the viability of current products or purchases. No organization wants to make a major purchase of a software or hardware product that will be incompatible with other products the organization already has or will require within the next few months or years. The members of the technical standards committee should consist of a combination of visionaries, technical experts, and strategic business planners. Care should be taken to ensure that the members of this committee do not become unreasonably influenced by, or restricted to, one particular vendor or supplier.

Central procurement is a good principle of security management. Often when an organization is spread out geographically, there is a tendency for each department to purchase equipment independently. Organizations lose control over standards and may end up with incompatible VPNs, difficult maintenance and support, loss of savings that may have been available through bulk purchases, cumbersome disaster recovery planning through the need to communicate with many vendors, and loss of inventory control. Printers and other equipment become untraceable and may be subject to theft or misuse by employees. One organization recently found that tens of thousands of dollars' worth of equipment had been stolen by an employee; the organization had never even realized the equipment was missing. Unfortunately for the employee, a relationship breakdown caused an angry partner to report the employee to corporate security.

THE SYSTEMS ANALYST

There are several definitions for a systems analyst. Some organizations may use the term senior analyst when the person works in the IT development area; other organizations use the term to describe the person responsible for systems architecture or configuration.

In the IT development shop, the systems analyst plays a critical role in the development and leadership of IT projects and the maintenance of IT systems. The systems analyst may be responsible for chairing or sitting on project development teams, working with business analysts to determine the functional requirements for a system, writing high-level project
requirements for use by programmers to write code, enforcing coding standards, coordinating the work of a team of programmers and reviewing their work, overseeing production support efforts, and working on incident handling teams.

The systems analyst is usually trained in computer programming and project management skills. The systems analyst must have the ability to review a system and determine its capabilities, weaknesses, and workflow processes.

The systems analyst should not have access to change production data or programs. This is important to ensure that they cannot inadvertently or maliciously change a program or organizational data. Without such controls, the analyst may be able to introduce a Trojan horse, circumvent change control procedures, and jeopardize data integrity.

Systems analysts in a network or overall systems environment are responsible for ensuring that secure and reliable networks or systems are developed and maintained. They are responsible for ensuring that the networks or systems are constructed with no unknown gaps or backdoors, that there are few single points of failure, that configurations and access control procedures are set up, and that audit trails and alarms are monitored for violations or attacks. This systems analyst usually requires a technical college diploma and extensive in-depth training. Knowledge of system components, such as the firewalls in use by the organization, tools, and incident handling techniques, is required. Most often, the systems analyst in this environment will have the ability to set up user profiles, change permissions, change configurations, and perform high-level utilities such as backups or database reorganizations. This creates a control weakness that is difficult to overcome. In many cases the only option an organization has is to trust the person in this position. Periodic reviews of their work and proper management controls are some of the only compensating controls available. The critical problem for many organizations is ensuring that this position is properly backed up with trained personnel and thorough documentation, and that this person does not become technically stagnant or begin to become sloppy about security issues.

THE BUSINESS ANALYST

The business analyst fills one of the most critical roles in the information management environment. A good business analyst has an excellent understanding of the business operating environment, including new trends, marketing opportunities, technological tools, and current process strengths, needs, and weaknesses, and is also a good team member. The business
analyst is responsible for representing the needs of the users to the IT development team. The business analyst must clearly articulate the functional requirements of a project early on in the project life cycle in order to ensure that information technology resources, money, personnel, and time are expended wisely and that the final result of an IT project meets user needs, provides adequate security and functionality, and embraces controls and separation of duties. Once outlined, the business analyst must ensure that these requirements are addressed and documented in the project plan.

The business analyst is then responsible for setting up test scenarios to validate the performance of the system and verify that the system meets the original requirements definitions. When testing, the business analyst should ensure that test scenarios and test cases have been developed to address all recognized risks. Test data should be sanitized to prevent disclosure of private or sensitive information, and test runs of programs should be carefully monitored to prevent test data and reports from being introduced into the real-world production environment. Tests should include out-of-range tests, where numbers larger or smaller than the data fields are attempted and invalid data formats are tried. The purpose of the tests is to try to see if it is possible to make the system fail. Proper test data is designed to stress the limitations of the system, the edit checks, and the error handling routines so that the organization can be confident that the system will not fail or handle data incorrectly once in production.

The business analyst is often responsible for providing training and documentation to the user groups. In this regard, all methods of access, use, and functionality of the system from a user perspective should be addressed. One area that has often been overlooked has been the assignment of error handling and security functionality. The business analyst must ensure that these functions are also assigned to reliable and knowledgeable personnel once the system has gone into production.

The business analyst is responsible for reviewing system tests and approving the change as the certification portion of the change management process. If a change needs to be made to production data, the business analyst will usually be responsible for preparing or reviewing the change and approving the timing and acceptability of the change prior to its implementation. This is a proper segregation of duties, whereby the person actually making the change in production — whether it is the operator, programmer, or other user — is not the same person who reviews and approves the change. This helps prevent both human error and malicious changes.

Once in production, the business analyst is often the second tier of support for the user community. Here they are responsible for checking on inconsistencies, errors, or unreliable processing by the system. They will often
have a method of creating trouble tickets or system failure notices for the development and production support groups to investigate or act upon.

Business analysts are commonly chosen from the user groups. They must be knowledgeable in the business operations and should have good communication and teamwork skills. Several colleges offer courses in business analysis, and education in project management can also be beneficial. Because business analysts are involved in defining the original project functional requirements, they should also be trained in security awareness and requirements. Through a partnership with security, business analysts can play a key role in ensuring that adequate security controls are included in the system requirements.

THE PROGRAMMER

This chapter is not intended to outline all of the responsibilities of a programmer. Instead, it is focused on the security components and risks associated with this job function. The programmer, whether in a mainframe, client/server, or Web development area, is responsible for preparing the code that will fulfill the requirements of the users. In this regard, the programmer needs to adhere to principles that will provide reliable, secure, and maintainable programs without compromising the integrity, confidentiality, or availability of the data.

Poorly written code is the source of almost all buffer overflow attacks. Because of inadequate bounds checking, parameter validation, or error handling, a program can accept data that exceeds its acceptable range or size, thereby creating a memory or privilege overflow condition. This is a potential hole either for an attacker to exploit or to cause system problems due to simple human error during a data input function. Programs need to be properly documented so that they are maintainable, and so that the users (usually business analysts) reviewing the output can have confidence that the program handles the input data in a consistent and reliable manner.

Programmers should never have access to production data or libraries. Several firms have experienced problems due to a disgruntled programmer introducing logic bombs into programs or manipulating production data for their own benefit. Any changes to a program should be reviewed and approved by a business analyst and moved into production by another group or department (such as operators), and not by the programmer directly. This practice was established during the mainframe era but has been slow to be enforced on newer Web-based development projects. This has meant that several businesses have learned the hard way about proper segregation of duties and the protection it provides a firm. Often when a program requires frequent updating, such as a Web site, the placement of
the changeable data into tables that can be updated by the business analysts or user groups is desirable.

One of the greatest challenges for a programmer is to include security requirements in the programs. A program is primarily written to address functional requirements from a user perspective, and security can often be perceived as a hindrance or obstacle to the fast execution and accessibility of the program. The programmer needs to consider the sensitivity of the data collected or generated by the program and provide secure program access, storage, and audit trails. Access controls are usually set up at the initiation of the program, and user IDs, passwords, and privilege levels are checked when the user first logs on to the system or program. Most programs these days have multiple access paths to information — text commands, GUI icons, and drop-down menus are some of the common access methods. A programmer must ensure that all access methods are protected and that the user is unable to circumvent security by accessing the data through another channel or method. The programmer needs training in security and risk analysis. The work of a programmer should also be subject to peer review by other systems analysts or programmers to ensure that quality and standard programming practices have been followed.

THE LIBRARIAN

The librarian was a job function established in a mainframe environment. In many cases the duties of the librarian have now been incorporated into the job functions of other personnel such as system administrators or operators. However, it is important to describe the functions performed by a librarian and ensure that these tasks are still performed and included in the performance criteria and job descriptions of other individuals.

The librarian is responsible for the handling of removable media — tapes, disks, and microfiche; the control of backup tapes and their movement to off-site or near-line storage; the movement of programs into production; and source code control. In some instances the librarian is also responsible for system documentation and report distribution. The librarian's duties need to be described, assigned, and followed. Movement of tapes to off-site storage should be done systematically, with proper handling procedures, secure transport methods, and proper labeling. When reports are generated, especially those containing sensitive data, the librarian must ensure that the reports are distributed to the correct individuals and that no pages are attached in error to other print jobs. For this reason, it is a good practice to restrict other personnel's access to the main printers.

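The media-handling duties described above, particularly the systematic, labeled, and authorized movement of tapes to off-site storage, lend themselves to a simple log. The sketch below is an assumed, minimal structure for such a log, not a description of any specific product; the labels and names in the example are invented.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MediaMovement:
        label: str              # the volume label physically attached to the tape or disk
        destination: str        # e.g., "off-site vault" or "near-line storage"
        courier: str
        authorized_by: str
        sent_at: datetime
        received_confirmation: str = ""   # signed acknowledgment from the destination

    class MediaLog:
        """Track every piece of removable media leaving the computer room, so that
        nothing moves without labeling, authorization, and a confirmed receipt."""

        def __init__(self):
            self.movements = []

        def dispatch(self, label, destination, courier, authorized_by):
            if not label or not authorized_by:
                raise ValueError("media must be labeled and the movement authorized")
            move = MediaMovement(label, destination, courier, authorized_by, datetime.now())
            self.movements.append(move)
            return move

        def confirm_receipt(self, move, signed_by):
            move.received_confirmation = signed_by

        def outstanding(self):
            """Media dispatched but never confirmed received, a prompt for follow-up."""
            return [m for m in self.movements if not m.received_confirmation]

    # Example use
    log = MediaLog()
    move = log.dispatch("BKP-2023-117", "off-site vault", "courier service", "librarian on duty")
    print([m.label for m in log.outstanding()])   # ['BKP-2023-117'] until receipt is confirmed
    log.confirm_receipt(move, "vault operator, receipt #4471")
    print(log.outstanding())                      # []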


The librarian accepts the certified and accredited program changes and moves them into production. These changes should always include a backout plan in case of program or system problems. The librarian should take a backup copy of all programs or tables subject to change prior to moving the new code into production. A librarian should always ensure that all changes are properly approved prior to making a change. Librarians should not be permitted to make changes to programs or tables; they should only enact the changes prepared and approved by other personnel. Librarians also need to be inoculated against social engineering or pressure from personnel attempting to make changes without going through the proper approval process.

THE OPERATOR

The operator plays a key role in information systems security. No one has greater access or privileges than the operator. The operator can be a key contributor to system security or a gaping hole in a security program. The operator is responsible for the day-to-day operations, job flow, and often the scheduling of the system maintenance and backup routines. As such, an operator is in a position that may have a serious impact on system performance or integrity in the event of human error, job-sequencing mistakes, processing delays, or errors in backup execution and timing. The operator also plays a key role in incident handling and error recovery. The operator should log all incidents, abnormal conditions, and job completions so that they can be tracked and acted upon, and so that they provide input for corrective action. Proper tracking of job performance, storage requirements, file size, and database activity provides valuable input for forecasting requirements for new equipment and for identifying system performance issues and job inefficiencies before they become serious processing impairments. The operator should never make changes to production programs or tables except where the changes have been properly approved and tested by other personnel. In the event of a system failure, the operator should have a response plan in place to notify key personnel.

THE SYSTEM OWNER AND THE DATA OWNER

History has taught us that information systems are not owned by the information technology department, but rather by the user group that depends on the system. The system owner therefore is usually the senior manager in the user department. For a financial system this may be the vice president of finance; for a customer support system, the vice president of sales. The IT department then plays the role of supporting the user group and responding to the needs of the user. Proper ownership and control of systems may prevent the development of systems that are technically sound but of little use to the users. Recent studies have shown that
the gap between user requirements and system functionality was a serious detriment to business operations. In fact, several government departments have had to discard costly systems that required years of development because they were found to be inadequate to meet business needs.2

The roles of system owner and data owner may be separate or combined, depending on the size and complexity of the system. The system owner is responsible for all changes and improvements to a system, including decisions regarding the overall replacement of a system. The system owner sits on the IT steering committee, usually as chair, and provides input, prioritization, budgeting, and high-level resource allocation for system maintenance and development. This should not conflict with the role of the IT director and project leaders, who are responsible for the day-to-day operations of production support activity, development projects, and technical resource hiring and allocation. The system owner also oversees the accreditation process that determines when a system change is ready for implementation. This means the system owner must be knowledgeable about new technologies, risks, threats, regulations, and market trends that may impact the security and integrity of a system.

The responsibility of the data owner is to monitor the sensitivity of the data stored or processed by a system. This includes determining the appropriate levels of information classification, access restrictions, and user privileges. The data owner should establish or approve the process for granting access to new users, increasing access levels for existing users, and removing access in a timely manner for users who no longer require access as a part of their job duties. The data owner should require an annual report of all system users and determine whether the level of access each user has is appropriate. This should include a review of special access methods such as remote access, wireless access, reports received, and ad hoc requests for information.

Because these duties are incidental to the main functions of the persons acting as data or system owners, it is incumbent upon these individuals to closely monitor these responsibilities while delegating certain functions to other persons. The ultimate responsibility for accepting the risks associated with a system rests with the system and data owners.

THE USER

All systems development, changes, modifications, and daily operations are to be completed with the objective of addressing user requirements. The user is the person who must interact daily with the system and relies on the system to continue business operations. A system that is not designed correctly may lead to a high incidence of user errors, high training costs or extended learning curves, poor performance and frustration, and overly restrictive controls or security measures. Once
users notice these types of problems, they will often either attempt to circumvent the security controls or other functionality that they find unnecessarily restrictive, or they will abandon the use of the system altogether. Training for a user must include the proper use of the system and the reasons for the various controls and security parameters built into the system. Without divulging the details of the controls, explaining the reasons for the controls may help the users to accept and adhere to the security restrictions built into the system.

GOOD PRINCIPLES — EXPLOITING THE STRENGTHS OF PERSONNEL IN REGARD TO A SECURITY PROGRAM

A person should never be disciplined for following correct procedures. This may sound ridiculous, but it is a common weakness exploited by people as a part of social engineering. Millions of dollars' worth of security will be worthless if our staff is not trained to resist and report all social engineering attempts. Investigators have found that the easiest way to gather corporate information is through bribery or relationships with employees. There are four main types of social engineering: intimidation, helpfulness, technical, and name-dropping.

The principle of intimidation is the threat of punishment or ridicule for following correct procedures. The person being “engineered” is bullied by the attacker into granting an exception to the rules — perhaps due to position within the company or force of character. In many instances the security-minded person is berated by the attacker, threatened with discipline or loss of employment, or otherwise intimidated by a person for just trying to do their job. Some of the most serious breaches of secure facilities have been accomplished through these techniques. In one instance the chief financial officer of a corporation refused to comply with the procedure of wearing an ID card. When challenged by a new security person, the executive explained in a loud voice that he should never again be challenged to display an ID card. Such intimidation unnerved the security person to the point of making the entire security procedure ineffective and arbitrary. Such a “tone at the top” indicates a lack of concern for security that will soon permeate through the entire organization.

Helpfulness is another form of social engineering, appealing to the natural instinct of most people to want to provide help or assistance to another person. One of the most vulnerable areas for this type of manipulation is the help desk. Help desk personnel are responsible for password resets, remote access problem resolution, and system error handling. Improper handling of these tasks may result in an attacker getting a password reset for another legitimate user's account, creating either a security gap or a denial-of-service for the legitimate user.

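One common countermeasure is to give help desk personnel a fixed verification script for password resets, so that helpfulness never overrides procedure. The checks below are hypothetical examples of what such a script might contain; an actual list should come from the organization's own policy rather than from this sketch.

    # Hypothetical verification script for a help desk password reset. The caller
    # must pass every check; any failure routes the request to a callback at the
    # number already on file rather than an immediate reset.

    VERIFICATION_CHECKS = [
        "Call received on a published help desk number, not an inbound transfer",
        "Employee ID and department match the directory entry",
        "Answer to the registered challenge question is correct",
        "Callback placed to the telephone number already on file for the account",
    ]

    def approve_reset(answers):
        """answers: dict mapping each check to True or False as recorded by the agent."""
        failed = [check for check in VERIFICATION_CHECKS if not answers.get(check)]
        if failed:
            return False, "Escalate; verification incomplete: " + "; ".join(failed)
        return True, "Reset approved; log the ticket and notify the account owner"

    # Example use
    recorded = {check: True for check in VERIFICATION_CHECKS}
    recorded[VERIFICATION_CHECKS[3]] = False       # callback not yet performed
    print(approve_reset(recorded))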


Despite the desires of users, the help desk, and administrators to facilitate the access of legitimate users to the system, they must be trained to recognize social engineering and follow established secure procedures.

Name-dropping is another form of social engineering and is often facilitated by press releases, Web page ownership or administrator information, discarded corporate documentation, or other ways that an attacker can learn the names of individuals responsible for research, business operations, administrative functions, or other key roles. By using the names of these individuals in conversation, a hacker can appear to be a legitimate user or to have a legitimate affiliation with the corporation. It has been said that “the greater the lie, the easier it is to convince someone that it is true.” This especially applies to a name-dropping type of attack. Even with prior knowledge of a manager's behavior, a subordinate may be influenced into performing some task at the request of an attacker, even though the manager would never have contemplated or approved such a request.

Technology has provided new forms of social engineering. Now an attacker may e-mail or fax a request to a corporation for information and receive a response that compromises security. The request may come from a person alleging to represent law enforcement or some other government department demanding cooperation or assistance. The correct response must be to have an established manner of contact for outside agencies and to train all personnel to route requests for information from an outside source through proper channels.

All in all, the key to immunizing personnel against social-engineering attacks is to emphasize the importance of procedure, the correctness of following and enforcing security protocols, and the support of management for personnel who resist any actions that attempt to circumvent proper controls and may be an instance of social engineering. All employees must know that they will never lose their job for enforcing corporate security procedures.

JOB ROTATION

Job rotation is an important principle from a security perspective, although it is often seen as a detriment by project managers. Job rotation moves key personnel through the various functional roles in a department or even between departments. This provides several benefits, such as cross-training of key personnel and reducing the risk that a system will be left without trained personnel during vacations or illnesses. Job rotation also serves to identify possible fraudulent activity or shortcuts taken by personnel who have been in the job for an extended time period. In one instance, a corporation needed to take disciplinary action against an employee who was the administrator for a system critically important not only to the business but also to the community. Because this administrator
had sole knowledge of the system and the system administrator password, they were unable to take action in a timely manner. They were forced to delay any action until the administrator left for vacation and gave the password to a backup person. When people stay in a position too long, they may become more attached to the system than to the corporation, and their activity and judgment may become impaired.

ANTI-VIRUS AND WEB-BASED ATTACKS

The connectivity of systems and the proliferation of Web-based attacks have resulted in significant damage to corporate systems, increased expenses, and productivity losses. Many people recognize the impact of Code Red and Nimda; however, even when these attacks were taken out of the calculations, the incidence of Web-based attacks rose more than 79 percent in 2001.3 Some studies have documented more attacks in the first two months of 2002 than were detected in the previous year and a half.4

Users have heard many times not to open e-mail attachments; however, this has not prevented many infections and security breaches from happening. More sophisticated attacks — all of which can appear to come from trusted sources — are appearing, and today's firewalls and anti-virus products are not able to protect an organization adequately. Instead, users need to be more diligent and confirm with the sender that an attachment was sent intentionally before opening it. The use of instant messaging, file sharing, and other products, many of which exploit open ports or VPN tunnels through firewalls, is creating even more vulnerabilities. The use of any new technology or product should be subject to analysis and review by security before the users adopt it. This requires the security department to react swiftly to requests from users and to be aware of the new trends, technologies, and threats that are emerging.

SEGREGATION OF DUTIES

The principle of segregation of duties breaks an operation into separate functions so that no one person can control a process from initiation through to completion. Instead, a transaction would require one person to input the data, a second person to review and reconcile the batch totals, and another person (or perhaps the first individual) to confirm the final portion of the transaction. This is especially critical in financial transactions or error handling procedures.

SUMMARY

This is neither a comprehensive list of all the security concerns and ways to train and monitor the people in our organizations, nor is it a full list
of all job roles and functions. Hopefully it is a tool that managers, security personnel, and auditors can use to review some of the procedures they have in place and create a better security infrastructure.

The key objective of this chapter is to identify the primary roles that people play in the information security environment. A security program is only as good as the people implementing it, and a key realization is that tools and technology are not enough when it comes to protecting our organizations. We need to enlist the support of every member of our companies. We need to see the users, administrators, managers, and auditors as partners in security. Much of this is accomplished through understanding. When the users understand why we need security, the security people understand the business, and everyone respects the role of the other departments, then the atmosphere and environment will lead to greater security, confidence, and trust.

References

1. www.viruslist.com, as reported in SC Infosecurity magazine, December 2001, p. 12.
2. www.oregon.gov, Secretary of State audit of the Public Employees Benefit Board; see also the California Department of Motor Vehicles report on abandoning its new system.
3. Flisi, Claudia, "Cyber Security," Newsweek, March 18, 2002.
4. Etisalat Academy, March 2002.

ABOUT THE AUTHOR

Kevin Henry, CISA, CISSP, has over 20 years of experience in telecommunications, computer programming and analysis, and information systems auditing. Kevin is an accomplished and highly respected presenter at many conferences and training sessions, and he serves as a lead instructor for the (ISC)2 Common Body of Knowledge Review for candidates preparing for the CISSP examination.



Chapter 17

Security Management
Ken Buszta, CISSP

It was once said, “Information is king.” In today's world, this statement has never rung more true. As a result, information is now viewed as an asset, and organizations are willing to invest large sums of money toward its protection. Unfortunately, organizations appear to be overlooking one of the weakest links for protecting their information — the information security management team. The security management team is the one component in our strategy that can ensure our security plan is working properly and take corrective actions when necessary. In this chapter, we will address the benefits of an information security team, the various roles within the team, job separation, job rotation, and performance metrics for the team, including certifications.

SECURITY MANAGEMENT TEAM JUSTIFICATION

Information technology departments have always had to justify their budgets. With the recent global economic changes, the pressures of maintaining stockholder values have brought IT budgets under even more intense scrutiny. Migrations, new technology implementations, and even staff spending have either been delayed, reduced, or removed from budgets. So how is it that an organization can justify the expense, much less the existence, of an information security management team? While most internal departments lack the necessary skill sets to address security, there are three compelling reasons to establish this team:

1. Maintain competitive advantage. An organization exists to provide a specialized product or service for its clients. The methodologies and trade secrets used to provide these services and products are the assets that establish our competitive advantage. An organization's failure to properly protect and monitor these assets can result not only in the loss of competitive advantage but also in lost revenues and the possible failure of the organization.


SECURITY MANAGEMENT PRACTICES 2. Protection of the organization’s reputation. In early 2000, several highprofile organizations’ Web sites were attacked. As a result, the public’s confidence was shaken in their ability to adequately protect their clients. A security management team will not be able to guarantee or fully prevent this from happening, but a well-constructed team can minimize the opportunities made available from your organization to an attacker. 3. Mandates by governmental regulations. Regulations within the United States, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA) and those abroad, such as the European Convention on Cybercrime, have mandated that organizations protect their data. An information security management team, working with the organization’s legal and auditing teams, can focus on ensuring that proper safeguards are utilized for regulatory compliance. EXECUTIVE MANAGEMENT AND THE IT SECURITY MANAGEMENT RELATIONSHIP The first and foremost requirement to help ensure the success of an information security management team relies on its relationship with the organization’s executive board. Commencing with the CEO and then working downward, it is essential for the executive board to support the efforts of the information security team. Failure of the executive board to actively demonstrate its support for this group will gradually become reflected within the rest of the organization. Apathy toward the information security team will become apparent, and the team will be rendered ineffective. The executive board can easily avoid this pitfall by publicly signing and adhering to all major information security initiatives such as security policies. INFORMATION SECURITY MANAGEMENT TEAM ORGANIZATION Once executive management has committed its support to an information security team, a decision must be made as to whether the team should operate within a centralized or decentralized administration environment. In a centralized environment, a dedicated team is assigned the sole responsibility for the information security program. These team members will report directly to the information security manager. Their responsibilities include promoting security throughout the organization, implementing new security initiatives, and providing daily security administration functions such as access control. In a decentralized environment, the members of the team have information security responsibilities in addition to those assigned by their departments. These individuals may be network administrators or reside in such departments as finance, legal, human resources, or production. 264


Security Management This decision will be unique to each organization. Organizations that have identified higher risks deploy a centralized administration function. A growing trend is to implement a hybrid solution utilizing the best of both worlds. A smaller dedicated team ensures that new security initiatives are implemented and oversees the overall security plan of the organization, while a decentralized team is charged with promoting security throughout their departments and possibly handling the daily department-related administrative tasking. The next issue that needs to be addressed is how the information security team will fit into the organization’s reporting structure. This is a decision that should not be taken lightly because it will have a long-enduring effect on the organization. It is important that the organization’s decision makers fully understand the ramifications of this decision. The information security team should be placed where its function has significant power and authority. For example, if the information security manager reports to management that does not support the information security charter, the manager’s group will be rendered ineffective. Likewise, if personal agendas are placed ahead of the information security agenda, it will also be rendered ineffective. An organization may place the team directly under the CIO or it may create an additional executive position, separate from any particular department. Either way, it is critical that the team be placed in a position that will allow it to perform its duties. ROLES AND RESPONSIBILITIES When planning a successful information security team, it is essential to identify the roles, rather than the titles, that each member shall perform. Within each role, their responsibilities and authority must be clearly communicated and understood by everyone in the organization. Most organizations can define a single process, such as finance, under one umbrella. There is a manager, and there are direct reports for every phase of the financial life cycle within that department. The information security process requires a different approach. Regardless of how centralized we try to make it, we cannot place it under a single umbrella. The success of the information security team is therefore based on a layered approach. As demonstrated in Exhibit 17-1, the core of any information security team lies with the executive management because they are ultimately responsible to the investors for the organization’s success or failure. As we delve outward into the other layers, we see there are roles for which an information security manager does not have direct reports, such as auditors, technology providers, and the end-user community, but he still has an accountability report from or to each of these members. 265


[Exhibit 17-1 (figure). Layers of information security management team: executive management at the core, surrounded by layers representing information security management, data owners, custodians, process owners, technology providers, IS professionals, IS auditors, and users.]

It is difficult to provide a generic approach to fit everyone’s needs. However, regardless of the structure, organizations need to assign securityrelated functions corresponding to the selected employees’ skill sets. Over time, eight different roles have been identified to effectively serve an organization: 1. Executive management. The executive management team is ultimately responsible for the success (or failure) of any information security program. As stated earlier, without their active support, the information security team will struggle and, in most cases, fail in achieving their charter. 2. Information security professionals. These members are the actual members trained and experienced in the information security arena. They are responsible for the design, implementation, management, and review of the organization’s security policy, standards, measures, practices, and procedures. 3. Data owners. Everyone within the organization can serve in this role. For example, the creator of a new or unique data spreadsheet or document can be considered the data owner of that file. As such, they are responsible for determining the sensitivity or classification levels of the data as well as maintaining the accuracy and integrity of the data while it resides in the system. 4. Custodians. This role may very well be the most under-appreciated of all. Custodians act as the owner’s delegate, with their primary 266


focus on backing up and restoring the data. The data owners dictate the schedule at which the backups are performed. Additionally, custodians run the system for the owners and must ensure that the required security controls are applied in accordance with the organization’s security policies and procedures.
5. Process owners. These individuals ensure that appropriate security, consistent with the organization’s security policy, is embedded in the information systems.
6. Technology providers. These are the organization’s subject matter experts for a given set of information security technologies; they assist the organization with its implementation and management.
7. Users. Because almost every member of the organization is a user of the information systems, users are responsible for adhering to the organization’s security policies and procedures. Their most vital responsibility is maintaining the confidentiality of all usernames and passwords, including the program upon which these are established.
8. Information systems auditor. The auditor is responsible for providing independent assurance to management on the appropriateness of the security objectives and on whether the security policies, standards, measures, practices, and procedures are appropriate and comply with the organization’s security objectives. Because of the responsibility this role has in the information security program, organizations may place this role’s reporting line in the auditing department rather than within the information security department.

SEPARATION OF DUTIES AND THE PRINCIPLE OF LEAST PRIVILEGE While it may be necessary for some organizations to have a single individual serve in multiple security roles, each organization will want to consider the possible effects of this decision. By empowering one individual, it is possible for them to manipulate the system for personal reasons without the organization’s knowledge. As such, an information security practice is to maintain a separation of duties. Under this philosophy, pieces of a task are assigned to several people. By clearly identifying the roles and responsibilities, an organization will be able to also implement the Principle of Least Privilege. This idea supports the concept that the users and the processes in a system should have the least number of privileges and for the shortest amount of time needed to perform their tasks. For example, the system administrator’s role may be broken into several different functions to limit the number of people with complete control. One person may become responsible for the system administration, a second person for the security administration, and a third person for the operator functions. 267
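The role split described above can be made concrete in an access-control layer. The following sketch is illustrative only; the role names and permission strings are hypothetical, not drawn from this chapter. It shows one way to encode separate system-administrator, security-administrator, and operator roles so that no single role holds every privilege, in keeping with separation of duties and least privilege.

# Illustrative sketch: least-privilege role definitions enforcing separation of duties.
# Role names and permission strings are hypothetical examples.

ROLE_PERMISSIONS = {
    "system_admin":   {"install_software", "start_stop_system", "manage_accounts", "run_backups"},
    "security_admin": {"set_clearances", "set_labels", "review_audit_log"},
    "operator":       {"mount_media", "manage_printers"},
}

def is_authorized(user_roles, permission):
    """Grant a permission only if one of the user's roles explicitly includes it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

def satisfies_separation_of_duties(user_roles):
    """Flag users who hold both the system and security administration roles."""
    return not ({"system_admin", "security_admin"} <= set(user_roles))

# Example: an operator cannot review audit data, and combining both
# administrator roles in one person is reported as a policy violation.
print(is_authorized(["operator"], "review_audit_log"))                     # False
print(satisfies_separation_of_duties(["system_admin", "security_admin"]))  # False -> violation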


Typical system administrator/operator functions include:
• Installing system software
• Starting up and shutting down the system
• Adding and removing system users
• Performing backups and recovery
• Mounting disks and tapes
• Handling printers

Typical security administrator functions include:
• Setting user clearances, initial passwords, and other security clearances for new users, and changing security profiles for existing users
• Setting or changing sensitivity file labels
• Setting security characteristics of devices and communication channels
• Reviewing audit data

The major benefit of both of these principles is that they provide a two-person control process that limits the potential damage to an organization: personnel would be forced into collusion in order to manipulate the system.

JOB ROTATION

Arguably, training may provide the biggest challenge to management, and many view it as a double-edged sword. On one edge, training is viewed as an expense and is one of the first areas cut when budgets shrink; this may leave the organization with stale skill sets and disgruntled employees. On the other edge, it is not unusual for an employee to absorb as much training from an organization as possible and then leave for a better opportunity. Where does management draw the line?

One method to address this issue is job rotation. By routinely rotating the job a person is assigned to perform, we can provide cross-training to the employees. This process provides the team members with higher skill sets and increased self-esteem, and it provides the organization with backup personnel in the event of an emergency.

From the information security point of view, job rotation has its benefits. Because an individual does not perform the same job functions for an extended period, job rotation disrupts any collusion that might otherwise take root despite the separation of duties. Further, the designation of additionally trained workers adds to the personnel readiness of the organization’s disaster recovery plan.

PERFORMANCE METRICS

Each department within an organization is created with a charter or mission statement. While the goals for each department should be clearly defined and communicated, the tools that we use to measure a department’s performance against these goals are not always as clearly defined, particularly in the case of information security. It is vital to determine a set of metrics by which to measure the team’s effectiveness. Depending upon the metrics collected, the results may be used for several different purposes, such as:
• Financial. Results may be used to justify existing or increased future budget levels.
• Team competency. A metric, such as certification, may be employed to demonstrate to management and the end users the knowledge of the information security team members. Additional metrics may include authorship and public speaking engagements.
• Program efficiency. As the department’s responsibilities increase, its ability to handle these demands while limiting new hiring can be beneficial in times of economic uncertainty.

While in the metric planning stages, the information security manager may consider asking for assistance from the organization’s auditing team. The auditing team can provide independent verification of the metric results to both the executive management team and the information security department. Additionally, by getting the auditing department involved early in the process, it can assist the information security department in defining its metrics and the tools used to obtain them.

Determining performance metrics is a multi-step process. In the first step, the department must identify its process for metric collection. Among the questions an organization may consider in this identification process are:
• Why do we need to collect the statistics?
• What statistics will we collect?
• How will the statistics be collected?
• Who will collect the statistics?
• When will these statistics be collected?

The second step is for the organization to identify the functions that will be affected. The functions are measured as time, money, and resources. The resources can be quantified as personnel, equipment, or other assets of the organization. The third step requires the department to determine the drivers behind the collection process. In the information security arena, the two drivers that affect the department’s ability to respond in a timely manner are the number of system users and the number of systems within its jurisdiction. The more systems and users an organization has, the larger the information security department. 269


[Exhibit 17-2 (chart). Users administered by information security department: number of users, from 0 to 1000, plotted by year from 1995 through 2001.]

With these drivers in mind, executive management could rely on the following metrics for a better understanding of the department’s accomplishments and budget justifications:
• Total systems managed
• Total remote systems managed
• User administration, including additions, deletions, and modifications
• User awareness training
• Average response times

For example, Exhibit 17-2 shows an increase in the number of system users over time. This chart alone could demonstrate the efficiency of the department as it handles more users with the same number of resources. Exhibit 17-3 shows an example of the average information security response times. Upon review, we are clearly able to see an upward trend in the response times. This chart, taken by itself, may raise concerns for senior management regarding the information security team’s abilities. However, when this metric is used in conjunction with the metrics found in Exhibit 17-2, a justification could be made to increase the information security personnel budget. While it is important for these metrics to be gathered on a regular basis, it is even more important for this information to be shared with the appropriate parties. For example, by sharing performance metrics within the department, the department will be able to identify its strong and weak areas. The information security manager will also want to share these results with the executive management team through a formal annual review and evaluation of the metrics.
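As a minimal illustration of the kind of metric reporting discussed above, the sketch below computes two of the suggested measures from hypothetical yearly data: users administered per staff member (the efficiency trend of Exhibit 17-2) and average response time against a service-level target (the trend of Exhibit 17-3). All figures and field names are invented for the example.

# Illustrative sketch: computing simple information security performance metrics.
# All data values below are hypothetical.

yearly_stats = [
    # (year, users administered, security staff, avg response hours, SLA target hours)
    (1999, 400, 3, 6.0, 8.0),
    (2000, 700, 3, 9.5, 8.0),
    (2001, 950, 3, 14.0, 8.0),
]

def report(stats):
    for year, users, staff, avg_rt, sla in stats:
        users_per_staff = users / staff
        sla_note = "within SLA" if avg_rt <= sla else "exceeds SLA"
        print(f"{year}: {users_per_staff:.0f} users per staff member, "
              f"average response {avg_rt:.1f} h ({sla_note})")

report(yearly_stats)
# A rising users-per-staff ratio alongside response times that exceed the SLA
# is the sort of evidence the chapter suggests using to justify additional staff.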


[Exhibit 17-3 (chart). Average information security response times: average hours, from 0 to 20, plotted by year from 1995 through 2001, with two series labeled ART and SLA.]

CERTIFICATIONS Using the various certification programs available is an effective tool for management to enhance the confidence levels in its security program while providing the team with recognition for its experience and knowledge. While there are both vendor-centric and vendor-neutral certifications available in today’s market, we will focus only on the latter. (Note: The author does not endorse any particular certification program.) Presently there is quite a debate about which certification is best. This is a hard question to answer directly. Perhaps the more important question is, “What does one want to accomplish in their career?” If based upon this premise, certification should be tailored to a set of objectives and therefore is a personal decision. Certified Information Systems Security Professional (CISSP) The CISSP Certification is an independent and objective measure of professional expertise and knowledge within the information security profession. Many regard this certification as an information security management certification. The credential, established over a decade ago, requires the candidate to have three years’ verifiable experience in one or more of the ten domains in the Common Body of Knowledge (CBK) and pass a rigorous exam. The CBK, developed by the International Information Systems Security Certification Consortium (ISC)2, established an international standard for IS security professionals. The CISSP multiple-choice certification examination covers the following ten domains of the CBK: Domain 1: Access Control Systems and Methodology Domain 2: Telecommunications and Network Security Domain 3: Security Management Practices Domain 4: Applications and Systems Development Security 271


Domain 5: Cryptography
Domain 6: Security Architecture and Models
Domain 7: Operations Security
Domain 8: Business Continuity Planning (BCP) and Disaster Recovery Planning (DRP)
Domain 9: Law, Investigations and Ethics
Domain 10: Physical Security

More information on this certification can be obtained by contacting (ISC)2 through its e-mail address, [email protected].

Systems Security Certified Practitioner (SSCP)

The SSCP certification focuses on information systems security practices, roles, and responsibilities defined by experts from major industries. Established in 1998, it provides network and systems security administrators with independent and objective measures of competence and recognition as a knowledgeable information systems security practitioner. Certification is only available to those individuals who have at least one year’s experience in the CBK, subscribe to the (ISC)2 Code of Ethics, and pass the 125-question SSCP certification examination, based on seven CBK knowledge areas:
1. Access Controls
2. Administration
3. Audit and Monitoring
4. Risk, Response and Recovery
5. Cryptography
6. Data Communications
7. Malicious Code/Malware

GIAC

In 1999, the SANS (System Administration, Networking, and Security) Institute founded the Global Information Assurance Certification (GIAC) Program to address the need to validate the skills of security professionals. The GIAC certification provides assurance that a certified individual holds an appropriate level of knowledge and skill necessary for a practitioner in key areas of information security. This is accomplished through a twofold process: practitioners must pass a multiple-choice exam and then complete a practical exam to demonstrate their ability to apply their knowledge. GIAC certification programs include:
• GIAC Security Essentials Certification (GSEC). GSEC graduates have the knowledge, skills, and abilities to incorporate good information security practice in any organization. The GSEC tests the essential knowledge and skills required of any individual with security responsibilities within an organization.


Security Management • GIAC Certified Firewall Analyst (GCFW). GCFWs have the knowledge, skills, and abilities to design, configure, and monitor routers, firewalls, and perimeter defense systems. • GIAC Certified Intrusion Analyst (GCIA). GCIAs have the knowledge, skills, and abilities to configure and monitor intrusion detection systems and to read, interpret, and analyze network traffic and related log files. • GIAC Certified Incident Handler (GCIH). GCIHs have the knowledge, skills, and abilities to manage incidents; to understand common attack techniques and tools; and to defend against or respond to such attacks when they occur. • GIAC Certified Windows Security Administrator (GCWN). GCWNs have the knowledge, skills and abilities to secure and audit Windows systems, including add-on services such as Internet Information Server and Certificate Services. • GIAC Certified UNIX Security Administrator (GCUX). GCUXs have the knowledge, skills and abilities to secure and audit UNIX and Linux systems. • GIAC Information Security Officer (GISO). GISOs have demonstrated the knowledge required to handle the Security Officer responsibilities, including overseeing the security of information and information resources. This combines basic technical knowledge with an understanding of threats, risks, and best practices. Alternately, this certification suits those new to security who want to demonstrate a basic understanding of security principles and technical concepts. • GIAC Systems and Network Auditor (GSNA). GSNAs have the knowledge, skills, and abilities to apply basic risk analysis techniques and to conduct a technical audit of essential information systems. Certified Information Systems Auditor (CISA) CISA is sponsored by the Information Systems and Audit Control Association (ISACA) and tests a candidate’s knowledge of IS audit principles and practices, as well as technical content areas. It is based on the results of a practice analysis. The exam tests one process and six content areas (domains) covering those tasks that are routinely performed by a CISA. The process area, which existed in the prior CISA practice analysis, has been expanded to provide the CISA candidate with a more comprehensive description of the full IS audit process. These areas are as follows: • • • • • •

• Process-based area (domain): the IS audit process
• Content areas (domains), beginning with:
• Management, planning, and organization of IS
• Technical infrastructure and operational practices
• Protection of information assets


SECURITY MANAGEMENT PRACTICES • Disaster recovery and business continuity • Business application system development, acquisition, implementation, and maintenance • Business process evaluation and risk management For more information, contact ISACA via e-mail: [email protected]. CONCLUSION The protection of the assets may be driven by financial concerns, reputation protection, or government mandate. Regardless of the reason, wellconstructed information security teams play a vital role in ensuring organizations are adequately protecting their information assets. Depending upon the organization, an information security team may operate in a centralized or decentralized environment; but either way, the roles must be clearly defined and implemented. Furthermore, it is crucial to develop a set of performance metrics for the information security team. The metrics should look to identify issues such as budgets, efficiencies, and proficiencies within the team. References Hutt, Arthur E. et al., Computer Security Handbook, 3rd ed., John Wiley & Sons, Inc., New York, 1995. International Information Systems Security Certification Consortium (ISC)2, www.isc2.org. Information Systems and Audit Control Association (ISACA), www.isaca.org. Kabay, Michel E., The NCSA Guide to Enterprise Security: Protecting Information Assets, McGrawHill, New York, 1996. Killmeyer Tudor, Jan, Information Security Architecture: An Integrated Approach to Security in the Organization, Auerbach Publications, Boca Raton, FL, 2001. Kovacich, Gerald L., Information Systems Security Officer’s Guide: Establishing and Managing an Information Protection Program, Butterworth-Heinemann, Massachusetts, 1998. Management Planning Guide for Information Systems Security Auditing, National State Auditors Association and the United States General Accounting Office, 2001. Russell, Deborah and Gangemi, G.T. Sr., Computer Security Basics, O’Reilly & Associates, Inc., California, 1991. System Administration, Networking, and Security (SANS) Institute, www.sans.org. Stoll, Clifford, The Cuckoo’s Egg, Doubleday, New York, 1989 Wadlow, Thomas A., The Process of Network Security: Designing and Managing a Safe Network, Addison-Wesley, Massachusetts, 2000.

ABOUT THE AUTHOR Ken Buszta, CISSP, has more than ten years of IT experience and six years of InfoSec experience. He served in the U.S. Navy’s intelligence community before entering the consulting field in 1994. Should you have any questions or comments, he can be reached at [email protected]. 274


Chapter 18

The Common Criteria for IT Security Evaluation Debra S. Herrmann

This chapter introduces the Common Criteria (CC) by: • Describing the historical events that led to their development • Delineating the purpose and intended use of the CC and, conversely, situations not covered by the CC • Explaining the major concepts and components of the CC methodology and how they work • Discussing the CC user community and stakeholders • Looking at the future of the CC HISTORY The Common Criteria, referred to as “the standard for information security,”1 represent the culmination of a 30-year saga involving multiple organizations from around the world. The major events are discussed below and summarized in Exhibit 18-1. A common misperception is that computer and network security began with the Internet. In fact, the need for and interest in computer security or COMPUSEC have been around as long as computers. Likewise, the Orange Book is often cited as the progenitor of the CC; actually, the foundation for the CC was laid a decade earlier. One of the first COMPUSEC standards, DoD 5200.28-M,2 Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems, was issued in January 1973. An amended version was issued June 1979.3 DoD 5200.28-M defined the purpose of security testing and evaluation as:2 • To develop and acquire methodologies, techniques, and standards for the analysis, testing, and evaluation of the security features of ADP systems 0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC


Exhibit 18-1. Timeline of events leading to the development of the CC.
• 1/73, U.S. DoD: DoD 5200.28M, ADP Computer Security Manual — Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource Sharing ADP Systems
• 6/79, U.S. DoD: DoD 5200.28M, ADP Computer Security Manual — Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource Sharing ADP Systems, with 1st Amendment
• 8/83, U.S. DoD: CSC-STD-001–83, Trusted Computer System Evaluation Criteria, National Computer Security Center (TCSEC or Orange Book)
• 12/85, U.S. DoD: DoD 5200.28-STD, Trusted Computer System Evaluation Criteria, National Computer Security Center (TCSEC or Orange Book)
• 7/87, U.S. DoD: NCSC-TG-005, Version 1, Trusted Network Interpretation of the TCSEC, National Computer Security Center (TNI, part of Rainbow Series)
• 8/90, U.S. DoD: NCSC-TG-011, Version 1, Trusted Network Interpretation of the TCSEC, National Computer Security Center (TNI, part of Rainbow Series)
• 1990, ISO/IEC: JTC1 SC27 WG3 formed
• 3/91, U.K. CESG: UKSP01, UK IT Security Evaluation Scheme: Description of the Scheme, Communications–Electronics Security Group
• 4/91, U.S. DoD: NCSC-TG-021, Version 1, Trusted DBMS Interpretation of the TCSEC, National Computer Security Center (part of Rainbow Series)
• 6/91, European Communities: Information Technology Security Evaluation Criteria (ITSEC), Version 1.2, Office for Official Publications of the European Communities (ITSEC)
• 11/92, OECD: Guidelines for the Security of Information Systems, Organization for Economic Cooperation and Development
• 12/92, U.S. NIST and NSA: Federal Criteria for Information Technology Security, Version 1.0, Volumes I and II (Federal Criteria)
• 1/93, Canadian CSE: The Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), Canadian System Security Centre, Communications Security Establishment, Version 3.0e (CTCPEC)
• 6/93, CC Sponsoring Organizations: CC Editing Board established (CCEB)
• 12/93, ECMA: Secure Information Processing versus the Concept of Product Evaluation, Technical Report ECMA TR/64, European Computer Manufacturers’ Association (ECMA TR/64)
• 1/96, CCEB: Committee draft 1.0 released (CC)
• 1/96 to 10/97: Public review, trial evaluations
• 10/97, CCIMB: Committee draft 2.0 beta released (CC)
• 11/97, CEMEB: CEM-97/017, Common Methodology for Information Technology Security, Part 1: Introduction and General Model, Version 0.6 (CEM Part 1)
• 10/97 to 12/99, CCIMB with ISO/IEC JTC1 SC27 WG3: Formal comment resolution and balloting (CC)
• 8/99, CEMEB: CEM-99/045, Common Methodology for Information Technology Security Evaluation, Part 2: Evaluation Methodology, v1.0 (CEM Part 2)
• 12/99, ISO/IEC: ISO/IEC 15408, Information technology — Security techniques — Evaluation criteria for IT security, Parts 1–3 released (CC Parts 1–3)
• 12/99 forward, CCIMB: Respond to requests for interpretations (RIs), issue final interpretations, incorporate final interpretations (CC)
• 5/00, Multiple: Common Criteria Recognition Agreement signed (CCRA)
• 8/01, CEMEB: CEM-2001/0015, Common Methodology for Information Technology Security Evaluation, Part 2: Evaluation Methodology, Supplement: ALC_FLR — Flaw Remediation, v1.0 (CEM Part 2 supplement)


Exhibit 18-2. Summary of Orange Book trusted computer system evaluation criteria (TCSEC) divisions.
• A — Verified protection: A1 — Verified design (highest degree of trust)
• B — Mandatory protection: B3 — Security domains; B2 — Structured protection; B1 — Labeled security protection
• C — Discretionary protection: C2 — Controlled access protection; C1 — Discretionary security protection
• D — Minimal protection: D1 — Minimal protection (lowest degree of trust)

• To assist in the analysis, testing, and evaluation of the security features of ADP systems by developing factors for the Designated Approval Authority concerning the effectiveness of measures used to secure the ADP system in accordance with Section VI of DoD Directive 5200.28 and the provisions of this Manual • To minimize duplication and overlapping effort, improve the effectiveness and economy of security operations, and provide for the approval and joint use of security testing and evaluation tools and equipment As shown in the next section, these goals are quite similar to those of the Common Criteria. The standard stated that the security testing and evaluation procedures “will be published following additional testing and coordination.”2 The result was the publication of CSC-STD-001–83, the Trusted Computer System Evaluation Criteria (TCSEC),4 commonly known as the Orange Book, in 1983. A second version of this standard was issued in 1985.5 The Orange Book proposed a layered approach for rating the strength of COMPUSEC features, similar to the layered approach used by the Software Engineering Institute (SEI) Capability Maturity Model (CMM) to rate the robustness of software engineering processes. As shown in Exhibit 18-2, four evaluation divisions composed of seven classes were defined. Division A class A1 was the highest rating, while division D class D1 was the lowest. The divisions measured the extent of security protection provided, with each class and division building upon and strengthening the provisions of its predecessors. Twenty-seven specific criteria were evaluated. These criteria were grouped into four categories: security policy, accountability, assurance, and documentation. The Orange Book also introduced the concepts of a reference monitor, formal security policy model, trusted computing base, and assurance. The Orange Book was oriented toward custom software, particularly defense and intelligence applications, operating on a mainframe computer 278


The Common Criteria for IT Security Evaluation that was the predominant technology of the time. Guidance documents were issued; however, it was difficult to interpret or apply the Orange Book to networks or database management systems. When distributed processing became the norm, additional standards were issued to supplement the Orange Book, such as the Trusted Network Interpretation and the Trusted Database Management System Interpretation. Each standard had a different color cover, and collectively they became known as the Rainbow Series. In addition, the Federal Criteria for Information Technology Security was issued by NIST and NSA in December 1992, but it was short-lived. At the same time, similar developments were proceeding outside the United States. Between 1990 and 1993, the Commission of the European Communities, the European Computer Manufacturers Association (ECMA), the Organization for Economic Cooperation and Development (OECD), the U.K. Communications–Electronics Security Group, and the Canadian Communication Security Establishment (CSE) all issued computer security standards or technical reports. These efforts and the evolution of the Rainbow Series were driven by three main factors:6 1. The rapid change in technology, which led to the need to merge communications security (COMSEC) and computer security (COMPUSEC) 2. The more universal use of information technology (IT) outside the defense and intelligence communities 3. The desire to foster a cost-effective commercial approach to developing and evaluating IT security that would be applicable to multiple industrial sectors These organizations decided to pool their resources to meet the evolving security challenge. ISO/IEC Joint Technical Committee One (JTC1) Subcommittee 27 (SC27) Working Group Three (WG3) was formed in 1990. Canada, France, Germany, the Netherlands, the United Kingdom, and the United States, which collectively became known as the CC Sponsoring Organizations, initiated the CC Project in 1993, while maintaining a close liaison with ISO/IEC JTC1 SC27 WG3. The CC Editing Board (CCEB), with the approval of ISO/IEC JTC1 SC27 WG3, released the first committee draft of the CC for public comment and review in 1996. The CC Implementation Management Board (CCIMB), again with the approval of ISO/IEC JTC1 SC27 WG3, incorporated the comments and observations gained from the first draft to create the second committee draft. It was released for public comment and review in 1997. Following a formal comment resolution and balloting period, the CC were issued as ISO/IEC 15408 in three parts: • ISO/IEC 15408-1(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 1: Introduction and general model 279


SECURITY MANAGEMENT PRACTICES • ISO/IEC 15408-2(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 2: Security functional requirements • ISO/IEC 15408-3(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 3: Security assurance requirements Parallel to this effort was the development and release of the Common Evaluation Methodology, referred to as the CEM or CM, by the Common Evaluation Methodology Editing Board (CEMEB): • CEM-97/017, Common Methodology for Information Technology Security Evaluation, Part 1: Introduction and General Model, v0.6, November 1997 • CEM-99/045, Common Methodology for Information Technology Security Evaluation, Part 2: Evaluation Methodology, v1.0, August 1999 • CEM-2001/0015, Common Methodology for Information Technology Security Evaluation, Part 2: Evaluation Methodology, Supplement: ALC_FLR — Flaw Remediation, v1.0, August 2001 As the CEM becomes more mature, it too will become an ISO/IEC standard. PURPOSE AND INTENDED USE The goal of the CC project was to develop a standardized methodology for specifying, designing, and evaluating IT products that perform security functions which would be widely recognized and yield consistent, repeatable results. In other words, the goal was to develop a full life-cycle, consensus-based security engineering standard. Once this was achieved, it was thought, organizations could turn to commercial vendors for their security needs rather than having to rely solely on custom products that had lengthy development and evaluation cycles with unpredictable results. The quantity, quality, and cost effectiveness of commercially available IT security products would increase; and the time to evaluate them would decrease, especially given the emergence of the global economy. There has been some confusion that the term IT product only refers to plug-and-play commercial off-the-shelf (COTS) products. In fact, the CC interprets the term IT product quite broadly, to include a single product or multiple IT products configured as an IT system or network. The standard lists several items that are not covered and considered out of scope:7 • Administrative security measures and procedural controls • Physical security • Personnel security 280


The Common Criteria for IT Security Evaluation • Use of evaluation results within a wider system assessment, such as certification and accreditation (C&A) • Qualities of specific cryptographic algorithms Administrative security measures and procedural controls generally associated with operational security (OPSEC) are not addressed by the CC/CEM. Likewise, the CC/CEM does not define how risk assessments should be conducted, even though the results of a risk assessment are required as an input to a PP.7 Physical security is addressed in a very limited context — that of restrictions on unauthorized physical access to security equipment and prevention of and resistance to unauthorized physical modification or substitution of such equipment.6 Personnel security issues are not covered at all; instead, they are generally handled by assumptions made in the PP. The CC/CEM does not address C&A processes or criteria. This was specifically left to each country and/or government agency to define. However, it is expected that CC/CEM evaluation results will be used as input to C&A. The robustness of cryptographic algorithms, or even which algorithms are acceptable, is not discussed in the CC/CEM. Rather, the CC/CEM limits itself to defining requirements for key management and cryptographic operation. Many issues not handled by the CC/CEM are covered by other national and international standards. MAJOR COMPONENTS OF THE METHODOLOGY AND HOW THEY WORK The three-part CC standard (ISO/IEC 15408) and the CEM are the two major components of the CC methodology, as shown in Exhibit 18-3. The CC Part 1 of ISO/IEC 15408 provides a brief history of the development of the CC and identifies the CC sponsoring organizations. Basic concepts and terminology are introduced. The CC methodology and how it corresponds to a generic system development lifecycle is described. This information forms the foundation necessary for understanding and applying Parts 2 and 3. Four key concepts are presented in Part 1: • • • •

• Protection Profiles (PPs)
• Security Targets (STs)
• Targets of Evaluation (TOEs)
• Packages

A Protection Profile, or PP, is a formal document that expresses an implementation-independent set of security requirements, both functional and assurance, for an IT product that meets specific consumer needs.7 The process of developing a PP helps a consumer to elucidate, define, and validate their security requirements, the end result of which is used to (1) communicate these requirements to potential developers and (2) provide a foundation 281


Exhibit 18-3. Major components of the CC/CEM.
I. The Common Criteria
• ISO/IEC 15408 Part 1: terminology and concepts; description of the CC methodology; history of development; CC sponsoring organizations
• ISO/IEC 15408 Part 2: catalog of security functional classes, families, components, and elements
• ISO/IEC 15408 Part 3: catalog of security assurance classes, families, components, and elements; definition of standard EAL packages
II. The Common Evaluation Methodology
• CEM-97/017 Part 1: terminology and concepts; description of the CEM; evaluation principles and roles
• CEM-99/045 Part 2: standardized application and execution of CC Part 3 requirements; evaluation tasks, activities, and work units
• CEM-2001/0015 Part 2 Supplement: flaw remediation

from which a security target can be developed and an evaluation conducted. A Security Target, or ST, is an implementation-dependent response to a PP that is used as the basis for developing a TOE. In other words, the PP specifies security functional and assurance requirements, while an ST provides a design that incorporates security mechanisms, features, and functions to fulfill these requirements. A Target of Evaluation, or TOE, is an IT product, system, or network and its associated administrator and user guidance documentation that is the subject of an evaluation.7-9 A TOE is the physical implementation of an ST. There are three types of TOEs: monolithic, component, and composite. A monolithic TOE is self-contained; it has no higher or lower divisions. A 282


The Common Criteria for IT Security Evaluation component TOE is the lowest-level TOE in an IT product or system; it forms part of a composite TOE. In contrast, a composite TOE is the highest-level TOE in an IT product or system; it is composed of multiple component TOEs. A package is a set of components that are combined together to satisfy a subset of identified security objectives.7 Packages are used to build PPs and STs. Packages can be a collection of functional or assurance requirements. Because they are a collection of low-level requirements or a subset of the total requirements for an IT product or system, packages are intended to be reusable. Evaluation assurance levels (EALs) are examples of predefined packages. Part 2 of ISO/IEC 15408 is a catalog of standardized security functional requirements, or SFRs. SFRs serve many purposes. They7-9 (1) describe the security behavior expected of a TOE, (2) meet the security objectives stated in a PP or ST, (3) specify security properties that users can detect by direct interaction with the TOE or by the TOE’s response to stimulus, (4) counter threats in the intended operational environment of the TOE, and (5) cover any identified organizational security policies and assumptions. The CC organizes SFRs in a hierarchical structure of security functionality: • • • •

• Classes
• Families
• Components
• Elements

Eleven security functional classes, 67 security functional families, 138 security functional components, and 250 security functional elements are defined in Part 2. Exhibit 18-4 illustrates the relationship between classes, families, components, and elements. A class is a grouping of security requirements that share a common focus; members of a class are referred to as families.7 Each functional class is assigned a long name and a short three-character mnemonic beginning with an “F.” The purpose of the functional class is described and a structure diagram is provided that depicts the family members. ISO/IEC 15408-2 defines 11 security functional classes. These classes are lateral to one another; there is no hierarchical relationship among them. Accordingly, the standard presents the classes in alphabetical order. Classes represent the broadest spectrum of potential security functions that a consumer may need in an IT product. Classes are the highest-level entity from which a consumer begins to select security functional requirements. It is not expected that a single IT product will contain SFRs from all classes. Exhibit 18-5 lists the security functional classes. 283
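This class/family/component/element hierarchy lends itself to a simple data model. The sketch below is an illustration only, not an excerpt from ISO/IEC 15408: the identifiers follow the standard CC notation, but the requirement text shown is paraphrased, and the structure is a minimal assumption about how one might represent the hierarchy of Exhibit 18-4 in code.

# Illustrative sketch of the CC hierarchy: class -> family -> component -> element.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    identifier: str      # e.g., "FIA_UAU.2.1"
    text: str            # the indivisible requirement statement

@dataclass
class Component:
    identifier: str      # e.g., "FIA_UAU.2"; if it has several elements, all must be used
    elements: List[Element] = field(default_factory=list)

@dataclass
class Family:
    mnemonic: str        # e.g., "FIA_UAU"
    components: List[Component] = field(default_factory=list)

@dataclass
class FunctionalClass:
    mnemonic: str        # e.g., "FIA"
    name: str
    families: List[Family] = field(default_factory=list)

# Example instance; the element text below is paraphrased, not quoted from the standard.
fia = FunctionalClass(
    "FIA", "Identification and authentication",
    [Family("FIA_UAU", [Component("FIA_UAU.2",
        [Element("FIA_UAU.2.1",
                 "Users must be authenticated before any other TSF-mediated action")])])])

print(len(fia.families), len(fia.families[0].components))  # 1 1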


[Exhibit 18-4 (figure). Relationship between classes, families, components, and elements: a class contains one or more families, each family contains one or more components, and each component contains one or more elements.]

Exhibit 18-5. Functional security classes.
• FAU, Security audit: Monitor, capture, store, analyze, and report information related to security events
• FCO, Communication: Assure the identity of originators and recipients of transmitted information; non-repudiation
• FCS, Cryptographic support: Management and operational use of cryptographic keys
• FDP, User data protection: Protect (1) user data and the associated security attributes within a TOE and (2) data that is imported, exported, and stored
• FIA, Identification and authentication: Ensure unambiguous identification of authorized users and the correct association of security attributes with users and subjects
• FMT, Security management: Management of security attributes, data, and functions and definition of security roles
• FPR, Privacy: Protect users against discovery and misuse of their identity
• FPT, Protection of the TSF: Maintain the integrity of the TSF management functions and data
• FRU, Resource utilization: Ensure availability of system resources through fault tolerance and the allocation of services by priority
• FTA, TOE access: Controlling user session establishment
• FTP, Trusted path/channels: Provide a trusted communication path between users and the TSF and between the TSF and other trusted IT products


[Exhibit 18-6 (figure). Standard notation for classes, families, components, and elements: a functional requirement identifier is built from the two-letter class code (following the "F" prefix), a three-letter family code, a one-digit component number, and a one-digit element number.]

A functional family is a grouping of SFRs that share security objectives but may differ in emphasis or rigor. The members of a family are referred to as components.7 Each functional family is assigned a long name and a three-character mnemonic that is appended to the functional class mnemonic. Family behavior is described. Hierarchics or ordering, if any, between family members is explained. Suggestions are made about potential OPSEC management activities and security events that are candidates to be audited. Components are a specific set of security requirements that are constructed from elements; they are the smallest selectable set of elements that can be included in a Protection Profile, Security Target, or a package.7 Components are assigned a long name and described. Hierarchical relationships between one component and another are identified. The short name for components consists of the class mnemonic, the family mnemonic, and a unique number. An element is an indivisible security requirement that can be verified by an evaluation, and it is the lowest-level security requirement from which components are constructed.7 One or more elements are stated verbatim for each component. Each element has a unique number that is appended to the component identifier. If a component has more than one element, all of them must be used. Dependencies between elements are listed. Elements are the building blocks from which functional security requirements are specified in a protection profile. Exhibit 18-6 illustrates the standard CC notation for security functional classes, families, components, and elements. Part 3 of ISO/IEC 15408 is a catalog of standardized security assurance requirements or SARs. SARs define the criteria for evaluating PPs, STs, and TOEs and the security assurance responsibilities and activities of developers and evaluators. The CC organize SARs in a hierarchical structure of security assurance classes, families, components, and elements. Ten security assurance classes, 42 security assurance families, and 93 security assurance components are defined in Part 3. 285
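The naming convention just described (and its assurance counterpart, shown later in Exhibit 18-8) can be unpacked mechanically. The following sketch is illustrative and not part of the standard: it splits a requirement identifier such as FIA_UAU.2.1, or an assurance element identifier such as ADV_FSP.1.2E, into its class prefix, class code, family code, component number, element number, and optional action type.

# Illustrative sketch: decomposing a CC requirement identifier into its parts.
# Identifier layout assumed here: prefix letter (F or A), two-letter class code,
# three-letter family code, component number, element number, optional D/C/E action code.
import re

PATTERN = re.compile(
    r"^(?P<prefix>[FA])(?P<class_code>[A-Z]{2})_(?P<family>[A-Z]{3})"
    r"\.(?P<component>\d+)\.(?P<element>\d+)(?P<action>[DCE]?)$"
)

def parse_requirement(identifier: str) -> dict:
    match = PATTERN.match(identifier)
    if not match:
        raise ValueError(f"not a recognizable CC requirement identifier: {identifier}")
    return match.groupdict()

print(parse_requirement("FIA_UAU.2.1"))
# {'prefix': 'F', 'class_code': 'IA', 'family': 'UAU', 'component': '2', 'element': '1', 'action': ''}
print(parse_requirement("ADV_FSP.1.2E"))
# {'prefix': 'A', 'class_code': 'DV', 'family': 'FSP', 'component': '1', 'element': '2', 'action': 'E'}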


SECURITY MANAGEMENT PRACTICES A class is a grouping of security requirements that share a common focus; members of a class are referred to as families.7 Each assurance class is assigned a long name and a short three-character mnemonic beginning with an “A.” The purpose of the assurance class is described and a structure diagram is provided that depicts the family members. There are three types of assurance classes: (1) those that are used for Protection Profile or Security Target validation, (2) those that are used for TOE conformance evaluation, and (3) those that are used to maintain security assurance after certification. ISO/IEC 15408-3 defines ten security assurance classes. Two classes, APE and ASE, evaluate PPs and STs, respectively. Seven classes verify that a TOE conforms to its PP and ST. One class, AMA, verifies that security assurance is maintained between certification cycles. These classes are lateral to one another; there is no hierarchical relationship among them. Accordingly, the standard presents the classes in alphabetical order. Classes represent the broadest spectrum of potential security assurance measures that a consumer may need to verify the integrity of the security functions performed by an IT product. Classes are the highestlevel entity from which a consumer begins to select security assurance requirements. Exhibit 18-7 lists the security assurance classes in alphabetical order and indicates their type. An assurance family is a grouping of SARs that share security objectives. The members of a family are referred to as components.7 Each assurance family is assigned a long name and a three-character mnemonic that is appended to the assurance class mnemonic. Family behavior is described. Unlike functional families, the members of an assurance family only exhibit linear hierarchical relationships, with an increasing emphasis on scope, depth, and rigor. Some families contain application notes that provide additional background information and considerations concerning the use of a family or the information it generates during evaluation activities. Components are a specific set of security requirements that are constructed from elements; they are the smallest selectable set of elements that can be included in a Protection Profile, Security Target, or a package.7 Components are assigned a long name and described. Hierarchical relationships between one component and another are identified. The short name for components consists of the class mnemonic, the family mnemonic, and a unique number. Again, application notes may be included to convey additional background information and considerations. An element is an indivisible security requirement that can be verified by an evaluation, and it is the lowest-level security requirement from which components are constructed.7 One or more elements are stated verbatim for each component. If a component has more than one element, all of them must be used. Dependencies between elements are listed. Elements are the 286


Exhibit 18-7. Security assurance classes.
• APE, Protection profile evaluation (PP/ST): Demonstrate that the PP is complete, consistent, and technically sound
• ASE, Security target evaluation (PP/ST): Demonstrate that the ST is complete, consistent, technically sound, and suitable for use as the basis for a TOE evaluation
• ACM, Configuration management (TOE): Control the process by which a TOE and its related documentation is developed, refined, and modified
• ADO, Delivery and operation (TOE): Ensure correct delivery, installation, generation, and initialization of the TOE
• ADV, Development (TOE): Ensure that the development process is methodical by requiring various levels of specification and design and evaluating the consistency between them
• AGD, Guidance documents (TOE): Ensure that all relevant aspects of the secure operation and use of the TOE are documented in user and administrator guidance
• ALC, Lifecycle support (TOE): Ensure that methodical processes are followed during the operations and maintenance phase so that security integrity is not disrupted
• ATE, Tests (TOE): Ensure adequate test coverage, test depth, functional and independent testing
• AVA, Vulnerability assessment (TOE): Analyze the existence of latent vulnerabilities, such as exploitable covert channels, misuse or incorrect configuration of the TOE, and the ability to defeat, bypass, or compromise security credentials
• AMA, Maintenance of assurance (AMA): Assure that the TOE will continue to meet its security target as changes are made to the TOE or its environment
Note: PP/ST — Protection Profile or Security Target evaluation; TOE — TOE conformance evaluation; AMA — Maintenance of assurance after certification.

building blocks from which a PP or ST is created. Each assurance element has a unique number that is appended to the component identifier and a one-character code. A “D” indicates assurance actions to be taken by the TOE developer. A “C” explains the content and presentation criteria for assurance evidence, that is, what must be demonstrated.7 An “E” identifies actions to be taken or analyses to be performed by the evaluator to confirm that evidence requirements have been met. Exhibit 18-8 illustrates the standard notation for assurance classes, families, components, and elements. Part 3 of ISO/IEC 15408 also defines seven hierarchical evaluation assurance levels, or EALs. An EAL is a grouping of assurance components that 287


[Exhibit 18-8 (figure). Standard notation for assurance classes, families, components, and elements: an assurance requirement identifier is built from the two-letter class code (following the "A" prefix), a three-letter family code, a one-digit component number, a one-digit element number, and a one-character action element type.]

Exhibit 18-9. Standard EAL packages.
• EAL 1, Functionally tested (lowest level of confidence)
• EAL 2, Structurally tested
• EAL 3, Methodically tested and checked
• EAL 4, Methodically designed, tested, and reviewed (medium level of confidence)
• EAL 5, Semi-formally designed and tested
• EAL 6, Semi-formally verified design and tested
• EAL 7, Formally verified design and tested (highest level of confidence)

represents a point on the predefined assurance scale.7 In short, an EAL is an assurance package. The intent is to ensure that a TOE is not over- or underprotected by balancing the level of assurance against cost, schedule, technical, and mission constraints. Each EAL has a long name and a short name, which consists of “EAL” and a number from 1 to 7. The seven EALs add new and higher assurance components as security objectives become more rigorous. Application notes discuss limitations on evaluator actions and/or the use of information generated. Exhibit 18-9 cites the seven standard EALs. The CEM The Common Methodology for Information Technology Security Evaluation, known as the CEM (or CM), was created to provide concrete guidance to evaluators on how to apply and interpret SARs and their developer, content and presentation, and evaluator actions, so that evaluations are consistent and repeatable. To date the CEM consists of two parts and a supplement. Part 1 of the CEM defines the underlying principles of evaluations and delineates the roles of sponsors, developers, evaluators, and national evaluation authorities. Part 2 of the CEM specifies the evaluation methodology in terms of evaluator tasks, subtasks, activities, subactivities, 288


The CEM

The Common Methodology for Information Technology Security Evaluation, known as the CEM (or CM), was created to provide concrete guidance to evaluators on how to apply and interpret SARs and their developer, content and presentation, and evaluator actions, so that evaluations are consistent and repeatable. To date the CEM consists of two parts and a supplement. Part 1 of the CEM defines the underlying principles of evaluations and delineates the roles of sponsors, developers, evaluators, and national evaluation authorities. Part 2 of the CEM specifies the evaluation methodology in terms of evaluator tasks, subtasks, activities, subactivities, actions, and work units, all of which tie back to the assurance classes. A supplement was issued to Part 2 in 2001 that provides evaluation guidance for the ALC_FLR family. Like the CC, the CEM will become an ISO/IEC standard in the near future.

CC USER COMMUNITY AND STAKEHOLDERS

The CC user community and stakeholders can be viewed from two different constructs: (1) generic groups of users, and (2) formal organizational entities that are responsible for overseeing and implementing the CC/CEM worldwide. (See Exhibit 18-10.) ISO/IEC 15408-1 defines the CC/CEM generic user community to consist of:

• Consumers
• Developers
• Evaluators

Consumers are those organizations and individuals who are interested in acquiring a security solution that meets their specific needs. Consumers state their security functional and assurance requirements in a PP. This mechanism is used to communicate with potential developers by conveying requirements in an implementation-independent manner and information about how a product will be evaluated.

Developers are organizations and individuals who design, build, and sell IT security products. Developers respond to a consumer's PP with an implementation-dependent detailed design in the form of an ST. In addition, developers prove through the ST that all requirements from the PP have been satisfied, including the specific activities levied on developers by SARs.

Evaluators perform independent evaluations of PPs, STs, and TOEs using the CC/CEM, specifically the evaluator activities stated in SARs. The results are formally documented and distributed to the appropriate entities. Consequently, consumers do not have to rely only on a developer's claims — they are privy to independent assessments from which they can evaluate and compare IT security products. As the standard7 states:

The CC is written to ensure that evaluations fulfill the needs of consumers — this is the fundamental purpose and justification for the evaluation process.

The Common Criteria Recognition Agreement (CCRA),10 signed by 15 countries to date, formally assigns roles and responsibilities to specific organizations:

• Customers or end users
• IT product vendors
• Sponsors
• Common Criteria Testing Laboratories (CCTLs)
• National Evaluation Authorities
• Common Criteria Implementation Management Board (CCIMB)

Exhibit 18-10. Roles and responsibilities of CC/CEM stakeholders.

Category I. Generic Users (a)

• Consumers: Specify requirements; Inform developers how IT product will be evaluated; Use PP, ST, and TOE evaluation results to compare products.
• Developers: Respond to consumer's requirements; Prove that all requirements have been met.
• Evaluators: Conduct independent evaluations using standardized criteria.

Category II. Specific Organizations (b)

• Customer or end user: Specify requirements; Inform vendors how IT product will be evaluated; Use PP, ST, and TOE evaluation results to compare IT products.
• IT product vendor: Respond to customer's requirements; Prove that all requirements have been met; Deliver evidence to sponsor.
• Sponsor: Contract with CCTL for IT product to be evaluated; Deliver evidence to CCTL; Request accreditation from National Evaluation Authority.
• Common Criteria Testing Laboratory (CCTL): Receive evidence from sponsor; Conduct evaluations according to CC/CEM; Produce Evaluation Technical Reports; Make certification recommendation to National Evaluation Authority.
• National Evaluation Authority: Define and manage national evaluation scheme; Accredit CCTLs; Monitor CCTL evaluations; Issue guidance to CCTLs; Issue and recognize CC certificates; Maintain Evaluated Products Lists and PP Registry.
• Common Criteria Implementation Management Board (CCIMB): Facilitate consistent interpretation and application of the CC/CEM; Oversee National Evaluation Authorities; Render decisions in response to Requests for Interpretations (RIs); Maintain the CC/CEM; Coordinate with ISO/IEC JTC1 SC27 WG3 and CEMEB.

(a) ISO/IEC 15408-1(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 1: Introduction and general model; Part 2: Security functional requirements; Part 3: Security assurance requirements.
(b) Arrangement on the Recognition of Common Criteria Certificates in the Field of Information Technology Security, May 23, 2000.

Customers or end users perform the same role as consumers in the generic model. They specify their security functional and assurance requirements in a PP. By defining an assurance package, they inform developers how the IT product will be evaluated. Finally, they use PP, ST, and TOE evaluation results to compare IT products and determine which best meets their specific needs and will work best in their particular operational environment.

IT product vendors perform the same role as developers in the generic model. They respond to customer requirements by developing an ST and corresponding TOE. In addition, they provide proof that all security functional and assurance requirements specified in the PP have been satisfied by their ST and TOE. This proof and related development documentation is delivered to the Sponsor.

A new role introduced by the CCRA is that of the Sponsor. A Sponsor locates an appropriate CCTL and makes contractual arrangements with them to conduct an evaluation of an IT product. They are responsible for delivering the PP, ST, or TOE and related documentation to the CCTL and coordinating any pre-evaluation activities. A Sponsor may represent the customer or the IT product vendor, or be a neutral third party such as a system integrator.

The CCRA divides the generic evaluator role into three hierarchical functions: Common Criteria Testing Laboratories (CCTLs), National Evaluation Authorities, and the Common Criteria Implementation Management Board (CCIMB). CCTLs must meet accreditation standards and are subject to regular audit and oversight activities to ensure that their evaluations conform to the CC/CEM. CCTLs receive the PP, ST, or TOE and the associated documentation from the Sponsor. They conduct a formal evaluation of the PP, ST or TOE according to the CC/CEM and the assurance package specified in the PP. If missing, ambiguous, or incorrect information is uncovered during the course of an evaluation, the CCTL issues an Observation Report (OR) to the sponsor requesting clarification. The results are documented in an Evaluation Technical Report (ETR), which is sent to the National Evaluation Authority along with a recommendation that the IT product be certified (or not).

Each country that is a signatory to the CCRA has a National Evaluation Authority. The National Evaluation Authority is the focal point for CC activities within its jurisdiction.


A National Evaluation Authority may take one of two forms — that of a Certificate Consuming Participant or that of a Certificate Authorizing Participant. A Certificate Consuming Participant recognizes CC certificates issued by other entities but, at present, does not issue any certificates itself. It is not uncommon for a country to sign on to the CCRA as a Certificate Consuming Participant, then switch to a Certificate Authorizing Participant later, after they have established their national evaluation scheme and accredited some CCTLs.

A Certificate Authorizing Participant is responsible for defining and managing the evaluation scheme within their jurisdiction. This is the administrative and regulatory framework by which CCTLs are initially accredited and subsequently maintain their accreditation. The National Evaluation Authority issues guidance to CCTLs about standard practices and procedures and monitors evaluation results to ensure their objectivity, repeatability, and conformance to the CC/CEM. The National Evaluation Authority issues official CC certificates, if they agree with the CCTL recommendation, and recognizes CC certificates issued by other National Evaluation Authorities. In addition, the National Evaluation Authority maintains the Evaluated Products List and PP Registry for its jurisdiction.

The Common Criteria Implementation Management Board (CCIMB) is composed of representatives from each country that is a party to the CCRA. The CCIMB has the ultimate responsibility for facilitating the consistent interpretation and application of the CC/CEM across all CCTLs and National Evaluation Authorities. Accordingly, the CCIMB monitors and oversees the National Evaluation Authorities. The CCIMB renders decisions in response to Requests for Interpretations (RIs). Finally, the CCIMB maintains the current version of the CC/CEM and coordinates with ISO/IEC JTC1 SC27 WG3 and the CEMEB concerning new releases of the CC/CEM and related standards.

FUTURE OF THE CC

As mentioned earlier, the CC/CEM is the result of a 30-year evolutionary process. The CC/CEM and the processes governing it have been designed so that CC/CEM will continue to evolve and not become obsolete when technology changes, like the Orange Book did. Given that and the fact that 15 countries have signed the CC Recognition Agreement (CCRA), the CC/CEM will be with us for the long term. Two near-term events to watch for are the issuance of both the CEM and the SSE-CMM as ISO/IEC standards.

The CCIMB has set in place a process to ensure consistent interpretations of the CC/CEM and to capture any needed corrections or enhancements to the methodology. Both situations are dealt with through what is known as the Request for Interpretation (RI) process. The first step in this process is for a developer, sponsor, or CCTL to formulate a question.


This question or RI may be triggered by four different scenarios. The organization submitting the RI:10

• Perceives an error in the CC or CEM
• Perceives the need for additional material in the CC or CEM
• Proposes a new application of the CC and/or CEM and wants this new approach to be validated
• Requests help in understanding part of the CC or CEM

The RI cites the relevant CC and/or CEM reference and states the problem or question. The ISO/IEC has a five-year reaffirm, update, or withdrawal cycle for standards. This means that the next version of ISO/IEC 15408, which will include all of the final interpretations in effect at that time, should be released near the end of 2004. The CCIMB has indicated that it may issue an interim version of the CC or CEM, prior to the release of the new ISO/IEC 15408 version, if the volume and magnitude of final interpretations warrant such an action. However, the CCIMB makes it clear that it remains dedicated to supporting the ISO/IEC process.1

Acronyms

ADP — Automatic Data Processing equipment
C&A — Certification and Accreditation
CC — Common Criteria
CCEB — Common Criteria Editing Board
CCIMB — Common Criteria Implementation Management Board
CCRA — Common Criteria Recognition Agreement
CCTL — accredited CC Testing Laboratory
CEM — Common Evaluation Methodology
CESG — U.K. Communication Electronics Security Group
CMM — Capability Maturity Model
COMSEC — Communications Security
COMPUSEC — Computer Security
CSE — Canadian Computer Security Establishment
DoD — U.S. Department of Defense
EAL — Evaluation Assurance Level
ECMA — European Computer Manufacturers Association


ETR — Evaluation Technical Report
IEC — International Electrotechnical Commission
ISO — International Organization for Standardization
JTC — ISO/IEC Joint Technical Committee
NASA — U.S. National Aeronautics and Space Administration
NIST — U.S. National Institute of Standards and Technology
NSA — U.S. National Security Agency
OECD — Organization for Economic Cooperation and Development
OPSEC — Operational Security
OR — Observation Report
PP — Protection Profile
RI — Request for Interpretation
SAR — Security Assurance Requirement
SEI — Software Engineering Institute at Carnegie Mellon University
SFR — Security Functional Requirement
SSE-CMM — System Security Engineering CMM
ST — Security Target
TCSEC — Trusted Computer System Evaluation Criteria
TOE — Target of Evaluation

References

1. www.commoncriteria.org; centralized resource for current information about the Common Criteria standards, members, and events.
2. DoD 5200.28M, ADP Computer Security Manual — Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems, U.S. Department of Defense, January 1973.
3. DoD 5200.28M, ADP Computer Security Manual — Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems, with 1st Amendment, U.S. Department of Defense, June 25, 1979.
4. CSC-STD-001-83, Trusted Computer System Evaluation Criteria (TCSEC), National Computer Security Center, U.S. Department of Defense, August 15, 1983.
5. DoD 5200.28-STD, Trusted Computer System Evaluation Criteria (TCSEC), National Computer Security Center, U.S. Department of Defense, December 1985.
6. Herrmann, D., A Practical Guide to Security Engineering and Information Assurance, Auerbach Publications, Boca Raton, FL, 2001.
7. ISO/IEC 15408-1(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 1: Introduction and general model.
8. ISO/IEC 15408-2(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 2: Security functional requirements.



9. ISO/IEC 15408-3(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 3: Security assurance requirements.
10. Arrangement on the Recognition of Common Criteria Certificates in the Field of Information Technology Security, May 23, 2000.

ABOUT THE AUTHOR

Debra Herrmann is the ITT manager of security engineering for the FAA Telecommunications Infrastructure program. Her special expertise is in the specification, design, and assessment of secure mission-critical systems. She is the author of Using the Common Criteria for IT Security Evaluation and A Practical Guide to Security Engineering and Information Assurance, both from Auerbach Publications.




Chapter 19

The Security Policy Life Cycle: Functions and Responsibilities
Patrick D. Howard, CISSP

Most information security practitioners normally think of security policy development in fairly narrow terms. Use of the term policy development usually connotes writing a policy on a particular topic and putting it into effect. If practitioners happen to have recent, hands-on experience in developing information security policies, they may also include in their working definition the staffing and coordination of the policy, security awareness tasks, and perhaps policy compliance oversight. But is this an adequate inventory of the functions that must be performed in the development of an effective security policy?

Unfortunately, many security policies are ineffective because of a failure to acknowledge all that is actually required in developing policies. Limiting the way security policy development is defined also limits the effectiveness of policies resulting from this flawed definition. Security policy development goes beyond simple policy writing and implementation. It is also much more than activities related to staffing a newly created policy, making employees aware of it, and ensuring that they comply with its provisions. A security policy has an entire life cycle that it must pass through during its useful lifetime. This life cycle includes research, getting policies down in writing, getting management buy-in, getting them approved, getting them disseminated across the enterprise, keeping users aware of them, getting them enforced, tracking them and ensuring that they are kept current, getting rid of old policies, and other similar tasks. Unless an organization recognizes the various functions involved in the policy development task, it runs the risk of developing policies that are poorly thought out, incomplete, redundant, not fully supported by users or management, superfluous, or irrelevant.



Use of the security policy life cycle approach to policy development can ensure that the process encompasses all functions necessary for effective policies. It leads to a greater understanding of the policy development process through the definition of discrete roles and responsibilities, through enhanced visibility of the steps necessary in developing effective policies, and through the integration of disparate tasks into a cohesive process that aims to generate, implement, and maintain policies.

POLICY DEFINITIONS

It is important to be clear on terms at the beginning. What do we mean when we say policy, or standard, or baseline, or guideline, or procedure? These are terms information security practitioners hear and use every day in the performance of their security duties. Sometimes they are used correctly, and sometimes they are not. For the purpose of this discussion these terms are defined in Exhibit 19-1.

Exhibit 19-1 provides generally accepted definitions for a security policy hierarchy. A policy is defined as a broad statement of principle that presents management's position for a defined control area. A standard is defined as a rule that specifies a particular course of action or response to a given situation and is a mandatory directive for carrying out policies. Baselines establish how security controls are to be implemented on specific technologies. Procedures define specifically how policies and standards will be implemented in a given situation. Guidelines provide recommendations on how other requirements are to be met.

An example of interrelated security requirements at each level might be an electronic mail security policy for the entire organization at the highest policy level. This would be supported by various standards, including perhaps a requirement that e-mail messages be routinely purged 90 days following their creation. A baseline in this example would relate to how security controls for the e-mail service will be configured on a specific type of system (e.g., ACF2, VAX VMS, UNIX, etc.). Continuing the example, procedures would be specific requirements for how the e-mail security policy and its supporting standards are to be applied in a given business unit. Finally, guidelines in this example would include guidance to users on best practices for securing information sent or received via electronic mail.

It should be noted that many times the term policy is used in a generic sense to apply to security requirements of all types. When used in this fashion it is meant to comprehensively include policies, standards, baselines, guidelines, and procedures. In this document, the reader is reminded to consider the context of the word's use to determine if it is used in a general way to refer to policies of all types or to specific policies at one level of the hierarchy.
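To show how the lowest layer of this hierarchy turns a standard into something executable, the hypothetical Python sketch below applies the 90-day purge standard from the example above to a local mbox file. The file format, retention constant, and function name are assumptions for illustration; an actual procedure would target the organization's real mail platform and include the approvals and logging that platform requires.

import email.utils
import mailbox
import time

RETENTION_DAYS = 90  # the e-mail retention standard described in the example above

def purge_old_messages(mbox_path):
    # Removes messages whose Date header is older than the retention period.
    cutoff = time.time() - RETENTION_DAYS * 24 * 60 * 60
    box = mailbox.mbox(mbox_path)
    removed = 0
    box.lock()
    try:
        for key, message in list(box.items()):
            try:
                sent = email.utils.parsedate_to_datetime(message.get("Date", ""))
            except (TypeError, ValueError):
                continue  # leave messages with missing or unparseable dates alone
            if sent.timestamp() < cutoff:
                box.remove(key)
                removed += 1
        box.flush()
    finally:
        box.unlock()
        box.close()
    return removed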


Exhibit 19-1. Definition of terms.

Policy: A broad statement of principle that presents management's position for a defined control area. Policies are intended to be long-term and guide the development of more specific rules to address specific situations. Policies are interpreted and supported by standards, baselines, procedures, and guidelines. Policies should be relatively few in number, should be approved and supported by executive management, and should provide overall direction to the organization. Policies are mandatory in nature, and an inability to comply with a policy should require approval of an exception.

Standard: A rule that specifies a particular course of action or response to a given situation. Standards are mandatory directives to carry out management's policies and are used to measure compliance with policies. Standards serve as specifications for the implementation of policies. Standards are designed to promote implementation of high-level organization policy rather than to create new policy in themselves.

Baseline: A baseline is a platform-specific security rule that is accepted across the industry as providing the most effective approach to a specific security implementation. Baselines are established to ensure that the security features of commonly used systems are configured and administered uniformly so that a consistent level of security can be achieved throughout the organization.

Procedure: Procedures define specifically how policies, standards, baselines and guidelines will be implemented in a given situation. Procedures are either technology or process dependent and refer to specific platforms, applications, or processes. They are used to outline steps that must be taken by an organizational element to implement security related to these discrete systems and processes. Procedures are normally developed, implemented, and enforced by the organization owning the process or system. Procedures support organization policies, standards, baselines, and guidelines as closely as possible, while addressing specific technical or procedural requirements within the local organization to which they apply.

Guideline: A guideline is a general statement used to recommend or suggest an approach to implementation of policies, standards, and baselines. Guidelines are essentially recommendations to consider when implementing security. While they are not mandatory in nature, they are to be followed unless there is a documented and approved reason not to.

POLICY FUNCTIONS

There are 11 functions that must be performed throughout the life of security policy documentation, from cradle to grave. These can be categorized in four fairly distinct phases of a policy's life. During its development a policy is created, reviewed, and approved. This is followed by an implementation phase where the policy is communicated and either complied with or given an exception. Then, during the maintenance phase, the policy must be kept up-to-date, awareness of it must be maintained, and compliance with it must be monitored and enforced. Finally, during the disposal phase, the policy is retired when it is no longer required.


Exhibit 19-2. Policy functions. The figure groups the 11 functions by phase: creation, review, and approval as development tasks; communication, compliance, and exceptions as implementation tasks; awareness, monitoring, enforcement, and maintenance as maintenance tasks; and retirement as the disposal task.

Exhibit 19-2 shows all of these security policy development functions by phase and their relationships through the flow of when they are performed chronologically in the life cycle. The following paragraphs expand on each of these policy functions within these four phases.

Creation: Plan, Research, Document, and Coordinate the Policy

The first step in the policy development phase is the planning for, research, and writing of the policy — or, taken together, the creation function. The policy creation function includes identifying why there is a need for the policy (for example, the regulatory, legal, contractual, or operational requirement for the policy); determining the scope and applicability of the policy; roles and responsibilities inherent in implementing the policy; and assessing the feasibility of implementing it. This function also includes conducting research to determine organizational requirements for developing policies (i.e., approval authorities, coordination requirements, and style or formatting standards), and researching industry-standard best practices for their applicability to the current organizational policy need. This function results in the documentation of the policy in accordance with organization standards and procedures, as well as coordination as necessary with internal and external organizations that it affects to obtain input and buy-in from these elements.


Overall, policy creation is probably the most easily understood function in the policy development life cycle because it is the one that is most often encountered and normally has readily identifiable milestones.

Review: Get an Independent Assessment of the Policy

Policy review is the second function in the development phase of the life cycle. Once the policy document has been created and initial coordination has been effected, it must be submitted to an independent individual or group for assessment prior to its final approval. There are several benefits of an independent review: a more viable policy through the scrutiny of individuals who have a different or wider perspective than the writer of the policy; broadened support for the policy through an increase in the number of stakeholders; and increased policy credibility through the input of a variety of specialists on the review team. Inherent to this function is the presentation of the policy to the reviewer(s) either formally or informally; addressing any issues that may arise during the review; explaining the objective, context, and potential benefits of the policy; and providing justification for why the policy is needed. As part of this function, the creator of the policy is expected to address comments and recommendations for changes to the policy, and to make all necessary adjustments and revisions resulting in a final policy ready for management approval.

Approval: Obtain Management Approval of the Policy

The final step in the policy development phase is the approval function. The intent of this function is to obtain management support for the policy and endorsement of the policy by a company official in a position of authority through their signature. Approval permits and hopefully launches the implementation of the policy. The approval function requires the policy creator to make a reasoned determination as to the appropriate approval authority; coordination with that official; presentation of the recommendations stemming from the policy review; and then a diligent effort to obtain broader management buy-in to the policy. Also, should the approving authority hesitate to grant full approval of the policy, the policy creator must address issues regarding interim or temporary approval as part of this function.

Communication: Disseminate the Policy

Once the policy has been formally approved, it passes into the implementation phase of the policy life cycle. Communication of the policy is the first function to be performed in this phase. The policy must be initially disseminated to organization employees or others who are affected by the policy (e.g., contractors, partners, customers, etc.).


This function entails determining the extent and the method of the initial distribution of the policy, addressing issues of geography, language, and culture; prevention of unauthorized disclosure; and the extent to which the supervisory chain will be used in communicating the policy. This is most effectively completed through the development of a policy communication, implementation, or rollout plan, which addresses these issues as well as resources required for implementation, resource dependencies, documenting employee acknowledgment of the policy, and approaches for enhancing visibility of the policy.

Compliance: Implement the Policy

Compliance encompasses activities related to the initial execution of the policy to comply with its requirements. This includes working with organizational personnel and staff to interpret how the policy can best be implemented in various situations and organizational elements; ensuring that the policy is understood by those required to implement, monitor, and enforce the policy; monitoring, tracking, and reporting on the pace, extent, and effectiveness of implementation activities; and measuring the policy's immediate impact on operations. This function also includes keeping management apprised of the status of the policy's implementation.

Exceptions: Manage Situations Where Implementation Is Not Possible

Because of timing, personnel shortages, and other operational requirements, not every policy can be complied with as originally intended. Therefore, exceptions to the policy will probably need to be granted to organizational elements that cannot fully meet the requirements of the policy. There must be a process in place to ensure that requests for exception are recorded, tracked, evaluated, submitted for approval/disapproval to the appropriate authority, documented, and monitored throughout the approved period of noncompliance. The process must also accommodate permanent exceptions to the policy as well as temporary waivers of requirements based on short-term obstacles.
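One hypothetical way to keep exception requests recorded, tracked, evaluated, and monitored through their approved period of noncompliance, as described above, is to capture each request as a structured record. The field names and status values in the Python sketch below are assumptions for illustration, not part of the life-cycle model.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class PolicyException:
    # A single request for exception to a policy, standard, or baseline.
    policy_id: str
    requesting_element: str
    justification: str
    requested_on: date
    temporary: bool = True                 # False models a permanent exception
    expires_on: Optional[date] = None      # end of the approved period of noncompliance
    status: str = "submitted"              # submitted, approved, denied, or expired
    approver: Optional[str] = None
    review_notes: List[str] = field(default_factory=list)

    def approve(self, approver, expires_on=None):
        # Records the decision of the appropriate approval authority.
        self.status = "approved"
        self.approver = approver
        if self.temporary:
            self.expires_on = expires_on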


Awareness: Assure Continued Policy Awareness

Following implementation of the policy, the maintenance phase of the policy development life cycle begins. The awareness function of the maintenance phase comprises continuing efforts to ensure that personnel are aware of the policy in order to facilitate their compliance with its requirements. This is done by defining the awareness needs of various audience groups within the organization (executives, line managers, users, etc.); determining the most effective awareness methods for each audience group (i.e., briefings, training, messages); and developing and disseminating awareness materials (presentations, posters, mailings, etc.) regarding the need for adherence to the policy. The awareness function also includes efforts to integrate up-to-date policy compliance and enforcement feedback as well as current threat information to make awareness information as topical and realistic as possible. The final task is measuring the awareness of employees with the policy and adjusting awareness efforts based on the results of measurement activities.

Monitoring: Track and Report Policy Compliance

During the maintenance phase, the monitoring function is performed to track and report on the effectiveness of efforts to comply with the policy. This information results from observations of employees and supervisors; from formal audits, assessments, inspections, and reviews; and from violation reports and incident response activities. This function includes continuing activities to monitor compliance or noncompliance with the policy through both formal and informal methods, and the reporting of these deficiencies to appropriate management authorities for action.

Enforcement: Deal with Policy Violations

The compliance muscle behind the policy is effective enforcement. The enforcement function comprises management's response to acts or omissions that result in violations of the policy with the purpose of preventing or deterring their recurrence. This means that once a violation is identified, appropriate corrective action must be determined and applied to the people (disciplinary action), processes (revision), and technologies (upgrade) affected by the violation to lessen the likelihood of it happening again. As stated previously, inclusion of information on these corrective actions in the awareness efforts can be highly effective.

Maintenance: Ensure the Policy Is Current

Maintenance addresses the process of ensuring the currency and integrity of the policy. This includes tracking drivers for change (i.e., changes in technology, processes, people, organization, business focus, etc.) that may affect the policy; recommending and coordinating policy modifications resulting from these changes; and documenting policy changes and recording change activities. This function also ensures the continued availability of the policy to all parties affected by it, as well as maintaining the integrity of the policy through effective version control. When changes to the policy are required, several previously performed functions need to be revisited — review, approval, communication, and compliance in particular.
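For the version control that the maintenance function calls for, each policy change can be recorded together with its driver and approval. The sketch below shows one hypothetical way to keep that history; the policy identifier, dates, and field names are invented for illustration.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PolicyRevision:
    # One recorded change to a policy document.
    policy_id: str
    version: str        # e.g., "1.1"
    change_driver: str  # technology, process, people, organization, or business focus
    summary: str
    approved_by: str
    effective_date: date

history = [
    PolicyRevision("SEC-EMAIL-001", "1.0", "initial issue", "First release", "CEO", date(2002, 1, 15)),
    PolicyRevision("SEC-EMAIL-001", "1.1", "technology", "Added webmail coverage", "CEO", date(2002, 9, 3)),
]
current = max(history, key=lambda revision: revision.effective_date)
print(current.version)  # 1.1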


Retirement: Dispense with the Policy When No Longer Needed

After the policy has served its useful purpose (e.g., the company no longer uses the technology for which it applies, or it has been superseded by another policy), then it must be retired. The retirement function makes up the disposal phase of the life cycle, and is the final function in the policy development life cycle. This function entails removing a superfluous policy from the inventory of active policies to avoid confusion, archiving it for future reference, and documenting information about the decision to retire the policy (i.e., justification, authority, date, etc.).

These four life cycle phases comprising 11 distinct functions must be performed in their entirety over the complete life cycle of a given policy. One cannot rule out the possibility of combining certain functions to suit current operational requirements. Nevertheless, regardless of the manner in which they are grouped, or the degree to which they are abbreviated by immediate circumstances, each function needs to be performed. In the development phase, organizations often attempt to develop policy without an independent review, resulting in policies that are not well conceived or well received. Shortsighted managers often fail to appropriately address the exception function from the implementation phase, mistakenly thinking there can be no circumstances for noncompliance. Many organizations fail to continually evaluate the need for their established policies during the maintenance phase, discounting the importance of maintaining the integrity and availability of the policies. One often finds inactive policies on the books of major organizations, indicating that the disposal function is not being applied. Not only do all the functions need to be performed, several of them must be done iteratively. In particular, maintenance, awareness, compliance monitoring, and enforcement must be continually exercised over the full life of the policy.

POLICY RESPONSIBILITIES

In most cases the organization's information security function — either a group or an individual — performs the vast majority of the functions in the policy life cycle and acts as the proponent for most policy documentation related to the protection of information assets. By design, the information security function exercises both long-term responsibility and day-to-day tasks for securing information resources and, as such, should own and exercise centralized control over security-related policies, standards, baselines, procedures, and guidelines. This is not to say, however, that the information security function and its staff should be the proponent for all security-related policies or perform all policy development functions. For instance, owners of information systems should have responsibility for establishing requirements necessary to implement organization policies for their own systems. While requirements such as these must comport with higher-level policy directives, their proponent should be the organizational element that has the greatest interest in ensuring the effectiveness of the policy.


While the proponent or owner of a policy exercises continuous responsibility for the policy over its entire life cycle, there are several factors that have a significant bearing on deciding what individual or element should have direct responsibility for performing specific policy functions in an organization. These factors include the following:

• The principle of separation of duties should be applied in determining responsibility for a particular policy function to ensure that necessary checks and balances are applied. To provide a different or broader perspective, an official or group that is independent of the proponent should review the policy, and an official who is senior to the proponent should be charged with approving the policy. Or, to lessen the potential for conflicts of interest, the audit function as an independent element within an organization should be tasked with monitoring compliance with the policy, while external audit groups or organizations should be relied upon to provide an independent assessment of policy compliance to be consistent with this principle.

• Additionally, for reasons of efficiency, organizational elements other than the proponent may need to be assigned responsibility for certain security policy development life-cycle functions. For instance, dissemination and communication of the policy is best carried out by the organizational element normally charged with performing these functions for the entire organization (i.e., knowledge management, corporate communications, etc.). On the other hand, awareness efforts are often assigned to the organization training function on the basis of efficiency, even though the training staff is not particularly well suited to perform the policy awareness function. While the training department may render valuable support during the initial dissemination of the policy and in measuring the effectiveness of awareness efforts, the organization's information security function is better suited to perform continuing awareness efforts because it is well positioned to monitor policy compliance and enforcement activities and to identify requirements for updating the program, each of which is an essential ingredient to effective employee awareness of the policy.

• Limits on span of control that the proponent exercises have an impact on who should be the proponent for a given policy function. Normally, the proponent can play only a limited role in compliance monitoring and enforcement of the policy because the proponent cannot be in all places where the policy has been implemented at all times. Line managers, because of their close proximity to the employees who are affected by security policies, are in a much better position to effectively monitor and enforce them and should therefore assume responsibility for these functions. These managers can provide the policy owner assurance that the policy is being adhered to and can ensure that violations are dealt with effectively.


• Limits on the authority that an individual or element exercises may determine the ability to successfully perform a policy function. The effectiveness of a policy may often be judged by its visibility and the emphasis that organizational management places on it. The effectiveness of a policy in many cases depends on the authority on which the policy rests. For a policy to have organization-wide support, the official who approves it must have some recognized degree of authority over a substantial part of the organization. Normally, the organization's information security function does not enjoy that level of recognition across an entire organization and requires the support of upper-level management in accomplishing its mission. Consequently, acceptance of and compliance with information security policies is more likely when based on the authority of executive management.

• The proponent's placement in the organization may cause a lack of knowledge of the environment in which the policy will be implemented, thus hindering its effectiveness. Employment of a policy evaluation committee can provide a broader understanding of operations that will be affected by the policy. A body of this type can help ensure that the policy is written so as to promote its acceptance and successful implementation, and it can be used to forecast implementation problems and to effectively assess situations where exceptions to the policy may be warranted.

• Finally, the applicability of the policy also affects the responsibility for policy life-cycle functions. What portion of the organization is affected by the policy? Does it apply to a single business unit, all users of a particular technology, or the entire global enterprise? This distinction can be significant. If the applicability of a policy is limited to a single organizational element, then management of that element should own the policy. However, if the policy is applicable to the entire organization, then a higher-level entity should exercise ownership responsibilities for the policy.

THE POLICY LIFE-CYCLE MODEL

To ensure that all functions in the policy life cycle are appropriately performed and that responsibilities for their execution are adequately assigned for each function, organizations should establish a framework that facilitates ready understanding, promotes consistent application, establishes a hierarchical structure of mutually supporting policy levels, and effectively accommodates frequent technological and organizational change. Exhibit 19-3 provides a reference for assigning responsibilities for each policy development function according to policy level. In general, this model proposes that responsibilities for functions related to security policies, standards, baselines, and guidelines are similar in many respects.

Exhibit 19-3. Policy function–responsibility model.

Creation
  Policies: Information security function
  Standards and baselines: Information security function
  Guidelines: Information security function
  Procedures: Proponent element

Review
  Policies: Policy evaluation committee
  Standards and baselines: Policy evaluation committee
  Guidelines: Policy evaluation committee
  Procedures: Information security function and proponent management

Approval
  Policies: Chief executive officer
  Standards and baselines: Chief information officer
  Guidelines: Chief information officer
  Procedures: Department vice president

Communication
  Policies: Communications department
  Standards and baselines: Communications department
  Guidelines: Communications department
  Procedures: Proponent element

Compliance
  Policies: Managers and employees organization-wide
  Standards and baselines: Managers and employees organization-wide
  Guidelines: Managers and employees organization-wide
  Procedures: Managers and employees of proponent element

Exceptions
  Policies: Policy evaluation committee
  Standards and baselines: Policy evaluation committee
  Guidelines: Not applicable
  Procedures: Department vice president

Awareness
  Policies: Information security function
  Standards and baselines: Information security function
  Guidelines: Information security function
  Procedures: Proponent management

Monitoring
  Policies: Managers and employees, information security function, and audit function
  Standards and baselines: Managers and employees, information security function, and audit function
  Guidelines: Managers and employees, information security function, and audit function
  Procedures: Managers and employees assigned to proponent element, information security function, and audit function

Enforcement
  Policies: Managers
  Standards and baselines: Managers
  Guidelines: Not applicable
  Procedures: Managers assigned to proponent element

Maintenance
  Policies: Information security function
  Standards and baselines: Information security function
  Guidelines: Information security function
  Procedures: Proponent element

Retirement
  Policies: Information security function
  Standards and baselines: Information security function
  Guidelines: Information security function
  Procedures: Proponent element

The Security Policy Life Cycle: Functions and Responsibilities

307

AU1518Ch19Frame Page 308 Thursday, November 14, 2002 6:17 PM

SECURITY MANAGEMENT PRACTICES baselines, and guidelines related to the security of the organization’s information resources. In this capacity, the information security function should perform the creation, awareness, maintenance, and retirement functions for security policies at these levels. There are exceptions to this general principle, however. For instance, even though it has a substantial impact on the security of information resources, it is more efficient for the human resources department to serve as the proponent for employee hiring policy and standards. Responsibilities for functions related to security procedures, on the other hand, are distinctly different than those for policies, standards, baselines, and guidelines. Exhibit 19-3 shows that proponents for procedures rests outside the organization information security function and is decentralized based on the limited applicability by organizational element. Although procedures are created and implemented (among other functions) on a decentralized basis, they must be consistent with higher organization security policy; therefore, they should be reviewed by the organization information security function as well as the next-higher official in the proponent element’s management chain. Additionally, the security and audit functions should provide feedback to the proponent on compliance with procedures when conducting reviews and audits. The specific rationale for the assignment of responsibilities shown in the model is best understood through an exploration of the model according to life-cycle functions as noted below. • Creation. In most organizations the information security function should serve as the proponent for all security-related policies that extend across the entire enterprise; and should be responsible for creating these policies, standards, baselines, and guidelines. However, security procedures necessary to implement higher-level security requirements and guidelines should be created by each proponent element to which they apply because they must be specific to the element’s operations and structure. • Review. The establishment of a policy evaluation committee provides a broad-based forum for reviewing and assessing the viability of security policies, standards, baselines, and guidelines that affect the entire organization. The policy evaluation committee should be chartered as a group of policy stakeholders drawn from across the organization who are responsible for ensuring that security policies, standards, baselines, and guidelines are well written and understandable, are fully coordinated, and are feasible in terms of the people, processes, and technologies that they affect. Because of their volume, and the number of organizational elements involved, it will probably not be feasible for the central policy evaluation committee to review all procedures developed by proponent elements. However, security procedures require a 308

AU1518Ch19Frame Page 309 Thursday, November 14, 2002 6:17 PM

The Security Policy Life Cycle: Functions and Responsibilities









• Approval. The most significant differences between the responsibilities for policies vis-à-vis standards, baselines, and guidelines are the level of the approval required for each and the extent of the implementation. Security policies affecting the entire organization should be signed by the chief executive officer to provide the necessary level of emphasis and visibility to this most important type of policy. Because information security standards, baselines, and guidelines are designed to elaborate on specific policies, this level of policy should be approved with the signature of the executive official subordinate to the CEO who has overall responsibility for the implementation of the policy. The chief information officer will normally be responsible for approving these types of policies. Similarly, security procedures should bear the approval of the official exercising direct management responsibility for the element to which the procedures apply. The department vice president or department chief will normally serve in this capacity.

• Communication. Because it has the apparatus to efficiently disseminate information across the entire organization, the communications department should exercise the policy communication responsibility for enterprisewide policies. The proponent should assume the responsibility for communicating security procedures, but as much as possible should seek the assistance of the communications department in executing this function.

• Compliance. Managers and employees to whom security policies are applicable play the primary role in implementing and ensuring initial compliance with newly published policies. In the case of organization-wide policies, standards, baselines, and guidelines, this responsibility extends to all managers and employees to whom they apply. As for security procedures, this responsibility will be limited to managers and employees of the organizational element to which the procedures apply.

• Exceptions. At all levels of an organization, there is the potential for situations that prevent full compliance with the policy. It is important that the proponent of the policy or an individual or group with equal or higher authority review exceptions. The policy evaluation committee can be effective in screening requests for exceptions received from elements that cannot comply with policies, standards, and baselines. Because guidelines are, by definition, recommendations or suggestions and are not mandatory, formal requests for exceptions to them are not necessary. In the case of security procedures, the lower-level official who approves the procedures should also serve as the authority for approving exceptions to them. The department vice president typically performs this function.


• Awareness. For most organizations, the information security function is ideally positioned to manage the security awareness program and should therefore have the responsibility for this function in the case of security policies, standards, baselines, and guidelines that are applicable to the entire organization. However, the information security function should perform this function in coordination with the organization's training department to ensure unity of effort and optimum use of resources. Proponent management should exercise responsibility for employee awareness of security procedures that it owns. Within capability, this can be accomplished with the advice and assistance of the information security function.

• Monitoring. The responsibility for monitoring compliance with security policies, standards, baselines, and guidelines that are applicable to the entire organization is shared among employees, managers, the audit function, and the information security function. Every employee that is subject to security requirements should assist in monitoring compliance by reporting deviations that they observe. Although they should not be involved in enforcing security policies, the information security function and organization audit function can play a significant role in monitoring compliance. This includes monitoring compliance with security procedures owned by lower-level organizational elements by reporting deviations to the proponent for appropriate enforcement action.

• Enforcement. The primary responsibility for enforcing security requirements of all types falls on managers of employees affected by the policy. Of course, this does not apply to guidelines, which by design are not enforceable in strict disciplinary terms. Managers assigned to proponent elements to which procedures are applicable must be responsible for their enforcement. The general rule is that the individual granted the authority for supervising employees should be the official who enforces the security policy. Hence, in no case should the information security function or audit function be granted enforcement authority in lieu of or in addition to the manager. Although the information security function should not be directly involved in enforcement actions, it is important that it be privy to reports of corrective action so that this information can be integrated into ongoing awareness efforts.

• Maintenance. With its overall responsibility for the organization's information security program, the information security function is best positioned to maintain security policies, guidelines, standards, and baselines having organization-wide applicability to ensure they remain current and available to those affected by them. At lower levels of the organization, proponent elements as owners of security procedures should perform the maintenance function for procedures that they develop for their organizations.


• Retirement. When a policy, standard, baseline, or guideline is no longer necessary and must be retired, the proponent for it should have the responsibility for retiring it. Normally, the organization's information security function will perform this function for organization-wide security policies, standards, baselines, and guidelines, while the proponent element that serves as the owner of security procedures should have responsibility for retiring the procedure under these circumstances.

Although this methodology is presented as an approach for developing information security policies specifically, its potential utility should be fairly obvious to an organization in the development, implementation, maintenance, and disposal of the full range of its policies — both security related and otherwise.

CONCLUSION

The life cycle of a security policy is far more complex than simply drafting written requirements to correct a deviation or in response to a newly deployed technology and then posting it on the corporate intranet for employees to read. Employment of a comprehensive policy life cycle as described here will provide a framework to help an organization ensure that these interrelated functions are performed consistently over the life of a policy through the assignment of responsibility for the execution of each policy development function according to policy type. Utilization of the security policy life-cycle model can result in policies that are timely, well written, current, widely supported and endorsed, approved, and enforceable for all levels of the organization to which they apply.

References

Fites, Philip and Martin P.J. Kratz, Information Systems Security: A Practitioner's Reference, International Thomson Computer Press, London, 1996.
Hutt, Arthur E., Seymour Bosworth, and Douglas B. Hoyt, Computer Security Handbook, 3rd ed., John Wiley & Sons, New York, 1995.
National Institute of Standards and Technology, An Introduction to Computer Security: The NIST Handbook, Special Publication 800-12, October 1995.
Peltier, Thomas R., Information Security Policies and Procedures: A Practitioner's Reference, Auerbach Publications, New York, 1999.
Tudor, Jan Killmeyer, Information Security Architecture: An Integrated Approach to Security in the Organization, Auerbach Publications, New York, 2001.

ABOUT THE AUTHOR

Patrick D. Howard, CISSP, a senior information security consultant with QinetiQ-TIM, has more than 20 years of experience in information security. Pat has been an instructor for the Computer Security Institute, conducting CISSP Prep for Success Workshops across the United States.



Chapter 20

Security Assessment
Sudhanshu Kairab, CISSP, CISA

During the past decade, businesses have become increasingly dependent on technology. IT environments have evolved from mainframes running selected applications and independent desktop computers to complex client/server networks running a multitude of operating systems with connectivity to business partners and consumers. Technology trends indicate that IT environments will continue to become more complicated and connected.

With this trend in technology, why is security important? With advances in technology, security has become a central part of strategies to deploy and maintain technology. For companies pursuing E-commerce initiatives, security is a key consideration in developing the strategy. In the business-to-consumer markets, customers cite security as the main reason for buying or not buying online. In addition, most of the critical data resides on various systems within the IT environment of most companies. Loss or corruption of data can have devastating effects on a company, ranging from regulatory penalties stemming from laws such as HIPAA (Health Insurance Portability and Accountability Act) to loss of customer confidence.

In evaluating security in a company, it is important to keep in mind that managing security is a process much like any other process in a company. Like any other business process, security has certain technologies that support it. In the same way that an ERP (enterprise resource planning) package supports various supply-chain business processes such as procurement, manufacturing, etc., technologies such as firewalls, intrusion detection systems, etc. support the security process. However, unlike some other business processes, security is something that touches virtually every part of the business, from human resources and finance to core operations. Consequently, security must be looked at as a business process and not a set of tools. The best security technology will not yield a secure environment if it is without sound processes and properly defined business requirements.



When securing a company's environment, management must consider several things. In deciding what security measures are appropriate, some considerations include:

• What needs to be protected?
• How valuable is it?
• How much does downtime cost a company?
• Are there regulatory concerns (e.g., HIPAA, GLBA [Gramm-Leach-Bliley Act])?
• What is the potential damage to the company's reputation if there is a security breach?
• What is the probability that a breach can occur?

Depending on the answers to these and other questions, a company can decide which security processes make good business sense for them. The security posture must balance:

• The security needs of the business
• The operational concerns of the business
• The financial constraints of the business

The answers to the questions stated earlier can be ascertained by performing a security assessment. An independent third-party security assessment can help a company define what its security needs are and provide a framework for enhancing and developing its information security program. Like an audit, it is important for an assessment to be independent so that results are not (or do not have the appearance of being) biased in any way. An independent security assessment using an internal auditor or a third-party consultant can facilitate open and honest discussion that will provide meaningful information.

If hiring a third-party consultant to perform an assessment, it is important to properly evaluate their qualifications and set up the engagement carefully. The results of the security assessment will serve as the guidance for short- and long-term security initiatives; therefore, it is imperative to perform the appropriate due diligence evaluation of any consulting firm considered. In evaluating a third-party consultant, some attributes that management should review include:

• Client references. Determine where they have previously performed security assessments.
• Sample deliverables. Obtain a sense of the type of report that will be provided. Clients sometimes receive boilerplate documents or voluminous reports from security software packages that are difficult to decipher, not always accurate, and fail to adequately define the risks.


Security Assessment • Qualifications of the consultants. Determine if the consultants have technical or industry certifications (e.g., CISSP, CISA, MCSE, etc.) and what type of experience they have. • Methodology and tools. Determine if the consultants have a formal methodology for performing the assessment and what tools are used to do some of the technical pieces of the assessment. Because the security assessment will provide a roadmap for the information security program, it is critical that a quality assessment be performed. Once the selection of who is to do the security assessment is finalized, management should define or put parameters around the engagement. Some things to consider include: • Scope. The scope of the assessment must be very clear, that is, network, servers, specific departments or business units, etc. • Timing. One risk with assessments is that they can drag on. The people who will offer input should be identified as soon as possible, and a single point of contact should be appointed to work with the consultants or auditors performing the assessment to ensure that the work is completed on time. • Documentation. The results of the assessment should be presented in a clear and concise fashion so management understands the risks and recommendations. STANDARDS The actual security assessment must measure the security posture of a company against standards. Security standards range from ones that address high-level operational processes to more technical and sometimes technology-specific standards. Some examples include: • ISO 17799: Information Security Best Practices. This standard was developed by a consortium of companies and describes best practices for information security in the areas listed below. This standard is very process driven and is technology independent. — Security policy — Organizational security — Asset classification and control — Personnel security — Physical and environmental security — Communications and operations management — Access control — Systems development and maintenance — Business continuity management — Compliance 315


SECURITY MANAGEMENT PRACTICES • Common Criteria (http://www.commoncriteria.org). “Represents the outcome of a series of efforts to develop criteria for evaluation of IT security products that are broadly useful within the international community.”1 The Common Criteria are broken down into three parts listed below: — Part 1: Introduction and general model: defines general concepts and principles of IT security evaluation and presents a general model for evaluation — Part 2: Security functional requirements — Part 3: Security assurance requirements • SANS/FBI Top 20 Vulnerabilities (http://www.sans.org/top20.htm). This is an updated list of the 20 most significant Internet security vulnerabilities broken down into three categories: General, UNIX related, and NT related. • Technology-specific standards. For instance, best practices for locking down Microsoft products can be found on the Microsoft Web site. When performing an assessment, parts or all of the standards listed above or other known standards can be used. In addition, the consultant or auditor should leverage past experience and their knowledge of the company. UNDERSTANDING THE BUSINESS To perform an effective security assessment, one must have a thorough understanding of the business environment. Some of the components of the business environment that should be understood include: • What are the inherent risks for the industry in which the company operates? • What is the long- and short-term strategy for the company? — What are the current business requirements, and how will this change during the short term and the long term? • What is the organizational structure, and how are security responsibilities handled? • What are the critical business processes that support the core operations? • What technology is in place? To answer these and other questions, the appropriate individuals, including business process owners, technology owners, and executives, should be interviewed. INHERENT RISKS As part of obtaining a detailed understanding of the company, an understanding of the inherent risks in the business is required. Inherent risks are 316


Security Assessment those risks that exist in the business without considering any controls. These risks are a result of the nature of the business and the environment in which it operates. Inherent risks can be related to a particular industry or to general business practices, and can range from regulatory concerns as a result of inadequate protection of data to risks associated with disgruntled employees within an information technology (IT) department. These risks can be ascertained by understanding the industry and the particular company. Executives are often a good source of this type of information. BUSINESS STRATEGY Understanding the business strategy can help identify what is important to a company. This will ultimately be a factor in the risk assessment and the associated recommendations. To determine what is important to a company, it is important to understand the long- and short-term strategies. To take this one step further, how will IT support the long- and short-term business strategies? What will change in the IT environment once the strategies are implemented? The business strategy gives an indication of where the company is heading and what is or is not important. For example, if a company were planning on consolidating business units, the security assessment might focus on integration issues related to consolidation, which would be valuable input in developing a consolidation strategy. One example of a prevalent business strategy for companies of all sizes is facilitating employee telecommuting. In today’s environment, employees are increasingly accessing corporate networks from hotels or their homes during business hours as well as off hours. Executives as well as lowerlevel personnel have become dependent on the ability to access company resources at any time. From a security assessment perspective, the key objective is to determine if the infrastructure supporting remote access is secure and reliable. Some questions that an assessment might address in evaluating a remote access strategy include: • How will remote users access the corporate network (e.g., dial in, VPN, etc.)? • What network resources do remote users require (e.g., e-mail, shared files, certain applications)? — Based on what users must access, what kind of bandwidth is required? • What is the tolerable downtime for remote access? Each of the questions above has technology and process implications that need to be considered as part of the security assessment. In addition to the business strategies, it is also helpful to understand security concerns at the executive level. Executives offer the “big-picture” 317


SECURITY MANAGEMENT PRACTICES view of the business, which others in the business sometimes do not. This high-level view can help prioritize the findings of a security assessment according to what is important to senior management. Interfacing with executives also provides an opportunity to make them more aware of security exposures that may potentially exist. ORGANIZATIONAL STRUCTURE For an information security program to be effective, the organization structure must adequately support it. Where the responsibility for information security resides in an organization is often an indication of how seriously management views information security. In many companies today, information security is the responsibility of a CISO (chief information security officer) who might report to either the CIO (chief information officer) or the CEO (chief executive officer). The CISO position has risen in prominence since the September 11 attacks. According to a survey done in January 2002 by Booz Allen Hamilton, “firms with more than $1 billion in annual revenues … 54 percent of the 72 chief executive officers it surveyed have a chief security officer in place. Ninety percent have been in that position for more than two years.”2 In other companies, either middle- or lowerlevel management within an IT organization handles security. Having a CISO can be an indication that management has a high level of awareness of information security issues. Conversely, information security responsibility at a lower level might mean a low level of awareness of information security. While this is not always true, a security assessment must ascertain management and company attitude regarding the importance of information security. Any recommendations that would be made in the context of a security assessment must consider the organizational impact and, more importantly, whether the current setup of the organization is conducive to implementing the recommendations of the security assessment in the first place. Another aspect of where information security resides in an organization is whether roles and responsibilities are clearly defined. As stated earlier, information security is a combination of process and technology. Roles and responsibilities must be defined such that there is a process owner for the key information security-related processes. In evaluating any part of an information security program, one of the first questions to ask is: “Who is responsible for performing the process?” Oftentimes, a security assessment may reveal that, while the process is very clearly defined and adequately addresses the business risk, no one owns it. In this case, there is no assurance that the process is being done. A common example of this is the process of ensuring that terminated employees are adequately processed. When employees are terminated, some things that are typically done include: 318


• Payroll is stopped.
• All user access is eliminated.
• All assets (i.e., computers, ID badges, etc.) are returned.
• Common IDs and passwords that the employee was using are changed.
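As the discussion that follows stresses, these steps break down most often when no single owner is accountable for confirming that each one was completed. The sketch below is one hypothetical way a termination checklist could be tracked so that every step carries a named owner and unfinished items are easy to report; the department names and step wording are assumptions invented for this illustration, not part of the original text.

from dataclasses import dataclass

@dataclass
class ChecklistItem:
    step: str           # what must be done
    owner: str          # department accountable for confirming completion
    done: bool = False  # set to True when the owner signs off

def open_items(checklist):
    # Return the steps that still lack sign-off, with their owners.
    return [(item.step, item.owner) for item in checklist if not item.done]

termination_checklist = [
    ChecklistItem("Stop payroll", "Human Resources"),
    ChecklistItem("Remove all user access", "IT Security"),
    ChecklistItem("Collect computers, ID badges, and other assets", "Facilities"),
    ChecklistItem("Change common IDs and passwords the employee used", "IT Operations"),
]

for step, owner in open_items(termination_checklist):
    print(f"OPEN: {step} (owner: {owner})")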

Each of the steps above requires coordination among various departments, depending on the size and structure of a given company. Ensuring that terminated employees are processed correctly might mean coordination among departments such as human resources, IT, finance, and others. To ensure the steps outlined above are completed, a company might have a form or checklist to help facilitate communication among the relevant departments and to have a record that the process has been completed. However, without someone in the company owning the responsibility of ensuring that the items on the checklist are completed, there is no assurance that a terminated employee is adequately processed. It might be the case that each department thought someone else was responsible for it. Too often, in the case of terminated employees, processing is incomplete because of a lack of ownership of the process, which presents significant risk for any company. Once there are clear roles and responsibilities for security-related processes, the next step is to determine how the company ensures compliance. Compliance with security processes can be checked using two methods. First, management controls can be built into the processes to ensure compliance. Building on the example of terminated employees, one of the significant elements in the processing is to ensure that the relevant user IDs are removed. If the user IDs of the terminated employees are, by mistake, not removed, it can be still be caught during periodic reviews of user IDs. This periodic review is a management control to ensure that only valid user IDs are active, while also providing a measure of security compliance. The second method of checking compliance is an audit. Many internal audit departments include information security as part of their scope as it grows in importance. The role of internal audit in an information security program is twofold. First, audits check compliance with key security processes. Internal audits focus on different processes and related controls on a rotation basis over a period of time based on risk. The auditors gain an understanding of the processes and associated risks and ensure that internal controls are in place to reasonably mitigate the risks. Essentially, internal audit is in a position to do a continuous security assessment. Second, internal audits provide a company with an independent evaluation of the business processes, associated risks, and security policies. Because of their experience with and knowledge of the business and technology, internal auditors can evaluate and advise on security processes and related internal controls. 319


SECURITY MANAGEMENT PRACTICES While there are many internal audit departments that do not have an adequate level of focus on information security, its inclusion within the scope of internal audit activities is an important indication about the level of importance placed on it. Internal audit is in a unique position to raise the level of awareness of information security because of its independence and access to senior management and the audit committee of the board of directors. BUSINESS PROCESSES In conjunction with understanding the organization, the core business processes must be understood when performing a security assessment. The core business processes are those that support the main operations of a company. For example, the supply-chain management process is a core process for a manufacturing company. In this case, the security related to the systems supporting supply-chain management would warrant a close examination. A good example of where core business processes have resulted in increased security exposures is business-to-business (B2B) relationships. One common use of a B2B relationship is where business partners manage supply-chain activities using various software packages. In such a relationship, business partners might have access to each other’s manufacturing and inventory information. Some controls for potential security exposures as a result of such an arrangement include ensuring that: • Business partners have access based on a need-to-know basis. • Communication of information between business partners is secure. • B2B connection is reliable. These security exposure controls have information security implications and should be addressed in an information security program. For example, ensuring that business partners have access on a need-to-know basis might be accomplished using the access control features of the software as well as strict user ID administration procedures. The reliability of the B2B connection might be accomplished with a combination of hardware and software measures as well as SLAs (service level agreements) establishing acceptable downtime requirements. In addition to the core business processes listed above, security assessments must consider other business processes in place to support the operations of a company, including: • • • • • 320

• Backup and recovery
• Information classification
• Information retention
• Physical security
• User ID administration


• Personnel security
• Business continuity and disaster recovery
• Incident handling
• Software development
• Change management
• Noncompliance

The processes listed above are the more traditional security-related processes that are common across most companies. In some cases, these processes might be discussed in conjunction with the core business processes, depending on the environment. In evaluating these processes, guidelines such as the ISO 17799 and the Common Criteria can be used as benchmarks. It is important to remember that understanding any of the business processes means understanding the manual processes as well as the technology used to support them. Business process owners and technology owners should be interviewed to determine exactly how the process is performed. Sometimes, a walk-through is helpful in gaining this understanding.

TECHNOLOGY ENVIRONMENT
As stated in the previous section, the technology supporting business processes is an important part of the security assessment. The technology environment ranges from industry-specific applications, to network operating systems, to security software such as firewalls and intrusion detection systems. Some of the more common areas to focus on in a security assessment include:

• Critical applications
• Local area network
• Wide area network
• Server operating systems
• Firewalls
• Intrusion detection systems
• Anti-virus protection
• Patch levels

When considering the technology environment, it is important to not only identify the components but also to determine how they are used. For example, firewalls are typically installed to filter traffic going in and out of a network. In a security assessment, one must understand what the firewall is protecting and if the rule base is configured around business requirements. Understanding whether the technology environment is set up in alignment with business requirements will enable a more thoughtful security assessment.
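To make that point concrete, here is a small illustrative sketch (not from the original text) of how a reviewer might record firewall rules alongside the business requirement each one is meant to serve, so that rules with no documented justification stand out; the addresses, ports, and requirement descriptions are assumptions invented for the example.

# Hypothetical extract of a firewall rule base, annotated with the business
# requirement that justifies each rule. Rules without a documented requirement
# are flagged for follow-up during the assessment.
rules = [
    {"rule": "allow tcp any -> 192.0.2.10:443",
     "requirement": "Customers reach the e-commerce web server over HTTPS"},
    {"rule": "allow tcp 198.51.100.0/24 -> 192.0.2.20:1433",
     "requirement": "B2B partner updates inventory data"},
    {"rule": "allow tcp any -> 192.0.2.30:23",
     "requirement": None},  # Telnet left open with no documented business need
    {"rule": "deny ip any -> any",
     "requirement": "Default deny for all other traffic"},
]

unjustified = [r["rule"] for r in rules if not r["requirement"]]
print("Rules with no documented business requirement:", unjustified)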


SECURITY MANAGEMENT PRACTICES RISK ASSESSMENT Once there is a good understanding of the business, its critical processes, and the technology supporting the business, the actual risk assessment can be done — that is, what is the risk as a result of the security exposures? While gaining an understanding of the business and the risk assessment are listed as separate steps, it is important to note that both of these steps will tend to happen simultaneously in the context of an audit; and this process will be iterative to some extent. Due to the nature of how information is obtained and the dynamic nature of a security assessment, the approach to performing the assessment must be flexible. The assessment of risk takes the understanding of the critical processes and technology one step further. The critical business processes and the associated security exposures must be evaluated to determine what the risk is to the company. Some questions to think about when determining risk include: • What is the impact to the business if the business process cannot be performed? • What is the monetary impact? — Cost to restore information — Regulatory penalties • What is the impact to the reputation of the company? • What is the likelihood of an incident due to the security exposure? • Are there any mitigating controls that reduce the risk? It is critical to involve business process and technology owners when determining risks. Depending on how the assessment is performed, some of the questions will come up or be answered as the initial information is gathered. In addition, other more detailed questions will come up that will provide the necessary information to properly assess the risk. In addition to evaluating the business processes, the risk assessment should also be done relative to security exposures in the technology environment. Some areas on which to focus here include: • • • • •

• Perimeter security (firewalls, intrusion detection, etc.)
• Servers
• Individual PCs
• Anti-virus software
• Remote access

Security issues relating to the specific technologies listed above may come up during the discussions about the critical business processes. For example, locking down servers may arise because it is likely that there are servers that support some of the critical business processes.
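To show how answers to the impact and likelihood questions above might be rolled up into the relative low/medium/high rankings used throughout an assessment, here is a minimal sketch; the scoring scale, thresholds, and sample findings are assumptions made for illustration and are not part of the author's methodology.

# Hypothetical qualitative risk rating: combine impact and likelihood
# (1 = low, 2 = medium, 3 = high) into a relative ranking for each finding.
def risk_rating(impact: int, likelihood: int) -> str:
    score = impact * likelihood  # ranges from 1 to 9
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

findings = {
    "Internet-facing server missing critical patches": (3, 3),
    "Terminated-user IDs not removed promptly": (2, 2),
    "Visitor log at the data center incomplete": (1, 2),
}

for finding, (impact, likelihood) in findings.items():
    print(f"{risk_rating(impact, likelihood):6s} - {finding}")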


Once all the security risks have been determined, the consultant or auditor must identify what measures are in place to mitigate the risks. Some of the measures to look for include:

• Information security policies
• Technical controls (e.g., servers secured according to best practice standards)
• Business process controls (e.g., review of logs and management reports)

The controls may be identified while the process is reviewed and the risk is determined. Again, a security assessment is an iterative process in which information may not be uncovered in a structured manner. It is important to differentiate and organize the information so that risk is assessed properly. The combination of security exposures and controls (or lack thereof) to mitigate the associated risks should then be used to develop the gap analysis and recommendations. The gap analysis is essentially a detailed list of security exposures, along with controls to mitigate the associated risks. Those areas where there are inadequate controls or no controls to mitigate the security exposure are the gaps, which potentially require remediation of some kind.

The final step in the gap analysis is to develop recommendations to close the gaps. Recommendations could range from writing a security policy to changing the technical architecture to altering how the current business process is performed. It is very important that the recommendations consider the business needs of the organization. Before a recommendation is made, a cost/benefit analysis should be done to ensure that it makes business sense. It is possible that, based on the cost/benefit analysis and operational or financial constraints, the organization might find it reasonable to accept certain security risks. Because the recommendations must be sold to management, they must make sense from a business perspective.

The gap analysis should be presented in an organized format that management can use to understand the risks and implement the recommendations. An effective way to present the gap analysis is with a risk matrix with the following columns represented:

• Finding
• Risk
• Controls in place
• Recommendation
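For illustration only, one row of such a risk matrix might be recorded as follows; the finding and its wording are hypothetical examples invented for this sketch rather than material from the original chapter.

# Hypothetical example of a single risk-matrix entry using the four columns above.
risk_matrix = [
    {
        "finding": "Terminated employees' user IDs remain active for weeks",
        "risk": "High - former staff retain access to financial applications",
        "controls_in_place": "Quarterly review of active user IDs",
        "recommendation": "Assign ownership of the termination checklist "
                          "and disable IDs within 24 hours of separation",
    },
]

for row in risk_matrix:
    for column, value in row.items():
        print(f"{column:18s}: {value}")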

This format provides a simple and concise presentation of the security exposures, controls, and recommendations.


The presentation of the gap analysis is very important because management will use it to understand the security exposures and associated risks. In addition, the gap analysis can be used to prioritize short- and long-term security initiatives.

CONCLUSION
For many companies, the security assessment is the first step in developing an effective information security program because many organizations do not know where they are from a security perspective. An independent security assessment and the resulting gap analysis can help determine what the security exposures are, as well as provide recommendations for additional security measures that should be implemented. The gap analysis can also help management prioritize the tasks in the event that all the recommendations cannot be immediately implemented. The gap analysis reflects the security position at a given time, and the recommendations reflect current and future business requirements to the extent they are known. As business requirements and technologies change, security exposures will invariably change. To maintain a sound information security program, the cycle of assessments, gap analysis, and implementation of recommendations should be done on a continuous basis to effectively manage security risk.

References
1. Common Criteria Web page: http://www.commoncriteria.org/docs/origins.html.
2. Flash, Cynthia, Rise of the chief security officer, Internet News, March 25, 2002, http://www.internetnews.com/ent-news/article/0,7_997111,00.html.

ABOUT THE AUTHOR
Sudhanshu Kairab, CISSP, CISA, is an information security consultant with a diverse background, including security consulting, internal auditing, and public accounting across different industries. His recent projects include security assessments and development of security policies and procedures.



Chapter 21

Evaluating the Security Posture of an Information Technology Environment: The Challenges of Balancing Risk, Cost, and Frequency of Evaluating Safeguards
Brian R. Schultz, CISSP, CISA

The elements that could affect the integrity, availability, and confidentiality of the data contained within an information technology (IT) system must be assessed periodically to ensure that the proper safeguards have been implemented to adequately protect the resources of an organization. More specifically, the security that protects the data contained within the IT systems should be evaluated regularly. Without the assurance that the data contained within the system has integrity and is therefore accurate, the system is useless to serve the stakeholders who rely on the accuracy of such data. Historically, safeguards over a system have been evaluated as a function of compliance with laws, regulations, or guidelines that are driven by an external entity. External auditors such as financial statement auditors might assess security over a system to understand the extent of security controls implemented and whether these controls are adequate to allow them to rely on the data processed by the systems. Potential partners for a merger might assess the security of an organization’s systems to determine the effectiveness of security measures and to gain a better understanding of the systems’ condition and value. See Exhibit 21-1 for a list of common IT evaluation methodologies.



SECURITY MANAGEMENT PRACTICES Exhibit 21-1. Common IT evaluation types. Type of Evaluation: Financial Statement Audit Stakeholders: All professionals who work for the organization or who own a company that undergoes an annual financial statement audit. Description: Financial statement auditors review the financial data of an organization to determine whether the financial data is accurately reported. As a component of performing the financial statement audit, they also review the controls (safeguards) used to protect the integrity of the data. Financial statement auditors are not concerned with the confidentiality or availability of data as long as it has no impact on the integrity of the data. This work will be conducted in accordance with American Institute of Certified Public Accountants (AICPA) standards for public organizations and in accordance with the Federal Information System Control Audit Methodology (FISCAM) for all U.S. federal agency financial statement audits. Type of Evaluation: Due Diligence Audit before the Purchase of a Company Stakeholders: Potential buyers of a company. Description: Evaluation of the safeguards implemented and the condition of an IT system prior to the purchase of a company. Type of Evaluation: SAS 70 Audit Stakeholders: The users of a system that is being processed by a facility run by another organization. Description: The evaluation of data centers that process (host) applications or complete systems for several organizations. The data center will frequently obtain the services of a third-party organization to perform an IT audit over the data center. The report, commonly referred to as an SAS 70 Report, provides an independent opinion of the safeguards implemented at the shared data center. The SAS 70 Report is generally shared with each of the subscribing organizations that uses the services of the data center. Because the SAS 70 audit and associated report are produced by a third-party independent organization, most subscribing organizations of the data center readily accept the results to be sufficient, eliminating the need to initiate their own audits of the data center. Type of Evaluation: Federal Financial Institutions Examination Council (FFIEC) Information Systems Examination Stakeholders: All professionals in the financial industry and their customers. Description: Evaluation of the safeguards affecting the integrity, reliability, and accuracy of data and the quality of the management information systems supporting management decisions. Type of Evaluation: Health Insurance Portability Accountability Act (HIPAA) Compliance Audit Stakeholders: All professionals in health care and patients. Description: Evaluation of an organization’s compliance with HIPAA specifically in the area of security and privacy of healthcare data and data transmissions.



Evaluating the Security Posture of an IT Environment Exhibit 21-1. Common IT evaluation types (Continued). Type of Evaluation: U.S. Federal Government Information Systems Reform Act (GISRA) Review Stakeholders: All U.S. federal government personnel and American citizens. Description: Evaluation of safeguards of federal IT systems with a final summary report of each agency’s security posture provided to the Office of Management and Budget. Type of Evaluation: U.S. Federal Government Risk Assessment in compliance with Office of Management and Budget Circular A-130 Stakeholders: All federal government personnel and those who use the data contained within those systems. Description: Evaluation of U.S. government major applications and general support systems every three years to certify and accredit that the system is properly secured to operate and process data.

Evaluations of IT environments generally are not performed proactively by the IT department of an organization. This is primarily due to a performance-focused culture within the ranks of the chief information officers and other executives of organizations who have been driven to achieve performance over the necessity of security. As more organizations experience performance issues as a result of lack of effective security, there will be more proactive efforts to integrate security into the development of IT infrastructures and the applications that reside within them. In the long run, incorporating security from the beginning is significantly more effective and results in a lower cost over the life cycle of a system. Internal risk assessments should be completed by the information security officer or an internal audit department on an annual basis and more often if the frequency of hardware and software changes so necessitates. In the case of a major launch of a new application or major platform, a preimplementation (before placing into production) review should be performed. If an organization does not have the capacity or expertise to perform its own internal risk assessment or pre-implementation evaluation, a qualified consultant should be hired to perform the risk assessment. The use of a contractor offers many advantages: • Independent evaluators have a fresh approach and will not rely on previously formed assumptions. • Independent evaluators are not restricted by internal politics. • Systems personnel are generally more forthright with an outside consultant than with internal personnel. • Outside consultants have been exposed to an array of systems of other organizations and can offer a wider perspective on how the security posture of the system compares with systems of other organizations. 327



Exhibit 21-2. Security life-cycle model (figure): security strategy and policy at the core, surrounded by a repeating cycle of design, test, implement, and assess phases.

• Outside consultants might have broader technology experience based on their exposure to multiple technologies and therefore are likely to be in a position to offer recommendations for improving security. When preparing for an evaluation of the security posture of an IT system, the security life-cycle model should be addressed to examine the organization’s security strategy, policies, procedures, architecture, infrastructure design, testing methodologies, implementation plans, and prior assessment findings. SECURITY LIFE-CYCLE MODEL The security life-cycle model contains all of the elements of security for a particular component of security of an information technology as seen in Exhibit 21-2. Security elements tend to work in cycles. Ideally, the security strategy and policy are determined with a great deal of thought and vision followed by the sequential phases of design, test, implement and, finally, assess. The design phase is when the risk analyst examines the design of safeguards and the chosen methods of implementation. In the second phase, the test phase, the risk assessment examines the testing procedures and processes that are used before placing safeguards into production. In the following phase, the implementation phase, the risk assessment analyzes the effectiveness of the technical safeguards settings contained within the operating system, multilevel security, database management system, application-level security, public key infrastructure, intrusion detection system, firewalls, and routers. These safeguards are evaluated using 328



Exhibit 21-3. Elements of an organization's security posture (figure): threats, vulnerability, and data.

technical vulnerability tools as well as a manual review of security settings provided on printed reports. Assessing security is the last phase of the security life-cycle model, and it is in this phase that the actions taken during the previous phases of the security life-cycle model are assessed. The assess phase is the feedback mechanism that provides the organization with the condition of the security posture of an IT environment. The risk assessment first focuses on the security strategy and policy component of the model. The security strategy and policy component is the core of the model, and many information security professionals would argue that this is the most important element of a successful security program. The success or failure of an organization’s security hinges on a well-formulated, risk-based security strategy and policy. When used in the appropriate context, the security life-cycle model is an effective tool to use as a framework in the evaluation of IT security risks. ELEMENTS OF RISK ASSESSMENT METHODOLOGIES A risk assessment is an active process that is used to evaluate the security of an IT environment. Contained within each security assessment methodology are the elements that permit the identification and categorization of the components of the security posture of a given IT environment. These identified elements provide the language necessary to identify, communicate, and report the results of a risk assessment. These elements are comprised of threats, vulnerabilities, safeguards, countermeasures, and residual risk analysis. As seen in Exhibit 21-3, each of these elements is dynamic and, in combination, constitutes the security posture of the IT environment. 329
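As one hypothetical way of keeping these elements organized while an assessment is under way, the sketch below models a finding as a threat paired with the vulnerability it could exploit, the safeguards currently in place, and the residual risk that remains; the field names and the sample entry are assumptions made for illustration, not part of the author's methodology.

# Hypothetical working record for one finding in a risk assessment.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Finding:
    threat: str
    vulnerability: str
    safeguards: List[str] = field(default_factory=list)
    residual_risk: str = "undetermined"  # low / medium / high after analysis

example = Finding(
    threat="External attacker probing Internet-facing systems",
    vulnerability="No firewall protecting the research network segment",
    safeguards=["Network intrusion detection system"],
    residual_risk="high",
)
print(example)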


SECURITY MANAGEMENT PRACTICES THREATS A threat is a force that could affect an organization or an element of an organization. Threats can be either external or internal to an organization and, by themselves, are not harmful. However, they have the potential to be harmful. Threats are also defined as either man-made — those that mankind generates — or natural — those that naturally occur. For a threat to affect an organization, it must exploit an existing vulnerability. Every organization is vulnerable to threats. The number, frequency, severity, type, and likelihood of each threat are dependent on the environment of the IT system. Threats can be ranked on a relative scale of low, medium, and high, based on the potential risk to an asset or group of assets. • Low indicates a relatively low probability that this threat would have significant effect. • Medium indicates a moderate probability that this threat would have significant effect if not mitigated by an appropriate safeguard. • High indicates a relatively high probability that the threat could have significant effect if not mitigated by an appropriate safeguard or series of safeguards. VULNERABILITY Vulnerability is a weakness or condition of an organization that could permit a threat to take advantage of the weakness to affect its performance. The absence of a firewall to protect an organization’s network from external attacks is an example of vulnerability in the protection of the network from potential external attacks. All organizations have and will continue to have vulnerabilities. However, each organization should identify the potential threats that could exploit vulnerabilities and properly safeguard against threats that could have a dramatic effect on performance. SAFEGUARDS Safeguards, also called controls, are measures that are designed to prevent, detect, protect, or sometimes react to reduce the likelihood — or to completely mitigate the possibility — of a threat to exploit an organization’s vulnerabilities. Safeguards can perform several of these functions at the same time, or they may only perform one of these functions. A firewall that is installed and configured properly is an example of a safeguard to prevent external attacks to the organization’s network. Ideally, a “defensein-depth” approach should be deployed to implement multiple layers of safeguards to establish the appropriate level of protection for the given environment. The layering of protection provides several obstacles for an attacker, thereby consuming the attacker’s resources of time, money, and risk in continuing the attack. For instance, a medical research firm should safeguard its product research from theft by implementing a firewall on its 330


Evaluating the Security Posture of an IT Environment network to prevent someone from obtaining unauthorized access to the network. In addition, the firm might also implement a network intrusion detection system to create an effective defense-in-depth approach to external network safeguards. A countermeasure is a type of safeguard that is triggered by an attack and is reactive in nature. Its primary goal is to defend by launching an offensive action. Countermeasures should be deployed with caution because they could have a profound effect on numerous systems if activated by an attack. RESIDUAL RISK ANALYSIS As a risk assessment is completed, a list of all of the identified vulnerabilities should be documented and a residual risk analysis performed. Through this process, each individual vulnerability is examined along with the existing safeguards (if any), and the residual risk is then determined. The final step is the development of recommendations to strengthen existing safeguards or recommendations to implement new safeguards to mitigate the identified residual risk. RISK ASSESSMENT METHODOLOGIES Several risk assessment methodologies are available to the information security professional to evaluate the security posture of an IT environment. The selection of a methodology is based on a combination of factors, including the purpose of the risk assessment, available budget, and the required frequency. The primary consideration in selecting a risk assessment methodology, however, is the need of the organization for performing the risk assessment. The depth of the risk assessment required is driven by the level of risk attributed to the continued and accurate performance of the organization’s systems. An organization that could be put out of business by a systems outage for a few days would hold a much higher level of risk than an organization that could survive weeks or months without their system. For example, an online discount stockbroker would be out of business without the ability to execute timely stock transactions, whereas a construction company might be able to continue operations for several weeks without access to its systems without significant impact. An organization’s risk management approach should also be considered before selecting a risk assessment methodology. Some organizations are proactive in their approach to addressing risk and have a well-established risk management program. Before proceeding in the selection of a risk assessment methodology, it would be helpful to determine if the organization has such a program and the extent of its depth and breadth. In the case 331


SECURITY MANAGEMENT PRACTICES of a highly developed risk assessment methodology, several layers of safeguards are deployed and require a much different risk assessment approach than if the risk management program were not developed and few safeguards had been designed and deployed. Gaining an understanding of the design of the risk management program, or lack thereof, will enable the information security professional conducting the risk assessment to quickly identify the layers of controls that should be considered when scoping the risk assessment. The risk assessment methodologies available to the information security professional are general and not platform specific. There are several methodologies available, and the inexperienced information security professional and those not familiar with the risk assessment process will quickly become frustrated with the vast array of methodologies and opinions with regard to how to conduct an IT risk assessment. It is the author’s opinion that all IT risk assessment methodologies should be based on the platform level. This is the only real way to thoroughly address the risk of a given IT environment. Some of the highest risks associated within an IT environment are technology specific; therefore, each risk assessment should include a technical-level evaluation. However, the lack of technology-specific vulnerability and safeguard information makes the task of a technically driven risk assessment a challenge to the information security professional. Hardware and software changes frequently open up new vulnerabilities with each new version. In an ideal world, a centralized depository of vulnerabilities and associated safeguards would be available to the security professional. In the meantime, the information security professional must rely on decentralized sources of information regarding technical vulnerabilities and associated safeguards. Although the task is daunting, the information security professional can be quite effective in obtaining the primary goal, which is to reduce risk to the greatest extent possible. This might be accomplished by prioritizing risk mitigation efforts on the vulnerabilities that represent the highest risk and diligently eliminating lower-risk vulnerabilities until the risk has been reduced to an acceptable level. Several varieties of risk assessments are available to the information security professional, each one carrying unique qualities, timing, and cost. In addition, risk assessments can be scoped to fit an organization’s needs to address risk and to the budget available to address risk. The lexicon and standards of risk assessments vary greatly. While this provides for a great deal of flexibility, it also adds a lot of frustration when trying to scope an evaluation and determine the associated cost. Listed below are several of the most common types of risk assessments. 332


QUALITATIVE RISK ASSESSMENT
A qualitative risk assessment is subjective, based on best practices and the experience of the professional performing it. Generally, the findings of a qualitative risk assessment will result in a list of vulnerabilities with a relative ranking of risk (low, medium, or high). Some standards exist for some specific industries, as listed in Exhibit 21-1; however, qualitative risk assessments tend to be open and flexible, providing the evaluator a great deal of latitude in determining the scope of the evaluation. Given that each IT environment potentially represents a unique combination of threats, vulnerabilities, and safeguards, the flexibility is helpful in obtaining quick, cost-effective, and meaningful results. Due to this flexibility, the scope and cost of the qualitative risk assessment can vary greatly. Therefore, evaluators have the ability to scope evaluations to fit an available budget.

QUANTITATIVE RISK ASSESSMENT
A quantitative risk assessment follows many of the same methodologies of a qualitative risk assessment, with the added task of determining the cost associated with the occurrence of a given vulnerability or group of vulnerabilities. These costs are calculated by determining asset value, threat frequency, threat exposure factors, safeguard effectiveness, safeguard cost, and uncertainty calculations. This is a highly effective methodology in communicating risk to an audience that appreciates interpreting risk based on cost. For example, if an information systems security officer of a large oil company wanted to increase the information security budget of the department, presentation of the proposed budget to the board of directors for approval is required. The best way for this professional to effectively communicate the need for additional funding to improve safeguards and the associated increase in the budget is to report the cost of the risk in familiar terms with which the board members are comfortable. In this particular case, the members of the board are very familiar with financial terms. Thus, the expression of risk in terms of financial cost provides a compelling case for action. For such an audience, a budget increase is much more likely to be approved if the presenter indicates that the cost of not increasing the budget has a high likelihood of resulting in a "two billion dollar loss of revenue" rather than "the risk represents a high operational cost." Although the risk represented is the same, the ability to communicate risk in financial terms is very compelling.

A quantitative risk assessment approach requires a professional or team of professionals who are exceptional in their professions to obtain meaningful and accurate results. They must be well seasoned in performing qualitative and quantitative risk assessments, as the old GI-GO (garbage-in, garbage-out) rule applies. If the persons performing the quantitative risk assessment do not properly estimate the cost of an asset and frequency of loss expectancy, the risk assessment will yield meaningless results.
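The chapter does not spell out the arithmetic behind these cost figures, so the following worked example is a sketch under common assumptions rather than the author's own method: quantitative approaches frequently combine asset value, an exposure factor, and an annualized rate of occurrence into an annualized loss expectancy (ALE). Every number below is invented for illustration.

# Illustrative (hypothetical) annualized loss expectancy calculation:
#   SLE = asset value * exposure factor        (loss from a single incident)
#   ALE = SLE * annualized rate of occurrence  (expected yearly loss)
asset_value = 5_000_000           # assumed value of the asset at risk
exposure_factor = 0.40            # assumed portion of the asset lost per incident
annual_rate_of_occurrence = 0.25  # assumed: one incident expected every four years

single_loss_expectancy = asset_value * exposure_factor
annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence

print(f"SLE = ${single_loss_expectancy:,.0f}")   # $2,000,000
print(f"ALE = ${annualized_loss_expectancy:,.0f}")  # $500,000

Expressing an exposure as an expected yearly dollar loss is what allows it to be weighed directly against the cost of the safeguard that would reduce it.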


In addition to requiring a more capable professional, a quantitative risk assessment approach necessitates the use of a risk assessment tool such as RiskWatch or CORA (Cost of Risk Analysis). The requirement for the advanced skills of a quantitative risk assessment professional and the use of a quantitative risk assessment tool significantly increases the cost above that of a qualitative risk assessment. For many organizations, a qualitative risk assessment would be more than adequate to identify risk for appropriate mitigation. As a word of caution when using a quantitative approach, much like the use of statistics in politics to influence an audience's opinion, the cost information that results from a quantitative risk assessment could be manipulated to lead an audience to a variety of conclusions.

INFORMATION TECHNOLOGY AUDIT
IT audits are primarily performed by external entities and internal audit departments with the charge to determine the effectiveness of the security posture over an IT environment and, in the case of a financial statement audit, to determine the reliability (integrity) of the data contained within the system. They essentially focus on the adequacy of and compliance with existing policies, procedures, technical baseline controls, and guidelines. Therefore, the primary purpose of an IT audit is to report the condition of the system and not to improve security. However, IT auditors are usually more than willing to share their findings and recommendations with the IT department. In addition, IT auditors are required to document their work in sufficient detail as to permit another competent IT auditor to perform the exact same audit procedure (test) and come to the same conclusion. This level of documentation is time-consuming and therefore usually has an effect on the depth and breadth of the evaluation. Thus, IT audits may not be as technically deep in scope as a non-audit type of evaluation.

TECHNICAL VULNERABILITY ASSESSMENT
A technical vulnerability assessment is a type of risk assessment that is focused primarily on the technical safeguards at the platform and network levels and does not include an assessment of physical, environmental, configuration management, and management safeguards.

NETWORK TECHNICAL VULNERABILITY ASSESSMENT
The safeguards employed at the network level support all systems contained within its environment. Sometimes these collective systems are referred to as a general support system. Most networks are connected to the Internet, which requires protection from exterior threats. Accordingly, a network technical vulnerability assessment should include an evaluation of the


Evaluating the Security Posture of an IT Environment Exhibit 21-4. Automated technical vulnerability assessment tools. Nessus. This is a free system security scanning software that provides the ability to remotely evaluate security within a given network and determine the vulnerabilities that an attacker might use. ISS Internet Scanner. A security scanner that provides comprehensive network vulnerability assessment for measuring online security risks, it performs scheduled and selective probes of communication services, operating systems, applications, and routers to uncover and report systems vulnerabilities. Shadow Security Scanner. This tool identifies known and unknown vulnerabilities, suggests fixes to identified vulnerabilities, and reports possible security holes within a network’s Internet, intranet, and extranet environments. It employs a unique artificial intelligence engine that allows the product to think like a hacker or network security analyst attempting to penetrate your network. NMAP. NMAP (Network Mapper) is an open-source utility for network exploration or security auditing. It rapidly scans large networks using raw IP packets in unique ways to determine what hosts are available on the network, what services (ports) they are offering, what operating system (and OS version) they are running, and what type of packet filters or firewalls are in use. NMAP is free software available under the terms of the GNU GPL. Snort. This packet-sniffing utility monitors displays and logs network traffic. L0ftCrack. This utility can crack captured password files through comparisons of passwords to dictionaries of words. If the users devised unique passwords, the utility uses brute-force guessing to reveal the passwords of the users.

safeguards implemented to protect the network and its infrastructure. This would include the routers, load balancers, firewalls, virtual private networks, public key infrastructure, single sign-on solutions, network-based operating systems (e.g., Windows 2000), and network protocols (e.g., TCP/IP). Several automated tools can be used to assist the vulnerability assessment team. See Exhibit 21-4 for a list of some of the more common tools used.

PLATFORM TECHNICAL VULNERABILITY ASSESSMENT
The safeguards employed at the platform level support the integrity, availability, and confidentiality of the data contained within the platform. A platform is defined as a combination of hardware, operating system software, communications software, security software, and the database management system and application security that support a set of data (see Exhibit 21-5 for an example of a mainframe platform diagram). The combination of these distinctly separate platform components contains a unique set of risks, necessitating that each platform be evaluated based on its unique combination. Unless the evaluator is able to examine the safeguards at the platform level, the integrity of the data cannot be properly and completely assessed and, therefore, is not reliable. Several automated tools can be used by the vulnerability assessment team.
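As a small illustration of how one of the Exhibit 21-4 tools might be driven during an assessment, the sketch below invokes NMAP from Python against a placeholder address range and prints the scan report for the working papers. The address range and the particular scan options are assumptions chosen for the example, not a prescribed procedure, and such scans should only be run against systems that are in scope and with written authorization.

# Hypothetical wrapper around the NMAP scanner listed in Exhibit 21-4.
# A SYN scan with OS detection typically requires administrative privileges.
import subprocess

def scan(target: str) -> str:
    """Run nmap against the target and return its text output."""
    result = subprocess.run(
        ["nmap", "-sS", "-O", target],  # -sS: TCP SYN scan, -O: OS detection
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # 192.0.2.0/24 is a documentation range; substitute the scoped addresses
    # agreed on for the assessment.
    print(scan("192.0.2.0/24"))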



Exhibit 21-5. Mainframe platform diagram (figure), showing the layered platform components:
• Data
• Application
• Database System
• Security Software - RACF
• Operating System - OS/390
• Hardware - IBM Mainframe

PENETRATION TESTING A penetration test, also known as a pen test, is a type of risk assessment; but its purpose is quite different. A pen test is designed to test the security of a system after an organization has implemented all designed safeguards, performed a risk assessment, implemented all recommended improvements, and implemented all new recommended safeguards. It is the final test to determine if enough layered safeguards have been sufficiently implemented to prevent a successful attack against the system. This form of ethical hacking attempts to find vulnerabilities that have been overlooked in prior risk assessments. Frequently, a successful penetration is accomplished as a result of the penetration team, otherwise known as a tiger team, discovering multiple vulnerabilities that by themselves are not considered high risk but, when combined, create a backdoor permitting the penetration team to successfully exploit the low-risk vulnerabilities. There are several potential components to a pen test that, based on the organization’s needs, can be selected for use: • External penetration testing is performed from outside of the organization’s network, usually from the Internet. The organization can either provide the pen team with the organization’s range of IP addresses or ask the evaluators to perform a blind test. Blind tests are more expensive because it will take the penetration team time to discover the IP addresses of the organization. While it might seem to be a more effective test to have the team perform a blind test, it is inevitable that the team will find the IP addresses; therefore, it may be considered a waste of time and funds. 336


• Internal penetration testing is performed within the organization's internal network. The penetration team attempts to gain access to sensitive areas of the system without authorization. The internal penetration test is especially valuable in light of the estimate that 80 percent of unauthorized-access incidents are committed by employees.
• Social engineering can be used by the pen testers to obtain, from the organization's personnel, information that might be helpful in launching an attack. For instance, a pen tester might drive up to the organization's building, write down the name on an empty reserved parking space, and then call the help desk impersonating the absent employee to report a forgotten password and request a reset. Unless the help desk personnel have a way (employee number, etc.) to verify the caller's identity, they will reset the password, giving the attacker a new password for the absent employee's account and unauthorized access to the network.
• War dialing tools can be used to automatically dial every phone number in a given exchange in an attempt to identify lines with modems attached. Once a line with an active modem has been discovered, the penetration team attempts to gain access to the system through it.
• Dumpster diving is the practice of searching through trash cans and recycling bins for information that will help the penetration team gain access to the system.
Penetration testing is the most exciting of the risk assessments because it is an all-out attempt to gain access to the system. It is the only risk assessment methodology that proves the existence of a vulnerability or series of vulnerabilities. The excitement of penetration testing is also sometimes perpetuated by those who perform it: some pen testers, also known as ethical hackers or "white hats," are retired hackers who at one time were "black hats."
Some organizations might be tempted to skip the detailed risk assessment and risk remediation plan and go straight to a penetration test. While pen testing is an enthralling process, the results will be meaningless if the organization has not done its homework before the penetration test. In all likelihood, a good penetration team will gain access to an organization's systems if the organization has not gone through the rigors of a risk assessment and improvement of its safeguards.

EVALUATING IDENTIFIED VULNERABILITIES

After the vulnerabilities have been identified through a risk assessment, a vulnerability analysis should be performed to rank each vulnerability according to its risk level:


• Low. The risk of this vulnerability is not considered significant; however, several low-risk vulnerabilities in combination might together represent a medium or high risk. Recommended safeguards should be reviewed to determine whether they are practical or cost-effective relative to the risk of the vulnerability.
• Medium. This risk is potentially significant. If the vulnerability could be exploited more readily in combination with another vulnerability, the risk could be ranked higher. Corrective action for a medium-risk item should be taken within a short period of time, after careful consideration of the cost-effectiveness of implementing the recommended safeguard.
• High. The risk of this vulnerability is significant and, if exploited, could have profound effects on the viability of the organization. Immediate corrective action should be taken to mitigate the risk.

ANALYZING PAIRED VULNERABILITIES

In addition to ranking individual vulnerabilities, an analysis of all of the vulnerabilities should be performed to determine whether any combinations of vulnerabilities, when considered together, represent a higher level of risk. These potentially higher-risk combinations should be documented and action taken to mitigate the risk. This is particularly important for the low-risk items, because a combination of lower-risk items could create the backdoor that permits an attacker to gain access to the system. To determine the relative nominal risk level of the identified vulnerabilities, the information security professional should identify the potential layers of safeguards that mitigate each risk and then determine the residual risk. A residual risk mitigation plan should then be developed to reduce the residual risk to an acceptable level.

CONCLUSION

Unfortunately, security assessments are usually the last action that the IT department initiates as part of its security program. Other priorities such as application development, infrastructure building, or computer operations typically take precedence. Many organizations do not take security past the initial implementation because of the rush to build functionality into their systems — until an IT auditor or a hacker forces them to take security seriously. The "pressures to process" sometimes force organizations to ignore prudent security design and security assessment, leaving security as an afterthought. In these circumstances, security is not considered a critical element in serving the users, and so it is often left behind. The reality is that information contained within a system cannot be relied upon as having integrity unless security has been assessed and adequate protection of the data has been provided for the entire time the data has resided on the system.


Evaluating the security posture of an IT environment is a challenge that involves balancing risk, frequency of evaluation, and cost. Security that is designed, tested, and implemented based on a strong security strategy and policy will be highly effective and, in the long run, cost-effective. Unfortunately, there are no clear-cut answers regarding how often a given IT environment should be evaluated. The answer may be found by defining how long the organization can viably operate without its systems. Such an answer defines the level of risk the organization is, or is not, willing to accept. A security posture built with knowledge of this threshold of risk can lead to a system of safeguards that is both risk-based and cost-effective.

ABOUT THE AUTHOR

Brian Schultz, CISSP, CISA, is chairman of the board of INTEGRITY, a nonprofit organization dedicated to assisting the federal government with implementation of information security solutions. An expert in the field of information security assessment, Mr. Schultz has, throughout his career, assessed the security of numerous private and public organizations. He is a founding member of the Northern Virginia chapter of the Information Systems Security Association (ISSA).

Copyright 2003. INTEGRITY. All Rights Reserved. Used with permission.




Chapter 22

Cyber-Risk Management: Technical and Insurance Controls for Enterprise-Level Security

Carol A. Siegel, CISSP
Ty R. Sagalow
Paul Serritella

Traditional approaches to security architecture and design have attempted to achieve the goal of the elimination of risk factors — the complete prevention of system compromise through technical and procedural means. Insurance-based solutions to risk long ago admitted that a complete elimination of risk is impossible and, instead, have focused more on reducing the impact of harm through financial avenues — providing policies that indemnify the policyholder in the event of harm. It is becoming increasingly clear that early models of computer security, which focused exclusively on the risk-elimination model, are not sufficient in the increasingly complex world of the Internet. There is simply no magic bullet for computer security; no amount of time or money can create a perfectly hardened system. However, insurance cannot stand alone as a risk mitigation tool — the front line of defense must always be a complete information security program and the implementation of security tools and products. It is only through leveraging both approaches in a complementary fashion that an organization can reach the greatest degree of risk



reduction and control. Thus, today, the optimal model requires a program of understanding, mitigating, and transferring risk by integrating technology, processes, and insurance — that is, a risk management approach.
The risk management approach starts with a complete understanding of the risk factors facing an organization. Risk assessments allow security teams to design appropriate control systems and leverage the necessary technical tools; they are also required for insurance companies to properly draft and price policies for the remediation of harm. Complete risk assessments must take into account not only the known risks to a system but also the possible exploits that may be developed in the future. The completeness of cyber-risk management and assessment is the backbone of any secure computing environment.
After a risk assessment and mitigation effort has been completed, insurance needs to be procured from a specialized insurance carrier of top financial strength and global reach. The purpose of the insurance is threefold: (1) assistance in the evaluation of the risk through products and services available from the insurer, (2) transfer of the financial costs of a successful computer attack or threat to the carrier, and (3) the provision of important post-incident support funds to reduce the potential reputation damage after an attack.

THE RISK MANAGEMENT APPROACH

As depicted in Exhibit 22-1, risk management requires a continuous cycle of assessment, mitigation, insurance, detection, and remediation.

Assess

An assessment means conducting a comprehensive evaluation of the security in an organization. It usually covers diverse aspects, ranging from physical security to network vulnerabilities. Assessments should include penetration testing of key enterprise systems and interviews with security and IT management staff. Because there are many different assessment formats, an enterprise should use a method that conforms to a recognized standard (e.g., ISO 17799, InfoSec — see Exhibit 22-2). Regardless of the model used, however, the assessment should evaluate people, processes, technology, and financial management. The completed assessment should then be used to determine what technology and processes should be employed to mitigate the risks it exposes. An assessment should be done periodically to identify new vulnerabilities and to develop a baseline for future analysis, creating consistency and objectivity.



The risk management cycle comprises five phases, each with its own activities:
Assess: evaluate the organization's security framework, including penetration testing and interviews with key personnel; use a standard methodology and guidelines for the assessment (e.g., ISO 17799, InfoSec, etc.).
Mitigate: create and implement policies and procedures that ensure high levels of security; implement financial risk mitigation and transfer mechanisms; review these periodically to ensure maintenance of the security posture.
Insure: choose the right insurance carrier based on expertise, financial strength, and global reach; choose the right policy, including both first-party and third-party coverage; implement insurance as a risk transfer solution and risk-evaluation-based security solution; work with the carrier to determine potential loss and business impact due to a security breach.
Detect: monitor assets to discover any unusual activity; implement a 24x7 monitoring system that includes intrusion detection, anti-virus, etc., to immediately identify and stop any potential intrusion; analyze logs to determine any past events that were missed.
Remediate: understand the report that the assessment yields; determine areas of vulnerability that need immediate attention; establish a recurring procedure to address these vulnerabilities; recover lost data from backup systems; execute at an alternative hot site until the primary site is available.

Exhibit 22-1. Risk management cycle.

Mitigate

Mitigation is the series of actions taken to reduce risk, minimize the chances of an incident occurring, or limit the impact of any breach that does occur. Mitigation includes creating and implementing policies that ensure high levels of security. Security policies, once created, require procedures that ensure compliance. Mitigation also includes determining and using the right set of technologies to address the threats that the organization faces, and implementing financial risk mitigation and transfer mechanisms.

Insure

Insurance is a key risk transfer mechanism that allows organizations to be protected financially in the event of loss or damage. A quality insurance program can also provide superior loss prevention and analysis recommendations, often providing premium discounts for the purchase of certain security products and services from companies known to the insurer that dovetail into a company's own risk assessment program. Initially,


Exhibit 22-2. The 11 domains of risk assessment.
Security Policy: During the assessment, the existence and quality of the organization's security policy is evaluated. Security policies should establish guidelines, standards, and procedures to be followed by the entire organization. These need to be updated frequently.
Organizational Security: One of the key areas that any assessment looks at is the organizational aspect of security. This means ensuring that adequate staff has been assigned to security functions, that there are hierarchies in place for security-related issues, and that people with the right skill sets and job responsibilities are in place.
Asset Classification and Control: Any business will be impacted if the software and hardware assets it has are compromised. In evaluating the security of the organization, the existence of an inventory management system and risk classification system have to be verified.
Personnel Security: The hiring process of the organization needs to be evaluated to ensure that adequate background checks and legal safeguards are in place. Also, employee awareness of security and usage policies should be determined.
Physical and Environmental Security: Ease of access to the physical premises needs to be tested, making sure that adequate controls are in place to allow access only to authorized personnel. Also, the availability of redundant power supplies and other essential services has to be ensured.
Communication and Operations Management: Operational procedures need to be verified to ensure that information processing occurs in a safe and protected manner. These should cover standard operating procedures for routine tasks as well as procedures for change control for software, hardware, and communication assets.
Access Control: This domain demands that access to systems and data be determined by a set of criteria based on business requirement, job responsibility, and time period. Access control needs to be constantly verified to ensure that it is available only on a need-to-know basis with strong justification.
Systems Development and Maintenance: If a company is involved in development activity, assess whether security is a key consideration at all stages of the development life cycle.
Business Continuity Management: Determining the existence of a business continuity plan that minimizes or eliminates the impact of business interruption is a part of the assessment.
Compliance: The assessment has to determine if the organization is in compliance with all regulatory, contractual, and legal requirements.
Financial Considerations: The assessment should include a review to determine if adequate safeguards have to be implemented to ensure that any security breach results in minimal financial impact. This is implemented through risk transfer mechanisms, primarily insurance that covers the specific needs of the organization.

determining potential loss and business impact due to a security breach allows organizations to choose the right policy for their specific needs. The insurance component then complements the technical solutions, policies, and procedures. A vital step is choosing the right insurance carrier by


seeking companies with specific underwriting and claims units with expertise in the area of information security, top financial ratings, and global reach. The right carrier should offer a suite of policies from which companies can choose to provide adequate coverage.

Detect

Detection implies constant monitoring of assets to discover any unusual activity. Usually this is done by implementing a 24/7 monitoring system that includes intrusion detection to immediately identify and stop any potential intrusion. Additionally, anti-virus solutions allow companies to detect new viruses or worms as they appear. Detection also includes analyzing logs to determine any past events that were missed and specifying actions to prevent future misses. Part of detection is the appointment of a team in charge of incident response.

Remediate

Remediation is the tactical response to vulnerabilities that assessments discover. This involves understanding the report that the assessment yields and prioritizing the areas of vulnerability that need immediate attention. The right tactic and solution for the most efficient closing of these holes must be chosen and implemented. Remediation should follow an established recurring procedure so that these vulnerabilities are addressed periodically.
In the cycle above, most of the phases focus on the assessment and implementation of technical controls. However, no amount of time or money spent on technology will eliminate risk. Therefore, insurance plays a key role in any risk management strategy. When properly placed, the insurance policy will transfer the financial risk of unavoidable security exposures from the balance sheet of the company to that of the insurer. As part of this basic control, companies need to have methods of detection (such as intrusion detection systems, or IDSs) in place to catch the cyber-attack when it takes place. Post-incident, the insurer will then remediate any damage done, including financial and reputation impacts. The remediation function includes recovery of data, insurance recoveries, and potential claims against third parties. Finally, the whole process starts again with an assessment of the company's vulnerabilities, including an understanding of any previously unknown threat.

TYPES OF SECURITY RISKS

The CSI 2001 Computer Crime and Security Survey2 confirms that the threat from computer crime and other information security breaches continues unabated and that the financial toll is mounting. According to the survey, 85 percent of respondents had detected computer security breaches within the past 12 months, and the total amount of financial loss


reported by those who could quantify the loss amounted to $377,828,700 — that is, over $2 million per event. One logical method for categorizing financial loss is to separate it into three general areas of risk:
1. First-party financial risk: direct financial loss not arising from a third-party claim (called first-party security risks)
2. Third-party financial risk: a company's legal liabilities to others (called third-party security risks)
3. Reputation risk: the less quantifiable damages, such as those arising from a loss of reputation and brand identity
These risks, in turn, arise from particular cyber-activities. Cyber-activities can include a Web site presence, e-mail, Internet professional services such as Web design or hosting, network data storage, and E-commerce (i.e., the purchase or sale of goods and services over the Internet).
First-party security risks include financial loss arising from damage, destruction, or corruption of a company's information assets — that is, data. Information assets — whether in the form of customer lists and privacy information, business strategies, competitor information, product formulas, or other trade secrets vital to the success of a business — are the real assets of the 21st century. Their proper protection and quantification are key to a successful company. Malicious code transmissions and computer viruses — whether launched by a disgruntled employee, an overzealous competitor, a cyber-criminal, or a prankster — can result in enormous costs of recollection and recovery.
A second type of first-party security risk is the risk of revenue loss arising from a successful denial-of-service (DoS) attack. According to the Yankee Group, in February 2000 a distributed DoS attack was launched against some of the most sophisticated Web sites, including Yahoo, Buy.com, CNN, and others, resulting in $1.2 billion in lost revenue and related damages. Finally, first-party security risk can arise from the theft of trade secrets.
Third-party security risk can manifest itself in a number of different types of legal liability claims against a company, its directors, officers, or employees. Examples of these risks can arise from the company's presence on the Web, its rendering of professional services, the transmission of malicious code or a DoS attack (whether or not intentional), and theft of the company's customer information. The very content of a company's Web site can result in allegations of copyright and trademark infringement, libel, or invasion-of-privacy claims. The claims need not even arise from the visual part of a Web page but can, and often do, arise out of the content of a site's metatags — the invisible part of a Web page used by search engines.


If a company renders Internet-related professional services to others, this too can be a source of liability. Customers or others who allege that such services, such as Web design or hosting, were rendered in a negligent manner or in violation of a contractual agreement may find relief in the court system.
Third-party claims can also arise directly from a failure of security. A company that, through negligence or the actions of a disgruntled employee, transmits a computer virus to its customers or other e-mail recipients may be open to allegations of negligent security practices. The accidental transmission of a DoS attack can pose similar legal liabilities. In addition, if a company has made itself legally obligated to keep its Web site open to its customers on a 24/7 basis, a DoS attack shutting down the Web site could result in claims by its customers. A wise legal department will make sure that the company's customer agreements specifically permit the company to shut down its Web site for any reason at any time without incurring legal liability.
Other potential third-party claims can arise from the theft of customer information such as credit card information, financial information, health information, or other personal data. For example, theft of credit card information could result in a variety of potential lawsuits, whether from the card-issuing companies that then must undergo the expense of reissuing, the cardholders themselves, or even the Web merchants who later become the victims of the fraudulent use of the stolen credit cards. As discussed later, certain industries such as financial institutions and healthcare companies have specific regulatory obligations to guard their customer data.
Directors and officers (D&Os) face unique, and potentially personal, liabilities arising out of their fiduciary duties. In addition to case law or common-law obligations, D&Os can have obligations under various statutory laws such as the Securities Act of 1933 and the Securities Exchange Act of 1934. Certain industries may also have specific statutory obligations such as those imposed on financial institutions under the Gramm-Leach-Bliley Act (GLBA), discussed in detail later.
Perhaps the most difficult and yet one of the most important risks to understand is the intangible risk of damage to the company's reputation. Will customers give a company their credit card numbers once they read in the paper that the company's database of credit card numbers was violated by hackers? Will top employees remain at a company so damaged? And what will be the reaction of the company's shareholders? Again, the best way to analyze reputation risk is to attempt to quantify it. What is the expected loss of future business revenue? What is the expected loss of market capitalization? Can shareholder class or derivative actions be foreseen? And, if so, what can the expected financial cost of those actions be in terms of legal fees and potential settlement amounts?
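To illustrate the kind of quantification these questions call for, the short sketch below applies the commonly used annualized loss expectancy calculation (single loss expectancy multiplied by the annualized rate of occurrence). The formula is standard risk-analysis practice rather than something prescribed in this chapter, and the dollar figures are purely hypothetical.

    # All figures are hypothetical; substitute estimates from the risk assessment.
    single_loss_expectancy = 2_000_000    # estimated cost of one serious breach, in dollars
    annual_rate_of_occurrence = 0.25      # expected frequency: roughly one such breach every four years

    annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence
    print(f"Annualized loss expectancy: ${annualized_loss_expectancy:,.0f}")  # $500,000 per year

A figure of this kind can then be weighed against the annual cost of additional safeguards or insurance premiums when deciding how much risk to accept, mitigate, or transfer.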


Exhibit 22-3. First- and third-party risks.
Web site presence. First-party risk: damage or theft of data (assumes the database is connected to the network) via hacking. Third-party risk: allegations of trademark, copyright, libel, invasion of privacy, and other Web content liabilities.
E-mail. First-party risk: damage or theft of data (assumes the database is connected to the network) via computer virus; shutdown of the network via DoS attack. Third-party risk: transmission of malicious code (e.g., NIMDA) or DoS due to negligent network security; DoS customer claims if the site is shut down due to a DoS attack.
E-commerce. First-party risk: loss of revenue due to a successful DoS attack. Third-party risk: customer suits.
Internet professional services. Third-party risk: customer suits alleging negligent performance of professional services.
Any activity. Third-party risk: claims against directors and officers for mismanagement.

The risks just discussed are summarized in Exhibit 22-3.

Threats

These risks do not exist in a vacuum. They are the product of specific threats operating in an environment featuring specific vulnerabilities that allow those threats to proceed uninhibited. A threat may be any person or object, from a disgruntled employee to an act of nature, that may lead to damage or value loss for an enterprise. While insurance may be used to minimize the costs of a destructive event, it is not a substitute for controls on the threats themselves.
Threats may arise from external or internal entities and may be the product of intentional or unintentional action. External entities comprise the well-known sources — hackers and virus writers — as well as less obvious ones such as government regulators or law enforcement entities. Attackers may attempt to penetrate IT systems through various means, including exploits at the system, server, or application layers. Whether the intent is to interrupt business operations or to directly acquire confidential data or access to trusted systems, the cost in system downtime, lost revenue, and system repair and redesign can be crippling to any enterprise. The collapse of the British Internet service provider (ISP) Cloud-Nine in January 2002, due to irreparable damage caused by distributed DoS attacks launched against its infrastructure, is only the most recent example of the enterprise costs of cyber-attacks.3



The figure shows enterprise resources at the center, protected by successive rings of procedural, technical, and financial controls, with internal threats, external threats, indirect threats, and government/regulatory threats arrayed outside the controls.

Exhibit 22-4. Enterprise resource threats.

Viruses and other malicious code frequently use the same exploits as human attackers to gain access to systems. However, as viruses can replicate and spread themselves without human intervention, they have the potential to cause widespread damage across an internal network or the Internet as a whole. Risks may arise from non-human factors as well. For example, system outages through failures at the ISP level, power outages, or natural disasters may create the same loss of service and revenue as attackers conducting DoS attacks. Therefore, technical controls should be put in place to minimize those risks. These risks are diagrammed in Exhibit 22-4.
Threats that originate from within an organization can be particularly difficult to track. This may entail threats from disgruntled employees (or ex-employees), or mistakes made by well-meaning employees as well. Many standard technical controls — firewalls, anti-virus software, or intrusion detection — assume that the internal users are working actively to support the security infrastructure. However, such controls are hardly sufficient against insiders working actively to subvert a system. Other types of risks — for example, first-party risks of intellectual property violations —


may be created by internal entities without their knowledge. Exhibit 22-5 describes various threats by type.
As noted, threats are comprised of motive, access, and opportunity — outsiders must have a desire to cause damage as well as a means of affecting the target system. While an organization's exposure to risk can never be completely eliminated, all steps should be taken to minimize exposure and limit the scope of damage. Vulnerabilities may take a number of forms. Technical vulnerabilities include exploits against systems at the operating system, network, or application level. Given the complexity and scope of many commercial applications, vulnerabilities within code become increasingly difficult to detect and eradicate during the testing and quality assurance (QA) processes. Examples range from the original Internet Worm to recently documented vulnerabilities in commercial instant messaging clients and Web servers. Such weaknesses are an increasing risk in today's highly interconnected environments. Weaknesses within operating procedures may expose an enterprise to risk not controlled by technology; proper change management processes, security administration processes, and human resources controls and oversight, for example, are necessary. Such weaknesses may also prove disruptive in highly regulated environments, such as financial services or healthcare, in which regulatory agencies require complete sets of documentation as part of periodic auditing requirements.

GLBA/HIPAA

Title V of the Gramm-Leach-Bliley Act (GLBA) has imposed new requirements on the ways in which financial services companies handle consumer data. The primary focus of Title V, and the area that has received the most attention, is the sharing of personal data among organizations and their unaffiliated business partners and agencies. Consumers must be given notice of the ways in which their data is used and must be given notice of their right to opt out of any data-sharing plan. However, Title V also requires financial services organizations to provide adequate security for systems that handle customer data. Security guidelines require the creation and documentation of detailed data security programs addressing both physical and logical access to data, risk assessment and mitigation programs, and employee training in the new security controls. Third-party contractors of financial services firms are also bound to comply with the GLBA regulations. On February 1, 2001, the Department of the Treasury, Federal Reserve System, and Federal Deposit Insurance Corporation issued interagency regulations, in part requiring financial institutions to:

Exhibit 22-5. Threat matrix.
External threats:
System penetration (external source). Description: attempts by external parties to penetrate corporate resources to modify or delete data or application systems. Security risk: moderate. Controls: strong authentication; strong access control; ongoing system support and tracking.
Regulatory action. Description: regulatory action or investigation based on corporate noncompliance with privacy and security guidelines. Security risk: low to moderate. Controls: data protection; risk assessment and management programs; user training; contractual controls.
Virus penetration. Description: malicious code designed to self-replicate. Security risk: moderate. Controls: technological (anti-virus) controls.
Power loss or connectivity loss. Description: loss of Internet connectivity, power, or cooling systems; may result in large-scale system outages. Security risk: low. Controls: redundant power and connectivity; contractual controls with ISP/hosting facilities.
Internal threats:
Intellectual property violation. Description: illicit use of third-party intellectual property (images, text, code) without appropriate license arrangements. Security risk: low to moderate. Controls: procedural and personnel controls; financial controls mitigating risk.
System penetration (internal source). Description: malicious insiders attempting to access restricted data. Security risk: moderate. Controls: strong authentication; strong access control; use of internal firewalls to segregate critical systems.





• Develop and execute an information security program.
• Conduct regular tests of key controls of the information security program. These tests should be conducted by an independent third party or by staff independent of those who develop or maintain the program.
• Protect against destruction, loss, or damage to customer information, including encrypting customer information while in transit or in storage on networks.
• Involve the board of directors, or an appropriate committee of the board, to oversee and execute all of the above.
Because the responsibility for developing specific guidelines for compliance was delegated to the various federal and state agencies that oversee commercial and financial services (and some guidelines are still in the process of being issued), it is possible that different guidelines for GLBA compliance will develop among different states and different financial services industries (banking, investments, insurance, etc.).
The Health Insurance Portability and Accountability Act (HIPAA) will force similar controls on data privacy and security within the healthcare industry. As part of HIPAA regulations, healthcare providers, health plans, and clearinghouses are responsible for protecting the security of client health information. As with GLBA, customer medical data is subject to controls on distribution and usage, and controls must be established to protect the privacy of customer data. Data must also be classified according to a standard classification system to allow greater portability of health data between providers and health plans. Specific guidelines on security controls for medical information have not yet been issued. HIPAA regulations are enforced through the Department of Health and Human Services.
As GLBA and HIPAA regulations are finalized and enforced, regulators will audit the organizations that handle medical or financial data to confirm compliance with their security programs. Failure to comply can be classified as an unfair trade practice and may result in fines or criminal action. Furthermore, firms that do not comply with privacy regulations may leave themselves vulnerable to class-action lawsuits from clients or third-party partners. These regulations represent an entirely new type of exposure for certain types of organizations as they increase the scope of their IT operations.

Cyber-Terrorism

The potential for cyber-terrorism deserves special mention. After the attacks of 9/11/01, it is clear that no area of the world is protected from a potential terrorist act. The Internet plays a critical role in the economic stability of our national infrastructure. Financial transactions, the running of utilities and manufacturing plants, and much more depend upon a working Internet. Fortunately, companies are coming together in newly


formed entities such as ISACs (Information Sharing and Analysis Centers) to determine their interdependency vulnerabilities and plan for the worst. It is also fortunate that the weapons used by a cyber-terrorist do not differ much from those of a cyber-criminal or other hacker. Thus, the same risk management formula discussed above should be implemented for the risk of cyber-terrorism.

INSURANCE FOR CYBER-RISKS

Insurance, when properly placed, can serve two important purposes. First, it can provide positive reinforcement for good behavior by adjusting the availability and affordability of insurance depending upon the quality of an insured's Internet security program, and it can condition the continuation of such insurance on the maintenance of that quality. Second, insurance will transfer the financial risk of a covered event from the company's balance sheet to that of the insurer.
The logical first step in evaluating potential insurance solutions is to review the company's traditional insurance program, including its property (including business interruption) insurance, comprehensive general liability (CGL), directors and officers insurance, professional liability insurance, and crime policies. These policies should be examined in connection with the company's particular risks (see above) to determine whether any gap exists. Given that these policies were written for a world that no longer exists, it is not surprising that traditional insurance policies are almost always found to be inadequate to address today's cyber-needs. This is not due to any defect in these time-honored policies but simply due to the fact that, with the advent of new-economy risks, there comes a need for specialized insurance to meet those new risks.
One of the main reasons why traditional policies such as property and CGL do not provide much coverage for cyber-risks is their premise that property means tangible property, not data. Property policies also focus on physical perils such as fire and windstorm. Business interruption insurance is sold as part of a property policy and covers, for example, lost revenue when your business burns down in a fire. It will not, however, cover E-revenue loss due to a DoS attack. Even computer crime policies usually do not cover loss other than for money, securities, and other tangible property.
This is not to say that traditional insurance can never be helpful with respect to cyber-risks. A mismanagement claim against a company's directors and officers arising from cyber-events will generally be covered under the company's directors' and officers' insurance policy to the same extent as a non-cyber claim. For companies that render professional services to others for a fee, such as financial institutions, those that fail to reasonably render those services due to a cyber-risk may find customer claims to be covered under their professional liability policy. (Internet professional companies


Exhibit 22-6. First- and third-party coverage.
Media E&O. Third-party coverage: Web content liability; professional liability.
Network security. First-party coverage: cyber-attack-caused damage, destruction, and corruption of data, theft of trade secrets, or E-revenue business interruption. Third-party coverage: transmission of a computer virus or DoS liability; theft of customer information liability; DoS customer liability.
Cyber extortion. Coverage: payment of a cyber-investigator; payment of the extortion amount where appropriate.
Reputation. First-party coverage: payment of public relations fees up to $50,000.
Criminal reward. First-party coverage: payment of a criminal reward fund up to $50,000.

should still seek to purchase a specific Internet professional liability insurance policy.)

Specific Cyber-Liability and Property Loss Policies

The inquiry detailed above illustrates the extreme dangers of relying upon traditional insurance policies to provide broad coverage for 21st-century cyber-risks. Regrettably, at present only a few specific policies provide express coverage for all the risks of cyberspace listed at the beginning of this chapter. One should be counseled against buying an insurance product simply because it has the name Internet or cyber in it. So-called Internet insurance policies vary widely, with some providing relatively little real coverage. A properly crafted Internet risk program should contain multiple products within a suite concept, permitting a company to choose which risks to cover depending upon where it is on its Internet maturity curve.4 A suite should provide at least six areas of coverage, as shown in Exhibit 22-6. These areas of coverage may be summarized as follows:
• Web content liability provides coverage for claims arising out of the content of your Web site (including the invisible metatag content), such as libel, slander, copyright, and trademark infringement.
• Internet professional liability provides coverage for claims arising out of the performance of professional services. Coverage usually includes both Web publishing activities and pure Internet services such as acting as an ISP, host, or Web designer. Any professional service conducted over the Internet can usually be added to the policy.
• Network security coverage comes in two basic types:


— Third-party coverage provides liability coverage arising from a failure of the insured's security to prevent unauthorized use of or access to its network. This important coverage would apply, subject to the policy's full terms, to claims arising from the transmission of a computer virus (such as the Love Bug or Nimda virus), theft of a customer's information (most notably credit card information), and so-called denial-of-service liability. In the last year alone, countless cases of this type of misconduct have been reported.
— First-party coverage provides, upon a covered event, reimbursement for loss arising out of the altering, copying, misappropriating, corrupting, destroying, disrupting, deleting, damaging, or theft of information assets, whether or not criminal. Typically the policy will cover the cost of replacing, reproducing, recreating, restoring, or recollecting the assets. In the case of theft of a trade secret (a broadly defined term), the policy will either pay or be capped at the endorsed negotiated amount. First-party coverage also provides reimbursement for lost E-revenue as a result of a covered event. Here, the policy will provide coverage for the period of recovery plus an extended business interruption period. Some policies also provide coverage for dependent business interruption, meaning loss of E-revenue as a result of a computer attack on a third-party business (such as a supplier) upon which the insured's business depends.
• Cyber extortion coverage provides reimbursement of investigation costs, and sometimes the extortion demand itself, in the event of a covered cyber-extortion threat. These threats, which usually take the form of a demand for "consulting fees" to prevent the release of hacked information or to prevent the extortionist from carrying out a threat to shut down the victim's Web site, are all too common.
• Public relations or crisis communication coverage provides reimbursement up to $50,000 for the use of public relations firms to rebuild an enterprise's reputation with customers, employees, and shareholders following a computer attack.
• Criminal reward funds coverage provides reimbursement up to $50,000 for information leading to the arrest and conviction of a cyber-criminal. Given that many cyber-criminals hack into sites for "bragging rights," this unique insurance provision may create a most welcome chilling effect.

Loss Prevention Services

Another important feature of a quality cyber-risk insurance program is its loss prevention services. Typically these services can include anything from free online self-assessment programs and free educational CDs to a full-fledged, on-site security assessment, usually based on ISO 17799.


Exhibit 22-7. Finding the right insurer (quality and preferred or minimum threshold).
Financial strength: Triple-A from Standard & Poor's.
Experience: At least two years in a dedicated, specialized unit composed of underwriters, claims, technologists, and legal professionals.
Capacity: Defined as the amount of limits a single carrier can offer; minimum acceptable: $25,000,000.
Territory: Global presence, with employees and law firm contacts throughout the United States, Europe, Asia, the Middle East, and South America.
Underwriting: Flexible, knowledgeable.
Claims philosophy: Customer focused; willing to meet with the client both before and after a claim.
Policy form: Suite permitting the insured to choose the right coverage, including the coverages described above.
Loss prevention: Array of services, most importantly including free on-site security assessments conducted by well-established third-party (worldwide) security assessment firms.

Some insurers may also add other services such as an internal or external network scan. The good news is that these services are valuable, costing up to $50,000. The bad news is that the insurance applicant usually has to pay for the services, sometimes regardless of whether or not it ends up buying the policy. Beginning in 2001, one carrier has arranged to pay for these services as part of the application process. This is welcome news, and it can only be hoped that more insurers will follow this lead.

Finding the Right Insurer

As important as finding the right insurance product is finding the right insurer. Financial strength, experience, and claims philosophy are all important. In evaluating insurers, buyers should take into consideration the factors listed in Exhibit 22-7.
In summary, traditional insurance is not up to the task of dealing with today's cyber-risks. To yield the full benefits, an insurance program should combine the purchase of traditional insurance with specific cyber-risk insurance.

TECHNICAL CONTROLS

Beyond insurance, standard technical controls must be put in place to manage risks. First of all, the basic physical infrastructure of the IT data center should be secured against service disruptions caused by environmental threats. Organizations that plan to build and manage their own data


centers should implement fully redundant and modular systems for power, Internet access, and cooling. For example, data centers should consider backup generators in case of area-wide power failures, and Internet connectivity from multiple ISPs in case of service outages from one provider. In cases where the customer does not wish to manage its data center directly, the above controls should be verified before contracting with an ASP or ISP. These controls should be guaranteed contractually, as should failover controls and minimum uptime requirements.

Physical Access Control

Access control is an additional necessity for a complete data center infrastructure. Physical access control is more than simply securing entrances and exits with conventional locks and security guards. Secure data centers should rely on alarm systems and approved locks for access to the most secure areas, with motion detectors throughout. More complex security systems, such as biometric5 or dual-factor authentication (authentication requiring more than one proof of identity, e.g., card and biometric), should be considered for highly secure areas. Employee auditing and tracking for entrances and exits should be put in place wherever possible, and visitor and guest access should be limited.
A summary of potential controls is provided in Exhibit 22-8. If it is feasible to do so, outside expertise in physical security, like logical security, should be leveraged wherever possible. Independent security audits may provide insight regarding areas of physical security that are not covered by existing controls. Furthermore, security reports may be required by auditors, regulators, and other third parties. Audit reports and other security documentation should be kept current and retained in a secure fashion.
Again, if an organization uses outsourced facilities for application hosting and management, it should look for multilevel physical access control. Third-party audit reports should be made available as part of the vendor search process, and security controls should be made part of the evaluation criteria. As with environmental controls, access controls should also be addressed within the final service agreement such that major modifications to the existing access control infrastructure require advance knowledge and approval. Organizations should insist on periodic audits or third-party reviews to ensure compliance.

Network Security Controls

A secure network is the first layer of defense against risk within an E-business system. Network-level controls are instrumental in preventing unauthorized access from within and without, and tracking sessions internally will detect and alert administrators in case of system penetration.


Exhibit 22-8. Physical controls.
Access control. Description: grants access to physical resources through possession of keys, cards, biometric indicators, or key combinations; multi-factor authentication may be used to increase authentication strength, and access control systems that require multiple-party authentication provide higher levels of access control. Role: securing data center access in general, as well as access to core resources such as server rooms; media (disks, CD-ROMs, tapes) should be secured using appropriate means as well; organizations should model their access control requirements on the overall sensitivity of their data and applications.
Intrusion detection. Description: detection of attempted intrusion through motion sensors, contact sensors, and sensors at standard access points (doors, windows, etc.). Role: at all perimeter access points to the data center, as well as in critical areas.
24/7 monitoring. Description: any data center infrastructure should rely on round-the-clock monitoring, through on-premises personnel and offsite monitoring. Role: validation of existing alarm and access control systems.

The diagram of Exhibit 22-9 shows an Internet router and Internet firewall creating an Internet DMZ that contains the Internet Web server and DNS, an intranet firewall separating an intranet DMZ that contains the intranet Web servers and the application server, and intrusion detection monitoring the segments.

Exhibit 22-9. Demilitarized zone architecture.

Exhibit 22-9 conceptually depicts the overall architecture of an E-business data center. Common network security controls include the following features.


Firewalls. Firewalls are critical components of any Internet-facing system. Firewalls filter network traffic based on protocol, destination port, or packet content. As firewall systems have become more advanced, the range of attack types that can be recognized by the firewall has continued to grow. Firewalls may also be upgraded to filter questionable content or scan incoming traffic for attack signatures or illicit content.
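As a rough illustration of rule-based filtering, the following Python sketch evaluates traffic against a small, hypothetical rule list by protocol, destination port, and source. The rule format and function names are invented for illustration; production firewalls implement far richer rule languages and stateful inspection.

    # Hypothetical rule set: allow inbound HTTPS and HTTP, deny everything else.
    RULES = [
        {"action": "allow", "protocol": "tcp", "port": 443, "source": "any"},
        {"action": "allow", "protocol": "tcp", "port": 80,  "source": "any"},
        {"action": "deny",  "protocol": "any", "port": None, "source": "any"},  # default deny
    ]

    def evaluate(protocol, port, source):
        """Return the action of the first rule that matches the traffic."""
        for rule in RULES:
            if rule["protocol"] not in ("any", protocol):
                continue
            if rule["port"] not in (None, port):
                continue
            if rule["source"] not in ("any", source):
                continue
            return rule["action"]
        return "deny"

    print(evaluate("tcp", 443, "203.0.113.10"))  # allow
    print(evaluate("udp", 53, "203.0.113.10"))   # deny (falls through to the default rule)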

For any infrastructure that requires access to business data, a multiple-firewall configuration should be used. An Internet demilitarized zone (DMZ) should be created for all Web-accessible systems — Web servers or DNS servers — while an intranet DMZ, separated from the Internet, contains application and database servers. This architecture prevents external entities from directly accessing application logic or business data.
Network Intrusion Detection Systems. Networked IDSs track internal sessions at major network nodes and look for attack signatures — a sequence of instructions corresponding to a known attack. These systems are generally also tied into monitoring systems that can alert system administrators in the case of detected penetration. More advanced IDSs look only for "correct" sequences of packets and use real-time monitoring capabilities to identify suspicious but unknown sequences.
Anti-virus Software. Anti-virus gateway products can provide a powerful second level of defense against worms, viruses, and other malicious code. Anti-virus gateway products, provided by vendors such as Network Associates, Trend Micro, and Symantec, can scan incoming HTTP, SMTP, and FTP traffic for known virus signatures and block the virus before it infects critical systems.
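The attack-signature matching performed by network IDSs and anti-virus gateways can be sketched in a few lines of Python. The signature names and byte patterns below are hypothetical stand-ins; real detection engines add protocol awareness, traffic reassembly, and anomaly detection on top of simple pattern matching.

    # Hypothetical signatures: pattern names and byte strings are illustrative only.
    SIGNATURES = {
        "directory-traversal": b"../..",
        "sql-injection":       b"' OR '1'='1",
    }

    def match_signatures(payload):
        """Return the names of any known attack patterns found in the payload."""
        return [name for name, pattern in SIGNATURES.items() if pattern in payload]

    alerts = match_signatures(b"GET /../../etc/passwd HTTP/1.0")
    print(alerts)  # ['directory-traversal']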

As described in Exhibit 22-10, specific design principles should be observed in building a stable and secure network, and Exhibit 22-11 provides a summary of the controls in question. Increasingly, organizations are moving toward managed network services rather than supporting these systems internally. Such a solution saves the organization from having to build a staff for managing security devices or maintain a 24/7 administration center for monitoring critical systems. This buy (or, in this case, hire) versus build decision should be seriously considered in planning the overall risk management framework. Organizations looking to outsource security functions can certainly save money, resources, and time; however, they should look closely at the financial as well as the technical soundness of any such vendor.

Application Security Controls

A successful network security strategy is only useful as a backbone to support the development of secure applications. These controls entail security at the operating system level for enterprise systems, as well as


Exhibit 22-10. Secure network design principles.
Redundancy. Firewall systems, routers, and critical components such as directory servers should be fully redundant to reduce the impact of a single failure.
Currency. Critical network tools must be kept up-to-date with respect to patch level and core system operations. Vulnerabilities are discovered frequently, even within network security devices such as firewalls or routers.
Scalability. An enterprise's network security infrastructure should be able to grow as business needs require. Service outages caused by insufficient bandwidth from an ISP, or server outages due to system maintenance, can be fatal for growing applications. The financial restitution provided by cyber-risk coverage might cover business lost during the service outage but cannot address the greater issues of loss of business, consumer goodwill, or reputation.
Simplicity. Complexity of systems, rules, and components can create unexpected vulnerabilities in commercial systems. Where possible, Internet-facing infrastructures should be modularized and simplified so that each component is not called upon to perform multiple services. For example, an organization with a complex E-business infrastructure should separate that network environment from its own internal testing and development networks, with only limited points of access between the two environments. A more audited and restricted set of rules may be enforced in the former without affecting the productivity of the latter.

Exhibit 22-11. Network security controls.
Firewall. Description: blocks connections to internal resources by protocol, port, and address; also provides stateful packet inspection. Role: behind Internet routers; also within corporate networks to segregate systems into DMZs.
IDS. Description: detects signatures of known attacks at the network level. Role: at high-throughput nodes within networks, and at the perimeter of the network (at the firewall level).
Anti-virus. Description: detects malicious code at network nodes. Role: at Internet HTTP and SMTP gateways.

trust management, encryption, data security, and audit controls at the application level. Operating systems should be treated as one of the most vulnerable components of any application framework. Too often, application developers create strong security controls within an application but have no control over lower-level exploits. Furthermore, system maintenance and administration over time are frequently overlooked as necessary components of security. Therefore, the following controls should be observed:


• Most major OS suppliers — Microsoft, Sun, Hewlett-Packard, etc. — provide guidelines for operating system hardening. Implement those guidelines on all production systems.
• Any nonessential software should be removed from production systems.
• Administer critical servers from the system console wherever possible. Remote administration should be disabled; if this is not possible, secure log-in shells should be used in place of less secure protocols such as Telnet.
• Host-based intrusion detection software should be installed on all critical systems. A host-based IDS is similar to the network-based variety, except that it only scans traffic intended for the target server. Known attack signatures may be detected and blocked before reaching the target application, such as a Web or application server.

Application-level security is based on maintaining the integrity and confidentiality of the system as well as the data managed by the system. A Web server that provides promotional content and brochures to the public, for example, has little need to provide controls on confidentiality. However, a compromise of that system resulting in vandalism or server downtime could prove costly; therefore, system and data integrity should be closely controlled. These controls are partially provided by security at the operating system and network levels, as noted above; additional controls, however, should be provided within the application itself.

Authentication and authorization are necessary components of application-level security. Known users must be identified and allowed access to the system, and system functions must be categorized such that users are presented only with access to data and procedures that correspond to their defined privilege level. The technical controls around authentication and authorization are only as useful as the procedural controls around user management. The enrollment of new users, management of personal user information and usage profiles, password management, and the removal of defunct users from the system are required for an authentication engine to provide real risk mitigation. Exhibit 22-12 provides a summary of these technologies and procedures.
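Before turning to Exhibit 22-12, the brief sketch below illustrates the authorization idea just described: users are mapped to roles, and each role is granted only the functions appropriate to its privilege level, with unknown or removed users denied by default. The role names and functions are hypothetical and are not drawn from the chapter.

    # Minimal role-based authorization check; role and function names are hypothetical.
    ROLE_PERMISSIONS = {
        "customer":      {"view_own_account", "submit_order"},
        "service_agent": {"view_own_account", "view_customer_account", "issue_refund"},
        "administrator": {"view_customer_account", "manage_users", "view_audit_log"},
    }

    USER_ROLES = {
        "alice": "customer",
        "bob": "service_agent",
    }

    def is_authorized(user, function):
        """Allow access only if the user's assigned role explicitly grants the function."""
        role = USER_ROLES.get(user)
        if role is None:                # unknown or defunct user: deny by default
            return False
        return function in ROLE_PERMISSIONS.get(role, set())

    assert is_authorized("bob", "issue_refund")
    assert not is_authorized("alice", "issue_refund")
    assert not is_authorized("carol", "view_own_account")   # removed or never-enrolled user

The reliability of such a check depends on the procedural controls described above, since stale entries in the user-to-role mapping undermine the technical control.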


Exhibit 22-12. Application security controls.

• System hardening — Description: Processes, procedures, and products to harden the operating system against exploitation of network services. Role: Should be performed for all critical servers and internal systems.
• Host-based intrusion detection — Description: Monitors connections to servers and detects malicious code or attack signatures. Role: On all critical servers and internal systems.
• Authentication — Description: Allows for identification and management of system users through identities and passwords. Role: For any critical systems; authentication systems may be leveraged across multiple applications to provide single sign-on for the enterprise.
• Access control — Description: Maps users, by identity or by role, to system resources and functions. Role: For any critical application.
• Encryption — Description: Critical business data or nonpublic client information should be encrypted (i.e., obscured) while in transit over public networks. Role: For all Internet-based transactional connectivity; encryption should also be considered for securing highly sensitive data in storage.

Data Backup and Archival

In addition to technologies to prevent or detect unauthorized system penetration, controls should be put in place to restore data in the event of loss. System backups — onto tape or permanent media — should be in place for any business-critical application. Backups should be made regularly — as often as daily, depending on the requirements of the business — and should be stored off-site to prevent loss or damage. Test restores should also be performed regularly to ensure the continued viability of the backup copies. Backup retention should extend to at least a month, with one backup per week retained for a year and monthly backups retained for several years. Backup data should always be created and stored in a highly secure fashion.
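As a rough illustration of the retention schedule just described (daily backups kept for about a month, one weekly backup kept for a year, and monthly backups kept for several years), the sketch below decides whether a backup taken on a given date should still be retained. The exact cutoffs and the choice of Sunday and first-of-month copies are assumptions made for the example, not requirements stated in the text.

    # Illustrative retention test for the schedule described above; cutoffs are assumptions.
    from datetime import date, timedelta

    def should_retain(backup_date, today, years_for_monthlies=7):
        age = today - backup_date
        if age <= timedelta(days=31):                 # keep all daily backups for a month
            return True
        if age <= timedelta(days=366) and backup_date.weekday() == 6:
            return True                               # keep one (Sunday) backup per week for a year
        if age <= timedelta(days=366 * years_for_monthlies) and backup_date.day == 1:
            return True                               # keep first-of-month backups for several years
        return False

    today = date(2002, 11, 14)
    print(should_retain(date(2002, 11, 1), today))    # True: recent daily backup
    print(should_retain(date(2002, 3, 1), today))     # True: monthly copy within the retention window
    print(should_retain(date(2001, 5, 17), today))    # False: old daily, not a weekly or monthly copy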


Finally, to ensure system availability, enterprise applications should plan on at least one tier of redundancy for all critical systems and components. Redundant systems can increase the load-bearing capacity of a system as well as provide increased stability. The use of enterprise-class multiprocessor machines is one solution; multiple systems can also be consolidated into server farms. Network devices such as firewalls and routers can also be made redundant through load balancers. Businesses may also wish to consider maintaining standby systems in the event of a critical data center failure. Standby systems, like backups, should be housed in a separate storage facility and should be tested periodically to ensure stability. These standby systems should be able to be brought online within 48 hours of a disaster and should be restored with the most recently available system backups as well.

CONCLUSION

The optimal model to address the risks of Internet security must combine technology, process, and insurance. This risk management approach permits companies to successfully address a range of different risk exposures, from direct attacks on system resources to unintentional acts of copyright infringement. In some cases, technical controls have been devised that help address these threats; in others, procedural and audit controls must be implemented. Because these threats cannot be completely removed, however, cyber-risk insurance coverage represents an essential tool in providing such nontechnical controls and a major innovation in the conception of risk management in general. A comprehensive policy backed by a specialized insurer with top financial marks and global reach allows organizations to lessen the damage caused by a successful exploit and better manage costs related to loss of business and reputation. It is only through merging the two types of controls that an organization can best minimize its security threats and mitigate its IT risks.

Notes

1. The views and policy interpretations expressed in this work by the authors are their own and do not necessarily represent those of American International Group, Inc., or any of its subsidiaries, business units, or affiliates.
2. See http://www.gocsi.com for additional information.
3. Coverage provided in ISPreview, ZDNet.
4. One carrier's example of this concept can be found at www.aignetadvantage.com.
5. Biometrics authentication comprises many different measures, including fingerprint scans, retinal or iris scans, handwriting dynamics, and facial recognition.

ABOUT THE AUTHORS Carol A. Siegel, CISSP, is the chief security officer of American International Group. Siegel is a well-known expert in the field of information security and has been in the field for more than ten years. She holds a B.S. in systems engineering from Boston University, an M.B.A. in computer applications from New York University, and is a CISA. She can be reached at [email protected]. Ty R. Sagalow is executive vice president and chief operating officer of American International Group eBusiness Risk Solutions, the largest Internet risk insurance organization. Over the past 18 years, he has held several executive and legal positions within AIG. He graduated summa cum laude from Long Island University, cum laude from Georgetown University Law Center, and holds a Master of Law from New York University. He can be reached at [email protected]. 363


Paul Serritella is a security architect at American International Group. He has worked extensively in the areas of secure application design, encryption, and network security. He received a B.A. from Princeton University in 1998.


Chapter 23

How to Work with a Managed Security Service Provider
Laurie Hill McQuillan, CISSP

Throughout history, the best way to keep information secure has been to hide it from those without a need to know. Before there was written language, the practice of information security arose when humans used euphemisms or code words to refer to communications they wanted to protect. With the advent of the computer in modern times, information was often protected by its placement on mainframes locked in fortified rooms, accessible only to those who were trusted employees and capable of communicating in esoteric programming languages.

The growth of networks and the Internet has made hiding sensitive information much more difficult. Where it was once sufficient to provide a key to those with a need to know, now any user with access to the Internet potentially has access to every node on the network and every piece of data sent through it. So while technology has enabled huge gains in connectivity and communication, it has also complicated the ability of networked organizations to protect their sensitive information from hackers, disgruntled employees, and other threats. Faced with a lack of resources, a need to recover from an attack, or little understanding of secure technology, organizations are looking for creative and effective ways to protect the information and networks on which their success depends.

OUTSOURCING DEFINED

One way of protecting networks and information is to hire someone with security expertise that is not available in-house. Outsourcing is an arrangement whereby one business hires another to perform tasks it cannot (or does not want to) perform for itself.


In the context of information security, outsourcing means that the organization turns over responsibility for the security of its information or assets to professional security managers. In the words of one IT manager, outsourcing "represents the possibility of recovering from the awkward position of trying to accomplish an impossible task with limited resources."1 This promising possibility is embodied in a new segment of the information security market called managed security service providers (MSSPs), which has arisen to provide organizations with an alternative to investing in their own systems security.

INDUSTRY PERSPECTIVE

With the exception of a few large companies that have offered security services for many years, providing outsourced security is a relatively new phenomenon. Until the late 1990s, no company described itself exclusively as a provider of security services, while in 2001, several hundred service and product providers are listed in MSSP directories. One company has estimated that companies spent $140 million on security services in 1999; and by 2001, managed security firms had secured almost $1 billion in venture capital.2 Another has predicted that the demand for third-party security services will exceed $17.2 billion by the end of 2004.3

The security products and services industry can be segmented in a number of different ways. One view is to look at the way in which the outsourced service relates to the security program supported. These services include performance of short-term or one-time tasks (such as risk assessments, policy development, and architecture planning); mid-term (including integration of functions into an existing security program); and long-range (such as ongoing management and monitoring of security devices or incidents). By far, the majority of MSSPs fall into the latter category and seek to establish ongoing and long-term relationships with their customers.

A second type of market segmentation is based on the type of information protected or on the target customer base. Some security services focus on particular vertical markets such as the financial industry, the government, or the defense industry. Others focus on particular devices and technologies, such as virtual private networks or firewalls, and provide implementation and ongoing support services. Still others offer combinations of services or partnerships with vendors and other providers outside their immediate expertise.

The outsourcing of security services is not growing only in the United States or the English-speaking world, either in terms of the organizations that choose to outsource their security or those that provide the outsourced services. Although many U.S. MSSP companies have international branches, MSSP directories turn up as many Far Eastern and European companies as American or British. In fact, these global companies grow because they understand the local requirements of their customer base.


This is particularly evident in Europe, where the international security standard ISO 17799 has gained acceptance much more rapidly than in the U.S., providing guidance for good security practices to both client and vendor organizations. This, in turn, has contributed to a reduction in the risk of experiencing some of the outsourcing performance issues described below.

Future Prospective

Many MSSPs were formed during the dot.com boom of the mid-1990s in conjunction with the rapid growth of E-commerce and the Internet. Initially, dot.com companies preferred to focus on their core businesses but neglected to secure that business, providing quick opportunity for those who understood newly evolving security requirements. Later, as the boom turned to bust, dot.coms took their expertise in security and new technology and evolved themselves into MSSPs. However, as this chapter is being written in early 2002, while the number of MSSPs is growing, a rapid consolidation and fallout among MSSPs is taking place — particularly among those who never achieved financial stability or a strong market niche. Some analysts "expect this proliferation to continue, but vendors over the next year will be sharply culled by funding limits, acquisition, and channel limits. Over the next three years, we expect consolidation in this space, first by vendors attempting multifunction aggregation, then by resellers through channel aggregation."4

OUTSOURCING FROM THE CORPORATE PERSPECTIVE

On the surface, the practice of outsourcing appears to run contrary to the ancient tenet of hiding information from those without a need to know. If the use of networks and the Internet has become central to the corporate business model, then exposing that model to an outside entity would seem inimical to good security practice. So why, then, would any organization want to undertake an outsourcing arrangement?

Relationship to the Life Cycle

The answer to this question lies in the pace at which the networked world has evolved. It is rare to read a discussion of the growth of the Internet without seeing the word exponential used to describe the rate of expansion. But while this exponential growth has led to rapid integration of the Internet with corporate business models, businesses have moved more slowly to protect their information — due to lack of knowledge, to immature security technology, or to a misplaced confidence in a vendor's ability to provide secure IT products. Most automated organizations have 20 or more years of experience with IT management and operations, and their IT departments know how to build systems and integrate them.


Exhibit 23-1. Using a security model to derive requirements.
[Diagram: the three layers of the security model drive the outsourcing requirements. The Foundation layer (architecture, policy, and education) is used to derive technical requirements and select a vendor; the Trust layer (security requirements: confidentiality, availability, integrity) is used to establish metrics and derive performance requirements; and the Control layer (mechanisms for management and control) is used to define roles, responsibilities, and contractual requirements.]

What they have not known and have been slow to learn is how to secure them, because the traditional IT security model has been to hide secret information; and in a networked world, it is no longer possible to do that easily.

One of the most commonly cited security models is that documented by Glen Bruce and Rob Dempsey.5 This model defines three components: foundation, control, and trust. The foundation layer includes security policy and principles, criteria and standards, and the education and training systems. The trust layer includes the environment's security, availability, and performance characteristics. The control layer includes the mechanisms used to manage and control each of the required components. In deciding whether to outsource its security and in planning for a successful outsourcing arrangement, an organization can use this model as a reference for ensuring that all aspects of security are considered in the requirements. As shown in Exhibit 23-1, each of the model's components can drive aspects of the arrangement.

THE FOUR PHASES OF AN OUTSOURCING ARRANGEMENT

Phase 1 of an outsourcing arrangement begins when an organization perceives a business problem — in the case of IT, this is often a vulnerability or threat that the organization cannot address. The organization then decides that an outside entity may be better equipped to solve the problem than the organization's own staff. The reasons why this decision is made will be discussed below; but once the decision is made, the organization must put an infrastructure in place to manage the arrangement.


In Phase 2, a provider of services is selected and hired. In Phase 3, the arrangement must be monitored and managed to ensure that the desired benefits are being realized. And finally, in Phase 4, the arrangement comes to an end, and the organization must ensure a smooth and nondisruptive transition out.

Phase 1: Identify the Need and Prepare to Outsource

It is axiomatic that no project can be successful unless the requirements are well defined and the expectations of all participants are clearly articulated. This is especially true for a security outsourcing project when the decision to bring in an outside concern is made under pressure during a security breach. In fact, one of the biggest reasons many outsourcing projects fail is that the business does not understand what lies behind the decision to outsource or why it is believed that the work cannot (or should not) be done in-house. Those organizations that make the decision to outsource after careful consideration, and that plan carefully to avoid its potential pitfalls, will benefit most from the decision to outsource.

The goal of Phase 1 is to articulate (in writing if possible) the reasons for the decision to outsource. As will be discussed below, this means spelling out the products or services to be acquired, the advantages expected, the legal and business risks inherent in the decision, and the steps to be taken to minimize those risks.

Consider Strategic Reasons to Outsource. Many of the reasons to outsource can be considered strategic in nature. These promise advantages beyond a solution to the immediate need and allow the organization to seek long-term or strategic advantages to the business as a whole:

• Free up resources to be used for other mission-critical purposes.
• Maintain flexibility of operations by allowing peak requirements to be met while avoiding the cost of hiring new staff.
• Accelerate process improvement by bringing in subject matter expertise to train corporate staff or to teach by example.
• Obtain current technology or capability that would otherwise have to be hired or acquired by retraining, both at a potentially high cost.
• Avoid infrastructure obsolescence by giving the responsibility for technical currency to someone else.
• Overcome strategic stumbling blocks by bringing in third-party objectivity.
• Control operating costs or turn fixed costs into variable ones through the use of predictable fees, because presumably an MSSP has superior performance and lower cost structure.
• Enhance organizational effectiveness by focusing on what is known best, leaving more difficult security tasks to someone else.
• Acquire innovative ideas from experts in the field.


Organizations that outsource for strategic reasons should be cautious. The decision to refocus on strategic objectives is a good one, but turning to an outside organization for assistance with key strategic security functions is not. If security is an inherent part of the company's corporate mission, and strategic management of this function is not working, the company might consider whether outsourcing is going to correct those issues. The problems may be deeper than a vendor can fix.

Consider Tactical Reasons. The tactical reasons for outsourcing security functions are those that deal with day-to-day functions and issues. When the organization is looking for a short-term benefit, an immediate response to a specific issue, or improvement in a specific aspect of its operations, these tactical advantages of outsourcing are attractive:

• Reduce response times when dealing with security incidents.
• Improve customer service to those being supported.
• Allow IT staff to focus on day-to-day or routine support work.
• Avoid an extensive capital outlay by obviating the need to invest in new equipment such as firewalls, servers, or intrusion detection devices.
• Meet short-term staffing needs by bringing in staff that is not needed on a full-time basis.
• Solve a specific problem that existing staff does not have the expertise to address.

While the tactical decision to outsource might promise quick or more focused results, this does not necessarily mean that the outsourcing arrangement must be short-term. Many successful long-term outsourcing arrangements are viewed as just one part of a successful information security program, or are selected for a combination of strategic and technical reasons.

Anticipate Potential Problems. The prospect of seeing these advantages in place can be seductive to an organization that is troubled by a business problem. But for every potential benefit, there is a potential pitfall as well. During Phase 1, after the decision to outsource is made, the organization must put in place an infrastructure to manage that arrangement. This requires fully understanding (and taking steps to avoid) the many problems that can arise with outsourcing contracts:

• Exceeding expected costs, either because the vendor failed to disclose them in advance or because the organization did not anticipate them
• Experiencing contract issues that lead to difficulties in managing the arrangement or to legal disputes
• Losing control of basic business resources and processes that now belong to someone else


• Failing to maintain mechanisms for effective provider management
• Losing in-house expertise to the provider
• Suffering degradation of service if the provider cannot perform adequately
• Discovering conflicts of interest between the organization and the outsourcer
• Disclosing confidential data to an outside entity that may not have a strong incentive to protect it
• Experiencing declines in productivity and morale from staff who believe they are no longer important to the business or that they do not have control of resources
• Becoming dependent on inadequate technology if the vendor does not maintain technical currency
• Becoming a "hostage" to the provider who now controls key resources

Document Requirements and Expectations. As discussed above, the goal of Phase 1 is to fully understand why the decision to outsource is made, to justify the rationale for the decision, and to ensure that the arrangement's risks are minimized. Minimizing this risk is best accomplished through careful preparation for the outsourced arrangement.

Thus, the organization's security requirements must be clearly defined and documented. In the best situation, this will include a comprehensive security policy that has been communicated and agreed to throughout the organization. However, companies that are beginning to implement a security program may be hiring expertise to help with first steps and consequently do not have such a policy. In these cases, the security requirements should be defined in business terms. This includes a description of the information or assets to be protected, their level of sensitivity, their relationship to the core business, and the requirement for maintaining the confidentiality, availability, and integrity of each.

One of the most common issues that surfaces in outsourcing arrangements is financial, wherein costs may not be fully understood or unanticipated costs arise after the fact. It is important that the organization understand the potential costs of the arrangement, which requires a complete understanding of the internal costs before the outsourcing contract is established. A cost/benefit analysis should be performed and should include a calculation of return on investment. As with any cost/benefit analysis, there may be costs and benefits that are not quantifiable in financial terms, and these should be considered and included as well. These may include additional overhead in terms of staffing, financial obligations, and management requirements.

Outsourcing will add new risks to the corporate environment and may exacerbate existing risks.


Many organizations that outsource perform a complete risk analysis before undertaking the arrangement, including a description of the residual risk expected after the outsourcing project begins. Such an analysis can be invaluable during the process of preparing the formal specification, because it will point to the inclusion of requirements for ameliorating these risks. Because risk can be avoided or reduced by the implementation of risk management strategies, a full understanding of residual risk will also aid in managing the vendor's performance once the work begins; and it will suggest areas where management must pay stronger attention in assessing the project's success.

Prepare the Organization. To ensure the success of the outsourcing arrangement, the organization should be sure that it can manage the provider's work effectively. This requires internal corporate knowledge of the work or service outsourced. Even if this knowledge is not deeply technical — if, for example, the business is networking its services for the first time — the outsourcing organization must understand the business value of the work or service and how it supports the corporate mission. This includes an understanding of the internal cost structure because, without this understanding, the financial value of the outsourcing arrangement cannot be assessed.

Assign Organizational Roles. As with any corporate venture, management and staff acceptance are important in ensuring the success of the outsourcing project. This can best be accomplished by involving all affected corporate staff in the decision-making process from the outset, and by ensuring that everyone is in agreement with, or is willing to support, the decision to go ahead.

With general support for the arrangement, the organization should articulate clearly each affected party's role in working with the vendor. Executives and management-level staff who are ultimately responsible for the success of the arrangement must be supportive and must communicate the importance of the project's success throughout the organization. System owners and content providers must be helped to view the vendor as an IT partner and must not feel their ownership threatened by the assistance of an outside entity. These individuals should be given the responsibility for establishing the project's metrics and desired outcome because they are in the best position to understand what the organization's information requirements are. The organization's IT staff is in the best position to gauge the vendor's technical ability and should be given a role in bringing the vendor up to speed on the technical requirements that must be met. The IT staff also should be encouraged to view the vendor as a partner in providing IT services to the organization's customers. And finally, if there are internal security employees, they should be responsible for establishing security policies and procedures to be followed by the vendor throughout the term of the contract.


Exhibit 23-2. Management control for outsourcing contracts.
[Diagram: levels of management control and the parties involved. Strategy formulation — CIOs, senior management, and representatives of the outsourcing company establish security goals and policies; management control — contract management governs the implementation of security strategies and technologies; task control — system administrators and end users carry out efficient and effective performance of security tasks.]

procedures to be followed by the vendor throughout the term of the contract. The most important part of establishing organizational parameters is to assign accountability for the project’s success. Although the vendor will be held accountable for the effectiveness of its work, the outsourcing organization should not give away accountability for management success. Where to lodge this accountability in the corporate structure is a decision that will vary based on the organization and its requirements, but the chances for success will be greatly enhanced by ensuring that those responsible for managing the effort are also directly accountable for its results. A useful summary of organizational responsibilities for the outsourcing arrangement is shown in Exhibit 23-2, which illustrates the level of management control for various activities.6 Prepare a Specification and RFP. If the foregoing steps have been completed correctly, the process of documenting requirements and preparing a specification should be a simple formality. A well-written request for proposals (RFP) will include a complete and thorough description of the organizational, technical, management, and performance requirements and of the products and services to be provided by the vendor. Every corporate expectation that was articulated during the exploration stage should be covered by a performance requirement in the RFP. And the expected metrics that will be used to assess the vendor’s performance should be included in a service level agreement (SLA). The SLA can be a separate document, but it should be legally incorporated into the resulting contract.

The RFP and resulting contract should specify the provisions for the use of hardware and software that are part of the outsourcing arrangements.


This might include, for example, the type of software that is acceptable or its placement, so that the provider does not modify the client's technical infrastructure or remove assets from the customer premises without advance approval. Some MSSPs want to install their own hardware or software at the customer site; others prefer to use customer-owned technical resources; and still others perform on their own premises using their own resources. Regardless, the contract should spell out the provisions for ownership of all resources that support the arrangement and for the eventual return of any assets whose control or possession are outsourced. If there is intellectual property involved, as might be the case in a custom-developed security solution, the contract should also specify how the licensing of the property works and who will retain ownership of it at the end of the arrangement.

During the specification process, the organization should have determined what contractual provisions it will apply for nonperformance or substandard performance. The SLA contract should clearly define items considered to be performance infractions or errors, including requirements for correction of errors. This includes any financial or nonfinancial penalties for noncompliance or failure to perform.

The contract need not be restricted to technical requirements and contractual terms but may also consider human resources and business management issues. Some of the requirements that might be included govern access to vendor staff by the customer, and vice versa, and provisions for day-to-day management of the staff performing the work. In addition, requirements for written deliverables, regular reports, etc. should be specified in advance.

The final section of the RFP and contract should govern the end of the outsourcing arrangement and provisions for terminating the relationship with the vendor. The terms that govern the transition out should be designed to reduce exit barriers for both the vendor and the client, particularly because these terms may need to be invoked during a dispute or otherwise in less-than-optimal circumstances. One key provision will be to require that the vendor cooperate fully with any vendor that succeeds it in performance of the work.

Specify Financial Terms and Pricing. Some of the basic financial considerations for the RFP are to request that the vendor provide evidence that its pricing and terms are competitive and provide an acceptable cost/benefit business case. The RFP should request that the vendor propose incentives and penalties based on performance and warrant the work it performs.

The specific cost and pricing sections of the specification depend on the nature of the work outsourced.


Historically, many outsourcing contracts were priced in terms of unit prices for units provided, and may have been measured by staff (such as hourly rates for various skill levels), resources (such as workstations supported), or events (such as calls answered). The unit prices may have been fixed or varied based on rates of consumption, may have included guaranteed levels of consumption, and may have been calculated based on cost or on target profits. However, these types of arrangements have become less common over the past few years. The cost-per-unit model tends to cause the selling organization to try to increase the units sold, driving up the quantity consumed by the customer regardless of the benefit to the customer. By the same token, this causes the customer to seek alternative arrangements with lower unit costs; and at some point the two competing requirements diverge enough that the arrangement must end.

So it has become more popular to craft contracts that tie costs to expected results and provide incentives for both vendor and customer to perform according to expectations. Some arrangements provide increased revenue to the vendor each time a threshold of performance is met; others are tied to customer satisfaction measures; and still others provide for gain-sharing, wherein the customer and vendor share in any savings from a reduction in customer costs. Whichever model is used, both vendor and customer are given incentives to perform according to the requirements to be met by each.

Anticipate Legal Issues. The RFP and resulting contract should spell out clear requirements for liability and culpability. For example, if the MSSP is providing security alert and intrusion detection services, who is responsible in the event of a security breach? No vendor can provide a 100 percent guarantee that such breaches will not occur, and organizations should be wary of anyone who makes such a claim. However, it is reasonable to expect that the vendor can prevent predefined, known, and quantified events from occurring. If there is damage to the client's infrastructure, who is responsible for paying the cost of recovery? By considering these questions carefully, the client organization can use the possibility of breaches to provide incentives for the vendor to perform well.

In any contractual arrangement, the client is responsible for performing due diligence. The RFP and contract should spell out the standards of care that will be followed and should assign accountability for technical and management due diligence. This includes the requirements to maintain the confidentiality of protected information and for nondisclosure of sensitive, confidential, and secret information. There may be legislative and regulatory issues that impact the outsourcing arrangement, and both the client and the vendor should be aware of these.


Organizations should be wary of outsourcing responsibilities for which they are legally responsible, unless they can legally assign these responsibilities to another party. In fact, outsourcing such services may be prohibited by regulation or law, particularly for government entities. Existing protections may not be automatically carried over in an outsourced environment. For example, certain requirements for compliance with the Privacy Act or the Freedom of Information Act may not apply to employees of an MSSP or service provider.

Preparing a good RFP for security services is no different from preparing any other RFP. The proposing vendors should be obligated to respond with clear, measurable responses to every requirement, including, if possible, client references demonstrating successful prior performance.

Phase 2: Select a Provider

During Phase 1, the organization defined the scope of work and the services to be outsourced. The RFP and specification were created, and the organization must now evaluate the proposals received and select a vendor. The process of selecting a vendor includes determining the appropriate characteristics of an outsourcing supplier, choosing a suitable vendor, and negotiating requirements and contractual terms.

Determine Vendor Characteristics. Among the most common security services outsourced are those that include installation, management, or maintenance of equipment and services for intrusion detection, perimeter scanning, VPNs and firewalls, and anti-virus and content protection. These arrangements, if successfully acquired and managed, tend to be long-term and ongoing in nature. However, shorter-term outsourcing arrangements might include testing and deployment of new technologies, such as encryption services and PKI in particular, because it is often difficult and expensive to hire expertise in these arenas. Hiring an outside provider to do one-time or short-term tasks such as security assessments, policy development and implementation, or audit, enforcement, and compliance monitoring is also becoming popular.

One factor to consider during the selection process is the breadth of services offered by the prospective provider. Some vendors have expertise in a single product or service that can bring superior performance and focus, although this can also mean that the vendor has not been able to expand beyond a small core offering. Other vendors sell a product or set of products, then provide ongoing support and monitoring of the offering. This, too, can mean superior performance due to focus on a small set of offerings; but the potential drawback is that the customer becomes hostage to a single technology and is later unable to change vendors. One relatively new phenomenon in the MSSP market is to hire a vendor-neutral service broker who can perform an independent assessment of requirements and recommend the best providers.


There are a number of terms that have become synonymous with outsourcing or that describe various aspects of the arrangement. Insourcing is the opposite of outsourcing, referring to the decision to manage services in-house. The term midsourcing refers to a decision to outsource a specific selection of services. Smartsourcing is used to mean a well-managed outsourcing (or insourcing) project and is sometimes used by vendors to refer to their set of offerings.

Choose a Vendor. Given that the MSSP market is relatively new and immature, organizations must pay particular attention to due diligence during the selection process, and should select a vendor that not only has expertise in the services to be performed but also shows financial, technical, and management stability. There should be evidence of an appropriate level of investment in the infrastructure necessary to support the service. In addition to assessing the ability of the vendor to perform well, the organization should consider less tangible factors that might indicate the degree to which the vendor can act as a business partner. Some of these characteristics are:

• Business culture and management processes. Does the vendor share the corporate values of the client? Does it agree with the way in which projects are managed? Will staff members be able to work successfully with the vendor's staff?
• Security methods and policies. Will the vendor disclose what these are? Are these similar to or compatible with the customer's?
• Security infrastructure, tools, and technology. Do these demonstrate the vendor's commitment to maintaining a secure environment? Do they reflect the sophistication expected of the vendor?
• Staff skills, knowledge, and turnover. Is turnover low? Does the staff appear confident and knowledgeable? Does the offered set of skills meet or exceed what the vendor has promised?
• Financial and business viability. How long has the vendor provided these services? Does the vendor have sufficient funding to remain in the business for at least two years?
• Insurance and legal history. Have there been prior claims against the vendor?

Negotiate the Arrangement. With a well-written specification, the negotiation process will be simple because expectations and requirements are spelled out in the contract and can be fully understood by all parties. The specific legal aspects of the arrangement will depend on the client's industry or core business, and they may be governed by regulation (for example, in the case of government and many financial entities). It is important to establish in advance whether the contract will include subcontractors, and if so, to include them in any final negotiations prior to signing a contract.


This will avoid the potential inability to hold subcontractors as accountable for performance as their prime contractor. Negotiation of pricing, delivery terms, and warranties should also be governed by the specification; and the organization should ensure that the terms and conditions of the specification are carried over to the resulting contract.

Phase 3: Manage the Arrangement

Once a provider has been selected and a contract is signed, the SLA will govern the management of the vendor. If the SLA was not included in the specification, it should be documented before the contract is signed and included in the final contract.

Address Performance Factors. For every service or resource being outsourced, the SLA should address the following factors:

• The expectations for successful service delivery (service levels)
• Escalation procedures
• Business impact of failure to meet service levels
• Turnaround times for delivery
• Service availability, such as for after-hours
• Methods for measurement and monitoring of performance

Use Metrics. To be able to manage the vendor effectively, the customer must be able to measure compliance with contractual terms and the results and benefits of the provider’s work. The SLA should set a baseline for all items to be measured during the contract term. These will by necessity depend on which services are provided. For example, a vendor that is providing intrusion detection services might be assessed in part by the number of intrusions repelled as documented in IDS logs.
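As one illustration of turning raw records into an SLA measurement, the sketch below computes two hypothetical metrics from provider-supplied incident data: the number of logged intrusion detections and the percentage acknowledged within an agreed time. The record format, field names, and the 30-minute target are assumptions for the example, not terms from the chapter; as the next paragraph notes, the underlying logs should be protected so the provider cannot influence the measurement.

    # Hypothetical SLA measurement from incident records; fields and targets are assumed.
    incidents = [
        {"type": "intrusion_detected", "minutes_to_acknowledge": 12},
        {"type": "intrusion_detected", "minutes_to_acknowledge": 45},
        {"type": "false_positive",     "minutes_to_acknowledge": 20},
        {"type": "intrusion_detected", "minutes_to_acknowledge": 8},
    ]

    ACK_TARGET_MINUTES = 30   # example service level drawn from the SLA

    detections = [i for i in incidents if i["type"] == "intrusion_detected"]
    on_time = sum(1 for i in detections if i["minutes_to_acknowledge"] <= ACK_TARGET_MINUTES)

    print("Intrusions detected this period:", len(detections))
    print("Acknowledged within target: {:.0%}".format(on_time / len(detections)))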

To motivate the vendor to behave appropriately, the organization must measure the right things — that is, results over which the provider has control. However, care should be taken to ensure that the vendor cannot directly influence the outcome of the collection process. In the example above, the logs should be monitored to ensure that they are not modified manually, or backup copies should be turned over to the client on a regular basis. The SLA metrics should be reasonable in that they can be easily measured without introducing a burdensome data collection requirement. The frequency of measurement and audits should be established in advance, as should the expectations for how the vendor will respond to security issues and whether the vendor will participate in disaster recovery planning and rehearsals.


Even if the provider is responsible for monitoring equipment such as firewalls or intrusion detection devices, the organization may want to retain control of the incident response process, particularly if the possibility of future legal action exists. In these cases, the client may specify that the provider is to identify, but not act on, suspected security incidents. Thus, the client may ask the provider for recommendations but manage or staff the response process itself. Other organizations distinguish between internal and external threats or intrusions to avoid the possibility that an outside organization has to respond to incidents caused by the client's own employees.

Monitor Performance. Once the contract is in place and the SLA is active, managing the ongoing relationship with the service provider becomes the same as managing any other contractual arrangement. The provider is responsible for performing the work to specifications, and the client is responsible for monitoring performance and managing the contract.

Monitoring and reviewing the outsourced functions are critically important. Although the accountability for success of the arrangement remains with the client organization, the responsibility for monitoring can be a joint responsibility; or it may be done by an independent group inside or outside the organization. Throughout the life of the contract, there should be clear single points of contact identified by the client and the vendor; and both should fully understand and support provisions for coordinating emergency response during a security breach or disaster.

Phase 4: Transition Out

In an ideal world, the outsourcing arrangement will continue with both parties to their mutual satisfaction. In fact, the client organization should include provisions in the contract for renewal, for technical refresh, and for adjustment of terms and conditions as the need arises. However, an ideal world rarely exists, and most arrangements end sooner or later. It is important to define in advance (in the contract and SLA) the terms that will govern the parties if the client decides to bring the work in-house or to use another contractor, along with provisions for penalties should either party not comply.

Should the arrangement end, the organization should continue to monitor vendor performance during the transition out. The following tasks should be completed to the satisfaction of both vendor and client:

• All property is returned to its original owner (with reasonable allowance for wear and tear).
• Documentation is fully maintained and up-to-date.
• Outstanding work is complete and documented.
• Data owned by each party is returned, along with documented settings for security controls. This includes backup copies.
• If there is to be staff turnover, the hiring organization has completed the hiring process.
• Requirements for confidentiality and nondisclosure continue to be followed.
• If legally required, the parties are released from any indemnities, warranties, etc.


Exhibit 23-3. Customer satisfaction with security outsourcing: Outstanding, 17 percent; Satisfactory, 55 percent; Needs Work, 25 percent; Not Working, 3 percent.

CONCLUSION

The growth of the MSSP market clearly demonstrates that outsourcing of security services can be a successful venture both for the client and the vendor. While the market is undergoing some consolidation and refocusing as this chapter is being written, in the ultimate analysis, outsourcing security services is not much different than outsourcing any other IT service, and the IT outsourcing industry is established and mature. The lessons learned from one clearly apply to the other, and it is clear that organizations that choose to outsource are in fact applying those lessons. In fact, as Exhibit 23-3 shows, the majority of companies that outsource their security describe their level of satisfaction as outstanding or satisfactory.7

Outsourcing the security of an organization's information assets may be the antithesis of the ancient "security through obscurity" model. However, in today's networked world, with solid planning in advance, a sound rationale, and good due diligence and management, any organization can outsource its security with satisfaction and success.

References

1. Gary Kaiser, quoted by John Makulowich, In government outsourcing, Washington Technology, May 13, 1997, Vol. 12, No. 3, http://www.washingtontechnology.com/news/12_3/news/12940-1.html.
2. George Hulme, Security's best friend, Information Week, July 16, 2001, http://www.informationweek.com/story/IWK20010713S0009.
3. Jaikumar Vijayan, Outsources rush to meet security demand, ComputerWorld, February 26, 2001, http://www.computerworld.com/cwi/story/0,1199,NAV47_STO57980,00.html.


4. Chris King, META report: are managed security services ready for prime time?, Datamation, July 13, 2002, http://itmanagement.earthweb.com/secu/article/0,,11953_801181,00.html.
5. Glen Bruce and Rob Dempsey, Security in Distributed Computing, Hewlett-Packard Professional Books, Saddle River, NJ, 1997.
6. V. Govindarajan and R.N. Anthony, Management Control Systems, Irwin, Chicago, 1995.
7. Forrester Research, cited in When Outsourcing the Information Security Program Is an Appropriate Strategy, http://www.hyperon.com/outsourcing.htm.

ABOUT THE AUTHOR Laurie Hill McQuillan, CISSP, has been a technology consultant for 25 years, providing IT support services to commercial and federal government organizations. Ms. McQuillan is vice president of KeyCrest Enterprises, a national security consulting company. She has a Master’s degree in technology management and teaches graduate-level classes on the uses of technology for research and the impact of technology on culture. She is treasurer of the Northern Virginia Chapter of the Information Systems Security Association (ISSA) and a founding member of CASPR, an international project that plans to publish Commonly Accepted Security Practices and Recommendations. She can be contacted at [email protected].

Copyright 2003. Laurie Hill McQuillan. All Rights Reserved.


Chapter 24

Considerations for Outsourcing Security
Michael J. Corby

Outsourcing computer operations is not a new concept. Since the 1960s, companies have been in the business of providing computer operations support for a fee. The risks and challenges of providing a reliable, confidential, and responsive data center operation have increased, leading many organizations to consider retaining an outside organization to manage the data center in a way that minimizes the risks associated with these challenges.

Let me say at the outset that there is no one solution for all environments. Each organization must decide for itself whether to build and staff its own IT security operation or hire an organization to do it for them. This discussion will help clarify the factors most often used in deciding whether outsourcing security is a good move for your organization.

HISTORY OF OUTSOURCING IT FUNCTIONS

Data Center Operations

Computer facilities have traditionally been very expensive undertakings. The equipment alone often costs millions of dollars, and the room to house the computer equipment required extensive and expensive special preparation. For that reason, many companies in the 1960s and 1970s seriously considered the ability to provide the functions of an IT (or EDP) department without the expense of building the computer room, hiring computer operators, and, of course, acquiring the equipment. Computer service bureaus and shared facilities sprang up to service the banking, insurance, manufacturing, and service industries. Through shared costs, these outsourced facilities were able to offer cost savings to their customers and also turn a pretty fancy profit in the process.

In almost all cases, the reasons for justifying the outsourcing decision were based on financial factors.


Many organizations viewed the regular monthly costs associated with the outsource contract as far more acceptable than the need to justify and depreciate a major capital expense. In addition to the financial reasons for outsourcing, many organizations also saw the opportunity to off-load the risk of having to replace equipment and software long before it had been fully depreciated due to increasing volume, software and hardware enhancements, and training requirements for operators, system programmers, and other support staff. The technical landscape at the time was changing rapidly; there was an aura of special knowledge held by those who knew how to manage the technology, and that knowledge was shared with only a few individuals outside the "inner circle."

Organizations that offered this service were grouped according to their market. That market was dictated by the size, location, or support needs of the customer:

• Size was measured in the number of transactions per hour or per day, the quantity of records stored in various databases, and the size and frequency of printed reports.
• Location was important because, in the pre-data communications era, the facility often accepted transactions delivered by courier in paper batches and delivered reports directly to the customer in paper form. To take advantage of the power of automating the business process, quick turnaround was a big factor.
• The provider's depth of expertise and special areas of competence were also a factor for many organizations. Banks wanted to deal with a service that knew the banking industry, its regulations, need for detailed audits, and intense control procedures. Application software products that were designed for specific industries were factors in deciding which service could support those industries.

In most instances, the software most often used for a particular industry could be found running in a particular hardware environment. Services were oriented around IBM, Digital, Hewlett-Packard, NCR, Burroughs, Wang, and other brands of computer equipment. Along with the hardware type came the technical expertise to operate, maintain, and diagnose problems in that environment. Few services would be able to support multiple brands of hardware.

Of course, selecting a data center service was a time-consuming and emotional process. The expense was still quite a major financial factor, and there was the added risk of putting the organization's competitive edge and customer relations in the hands of a third party. Consumers and businesses cowered when they were told that their delivery was postponed or that their payment was not credited because of a computer problem.


Nobody wanted to be forced to go through a file conversion process and learn how to deal with a new organization any more than necessary. The ability to provide a consistent and highly responsive "look and feel" to the end customer was important, and the vendor's perceived reliability and long-term capabilities to perform in this area were crucial factors in deciding which service and organization would be chosen.

Contracting Issues

There were very few contracting issues in the early days of outsourced data center operations. Remember that almost all applications involved batch processing and paper exchange. Occasionally, limited file inquiry was provided, but price was the basis for most contract decisions. If the reports could be delivered within hours or maybe within the same day, the service was acceptable. If there were errors or problems noted in the results, the obligation of the service was to rerun the process.

Computer processing has always been bathed in the expectation of confidentiality. Organizations recognized the importance of keeping their customer lists, employee ranks, financial operations, and sales information confidential; and contracts were respectful of that factor. If any violations of this expectation of confidentiality occurred in those days, they were isolated incidents that were dealt with privately, probably in the courts.

Whether processing occurred in a contracted facility or in-house, expectations that there would be an independent oversight or audit process were the same. EDP auditors focused on the operational behavior of servicer-designed specific procedures, and the expectations were usually clearly communicated. Disaster recovery planning, document storage, tape and disk archival procedures, and software maintenance procedures were reviewed and expected to meet generally accepted practices. Overall, the performance targets were communicated, contracts were structured based on meeting those targets, companies were fairly satisfied with the level of performance they were getting for their money, and they had the benefit of not dealing with the technology changes or the huge capital costs associated with their IT operations.

Control of Strategic Initiatives

The dividing line of whether an organization elected to acquire the services of a managed data center operation or do it in-house was the control of its strategic initiatives. For most regulated businesses, the operations were not permitted to get too creative. The most aggressive organizations generally did not use the data center operations as an integral component of their strategy. Those who did deploy new or creative computer processing initiatives generally did not outsource that part of their operation to a shared service.


SECURITY MANAGEMENT PRACTICES NETWORK OPERATIONS The decision to outsource network operations came later in the evolution of the data center. The change from a batch, paper processing orientation to an online, electronically linked operation brought about many of the same decisions that organizations faced years before when deciding to “build or buy” their computer facilities. The scene began to change when organizations decided to look into the cost, technology, and risk involved with network operations. New metrics of success were part of this concept. Gone was the almost single focus on cost as the basis of a decision to outsource or develop an inside data communication facility. Reliability, culminating in the concept we now know as continuous availability, became the biggest reason to hire a data communications servicer. The success of the business often came to depend on the success of the data communications facility. Imagine the effect on today’s banking environment if ATMs had a very low reliability, were fraught with security problems, or theft of cash or data. We frequently forget how different our personal banking was in the period before the proliferation of ATMs. A generation of young adults has been transformed by the direct ability to communicate electronically with a bank — much in the same way, years ago, that credit cards opened up a new relationship between consumers and retailers. The qualification expected of the network operations provider was also very different from the batch-processing counterpart. Because the ability to work extra hours to catch up when things fell behind was gone, new expectations had to be set for successful network operators. Failures to provide the service were clearly and immediately obvious to the organization and its clients. Several areas of technical qualification were established. One of the biggest questions used to gauge qualified vendors was bandwidth. How much data could be transmitted to and through the facility? This was reviewed on both a micro and macro domain. From the micro perspective, the question was, ”How fast could data be sent over the network to the other end?” The higher the speed, the higher the cost. On a larger scale, what was the capacity of the network provider to transfer data over the 24-hour period? This included downtime, retransmissions, and recovery. This demand gave rise to the 24/7 operation, where staples of a sound operation like daily backups and software upgrades were considered impediments to the totally available network. From this demand came the design and proliferation of the dual processor and totally redundant systems. Front-end processors and network controllers were designed to be failsafe. If anything happened to any of the components, a second copy of that component was ready to take over. For 386


Considerations for Outsourcing Security the most advanced network service provider, this included dual data processing systems at the back end executing every transaction twice, sometimes in different data centers, to achieve total redundancy. Late delivery and slow delivery became unacceptable failures and would be a prime cause for seeking a new network service provider. After the technical capability of the hardware/software architecture was considered, the competence of the staff directing the facility was considered. How smart, how qualified, how experienced were the people that ran and directed the network provider? Did the people understand the mission of the organization, and could they appreciate the need for a solid and reliable operation? Could they upgrade operating systems with total confidence? Could they implement software fixes and patches to assure data integrity and security? Could they properly interface with the applications software developers without requiring additional people in the organization duplicating their design and research capabilities? In addition to pushing bits through the wires, the network service provider took on the role of the front-end manager of the organization’s strategy. Competence was a huge factor in building the level of trust that executives demanded. Along with this swing toward the strategic issues, organizations became very concerned about long-term viability. Often, huge companies were the only ones that could demonstrate this longevity promise. The mainframe vendor, global communications companies, and large well-funded network servicers were the most successful at offering these services universally. As the commerce version of the globe began to shrink, the most viable of these were the ones who could offer services in any country, any culture, at any time. The data communications world became a nonstop, “the store never closes” operation. Contracting Issues With this new demand for qualified providers with global reach came new demands for contracts that would reflect the growing importance of this outsourcing decision to the lifeblood of the organization. Quality-of-service expectations were explicitly defined and put into contracts. Response time would be measured in seconds or even milliseconds. Uptime was measured in the number of nines in the percentage that would be guaranteed. Two nines, or 99 percent, was not good enough. Four nines (99.99 percent) or even five nines (99.999 percent) became the common expectation of availability. A new emphasis developed regarding the extent to which data would be kept confidential. Questions were asked and a response expected in the 387


SECURITY MANAGEMENT PRACTICES contract regarding the access to the data while in transit. Private line networks were expected for most data communications facilities because of the perceived vulnerability of public telecommunications facilities. In some high-sensitivity areas, the concept of encryption was requested. Modems were developed that would encrypt data while in transit. Software tools were designed to help ensure unauthorized people would not be able to see the data sent. Independent auditors reviewed data communications facilities periodically. This review expanded to include a picture of the data communications operation over time using logs and transaction monitors. Management of the data communication provider was frequently retained by the organization so it could attest to the data integrity and confidentiality issues that were part of the new expectations levied by the external regulators, reviewers, and investors. If the executives were required to increase security and reduce response time to maintain a competitive edge, the data communications manager was expected to place the demand on the outsourced provider. Control of Strategic Initiatives As the need to integrate this technical ability becomes more important to the overall organization mission, more and more companies opted to retain their own data communications management. Nobody other than the communications carriers and utilities actually started hanging wires on poles; but data communications devices were bought and managed by employees, not contractors. Alternatives to public networks were considered; microwave, laser, and satellite communications were evaluated in an effort to make sure that the growth plan was not derailed by the dependence on outside organizations. The daily operating cost of this communications capability was large; but in comparison to the computer room equipment and software, the capital outlay was small. With the right people directing the data communications area, there was less need for outsourced data communications facilities as a stand-alone service. In many cases it was rolled into an existing managed data center; but in probably just as many instances, the managed data center sat at the end of the internally controlled data communications facility. The ability to deliver reliable communications to customers, constituents, providers, and partners was considered a key strategy of many forward-thinking organizations APPLICATION DEVELOPMENT While the data center operations and data communications outsourcing industries have been fairly easy to isolate and identify, the application development outsourcing business is more subtle. First, there are usually 388


Considerations for Outsourcing Security many different application software initiatives going on concurrently within any large organization. Each of them has a different corporate mission, each with different metrics for success, and each with a very different user focus. Software customer relationship management is very different from software for human resources management, manufacturing planning, investment management, or general accounting. In addition, outsourced application development can be carried out by general software development professionals, by software vendors, or by targeted software enhancement firms. Take, for instance, the well-known IBM manufacturing product Mapics®. Many companies that acquired the software contracted directly with IBM to provide enhancements; many others employed the services of software development organizations specifically oriented toward Mapics enhancements, while some simply added their Mapics product to the list of products supported or enhanced by their general application design and development servicer. Despite the difficulty in viewing the clear picture of application development outsourcing, the justification was always quite clear. Design and development of new software, or features to be added to software packages, required skills that differed greatly from general data center or communications operations. Often, hiring the people with those skills was expensive and posed the added challenge in that designers were motivated by new creative design projects. Many companies did not want to pay the salary of good design and development professionals, train and orient them, and give them a one- or two-year design project that they would simply add to their resume when they went shopping for their next job. By outsourcing the application development, organizations could employ business and project managers who had long careers doing many things related to application work on a variety of platforms and for a variety of business functions — and simply roll the coding or database expertise in and out as needed. In many instances, also, outsourced applications developers were used for another type of activity — routine software maintenance. Good designers hate mundane program maintenance and start looking for new employment if forced to do too much of it. People who are motivated by the quick response and variety of tasks that can be juggled at the same time are well suited to maintenance tasks, but are often less enthusiastic about trying to work on creative designs and user-interactive activities where total immersion is preferred. Outsourcing the maintenance function is a great way to avoid the career dilemma posed by these conflicting needs. Y2K gave the maintenance programmers a whole new universe of opportunities to demonstrate their values. Aside from that once-in-a-millennium opportunity, program language conversions, operation system upgrades, and new software 389


SECURITY MANAGEMENT PRACTICES releases are a constant source of engagements for application maintenance organizations. Qualifications for this type of service were fairly easy to determine. Knowledge of the hardware platform, programming language, and related applications were key factors in selecting an application development firm. Beyond those specifics, a key factor in selecting an application developer was in the actual experience with the specific application in question. A financial systems analyst or programmer was designated to work on financial systems; a manufacturing specialist on manufacturing systems, and so on. Word quickly spread about which organizations were the application and program development leaders. Companies opened offices across the United States and around the world offering contract application services. Inexpensive labor was available for some programming tasks if contracted through international job shops, but the majority of application development outsourcing took place close to the organization that needed the work done. Often, to ensure proper qualifications, programming tests were given to the application coders. Certifications and test-based credentials support extensive experience and intimate language knowledge. Both methods are cited as meritorious in determining the credentials of the technical development staff assigned to the contract. Along with the measurable criteria of syntax knowledge, a key ingredient was the maintainability of the results. Often, one of the great fears was that the program code was so obscure that only the actual developer could maintain the result. This is not a good thing. The flexibility to absorb the application development at the time the initial development is completed or when the contract expires is a significant factor in selecting a provider. To ensure code maintainability, standards are developed and code reviews are frequently undertaken by the hiring organization. Perhaps the most complicated part of the agreement is the process by which errors, omissions, and problems are resolved. Often, differences of opinion, interpretations of what is required, and the definition of things like “acceptable response time” and “suitable performance” were subject to debate and dispute. The chief way this factor was considered was in contacting reference clients. It probably goes to say that no application development organization registered 100 percent satisfaction with 100 percent of their customers 100 percent of the time. Providing the right reference account that gives a true representation of the experience, particularly in the application area evaluated, is a critical credential. Contracting Issues Application development outsourcing contracts generally took on two forms: pay by product or pay by production. 390


• Pay by product is basically the fixed-price contract; that is, hiring a developer to develop the product and, upon acceptance, paying a certain agreed amount. There are obvious derivations of this concept: phased payments, payment upon acceptance of work completed at each of several checkpoints — for example, payment upon approval of design concept, code completion, code unit testing, system integration testing, user documentation acceptance, or a determined number of cycles of production operation. This was done to avoid the huge balloon payment at the end of the project, a factor that crushed the cash flow of the provider and crippled the ability of the organization to develop workable budgets.

• Pay by production is the time-and-materials method. The expectation is that the provider works a prearranged schedule and, periodically, the hours worked are invoiced and paid. The presumption is that hours worked are productive and that the project scope is fixed. Failure of either of these factors most often results in projects that never end or exceed their budgets by huge amounts.

The control against either of these types of projects running amok is qualified approval oversight and audit. Project managers who can determine progress and assess completion targets are generally part of the organization's review team. In many instances, a third party is retained to advise the organization's management of the status of the developers and to recommend changes to the project or the relationship if necessary.

Control of Strategic Initiatives

Clearly the most sensitive aspect of outsourced service is the degree to which the developer is invited into the inner sanctum of the customer's strategic planning. Obviously, some projects such as Y2K upgrades, software upgrades, and platform conversions do not require anyone sitting in an executive strategy session; but they can offer a glimpse into the specifics of product pricing, engineering, investment strategy, and employee/partner compensation that are quite private. Almost always, application development contracts are accompanied by assurances of confidentiality and nondisclosure, with stiff penalties for violation.

OUTSOURCING SECURITY

The history of the various components of outsourcing plays an important part in defining the security outsourcing business issue and how it is addressed by those seeking or providing the service. In many ways, outsourced security service is like a combination of the hardware operation, communications, and application development counterparts, all together.


Outsourced is the general term; managed security services, or MSS, is the industry name for the operational component of an organization's total data facility, viewed solely from the security perspective. As in any broad-reaching component, the best place to start is with a scope definition.

Defining the Security Component to be Outsourced

Outsourcing security can be a vast undertaking. To delineate each of the components, security outsourcing can be divided into six specific areas or domains:

1. Policy development
2. Training and awareness
3. Security administration
4. Security operations
5. Network operations
6. Incident response

Each area represents a significant opportunity to improve security, in increasing order of complexity. Let us look at each of these domains and define them a bit further. Security Policies. These are the underpinning of an organization’s entire security profile. Poorly developed policies, or policies that are not kept current with the technology, are a waste of time and space. Often, policies can work against the organization in that they invite unscrupulous employees or outsiders to violate the intent of the policy and to do so with impunity. The policies must be designed from the perspectives of legal awareness, effective communications skills, and confirmed acceptance on the part of those invited to use the secured facility (remember: unless the organization intends to invite the world to enjoy the benefits of the facility — like a Web site — it is restricted and thereby should be operated as a secured facility).

The unique skills needed to develop policies that can withstand the challenges of these perspectives are frequently a good reason to contract with an outside organization to develop and maintain the policies. Being an outside provider, however, does not lessen the obligation to intimately connect each policy with the internal organization. Buying the book of policies is not sufficient. They must present and define an organization’s philosophy regarding the security of the facility and data assets. Policies that are strict about the protection of data on a computer should not be excessively lax regarding the same data in printed form. Similarly, a personal Web browsing policy should reflect the same organization’s policy regarding personal telephone calls, etc. Good policy developers know this. Policies cannot put the company in a position of inviting legal action but must be clearly worded to protect its interests. Personal privacy is a good thing, but using company assets for personal tasks and sending correspondence that is attributed to the organization are clear reasons to allow some 392


Considerations for Outsourcing Security level of supervisory review or periodic usage auditing. Again, good policy developers know this. Finally, policies must be clearly communicated, remain apropos, carry with them appropriate means for reporting and handling violations, and for being updated and replaced. Printed policy books are replaced with intranet-based, easily updated policies that can be adapted to meet new security demands and rapidly sent to all subject parties. Policy developers need to display a good command of the technology in all its forms — data communication, printed booklets, posters, memos, etc., video graphics, and nontraditional means of bringing the policy to its intended audience’s attention. Even hot air balloons and skywriting are fair game if they accomplish the intent of getting the policy across. Failure to know the security policy cannot be a defense for violating it. Selecting a security policy developer must take all of these factors into consideration. Training and Awareness. Training and awareness is also frequently assigned to an outside servicer. Some organizations establish guidelines for the amount and type of training an employee or partner should receive. This can take the form of attending lectures, seminars, and conferences; reading books; enrolling in classes at local educational facilities; or taking correspondence courses. Some organizations will hire educators to provide specific training in a specific subject matter. This can be done using standard course material good for anyone, or it can be a custom-designed session targeted specifically to the particular security needs of the organization.

The most frequent topics of general education that anyone can attend are security awareness, asset protection, data classification, and recently, business ethics. Anyone at any level is usually responsible to some degree for ensuring that his or her work habits and general knowledge are within the guidance provided by this type of education. Usually conducted by the human resources department at orientation, upon promotion, or periodically, the objective is to make sure that everyone knows the baseline of security expectations. Each attendee will be expected to learn what everyone in the organization must do to provide for a secure operation. It should be clearly obvious what constitutes unacceptable behavior to anyone who successfully attends such training. Often, the provider of this service has a list of several dozen standard points that are made in an entertaining and informative manner, with a few custom points where the organization’s name or business mission is plugged into the presentation; but it is often 90 percent boilerplate. Selecting an education provider for this type of training is generally based on their creative entertainment value — holding the student’s attention — and the way in which students register their acknowledgment that 393


SECURITY MANAGEMENT PRACTICES they have heard and understood their obligations. Some use the standard signed acknowledgment form; some even go so far as to administer a digitally signed test. Either is perfectly acceptable but should fit the corporate culture and general tenor. Some additional requirements are often specified in selecting a training vendor to deal with technical specifics. Usually some sort of hands-on facility is required to ensure that the students know the information and can demonstrate their knowledge in a real scenario. Most often, this education will require a test for mastery or even a supervised training assignment. Providers of this type of education will often provide these services in their own training center where equipment is configured and can be monitored to meet the needs of the requesting organization. Either in the general or specific areas, organizations that outsource their security education generally elect to do a bit of both on an annual basis with scheduled events and an expected level of participation. Evaluation of the educator is by way of performance feedback forms that are completed by all attendees. Some advanced organizations will also provide metrics to show that the education has rendered the desired results — for example, fewer password resets, lost files, or system crashes. Security Administration. Outsourcing security administration begins to get a bit more complicated. Whereas security policies and security education are both essential elements of a security foundation, security administration is part of the ongoing security “face” that an organization puts on every minute of every day and requires a higher level of expectations and credentials than the other domains.

First, let us identify what the security administrator is expected to do. In general terms, security administration is the routine adds, changes, and deletes that go along with authorized account administration. This can include verification of identity and creation of a subsequent authentication method, which can be a password, a token, or even a biometric pattern of some sort. Once this authentication has been developed, it needs to be maintained. That means password resets, token replacement, and biometric alternatives (this last one gets a bit tricky, or messy, or both).

Another significant responsibility of the security administrator is the assignment of approved authorization levels. Read, write, create, execute, delete, share, and other authorizations can be assigned to objects, from the computer itself down to the individual data item if the organization's authorization schema reaches that level. In most instances, the tools to do this are provided to the administrator, but occasionally there is a need to devise and manage the authority assignment in whatever platform and at whatever level is required by the organization.
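The adds, changes, and deletes just described lend themselves to a simple record-keeping model. The following sketch is illustrative only; the record layout, permission names, and function names are assumptions made for the example, not features of any particular administration tool.

    # Illustrative only: a toy model of routine account administration.
    # Record layouts, permission names, and function names are assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    PERMISSIONS = {"read", "write", "create", "execute", "delete", "share"}

    @dataclass
    class Account:
        user_id: str
        auth_method: str                            # "password", "token", "biometric"
        grants: dict = field(default_factory=dict)  # object name -> set of permissions

    class SecurityAdmin:
        def __init__(self):
            self.accounts = {}        # user_id -> Account
            self.activity_log = []    # (timestamp, action, user_id)

        def _log(self, action, user_id):
            self.activity_log.append((datetime.now(timezone.utc), action, user_id))

        def add_account(self, user_id, auth_method):
            self.accounts[user_id] = Account(user_id, auth_method)
            self._log("add", user_id)

        def reset_authentication(self, user_id, new_method):
            self.accounts[user_id].auth_method = new_method
            self._log("reset", user_id)

        def grant(self, user_id, obj, perms):
            unknown = set(perms) - PERMISSIONS
            if unknown:
                raise ValueError(f"unknown permission(s): {unknown}")
            self.accounts[user_id].grants.setdefault(obj, set()).update(perms)
            self._log("grant", user_id)

        def delete_account(self, user_id):
            del self.accounts[user_id]
            self._log("delete", user_id)

Even a toy model like this makes every add, change, and delete auditable, which matters for the reporting duties discussed next.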


A major responsibility of security administrators that is often overlooked is reporting their activities. If a security policy is to be deemed effective, the workload should diminish over time if the population of users remains constant. I once worked with an organization that had outsourced the security administration function and paid a fee based on the number of transactions handled. Interestingly, there was an increasing frequency of reassignment of authorizations, password resets, and adds, changes, and deletes as time went on. The rate of increase was double the rate of user population expansion. We soon discovered that the number of user IDs had mushroomed to two or three times the total number of employees in the company. What is wrong with that picture? Nothing if you are the provider, but a lot if you are the contracting organization (a minimal reconciliation check of this kind is sketched below).

The final crucial responsibility of the security administrator is making sure that the procedures designed to assure data confidentiality, availability, and integrity are carried out according to plan. Backup logs, incident reports, and other operational elements — although not exactly part of most administrators' responsibilities — are to be monitored by the administrator, with violations or exceptions reported to the appropriate person.

Security Operations. The security operations domain has become another recent growth area in terms of outsourced security services. Physical security was traditionally separate from data security or computer security; each had its own set of credentials and its own objectives. Hiring a company that has a well-established physical security reputation does not qualify it as a good data security or computer security operations provider. As has been said, "Guns, guards, and dogs do not make a good data security policy," but recently they have been called upon to help. The ability to track the location of people with access cards, and even facial recognition, has started to blend into the data and operational end of security, so that physical security is vastly enhanced and even tightly coupled with security technology.
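The reporting duty described at the start of this passage lends itself to a simple reconciliation report: comparing active user IDs against the personnel roster and watching the ratio between the two. The following sketch is illustrative only; the data sources, field names, and thresholds are assumptions, not part of the chapter.

    # Illustrative only: reconcile active user IDs against the personnel roster.
    def reconcile_accounts(user_ids, hr_roster):
        user_ids, hr_roster = set(user_ids), set(hr_roster)
        ratio = len(user_ids) / max(len(hr_roster), 1)
        return {
            "accounts": len(user_ids),
            "employees": len(hr_roster),
            "accounts_per_employee": round(ratio, 2),      # 2x-3x was the warning sign
            "orphaned_ids": sorted(user_ids - hr_roster),  # no matching employee
            "unprovisioned": sorted(hr_roster - user_ids),
        }

    report = reconcile_accounts(
        user_ids={"jdoe", "jdoe2", "jdoe_old", "asmith"},
        hr_roster={"jdoe", "asmith"},
    )
    print(report)   # an accounts_per_employee of 2.0 would warrant a closer look

Run periodically, a report like this gives the contracting organization its own check on a fee-per-transaction arrangement.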

Many organizations, particularly since September 11, have started to employ security operations specialists to assess and minimize the threat of physical access and damage in many of the same terms that used to be reserved only for data access and computer log-in authentication. Traditional security operations such as security software installation and monitoring (remember ACF2, RACF, Top Secret, and others), disaster recovery and data archival (Comdisco, Sunguard, Iron Mountain, and others), and a whole list of application-oriented control and assurance programs and procedures have not gone away. Skills are still required in these areas, but the whole secure operations area has been expanded to include protection of the tangible assets as well as the data assets. Watch this area for more developments, including the ability to use the GPS location of the 395


SECURITY MANAGEMENT PRACTICES input device, together with the location of the person as an additional factor in transaction authentication. Network Operations. The most recent articles on outsourcing security have looked at the security of the network operations as the most highly vulnerable and therefore the most sensitive of the security domains. Indeed, much work has been done in this area, and industry analysts are falling over themselves to assess and evaluate the vendors that can provide a managed security operation center, or SOC.

It is important to define the difference between a network operation center (NOC) and a security operation center (SOC). The difference can be easily explained with an analogy. The NOC is like a pipe that carries and routes data traffic to where it needs to go; the pipe must be wide enough in diameter to ensure that the data is not significantly impeded in its flow. The SOC, on the other hand, is not like the pipe but rather like a window in the pipe. It does not need to carry the data, but it must be placed at a point where the data flowing through the pipe can be carefully observed. Unlike the NOC, which is a constraint if it is not wide enough, the SOC will not be able to observe the data flow carefully enough if it is not fast enough.

Network operations have changed from the earlier counterparts described previously in terms of the tools and components used to perform the function. Screens are larger and flatter, software is more graphically oriented, and hardware is quicker and provides more control than earlier generations of the NOC, but the basic function is the same. Security operation centers, however, are totally new. In their role of maintaining a close watch on data traffic, significant new software developments have been introduced to stay ahead of the volume. This software architecture generally takes two forms: data compression and pattern matching.

• Data compression usually involves stripping out all the inert traffic (which is usually well over 90 percent) and presenting the data that appears to be interesting to the operator. The operator then decides if the interesting data is problematic or indicative of a security violation or intrusion attempt, or whether it is simply a new form of routine inert activity such as the connection of a new server or the introduction of a new user.

• Pattern matching (also known as data modeling) is a bit more complex and much more interesting. In this method, the data is fit to known patterns of how intrusion attempts are frequently constructed. For example, there may be a series of pings and several other probing commands, followed by a brief period of analysis, and then the attempt to use the data obtained to gain access or cause denial of service. In its ideal state, this method can actually predict intrusions before they occur and give the operator or security manager a chance to take evasive action.
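To make the two approaches just described concrete, here is a small illustrative sketch. The event fields, the list of "inert" traffic types, and the probe-then-exploit pattern are assumptions invented for the example; they do not describe any vendor's actual architecture.

    # Illustrative only: toy versions of "data compression" (filtering inert
    # traffic) and "pattern matching" (fitting events to a known intrusion shape).
    from collections import defaultdict

    INERT_TYPES = {"heartbeat", "dns_lookup", "routine_backup"}   # assumed noise
    PROBE_TYPES = {"ping", "port_scan", "banner_grab"}
    ATTACK_TYPES = {"login_failure", "exploit_attempt"}

    def compress(events):
        """Strip inert traffic; keep only what looks interesting to an operator."""
        return [e for e in events if e["type"] not in INERT_TYPES]

    def match_pattern(events, probe_threshold=5):
        """Flag sources that probe repeatedly and then attempt access."""
        probes = defaultdict(int)
        suspects = []
        for e in events:
            if e["type"] in PROBE_TYPES:
                probes[e["src"]] += 1
            elif e["type"] in ATTACK_TYPES and probes[e["src"]] >= probe_threshold:
                suspects.append(e["src"])
        return suspects

    sample = [{"src": "10.0.0.9", "type": "port_scan"} for _ in range(6)]
    sample += [{"src": "10.0.0.9", "type": "exploit_attempt"},
               {"src": "10.0.0.2", "type": "heartbeat"}]
    interesting = compress(sample)
    print(len(interesting), "of", len(sample), "events kept for the operator")
    print("probe-then-exploit suspects:", match_pattern(interesting))

The compression step only reduces what the operator must look at; the pattern step is what gives the provider a chance to act before the access attempt succeeds, which is the distinction drawn in the next paragraph.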


Most MSS providers offer data compression, but the ones that have developed a comprehensive pattern-matching technique have more to offer in that they can occasionally predict and prevent intrusions — whereas the data compression services can, at best, inform when an intrusion occurs. Questions to ask when selecting an MSS provider include first determining if they are providing a NOC or SOC architecture (the pipe or the window). Second, determine if they compress data or pattern match. Third, review very carefully the qualifications of the people who monitor the security. In some cases they are simply a beeper service. ("Hello, Security Officer? You've been hacked. Have a nice day. Goodbye.") Other providers have well-trained incident response professionals who can describe how you can take evasive action or redesign the network architecture to prevent future occurrences.

There are several cost justifications for outsourcing security operations:

• The cost of the data compression and modeling tools is shared among several clients.
• The facility is available 24/7 and can be staffed with the best people at the most vulnerable times of day (nights, weekends, and holidays).
• The expensive technical skills that are difficult to keep motivated for a single network are highly motivated when put in a position of constant activity. This job has been equated to that of a military fighter pilot: 23 hours and 50 minutes of total boredom followed by ten minutes of sheer terror. The best operators thrive on the terror and are good at it.
• Patterns can be analyzed over a wide range of address spaces representing many different clients. This allows some advance warning on disruptions that spread (like viruses and worms), and can also be effective at finding the source of the disruption (the perpetrator).

Incident Response

The last area of outsourced security involves the response to an incident. A perfectly legitimate and popular planning assumption is that every organization will at some time experience an incident. The ones that respond successfully will consider that incident a minor event. The ones that fail to respond, or respond incorrectly, can experience a disaster. Incident response involves four specialties:

1. Intrusion detection
2. Employee misuse
3. Crime and fraud
4. Disaster recovery


SECURITY MANAGEMENT PRACTICES Intrusion Detection. Best depicted by the previous description of the SOC, intrusion detection involves the identification and isolation of an intrusion attempt. This can be either from the outside, or, in the case of server-based probes, can identify attempts by authorized users to go to places they are not authorized to access. This includes placing sensors (these can be certain firewalls, routers, or IDSs) at various points in the network and having those sensors report activity to a central monitoring place. Some of these devices perform a simple form of data compression and can even issue an e-mail or dial a wireless pager when a situation occurs that requires attention. Employee Misuse. Many attempts to discover employee abuse have been tried over the last several years, especially since the universal acceptance of Internet access as a staple of desktop appliances. Employees have been playing “cat and mouse” with employers over the use of the Internet search capabilities for personal research, viewing pornography, gift shopping, participation in unapproved chat rooms, etc. Employers attempt to monitor their use or prevent such use with filters and firewalls, and employees find new, creative ways to circumvent the restriction. In the United States, this is a game with huge legal consequences. Employees claim that their privacy has been violated; employers claim the employee is wasting company resources and decreasing their effectiveness. Many legal battles have been waged over this issue.

Outsourcing the monitoring of employee misuse ensures that independently defined measures are used across the board for all employees in all areas and at all levels. Using proper techniques for evidence collection and corroboration, the potential for successfully trimming misuse and dismissal or punishment of offenders can be more readily ensured. Crime and Fraud. The ultimate misuse is the commission of a crime or fraud using the organization’s systems and facilities. Unless there is already a significant legal group tuned in to prosecuting this type of abuse, almost always the forensic analysis and evidence preparation are left to an outside team of experts. Successfully identifying and prosecuting or seeking retribution from these individuals depends very heavily on the skills of the first responder to the situation.

Professionals trained in data recovery, forensic analysis, legal interviewing techniques, and collaboration with local law enforcement and judiciary are crucial to achieving success by outsourcing this component. Disaster Recovery. Finally, one of the oldest security specialties is in the area of disaster recovery. The proliferation of backup data centers, records archival facilities, and site recovery experts have made this task easier; but most still find it highly beneficial to retain outside services in several areas: 398


Considerations for Outsourcing Security • Recovery plan development: including transfer and training of the organization’s recovery team • Recovery plan test: usually periodic with reports to the executives and, optionally, the independent auditors or regulators • Recovery site preparation: retained in advance but deployed when needed to ensure that the backup facility is fully capable of accepting the operation and, equally important, that the restored original site can resume operation as quickly as possible All of these functions require special skills for which most organizations cannot justify full-time employment, so outsourcing these services makes good business sense. In many cases, the cost of this service can be recovered in reduced business interruption insurance premiums. Look for a provider that meets insurance company specifications for a risk class reduction. Establishing the Qualifications of the Provider For all these different types of security providers, there is no one standard measure of their qualifications. Buyers will need to fall back on standard ways to determine their vendor of choice. Here are a few important questions to ask that may help: • What are the skills and training plan of the people actually providing the service? • Is the facility certified under a quality or standards-based program (ISO 9000/17799, BS7799, NIST Common Criteria, HIPAA, EU Safe Harbors, etc.)? • Is the organization large enough or backed by enough capital to sustain operation for the duration of the contract? • How secure is the monitoring facility (for MSS providers)? If anyone can walk through it, be concerned. • Is there a redundant monitoring facility? Redundant is different from a follow-the-sun or backup site in that there is essentially no downtime experienced if the primary monitoring site is unavailable. • Are there SLAs (service level agreements) that are acceptable to the mission of the organization? Can they be raised or lowered for an appropriate price adjustment? • Can the provider do all of the required services with its own resources, or must the provider obtain third-party subcontractor agreements for some components of the plan? • Can the provider prove that its methodology works with either client testimonial or anecdotal case studies? Protecting Intellectual Property Companies in the security outsourcing business all have a primary objective of being a critical element of an organization’s trust initiative. To 399


SECURITY MANAGEMENT PRACTICES achieve that objective, strategic information may very likely be included in the security administration, operation, or response domains. Protecting an organization’s intellectual property is essential in successfully providing those services. Review the methods that help preserve the restricted and confidential data from disclosure or discovery. In the case of incident response, a preferred contracting method is to have a pre-agreed contract between the investigator team and the organization’s attorney to conduct investigations. That way, the response can begin immediately when an event occurs without protracted negotiation, and any data collected during the investigation (i.e., password policies, intrusion or misuse monitoring methods) are protected by attorney–client privilege from subpoena and disclosure in open court. Contracting Issues Contracts for security services can be as different as night is to day. Usually when dealing with security services, providers have developed standard terms and conditions and contract prototypes that make sure they do not commit to more risk than they can control. In most cases there is some “wiggle room” to insert specific expectations, but because the potential for misunderstanding is high, I suggest supplementing the standard contract with an easy-to-read memo of understanding that defines in as clear a language as possible what is included and what is excluded in the agreement. Often, this clear intent can take precedence over “legalese” in the event of a serious misunderstanding or error that could lead to legal action. Attorneys are often comfortable with one style of writing; technicians are comfortable with another. Neither is understandable to most business managers. Make sure that all three groups are in agreement as to what is going to be done at what price. Most activities involve payment for services rendered, either time and materials (with an optional maximum), or a fixed periodic amount (in the case of MSS). Occasionally there may be special conditions. For example, a prepaid retainer is a great way to ensure that incident response services are deployed immediately when needed. “Next plane out” timing is a good measure of immediacy for incident response teams that may need to travel to reach the site. Obviously, a provider with a broad geographic reach will be able to reach any given site more easily than the organization with only a local presence. Expect a higher rate for court testimony, immediate incident response, and evidence collection. 400


Quality of Service Level Agreements

The key to a successful managed security agreement is in negotiating a reasonable service level agreement. Response time is one measure. Several companies will give an expected measure of operational improvement, such as fewer password resets, reduced downtime, etc. Try to work out an agreeable set of QoS factors and tie a financial or additional-time penalty to responses that fall outside acceptable parameters. Be prudent and accept what is attainable, and do not try to make the provider responsible for more than it can control. Aggressively driving a deal past acceptable criteria will result in no contract, or a contract with a servicer that may fail to thrive.

Retained Responsibilities

Regardless of which domain of service is selected or the breadth of activities to be performed, there are certain cautions regarding the elements that should be held within the organization if at all possible.

Management. The first of these is management. Remember that management is responsible for presenting and determining the culture of the organization. Internal and external expectations of performance are almost always carried forth by management style, measurement, and communications, both formal and informal. The risk of losing that culture or identity is considerably increased if the management responsibility for any of the outsourced functions is not retained by someone in the organization who is ultimately accountable for their performance. If success is based on presenting a trusted image to partners, customers, and employees, help to ensure that success by maintaining close control over the management style and responsibility of the services that are acquired.

Operations. Outsourcing security is not outsourcing business operation. There are many companies that can help run the business, including operating the data center, the financial operations, legal, shipping, etc. The same company that provides the operational support should not, as a rule, provide the security of that operation. Keep the old separation-of-duties principle in effect: people other than those who perform the operations should be selected to provide the security direction or security response.

Audit and Oversight. Finally, applying the same principle, invite and encourage frequent audit and evaluation activities.

Outsourced services should always be viewed like a yo-yo: whenever necessary, an easy pull on the string should be all that is needed to bring them back into range for a check and a possible redirection. Outsourcing security, or any other business service, should not be treated as a "sign the contract and forget it" project.
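To make the uptime and penalty targets discussed under Quality of Service Level Agreements concrete, here is a small illustrative calculation. The availability percentages echo the "nines" cited earlier in the chapter; the penalty rate, cap, and fee are assumptions invented for the example, not terms from any actual contract.

    # Illustrative only: availability "nines" converted to allowed downtime,
    # plus a simple service-credit clause. All rates and thresholds are assumed.
    MINUTES_PER_YEAR = 365 * 24 * 60

    def allowed_downtime_minutes(availability_pct):
        """Minutes of downtime per year permitted at a given availability target."""
        return MINUTES_PER_YEAR * (1 - availability_pct / 100)

    def monthly_service_credit(measured_pct, target_pct, monthly_fee,
                               credit_per_tenth=0.05):
        """Credit 5% of the fee per 0.1 point below target, capped at the full fee."""
        shortfall = max(0.0, target_pct - measured_pct)
        return min(monthly_fee, monthly_fee * credit_per_tenth * (shortfall / 0.1))

    for nines in (99.0, 99.9, 99.99, 99.999):
        print(f"{nines}% availability allows {allowed_downtime_minutes(nines):,.1f} min/year")
    print("credit on a $20,000 month at 99.7% against a 99.9% target:",
          round(monthly_service_credit(99.7, 99.9, 20_000), 2))

Numbers like these are easier to negotiate than vague promises of "high availability," and they make clear to both parties what a missed target actually costs.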


SECURITY MANAGEMENT PRACTICES Building an Escape Clause. But what if all this is done and it still looks like we made a mistake? Easy. If possible, build in an escape clause in the outsource contract that allows for a change in scope, direction, or implementation. If these changes (within reason) cannot be accommodated, most professional organizations will allow for an escape from the contract. Setup and equipment charges may be incurred, but those would typically be small compared to the lost time and expense involved in misunderstanding or hiring the wrong service. No security service organization wants a reference client that had to be dragged, kicking and screaming, through a contract simply because the name is on the line when everyone can agree that the service does not fit.

THE FUTURE OF OUTSOURCED SECURITY

Industries Most Likely to Outsource

The first category of industries most likely to outsource security is made up of companies whose key asset is access to reliable data or information services. Financial institutions, especially banks, securities brokers, and insurance, health, or property claims operations, are traditional buyers of security services.

The second category has been added by recent developments in privacy: healthcare providers and associated industries. Hospitals, medical care providers, pharmaceuticals, and health-centered industries have a new need for protecting the privacy of personal health information. Reporting on the success of that protection is often a new requirement that neither fits the existing operation nor justifies the full-time expense. HIPAA compliance will likely initiate a rise in the need for security (privacy) compliance providers.

The third category of industry that frequently requires outsourced security is the set of industries that cannot suffer any downtime or show any compromise of security. Railroads, cargo ships, and air traffic control are obvious examples of the types of industries where continuous availability is a crucial element of success. They may outsource the network operation or the periodic review of their response and recovery plans. Internet retailers that process transactions with credit cards or against credit accounts fit into this category: release of credit card data, or access to or changes made to purchasing history, is often fatal to continued successful operation.

The final category of industry that may need security services comprises those industries whose success rests on an extraordinary level of trust in the confidentiality of their data. Taken to the extreme, this can include military or national defense organizations. More routinely, it would include technology research, legal, marketing, and other industries that would suffer severe image loss if it were revealed that their security was compromised or otherwise rendered ineffectual.


Considerations for Outsourcing Security Measurements of Success I once worked on a fairly complex application project that could easily have suffered from “scope creep.” To offset this risk, we encouraged the user to continually ask the team, “How do we know we are done?” This simple question can help identify quite clearly what the expectations are for the security service, and how success is measured. What comes to my mind is the selection of the three milestones of project success: “scope, time, and cost — pick two out of three.” A similar principle applies to measuring the success of security services. They are providing a savings of risk, cost, or effort. Pick two out of three. It is impractical to expect that everything can be completely solved at a low cost with total confidence. Security servicers operate along the same principles. They can explain how you can experience success, but only in two out of three areas. Either they save money, reduce risk, or take on the complexity of securing the enterprise. Only rarely can they do all three. Most can address two of these measures, but it lies to the buying organization to determine which of these are the two most important. Response of MSS (Managed Security Service) Providers to New World Priorities After September 11, 2001, the security world moved substantially. What was secure was no longer secure. What was important was no longer important. The world focused on the risk of personal safety and physical security and anticipated the corresponding loss of privacy and confidentiality. In the United States, the constitutional guarantee of freedom was challenged by the collective need for personal safety, and previously guaranteed rights were brought into question. The security providers have started to address physical safety issues in a new light. What was previously deferred to the physical security people is now accepted as part of the holistic approach to risk reduction and trust. Look for an integration of traditional physical security concepts to be enhanced with new technologies like digital facial imaging, integrated with logical security components. New authentication methods will reliably validate “who did what where,” not only when something was done on a certain device. Look also for an increase in the sophistication of pattern matching for intrusion management services. Data compression can tell you faster that something has happened, but sophisticated modeling will soon be able to predict with good reliability that an event is forming in enough time to take appropriate defensive action. We will soon look back on today as the primitive era of security management. 403


SECURITY MANAGEMENT PRACTICES Response of the MSS Buyers to New World Priorities The servicers are in business to respond quickly to new priorities, but managed security service buyers will also respond to emerging priorities. Creative solutions are nice, but practicality demands that enhanced security be able to prove itself in terms of financial viability. I believe we will see a new emphasis on risk management and image enhancements. Organizations have taken a new tack on the meaning of trust in their industries. Whether it is confidentiality, accuracy, or reliability, the new mantra of business success is the ability to depend on the service or product that is promised. Security in all its forms is key to delivering on that promise. SUMMARY AND CONCLUSIONS Outsourced security, or managed security services (MSS), will continue to command the spotlight. Providers of these services will be successful if they can translate technology into real business metrics. Buyers of that service will be successful if they focus on the measurement of the defined objectives that managed services can provide. Avoid the attraction offered simply by a recognized name and get down to real specifics. Based on several old and tried methods, there are new opportunities to effectively use and build on the skills and economies of scale offered by competent MSS providers. Organizations can refocus on what made them viable or successful in the first place: products and services that can be trusted to deliver on the promise of business success. ABOUT THE AUTHOR Michael J. Corby is president of QinetiQ Trusted Information Management, Inc. He was most recently vice president of the Netigy Global Security Practice, CIO for Bain & Company and the Riley Stoker division of Ashland Oil, and founder of M. Corby & Associates, Inc., a regional consulting firm in continuous operation since 1989. He has more than 30 years of experience in the information security field and has been a senior executive in several leading IT and security consulting organizations. He was a founding officer of (ISC)2 Inc., developer of the CISSP program, and was named the first recipient of the CSI Lifetime Achievement Award. A frequent speaker and prolific author, Corby graduated from WPI in 1972 with a degree in electrical engineering.



Chapter 25

Roles and Responsibilities of the Information Systems Security Officer

Carl Burney, CISSP

Information is a major asset of an organization. As with any major asset, its loss can damage the organization's competitive advantage in the marketplace, cost it market share, and become a potential liability to shareholders or business partners. Protecting information is as critical as protecting other organizational assets, such as plant assets (i.e., equipment and physical structures) and intangible assets (i.e., copyrights or intellectual property). It is the information systems security officer (ISSO) who establishes a program of information security to help ensure the protection of the organization's information.

The information systems security officer is the main focal point for all matters involving information security. Accordingly, the ISSO will:

• Establish an information security program, including:
  — Information security plans, policies, standards, guidelines, and training
• Advise management on all information security issues
• Provide advice and assistance on all matters involving information security




THE ROLE OF THE INFORMATION SYSTEMS SECURITY OFFICER

There can be many different security roles in an organization in addition to the information systems security officer, such as:

• Network security specialist
• Database security specialist
• Internet security specialist
• E-business security specialist
• Public key infrastructure specialist
• Forensic specialist
• Risk manager

Each of these roles is in a unique, specialized area of the information security arena and has specific but limited responsibilities. However, it is the role of the ISSO to be responsible for the entire information security effort in the organization. As such, the ISSO has many broad responsibilities, crossing all organizational lines, to ensure the protection of the organization's information.

RESPONSIBILITIES OF THE INFORMATION SYSTEMS SECURITY OFFICER

As the individual with the primary responsibility for information security in the organization, the ISSO will interact with other members of the organization in all matters involving information security, to include:

• Develop, implement, and manage an information security program.
• Ensure that there are adequate resources to implement and maintain a cost-effective information security program.
• Work closely with different departments on information security issues, such as:
  — The physical security department on physical access, security incidents, security violations, etc.
  — The personnel department on background checks, terminations due to security violations, etc.
  — The audit department on audit reports involving information security and any resulting corrective actions
• Provide advice and assistance concerning the security of sensitive information and the processing of that information.
• Provide advice and assistance to the business groups to ensure that information security is addressed early in all projects and programs.
• Establish an information security coordinating committee to address organization-wide issues involving information security matters and concerns.
• Serve as a member of technical advisory committees.


Exhibit 25-1. An information security program will cover a broad spectrum.

• Policies, Standards, Guidelines, and Rules
• Reports
• Access controls
• Audits and Reviews
• Configuration management
• Contingency planning
• Copyright
• Incident response
• Personnel security
• Physical security
• Risk management
• Security software/hardware
• Testing
• Training
• Systems acquisition
• Systems development
• Certification/accreditation
• Exceptions

• Consult with and advise senior management on all major information security-related incidents or violations. • Provide senior management with an annual state of information security report. Developing, implementing, and managing an information security program is the ISSO’s primary responsibility. The Information Security Program will cross all organizational lines and encompass many different areas to ensure the protection of the organization’s information. Exhibit 25-1 contains a noninclusive list of the different areas covered by an information security program. Policies, Standards, Guidelines, and Rules • Develop and issue security policies, standards, guidelines, and rules. • Ensure that the security policies, standards, guidelines, and rules appropriately protect all information that is collected, processed, transmitted, stored, or disseminated. • Review (and revise if necessary) the security policies, standards, guidelines, and rules on a periodic basis. • Specify the consequences for violations of established policies, standards, guidelines, and rules. • Ensure that all contracts with vendors, contractors, etc. include a clause that the vendor or contractor must adhere to the organization’s security policies, standards, guidelines, and rules, and be liable for any loss due to violation of these policies, standards, guidelines, and rules. Access Controls • Ensure that access to all information systems is controlled. • Ensure that the access controls for each information system are commensurate with the level of risk, determined by a risk assessment. 407

Au1518Ch25Frame Page 408 Thursday, November 14, 2002 6:13 PM

• Ensure that access controls cover access by workers at home, dial-in access, connection from the Internet, and public access.
• Ensure that additional access controls are added for information systems that permit public access.

Audits and Reviews

• Establish a program for conducting reviews and evaluations of the security controls in each system, both periodically and when systems undergo significant modifications.
• Ensure audit logs are reviewed periodically and all audit records are archived for future reference.
• Work closely with the audit teams in required audits involving information systems.
• Ensure the extent of audits and reviews involving information systems is commensurate with the level of risk, determined by a risk assessment.

Configuration Management

• Ensure that configuration management controls monitor all changes to information systems software, firmware, hardware, and documentation.
• Monitor the configuration management records to ensure that implemented changes do not compromise or degrade security and do not violate existing security policies.

Contingency Planning

• Ensure that contingency plans are developed, maintained in an up-to-date status, and tested at least annually.
• Ensure that contingency plans provide for enough service to meet the minimal needs of users of the system and provide for adequate continuity of operations.
• Ensure that information is backed up and stored off-site.

Copyright

• Establish a policy against the illegal duplication of copyrighted software.
• Ensure inventories are maintained for each information system's authorized/legal software.
• Ensure that all systems are periodically audited for illegal software.

Incident Response

• Establish a central point of contact for all information security-related incidents or violations.
• Disseminate information concerning common vulnerabilities and threats.
• Establish and disseminate a point of contact for reporting information security-related incidents or violations.
• Respond to and investigate all information security-related incidents or violations, maintain records, and prepare reports.
• Report all major information security-related incidents or violations to senior management.
• Notify and work closely with the legal department when incidents are suspected of involving criminal or fraudulent activities.
• Ensure guidelines are provided for those incidents that are suspected of involving criminal or fraudulent activities, to include:
  — Collection and identification of evidence
  — Chain of custody of evidence
  — Storage of evidence

Personnel Security

• Implement personnel security policies covering all individuals with access to information systems or having access to data from such systems. Clearly delineate responsibilities and expectations for all individuals.
• Ensure all information systems personnel and users have the proper security clearances, authorizations, and need-to-know, if required.
• Ensure each information system has an individual, knowledgeable about information security, assigned the responsibility for the security of that system.
• Ensure all critical processes employ separation of duties so that no one person can subvert a critical process.
• Implement periodic job rotation for selected positions to ensure that present job holders have not subverted the system.
• Ensure users are given only those access rights necessary to perform their assigned duties (i.e., least privilege).

Physical Security

• Ensure adequate physical security is provided for all information systems and all components.
• Ensure all computer rooms and network/communications equipment rooms are kept physically secure, with access by authorized personnel only.

Reports

• Implement a reporting system, to include:
  — Informing senior management of all major information security-related incidents or violations
  — An annual State of Information Security Report
  — Other reports as required (e.g., for federal organizations, OMB Circular No. A-130, Management of Federal Information Resources)

Risk Management

• Establish a risk management program to identify and quantify all risks, threats, and vulnerabilities to the organization's information systems and data.
• Ensure that risk assessments are conducted to establish the appropriate levels of protection for all information systems.
• Conduct periodic risk analyses to maintain proper protection of information.
• Ensure that all security safeguards are cost-effective and commensurate with the identifiable risk and the resulting damage if the information were lost, improperly accessed, or improperly modified.

Security Software/Hardware

• Ensure security software and hardware (e.g., anti-virus software, intrusion detection software, and firewalls) are operated by trained personnel, properly maintained, and kept updated.

Testing

• Ensure that all security features, functions, and controls are periodically tested, and that the test results are documented and maintained.
• Ensure new information systems (hardware and software) are tested to verify that the systems meet the documented security specifications and do not violate existing security policies.

Training

• Ensure that all personnel receive mandatory, periodic training in information security awareness and accepted information security practices.
• Ensure that all new employees receive an information security briefing as part of the new employee indoctrination process.
• Ensure that all information systems personnel are provided appropriate information security training for the systems with which they work.
• Ensure that all information security training is tailored to what users need to know about the specific information systems with which they work.
• Ensure that information security training stays current by periodically evaluating and updating the training.


Systems Acquisition

• Ensure that appropriate security requirements are included in specifications for the acquisition of information systems.
• Ensure that all security features, functions, and controls of a newly acquired information system are tested to verify that the system meets the documented security specifications and does not violate existing security policies, prior to system implementation.
• Ensure all default passwords are changed when installing new systems.

Systems Development

• Ensure information security is part of the design phase.
• Ensure that a design review of all security features is conducted.
• Ensure that all information systems security specifications are defined and approved prior to programming.
• Ensure that all security features, functions, and controls are tested to verify that the system meets the documented security specifications and does not violate existing security policies, prior to system implementation.

Certification/Accreditation

• Ensure that all information systems are certified/accredited, as required.
• Act as the central point of contact for all information systems that are being certified/accredited.
• Ensure that all certification requirements have been met prior to accreditation.
• Ensure that all accreditation documentation is properly prepared before submission for final approval.

Exceptions

• If an information system is not in compliance with established security policies or procedures, and cannot or will not be corrected:
  — Document:
    • The violation of the policy or procedure
    • The resulting vulnerability
    • Any corrective action that would remedy the violation
    • A risk assessment of the vulnerability
  — Have the manager of the information system that is not in compliance document and sign the reasons for noncompliance.
  — Send these documents to the CIO for signature.
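
The exception-handling steps above amount to a small, structured record that travels from the system manager to the CIO. The following is a minimal, hypothetical sketch (in Python) of how such an exception package might be captured; the field names are illustrative and are not prescribed by this chapter.

# Illustrative only: a minimal record of a security policy exception,
# mirroring the documentation elements listed above. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PolicyException:
    system_name: str               # information system that is out of compliance
    policy_violated: str           # the policy or procedure being violated
    resulting_vulnerability: str   # vulnerability created by the noncompliance
    corrective_action: str         # action that would remedy the violation
    risk_assessment: str           # summary of the risk assessment of the vulnerability
    manager_justification: str     # manager's documented reasons for noncompliance
    manager_signed: bool = False
    cio_signed: bool = False
    date_documented: date = field(default_factory=date.today)

    def ready_for_cio(self) -> bool:
        """The package goes to the CIO only after it is complete and the system manager signs."""
        return self.manager_signed and all([
            self.policy_violated, self.resulting_vulnerability,
            self.corrective_action, self.risk_assessment,
            self.manager_justification,
        ])

exception = PolicyException(
    system_name="Legacy payroll system",
    policy_violated="Password policy: 90-day change interval",
    resulting_vulnerability="Static passwords on shared service accounts",
    corrective_action="Migrate service accounts to managed credentials",
    risk_assessment="Moderate: exposure limited to an internal network segment",
    manager_justification="Vendor support contract expires next quarter",
    manager_signed=True,
)
print(exception.ready_for_cio())  # True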


THE NONTECHNICAL ROLE OF THE INFORMATION SYSTEMS SECURITY OFFICER

As mentioned, the ISSO is the main focal point for all matters involving information security in the organization, and the ISSO will:

• Establish an information security program.
• Advise management on all information security issues.
• Provide advice and assistance on all matters involving information security.

Although information security may be considered technical in nature, a successful ISSO is much more than a "techie." The ISSO must be a businessman, a communicator, a salesman, and a politician. The ISSO (the businessman) needs to understand the organization's business, its mission, its goals, and its objectives. With this understanding, the ISSO can demonstrate to the rest of the management team how information security supports the business of the organization. The ISSO must be able to balance the needs of the business with the needs of information security. When the two conflict, the ISSO (the businessman, the politician, and the communicator) must translate the technical side of information security into terms that business managers can understand and appreciate, thus building consensus and support. Without this management support, the ISSO will not be able to implement an effective information security program.

Unfortunately, information security is sometimes viewed as unnecessary, as something that gets in the way of "real work," and as an obstacle most workers try to circumvent. Perhaps the biggest challenge is to build information security into the working culture of an organization. Anybody can stand up in front of a group of employees and talk about information security, but the ISSO (the communicator and the salesman) must "reach" the employees and instill in them the value and importance of information security. Otherwise, the information security program will be ineffective.

CONCLUSION

It is readily understood that information is a major asset of an organization. Protection of this asset is the daily responsibility of all members of the organization, from top-level management to the most junior workers. However, it is the ISSO who carries out the long list of responsibilities, implementing good information security practices, providing the proper guidance and direction to the organization, and establishing a successful information security program that leads to the successful protection of the organization's information.

ABOUT THE AUTHOR

Carl Burney, CISSP, is a Senior Internet Security Analyst with IBM in Salt Lake City, Utah.




Chapter 26

Information Protection: Organization, Roles, and Separation of Duties Rebecca Herold, CISSP, CISA

Successful information protection and security requires the participation, compliance, and support of all personnel within your organization, regardless of their positions, locations, or relationships with the company. This includes any person who has been granted access to your organization's extended enterprise information, and any employee, contractor, vendor, or business associate of the company who uses information systems resources as part of the job. A brief overview of the information protection and security responsibilities for various groups within your organization follows.

ALL PERSONNEL WITHIN THE ORGANIZATION

All personnel have an obligation to use the information according to the specific protection requirements established by your organization's information owner or information security delegate. A few of the basic obligations include, but are not limited to, the following:

• Maintaining confidentiality of log-on passwords
• Ensuring the security of information entrusted to their care
• Using the organization's business assets and information resources for approved purposes only
• Adhering to all information security policies, procedures, standards, and guidelines
• Promptly reporting security incidents to the appropriate management area

Information Security Oversight Committee

An information protection and/or security oversight committee composed of representatives from various areas of your organization should exist or be created if not already in existence. The members should include high-level representatives from each of your revenue business units, as well as representatives from your organization's legal, corporate auditing, human resources, physical and facilities management, and finance and accounting areas.

The oversight committee should be responsible for ensuring and supporting the establishment, implementation, and maintenance of information protection awareness and training programs to assist management in the security of corporate information assets. Additionally, the committee should be kept informed of all information security-related issues and new technologies, and should provide input on information security and protection costs and budget approvals.

Corporate Auditing

The corporate auditing department should be responsible for ensuring compliance with the information protection and security policies, standards, procedures, and guidelines. It should ensure that the organizational business units are operating in a manner consistent with policies and standards, and ensure that any audit plan includes a compliance review of applicable information protection policies and standards that are related to the audit topic. Additionally, a high-level management member of the corporate auditing department should take an active role in your organization's information security oversight committee.

Human Resources

Your human resources department should be responsible for providing timely information to your centrally managed information protection department, as well as to the enterprise and division systems managers and application administrators, about corporate personnel terminations or transfers. It should also enforce the stated consequences of noncompliance with the corporate policies, and a high-level member of the human resources department should take an active role in your organization's information security oversight committee.


Law

Your law department should have someone assigned responsibility for reviewing your enterprise security policies and standards for legal and regulatory compliance and enforceability. Your law department should also be advised of, and responsible for addressing, legal issues arising from security incidents. Additionally, a high-level member of the law department should take an active role in your organization's information security oversight committee. This person should be savvy with computer and information technology and related issues; otherwise, the person will not make a positive contribution to the oversight committee and could, in fact, create unnecessary roadblocks or stop necessary progress based upon a lack of knowledge of the issues.

Managers

Your organization's line management should retain primary responsibility for identifying and protecting information and computer assets within their assigned areas of management control. When talking about a manager, we are referring to any person who has been specifically given responsibility for directing the actions of others and overseeing their work — basically, the immediate manager or supervisor of an employee. Managers have ultimate responsibility for all user IDs and information owned by company employees in the areas of their control. In the case of non-employee individuals such as contractors, consultants, etc., managers are responsible for the activity and for the company assets used by these individuals; this is usually the manager responsible for hiring the outside party. Managers have additional information protection and security responsibilities including, but not limited to, the following:

• Continually monitor the practices of employees and consultants under their control and take necessary corrective actions to ensure compliance with your organization's policies and standards.
• Inform the appropriate security administration department of the termination of any employee so that the user ID owned by that individual can be revoked, suspended, or made inaccessible in a timely manner (a minimal cross-check of this kind is sketched after this list).
• Inform the appropriate security administration department of the transfer of any employee if the transfer involves the change of access rights or privileges.
• Report any security incident or suspected incident to the centralized information protection department.
• Ensure the currency of user ID information (e.g., employee identification number and account information of the user ID owner).
• Educate the employees in their area on your organization's security policies, procedures, and standards for which they are accountable.
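
One of the manager responsibilities above — notifying security administration of terminations so that user IDs are revoked promptly — is easy to express as a routine cross-check. The sketch below is purely illustrative (the data sources, field names, and records are hypothetical); a real process would pull from the HR system and the access control system of record.

# Illustrative only: cross-check HR termination notices against active user IDs
# and flag accounts that should be revoked or suspended in a timely manner.
from datetime import date

terminations = [
    {"employee_id": "E1001", "termination_date": date(2002, 11, 1)},
    {"employee_id": "E1002", "termination_date": date(2002, 11, 8)},
]

active_user_ids = [
    {"user_id": "jdoe",   "employee_id": "E1001"},
    {"user_id": "asmith", "employee_id": "E2005"},
    {"user_id": "jlee",   "employee_id": "E1002"},
]

def ids_to_revoke(terminations, active_user_ids):
    """Return the active user IDs owned by terminated employees."""
    terminated = {t["employee_id"] for t in terminations}
    return [u["user_id"] for u in active_user_ids if u["employee_id"] in terminated]

print(ids_to_revoke(terminations, active_user_ids))  # ['jdoe', 'jlee']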


IT Administrators (Information Delegates)

A person, organization, or process that implements or administers security controls for the information owners is referred to as an information delegate. Such information delegates typically (but not always) are part of the information technology departments with primary responsibilities for dealing with backup and recovery of the business information, applying and updating information access controls, installing and maintaining information security technology and systems, etc. An information delegate is also any company employee who owns a user ID that has been assigned attributes or privileges associated with access control systems such as Top Secret, RACF, ACF2, etc. This user ID allows them to set system-wide security controls or administer user IDs and information resource access rights. These security and systems administrators may report to either a business division or your central information protection department. Information delegates are also responsible for implementing and administering security controls for corporate extended enterprise information as instructed by the information owner or delegate. Some of the responsibilities of information delegates include, but are not limited to, the following:

• Perform backups according to the backup requirements established by the information owner.
• Document backup schedule, backup intervals, storage locations, and number of backup generation copies.
• Regularly test backups to ensure they can be used successfully to restore data.
• When necessary, restore lost or corrupted information from backup media to return the application to production status.
• Perform related tape and DASD management functions as required to ensure availability of the information to the business.
• Ensure record retention requirements are met based on the information owner's analysis.
• Implement and administer security controls for corporate extended enterprise information as instructed by the information owner or delegate.
• Electronically store information in locations based on classification.
• Specifically identify the privileges associated with each system, and categorize the staff allocated to these privileges.
• Produce security log reports that will report applications and system violations and incidents to the central information protection department.
• Understand the different data environments and the impact of granting access to them.
• Ensure access requests are consistent with the information directions and security guidelines.
• Administer access rights according to criteria established by the information owners.
• Create and remove user IDs as directed by the appropriate managers.
• Administer the system within the scope of the job description and functional responsibilities.
• Distribute and follow up on security violation reports.
• Report suspected security breaches to your central information protection department.
• Give passwords of newly created user IDs to the user ID owner only.
• Maintain responsibility for day-to-day security of information.

Information Asset and Systems Owners

The information asset owner for a specific data item is a management position within the business area facing the greatest negative impact from disclosure or loss of that information. The information asset owner is ultimately responsible for ensuring that appropriate protection requirements for the information assets are defined and implemented. The information owner responsibilities include, but are not limited to, the following:

• Assign initial information classification and periodically review the classification to ensure it still meets the business needs.
• Ensure security controls are in place commensurate with the information classification.
• Review and ensure currency of the access rights associated with information assets they own.
• Determine security requirements, access criteria, and backup requirements for the information assets they own.
• Report suspected security breaches to corporate security.
• Perform, or delegate if desired, the following:
  — Approval authority for access requests from other business units or assign a delegate in the same business unit as the executive or manager owner
  — Backup and recovery duties or assign to the information custodian
  — Approval of the disclosure of information
  — Act on notifications received concerning security violations against their information assets
  — Determine information availability requirements
  — Assess information risks

Systems owners must consider three fundamental security areas: management controls, operational controls, and technical controls. They must follow the direction and requests of the information owners when establishing access controls in these three areas.
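
The owner responsibilities above turn on two steps: assigning a classification and then ensuring controls commensurate with it. As a purely illustrative sketch, a simple lookup table can make the owner's expectations explicit for the delegates who implement them; the classification labels and baseline controls below are hypothetical and are not a scheme defined in this chapter.

# Illustrative only: hypothetical classification levels mapped to baseline
# controls. Actual labels and requirements are set by the information owner
# and the organization's own classification scheme.
BASELINE_CONTROLS = {
    "public":       {"encryption_at_rest": False, "access_review_days": 365, "offsite_backup": False},
    "internal":     {"encryption_at_rest": False, "access_review_days": 180, "offsite_backup": True},
    "confidential": {"encryption_at_rest": True,  "access_review_days": 90,  "offsite_backup": True},
    "restricted":   {"encryption_at_rest": True,  "access_review_days": 30,  "offsite_backup": True},
}

def required_controls(classification: str) -> dict:
    """Look up the baseline controls expected for a given classification."""
    try:
        return BASELINE_CONTROLS[classification.lower()]
    except KeyError:
        raise ValueError(f"Unknown classification: {classification!r}")

print(required_controls("Confidential"))
# {'encryption_at_rest': True, 'access_review_days': 90, 'offsite_backup': True}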


Information Protection

An area should exist that is responsible for determining your organization's information protection and security directions (strategies, procedures, guidelines), as approved or suggested by the information protection oversight committee, to ensure information is controlled and secured based on its value, risk of loss or compromise, and ease of recoverability. As a very high-level overview, some of the responsibilities of an information protection department include, but are not limited to, the following:

• Provide information security guidelines to the information management process.
• Develop a basic understanding of your organization's information to ensure proper controls are implemented.
• Provide information security design input, consulting, and review.
• Ensure appropriate security controls are built into new applications.
• Provide information security expertise and support for electronic interchange.
• Create information protection audit standards and baselines.
• Help reduce your organization's liability by demonstrating a standard of due care or diligence by following general standards or practices of professional care.
• Help ensure awareness of information protection and security issues throughout your entire organization and act as internal information security consultants to project members.
• Promote and evaluate information and computer security in IT products and services.
• Advise others within your organization of information security needs and requirements.

The remainder of this chapter includes a full discussion of the roles and related issues of the information protection department.

WHAT IS THE ROLE OF INFORMATION PROTECTION?

Secure information and network systems are essential to providing high-quality services to customers, avoiding fraud and disclosure of sensitive information, promoting efficient business operations, and complying with laws and regulations. Your organization must make information protection a visible, integral component of all your business operations. The best way to accomplish this is to establish a department dedicated to ensuring the protection of all your organization's information assets throughout every department and process. Information protection, or if you would prefer, information security, is a very broad discipline. Your information protection department should fulfill five basic roles:
1. Support information risk management processes.
2. Create corporate information protection policies and procedures.
3. Ensure information protection awareness and training.
4. Ensure the integration of information protection into all management practices.
5. Support your organization's business objectives.

Risk Management

Risk management is a necessary element of a comprehensive information protection and security program. What is risk management? The General Accounting Office (GAO) offers a good, high-level definition: risk management is the process of assessing risk, taking steps to reduce risk to an acceptable level, and maintaining that level of risk. There are four basic principles of effective risk management.

Assess Risk and Determine Needs. Your organization must recognize that information is an essential asset that must be protected. When high-level executives understand and demonstrate that managing risks is important and necessary, they help ensure that security is taken seriously at lower levels in your organization and that security programs have adequate resources.

Your organization must develop practical risk assessment procedures that clearly link security to business needs. However, do not spend too much time trying to quantify the risks precisely — the difficulty of identifying such data makes the task inefficient and overly time consuming. Your organization must hold program and business managers accountable for ensuring compliance with information protection policies, procedures, and standards. The accountability factor will help ensure managers understand the importance of information protection and do not dismiss it as a hindrance. You must manage risk on a continuing basis. As new technologies evolve, you must stay abreast of the associated risks to your information assets. And, as new information protection tools become available, you must know how such tools can help you mitigate risks within your organization.

Establish a Central Information Protection and Risk Management Focus. This is your information protection department. It must carry out key information protection risk management activities and serve as a catalyst for ensuring that information security risks are considered in planned and ongoing operations. You need to provide advice and expertise to all organizational levels and keep managers informed about security issues. Information protection should research potential threats, vulnerabilities, and control techniques; test controls; assess risks; and identify needed policies.

The information protection department must have ready and independent access to senior executives. Security concerns can often be at odds with the desires of business managers and system developers when they are developing new computer applications — they want to do so quickly and want to avoid controls that they view as impeding efficiency and convenience. Elevating security concerns to higher management levels helps ensure that the risks are understood by those with the most to lose from information security incidents and that information security is taken into account when decisions are made.

The information protection department must have dedicated funding and staff. Information protection budgets need to cover central staff salaries, training and awareness costs, and security software and hardware. The central information protection department must strive to enhance its staff professionalism and technical skills. It is important in fulfilling your role as a trusted information security advisor to keep current on new information security vulnerabilities as well as new information security tools and practices.

Information and Systems Security Must Be Cost-Effective. The costs and benefits of security must be carefully examined in both monetary and nonmonetary terms to ensure that the cost of controls does not exceed expected benefits. Security benefits have direct and indirect costs. Direct costs include purchasing, installing, and administering security measures, such as access control software or fire-suppression systems. Indirect costs include effects on system performance, employee morale, and retraining requirements.

Information and Systems Security Must Be Periodically Reassessed. Security is never perfect when a system is implemented. Systems users and operators discover new vulnerabilities or ways to intentionally or accidentally circumvent security. Changes in the system or the environment can also create new vulnerabilities. Procedures become outdated over time. All these issues make it necessary to periodically reassess the security of your organization's systems.
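
The chapter cautions against over-precise quantification, but even a rough annualized loss expectancy (ALE) comparison can show whether the cost of a control exceeds its expected benefit, which is the cost-effectiveness principle just described. The sketch below uses the common ALE formula (single loss expectancy multiplied by annual rate of occurrence) with invented figures; it is an illustration, not a method prescribed by this chapter.

# Illustrative only: rough cost-benefit check for a proposed security control,
# using annualized loss expectancy (ALE = single loss expectancy * annual rate
# of occurrence). All figures are hypothetical.
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    return single_loss_expectancy * annual_rate_of_occurrence

ale_before = ale(single_loss_expectancy=50_000, annual_rate_of_occurrence=2.0)   # without the control
ale_after  = ale(single_loss_expectancy=50_000, annual_rate_of_occurrence=0.25)  # with the control
annual_control_cost = 20_000  # direct costs; indirect costs would be added here as well

annual_benefit = ale_before - ale_after
print(f"ALE before: ${ale_before:,.0f}; ALE after: ${ale_after:,.0f}")
print(f"Net annual benefit of the control: ${annual_benefit - annual_control_cost:,.0f}")
# ALE before: $100,000; ALE after: $12,500
# Net annual benefit of the control: $67,500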

Information Protection Policies, Procedures, Standards, and Guidelines

The information protection department must create corporate information protection policies with business unit input and support. Additionally, it must provide guidance and training to help the individual business units create their own procedures, standards, and guidelines that support the corporate information protection policies.


The Information Protection Department Must Create and Implement Appropriate Policies and Related Controls. You need to link the information protection policies you create to the business risks of your organization. The information protection policies must be adjusted on a continuing basis to respond to newly identified risks. Be sure to pay particular attention to addressing user behavior within the information protection policies.

Distinguish between information protection policies and guidelines or standards. Policies generally outline fundamental requirements that managers consider to be mandatory. Guidelines and standards contain more detailed rules for how to implement the policies. It is vital to the success of the information protection policies that the oversight group and executive management visibly support them.

Information and Systems Security Is Often Constrained by Societal Factors. The ability of your information protection department to support the mission of your organization may be limited by various societal factors, depending upon the country in which your offices are located or the laws and regulations that exist within certain locations where you do business. Know your operating environments and ensure your policies are in sync with these environments.

Awareness and Training

The information protection department must make your organization aware of information protection policies, related issues, and news on an ongoing basis. Additionally, it must provide adequate training — not only to help ensure personnel know how to address information security risks and threats but also to keep the information protection department personnel up to date on the most appropriate methods of ensuring information security.

An Information Protection Department Must Promote Awareness of Information Protection Issues and Concerns throughout Your Entire Organization. The information protection department must continually educate users and others on risks and related policies. Merely sending out a memo to management once every year or two is not sufficient. Use attention-getting and user-friendly techniques to promote awareness of information protection issues. Awareness techniques do not need to be dry or boring — they should not be, or your personnel will not take notice of the message you are trying to send.

An Information Protection Department Must Monitor and Evaluate Policy and Control Effectiveness. The information protection department needs to monitor factors that affect risk and indicate security effectiveness. One key to your success is to keep summary records of actual security incidents within your organization to measure the types of violations and the damage suffered from the incidents. These records will be valuable input for risk assessments and budget decisions. Use the results of your monitoring and record keeping to help determine future information protection efforts and to hold managers accountable for the activities and incidents that occur. Stay aware of new information protection and security monitoring tools and techniques to address the issues you find during the monitoring.

An Information Protection Department Must Extend Security Responsibilities to Those Outside Your Organization. Your organization and the systems owners have security responsibilities outside your own organization. You have a responsibility to share appropriate knowledge about the existence and extent of security measures with your external users (e.g., customers, business partners, etc.) so they can be confident that your systems are adequately secured, and so they can help to address any risks you communicate to them.

An Information Protection Department Must Make Security Responsibilities Explicit. Information and systems security responsibilities and accountability must be clearly and explicitly documented and communicated. The information security responsibilities of all groups and audiences within your organization must be communicated to them, using effective methods and on an ongoing basis.

Information Protection Must Be Integrated into Your Organization's Management Practices. Information and systems security must be an integral element of sound management practices. Ultimately, managers of the areas owning the information must decide what level of risk they are willing to accept, taking into account the cost of security controls as well as the potential financial impact of not having the security controls. The information protection department must help management understand the risks and associated costs. Information and systems security requires a comprehensive approach that is integrated within your organization's management practices. Your information protection department also needs to work with traditional security disciplines, such as physical and personnel security. To help integrate information protection within your management practices, use the following:

• Establish a process to coordinate implementation of information security measures. The process should coordinate specific information security roles and responsibilities organization-wide, and it should aid agreement about specific information security methods and processes such as risk assessment and a security classification system. Additionally, the process should facilitate coordination of organization-wide security initiatives and promote integration of security into the organizational information planning process. The process should call for implementation of specific security measures for new systems or services and include guidelines for reviewing information security incidents. Also, the process should promote visible business support for information security throughout your organization.
• Establish a management approval process to centrally authorize new IT facilities from both a business and technical standpoint. Make managers responsible for maintaining the local information system security environment and supporting the corporate information protection policies when they approve new facilities, systems, and applications.
• Establish procedures to check hardware and software to ensure compatibility with other system components before implementing them into the corporate systems environment.
• Create a centralized process for authorizing the use of personal information processing systems and facilities for processing business information. Include processes to ensure necessary controls are implemented. In conjunction with this, ensure the vulnerabilities inherent in using personal information processing systems and facilities for business purposes have been assessed.
• Ensure management uses the information protection department for specialized information security advice and guidance.
• Create a liaison between your information protection department and external information security organizations, including industry and government security specialists, law enforcement authorities, IT service providers, and telecommunications authorities, to stay current with new information security threats and technologies and to learn from the experiences of others.
• Establish management procedures to ensure that the exchange of security information with outside entities is restricted so that confidential organizational information is not divulged to unauthorized persons.
• Ensure your information protection policies and practices throughout your organization are independently reviewed to ensure feasibility, effectiveness, and compliance with written policies.

Information Protection Must Support the Business Needs, Objectives, and Mission Statement of Your Organization. Information and systems security practices must support the mission of your organization. Through the selection and application of appropriate safeguards, the information protection department will help your organization accomplish its mission by protecting its physical and electronic information and financial resources, reputation, legal position, employees, and other tangible and intangible assets. Well-chosen information security policies and procedures do not exist for their own sake — they are put in place to protect your organization's assets and support the organizational mission.


Information security is a means to an end, and not an end in itself. In a private-sector business, having good security is usually secondary to the need to make a profit; with this in mind, security ought to be seen as a way to increase the firm's ability to make a profit. In a public-sector agency, security is usually secondary to the agency's provision of services to citizens; security, in this case, ought to be considered a way to help improve the service provided to the public.

So, what is a good mission statement for your information protection department? It really depends upon your business, environment, company size, industry, and several other factors. To determine your information protection department's mission statement, ask yourself these questions:

• What do your personnel, systems users, and customers expect with regard to information and systems security controls and procedures?
• Will you lose valued staff or customers if information and systems security is not taken seriously enough, or if it is implemented in such a manner that functionality is noticeably impaired?
• Has any downtime or monetary loss occurred within your organization as a result of security incidents?
• Are you concerned about insider threats? Do you trust your users? Are most of your systems users local or remote?
• Does your organization keep non-public information online? What is the loss to your organization if this information is compromised or stolen?
• What would be the impact of negative publicity if your organization suffered an information security incident?
• Are there security guidelines, regulations, or laws your organization is required to meet?
• How important are confidentiality, integrity, and availability to the overall operation of your organization?
• Have the information and network security decisions that have been made been consistent with the business needs and economic stance of your organization?

To help get you started with creating your own information protection department mission statement, here is an example for you to use in conjunction with considering the previous questions:

  The mission of the information protection department is to ensure the confidentiality, integrity, and availability of the organization's information; provide information protection guidance to the organization's personnel; and help ensure compliance with information security laws and regulations while promoting the organization's mission statement, business initiatives, and objectives.


Information Protection Budgeting

How much should your organization budget for information protection? You will not like the answer: there is no benchmark for what information protection and security could or should cost within organizations. The variables from organization to organization are too great for such a number. Plus, it really depends upon how information protection and security costs are spread throughout your organization and where your information protection department is located within your organization.

Most information and network security spending recommendations are at the extremes. Gartner Group research in 2000 showed that government agencies spent 3.3 percent of their IT budgets on security — a significantly higher average percentage than all organizations as a whole spent on security (2.6 percent). Both numbers represent a very low amount to spend to protect an organization's information assets. Then there is the opinion of a former chief security officer at an online trading firm who believes the information security budget should be 4 to 10 percent of total company revenues and not part of the IT budget at all. An October 2001 Computerworld/J.P. Morgan Security poll showed that companies with annual revenues of more than $500 million are expected to spend the most on security in 2002, when security-related investments will account for 11.2 percent of total IT budgets on average, compared with an average of 10.3 percent for all the users who responded to the poll. However, there are other polls, such as a 2001 survey from Metricnet, showing that only 33 percent of companies polled after September 11, 2001, will spend more than 5 percent of their IT budgets on security. Probably the most realistic target for information security spending is the one given by eSecurityOnline.com, which indicates information protection should be 3 to 5 percent of the company's total revenue.

Unfortunately, it has been documented in more than one news report that some CIOs do not consider information security a normal or prudent business expense. Some CFOs and CEOs have been quoted as saying information security expenses were "nuisance protection." Some decision makers need hard evidence of a security threat to their companies before they will respond. But doing nothing is not a viable option; it only takes one significant security incident to bring down a company.

When budgeting for information protection, keep in mind the facts and experiences of others. As the San Francisco-based Computer Security Institute found in its 2001 annual Computer Crime and Security Survey, 85 percent of the respondents admitted they had detected computer security breaches during the year. While only 35 percent of the respondents were able to quantify the losses, the total financial impact from these incidents was a staggering $378 million.
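
To see how widely the spending benchmarks quoted above diverge, it can help to apply them to a single hypothetical company. The figures below (a company with $200 million in revenue and a $10 million IT budget) are invented purely for illustration.

# Illustrative only: apply the competing budgeting rules of thumb to a
# hypothetical company with $200 million in revenue and a $10 million IT budget.
revenue = 200_000_000
it_budget = 10_000_000

benchmarks = {
    "Gartner 2000, all organizations (2.6% of IT budget)": 0.026 * it_budget,
    "Computerworld/J.P. Morgan 2001 poll (10.3% of IT budget)": 0.103 * it_budget,
    "Former CSO opinion (4 to 10% of revenue)": (0.04 * revenue, 0.10 * revenue),
    "eSecurityOnline.com target (3 to 5% of revenue)": (0.03 * revenue, 0.05 * revenue),
}

for rule, amount in benchmarks.items():
    if isinstance(amount, tuple):
        print(f"{rule}: ${amount[0]:,.0f} to ${amount[1]:,.0f}")
    else:
        print(f"{rule}: ${amount:,.0f}")
# For the same company, the recommendations range from $260,000 to $20,000,000 a year.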


The CIO of the Department of Energy's (DoE) Lawrence Livermore National Laboratory in Livermore, California, indicated in 2001 that security incidents had risen steadily by about 20 percent a year. Security of information is not a declining issue; it is an increasingly significant issue to address. Basically, security is a matter of existence or nonexistence for data. So, to help you establish your information protection budget:

• Establish need before cost. If you know money is going to be a stumbling block, then do not lead with a budget request. Instead, break down your company's functions by business process and illustrate how these processes are tied to the company's information and network. Ask executive management, "What do you want to protect?" and then show them, "This is what it will cost to do it."
• Show them numbers. It is not enough to talk about information security threats in broad terms. Make your point with numbers. Track the number of attempted intrusions, security incidents, and viruses within your organization. Document them in reports and plot them on graphs. Present them monthly to your executive management. This will provide evidence of the growing information security threat. (A minimal tallying sketch follows this list.)
• Use others' losses to your advantage. Show them what has happened to other companies. Use the annual CSI/FBI computer crime and security statistics. Give your executive managers copies of Tangled Web by Richard Power to show them narratives of exactly what has happened to other companies.
• Put it in legal terms. Corporate officers are not only accountable for protecting their businesses' financial assets, but are also responsible for maintaining critical information. Remind executive management that it has a fiduciary responsibility to detect and protect areas where information assets might be exposed.
• Keep it simple. Divide your budget into categories and indicate needed budgets within each. Suggested categories include:
  — Personnel
  — Software systems
  — Hardware systems
  — Awareness and training
  — Law and regulation compliance
  — Emerging technology research
  — Business continuity
• Show them where it hurts. Simply state the impact of not implementing or funding security.
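
The "show them numbers" advice above boils down to keeping a running tally of security events and summarizing it for executive management each month. The sketch below is purely illustrative; the event records and categories are invented, and a real report would be fed from your incident-tracking records.

# Illustrative only: tally monthly security events into a simple summary
# suitable for a recurring report to executive management.
from collections import Counter

events = [
    {"month": "2002-09", "type": "attempted intrusion"},
    {"month": "2002-09", "type": "virus"},
    {"month": "2002-10", "type": "attempted intrusion"},
    {"month": "2002-10", "type": "attempted intrusion"},
    {"month": "2002-10", "type": "security incident"},
]

monthly_counts = Counter((event["month"], event["type"]) for event in events)

for (month, event_type), count in sorted(monthly_counts.items()):
    print(f"{month}  {event_type}: {count}")
# 2002-09  attempted intrusion: 1
# 2002-09  virus: 1
# 2002-10  attempted intrusion: 2
# 2002-10  security incident: 1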


EXECUTIVE MANAGEMENT MUST SPONSOR AND SUPPORT INFORMATION PROTECTION

Executive management must clearly and unequivocally support information protection and security initiatives. It must serve as a role model for the rest of your organization, demonstrating that adhering to information protection policies and practices is the right thing to do. It must ensure information protection is built into the management framework. The management framework should be established to initiate and control the implementation of information security within your organization.

Ideally, the structure of a security program should result from the implementation of a planned and integrated management philosophy. Managing computer security at multiple levels brings many benefits. The higher levels (such as the headquarters or unit levels) must understand the organization as a whole, exercise more authority, set policy, and enforce compliance with applicable policies and procedures. On the other hand, the systems levels (such as the computer facility and applications levels) know the technical and procedural requirements and problems. The information protection department addresses the overall management of security within the organization as well as corporate activities such as policy development and oversight. The system-level security program can then focus on the management of security for a particular information processing system.

A central information protection department can disseminate security-related information throughout the organization in an efficient and cost-effective manner, and it has an increased ability to influence external and internal policy decisions. A central information protection department can also help ensure that scarce security dollars are spent more efficiently. Another advantage of a centralized program is its ability to negotiate discounts based on volume purchasing of security hardware and software.

Where Does the Information Security Role Best Fit within the Organization?

Information security should be separated from operations. When the security program is embedded in IT operations, the security program often lacks independence, exercises minimal authority, receives little management attention, and lacks resources. In fact, the GAO identified this type of organizational model (information security as part of IT operations) as a principal weakness in federal agency IT security programs. The location of the information protection department needs to be based on your organization's goals, structure, and culture. To be effective, a central information protection department must be an established part of organization management.


Should Information Protection Be a Separate Business Unit Reporting to the CEO? This is the ideal situation. Korn/Ferry's Jim Bock, a recruiter who specializes in IT and information security placements, has noticed that more chief security officers are starting to report directly to the CEO, on a peer level to the CIO. This provides information protection with a direct line to executive management and demonstrates the importance of information security to the rest of the organization.

Should Information Protection Be a Separate Business Unit Reporting to the CIO? This is becoming more commonplace, and it could be an effective area for the information protection group. However, there is a conflict of interest in this position. Additionally, security budgets may get cut to increase spending in the other IT areas for which the CIO has responsibility. Based upon recent history and published reports, CIOs tend to focus more on technology than on security, and they may not understand the diverse information protection needs that extend beyond the IT arena.

Should Information Protection Be a Separate Business Unit Reporting to the CFO? This could possibly work if the CFO also understands the information security finance issues. However, that is not likely, because it is difficult (if not impossible) to show a return on investment for information security costs; so this may not be a good location for the information protection department.

Should Information Protection Exist as a Department within IT Reporting to the IT VP? This is generally not a good idea. Not only does this create a true conflict of interest, but it also demonstrates to the rest of the organization an attitude of decreased importance of information security within the organization. It creates competition between security dollars and other IT dollars. Additionally, it sends the message that information protection is only a technical matter and does not extend to all areas of business processes (such as hard-copy protection, voice, fax, mail, etc.).

Should Information Protection Exist as a Group within Corporate Auditing Reporting to the Corporate Auditor? This has been attempted within several large organizations, and none that I know of has had success with this arrangement. Not only does this create a huge conflict of interest — auditors cannot objectively audit and evaluate the same security practices that people within their own area created — but it also sends the message to the rest of the organization that information security professionals fill the same role as auditors.

Should Information Protection Exist as a Group within Human Resources Reporting to the HR VP? This could work. One advantage of this arrangement is that the area creating the information protection policies would be within the same area as the people who enforce the policies from a disciplinary perspective. However, this could also create a conflict of interest. Also, by placing information protection within the HR area, you could send the message to the rest of the organization that information protection is a type of police unit, and it could also place it too far from executive management.

Should Information Protection Exist within Facilities Management Reporting to the Risk Management Director? This does place all types of risk functions together, making it easier to link physical and personnel security with information security. However, this could be too far removed from executive management to be effective.

Should Information Protection Exist as a Group within IT Reporting to Middle Management? This is probably the worst place for the information protection group. Not only is this too far removed from executive management, but it also creates a conflict of interest with the IT processes to which information security practices apply. It also sends a message to the rest of the organization that information protection is not of significant importance to the entire organization and that it applies only to the organization's computer systems.

What Security Positions Should Exist, and What Are the Roles, Requirements, and Job Descriptions for Each?

Responsibilities for accomplishing information security requirements must be clearly defined. The information security policy should provide general guidance on the allocation of security roles and responsibilities within the organization, and general information security roles and responsibilities must be supplemented with a more detailed local interpretation for specific sites, systems, and services. The security of an information system must be made the responsibility of the owner of that system. To avoid any misunderstanding about individual responsibilities, the assets and security processes associated with each individual must be clearly defined, the manager responsible for each asset or security process must be assigned and documented, and authorization levels must be defined and documented. Multiple levels of dedicated information security positions must exist to ensure full and successful integration of information protection into all aspects of your organization's business processes.

So what positions are going to accomplish all these tasks? A few example job descriptions can be found in Exhibit 26-1. The following are some suggestions of positions for you to consider establishing within your organization:


Exhibit 26-1. Example job descriptions.
The following job descriptions should provide a reference to help you create your own unique job descriptions for information security-related positions, based upon your own organization's needs.

COMPLIANCE OFFICER
Job Description
A regulatory/compliance attorney to monitor, interpret, and communicate laws and legislation impacting regulation. Such laws and legislation include the HIPAA regulations. The compliance officer will be responsible for compliance and quality control covering all areas within the information technology and operations areas. Responsibilities include:

• Quality assurance
• Approval and release of all personal health information
• HIPAA compliance oversight and implementation
• Ensuring all records and activities are maintained acceptably in accordance with health and regulatory authorities

Qualifications
• J.D. with outstanding academics and a minimum of ten years of experience
• Three to five years' current experience with healthcare compliance and regulatory issues
• In-depth familiarity with federal and state regulatory matters (Medicare, Medicaid, fraud, privacy, abuse, etc.)

CHIEF SECURITY OFFICER
Job Description
The role of the information security department is primarily to safeguard the confidential information, assets, and intellectual property that belongs to or is processed by the organization. The scope of this position primarily involves computer security but also covers physical security as it relates to the safeguarding of information and assets. The CSO is responsible for enforcing the information security policy, creating new procedures, and reviewing existing procedures to ensure that information is handled in an appropriate manner and meets all legislative requirements, such as those set by the HIPAA security and privacy standards. The security officer must also be very familiar with anti-virus software, IP firewalls, VPN devices, cryptographic ciphers, and other aspects of computer security.

Requirements
• Experience with systems and networking security
• Experience with implementing and auditing security measures in a multi-processor environment
• Experience with data center security
• Experience with business resumption planning
• Experience with firewalls, VPNs, and other security devices
• Good communication skills, both verbal and written
• Good understanding of security- and privacy-related legislation as it applies to MMIS


• Basic knowledge of cryptography as it relates to computer security
• CISSP certification

Duties and Responsibilities
The information security department has the following responsibilities:
• Create and implement information security policies and procedures.
• Ensure that procedures adhere to the security policies.
• Ensure that network security devices exist and are functioning correctly where they are required (such as firewalls, and software tools such as anti-virus software, intrusion detection software, log analysis software, etc.).
• Keep up-to-date on known computer security issues and ensure that all security devices and software are continuously updated as problems are found.
• Assist the operations team in establishing procedures and documentation pertaining to network security.
• Assist the engineering team to ensure that infrastructure design does not contain security weaknesses.
• Assist the facilities department to ensure that physical security is adequate to protect critical information and assets.
• Assist the customer systems administration and the professional services groups in advising clients on network security issues.
• Provide basic security training programs for all employees and — when they access information — partners, business associates, and customers.
• In the event of a security incident, work with the appropriate authorities as directed by the executive.
• Work with external auditors to ensure that information security is adequate and that the external auditors meet proper qualifications.

The Chief Security Officer has the following responsibilities:
• Ensure that the information security department is able to fulfill the above mandate.
• Hire personnel for the information security department.
• Hold regular meetings and set goals for information security personnel.
• Perform employee evaluations of information security personnel as directed by human resources.
• Ensure that information security staff receives proper training and certification where required.
• Participate in setting information security policies and procedures.
• Review all company procedures that involve information security.
• Manage the corporate information security policies and make recommendations for modifications as the needs arise.

INFORMATION SECURITY ADMINISTRATOR
Job Specifications
The information security administrator will:
• Work with security analysts and application developers to code and develop information security rules, roles, policies, standards, etc.


• Analyze existing security rules to ensure no problems will occur as new rules are defined, objects added, etc.
• Work with other administrative areas in information security activities.
• Troubleshoot problems when they occur in the test and production environments.
• Define and implement access control requirements and processes to ensure appropriate information access authorization across the organizations.
• Plan and develop user administration and security awareness measures to safeguard information against accidental or unauthorized modification, destruction, or disclosure.
• Manage the overall functions of user account administration and the companywide information security awareness training program according to corporate policies and federal regulations.
• Define relevant data security objectives, goals, and procedures.
• Evaluate data security user administration, resource protection, and security awareness training effectiveness.
• Evaluate and select security software products to support the assigned functions.
• Coordinate security software installation.
• Meet with senior management regarding data security issues.
• Participate in designing and implementing an overall data security program.
• Work with internal and external auditors as required.
• Ensure that user administration and information security awareness training programs adhere to HIPAA and other regulations.
• Respond to internal and external audit reports in a timely manner.
• Provide periodic status reports to the information security officer.
• Develop information security awareness training content.
• Implement a measuring system to monitor the workload of user administrators.
• Coordinate the development of, and evaluate, existing user administration procedures in areas outside information security.

Qualifications
• Human relations and communication skills to effectively interact with personnel from technical areas, internal auditors, and end users, promoting information security as an enabler and not as an inhibitor
• Decision-making ability to define data security policies, goals, and tactics, and to accurately measure these practices, as well as risk assessments and selection of security devices, including software tools
• Ability to organize and prioritize work to balance cost and risk factors and bring adequate data security measures to the information technology environments
• Ability to jointly establish measurable goals and objectives with staff, monitor progress on attainment of them, and adjust as required
• Ability to work collaboratively with IT and business unit management
• Ability to relate business requirements and risks to technology implementation for security-related issues
• Knowledge of role-based authorization methodologies and authentication technologies
• Knowledge of generally accepted security practices such as the ISO 17799 standard
• Security administration experience
• Good communication skills
• Two to four years of security administration experience
• SSCP or CISSP certification a plus, but not required


• Chief Security Officer. The chief security officer (CSO) must raise security issues and help to develop solutions. This position must communicate directly with executive management and effectively communicate information security concerns and needs. The CSO will ensure security management is integrated into the management of all corporate systems and processes to assure that system managers and data owners consider security in the planning and operation of the system. This position establishes liaisons with external groups to take advantage of external information sources and to improve the dissemination of this information throughout the organization.
• Information Protection Director. This position oversees the information protection department and staff. This position communicates significant issues to the CSO, sets goals, and creates plans for the information protection department, including budget development. This position establishes liaisons with internal groups, including the information resources management (IRM) office and traditional security offices.
• Information Protection Awareness and Training Manager. This position oversees all awareness and training activities within the organization. This position communicates with all areas of the organization about information protection issues and policies on an ongoing basis. This position ensures that all personnel and parties involved with outsourcing and customer communications are aware of their security responsibilities.
• Information Protection Technical/Network Manager. This position works directly with the IT areas to analyze and assess risks within the IT systems and functions. This position stays abreast of new information security risks as well as new and effective information security tools. This position also analyzes third-party connection risks and establishes requirements for the identified risks.
• Information Protection Administration Manager. This position oversees user account and access control practices. This person should have a wide range of experience across many different security areas.
• Privacy Officer. This position ensures the organization addresses new and emerging privacy regulations and concerns.
• Internal Auditor. This position performs audits within the corporate auditing area in such a way as to ensure compliance with corporate information protection policies, procedures, and standards.
• Security Administrator. The systems security administrator should participate in the selection and implementation of appropriate technical controls and security procedures, understand system vulnerabilities, and be able to respond quickly to system security problems. The security administrator is responsible for the daily administration of user IDs and system controls, and works primarily with the user community.


• Information Security Oversight Committee. This is a management information security forum established to provide direction and promote information protection visibility. The committee is responsible for review and approval of information security policy and overall responsibilities. Additionally, this committee is responsible for monitoring exposure to major threats to information assets, for reviewing and monitoring security incidents, and for approving major initiatives to enhance information security.

How Do You Effectively Maintain Separation of Duties?
When considering quality assurance for computer program code development, the principles of separation of duties are well established. For example, the person who designs or codes a program must not be the only one to test the design or the code. You need similar separation of duties for information protection responsibilities to reduce the likelihood of accidental compromise or fraud. A good example is the 1996 Omega Engineering case, in which the network administrator, Tim Lloyd, was solely responsible for everything to do with the computers supporting manufacturing. As a result, when Lloyd was terminated, he was able to add a line of program code to a major manufacturing program that ultimately deleted and purged all the programs in the system. Lloyd had also erased all the backup tapes, over which he likewise had complete control. Ultimately, the company suffered $12 million in damages, lost its competitive footing in the high-tech instrument and measurement market, and 80 employees lost their jobs as a result. If separation of duties had been in place, this could have been avoided. Management must become active in hiring practices (ensuring background checks); bonding individuals (which should be routine for individuals in all critical areas); and auditing and monitoring, which should be routine practices. Users should be recertified to resources, and resources to users, at least annually to ensure proper access controls are in place. Because the system administration group is probably placed within the confines of the computer room, an audit of physical and logical controls also needs to be performed by a third party. Certain information protection duties must not be performed by the same person or within one area. For example, the roles of systems operators, systems administrators, and security administrators should be separated, and security-relevant functions should be separated from other duties. Admittedly, ideal separation can be costly in time and money, and is often possible only within large staffs. You need to make information security responsibilities dependent upon your business, organization size, and associated risks. You must perform a risk assessment to determine which information protection tasks should be centralized and which should be distributed.
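One practical way to apply these principles is to record who holds each role for an application (as in the worksheet in Exhibit 26-2, discussed next) and flag combinations that should not rest with the same person. The short Python sketch below is one hypothetical illustration; the example assignments and the pairs of roles treated as incompatible (for instance, Testing versus Production, or User Account Creation versus User Access Approvals) are illustrative assumptions, not requirements stated in this chapter.

# Minimal sketch of an automated separation-of-duties check over role assignments,
# such as those recorded in a worksheet like Exhibit 26-2. The role names, the
# example assignments, and the "incompatible" pairs are illustrative assumptions;
# each organization must define its own conflicting combinations.

assignments = {
    "User Account Creation": {"j.smith"},
    "User Access Approvals": {"j.smith"},   # same person as above -> a conflict
    "Testing": {"qa.team"},
    "Production": {"ops.team"},
    "Backups": {"ops.team"},
}

incompatible_pairs = [
    ("User Account Creation", "User Access Approvals"),
    ("Testing", "Production"),
]

def find_conflicts(assignments, incompatible_pairs):
    """Return (role_a, role_b, holders) where the same party holds both roles."""
    conflicts = []
    for role_a, role_b in incompatible_pairs:
        overlap = assignments.get(role_a, set()) & assignments.get(role_b, set())
        if overlap:
            conflicts.append((role_a, role_b, overlap))
    return conflicts

for role_a, role_b, who in find_conflicts(assignments, incompatible_pairs):
    print(f"Separation-of-duties conflict: {sorted(who)} hold both '{role_a}' and '{role_b}'")

Even a simple check like this, run whenever the worksheet is updated, gives the annual recertification of users and resources described above something concrete to review.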


When considering separation of duties for information security roles, it is helpful to use a tool similar to the one in Exhibit 26-2.

How Large Should the Information Protection/Security Department Be?
Ah, if only there were one easy answer to the question of how large an information protection department should be. This is one of the most commonly asked questions I have heard at information security conferences over the past several years, and I have seen this question asked regularly within all the major information security companies. There is no "best practice" magic number or ratio. The size of an information protection department depends on many factors. These include, but are not limited to, the following:

• Industry
• Organization size
• Network diversification and size
• Number of network users
• Geographical locations
• Outsourced functions

Whatever size you determine is best for your organization, you need to ensure the staff you choose has a security background or, at least, has some basic security training.

SUMMARY
This chapter reviewed a wide range of issues involved in creating an information protection program and department. Specifically:

• Organizational information protection responsibilities
• Roles of an information protection department
• Information protection budgeting
• Executive management support of information protection
• Where to place the information protection department within your organization
• Separation of information security duties
• Descriptions of information protection responsibilities

Accompanying this chapter are a tool to help you determine separation of information security duties (Exhibit 26-2) and some example information protection job descriptions to help you write your own (Exhibit 26-1).

References

The following references were used to collect and support much of the information within this chapter, and they serve as a general reference for information protection practices.


Exhibit 26-2. Application roles and privileges worksheet.

Application System:        ________________________________
Purpose/Description:       ________________________________
Information Owner:         ________________________________
Application/System Owner:  ________________________________
Implementation Date:       ________________________________

Role/Function                Group/Persons    Access Rights    Comments
User Account Creation
Backups
Testing
Production
Change Approvals
Disaster Recovery Plans
Disable User Accounts
Incident Response
Error Correction
End-User Training
Application Documentation
Quality Assurance
User Access Approvals

Other information was gathered from discussions with clients and peers throughout my years working in information technology, as well as from widely publicized incidents related to information protection.

1. National Institute of Standards and Technology (NIST) publication, Management of Risks in Information Systems: Practices of Successful Organizations.
2. NIST publication, CSL Bulletin, August 1993, Security Program Management.
3. NIST Generally Accepted System Security Principles (GSSPs).
4. ISO 17799.
5. Organization for Economic Cooperation and Development's (OECD) Guidelines for the Security of Information Systems.
6. Computer Security Institute (CSI) and FBI joint annual Computer Crime and Security Survey.
7. CIO Magazine, 1-17-2002, The security spending mystery, by Scott Berinato.
8. CIO Magazine, 12-6-2001, Will security make a 360-degree turn?, by Sarah D. Scalet.
9. CIO Magazine, 8-9-2001, Another chair at the table, by Sarah D. Scalet.
10. CIO Magazine, 10-1-200, Protection money, by Tom Field.


ABOUT THE AUTHOR
Rebecca Herold, CISSP, CISA, FLMI, is chief privacy officer and senior security architect for QinetiQ Trusted Information Management, Inc. (Q-TIM), where she has worked since the inception of the company. She has more than 13 years of information security experience. Herold edited The Privacy Papers, released in December 2001. She has also written numerous magazine and newsletter articles on information security topics and has given many presentations at conferences and seminars. Herold may be reached at [email protected].


Chapter 27

Organizing for Success: Some Human Resources Issues in Information Security

Jeffrey H. Fenton, CBCP, CISSP
James M. Wolfe, MSM

In a holistic view, information security is a triad of people, process, and technology. Appropriate technology must be combined with management support, understood requirements, clear policies, trained and aware users, and plans and processes for its use. While the perimeter is traditionally emphasized, threats from inside have received less attention. Insider threats are potentially more serious because an insider already has knowledge of the target systems. When dealing with insider threats, people and process issues are paramount. Also, too often, security measures are viewed as a box to install (technology) or a one-time review. Security is an ongoing process, never finished. This chapter focuses on roles and responsibilities for performing the job of information security. Roles and responsibilities are part of an operationally excellent environment, in which people and processes, along with technology, are integrated to sustain security on a consistent basis. Separation of responsibilities, requiring at least two persons with separate job duties to complete a transaction or process end-to-end and avoiding conflicts of interest, is also introduced as part of organizing for success. This concept originated in accounting and financial management; for example, the person who approves a purchase should not also be able to write the check. The principle is applied to several roles in information technology (IT) development and operations, as well as the IT system development life


cycle. All these principles support the overall management goal to protect and leverage the organization's information assets.

INFORMATION SECURITY ROLES AND RESPONSIBILITIES
This section introduces the functional components of information security, from a role and responsibility perspective, along with several other IT and business functional roles. Information security is much more than a specialized function; it is everyone's responsibility in any organization.

The Business Process Owner, Information Custodian, and End User
The business process owner is the manager responsible for a business process such as supply-chain management or payroll. This manager would be the focal point for one or more IT applications and data supporting the processes. The process owner understands the business needs and the value of information assets to support them. The International Standard ISO 17799, Information Security Management, defines the role of the information asset owner responsible for maintaining the security of that asset.1 The information custodian is an organization, usually the internal IT function or an outsourced provider, responsible for operating and managing the IT systems and processes for a business owner on an ongoing basis. The business process owner is responsible for specifying the requirements for that operation, usually in the form of a service level agreement (SLA). While information security policy vests ultimate responsibility in business owners for risk management and compliance, the day-to-day operation of the compliance and risk mitigation measures is the responsibility of information custodians and end users.

End users interact with IT systems while executing business functional responsibilities. End users may be internal to the organization, or business partners, or end customers of an online business. End users are responsible to comply with information security policy, whether general, issue-specific, or specific to the applications they use. Educating end users on application usage, security policies, and best practices is essential to achieve compliance and quality. In an era of budget challenges for the information security functions, the educated and committed end user is an information security force multiplier for defense-in-depth. John Weaver, in a recent essay, "Zen and Information Security,"2 recommends turning people into assets. For training and awareness, this includes going beyond rules and alerts to make security "as second nature as being polite to customers," as Neal O'Farrell noted in his recent paper, "Employees: Your Best Defense, or Your Greatest Vulnerability?"3 All users should be trained to recognize potential social engineering. Users should also watch the end results of the business processes they


use. Accounting irregularities, sustained quality problems in manufacturing, or incorrect operation of critical automated temperature-control equipment could be due to many causes, including security breaches. When alert end users notice these problems and solve them in a results-oriented manner, they could identify signs of sabotage, fraud, or an internal hacker that technical information security tools might miss. End users who follow proper practices and alert management of suspicious conditions are as important as anti-virus software, intrusion detection, and log monitoring. Users who learn this holistic view of security can also apply the concepts to their homes and families.4

In today's environment, users include an increasing proportion of non-employee users, including temporary or contract workers, consultants, outsourced provider personnel, and business-partner representatives. Two main issues with non-employee users are nondisclosure agreements (NDAs) and the process for issuing and deleting computer accounts. Non-employee users should be treated as business partners, or representatives of business partners, if they are given access to systems on the internal network. This should include a written, signed NDA describing their obligations to protect sensitive information. In contrast with employees, who go through a formal human resources (HR) hiring and separation process, non-employee users are often brought in by a purchasing group (for temporary labor or consulting services), or brought in by the program manager for a project or outsourced activity. While a formal HR information system (HRIS) can alert system administrators to delete computer accounts when employees leave or transfer, non-employees who do not go through the HRIS would not generate this alert. Removing computer accounts for departed non-employees is an operational weak link in many organizations.

Information Security Functions
Information security functions fall into five main categories — policy/strategy/governance, engineering, disaster recovery/business continuity (DR/BC), crisis management and incident response/investigation, and administrative/operational (see Exhibit 27-1). In addition, information security functions have many interfaces with other business functions as well as with outsource providers, business partners, and other outside organizations. Information security policy, strategy, and governance functions should be organized in an information security department or directorate, headed by an information security manager or director who may also be known as the chief information security officer (CISO). This individual directs, coordinates, plans, and organizes information security activities throughout the organization, as noted by Charles Cresson Wood.5 The information security function must work with many other groups within and outside the organization,


Exhibit 27-1. Five information security roles: information security policy, strategy, and governance; information security engineering; disaster recovery/business continuity; information security administration and operations; and crisis management, incident response, and investigations.

including physical security, risk management (usually an insurance-related group in larger companies), internal audit, legal, internal and external customers, industry peers, research groups, and law enforcement and regulatory agencies.

Within the information security function, policy and governance include the development and interpretation of written information security policies for the organization, an education and awareness program for all users, and a formal approval and waiver process. Any deviation from policy represents a risk above the acceptable level represented by compliance with policy. Such deviations should be documented with a formal waiver approval, including the added risk and additional risk mitigation measures applied, a limited term, and a plan to achieve compliance. Ideally, all connections between the internal network and any outside entity should be consolidated as much as possible through one or a few gateways and demilitarized zones (DMZs), with a standard architecture and continuous monitoring. In very large organizations with decentralized business units, this might not be possible. When business units have unique requirements for external connectivity, those should be formally reviewed and approved by the information security group before implementation.

The security strategy role, also in the central information security group, includes the identification of long-term technology and risk trends driving the evolution of the organization's security architecture. The information security group should develop a security technology roadmap that plans, over the next five years, the security technologies the organization will need, driven by


risk management and business needs. Once the roadmap is identified, the security group would be responsible for identifying and integrating the products to support those capabilities. Evaluating new products is another part of this activity, and a formal test laboratory should be provided. In larger IT organizations, the security strategy function would work closely with an overall IT strategy function. The information security group should have project responsibility to execute all security initiatives that affect the entire organization.

Information security engineering is the function of identifying security requirements and bringing them to realization when a specific network or application environment is newly developed. While the information security group would set the policies as part of the policy and governance function, security engineers would assess the risks associated with a particular program (such as implementing a new enterprise resource planning [ERP] system), identify the applicable policies, and develop a system policy for the system or application environment. Working through the system development life cycle, engineers would identify requirements and specifications, develop the designs, and participate in the integration and testing of the final product. Engineering also includes developing the operational and change-control procedures needed to maintain security once the system is fielded. Information security engineering may be added to the central information security group, or it may be organized as a separate group (as part of an IT systems engineering function).

Disaster recovery/business continuity (DR/BC) includes responding to and recovering from disruptive incidents. While DR involves the recovery of IT assets, BC is broader and includes recovery of the business functions (such as alternative office space or manufacturing facilities). While DR and BC began by focusing on physical risks to availability, especially natural disasters, both disciplines have broadened to consider typically nonphysical events such as breaches of information confidentiality or integrity. Much of the planning component of DR/BC can utilize the same risk assessment methods as for information security risk assessments. In large organizations, the DR/BC group is often separate from the central information security group, and included in an operational IT function, because of DR's close relationship to computer operations and backup procedures. Because of the convergence of DR/BC applicability and methods with other information security disciplines, including DR/BC in the central information security group is a worthwhile option.

Crisis management is the overall discipline of planning for and responding to emergencies. Crisis management in IT began as a component of DR. With the broadening of the DR/BC viewpoint, crisis management needs to cover incident types beyond the traditional physical or natural disasters. For all types of incidents, similar principles can be applied to build a team,


develop a plan, assess the incident at the onset and identify its severity, and match the response to the incident. In many organizations, the physical security and facilities functions have developed emergency plans, usually focusing on physical incidents or natural disasters, separate from the DR plans in IT. For this reason, an IT crisis management expert should ensure that IT emergency plans are integrated with other emergency plans in the organization. With the broadening of crisis to embrace nonphysical information security incidents, the integrative role must also include coordinating the separate DR plans for various IT resources. During certain emergencies, while the emergency team is in action, it may be necessary to weigh information security risks along with other considerations (such as rapidly returning IT systems or networks to service). For this reason, as well as for coordinating the plans, the integrative crisis management role should be placed in the central information security group. Information security crisis management can also include working with the public relations, human resources, physical security, and legal functions as well as with suppliers, customers, and outside law enforcement agencies.

Incident response has already been noted as part of crisis management. Many information security incidents require special response procedures different from responding to a physical disaster. These procedures are closely tied to monitoring and notification, described in the next two paragraphs. An organization needs to plan for responding to various types of information security attacks and breaches, depending on their nature and severity. Investigation is closely related to incident response, because the response team must identify when an incident might require further investigation after service is restored. Investigation is fundamentally different in that it takes place after the immediate emergency is resolved, and it requires evidence collection and custody procedures that can withstand subsequent legal scrutiny. Along with this, however, the incident response must include the processes and technology to collect and preserve logs, alerts, and data for subsequent investigation. These provisions must be in place and operational before an incident happens. The investigation role may be centralized in the information security group, or decentralized in large organizations provided that common procedures are followed. If first-line investigation is decentralized to business units in a large corporation, there should be a central information security group specialist to set technical and process direction on incident response planning and investigation techniques. For all incidents and crises, the lessons learned must be documented — not to place blame but to prevent future incidents, improve the response, and help the central information security group update its risk assessment and strategy.

Information security administration and operations include account management, privilege management, security configuration management (on client systems, servers, and network devices), monitoring and notification,


and malicious code and vulnerability management. These administrative and operational functions are diverse, not only in their content but also in who performs them, how they are performed, and where they reside organizationally. Account and privilege management include setting up and removing user accounts for all resources requiring access control, and defining and granting levels of privilege on those systems. These functions should be performed by a central security operations group, where possible, to leverage common processes and tools as well as to ensure that accounts are deleted promptly when users leave or transfer. In many organizations, however, individual system administrators perform these tasks.

Security configuration management includes configuring computer operating systems and application software, and network devices such as routers and firewalls, with security functions and access rules. This activity actually implements much of the organization's security policy. While the central information security group owns the policy, configuration management is typically distributed among system administrators and telecommunication network administrators. This is consistent with enabling the central information security group to focus on its strategic, policy, and governance roles.

Monitoring and notification should also be part of a central security operations function, with the ability to "roll up" alerts and capture logs from systems and network devices across the enterprise. Intrusion detection systems (IDSs) would also be the responsibility of this group. In many large organizations, monitoring and notification are not well integrated, with some locally administered systems depending on their own system administrators who are often overworked with other duties. As noted earlier, monitoring and notification processes and tools must meet the needs of incident response. The additional challenges of providing 24/7 coverage are also noted below.

Malicious code and vulnerability management includes deploying and maintaining anti-virus software, isolating and remediating infected systems, and identifying and correcting security vulnerabilities (in operating systems, software applications, and network devices). These activities require centrally driven technical and process disciplines. It is not enough only to expect individual desktop users to keep anti-virus software updated and individual system administrators to apply patches. A central group should test and push anti-virus updates. The central group should also test patches on representative systems in a laboratory and provide a central repository of alerts and patches for system and network administrators to deploy. Malicious code management is also closely tied to incident response. With the advent of multifunctional worms, and exploits appearing quickly after vulnerabilities become known, an infection could easily occur before patches or anti-virus signatures become available. In some cases, anomaly-based IDSs can detect unusual behavior before


patches and signatures are deployed, bringing malicious code and vulnerability management into a closer relationship with monitoring. These central activities cross several functional boundaries in larger IT organizations, including e-mail/messaging operations, enterprise server operations, and telecommunications, as well as security operations. One approach is establishing a cross-functional team to coordinate these activities, with technical leadership in the central information security organization.

Distributed Information Security Support in Larger Organizations
Some of the challenges of providing security support in a large organization, especially a large corporation with multiple business units, have already been noted. Whether IT functions in general are centralized or distributed reflects the culture of the organization as well as its business needs and technology choices. In any organization, presenting the business value of the information security functions is challenging. Beyond simply preventing bad things from happening, security is an enabler for E-business. To make this case, the central information security group needs to partner with the business as its internal customer. Building a formal relationship with the business units in a large enterprise is strongly recommended. This relationship can take the shape of a formal information protection council, with a representative from each division or business unit. The representative's role, which must be supported by business unit management, would include bringing the unique technical, process, and people concerns of security, as viewed by that business unit, to the information security group through two-way communication. The representatives can also assist in security training and awareness, helping to push the program to the user community. Representatives can also serve in a first-line role to assist their business units with the approval and waiver requests described earlier.

Information Security Options for Smaller Organizations
The most important information security problem in many smaller organizations is the lack of an information security function and program. Information security must have an individual (a manager, director, or CISO) with overall responsibility. Leaving it to individual system administrators, without policy and direction, will assure failure. Once this need is met, the next challenge is to scale the function appropriately to the size and needs of the business. Some of the functions, which might be separate groups in a large enterprise, can be combined in a smaller organization. Security engineering and parts of security operations (account and privilege management, monitoring and notification, incident response, crisis management, and DR) could be combined with the policy, governance, and user awareness roles into the central information security group. The hands-on security configuration management of desktops, servers, and network devices


should still be the separate responsibility of system and network administrators. In the earlier discussion, the role of an in-house test laboratory, especially for patches, was noted. Even in a smaller organization, it is strongly recommended that representative test systems be set aside and patches be tested by a system administrator before deployment. For smaller organizations, there are special challenges in security strategy. In a smaller enterprise, the security technology roadmap is set by technology suppliers, as the enterprise depends on commercial off-the-shelf (COTS) vendors to supply all its products. Whatever the COTS vendors supply becomes the de facto security strategy for the enterprise. To a great extent, this is still true in large enterprises unless they have a business case to develop some of their own solutions, and have or engage the expertise to do so. While a large enterprise can exert some influence over its suppliers, and should develop a formal technology strategy, smaller enterprises should not overlook this need. If a smaller enterprise cannot justify a strategy role on a full-time basis, it could consider engaging external consultants to assist with this function initially and on a periodic review basis. Consultants can also support DR plan development. As with any activity in information security, doing it once is not enough. The strategy or the DR plan must be maintained.

Internal and External Audit
The role of auditors is to provide an independent review of controls and compliance. The central information security group, and security operational roles, should not audit their own work. To do so would be a conflict of interest. Instead, auditors provide a crucial service because of their independence. The central information security group should partner with the internal audit organization to develop priorities for audit reviews based on risk, exchange views on the important risks to the enterprise, and develop corrective action plans based on the results of past audits. The audit organization can recognize risks based on what it sees in audit results. External auditors may be engaged to provide a second kind of independent review. For external engagements, it is very important to specify the scope of work, including the systems to be reviewed, the attributes to be reviewed and tested, and the processes and procedures for the review. These ground rules are especially important where vulnerability scanning or penetration testing is involved.

Outsourcing Providers
Outsourcing providers offer services for a variety of information security tasks, including firewall management and security monitoring. Some Internet service providers (ISPs) offer firewall and VPN management. Outsourcing firewall management can be considered if the organization's environment is relatively stable, with infrequent changes. If changes are frequent, an outsourcing provider's ability to respond quickly can be a


limiting factor. By contrast, 24/7 monitoring of system logs and IDSs can be more promising as an outsourced task. Staffing one seat 24/7 requires several people. This is out of reach for smaller organizations and a challenge in even the largest enterprises. An outsourcing provider for monitoring can leverage a staff across its customer base. Also, in contrast with the firewall, where the organization would trust the provider to have privileged access to firewalls, monitoring can be done with the provider having no interactive access to any of the customer's systems or network devices. In all consulting and outsourcing relationships, it is essential to have a written, signed NDA to protect the organization's sensitive information. Also, the contract must specify the obligations of the provider when the customer has an emergency. If an emergency affects many of the same provider's customers, how would priority be determined?

To Whom Should the Information Security Function Report?
Tom Peltier, in a report for the Computer Security Institute,6 recommends that the central information security group report as high as possible in the organization, at least to the chief information officer (CIO). The group definitely should not be part of internal audit (due to the potential for conflict of interest) or part of an operational group in IT. If it were part of an operational group, conflict of interest could also result. Peltier noted that operational groups' top priority is maintaining maximum system uptime and production schedules. This emphasis can work against implementing and maintaining needed security controls. The central information security group should also never be part of an IT system development group, because security controls are often viewed as an impediment or an extra cost add-on to development projects. A security engineer should be assigned from the security engineering group to support each development project. There are several issues around having the central information security group as part of the physical security organization. This can help with investigations and crisis management. The drawbacks are technology incompatibility (physical security generally has little understanding of IT), being perceived only as preventing bad things from happening (contrast with the business enabler viewpoint noted earlier), and being part of a group that often suffers budget cuts during difficult times. Tracy Mayor7 presented a successful experience with a single organization combining physical security and information security. Such an organization could be headed by a chief security officer (CSO), reporting to the chief executive officer (CEO), placing the combined group at the highest level. The combined group could also include the risk management function in large enterprises, an activity usually focused on insurance risks. This would recognize the emerging role of insurance for information security risks. The model can work but would require cultural compatibility, cross-training,


management commitment, and a proactive partnership posture with customers. Another alternative, keeping information security and physical security separate, is to form a working partnership to address shared issues, with crisis management as a promising place to begin. Similarly, the CISO can partner with the risk management function. Although the DR/BC function, as noted earlier, might be part of an operational group, DR/BC issues should be represented to upper management at a comparable level to the CISO. The CISO could consider making DR/BC a component of risk management in security strategy, and partnering with the head of the DR/BC group to ensure that issues are considered and presented at the highest level. Ed Devlin has recommended8 that a BC officer, equal to the CISO, report at the same high level.

FILLING THE ROLES: REMARKS ON HIRING INFORMATION SECURITY PROFESSIONALS
One of the most difficult aspects of information security management is finding the right people for the job. What should the job description say? Does someone necessarily need specific information security experience? What are the key points for choosing the best candidate? Answering these questions will provide a clearer picture of how to fill the role effectively.

Note: This section outlines several procedures for identifying and hiring job candidates. It is strongly recommended to review these procedures with your human resources team and legal advisors before implementing them in your environment.

Job Descriptions
A description of the position is the starting point in the process. This job description should contain the following:9


• The position title and functional reporting relationship
• The length of time the candidate search will be open
• A general statement about the position
• An explicit description of responsibilities, including any specific subject matter expertise required (such as a particular operating system or software application)
• The qualifications needed, including education
• The desired attributes
• Job location (or telecommuting if allowed) and anticipated frequency of travel
• Start date
• A statement on required national security clearances (if any)


• A statement on requirements for U.S. citizenship or resident alien status, if the position is associated with a U.S. Government contract requiring such status
• A statement on the requirements for a background investigation and the organization's drug-free workplace policy

Other position attributes that could be included are:
• Salary range
• Supervisor name
• Etc.

The general statement should be two to three sentences, giving the applicant some insight into what the position is. It should be an outline of sorts for the responsibilities section. For example:

General: The information security specialist (ISS) uses current computer science technologies to assist in the design, development, evaluation, and integration of computer systems and networks to maintain system security. Using various tools, the ISS will perform penetration and vulnerability analyses of corporate networks and will prepare reports that may be submitted to government regulatory agencies.

The most difficult part of the position description is the responsibilities section. To capture what is expected from the new employee, managers are encouraged to engage their current employees for input on the day-to-day activities of the position. This accomplishes two goals. First, it gives the manager a realistic view of what knowledge, skills, and abilities will be needed. Second, it involves the employees who will be working with the new candidate in the process. This can prevent some of the difficulties current employees encounter when trying to accept new employees. More importantly, it makes them feel a valued part of the process. Finally, this is more accurate than reusing a previous job description or a standard job description provided by HR. HR groups often have difficulty describing highly technical jobs. An old job description may no longer match the needs of a changing environment. Most current employees are doing tasks not enumerated in the job descriptions when they were hired. Using the above general statement, an example of responsibilities might be:

• Evaluate new information security products using a standard image of the corporate network and prepare reports for management.
• Represent information security in the design, development, and implementation of new customer secured networks.
• Assist in customer support issues.
• Using intrusion detection tools, test the corporation's network for vulnerabilities.
• Assist government auditors in regulatory compliance audits.


Relevant Experience
When hiring a new security professional, it is important to ensure that the person has the necessary experience to perform the job well. There are few professional training courses for information security professionals. Some certification programs, such as the Certified Information System Security Professional (CISSP),10 require experience that would not be relevant for an entry-level position. In addition, Lee Kushner noted, "… while certification is indeed beneficial, it should be looked on as a valuable enhancement or add-on, as opposed to a prerequisite for hiring."11 Several more considerations can help:

• Current information security professionals on the staff can describe the skills they feel are important and which might be overlooked.
• Some other backgrounds can help a person transition into an information security career:
  — Auditors are already trained in looking for minute inconsistencies.
  — Computer sales people are trained to know the features of computers and software. They also have good people skills and can help market the information security function.
  — Military experience can include thorough process discipline and hands-on expertise in a variety of system and network environments. Whether enlisted or officer grade, military personnel are often given much greater responsibility (in numbers supervised, value of assets, and criticality of missions) than civilians with comparable years of experience.
  — A candidate might meet all qualifications except for having comparable experience on a different operating system, another software application in the same market space, or a different hardware platform. In many cases, the skills are easily transferable with some training for an eager candidate.
• A new employee might have gained years of relevant experience in college (or even in high school) in part-time work. An employee with experience on legacy systems may have critical skills difficult to find in the marketplace. Even if an employee with a legacy system background needs retraining, such an employee is often more likely to want to stay and grow with an organization. For a new college graduate, extracurricular activities that demonstrate leadership and discipline, such as competing in intercollegiate athletics while maintaining a good scholastic record, should also be considered.

The Selection Process
Selecting the best candidate is often difficult. Current employees should help with interviewing the candidates. The potential candidates should speak to several, if not all, of the current employees. Most firms use interviews, yet the interview process is far from perfect. HR professionals, who


have to interview candidates for many kinds of jobs, are not able to focus on the unique technical needs of information security. Any interview process can suffer from stereotypes, personal biases, and even the order in which the candidates are interviewed. Having current employees perform at least part of the interview can increase its validity.12 Current employees can assess the candidate's knowledge with questions in their individual areas of expertise. Two additional recommendations are:

1. Making sure the interviews are structured with the same list of general questions for each candidate
2. Using a candidate score sheet for interviewers to quantify their opinions about a candidate

A good place to start is the required skills section and desired skills section of the position description. The required skills should be weighted about 70 percent of the score sheet, while the desired skills should be about 30 percent. Filling an open position in information security can be difficult. Using tools such as the position description13 and the candidate score sheet (see Exhibits 27-2 and 27-3) can make selecting a new employee much easier. Having current employees involved throughout the hiring process is strongly recommended and will make choosing the right person even easier.

Because information security personnel play a critical and trusted role in the organization, criminal and financial background checks are essential. Eric Shaw et al.14 note that candidates should also be asked about past misuse of information resources. Resumes and references should be checked carefully. The same clearance procedures should apply to consultants, contractors, and temporary workers, depending on the access privileges they have. ISO 1779915 also emphasizes the importance of these measures. Shaw and co-authors recommend working with HR to identify and intervene effectively when any employee (regardless of whether in information security) exhibits at-risk conduct. Schlossberg and Sarris16 recommend repeating background checks annually for existing employees. HR and legal advisors must participate in developing and applying the background check procedures.

When Employees and Non-Employees Leave
The issue of deleting accounts promptly when users leave has already been emphasized. Several additional considerations apply, especially if employees are being laid off or any departure is on less than amicable terms. Anne Saita17 recommends moving critical data to a separate database, to which the user(s) leaving do not have access. Users leaving must be reminded of their NDA obligations. Saita further notes that the users' desktop computers could also contain backdoors and should be disconnected.


Exhibit 27-2. Sample position description.

Job Title:         Information Security Specialist Associate
Pay Range:         $40,000 to $50,000 per year
Application Date:  01/25/03–02/25/03
Business Unit:     Data Security Assurance
Division:          Computing Services
Location:          Orlando, FL
Supervisor:        John Smith

General: The Information Security Specialist Associate uses current computer science technologies to assist in the design, development, evaluation, and integration of computer systems and networks to maintain system security. Using various tools, the information security specialist associate will perform penetration and vulnerability analyses of corporate networks and will prepare reports that may be submitted to government regulatory agencies.

Responsibilities:
• Evaluate new information security products using a standard image of the corporate network and prepare reports for management.
• Represent information security in the design, development, and implementation of new customer secured network.
• Assist in day-to-day customer support issues.
• Using intrusion detection tools, test the corporation's network for vulnerabilities.
• Provide security and integration services to internal and commercial customers.
• Build and maintain user data groups in the Win NT environment.
• Add and remove user Win NT accounts.
• Assist government auditors in regulatory compliance audits.

Required Education/Skills:
• Knowledge of Windows, UNIX, and Macintosh operating systems
• Understanding of current networking technologies, including TCP/IP and Banyan Vines
• Microsoft Certified Systems Engineer certification
• Bachelor's degree in computer science or relevant discipline

Desired Education/Skills:
• Two years of information security experience
• MBA
• CISSP certification

Identifying at-risk behavior, as noted earlier, is even more important for the employees still working after a layoff who could be overworked or resentful.

SEPARATION OF RESPONSIBILITIES

Separation of responsibilities, or segregation of duties, originated in financial internal control. The basic concept is that no single individual has complete control over a sequence of related transactions.18
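As a concrete illustration of that basic concept, the sketch below applies a two-person rule to a payment-style transaction: the person who initiates it cannot also approve it. This is a minimal, hypothetical example; the function and field names are assumptions made for illustration, not drawn from the chapter.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    amount: float
    initiated_by: str
    approved_by: Optional[str] = None

def approve(txn: Transaction, approver: str) -> None:
    """Enforce separation of duties: initiator and approver must differ."""
    if approver == txn.initiated_by:
        raise PermissionError("separation of duties: initiator cannot approve own transaction")
    txn.approved_by = approver

if __name__ == "__main__":
    txn = Transaction(amount=25_000.00, initiated_by="alice")
    try:
        approve(txn, "alice")   # rejected: one person controls the whole sequence
    except PermissionError as err:
        print(err)
    approve(txn, "bob")         # accepted: two individuals are involved
    print("approved by:", txn.approved_by)
```

Forcing a second individual into the sequence does not prevent collusion, but it makes fraud harder to commit and more likely to be noticed, which is exactly the point made above.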


Exhibit 27-3. Candidate score sheet.

Candidate Name:  Fred Jones
Date:            1/30/2003
Position:        Information Security Specialist Associate

Required Skill           Knowledge Level(a)   Multiplier   Score
OS knowledge                     2               0.2        0.4
Networking knowledge             2               0.2        0.4
Bachelor's degree                3               0.2        0.6
MCSE                             2               0.1        0.2
Desired Skill
InfoSec experience               0               0.1        0
MBA                              2               0.1        0.2
CISSP                            0               0.1        0
Total                                                       1.8

(a) Knowledge Level:
0 — Does not meet requirement
1 — Partially meets requirement
2 — Meets requirement
3 — Exceeds requirement
Knowledge level × Multiplier = Score

Note: It is strongly recommended to review your procedures with your human resources team and legal advisors.
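The arithmetic behind the score sheet is simple enough to automate so that every interviewer's ratings are combined the same way. The sketch below reproduces the exhibit's weighting (required skills roughly 70 percent, desired skills roughly 30 percent); the data structures and function name are assumptions made for illustration.

```python
# Weights taken from Exhibit 27-3; skill names mirror the sample score sheet.
REQUIRED = {"OS knowledge": 0.2, "Networking knowledge": 0.2,
            "Bachelor's degree": 0.2, "MCSE": 0.1}               # sums to 0.7
DESIRED = {"InfoSec experience": 0.1, "MBA": 0.1, "CISSP": 0.1}  # sums to 0.3

def candidate_score(ratings: dict) -> float:
    """Knowledge level (0-3) times multiplier, summed over all listed skills."""
    total = 0.0
    for skill, weight in {**REQUIRED, **DESIRED}.items():
        level = ratings.get(skill, 0)          # unrated skills count as 0
        if not 0 <= level <= 3:
            raise ValueError(f"knowledge level for {skill!r} must be 0-3")
        total += level * weight
    return round(total, 2)

if __name__ == "__main__":
    fred = {"OS knowledge": 2, "Networking knowledge": 2, "Bachelor's degree": 3,
            "MCSE": 2, "InfoSec experience": 0, "MBA": 2, "CISSP": 0}
    print(candidate_score(fred))   # 1.8, matching the exhibit's total
```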

A 1977 U.S. federal law, the Foreign Corrupt Practices Act,19 requires all corporations registering with the Securities and Exchange Commission to have effective internal accounting controls. Despite its name, this law applies even if an organization does no business outside the United States.20 When separation of duties is enforced, it is more difficult to defraud the organization because two or more individuals must be involved and it is more likely that the conduct will be noticed.

In the IT environment, separation of duties applies to many tasks. Vallabhaneni21 noted that computer operations should be separated from application programming, job scheduling, the tape library, the help desk, systems programming, database programming, information security, data entry, and users. Information security should be separate from database and application development and maintenance, system programming, telecommunications, data management or administration, and users. System programmers should never have access to application code, and application programmers should not have access to live production data. Kabay22 noted that separation of duties should be applied throughout the development life cycle so that the person who codes a program would not also test
it, test systems and production systems are separate, and operators cannot modify production programs. ISO 17799 emphasizes23 that a program developer or tester with access to the production system could make unauthorized changes to the code or to production data. Conversely, compilers and other system utilities should also not be accessible from production systems. The earlier discussion of system administration and security operations noted that account and privilege management should be part of a central security operations group separate from local system administrators. In a small organization where the same person might perform both these functions, procedures should be in place (such as logging off and logging on with different privileges) to provide some separation.24

Several related administrative controls go along with separation of duties. One control is requiring mandatory vacations each year for certain job functions. When another person has to perform a job temporarily, a fraud perpetrated by the regular employee might be noticed. Job rotation has a similar effect.25 Another approach is dual control, requiring two or more persons to perform an operation simultaneously, such as accessing emergency passwords.26

Separation of duties helps to implement the principle of least privilege.27 Each user is given only the minimum access needed to perform the job, whether the access is logical or physical. Beyond IT positions, every position that has any access to sensitive information should be analyzed for sensitivity. Then the security requirements of each position can be specified, and appropriately controlled access to information can be provided. When each position at every level is specified in this fashion, HR can focus background checks and other safeguards on the positions that truly need them. Every worker with access to sensitive information has security responsibilities. Those responsibilities should be made part of the job description28 and briefed to the user annually with written sign-off.

SUMMARY

This chapter has presented several concepts on the human side of information security, including:

• Information security roles and responsibilities, including user responsibilities
• Information security relationships to other groups in the organization
• Options for organizing the information security functions
• Staffing the information security functions
• Separation of duties, job sensitivity, and least privilege

Security is a triad of people, process, and technology. This chapter has emphasized the people issues, the importance of good processes, and the need to maintain security continuously. The information security function
has unique human resources needs. Attention to the people issues throughout the enterprise helps to avoid or detect many potential security problems. Building processes based on separation of duties and least privilege helps build in controls organic to the organization, making security part of the culture while facilitating the business. Secure processes, when understood and made part of each person's business, are a powerful complement to technology. When the organization thinks and acts securely, the job of the information security professional becomes easier.

References

1. British Standard 7799/ISO Standard 17799: Information Security Management, London: British Standards Institute, 1999, Section 4.1.3.
2. Weaver, John, Zen and information security, available online at http://www.infosecnews.com/opinion/2001/12/19_03.htm.
3. O'Farrell, Neal, Employees: your best defense, or your greatest vulnerability?, in SearchSecurity.com, available online at http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci771517,00.html.
4. O'Farrell, Neal, Employees: your best defense, or your greatest vulnerability?, in SearchSecurity.com, available online at http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci771517,00.html.
5. Wood, Charles Cresson, Information Security Roles & Responsibilities Made Easy, Houston: PentaSafe, 2001, p. 72.
6. Peltier, Tom, Where should information protection report?, Computer Security Institute editorial archive, available online at http://www.gocsi.com/infopro.htm.
7. Mayor, Tracy, Someone to watch over you, CIO, March 1, 2001.
8. Devlin, Ed, Business continuity programs, job levels need to change in the wake of Sept. 11 attacks, Disaster Recovery J., Winter, 2002.
9. Bernardin, H. John and Russell, Joyce, Human Resource Management: An Experimental Approach, 2nd ed., New York: McGraw-Hill, 1998, pp. 73–101.
10. International Information System Security Certification Consortium (ISC)2, available online at http://www.isc2.org/.
11. Quoted in Rothke, Ben, The professional certification predicament, Comput. Security J., V. XVI, No. 2 (2000), p. 2.
12. Bernardin, H. John and Russell, Joyce, Human Resource Management: An Experimental Approach, 2nd ed., New York: McGraw-Hill, 1998, p. 161.
13. Bernardin, H. John and Russell, Joyce, Human Resource Management: An Experimental Approach, 2nd ed., New York: McGraw-Hill, 1998, pp. 499–507.
14. Shaw, Eric, Post, Jerrold, and Ruby, Keven, Managing the threat from within, Inf. Security, July, 2000, p. 70.
15. British Standard 7799/ISO Standard 17799: Information Security Management, London: British Standards Institute, 1999, Sections 6.1.1–2.
16. Schlossberg, Barry J. and Sarris, Scott, Beyond the firewall: the enemy within, Inf. Syst. Security Assoc. Password, January, 2002.
17. Saita, Anne, The enemy within, Inf. Security, June, 2001, p. 20.
18. Walgenbach, Paul H., Dittrich, Norman E., and Hanson, Ernest I., Principles of Accounting, 3rd ed., New York: Harcourt Brace Jovanovich, 1984, p. 244.
19. Walgenbach, Paul H., Dittrich, Norman E., and Hanson, Ernest I., Principles of Accounting, 3rd ed., New York: Harcourt Brace Jovanovich, 1984, p. 260.
20. Horngren, Charles T., Cost Accounting: A Managerial Emphasis, 5th ed., Englewood Cliffs, NJ: Prentice-Hall, 1982, p. 909.
21. Vallabhaneni, S. Rao, CISSP Examination Textbooks Vol. 1: Theory, Schaumburg, IL: SRV Professional Publications, 2000, pp. 142, 311–312.
22. Kabay, M.E., Personnel and security: separation of duties, Network World Fusion, available online at http://www.nwfusion.com/newsletters/sec/2000/0612sec2.html.
23. British Standard 7799/ISO Standard 17799: Information Security Management, London: British Standards Institute, 1999, Section 8.1.5.
24. Russell, Deborah and Gangemi, G.T. Sr., Computer Security Basics, Sebastopol, CA: O'Reilly, 1991, pp. 100–101.
25. Horngren, Charles T., Cost Accounting: A Managerial Emphasis, 5th ed., Englewood Cliffs, NJ: Prentice-Hall, 1982, p. 914.
26. Kabay, M.E., Personnel and security: separation of duties, Network World Fusion, available online at http://www.nwfusion.com/newsletters/sec/2000/0612sec2.html.
27. Garfinkel, Simson and Spafford, Gene, Practical UNIX and Internet Security, Sebastopol, CA: O'Reilly, 1996, p. 393.
28. Wood, Charles Cresson, Top 10 information security policies to help protect your organization against cyber-terrorism, p. 3, available online at http://www.pentasafe.com/.

ABOUT THE AUTHORS

Jeffrey H. Fenton, CBCP, CISSP, is the corporate IT crisis assurance/mitigation manager and technical lead for IT Risk Management and a senior staff computer system security analyst in the Corporate Information Security Office at Lockheed Martin Corporation. He joined Lockheed Missiles & Space Company in Sunnyvale, California, as a system engineer in 1982 and transferred into its telecommunications group in 1985. Fenton completed a succession of increasingly complex assignments, including project manager for the construction and activation of an earthquake-resistant network center on the Sunnyvale campus in 1992, and group leader for network design and operations from 1993 through 1996. Fenton holds a B.A. in economics from the University of California, San Diego; an M.A. in economics and an M.S. in operations research from Stanford University; and an M.B.A. in telecommunications from Golden Gate University. Fenton is also a Certified Business Continuity Planner (CBCP) and a Certified Information Systems Security Professional (CISSP).

James M. Wolfe, MSM, is the senior virus researcher and primary technical contact for the Lockheed Enterprise Virus Management Group at Lockheed Martin Corporation. He is a member of the European Institute of Computer Antivirus Researchers (EICAR), the EICAR Antivirus Enhancement Program, the Antivirus Information Exchange Network, and Infragard, and is a reporter for the WildList Organization. He has a B.S. in management information systems and an M.S. in change management from the University of Central Florida. He lives in Orlando, Florida, with his wife.


Chapter 28

Ownership and Custody of Data

William Hugh Murray, CISSP

This chapter introduces and defines the concepts of data owner and custodian; their origins and their emergence; and the rights, duties, privileges, and responsibilities of each. It describes how to identify the data and the owner and to map one to the other. It discusses the language and the tools that the owner uses to communicate his intention to the custodian and the user. Finally, it makes recommendations about how to employ these concepts within your organization.

INTRODUCTION AND BACKGROUND

For a number of years now we have been using the roles of data owner and custodian to assist us in managing the security of our data. These concepts were implicit in the way the enterprise acted, but we have only recently made them sufficiently explicit that we can talk about them. We use the words routinely as though there were general agreement on what we mean by them. However, there is relatively little discussion of them in the literature.

In the early days of mainframe access control, we simply assumed that we knew who was supposed to access the data. In military mandatory access control systems, the assumption was that data was classified and users were cleared. If the clearance of the user dominated the classification of the data, then access was allowed. There was the troublesome concept of need-to-know; but for the life of me, I cannot remember how we intended to deal with it. I assume that we intended to deal with it in agreement with the paper analogy. There would have been an access control matrix, but it was viewed as stable. It could be created and maintained by some omniscient privileged user, but no one seemed to give much thought to the source of his knowledge. (I recall being told about an A-level system where access could not be changed while the system was operational. This was not considered to be a problem because the system routinely failed about once a week. Rights were changed while it was offline.)
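The dominance rule described above is easy to state precisely. The sketch below is a minimal illustration, assuming a simple hierarchy of levels (Unclassified < Confidential < Secret < Top Secret) and ignoring need-to-know compartments; the level names and the function are illustrative assumptions, not drawn from the chapter.

```python
# Hypothetical hierarchical levels, lowest to highest.
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def access_allowed(user_clearance: str, data_classification: str) -> bool:
    """Mandatory access control read rule: clearance must dominate classification."""
    return RANK[user_clearance] >= RANK[data_classification]

if __name__ == "__main__":
    print(access_allowed("Secret", "Confidential"))   # True: clearance dominates
    print(access_allowed("Confidential", "Secret"))   # False: classification dominates
```

Real mandatory access control also intersects compartments (need-to-know), which is exactly the part the author notes was handled only by analogy with paper.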


In time-sharing systems, access was similarly obvious. Most data was accessed and used only by its author and creator. Such sharing of his data as occurred was authorized in a manner similar to that in modern UNIX. That is, the creator granted privileges to the file system object to members of his own affinity group or to the world. While this is not sufficiently granular for today's large group sizes and populations, it was adequate at the time.

ACF2, the first access control for MVS, was developed in a university setting by systems programmers and for systems programmers. It was rules-based. The default rule was that a user could access data that he created. To facilitate this, the creator's name was forced as the high-level qualifier of the object name. Sharing was based upon the rules database. As with the access control matrix, creation and maintenance of this database required both privilege and omniscience. In practice, the privilege was assigned to a systems programmer. It was simply assumed that all systems programmers were omniscient and trustworthy; they were trusted by necessity. Over time, the creation and maintenance of the ACF2 rules migrated to the security staff.

While I am sure that we had begun to talk about ownership by that time, none of these systems included any concept of or abstraction for an object owner. In reviewing my papers, the first explicit discussion of ownership that I find is in 1981; but by that time it was a fairly mature concept. It must have been a fairly intuitive concept to emerge whole without much previous discussion in the literature. What is clear is that we must have someone with the authority to control access to data and to make the difficult decisions about how it is to be used and protected. We call this person the owner. It is less obvious, but no less true, that the person who makes that decision needs to understand the sensitivity of the data. The more granular and specific that knowledge, the better the decision will be.

My recollection is that the first important system to externalize the abstraction of owner was RACF. (One of the nice things about having lived to this age is that the memories of your contemporaries are not good enough for them to challenge you.) RACF access control is list-based. The list is organized by resource. That is, there is a row for each object. The row contains the names of any users or defined and named groups of users with access to that resource and the type of access (e.g., create, read, write, delete) that they have. Each object has an owner, and the name of that owner is explicit in the row. The owner might be a user or a group, that is, a business function or other affinity group. The owner has the implicit right to grant access or to add users or groups to the entry. For the first time we had a system that externalized the privilege to create and maintain the access control rules in a formal, granular, and independent manner.
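A list-based entry of the kind described, one row per resource with the owner named in the row, can be sketched in a few lines of code. This is a simplified, hypothetical model of the idea (not RACF syntax or its actual data structures): the owner is recorded with the resource, and only the owner may grant further access.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceEntry:
    """One row per protected object: its owner plus who may do what."""
    name: str
    owner: str                                  # explicit owner, named in the row
    access: dict = field(default_factory=dict)  # user/group -> set of access types

    def grant(self, grantor: str, grantee: str, right: str) -> None:
        # Only the owner may add users or groups to the entry.
        if grantor != self.owner:
            raise PermissionError(f"{grantor} is not the owner of {self.name}")
        self.access.setdefault(grantee, set()).add(right)

    def allows(self, subject: str, right: str) -> bool:
        return right in self.access.get(subject, set())

if __name__ == "__main__":
    payroll = ResourceEntry(name="PAYROLL.MASTER", owner="payroll-mgr")
    payroll.grant("payroll-mgr", "payroll-clerks", "read")
    print(payroll.allows("payroll-clerks", "read"))    # True
    print(payroll.allows("payroll-clerks", "write"))   # False
```

The point of the abstraction is visible even in this toy: the privilege to maintain the rules is no longer held by an omniscient administrator but is tied, object by object, to a named owner.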


DEFINITIONS

Owner, n. One who owns; a rightful proprietor; one who has the legal or rightful title, whether he is the possessor or not. — Webster's Dictionary, 1913

Owner, n. Principal or agent who exercises the exclusive right to use.

Owner, n. The individual manager or representative of management who is responsible for making and communicating judgments and decisions on behalf of the organization with regard to the use, identification, classification, and protection of a specific information asset. — Handbook of Information Security Management, Zella G. Ruthberg and Harold F. Tipton, Editors, 1993

Ownership, n. The state of being an owner; the right to own; exclusive right of possession; legal or just claim or title; proprietorship.

Ownership, n. The exclusive right to use.

Custodian, n. One that guards and protects or maintains; especially: one entrusted with guarding and keeping property or records or with custody or guardianship of prisoners or inmates. — Merriam-Webster's Collegiate Dictionary

Custodian. A designated person who has authorized possession of information and is entrusted to provide proper protection, maintenance, and usage control of the information in an operational environment. — Handbook of Information Security Management, Zella G. Ruthberg and Harold F. Tipton, Editors, 1993

POLICY

It is a matter of policy that management makes statements about the level of risk that it is prepared to take and whom it intends to hold accountable for protection. Owners and custodians are useful abstractions for assigning and distinguishing this responsibility for protection. Policy should require that owners be explicitly identified; that is, that the responsibility for protection be explicitly identified. While ownership is implicit, in the absence of requiring that it be made explicit, the responsibility for the protection of information is often overlooked. Similarly, policy should make it explicit that custodians of data must protect it in accordance with the directions of the owner.

ROLES AND RESPONSIBILITIES

Owner

At one level, the owner of institutional data is the institution itself. However, it is a fundamental characteristic of organizations that they assign
AU1518Ch28Frame Page 464 Thursday, November 14, 2002 6:11 PM

SECURITY MANAGEMENT PRACTICES their privileges and capabilities to individual members of the organization. When we speak of owner, we refer to that member of the organization to whom the organization has assigned the responsibility for a particular asset. (To avoid any possible confusion about the real versus the virtual owner of the data, many organizations eschew the use of owner in favor of some other word such as agent, steward, or surrogate. For our purposes, the owner is the assigned agent.) This individual exercises all of the organization’s rights and interests in the data. These include: • • • • •
AU1518Ch28Frame Page 465 Thursday, November 14, 2002 6:11 PM

Ownership and Custody of Data processed or stored on their systems. This may include paper input documents and printed reports. Because it is these custodians who choose, acquire, and operate the computers and storage, they must provide the necessary access controls. The controls chosen must, at a minimum, meet the requirements specified by the owners. Better yet, they should meet the real requirements of the application, regardless of whether the owner of the data is able to recognize and articulate those requirements. Requirements to which the controls must answer include reliability, granularity, ease of use, responsiveness, and others. Administrator The owner may wish to delegate the actual operation of the access controls to a surrogate. This will be particularly true when the amount of special knowledge required to operate the controls exceeds the amount required to make the decisions about the use of the data. Such an administrator is responsible for faithfully carrying out the intent of the owner. He should act in such a way that he can demonstrate that all of his actions were authorized by the responsible owner and that he acted on all such authorizations. This includes keeping records of what he did and the authorizations on which he acted. User Manager The duties of user management include: • • • •

Enrolling users and vouching for their identities Instructing them in the use and protection of assets Supervising their use of assets Noting variances and taking corrective action

While the list of responsibilities is short, the role of user management may be the most important in the enterprise. This is because user management is closer to the use of the resources than any other managers. User Users are responsible for: • Using the enterprise information and information processing resources only for authorized and intended purposes • Effective use and operation of controls (e.g., choice of passwords) • Performance of applicable owner and custodian duties • Compliance with directions of owners and management • Reporting all variances to owners, managers, and staff 465

AU1518Ch28Frame Page 466 Thursday, November 14, 2002 6:11 PM
AU1518Ch28Frame Page 467 Thursday, November 14, 2002 6:11 PM
AU1518Ch28Frame Page 468 Thursday, November 14, 2002 6:11 PM

SECURITY MANAGEMENT PRACTICES data to feel comfortable making all the decisions. Everyone wanted discretion over the data but no one wanted responsibility. It was obvious that mistakes were going to be made. Often, by the time anyone recognized there was a problem, it was already a serious problem and resolving it was difficult. By this time, there was often so much data that discovering its owner was difficult. There were few volunteers. It was not unusual for the custodians to threaten to destroy the data if the owner did not step forward and take responsibility. Line Manager One useful way to assign ownership is to say that line managers are responsible for all of the resources allocated to them to accomplish their missions. This rule includes the responsibility to identify all of those assets. This ensures that the manager cannot escape responsibility for an asset by saying that he did not know. Business Function Manager Although this is where the problem got out of hand, it is the easiest to solve. It is not difficult to get the managers of payroll or payables to accept the fact that they own their data. It is usually sufficient to simply raise the question. When we finally got around to doing it, it was not much more difficult than going down the list of information assets. Author Another useful way to assign ownership is to say that the author or creator of a data object is its owner until and unless it is reassigned. This rule is particularly useful in modern systems where much of the data in the computer is created without explicit management direction and where many employees have discretion to create it. Like the first rule, it works by default. This is the rule that covers most of the data created and stored on the desktop. Surrogate Owners Even with functional data, problems still arise with shared data, as for example in modern normalized databases. One may go to great pains to eliminate redundant data and the inevitable inconsistencies, not to say inaccuracies, that go with it. The organization of the database is intended to reflect the relationships of the entities described rather than the organization of the owners or even the users. This may make mapping the data to its owners difficult. 468

AU1518Ch28Frame Page 469 Thursday, November 14, 2002 6:11 PM
AU1518Ch28Frame Page 470 Thursday, November 14, 2002 6:11 PM
AU1518Ch28Frame Page 471 Thursday, November 14, 2002 6:11 PM
AU1518Ch28Frame Page 472 Thursday, November 14, 2002 6:11 PM

SECURITY MANAGEMENT PRACTICES accountability for deciding how an object is to be protected and for protecting it in accordance with that decision. They are essential for avoiding errors of omission. They are essential for efficiency; that is, for ensuring that all data is appropriately protected while reserving expensive measures only for the data that requires them. While management must be cautious in assigning the discretion to use and the responsibility to protect so as not to give away its own rights in the data, it must be certain that control is assigned with sufficient granularity that decisions can be made and control exercised. While identifying the proper owner and ensuring that responsibility for all data is properly assigned are difficult, both are essential to accountability. Owners should measure custodians on their compliance, and management should measure owners on effectiveness and efficiency. References 1. Webster’s Dictionary, 1913. 2. Handbook of Information Security Management; Zella G. Ruthberg and Harold F. Tipton (Eds.), Auerbach (Boston): 1993. 3. Merriam Webster’s Collegiate Dictionary. 4. Handbook of Information Security Management; Zella G. Ruthberg and Harold F. Tipton (Eds.), Auerbach (Boston): 1993.

ABOUT THE AUTHOR William Hugh Murray, CISSP, is an executive consultant for IS security at Deloitte & Touche in New Caanan, Connecticut.

472

AU1518Ch29Frame Page 473 Thursday, November 14, 2002 6:11 PM

Domain 4

Application Program Security

AU1518Ch29Frame Page 474 Thursday, November 14, 2002 6:11 PM

APPLICATION PROGRAM SECURITY The placement of security and controls can be implemented in multiple locations: at the operating system or network level, within the database, or placed close to the data, within the application. Wherever implemented, however, it is essential that controls be integrated into the system’s development life cycle. Moreover, whether the application is built by a commercial, third-party vendor or developed in-house, the implementation of appropriate controls will mean the difference between a secure application and one that does not meet business requirements because many business requirements have serious security implications. The sections in this domain demonstrate the types of controls that should be considered prior to moving code into production, from the fundamental controls of good password management to the achievement of quality assurance via code reviews. Section 4.3 addresses threat detection, prevention, and protection against malicious code, Trojan horses, viruses, and network worms, including some of the newer considerations for personal firewalls and portable digital assistants (PDAs). Security certification and accreditation are not new concepts, most recently dating back to the U.S. government’s requirement mandating that government contractors attest to their systems successfully meeting compliance with security criteria. Today, many commercial organizations are also seeking that same level of assurance. In this domain, we have chapters that address information systems security certification and accreditation and the various sources that can be used to baseline security — two of which are formidable international standards: ISO 17799 and ISO 15408.

474

AU1518Ch29Frame Page 475 Thursday, November 14, 2002 6:11 PM

Chapter 29

Application Security Walter S. Kobus, Jr., CISSP
AU1518Ch29Frame Page 476 Thursday, November 14, 2002 6:11 PM

APPLICATION PROGRAM SECURITY application during the development life cycle, and (2) the security requirements that will follow the application into the targeted platform in the production environment. SECURITY CONTROLS IN THE DEVELOPMENT LIFE CYCLE Security controls in the development life cycle are often confused with the security controls in the production environment. One must remember that they are two separate issues, each with its own security requirements and controls. The following discussion represents some of the more important security application requirements on controls in the development life cycle. Separation of Duties There must be a clear separation of duties to prevent important project management controls from being overlooked. For example, in the production environment, developers must not modify production code without going through a change management process. In the development environment, code changes must also follow a development change management process. This becomes especially important when code is written that is highly sensitive, such as a cryptographic module or a calculation routine in a financial application. Therefore, developers must not perform quality assurance (QA) on their own code and must have peer or independent code reviews. Responsibilities and privileges should be allocated in such a way that prevents an individual or a small group of collaborating individuals from inappropriately controlling multiple key aspects of any process or causing unacceptable harm or loss. Segregation is used to preserve the integrity, availability, and confidentiality of information assets by minimizing opportunities for security incidents, outages, and personnel problems. The risk is when individuals are assigned duties in which they are expected to verify their own work or approve work that accomplishes their goals, hence the potential to bias the outcome. Separation of duties should be a concern throughout all phases of the development life cycle to ensure no conflict of duties or interests. This security requirement should start at the beginning of the development life cycle in the planning phase. The standard security requirements should be that no individual is assigned a position or responsibility that might result in a conflict of interest to the development of the application. The are several integrated development tools available that help development teams improve their productivity, version control, maintain a separation of duties within and between development phases, create quality software, and provide overall software configuration management through the system’s life cycle. 476

AU1518Ch29Frame Page 477 Thursday, November 14, 2002 6:11 PM
AU1518Ch29Frame Page 478 Thursday, November 14, 2002 6:11 PM
AU1518Ch29Frame Page 479 Thursday, November 14, 2002 6:11 PM

Application code reviews are further divided into peer code reviews and independent code reviews, as discussed below.
• Peer code reviews shall be conducted on all applications developed, whether the application is nonsensitive, sensitive, or defined as a major application. Peer reviews are defined as reviews by a second party and are sometimes referred to as walk-throughs. Peer code review shall be incorporated as part of the development life cycle and shall be conducted at appropriate intervals during that life cycle.
• The primary purpose of an independent code review is to identify and correct potential software code problems that might affect the integrity, confidentiality, or availability of the application once it has been placed into production. The review is intended to provide the company with a level of assurance that the application has been designed and constructed in such a way that it will operate as a secure computing environment and maintain employee and public trust. The independent third-party code review process is initiated upon the completion of the application source code and program documentation. This ensures that adequate documentation and source code are available for the independent code review. Independent code reviews shall be done under the following guidelines:
— Independent third-party code reviews should be conducted for all Web applications, whether classified sensitive or nonsensitive, that are designed for external access (such as E-commerce customers, business partners, etc.). This independent third-party code review should be conducted in addition to the peer code review.
— Security requirements for cryptographic modules are contained in FIPS 140-2, which can be downloaded at http://csrc.nist.gov/cryptval/140-2.htm. When programming a cryptographic module, you will be required to seek independent validation against FIPS 140-2. A list of approved vendors is available at http://csrc.nist.gov/cryptval/140-1/1401val2001.htm.

APPLICATION SECURITY IN PRODUCTION
When an application completes the development life cycle and is ready to move to the targeted production platform, a whole new set of security requirements must be considered. Many of these requirements oblige the development manager to coordinate with other IT functions to ensure that the application will be placed into a secure production environment. Exhibit 29-1 shows an example of an e-mail message addressed to the group maintaining the processing hardware to confirm that the application’s information, integrity, and availability are assured.


Exhibit 29-1. Confirmation that the Application’s Information, Integrity, and Availability Are Assured

As the development project manager of the XYZ application, I will need the following number of (NT or UNIX) servers. These servers need to be configured to store and process confidential information and to ensure the integrity and availability of the XYZ application. To satisfy the security of the application, I need assurance that these servers will have, at a minimum, the following security configured:
Password standards
Access standards
Backup and disaster plan
Approved banner log-on server
Surge and power protection for all servers
Latest patches installed
Appropriate shutdown and restart procedures are in place
Appropriate level of auditing is turned on
Appropriate virus protection
Appropriate vendor licenses/copyrights
Physical security of servers
Implementation of system timeout
Object reuse controls
Please indicate whether each security control is in compliance by indicating a “Yes” or “No.” If any of the security controls above is not in compliance, please comment as to when the risk will be mitigated. Your prompt reply would be appreciated not later than [date].
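Checklists such as the one in Exhibit 29-1 are easy to track programmatically once the replies come back. The sketch below is purely illustrative: the control names are drawn from the exhibit, but the response format and the remediation wording are assumptions.

# Illustrative sketch only: tracking Exhibit 29-1 checklist responses.
# The responses shown are hypothetical examples.
SERVER_CONTROLS = [
    "Password standards", "Access standards", "Backup and disaster plan",
    "Approved banner log-on server", "Latest patches installed",
    "Appropriate level of auditing is turned on", "Physical security of servers",
]

def noncompliant_controls(responses: dict[str, bool]) -> list[str]:
    """List controls reported as not in compliance so remediation dates can be requested."""
    return [control for control in SERVER_CONTROLS if not responses.get(control, False)]

responses = {name: True for name in SERVER_CONTROLS}
responses["Latest patches installed"] = False  # example "No" answer from the server group
for control in noncompliant_controls(responses):
    print(f"Not in compliance: {control} - request a remediation date")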

A similar e-mail message could also be sent to the network function requesting the items in Exhibit 29-2.

COMMERCIAL OFF-THE-SHELF SOFTWARE APPLICATION SECURITY
It would be great if all vendors practiced application security and provided their clients with a report of the security requirements and controls that were used and validated. Unfortunately, that is far from the case, except when dealing with cryptographic modules. Every time an organization buys an off-the-shelf software application, it takes on risk: the risk that the code contains major flaws that could cause a loss of revenue, customer and employee privacy information, and more. This is why it is so important to protect applications using a defense-in-depth methodology. With a tiny hole in Web application code, a hacker can reach right through from the browser to an E-commerce Web site. This is referred to as Web perversion, and hackers with a little determination can steal digital property, sensitive client information, trade secrets, and goods and services. There are two COTS packages available on the market today to protect E-commerce sites from such attacks. One software program stops application-level attacks by identifying legitimate requests, and another automates the manual tasks of auditing Web applications.


Exhibit 29-2. Request for Security

As the development project manager of the XYZ application, I will need assurance that the production network environment is configured to process confidential information and to ensure the integrity and availability of the XYZ application, in order to satisfy the security of the application. The network should have the following minimum security:
Inbound/outbound ports
Access control language
Password standards
Latest patches
Firewall configuration
Inbound/outbound services
Architecture provides security protection and avoids single point of failure
Please indicate whether each security control is in compliance by indicating a “Yes” or “No.” If any of the security controls above is not in compliance, please comment as to when the risk will be mitigated. Your prompt reply would be appreciated not later than [date].

OUTSOURCED DEVELOPMENT SERVICES
Outsourced development services should be treated no differently than in-house development. Both should adhere to a strict set of application security requirements. In the case of an outsourced development effort, it will be up to the technical contract representatives to ensure that all security requirements are addressed and covered during an independent code review. This should be spelled out in the requirements section of the Request for Proposal. Failure to pass an independent code review then requires a second review, which should be paid for by the contractor as a penalty.

SUMMARY
The three basic areas of application security (development, production, and commercial off-the-shelf software) are present in all organizations. Some organizations will address application security in all three areas, while others address only one or two. Whether an organization develops applications for internal use, for clients as a service company, or for commercial sale, the practice of application security plays a major role in earning trust and repeat business. In today’s world, organizations are faced with new and old laws that demand assurance that the software was developed with appropriate security requirements and controls. Until now, the majority of developers, pressured by senior management or by marketing concerns, have pushed to get products into production without any guidance on or concern for security requirements or controls. Security now plays a major role in the bottom line of E-commerce and critical infrastructure

OUTSOURCED DEVELOPMENT SERVICES Development outsourced services should be treated no differently than in-house development. Both should adhere to a strict set of security application requirements. In the case of the outsourced development effort, it will be up to technical contract representatives to ensure that all security requirements are addressed and covered during an independent code review. This should be spelled out in the requirements section of the Request for Proposal. Failure to pass an independent code review then requires a second review, which should be paid for by the contractor as a penalty. SUMMARY The three basic areas of applications security — development, production, and commercial off-the-shelf software — are present in all organizations. Some organizations will address application security in all three areas, while others only in one or two areas. Whether an organization develops applications for internal use, for clients as a service company, or for commercial sale, the necessity of practice plays a major role in the area of trust and repeated business. In today’s world, organizations are faced with new and old laws that demand assurance that the software was developed with appropriate security requirements and controls. Until now, the majority of developers, pressured by senior management or by marketing concerns, have pushed to get products into production without any guidance of or concern for security requirements or controls. Security now plays a major role on the bottom line of E-commerce and critical infrastructure 482


organizations. In some cases, it can be the leading factor in whether a company can recover from a cyber-security attack. As a major component in the protection of our critical infrastructure from cyber-security attacks, application security can no longer be an afterthought. Yet many companies have treated it as exactly that, pushing it aside in order to get a product to market; security issues were then taken care of through patches and version upgrades. This method rarely worked well, and in the end it led to a lack of customer trust and reflected negatively on the integrity of the development company. The practice of application security as an up-front design consideration can be a marketing advantage to a company. It can be marketed as an added feature so that, when the application is installed on an appropriately secure platform, it will enhance the customer’s enterprise security program rather than help to compromise it.

Reference
1. NIST Special Publication 800-16, Guide for Developing Security Plans for Information Technology Systems, 1999.

ABOUT THE AUTHOR
Walter S. Kobus, Jr., CISSP, is Vice President, Security Consulting Services, with Total Enterprise Security Solutions, LLC. He has over 35 years of experience in information systems, with 15 years of experience in security, and is a subject matter expert in several areas of information security, including application security, security management practice, certification and accreditation, secure infrastructure, and risk and compliance assessments. As a consultant, he has an extensive background in implementing information security programs in large environments. He has been credited with the development of several commercial software programs in accounting, military deployment, budgeting, and marketing, and of several IT methodologies in practice today in security and application development.


Chapter 30

Certification and Accreditation Methodology
Mollie E. Krehnke, CISSP
David C. Krehnke, CISSP

The implementation of a certification and accreditation (C&A) process within industry for information technology systems will support cost-effective, risk-based management of those systems and provide a level of security assurance that can be known (proven). The C&A process addresses both technical and nontechnical security safeguards of a system to establish the extent to which a particular system meets the security requirements for its business function (mission) and operational environment.

DEFINITIONS
Certification involves all appropriate security disciplines that contribute to the security of a system, including administrative, communications, computer, operations, physical, personnel, and technical security. Certification is implemented through the involvement of key players, the conduct of threat and vulnerability analyses, the establishment of appropriate security mechanisms and processes, the performance of security testing and analyses, and the documentation of established security mechanisms and procedures.
Accreditation is the official management authorization to operate a system in a particular mode, with a prescribed set of countermeasures, against a defined threat and stated vulnerabilities, within a given operational concept and environment, with stated interconnections to other systems, at an acceptable level of risk for which the accrediting authority has formally assumed responsibility, and for a specified period of time.


C&A TARGET
The subject of the C&A, the information technology system or application (system), is the hardware, firmware, and software used as part of the system to perform organizational information processing functions. This includes computers, telecommunications, automated information systems, and automatic data processing equipment. It includes any assembly of computer hardware, software, and firmware configured to collect, create, communicate, compute, disseminate, process, store, and control data or information.

REPEATABLE PROCESS
The C&A is a repeatable process that can assure an organization (with a higher degree of confidence) that an appropriate combination of security measures is correctly implemented to address the system’s threats and vulnerabilities. This assurance is sustained through the conduct of periodic reviews and monitoring of the system’s configuration throughout its life cycle, as well as recertification and reaccreditation on a routine, established basis.

REFERENCES FOR CREATING A C&A PROCESS
The performance of certification and accreditation is well established within the federal government sector, its civil agencies, and the Department of Defense. There are numerous processes that have been established, published, and implemented. Any of these documents could serve as an appropriate starting point for a business organization. Several are noted below:
• Guideline for Computer Security Certification and Accreditation (Federal Information Processing Standard Publication 102)1
• Introduction to Certification and Accreditation (NCSC-TG-029, National Computer Security Center)2
• National Information Assurance Certification and Accreditation Process (NIACAP) (NTISSI No. 1000, National Security Agency)3
• Sample Generic Policy and High-Level Procedures Certification and Accreditation (National Institute of Standards and Technology)4
• DoD Information Technology Security Certification and Accreditation Process (DITSCAP) (Department of Defense Instruction Number 5200.40)5
• How to Perform Systems Security Certification and Accreditation (C&A) within the Defense Logistics Agency (DLA) Using Metrics and Controls for Defense-in-Depth6
• Certification and Accreditation Process Handbook for Certifiers (Defense Information Systems Agency [DISA])7


Certification and Accreditation Methodology The FIPS guideline, although almost 20 years old, presents standards and processes that are applicable to government and industry. The NIACAP standards expand upon those presented in the NCSC documentation. The NIST standards are generic in nature and are applicable to any organization. The DLA documentation is an example of a best practice that was submitted to NIST and made available to the general public for consideration and use. TAKE UP THE TOOLS AND TAKE A STEP This chapter presents an overview of the C&A process, including key personnel, components, and activities within the process that contribute to its success in implementation. The conduct of the C&A process within an industrial organization can also identify areas of security practices and policies that are presently not addressed — but need to be addressed to ensure information resources are adequately protected. The C&A task may appear to be daunting, but even the longest journey begins with a single step. Take that step and begin. C&A COMPONENTS The timely, accurate, and effective implementation of a C&A initiative for a system is a choreography of people, activities, documentation, and schedules. To assist in the understanding of what is involved in a C&A, the usual resources and activities are grouped into the following tables and then described: • Identification of key personnel to support the C&A effort • Analysis and documentation of minimum security controls and acceptance • Other processes that support C&A effectiveness • Assessment and recertification timelines • Associated implementation factors The tables reflect the elements under discussion and indicate whether the element was cited by a reference used to create the composite C&A presented in this chapter. The content is very similar across references, with minor changes in terms used to represent a C&A role or phase of implementation. IDENTIFICATION OF KEY PERSONNEL TO SUPPORT C&A EFFORT The C&A process cannot be implemented without two key resources: people and funding. The costs associated with a C&A will be dependent upon the type of C&A conducted and the associated activities. For example, the NIACAP identifies four general certification levels (discussed later in the chapter). In contrast, the types of personnel, and their associated 487


APPLICATION PROGRAM SECURITY Exhibit 30-1. Key personnel. Title Authorizing Official/Designated Approving Authority Certifier Information Systems Security Officer Program Manager/DAA Representative System Supervisor/Manager User/User Representative

References compared: FIPS, NCSC, NIACAP, NIST, DITSCAP.

functions, required to implement the C&A remain constant. However, the number of persons involved and the time on task will vary with the number and complexity of C&As to be conducted and the level of testing to be performed. These personnel are listed in Exhibit 30-1. It is vital to the completeness and effectiveness of the C&A that these individuals work together as a team, and they all understand their roles and associated responsibilities. Authorizing Official/Designated Approving Authority The authorizing official/designated approving authority (DAA) has the authority to formally assume responsibility for operating a system at an acceptable level of risk. In a business organization, a vice president or chief information officer would assume this role. This individual would not be involved in the day-to-day operations of the information systems and would be supported in the C&A initiatives by designated representatives. Certifier This individual is responsible for making a technical judgment of the system’s compliance with stated requirements, identifying and assessing the risks associated with operating the system, coordinating the certification activities, and consolidating the final certification and accreditation packages. The certifier is the technical expert that documents trade-offs between security requirements, cost, availability, and schedule to manage the security risk. Information Systems Security Officer The information systems security officer (ISSO) is responsible to the DAA for ensuring the security of an IT system throughout its life cycle, from design through disposal, and may also function as a certifier. The ISSO provides guidance on potential threats and vulnerabilities to the IT system, provides guidance regarding security requirements and controls necessary to protect the system based on its sensitivity and criticality to the 488


Certification and Accreditation Methodology organization, and provides advice on the appropriate choice of countermeasures and controls. Program Manager/DAA Representative The program manager is ultimately responsible for the overall procurement, development, integration, modification, operation, maintenance, and security of the system. This individual would ensure that adequate resources (e.g., funding and personnel) are available to conduct the C&A in a timely and accurate manner. System Supervisor or Manager The supervisor or manager of a system is responsible for ensuring the security controls agreed upon during the C&A process are consistently and correctly implemented for the system throughout its life cycle. If changes are required, this individual has the responsibility for alerting the ISSO as the DAA representative about the changes; and then a determination can be made about the need for a new C&A, because the changes could impact the security of the system. User and User Representative The user is a person or process that accesses the system. The user plays a key role in the security of the system by protecting the assigned passwords, following established rules to protect the system in its operating environment, being alert to anomalies that could indicate a security problem, and not sharing information with others who do not have a need to know that information. A user representative supports the C&A process by ensuring that system availability, access, integrity, functionality, performance, and confidentiality as they relate to the users, their business functions, and the operational environment are appropriately addressed in the C&A process. ANALYSIS AND DOCUMENTATION OF SECURITY CONTROLS AND ACCEPTANCE A system certification is a comprehensive analysis of technical and nontechnical security features of a system. Security features are also referred to as controls, safeguards, protection mechanisms, and countermeasures. Operational factors that must be addressed in the certification are system environment, proposed security mode of operation, specific users, applications, data sensitivity, system configuration, site/facility location, and interconnections with other systems. Documentation that reflects analyses of those factors and associated planning to address specified security requirements is given in Exhibit 30-2. This exhibit represents a composite of the documentation that is suggested by the various C&A references. 489


APPLICATION PROGRAM SECURITY Exhibit 30-2. Analysis and documentation of security controls and acceptance. Documentation Threats, Vulnerabilities, and Safeguards Analysis Contingency/Continuity of Operations Plan Contingency/Continuity of Operations Plan Test Results Letter of Acceptance/Authorization Agreement Letter of Deferral/List of System Deficiencies Project Management Plan for C&A Risk Management Security Plan/Security Concept of Operations Security Specifications Security/Technical Evaluation and Test Results System Security Architecture User Security Rules Verification and Validation of Security Controls

References compared: FIPS, NCSC, NIACAP, NIST, DITSCAP.

Threats, Vulnerabilities, and Safeguards Analysis
A determination must be made that proposed security safeguards will effectively address the system’s threats and vulnerabilities in the operating environment at an acceptable level of risk. This activity could be a technical assessment that is performed by a certifier or contained in the risk management process (also noted in Exhibit 30-2). The level of analysis will vary with the level of certification that is performed (a simple scoring sketch is given after the following two subsections).

Contingency/Continuity of Operations Plan
The resources allocated to continuity of operations will depend upon the system’s business functions, criticality, and interdependency with other systems. The plan for the system should be incorporated into the plan for the facility in which the system resides and should address procedures that will be implemented at varying levels of business function disruption and recovery.

Contingency/Continuity of Operations Plan Test Results
Testing of the continuity of operations plan should be conducted on an established schedule that is based upon the system factors cited above and any associated regulatory or organizational requirements. There are various levels of testing that can be performed, depending on the system criticality and available resources, including checklists, table-top testing, drills, walk-throughs, selected functions testing, and full testing.
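The sketch below illustrates one way to record the threats, vulnerabilities, and safeguards analysis described above as a residual-risk score. The scales, example threats, and acceptance threshold are assumptions for illustration; they are not prescribed by any of the C&A references.

# Minimal, illustrative risk-scoring sketch for a threats/vulnerabilities/safeguards analysis.
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    name: str
    likelihood: int                 # 1 (rare) to 5 (frequent)
    impact: int                     # 1 (negligible) to 5 (severe)
    safeguard_effectiveness: float  # 0.0 (no mitigation) to 1.0 (fully mitigating)

    def residual_risk(self) -> float:
        """Inherent risk (likelihood x impact) reduced by the judged safeguard effectiveness."""
        return self.likelihood * self.impact * (1.0 - self.safeguard_effectiveness)

ACCEPTABLE_RESIDUAL_RISK = 6.0  # example threshold the accrediting authority is willing to accept

scenarios = [
    ThreatScenario("Information browsing by insiders", likelihood=3, impact=4, safeguard_effectiveness=0.7),
    ThreatScenario("Component failure of the firewall", likelihood=2, impact=5, safeguard_effectiveness=0.5),
]

for s in scenarios:
    decision = "ACCEPT" if s.residual_risk() <= ACCEPTABLE_RESIDUAL_RISK else "REMEDIATE"
    print(f"{s.name}: residual risk {s.residual_risk():.1f} -> {decision}")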


Certification and Accreditation Methodology Letter of Acceptance/Authorization Agreement The decision to accredit a system is based upon many factors that are encompassed in the certification results and recommendations: threats and vulnerabilities, system criticality, availability and costs of alternative countermeasures, residual risks, and nonsecurity factors such as program and schedule risks. The DAA has several options available: • Full accreditation for the originally intended operational environment and acceptance of the associated recertification/reaccreditation timeline • Accreditation for operation outside of the originally intended environment (e.g., change in mission, crisis situation, more restrictive operations) • Interim (temporary) accreditation approval with a listing of activities to be performed in order to obtain full accreditation • Accreditation disapproval (see letter of deferral below) Letter of Deferral/List of System Deficiencies This letter indicates the accreditation is disapproved, and it includes recommendations and timelines for correcting specified deficiencies. Project Management Plan for C&A Many individuals (and organizations) provide support in the accurate and timely completion of a system C&A. A project management plan reflects the activities, timelines, and resources that have been allocated to the C&A effort; and it must be managed as any other tasking is managed. Risk Management The identification of system threats, vulnerabilities, and compensating controls that enable the system to function at an acceptable level of risk is key to the C&A process. Risk analysis should be conducted throughout the system life cycle to ensure the system is adequately protected, and it should be conducted as early as possible in the development process. The DAA must accept responsibility for system operation at the stated level of risk. A change in the threats, vulnerabilities, or acceptable level of risk may trigger a system recertification prior to the planned date as defined in the DAA acceptance letter. Security Plan/Concept of Operations The security plan/concept of operations (CONOPS) documents the security measures that have been established and are in place to address a system security requirement. Some organizations combine the security 491


plan and CONOPS into one document, and other organizations include the technical controls in the security plan and the day-to-day administrative controls in the CONOPS. The security plan/CONOPS is a living document that must be updated when security controls, procedures, or policies are changed. NIST has provided a generic security plan template for both applications and major systems that is recognized as appropriate for government and industry.

Security Specifications
The level to which a security measure must perform a designated function must be specified during the C&A process. Security functions will include authentication, authorization, monitoring, security management, and security labeling. These specifications will be utilized during the testing of the security controls prior to acceptance and periodically thereafter, particularly during the annual self-assessment process.

Security/Technical Evaluation and Test Results
The evaluation and testing of controls is performed to assess the performance of the security controls in the implementation of the security requirements. The controls must function as intended on a consistent basis over time. Each control must be tested to ensure conformance with the associated requirements. In addition, the testing must validate the functionality of all security controls in an integrated, operational setting. The level of evaluation and testing will depend upon the level of assurance required for a control. The testing should be performed at the time of installation and at repeated intervals throughout the life cycle of the control to ensure it is still functioning as expected. Evaluation and testing should include such areas as identification and authentication, audit capabilities, access controls, object reuse, trusted recovery, and network connection rule compliance. (A small example of a repeatable control test appears after the next subsection.)

System Security Architecture
A determination must be made that the system architecture planned for operation complies with the architecture description provided for the C&A documentation. The analysis of the system architecture and its interconnections with other systems is conducted to assess how effectively the architecture implements the security policy and the identified security requirements. The hardware, software, and firmware are also evaluated to determine their implementation of security requirements. Critical security features, such as identification, authentication, access controls, and auditing, are reviewed to ensure they are correctly and completely implemented.
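As one illustration of the kind of repeatable security/technical test described above, the sketch below checks a few platform settings against required values. The configuration file format, parameter names, and required values are assumptions for the example, not requirements from the C&A references.

# Illustrative sketch of a repeatable security control test.
import json

REQUIRED_SETTINGS = {
    "min_password_length": 8,       # must be at least this value
    "session_timeout_minutes": 15,  # must be no greater than this value
    "audit_logging_enabled": True,  # must be exactly this value
}

def test_security_settings(config_path: str) -> list[str]:
    """Return a list of control failures; an empty list means the tested controls conform."""
    with open(config_path) as f:
        config = json.load(f)
    failures = []
    if config.get("min_password_length", 0) < REQUIRED_SETTINGS["min_password_length"]:
        failures.append("Password standard not met")
    if config.get("session_timeout_minutes", 999) > REQUIRED_SETTINGS["session_timeout_minutes"]:
        failures.append("System timeout exceeds the approved value")
    if config.get("audit_logging_enabled") is not True:
        failures.append("Auditing is not turned on")
    return failures

Run at installation and on a recurring schedule, a test of this kind provides evidence that the control still functions as intended between certifications.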


Certification and Accreditation Methodology User Security Rules All authorized users will have certain security responsibilities associated with their job functions and with a system. These responsibilities and the rules associated with system use must be clearly defined and understood by the user. General user rules and responsibilities may be covered during security awareness and training. Other rules and responsibilities associated with a particular system may be covered during specific system operational and security training. Verification and Validation of Security Controls The identification, evaluation, and tracking of the status of security safeguards is an ongoing process throughout the life cycle of a system. The evaluation of the security posture of a control can also be used to evaluate the security posture of the organization. The following evaluations should be considered: • Requirements evaluation. Are the security requirements acceptable? Certification is only meaningful if security requirements are well-defined. • Function evaluation. Does the design or description of security functions satisfy the security requirements? Basic evaluations should address all applicable control features down through the logical specification level as defined in the functional requirements document, and they should include internal computer controls and external physical and administrative controls. • Control implementation determination. Are the security functions implemented? Functions that are described in a document or discussed in an interview do not prove that they have been implemented. Visual inspection and testing will be necessary. • Methodology review. Does the implementation method provide assurance that security functions are acceptably implemented? This review may be used if extensive testing is not deemed necessary or cannot be implemented. The review contributes to a confidence judgment on the extent to which controls are reliably implemented and on the susceptibility of the system to flaws. If the implementation cannot be relied upon, then a detailed evaluation may be required. • Detailed evaluation. What is the quality of the security safeguards? First decide what safeguards require a detailed analysis, and then ask the following questions. Do the controls function properly? Do controls satisfy performance criteria? How readily can the controls be broken or circumvented? OTHER PROCESSES SUPPORTING C&A EFFECTIVENESS See Exhibit 30-3 for information on other processes supporting C&A effectiveness. 493


APPLICATION PROGRAM SECURITY Exhibit 30-3. Other processes supporting C&A effectiveness. Topic/Activity Applicable laws, regulations, policies, guidelines, and standards — federal and state Applicable policies, guidelines, and standards — organizational Configuration and change management Incident response Incorporation of security into system life cycle Personnel background screening Security awareness training Security management organization Security safeguards and metrics

References compared: FIPS, NCSC, NIACAP, NIST, DITSCAP.

Applicable Laws, Regulations, Policies, Guidelines, and Standards — Federal and State
Federal and state regulations and policies provide a valuable and worthwhile starting point for the formulation and evaluation of security requirements — the cornerstone of the C&A process. Compliance may be mandatory or discretionary, but implementing information security at a generally accepted level of due diligence can facilitate partnerships with government and industry.

Applicable Policies, Guidelines, and Standards — Organizational
Organizational policies reflect the business missions, organizational and environmental configurations, and resources available for information security. Some requirements will be derived from organizational policies and practices.

Configuration and Change Management
Changes in the configuration of a system, its immediate environment, or a wider organizational environment may impact the security posture of that system. Any changes must have approval prior to implementation so that the security stance of the system is not impacted. All changes to the established baseline must be documented. Significant changes may initiate a new C&A (discussed later in this chapter). Accurate system configuration documentation can also reduce the likelihood of implementing unnecessary security mechanisms. Extraneous mechanisms add unnecessary complexity to the system and are possible sources of additional vulnerabilities.
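One way to support the baseline documentation described above is to record a cryptographic digest of each approved configuration file and compare against it later. The sketch below is illustrative only; the file paths are placeholders.

# Minimal sketch of documenting a configuration baseline and detecting undocumented drift.
import hashlib
from pathlib import Path

def baseline_digest(paths: list[str]) -> dict[str, str]:
    """Record a SHA-256 digest for each configuration file in the approved baseline."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def detect_drift(baseline: dict[str, str]) -> list[str]:
    """Return files whose current digest no longer matches the documented baseline."""
    drifted = []
    for path, recorded in baseline.items():
        current = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if current != recorded:
            drifted.append(path)
    return drifted

# Example usage (paths are illustrative):
# approved = baseline_digest(["/etc/app/security.conf", "/etc/app/firewall.rules"])
# ...later, after a change window...
# for path in detect_drift(approved):
#     print(f"Undocumented change detected in {path}")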


Certification and Accreditation Methodology Incident Response Incidents are going to happen. An organization’s response to an incident — that is, identification, containment, isolation, resolution, and prevention of future occurrences — will definitely affect the security posture of the organization. The ability to respond to an incident in a timely and effective manner is necessary to maintaining an organization’s business functions and its perceived value to customers. Incorporation of Security into System Life Cycle The determination of applicable security functionality early in system design and development will reduce the security costs and increase the effectiveness and functionality of the designated security controls. Adding on security functions later in the development or production phase will reduce the security options and add to the development costs. The establishment of system boundaries will ensure that security for the system environment is adequately addressed, including physical, technical, and administrative security areas. Personnel Background Screening Managers are responsible for requesting suitability screening for the staff in their respective organizations. The actual background investigations are conducted by other authorized organizations. The determination of what positions will require screening is generally based upon the type of data to which an individual will have access and the ability to bypass, modify, or disable technical or operating system security controls. These requirements are reviewed by an organization’s human resources and legal departments, and are implemented in accordance with applicable federal and state laws and organizational policy. Security Awareness Training The consistent and appropriate performance of information security measures by general users, privileged users, and management cannot occur without training. Training should encompass awareness training and operational training, including basic principles and state-of-the-art technology. Management should also be briefed on the information technology security principles so that the managers can set appropriate security requirements in organizational security policy in line with the organization’s mission, goals, and objectives. Security Management Organization The security management organization supports the development and implementation of information security policy and procedures for the organization, security and awareness training, operational security and rules of 495


behavior, incident response plans and procedures, virus detection procedures, and configuration management.

Security Safeguards and Metrics
A master list of safeguards or security controls and an assessment of the effectiveness of each control supports the establishment of an appropriate level of assurance for an organization. The master list should contain a list of uniquely identified controls, a title that describes the subject area or focus of the control, a paragraph that describes the security condition or state that the control is intended to achieve, and the rating of compliance based on established metrics for the control. The levels of rating are:
1: No awareness of the control or progress toward compliance
2: Awareness of the control and planning for compliance
3: Implementation of the security control is in progress
4: Security control has been fully implemented, and the security profile achieved by the control is actively maintained
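Ratings on this 1-to-4 scale are straightforward to roll up into a summary for management. The sketch below is illustrative only; the control identifiers and ratings are hypothetical examples, not part of any cited reference.

# Illustrative rollup of safeguard ratings (levels 1-4 as defined above).
from collections import Counter

RATING_LABELS = {
    1: "No awareness or progress",
    2: "Awareness and planning",
    3: "Implementation in progress",
    4: "Fully implemented and maintained",
}

safeguard_ratings = {
    "Access control policy": 4,
    "Configuration baseline documentation": 3,
    "Incident response procedures": 2,
}

counts = Counter(safeguard_ratings.values())
total = len(safeguard_ratings)
for level in sorted(RATING_LABELS):
    print(f"Level {level} ({RATING_LABELS[level]}): {counts.get(level, 0)} of {total}")
print(f"Fully implemented: {counts.get(4, 0) / total:.0%}")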

The metrics can be based upon federal policy, audit findings, commercial best practices, agency system network connection agreements, local security policy, local configuration management practices, information sensitivity and criticality, and DAA specified requirements. Assessment and Recertification Timelines Certification and accreditation should be viewed as continuing and dynamic processes. The security posture of a system must be monitored, tracked, and assessed against the security controls and processes established at the time of the approval and acceptance of the certification documentation (see Exhibit 30-4). Exhibit 30-4. Assessment and recertification timelines. Topic/Activity Annual assessment between C&As Recertification required every three to five years Significant change or event Security safeguards operating as intended

References compared: FIPS, NCSC, NIACAP, NIST, DITSCAP.

Annual Assessment between C&As The annual assessment of a system should include a review of the system configuration, connections, location, authorized users, and information 496


sensitivity and criticality. The assessment should also determine if the level of threat has changed for the system, making the established controls less effective and thereby necessitating a new C&A.

Recertification Required Every Three to Five Years
Recertification is required in the federal government on a three- to five-year basis, or sooner if there has been a significant change to the system or a significant event that alters the security stance (or the effectiveness of the posture) of a system. The frequency with which recertification is conducted in a private organization or business will depend upon the sensitivity and criticality of the system and the impact if the system security controls are not adequate for the organizational environment or its user population.

Significant Change or Event
The C&A process may be reinitiated prior to the date established for recertification. Examples of a significant change or event are:
• Upgrades to existing systems: an upgrade or change in the operating system, a change in the database management system, an upgrade to the central processing unit (CPU), or an upgrade to device drivers.
• Changes to policy or system status: a change to the trusted computing base (TCB) as specified in the security policy, a change to the application’s software as specified in the security policy, a change in criticality or sensitivity level that causes a change in the countermeasures required, a change in the security policy (e.g., access control policy), a change in activity that requires a different security mode of operation, or a change in the threat or system risk.
• Configuration changes to the system or its connectivity: additions or changes to the hardware that require a change in the approved security countermeasures, a change to the configuration of the system that may affect the security posture (e.g., a workstation is connected to the system outside of the approved configuration), connection to a network, or the introduction of new countermeasures technology.
• Security breach or incident: a security breach or significant incident occurs for the system.
• Results of an audit or external analysis: an audit or external analysis determines that the system was unable to adequately respond to a higher level of threat force than originally determined, or a change to the system created new vulnerabilities; a new C&A would then be initiated to ensure that the system operates at the acceptable level of risk.
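Change records can be screened automatically against the examples above so that candidates for early recertification are not missed. The sketch below is a rough illustration; the trigger keywords and the free-text change-record format are assumptions, and any match would still be reviewed by the ISSO.

# Illustrative screen of change records against the significant-change examples above.
SIGNIFICANT_CHANGE_TRIGGERS = (
    "operating system", "database management system", "security policy",
    "sensitivity level", "network connection", "security breach", "audit finding",
)

def may_require_early_recertification(change_description: str) -> bool:
    """Flag a change record that matches one of the significant-change examples."""
    text = change_description.lower()
    return any(trigger in text for trigger in SIGNIFICANT_CHANGE_TRIGGERS)

print(may_require_early_recertification("Upgrade of the operating system on the web tier"))  # True
print(may_require_early_recertification("Cosmetic change to the login banner text"))         # False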


APPLICATION PROGRAM SECURITY Exhibit 30-5. Associated implementation factors. Topic/Activity Documentation available in hard copy and online Grouping of systems for C&A Presentation of C&A process to management Standardization of procedures, templates, worksheets, and reports Standardization of responses to report sections for enterprise use

References compared: FIPS, NCSC, NIACAP, NIST, DITSCAP.

Security Safeguards Operating as Intended An evaluation of the system security controls should be performed to ensure that the controls are functioning as intended. This activity should be performed on a routine basis throughout the year and is a component of the annual self-assessment conducted in support of the C&A process. ASSOCIATED IMPLEMENTATION FACTORS Associated implementation factors are listed in Exhibit 30-5. Documentation Available in Hard Copy and Online If a number of systems are undergoing the C&A process, it is beneficial to have the C&A documentation available in hard copy and online so that individuals responsible for its completion can have ready access to the forms. This process can save time and ensure a higher level of accuracy in the C&A results because all individuals have the appropriate forms. Grouping of Systems for C&A It is acceptable to prepare one C&A for like systems that have the same configuration, controls, location, function, and user groups. The grouping of systems does not reduce the effectiveness of the C&A process, as long as it can be assured that all of the systems are implementing the established controls in the appropriate manner and that the controls are appropriate for each system. Presentation of C&A Process to Management Management at all levels of an organization must understand the need for and importance of the C&A process and the role that each plays in its successful implementation. Management must also understand that the C&A process is an ongoing activity that is going to require resources (at a 498


Certification and Accreditation Methodology predesignated level) over the system life cycle to preserve its security posture and reduce risk to an acceptable level. Standardization of C&A Procedures, Templates, Worksheets, and Reports Standardization within an organization supports accuracy and completeness in the forms that are completed and the processes that are performed. Standardized forms enhance the analysis and preparation of summary C&A reports and enable a reviewer to readily locate needed information. Standardization also facilitates the identification of gaps in the information provided and in the organization’s security posture. Standardization of Responses to Report Sections for Enterprise Use The results of the C&A process will be provided to management. The level of detail provided may depend upon the responsibilities of the audience, but consistency across systems will allow the organization to establish an enterprisewide response to a given threat or vulnerability, if required. C&A PHASES The C&A process is a method for ensuring that an appropriate combination of security measures are implemented to counter relevant threats and vulnerabilities. Activities conducted for the C&A process can be grouped into phases, and a composite of suggested activities (from the various references) is described below. The number of activities or steps varies slightly among references. Phase 1: Precertification Activity 1: Preparation of the C&A Agreement. Analyze pertinent regulations that impact the content and scope of the C&A. Determine usage requirements (e.g., operational requirements and security procedures). Analyze risk-related considerations. Determine the certification type. Identify the C&A team. Prepare the C&A agreement.

Aspects to be considered in this activity include mission criticality, functional requirements, system security boundary, security policies, security concept of operations, system components and their characteristics, external interfaces and connection requirements, security mode of operation or overall risk index, system and data ownership, threat information, and identification of the DAAs. Activity 2: Plan for C&A. Plan the C&A effort, obtain agreement on the approach and level of effort, and identify and obtain the necessary resources (including funding and staff). 499


Aspects to be considered in this activity include reusability of previous evidence, life-cycle phase, and system milestones (time constraints).

Phase 2: Certification
Activity 3: Perform the Information Security Analysis of Detailed System Information. Conduct analyses of the system documentation, testing performed, and architecture diagrams. Conduct threat and vulnerability assessments, including impacts on confidentiality, integrity, availability, and accountability.

Aspects to be considered in this activity include the certification team becoming more familiar with the security requirements and security aspects of individual system components, specialized training on the specific system (depending upon the scope of this activity and the experience of the certification team), determining whether system security controls adequately satisfy security requirements, identification of system vulnerabilities, and determination of residual risks.

Activity 4: Document the Certification Results in a Certification Package. Document all analyses, testing results, and findings. The certification package is the consolidation of all the certification activity results. This documentation will be used as supporting documentation for the accreditation decision and will also support recertification/reaccreditation activities.

Aspects to be considered in this documentation package include system need/mission overview, security policy, security CONOPS or security plan, contingency plan/continuity of operations, system architectural description and configuration, reports of evaluated products, statements from other responsible agencies indicating specified security requirements have been met, risk analysis report and associated countermeasures, test plans, test procedures, test results, analytic results, configuration management plan, and previous C&A information. Phase 3: Accreditation Activity 5: Perform Risk Assessment and Final Testing. Review the analysis, documentation, vulnerabilities, and residual risks. Final testing is conducted at this time to ensure the DAAs are satisfied that the residual risk identified meets an acceptable level of risk.

Aspects to be considered in this activity include assessment of system information via the certification package review, the conduct of a site accreditation survey to verify that the residual risks are at an acceptable level, and verification of the contents of the C&A package. 500


Certification and Accreditation Methodology Activity 6: Report Findings and Recommendations. The recommendations are derived from documentation gathered by the certification team, testing conducted, and business functions/mission considerations, and include a statement of residual risk and supporting documentation.

Aspects to be considered in this activity include an executive summary of the mission overview; an architectural description; the system configuration, including interconnections; memoranda of agreement (MOA); waivers signed by the DAA indicating that specific security requirements do not need to be met or are met by other means (e.g., procedures); a residual risk statement, including the rationale for why residual risks should be accepted or rejected; and a recommendation for the accreditation decision.

Activity 7: Make the Accreditation Decision. The decision will be based upon the recommendation from the certifier or certification authority. Is the operation of the system, under certain conditions, in a specified environment, functioning at an acceptable level of risk?

Accreditation decision options include full accreditation approval, accreditation for operations outside the originally intended environment, interim (temporary) accreditation approval, or accreditation disapproval. Phase 4: Post-Accreditation Activity 8: Maintain the Security Posture and Accreditation of the System.

Periodic compliance inspections of the system and recertification at established time frames will help to ensure that the system continues to operate within the stated parameters as specified in the accreditation letter. A configuration management or change management system must be implemented and procedures established for baselining, controlling, and monitoring changes to the system. Substantive changes may require the system to be recertified and reaccredited prior to the established time frame. However, maximum reuse of previous evaluations or certifications will expedite this activity. Aspects to be considered in this activity include significant changes that may impact the security of the system. TYPES OF CERTIFICATION NIACAP identifies four general certification levels: Level 1 — Basic Security Review, Level 2 — Minimum Analysis, Level 3 — Detailed Analysis, and Level 4 — Comprehensive Analysis. FIPS PUB 102 presents three levels of evaluation: basic, detailed, and detailed focusing. DISA identified the following types of C&A: 501


APPLICATION PROGRAM SECURITY Type 1: Checklist This type of certification completes a checklist with yes or no responses to the following content areas: administrative, personnel authorization, risk management, personnel security, network security, configuration management, training, media handling, and physical security. This type of certification also includes verification that procedures for proper operation are established, documented, approved, and followed. Type 2: Abbreviated Certification This type of certification is more extensive than Type 1 certification but also includes the completion of the Type 1 checklist. The amount of documentation required and resources devoted to the Type 2 C&A is minimal. The focus on this type of certification is information security functionality (e.g., identification and authentication, access control, auditing). FIPS Pub. 102’s first level of evaluation, the basic evaluation, is similar to the Type 2 category; it is concerned with the overall functional security posture, not with the specific quality of individual controls. The basic evaluation has four tasks: 1. Security requirements evaluation. Are applicable security requirements acceptable? — Assets. What should be protected? — Threats. What are assets protected against? — Exposures. What might happen to assets if a threat is realized? — Controls. How effective are safeguards in reducing exposures? 2. Security function evaluation. Do application security functions satisfy the requirements? — Defined requirements/security functions. Authentication, authorization, monitoring, security management, security labeling. — Undefined requirements/specific threats. Analysis of key controls; that is, how effectively do controls counter specific threats? — Completed to the functional level. Logical level represented by functions as defined in the functional requirements document. 3. Control existence determination. Do the security functions exist? — Assurance that controls exist via visual inspection or testing of internal controls. 4. Methodology review. Does the implementation method provide assurance that security functions are acceptably implemented? — Documentation. Is it current, complete, and of acceptable quality? — Objectives. Is security explicitly stated and treated as an objective? — Project control. Was development well controlled? Were independent reviews and testing performed, and did they consider security? Was an effective change control program used? 502


Certification and Accreditation Methodology — Tools and techniques. Were structured design techniques used? Were established programming practices and standards used? — Resources. How experienced in security were the people who developed the application? What were the sensitivity levels or clearances associated with their positions? Type 3: Moderate Certification This type of certification is more detailed and complex and requires more resources. It is generally used for systems that require higher degrees of assurance, have a greater level of risk, or are more complex. The focus of this type of certification is also information security functionality (e.g., identification and authentication, access control, auditing); however, more extensive evidence is required to show that the system meets the security requirements. FIPS Pub. 102’s second level of evaluation, the detailed evaluation, is similar to the Type 3 category; and it provides further analysis to obtain additional evidence and increased confidence in evaluation judgments. The detailed evaluation may be initiated because (1) the basic evaluation revealed problems that require further analysis, (2) the application has a high degree of sensitivity, or (3) primary security safeguards are embodied in detailed internal functions that are not visible or suitable for examination at the basic evaluation level. Detailed evaluations involve analysis of the quality of security safeguards. The tasks include: • Functional operation. Do controls function properly? — Control operation. Do controls work? — Parameter checking. Are invalid or improbable parameters detected and properly handled? — Common error conditions. Are invalid or out-of-sequence commands detected and properly handled? — Control monitoring. Are security events properly recorded? Are performance measurements properly recorded? — Control management. Do procedures for changing security tables work? • Performance. Do controls satisfy performance criteria? — Availability. What proportion of time is the application or control available to perform critical or full services? — Survivability. How well does the application or control withstand major failures or natural disasters? — Accuracy. How accurate is the application or control, including the number, frequency, and significance of errors? — Response time. Are response times acceptable? Will the user bypass the control because of the time required? 503


— Throughput. Does the application or control support required usage capabilities?
• Penetration resistance. How readily can controls be broken or circumvented? Resistance testing addresses the extent to which the application and controls must block or delay attacks. The focus of the evaluation activities will depend upon whether the penetrators are users, operators, application programmers, system programmers, managers, or external personnel. Resistance testing should also be conducted against physical assets and performance functions. This type of testing can be the most complex of the detailed evaluation categories, and it is often used to establish a level of confidence in security safeguards. Areas to be considered for detailed testing are:
• Complex interfaces
• Change control process
• Limits and prohibitions
• Error handling
• Side effects
• Dependencies
• Design modifications/extensions
• Control of security descriptors
• Execution chain of security services
• Access to residual information

Additional methods of testing are flaw identification or hypothesizing generic flaws and then determining if they exist. These methods can be applied to software, hardware, and physical and administrative controls. Type 4: Extensive Certification This type of certification is the most detailed and complex type of certification and generally requires a great deal of resources. It is used for systems that require the highest degrees of assurance and may have a high level of threats or vulnerabilities. The focus of this type of certification is also information security functionality (e.g., identification and authentication, access control, auditing) and assurance. Extensive evidence, generally found in the system design documentation, is required for this type of certification. FIPS Pub. 102’s third level of evaluation, the detailed focusing evaluation, is similar to the Type 4 category. Two strategies for focusing on a small portion of the security safeguards for a system are: (1) security-relevant components and (2) situational analysis. 504


Certification and Accreditation Methodology The security-relevant components strategy addresses previous evaluation components in a more detailed analysis: • Assets. Which assets are most likely at risk? Examine assets in detail in conjunction with their attributes to identify the most likely targets. • Threats. Which threats are most likely to occur? Distinguish between accidental, intentional, and natural threats and identify perpetrator classes based on knowledge, skills, and access privileges. Also consider threat frequency and its components: magnitude, asset loss level, exposures, existing controls, and expected gain by the perpetrator. • Exposures. What will happen if the threat is realized, for example, internal failure, human error, errors in decisions, fraud? The focus can be the identification of areas of greatest potential loss or harm. • Controls. How effective are the safeguards in reducing exposures? Evaluations may include control analysis (identifying vulnerabilities and their severity), work-factor analysis (difficulty in exploiting control weaknesses), or countermeasure trade-off analysis (alternative ways to implement a control). Situational analysis may involve an analysis of attack scenarios or an analysis of transaction flows. Both of these analyses are complementary to the high-level basic evaluation, providing a detailed study of a particular area of concern. An attack scenario is a synopsis of a projected course of events associated with the realization of a threat. A manageable set of individual situations is carefully examined and fully understood. A transaction flow is a sequence of events involved in the processing of a transaction, where a transaction is an event or task of significance and visible to the user. This form of analysis is often conducted in information systems auditing and should be combined with a basic evaluation. CONCLUSION Summary There are a significant number of components associated with a certification and accreditation effort. Some of the key factors may appear to be insignificant, but they will greatly impact the success of the efforts and the quality of the information obtained. • All appropriate security disciplines must be included in the scope of the certification. Although a system may have very strong controls in one area, weak controls in another area may undermine the system’s overall security posture. • Management’s political and financial support is vital to the acceptance and implementation of the C&A process. Management should be briefed on the C&A program, its objectives, and its processes. 505


APPLICATION PROGRAM SECURITY • Information systems to undertake a C&A must be identified and put in a priority order to ensure that the most important systems are addressed first. • Security requirements must be established (if not already available); and the requirements must be accurate, complete, and understandable. • Technical evaluators must be capable of performing their assigned tasks and be able to remain objective in their evaluation. They should have no vested interest in the outcome of the evaluation. • Access to the personnel and documentation associated with an information system is vital to the completion of required documentation and analyses. • A comprehensive basic evaluation should be performed. A detailed evaluation should be completed where necessary. Industry Implementation Where do you stand? • If your organization’s security department is not sufficiently staffed, what type of individuals (and who) can be tasked to support C&As on a part-time basis? • C&A process steps and associated documentation will be necessary. Use the references presented in this chapter as a starting point for creating the applicable documentation for your organization. • Systems for which a C&A will be conducted must be identified. Consider sensitivity and criticality when you are creating your list. Identify those systems with the highest risks and most impact if threats are realized. Your organization has more to lose if those systems are not adequately protected. • The level of C&A to be conducted will depend on the available resources. You may suggest that your organization start with minimal C&A levels and move up as time and funding permit. The level of effort required will help you determine the associated costs and the perceived benefits (and return on investment) for conducting the C&As. Take that Step and Keep Stepping You may have to start at a lower level of C&A than you would like to conduct for your organization, but you are taking a step. Check with your colleagues in other organizations on their experiences. Small, successful C&As will serve as a marketing tool for future efforts. Although the completion of a C&A is no guarantee that there will not be a loss of information confidentiality, integrity, or availability, the acceptance of risk is based upon increased performance of security controls, user awareness, and increased management understanding and control. Remember: take that step. A false sense of security is worse than no security at all. 506


References
1. Guideline for Computer Security Certification and Accreditation, Federal Information Processing Standards Publication 102, U.S. Department of Commerce, National Bureau of Standards, September 27, 1983.
2. Introduction to Certification and Accreditation, NCSC-TG-029, National Computer Security Center, U.S. Government Printing Office, January 1994.
3. National Information Assurance Certification and Accreditation Process (NIACAP), National Security Telecommunications and Information Systems Security Committee, NSTISSC 1000, National Security Agency, April 2000.
4. Sample Generic Policy and High Level Procedures, Federal Agency Security Practices, National Institute of Standards and Technology, www.csrc.nist.gov/fasp.
5. Department of Defense (DoD) Information Technology Security Certification and Accreditation Process (DITSCAP), DoD Instruction 5200.40, December 30, 1997.
6. How to Perform Systems Security Certification and Accreditation (C&A) within the Defense Logistics Agency (DLA) using Metrics and Controls for Defense-in-Depth (McDid), Federal Agency Security Practices, National Institute of Standards and Technology, www.csrc.nist.gov/fasp.
7. The Certification and Accreditation Process Handbook for Certifiers, Defense Information Systems Agency, INFOSEC Awareness Division, National Security Agency.

ABOUT THE AUTHORS
Mollie E. Krehnke, CISSP, and David C. Krehnke, CISSP, are principal information security analysts for Northrop Grumman in Raleigh, North Carolina.


Chapter 31

A Framework for Certification Testing
Kevin J. Davidson, CISSP

The words "We have a firewall" have often been heard in response to the question, "What are you doing to protect your information?" Security professionals recognize that the mere existence of a firewall does not in and of itself constitute good information security practice. Information system owners and managers are generally not aware of a need to verify that the security policies and procedures they have established are followed, if in fact they have established policies or procedures at all. In this chapter, the focus is on system security certification as an integral part of the system accreditation process. Accreditation may also be called authorization or approval. Every information system operating in the world today has been through some type of accreditation or approval process, whether formal, informal, or, in many cases, by default because no process exists. System owners and managers, along with information owners and managers, have approved the system to operate, either by some identified and documented process or by default. It is incumbent upon information security professionals and practitioners to subscribe to a method of ensuring those systems operate as safely and securely as possible in the interconnected and open environment that exists today. The approaches and methods outlined in this chapter are intended as guidelines and a framework from which to build an Information System Security Certification Test. They are not intended to be a set of rules; rather, they are a process that can be tailored to meet the needs of each unique environment.

INTRODUCTION
To provide a common frame of reference, it is necessary to define the terms that are used in this chapter. The following definitions apply to the discussion herein.


APPLICATION PROGRAM SECURITY What Is Accreditation? Accreditation refers to the approval by a cognitive authority to operate a computer system within a set of parameters. As previously mentioned, the process for approving the operation of the information system may be formal, informal, or nonexistent. Take the case of a consumer who purchases a personal computer (PC) from a vendor as an example. The proud new owner of that PC takes it home, connects all the wires in the right places, and turns it on. Probably one of the next actions that new PC owner will take is to connect the PC to an Internet service provider (ISP) by means of some type of communication device. In this scenario, the owner of that PC has unwittingly assumed the risk and responsibility for the operation of that computer within the environment the owner has selected. There is no formal approval process in place, yet the owner assumes the responsibility for the operation of that computer. This responsibility extends to any potential activity that may be initiated from that computer — even illegal activity. The owner also assumes the responsibility for the operation of that computer even if it becomes a zombie used for a distributed denial-of-service (DDoS) attack. No formal policies have been established, and no formal procedures are in place. Dependent upon the skill and experience of the owner, the computer may be correctly configured to defend against hostile actions. Additionally, if other persons, such as family members, use this computer, there may be little control over how this computer is used, what software is installed, what hostile code may be introduced, or what information is stored. At the other end of the scale, a government entity may acquire a largescale computer system. Many governments have taken action to introduce a formal accreditation process. The governments of Canada, Australia, and the United States, among others, have developed formal accreditation or approval processes. Where these processes are developed, information security professionals should follow those processes. They identify specific steps that must be followed in order to approve a computer system to operate. In some cases, specific civil and criminal liabilities are established to encourage the responsible authorities within those government entities to follow the process. A huge middle ground exists between the new PC owner and the large computer system in the government entity. This middle ground encompasses small business owners, medium-sized business entities, and large corporations. The same principle applies to these entities. Somewhere within the management of the organization, someone has made a decision to operate one or more computer systems. These systems may be interconnected and may have access to the global communications network. Business owners, whether sole proprietors, partnerships, or corporations, have assumed the risk and responsibility associated with operating those 510


A Framework for Certification Testing computer systems. It would be advisable for those business owners to implement a formal accreditation process, as many have. By so doing, business owners can achieve a higher level of assurance that their computer systems are part of the solution to the information security problem instead of being potential victims or contributors to the information security problem. In addition, implementing and practicing a formal accreditation process will help to show that the owners have exercised due diligence if a problem or incident should arise. Elements of Accreditation What are the elements of an accreditation process? One of the major advantages of having a formal accreditation process is the documentation generated by the process itself. By following a process, the necessary rules and procedures are laid down. Conscious thought is given to the risks associated with operating the identified computer system. Assets are identified and relative values are assigned to those assets, including information assets. In following the process, protection measures are weighed against the benefit to the information or asset protected, and a determination is made regarding the cost effectiveness of that protection measure. Methods to maintain the security posture of the system are identified and planned. Also, evidence is generated to help protect the business unit against potential future litigations. Some of the documents that may be generated include Security Policy, Security Plan, Security Procedures, Vulnerability Assessment, Risk Assessment, Contingency Plan, Configuration Management Plan, Physical Security Plan, Certification Plan, and Certification Report. This is neither an inclusive nor exhaustive list. The contents of these documents may be combined or separated in a manner that best suits the environment accredited. A brief explanation of each document follows. Security Policy. The Security Policy for the information system contains the rules under which the system must operate. The Security Policy will be one of the major sources of the system security requirements, which are discussed later in this chapter. Care should be exercised to see that statements in the Security Policy are not too restrictive. Using less restrictive rules avoids the pitfall of having to change policy every time technology changes.

An example of a policy statement is shown in Exhibit 31-1. This clearly states the purpose of the statement without dictating the method by which the policy will be enforced. A policy statement such as this one could be fulfilled by traditional user ID and password mechanisms, smart card systems, or biometric authentication systems. As technology changes, the policy does not need to be changed to reflect advances in the technology. 511


Exhibit 31-1. Sample security policy statement.
Users of the XYZ Information System will be required to identify themselves and authenticate their identification prior to being granted access to the information system.

Exhibit 31-2. Sample security plan statement. A thumbprint reader will be used to identify users of the XYZ Information System. Users who are positively identified by a thumbprint will then be required to enter a personal identification number (PIN) to authenticate their identity.

Exhibit 31-3. Sample security procedure.
Log-On Procedure for the XYZ Information System
1. Place your right thumb on the thumbprint reader window so that your thumbprint is visible to the window.
2. When your name is displayed on the display monitor, remove your thumb from the thumbprint reader.
3. From the keyboard, enter your personal identification number (PIN).
4. Press Enter (or Return).
5. Wait for your personal desktop to be displayed on the display monitor.
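Exhibits 31-1 through 31-3 trace a single control from policy (what must happen), to plan (the mechanism chosen), to procedure (the steps users follow). The Python sketch below is one way that layering might look in code; the ThumbprintReader class, the PIN store, and all other names are hypothetical illustrations and are not part of any actual XYZ system.

# Hypothetical sketch: the policy requirement (identify, then authenticate) stays
# fixed, while the plan-level mechanism (thumbprint + PIN here) can be swapped out.
class ThumbprintReader:
    """Stand-in for the biometric device named in the Security Plan."""
    def __init__(self, enrolled):
        self.enrolled = enrolled               # {thumbprint_id: user_name}
    def identify(self, thumbprint_id):
        return self.enrolled.get(thumbprint_id)   # steps 1-2 of Exhibit 31-3

def authenticate(user_name, entered_pin, pin_store):
    """Steps 3-4 of Exhibit 31-3: the PIN authenticates the claimed identity.
    (A production system would store salted hashes, not raw PINs.)"""
    return pin_store.get(user_name) == entered_pin

def log_on(reader, pin_store, thumbprint_id, entered_pin):
    user = reader.identify(thumbprint_id)      # identification (policy clause)
    if user is None:
        return None
    if not authenticate(user, entered_pin, pin_store):   # authentication (policy clause)
        return None
    return user                                # step 5: session granted

reader = ThumbprintReader(enrolled={"tp-0001": "jdoe"})
pins = {"jdoe": "4321"}
print(log_on(reader, pins, "tp-0001", "4321"))   # jdoe
print(log_on(reader, pins, "tp-0001", "9999"))   # None

Because the policy is expressed in mechanism-neutral terms, replacing the reader with a smart card or password mechanism changes only the plan-level pieces; the log_on check that enforces the policy is untouched.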

Security Plan. The Security Plan for the information system is a fluid document. It identifies the methods employed to meet the policy. This document will change with technology. As new mechanisms are developed that satisfy Security Policy statements, they can be incorporated into the Security Plan and implemented when it is appropriate to do so within the environment.

To satisfy the Security Policy statement given in Exhibit 31-1, the Security Plan may contain a statement such as the one given in Exhibit 31-2. This Security Plan statement identifies the mechanism that will be used to satisfy the statement in the policy. Security Procedures. Security Procedures for the information system are usually written in language intended for a less technical audience. Security Procedures may cover a wide variety of topics, from physical security to firewall configuration guidelines. They generally provide step-by-step instructions for completing a specific task. One such procedure may include a series of statements similar to those given in Exhibit 31-3. By following this procedure, the system user would successfully gain access to the computer system, while satisfying the Security Policy statement given 512


A Framework for Certification Testing in Exhibit 31-1, using the mechanism identified in Exhibit 31-2. The user need not be familiar with either the Security Policy or the Security Plan when the procedure identifies the steps necessary to accomplish the task within the parameters laid down in the policy and the plan. Vulnerability Assessment. Vulnerability Assessment is often confused with Risk Assessment. They are not the same thing. While the results of a Vulnerability Assessment and a Risk Assessment are often reported in the same document, it is important to note the differences.

A Vulnerability Assessment is that part of the accreditation process that identifies weaknesses in the security of the information system. Vulnerabilities are not limited to technical vulnerabilities such as those reported by Carnegie Mellon’s Computer Emergency Response Team (CERT). Vulnerabilities could also include physical security weaknesses, natural disaster susceptibilities, or resource shortages. Any of these contingencies could introduce risk to an information system. For example, the most technically secure operating system offers little protection if the system console is positioned in the parking lot with the administrator’s password taped to the monitor. Vulnerability Assessments attempt to identify those weaknesses and document them in order. Risk Assessment. The Risk Assessment attempts to quantify the likelihood that hostile persons will exploit the vulnerabilities identified in the Vulnerability Assessment. The Risk Assessment will serve as a major source for system security requirements. There are two basic schools of thought when it comes to assessing risk. One school of thought attempts to quantify risk in terms of absolute monetary value or annual loss expectancy (ALE). The other school of thought attempts to quantify risk in subjective terms such as high, medium, or low. It is not the purpose of this chapter to justify either approach. Insight is given into these approaches so that the information security professional is apprised that risk assessment methodologies may take a variety of forms and approaches. It is left to the discretion of the information security professional and the accrediting authority — who, after all, is the one who will have to approve the results of the process to determine the best risk assessment method for the environment. The Risk Assessment will qualify the risk associated with the vulnerabilities identified in the Vulnerability Assessment so that they may be mitigated through security countermeasures or accepted by the Approving Authority. Contingency Plan. There may be a Contingency Plan or Business Continuity Plan for the information system. This plan will identify the plans for maintaining critical business operations of the information system in the event one or more occurrences cause the information system to be inoperable or marginally operable for a specified period of time. Contingency 513


APPLICATION PROGRAM SECURITY planning is probably of more value to businesses such as E-commerce sites or ISPs, and one is more likely to expect this type of documentation for these types of organizations. The plan should identify critical assets, operations, and functions. These are noteworthy for the information security professional in that this information identifies critical assets — both physical assets and information assets — that should be the focus of the certification effort. Configuration Management. Configuration Management is that discipline by which changes to the system are made using a defined process that incorporates management approval. Larger installations will usually have a Configuration Management Plan. It is important to systematically consider changes to the information system in order to avoid introducing undesirable results and potential vulnerabilities into the environment. Good configuration management discipline will be reflected favorably in the certification process, as is discussed later in this chapter. Physical Security. Again, good information security is dependent upon good physical security. Banks usually build vaults to protect their monetary assets. In like manner, physical security of information assets is a necessity. Organizations may have physical security plans to address their physical security needs. Regardless of the existence of a plan, the certification effort will encompass the physical security needs of the information system certified. Training. No system security program can be considered complete without some form of security awareness and training provisions. The training program will address those principles and practices specific to the security environment. Training should be both formal and informal. It should include classroom training and awareness reminders such as newsletters, e-mails, posters, or signs.
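Returning briefly to the Risk Assessment element above: the quantitative school typically expresses risk as an annual loss expectancy (ALE), computed as the single loss expectancy times the annualized rate of occurrence. The short sketch below shows the standard arithmetic; the asset values and frequencies are invented for illustration and are not drawn from this chapter.

# Standard quantitative risk formulas (illustrative figures only):
#   SLE (single loss expectancy) = asset value x exposure factor
#   ALE (annual loss expectancy) = SLE x ARO (annualized rate of occurrence)
def single_loss_expectancy(asset_value, exposure_factor):
    return asset_value * exposure_factor

def annual_loss_expectancy(asset_value, exposure_factor, aro):
    return single_loss_expectancy(asset_value, exposure_factor) * aro

# Example: a $200,000 customer database, 40% exposed per incident,
# expected once every four years (ARO = 0.25).
sle = single_loss_expectancy(200_000, 0.40)        # 80,000
ale = annual_loss_expectancy(200_000, 0.40, 0.25)  # 20,000
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
# A countermeasure that costs less per year than the ALE reduction it buys
# is a candidate for cost-effective mitigation.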

Certification Certification means many different things to many different people. The context in which one discusses certification has much to do with the meaning derived from the word. The following are some examples of how this word may be used. Professional organizations provide certifications of individuals. A person may carry the designation of Certified Public Accountant (CPA), Certified Information Systems Security Professional (CISSP), or perhaps Certified Protection Professional (CPP). These designations, along with a multitude of others, state that the individual holding the designation has met a defined standard for the designation held. 514


Vendors may provide certifications of individuals on their products. The vendor offers this certification to say that an individual has met the minimum standards or level of expertise on the products for which they are certified. Examples of this type of certification include the Cisco Certified Network Associate (CCNA) or Check Point Certified Security Administrator (CCSA), among many others.

Vendors also provide certifications for products. Many vendors offer certifications of interoperability or compatibility, stating that the standards for interoperability or compatibility have been met. For example, Microsoft offers a certification for computer manufacturers that the operating system and the hardware are compatible.

Governments offer certifications for a wide variety of persons, products, processes, facilities, utilities, and many other things too numerous to list in this chapter. These government certifications state that the person, object, or process certified has met the standard as defined by that government.

Standards organizations may offer certifications. For example, a corporate entity may be certified by the standards organization to perform testing under the Common Criteria for Information Technology Security Evaluation (ISO/IEC 15408). A certified laboratory has met the standards defined by the standards organization. These certified laboratories might in turn offer certification for vendor products to given evaluation assurance levels (EALs), which range from 1 through 7. By giving a certification to a product, these certified labs are stating that the product has met the standard defined for the product.

For the purpose of the discussion within this chapter, certification refers to that part of the accreditation process in which a computer system is evaluated against a defined standard. The results of that evaluation are documented, repeatable, defendable, and reportable. The results are presented to the Approving Authority as evidence for approval or disapproval of the information system. The common theme that runs through the world of certification is that there is a defined standard and that the standard has been met. Certification does not attempt to quantify or qualify the degree to which the standard may have been met or exceeded. Certification states that the minimum standard has been achieved.

What Is It? Simply put, system security certification is the process by which a system is measured against a defined standard. In a formal certification process, the results of that measurement are recorded, documented, and reported.

Cost versus Benefits. The direct monetary benefits to conducting a certification of the information system may not be obvious to management. The


APPLICATION PROGRAM SECURITY question then becomes: Why spend the time, effort, and money if a monetary benefit is not readily obvious? Further, how does the information security professional convince management of the need for certification? To answer these questions, one needs to identify the assets protected. • Financial information. Financial information assets are deserving of protection. The system may process information such as bank accounts, including their transaction balances. It may store the necessary information, such as log-on identification and passwords that would allow a would-be thief to transfer funds to points unknown. Adequate protection mechanisms may be in place to protect financial information, and conducting a certification is one of the best ways to know for sure that the security mechanisms are functioning as advertised and as expected. • Personal information. Many governments have taken steps to provide their citizens with legal protection of personal and private information. In addition to legal requirements that may be imposed by a local authority, civil liabilities may be incurred if personal information is released by the information system. In the event of a civil or criminal proceeding, it would be advantageous to be able to document due diligence. Conducting a certification is a good way to show that due diligence has been exercised. • Corporate information. Information that is considered proprietary in nature or company confidential needs to be protected for reasons determined by managers and owners. This information has value to the business interests of the corporation, agency, or entity. For this reason, certification should be considered part of the approval process in order to verify that the installed security mechanisms are functioning in such a manner as to provide adequate protection to that information. Serious damage to the business interests of the corporation, agency, or entity may be incurred if corporate information were to fall into the wrong hands. • Legal requirements. Laws are constantly changing. Regulatory bodies may change the rules. Conducting a certification of the information system help to keep managers one step ahead of the changing environment and perhaps avoid fines and penalties resulting from a failure to meet legal requirements. Why Certify? It is left to the reader to determine the best justification for proceeding with the certification part of the accreditation process. Remember the earlier discussion regarding the approval to operate an information system? In that discussion, it was discovered that approval and certification are done either through a conscious effort, be it formal or informal, or by default. Choosing to do nothing is not a wise course of action. The fact that you have a firewall, a “secure” operating system, or 516


A Framework for Certification Testing other security measures installed does not ensure that those features and functions are operating correctly. Many times, the certification process has discovered that these security measures have provided only a false sense of security and that they did not provide any real protection to the information system. ROLES AND RESPONSIBILITIES Once the decision has been made to proceed with a certification, it is necessary to assemble a team of qualified individuals to perform the certification. It can be performed in-house or may be outsourced. In the paragraphs that follow, a suggested list of Roles and Responsibilities for the Certification Test Team are presented. The roles and responsibilities do not necessarily require one person for each role. Roles may be combined or modified to meet the requirements of the environment. Resource availability as well as the size and complexity of the system evaluated will drive the decision on the number of personnel needed. • Approving Authority. The Approving Authority is the person legally responsible for approving the operation of the information system. This person will give the final approval or accreditation for the information system to go into production. The authority of this individual may be derived from law or from business directive. This person will have not only the legal authority to assume the residual risk associated with the operation of the information system, but will also assume the civil and criminal liabilities associated with the operation of the information system. • Certifying Authority. The Certifying Authority or Certifier is the individual responsible for approving, certifying, and reporting the results of the certification. This person is sometimes appointed by the Approving Authority but most certainly has the full faith and support of those in authority to make such an appointment within the agency, business, or corporation. This person must possess a sufficient level of technical expertise to understand the results presented. This individual will function on behalf of the Approving Authority, or those having authority to make the appointment, in all matters pertaining to certification as it relates to the accreditation process. This individual may also be called upon to contribute to a recommendation to the Approving Authority regarding approval or disapproval of the information system to operate. • Test Director. The Test Director operates under the direction of the Certifying Authority. This individual is responsible for the day-to-day conduct of the certification test. The Test Director ensures that the tests are conducted as prescribed and that the results are recorded, collected, preserved, and reported. Depending on the size and complexity of the information system certified, the Test Director may be 517










required to provide periodic updates to the Certifying Authority. Periods may be weekly, daily, or perhaps hourly, if needed. The Test Director will ensure that all tests are performed in accordance with the test plan. System Manager. The System Manager must be an integral part of the certification process. It is impossible for anyone to know everything about a given information system, even if the system is well documented. The System Manager will usually have the most intimate and current knowledge of the information system. This individual will make significant contributions to preparing test scenarios and test scripts necessary to document the test plan. The System Manager, or a designee of the System Manager, will actually perform many of the tests prescribed in the test plan. Test Observer. Test Observers may be required if the information system is of sufficient size and complexity. At a minimum, it is recommended that there be at least one test observer to capture and record the results of the test as they are performed. Test Observers operate under the direction of the Test Director. Test Recorder. The Test Recorder is responsible to the Test Director for logging and preserving the test results, evidence, and artifacts generated during the test. In the case of smaller installations, the Test Recorder may be the same person as the Test Director. In larger installations, the Test Recorder may be more than one person. The size and complexity of the information system, as well as resource availability, will dictate the number of Test Recorders needed. IV&V. Independent Verification and Validation (IV&V) is recommended as a part of all certification tests. IV&V is a separate task not directly associated with the tasks of the Certifying Authority or the Certification Test Team. The IV&V is outside the management structure of the Certifying Authority, the Test Director, and their teams. Under ideal conditions, IV&V will provide a report directly to the Approving Authority. In this manner, the Approving Authority will have a second opinion regarding the security of the information system certified. IV&V will have access to all the information generated by the Certification Test Team and will have the authority to direct deviations from the test plan. At the discretion of the Approving Authority, the Certification Test Team may not necessarily have access to information generated by the IV&V. The IV&V task may be outsourced if inadequate resources are not available in-house.

DOCUMENTATION With the Certification Test Team in place and the proper authorities, appointments, and reporting structure established, it is now time to begin the task of generating a Certification Test Plan. The Certification Test Plan 518


A Framework for Certification Testing covers preparation and execution of the certification; delineates schedules and resources for the certification; identifies how results are captured, stored, and preserved; and describes how the Certifying Authority reports the results of the certification to the Approving Authority. Policy Security requirements are derived from a variety of sources. There was a discussion of Security Policies and Security Plans earlier in this chapter. Policy statements are usually found in the Security Policy; however, information security professionals should be watchful for policy statements that appear in Security Plans. Often, these are not separate documents, and the Security Plans for the information system are combined with the policy into a single document. Policy statements are also derived from public law, regulations, and policies. Information security professionals need to be versed in the local laws, regulations, and policies that affect the operations of information systems within the jurisdiction in which they operate. Failing to recognize the legal requirements of local governments could lead to providing false certification results by certifying a system that is operating illegally under local law. For example, some countries require information systems connecting to the Internet to be routed through a national firewall, making it illegal to connect directly to an ISP. Plans Security Plans may contain policy statements, as mentioned previously. Security Plans may also address future implementations of security measures. Information security professionals need to carefully read Security Plans and test only those features that are supposed to be installed in the current configuration. The Certification Test Plan will also ensure that Physical Security, Configuration Management, and Contingency or Emergency Plans are being followed. The absence of these plans must be noted in the Certification Test Report, as the lack of such planning may affect the decision of the Approving authority. Procedures Any Security Procedures that were generated as a part of the overall security program for the information system must be tested. The goal of testing these procedures is to ensure that user and operator personnel are aware of the procedures, know where the procedures are kept, and that the procedures are followed. Occasionally it is discovered that the procedures are not followed and, if not followed, the procedures are worthless. The 519


APPLICATION PROGRAM SECURITY Approving Authority must be made aware of this fact if discovered during the test. Risk Assessment The Risk Assessment is also a major source for security requirements. The Risk Assessment should identify the security countermeasures and mechanisms chosen to mitigate the risk associated with identified vulnerabilities. The Risk Assessment may also prioritize the implementation of countermeasures, although this is normally done in the Security Plan. DETERMINING REQUIREMENTS Here is where the hard work begins. Up to this point in the process, available and appropriate documentation has been collected, a Certification Test Team has been appointed and assembled, and the beginnings of a Certification Test Plan have been initiated. So what is covered by the Certification Test? It tests security requirements. For certification purposes, testing is not limited to technical security requirements of the information system. Later in this chapter, there is a discussion of categorization of requirements; however, before requirements can be categorized, they must be identified, derived, and decomposed. This phase of the certification process may be called the Requirements Analysis Phase. During this phase, direct and derived requirements are identified. The result of this phase is a Requirements Matrix that traces the decomposed requirements to their source. Direct requirements are those clearly identified and clearly stated in a policy document. Going back to Exhibit 31-1, a clear requirement is given for user identification and subsequent authentication. Derived requirements are those requirements that cannot be directly identified in a policy statement; rather, they must be inferred or derived from a higher-level requirement. Using Exhibit 31-2 as an example, the need for a thumbprint reader to be installed on the information system must be derived because it is not stated directly in the plan. Requirements are discussed in the following paragraphs in general order of precedence. The order of precedence given here is not intended to be inflexible; rather, it can be used as a guideline that should be tailored to fit the environment in which it is used. Legal Legal requirements are those requirements promulgated by the law of the land. If, in the case of Exhibit 31-2, the law required the use of smart cards instead of biometrics to identify users, then the policy statement given in Exhibit 31-2 could be considered an illegal requirement. It is the 520


A Framework for Certification Testing responsibility of the information security professional to be aware of the local laws, and it would be the responsibility of the information security professional to report this inconsistency. The Approving Authority would decide whether to accept the legal implication of approving the information system to operate in the current configuration. Regulatory The banking industry is among the most regulated industries in the world. The banking industry is an example of how government regulations can affect how an information system will function. The types of industries regulated and the severity of regulation within those industries vary widely. Information security professionals need to be familiar with the regulatory requirements associated with the industry in which they operate. Local Local requirements are the policies and requirements implemented by the entity, agency, business, or corporation. These requirements are usually written in manuals, policies, guidance documents, plans, and procedures specific to the entity, agency, business, or corporation. Functional Sometimes security requirements stand in the way of functional or mission requirements, and vice versa. Information security professionals need to temper the need to protect information with the need to get the job done. For this reason, it is recommended that security requirements be tested using functional and operational scenarios. By so doing, a higher level of assurance is given that security features and mechanisms will not disrupt the functional requirements for the information system. It allows the information security professional to evaluate how the security features and mechanisms imposed on the information system may affect the functional mission. Operational Operational considerations are also an important part of the requirements analysis. Operational requirements can sometimes be found in the various plans and procedures. It is necessary to capture these requirements in the Requirements Matrix also, so that they can be tested as part of the overall information security program. Operational requirements may include system backup, contingencies, emergencies, maintenance, etc. Requirements Decomposition Decomposing a requirement refers to the process by which a requirement is broken into smaller requirements that are quantifiable and testable. Each 521


Exhibit 31-4. Sample decomposed policy requirements.
1.1 Users of the XYZ Information System will be required to identify themselves prior to being granted access to the information system.
1.2 Users of the XYZ Information System will be required to authenticate their identity prior to being granted access to the information system.
2.1 A thumbprint reader will be used to identify users of the XYZ Information System.
2.1.a Thumbprint readers are installed on the target configuration.
2.2 Users who are positively identified by a thumbprint will then be required to enter a personal identification number (PIN) to authenticate their identification.
2.2.a Keyboards are installed on the target configuration.

decomposed requirement should be testable on a pass-or-fail basis. As an example, Exhibit 31-1 contains at least two individual testable requirements. Likewise, Exhibit 31-2 contains at least two individual testable requirements. Exhibit 31-4 shows the individual decomposed requirements. Requirements Matrix A Requirements Matrix is an easy way to display and trace a requirement to its source. It provides a column for categorization of each requirement. The Matrix also provides a space for noting the evaluation method that will be used to test that requirement and a space for recording the results of the test. The following paragraphs identify column heading for the Requirements Matrix and provide an explanation of the contents of that column. Exhibit 31-5 is an example of how the Requirements Matrix may appear. Category. Categories may vary, depending upon the environment of the information system certified. The categories listed in the following paragraphs are suggested as a starting point. The list can be tailored to meet the needs of the environment. Further information on security services and mechanisms listed in the subsequent paragraphs can be found in ISO 7498-2, Information Processing Systems — Open Systems Interconnection — Basic Reference Model — Part 2: Security Architecture (1989). The following definitions are attributed to ISO 7498-2. Note that a requirement may fit in more than one category.

• Security services. Security services include authentication, access control, data confidentiality, data integrity, and nonrepudiation. — Authentication. Authentication is the corroboration that a peer entity is the one claimed. — Access control. Access control is the prevention of unauthorized use of a resource, including the prevention of use of a resource in an unauthorized manner. 522

Exhibit 31-5. Example requirements matrix.
Req. No. | Category | Source Reference | Stated Requirement | Evaluation Method | Test Procedure | Pass | Fail
1 | I&A | XYZ Security Policy | Users of the XYZ Information System will be required to identify themselves prior to being granted access to the information system | Test | IA002S | |
2 | I&A | XYZ Security Policy | Users of the XYZ Information System will be required to authenticate their identity prior to being granted access to the information system | Test | IA002S | |
3 | I&A | XYZ Security Plan | A thumbprint reader will be used to identify users of the XYZ Information System | Demonstrate | IA003S | |
4 | Architecture | Derived | Thumbprint readers are installed on the target configuration | Observation | AR001A | |
5 | I&A | XYZ Security Plan | Users who are positively identified by a thumbprint will then be required to enter a personal identification number (PIN) to authenticate their identification | Demonstrate | IA003S | |
6 | Architecture | Derived | Keyboards are installed on the target configuration | Observation | AR001A | |
(The Pass and Fail columns are left blank for recording results during the test.)


APPLICATION PROGRAM SECURITY — Data confidentiality. Data confidentiality is the property that information is not made available or disclosed to unauthorized individuals, entities, or processes. — Data integrity. Data integrity is the property that data has not been altered or destroyed in an unauthorized manner. — Non-repudiation. Non-repudiation is proof of origin or receipt such that one of the entities involved in a communication cannot deny having participated in all or part of the communication. • Additional Categories. The following categories are not defined in ISO 7498. These categories, however, should be considered as part of the system security Certification Test. — Physical security. Physical security of the information system is integral to the overall information security program. At a minimum, the Certification Test should look for obvious ways to gain physical access to the information system. — Operational security. Operational security considerations include items such as backup schedules and their impact on the operational environment. For example, if a system backup is performed every day at noon, the Certification Test should attempt to determine if this schedule has an operational impact on the mission of the system, remembering that availability of information is one of the tenets of sound information security practice. — Configuration management. At a minimum, the Certification Test should select one change at random to determine that the process for managing changes was followed. — Security awareness and training. At a minimum, the Certification Test should randomly select user and operator personnel to determine that there is an active Security Awareness and Training Program. — System security procedures. At a minimum, the Certification Test should select an individual at random to determine if the System Security Procedures are being followed. — Contingency planning. The Certification Test should look for evidence that the Contingency Plan is routinely tested and updated. — Emergency Planning. The Certification Test should determine if adequate and appropriate emergency plans are in place. • Technical. Technical controls are those features designed into or added onto the computer system that are intended to satisfy requirements through the use of technology. — Access controls. The technical access control mechanisms are those that permit or deny access to systems or information based on rules that are defined by system administration and management personnel. This is the technical implementation of the access control security service. 524


A Framework for Certification Testing — Architecture. Technical architecture is of great importance to the Certification Test process. Verifying the existence of a well-developed system architecture will provide assurance that backdoors into the system do not exist unless there is a strong business case to support the backdoor, and then only if it is properly secured. — Identification and authentication. Identification and authentication is the cornerstone of information security. The Certification Test Plan must thoroughly detail the mechanisms and features associated with the process of identifying a user or process, as well as the mechanisms and features associated with authenticating the identity of the user or process. — Object reuse. In most information systems, shared objects, such as memory and storage, are allocated to subjects (users, processes, etc.) and subsequently released by those subjects. As subjects release objects back to the system to be allocated to other subjects, residual information is normally left behind in the object. Unless the object is cleared of its residual content, it is available to a subject that is granted an allocation to that object. This situation creates insecurity, particularly when the information may be passed outside the organization, thereby unintentionally releasing sensitive information to the public that resides in the file slack space. Clearing the object, either upon release of the object or prior to its allocation to a subject, is the technique used to prevent this insecurity. The test facilities necessary to test shared resources for residual data may not be available to the information security professional. To test this feature, the Certification Test Team may be required to seek the services of a certified testing facility. At a minimum, the Certification Test Plan should determine if this feature is available on the system under test and also determine if this feature is enabled. If these features have been formally tested by a reputable testing facility, their test results may be leveraged into the local test process. On a related subject, data remanence may be left on magnetic storage media. That is, the electrical charges on given magnetic media may not be completely discharged by overwriting the information on the media. Sophisticated techniques can be employed to recover information from media, even after it has been rewritten several times. This fact becomes of particular concern when assets are either discarded or transferred out of the organization. Testing this feature requires specialized equipment and expertise that may not be available within the Certification Test Team. At a minimum, the Certification Test Plan should determine if procedures and policies are in place to securely erase all data remanence from media upon destruction or transfer, through a process known as degaussing. 525
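As a concrete illustration of the object reuse concern discussed above, the sketch below overwrites an in-memory buffer and a temporary file before they are released. It is only a sketch under simplifying assumptions: as the text notes, overwriting does not defeat data remanence on magnetic media, and a real evaluation would rely on the platform's own object clearing features or on a certified test facility rather than application-level code like this.

# Illustrative only: clear residual content before an object is released for reuse.
import os

def clear_bytearray(buf: bytearray) -> None:
    """Overwrite an in-memory buffer before it is handed back for reuse."""
    for i in range(len(buf)):
        buf[i] = 0

def overwrite_and_remove(path: str, passes: int = 1) -> None:
    """Overwrite a temporary file's contents before deleting it.
    Note: this addresses simple object reuse at the file level only; it does
    not address remanence on the underlying media (see degaussing, above)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

secret = bytearray(b"session key material")
clear_bytearray(secret)        # buffer now contains only zero bytes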


Audit
Auditing is the technical security mechanism that records selected actions on the information system. Audit logs must be protected from tampering, destruction, or unauthorized access. The Certification Test Plan should include a test of the audit features of the system to determine their effectiveness.

System Integrity
Technical and nontechnical features and mechanisms should be implemented to protect the integrity of the information system. Where these features are implemented, the Certification Test Plan should examine them to determine their adequacy to meet their intended results.

Security Practices and Objectives
Test categories that address security practices and objectives may be found in the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) adaptation of British Standard (BS) 7799, which was published as ISO/IEC International Standard (IS) 17799, Information Technology — Code of Practice for Information Security Management, dated December 2000. ISO/IEC IS 17799 recommends standards for and identifies several objectives that are elements of information security management. In keeping with the spirit of the IS, the elements identified herein are recommendations and not requirements. These elements can be tailored to adapt to the environment in which the test is executed. For a further explanation and detailed definition of each of these categories, the reader is referred to ISO/IEC IS 17799. Exhibit 31-6 lists the various security services and mechanisms from ISO 7498-2 and the various security management practices and objectives from ISO 17799.

Source. Each requirement must be traceable to its source. The source may be any one or more of the documents identified above.

Specific Requirement. Each decomposed requirement will be listed separately. This allows for easy reference to the individual requirement.

Evaluation Method. This column identifies the method that will be used to evaluate the requirement. Possible evaluation methods include Test, Demonstration, Inspection, Not Evaluated, or Too General.

• Test. This evaluation method calls for the requirement to be tested on a system of the same configuration as the live system. Testing on a live system is not recommended; however, if resource constraints necessitate 526


A Framework for Certification Testing Exhibit 31-6. Security services, practices, and objectives .. SECURITY SERVICES (ISO 7498-2) Authentication Peer entity authentication Data origin authentication Access control Data confidentiality Connection confidentiality Connectionless confidentiality Selective field confidentiality Traffic flow confidentiality Data integrity Connection integrity with recovery Connection integrity without recovery Selective field connection integrity Connectionless integrity Selective field connectionless integrity Non-repudiation Non-repudiation with proof of origin Non-repudiation with proof of delivery SPECIFIC SECURITY MECHANISMS (ISO 7498-2) Encipherment Digital signature Access control Data integrity Authentication exchange Traffic padding Routing control Notarization PERVASIVE SECURITY MECHANISMS (ISO 7498-2) Trusted functionality Security labels Event detection Security audit trail Security recovery Security policy Information security policy document Review and evaluation Organizational Security Information security infrastructure Management information security forum Information security coordination Allocation of information security responsibilities Authorization process for information processing facilities Specialist information security advice Cooperation between organizations Independent review of information security


APPLICATION PROGRAM SECURITY Exhibit 31-6. Security services, practices, and objectives (Continued). Security of third-party access Identification of risks from third-party access Security requirements in third-party contracts Outsourcing Security requirements in outsourcing contracts Asset Classification and Control Accountability for assets Inventory of assets Information classification Classification guidelines Information labeling and handling Personnel Security Security in job definition and resourcing Including security in job responsibilities Personnel screening and policy Confidentiality agreements Terms and conditions of employment User training Information security education and training Responding to security incidents and malfunctions Reporting security incidents Reporting security weaknesses Reporting security malfunctions Learning from incidents Disciplinary process Physical and Environmental Security Secure areas Physical security perimeter Physical entry controls Security offices, rooms and facilities Working in secure areas Isolated delivery and loading areas Equipment security Equipment sitting and protection Power supplies Cabling security Equipment maintenance Security of equipment off-premises Secure disposal or reuse of equipment General controls Clear desk and clear screen policy Removal of property Communications and Operations Management Operational procedures and responsibilities Documented operating procedures Operational change control Incident management procedures Segregation of duties



A Framework for Certification Testing Exhibit 31-6. Security services, practices, and objectives (Continued). Separation of development and operational facilities External facilities management System planning and acceptance Capacity planning System acceptance Protection against malicious software Controls against malicious software Housekeeping Information backup Operator logs Fault logging Network management Network controls Media handling and security Management of removable computer media Disposal of media Information handling procedures Security of system documentation Exchanges of information and software Information and software exchange agreements Security of media in transit Electronic commerce security Security of electronic mail Security of electronic office systems Publicly available systems Other forms of information exchange Access control Business requirements for access control Access control policy User access management User registration Privilege management User password management Review of user access rights User responsibilities Password use Unattended user equipment Network access control Policy on use of network services Enforced path User authentication for external connections Node authentication Remote diagnostic port protection Segregation in networks Network connection control Network routing control Security of network services Operating system access control Automatic terminal identification



APPLICATION PROGRAM SECURITY Exhibit 31-6. Security services, practices, and objectives (Continued). Terminal log-on procedures User identification and authentication Password management system Use of system utilities Duress alarm to safeguard users Terminal timeout Limitation of connection time Application access control Information access restriction Sensitive system isolation Monitoring system access and use Event logging Monitoring system use Clock synchronization Mobile computing and teleworking Mobile computing Teleworking Systems Development and Maintenance Security requirements of systems Security requirements analysis and specification Security in application systems Input data validation Control of internal processing Message authentication Output data validation Cryptographic controls Policy on the use of cryptographic controls Encryption Digital signatures Non-repudiation services Key management Security of system files Control of operational software Protection of system test data Access control to program source library Security in development and support processes Change control procedures Technical review of operating system changes Restriction on changes to software packages Covert channels and Trojan code Outsourced software development Business Continuity Management Aspects of business continuity management Business continuity management process Business continuity and impact analysis Writing and implementing continuity plans Business continuity planning framework Testing, maintaining, and reassessing business continuity plans



A Framework for Certification Testing Exhibit 31-6. Security services, practices, and objectives (Continued). Compliance Compliance with legal requirements Identification of applicable legislation Intellectual property rights Safeguarding of organizational records Data protection and privacy of personal information Prevention of misuse of information processing facilities Regulation of cryptographic controls Collection of evidence Reviews of security policy and technical compliance Compliance with security policy Technical compliance checking System audit considerations System audit controls Protection of system audit tools









testing on the live system, all parties must be advised and agree to the risk associated with that practice.
• Demonstration. When testing is inappropriate, a demonstration may be substituted. For example, if a requirement calls for hard-copy output from the information system to be marked in a specific manner, personnel associated with the operation of the system could easily demonstrate that task.
• Inspection. Inspection is an appropriate test method for requirements such as having visiting personnel register their visit or a requirement that personnel display an identification card while in the facility.
• Not evaluated. This method should only be chosen at the direction of the Approving Authority. There are occasions where testing a requirement may cause harm to the system. For example, testing a requirement to physically destroy a hard disk prior to disposal would cause an irrecoverable loss. In cases such as these, the Approving Authority may accept the process as evidence that the requirement is met.
• Too general. Occasionally, requirements cannot be quantified in a pass-or-fail manner. This is usually due to a requirement that is too general. An example might be a requirement that the information system is operated in a secure manner. This requirement is simply too general to quantify and test.

Test Procedure. Identify the test procedure that is used to test the requirement. Building test scenarios and test scripts is discussed later in this chapter. The combination of these items forms a test procedure. The test procedures are identified in this column on the matrix.

Pass or Fail. The last column is a placeholder for a pass or fail designator. The Test Recorder will complete this column after the test is executed.
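To make the Requirements Matrix columns described above concrete, the following minimal sketch shows one way a matrix row and its evaluation method could be represented in code. The class and field names, and the helper at the end, are illustrative assumptions of this sketch rather than part of the chapter's framework.

```python
# A minimal, hypothetical representation of one Requirements Matrix row.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class EvaluationMethod(Enum):
    """The evaluation methods described above."""
    TEST = "Test"
    DEMONSTRATION = "Demonstration"
    INSPECTION = "Inspection"
    NOT_EVALUATED = "Not Evaluated"
    TOO_GENERAL = "Too General"


@dataclass
class RequirementRow:
    """One decomposed requirement, traceable to its source document."""
    req_no: int
    category: str                        # e.g., "I&A" or "Architecture"
    source: str                          # e.g., "XYZ Security Policy" or "Derived"
    requirement: str                     # the specific, testable statement
    evaluation_method: EvaluationMethod
    test_procedure: str                  # scenario/script identifier, e.g., "IA002S"
    passed: Optional[bool] = None        # completed by the Test Recorder


def untraceable(rows: list) -> list:
    """Return rows with no recorded source; every requirement must be traceable."""
    return [r for r in rows if not r.source.strip()]
```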


APPLICATION PROGRAM SECURITY BUILDING A CERTIFICATION TEST PLAN The Test Team has been established and appointed, and requirements have been identified and broken down into individual testable requirements. The Certification Test Plan can now be written. The Certification Test Plan will address test objectives and schedules; and it will provide a method for executing the individual tests and for recording, compiling, and reporting results. To maintain integrity of the system functional requirements, tests can be structured around real-life functional and operational scenarios. By so doing, the Certifying Authority and the Approving Authority can obtain a higher level of assurance that the system will not only be a more secure system but also will meet its operational mission requirements. Remember: Certification Testing is designed to show that the system meets the minimum requirements — not to show that security features and mechanisms are all installed, enabled, and configured to their most secure settings. This may seem somewhat contrary to good security practice; however, it is not. Most of the security engineering and architecture work would have been accomplished in the initial design and implementation phases for the system. Of course, it is incumbent upon information security professionals to identify those practices that introduce vulnerabilities into the system. Information security professionals must identify those weaknesses before entering into a Certification Test. Under these conditions, the test would proceed only after the managers and owners of the system agree to accept the risk associated with the vulnerabilities. The goal is to avoid any surprises introduced in the final report on the Certification Test. Introduction and Background The Certification Test Plan should begin with some introductory and background information. This information would identify the system under test, its mission and purpose. The Plan should identify the reasons for conducting the test, whether for initial accreditation and approval of the system or as part of an ongoing information security management program. This provides historical information to those who may wish to review the results in the future, and it also provides a framework for persons who may be involved in Independent Verification and Validation (IV&V) efforts and who may not be familiar with the system tested. Adequate detail should be provided to satisfy these two goals. The Certification Test Plan should define its purpose. Providing a defined purpose will help to limit the scope of the Test Plan in order to avoid either testing too little, thereby rendering the test evidence inadequate to support conclusions in the test report, or testing too much, thereby rendering the test unmanageable and the results suspect. 532


A Framework for Certification Testing The scope of the test should be identified. That is, the configuration boundaries should be defined and the limit of requirements and standards should be identified. These factors would have been identified prior to reaching this point in the process. It is important to document them in the Certification Test Plan because the supporting documentation upon which this plan is built may change in the future, causing a loss of the current frame of reference. For example, if a UNIX-based system is tested today, and it is retrofitted with a Windows-based system next year, the results of the test are not valid for the new configuration. If the test plan fails to identify its own scope, there is no basis for determining that the test results are still valid. Assumptions and Constraints Assumptions and Constraints must be identified. These items will cover topics like the availability of a test suite of equipment, disruption of mission operations, access to documentation such as policies and procedures, working hours for the test team, scheduling information, access to the system, configuration changes, etc. Test Objectives High-level Test Objectives are identified early in the certification test plan. These objectives should identify the major requirements tested. Test Scenarios will break down these overall objectives into specific requirements, so there is no need to be detailed in this section of the plan. Highlevel objectives can include items such as access control, authentication, audit, system architecture, system integrity, facility security management, standards, functional requirements, or incident response. Remember that a Requirements Matrix has already been built and that the Test Scenarios, discussed later in this chapter, will provide the detailed requirements and detailed test objectives. Here the reader of the Certification Test Plan is given a general idea of those objectives to which the system will be tested. System Description This section of the Certification Test Plan should identify and describe the hardware, software, and network architecture of the system under test. Configuration drawings and tables should be used wherever possible to describe the system. Include information such as make and model number, software release and version numbers, cable types and ratings, and any other information that may be relevant to conducting of the test. Test Scenario The next step in developing the Certification Test Plan is to generate Test Scenarios. The scenario can simulate real operational conditions. By 533


Exhibit 31-7. Sample test scenario.

Title: Identification and Authentication Procedure
Number: IA002
Purpose: In this test procedure, a user will demonstrate the procedures for gaining access to the XYZ Information System. Evaluators and observers will verify that the procedure is followed as documented. This scenario is a prerequisite to other test scenarios that require access to the system and will, therefore, be tested many times during the course of the certification test.
Team Members Required: Evaluators, Observers, User Representative, IV&V
Required Support: User Representative
Evaluation Method: Observation, Demonstration
Entrance Criteria: (Identify tests that must be successfully completed before this test can begin)
Exit Criteria: (Identify how the tester will know that this test is completed)
Test Scripts Included: IA001S

Procedure:
1. Power on the workstation, if not already powered on.
2. Demonstrate the proper method of identifying the user to the system.
3. Demonstrate the proper method of authenticating the identified user to the system.
4. Observers will verify that all steps in the test script are executed.
5. Evaluators will complete the attached checklist.
6. Completed scripts, checklists, and observer notes will be collected and transmitted to the test recorder.

so doing, functional considerations are included within the Certification Test Plan. The members of the Test Team should be familiar with the operational and functional needs of the system in order to show that the security of the system does not adversely impact the functional and operational considerations. This is the reason system administration and system user representatives are members of the Test Team. The Test Scenario should identify the Test Objective and expected results of the scenario. Using the example presented earlier in this chapter, an example Test Scenario is shown as Exhibit 31-7. In this scenario, user identification and authentication procedures are tested by having a user follow the published procedure to accomplish that task. Test Script The Test Scenario identifies Test Scripts that are attached to the Scenario. The persons actually executing the test procedures use Test Scripts. Persons most familiar with the operation of the system should prepare 534


Exhibit 31-8. Sample test script.

Title: Identification and Authentication Procedure
Test Script Number: IA002S
Equipment: Standard Workstation

Step | Script | Pass/Fail
1. Power on workstation | 1.1. Determine if workstation is powered on. 1.2. If yes, go to step 2. 1.3. Power on workstation and wait for log-in prompt. |
2. Identify user to system | 2.1. The user will place the right thumb on the thumbprint reader. 2.2. Wait for system to identify the user. |
3. Authenticate identity | 3.1. The user will enter the personal identification number (PIN) using the keyboard. 3.2. Wait for authentication information to be verified by the system. |

Test Scripts. These people know how the system functions on a day-to-day basis. Depending on the stage of development of the system, those persons may be developers, system administrators, or system users. The Test Script will provide step-by-step instructions for completing the operations prescribed in the Test Scenario. Each step in the Script should clearly describe the expected results of the step. This level of detail is required to assure reproducibility. Test Results are worthless if they cannot be reproduced at a later date. Exhibit 31-8 is an example of a Test Script.

Test Results

The results of each individual test are recorded as the test is executed. This is the reason for adding the third column on the Test Script. This column is provided for the observer and evaluator to indicate that the step was successfully completed. Additionally, space should be provided or a separate page attached for observers and evaluators to record any thoughts or comments they feel may have an impact on the Certification Test Report. It is not necessary that all the observers agree on the results, but it is necessary that the team be thorough enough to document what happened, when it happened, and whether it happened as expected. This information will be consolidated and presented in the Certification Test Report, which becomes the basis for recommending certification of the system.
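Because reproducibility depends on recording each step, its expected result, and the outcome, the Test Script and its results can also be captured in a machine-readable form. The sketch below is a minimal illustration under that assumption; the field names and helper methods are hypothetical, not part of the chapter's framework.

```python
# A hypothetical machine-readable form of a Test Script such as IA002S, so that
# the Test Recorder can consolidate results and a later run can be reproduced.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ScriptStep:
    number: str                     # e.g., "1.1"
    action: str                     # instruction for the person executing the step
    expected: str                   # expected result, needed for reproducibility
    passed: Optional[bool] = None   # recorded by the observer/evaluator
    notes: str = ""                 # observer or evaluator comments


@dataclass
class TestScript:
    script_id: str
    title: str
    equipment: str
    steps: list = field(default_factory=list)

    def record(self, step_number: str, passed: bool, notes: str = "") -> None:
        """Record the outcome of one step as the test is executed."""
        for step in self.steps:
            if step.number == step_number:
                step.passed = passed
                step.notes = notes
                return
        raise KeyError(f"No step {step_number} in script {self.script_id}")

    def unresolved(self) -> list:
        """Steps with no recorded result; the script is incomplete until this is empty."""
        return [s.number for s in self.steps if s.passed is None]
```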



APPLICATION PROGRAM SECURITY DOCUMENTING RESULTS The next step in the process of system security certification is to document the results of the Certification Test. Remember that this document will become part of the accreditation package and must be presented fairly and completely. Security professionals should not try to skew the results of the test in favor of any party involved in the certification or accreditation process. Results must be presented in an unbiased fashion. This is necessary in order to preserve the security of the system and also the integrity of the profession. Report The Certification Test Report must be able to stand on its own. Sufficient information should be presented that the reader of the report does not need to refer to other documents to understand the report. As such, the report will document the purpose and scope of the test. It will identify mode of operation chosen for the system, the configuration and the perimeter of the system under test, and who was involved and the roles each person played. It will summarize the findings. Finally, the Certification Test Report will state whether the system under test meets the security requirements. Any other appropriate items should be included, such as items identified as meeting requirements but not meeting the security goals and objectives. For example, a system could have a user identification code of userid, and a password of password. While this may meet the requirement of having a username and password assigned to the user, it fails to meet security objectives because the combination is inadequate to provide a necessary level of protection to the system. The Certification Test Report should identify this as a weakness and recommend that a policy for username and password strength and complexity be adopted. Completed Requirements Matrix Among the various attachments to the Certification Test Report is the completed Requirements Matrix. The Test Recorder would transfer the results of the Test Scenarios to the Requirements Matrix. Presenting this information in this manner allows someone reviewing the report to easily scan the table for requirements that have not been met. These unsatisfied requirements will be of great interest to the Approving Authority because the legal and civil liabilities of accepting the risk associated with unsatisfied requirements will belong to that person. Exhibit 31-9 is an example of a completed Requirements Matrix. RECOMMENDATIONS Finally, the Certification Test Report will provide sufficient justification for the recommendations it makes. The report could make recommendations to the Certifying Authority, if prepared by the Test Director or person 536

Exhibit 31-9. Completed requirements matrix.

Req. No. | Category | Source Reference | Stated Requirement | Evaluation Method | Test Procedure | Pass | Fail
1 | I&A | XYZ Security Policy | Users of the XYZ Information System will be required to identify themselves prior to being granted access to the information system. | Test | IA002S | X |
2 | I&A | XYZ Security Policy | Users of the XYZ Information System will be required to authenticate their identity prior to being granted access to the information system. | Test | IA002S | X |
3 | I&A | XYZ Security Plan | A thumbprint reader will be used to identify users of the XYZ Information System. | Demonstrate | IA003S | X |
4 | Architecture | Derived | Thumbprint readers are installed on the target configuration. | Observation | AR001A | X |
5 | I&A | XYZ Security Plan | Users who are positively identified by a thumbprint will then be required to enter a personal identification number (PIN) to authenticate their identification. | Demonstrate | IA003S | X |
6 | Architecture | Derived | Keyboards are installed on the target configuration. | Observation | AR001A | X |

APPLICATION PROGRAM SECURITY of similar capacity. The report could make recommendations to the Accrediting Authority, if prepared by the Certifying Authority. Regardless of the audience or the author of the report, it will contain recommendations that include those identified in the following paragraphs. Certify or Not Certify The recommendation either to certify or not certify is the professional opinion of the person or persons preparing the report. Just as a recommendation to certify must be justified by the material presented in the report, so should a recommendation not to certify. Documentation and justification are the keys to successfully completing a Certification Test. If it is discovered at this point in the process that there is insufficient information to justify the conclusion, it would be necessary to regress and acquire the necessary information. Security professionals must be prepared to justify the conclusion and provide the documentation to support it. Meets Requirements but Not Secure On rare occasions, it is necessary to identify areas of weakness that meet the requirements for the system but fail to satisfy system security objectives. Usually these are identified early in the certification process, when policies are reviewed and requirements are decomposed. If, however, one or more of these items should make it through the certification process, it would be incumbent upon security professionals to identify them in the Certification Test Report. Areas to Improve No system security approach is perfect. Total security is unachievable. With this in mind, the security professional should identify areas that could be improved. Certainly, if the recommendation were not to certify, this section of the Certification Test Report would include those items that need to be fixed before certification could be recommended. Likewise, if items are identified that do not meet the security objectives, a recommendation should be made regarding repairing the policies that allowed this situation to occur, along with a recommendation for improving the security of the system by fixing the technology, process, or procedure that is errant. Also, if the recommendation is to certify the system, all security approaches could use some improvements. Those items and recommendations should be identified in the report. Recertification Recommendations Conditions under which the certification becomes invalid should be identified in the Certification Test Report. Often these conditions are dictated by policy and are usually linked to the passage of time or to the 538


A Framework for Certification Testing reconfiguration of the system. Regardless of whether these conditions are identified in the policies for the system, the Certification Test Report should identify them. A major reason for including this in the report is so that future uses of its contents will be within the context it is intended. For example, it would be inappropriate to use the results of the Certification Test from five years ago, when the hardware, software, and operating systems were different, to justify certification of the system as it exists today. DISSENTING OPINIONS Certification is not an exact science. Occasionally, there is a difference of opinion regarding the conclusions drawn against the evidence presented. The Certification Test Report must report those dissenting opinions because it is necessary that the Accrediting Authority have as much information as is available before formulating an informed opinion. Every effort should be made to resolve the difference of opinion; however, if a resolution cannot be found, it is the obligation of the security professional to report that difference of opinion. Independent Verification and Validation (IV&V) will submit the report directly to the Accrediting Authority without consulting the Certifying Authority or the Certification Test Team. This independent opinion gives the Accrediting Authority another perspective on the results of the Certification Test results. There should be little, if any, difference between the findings in the Certification Test Report and those of the IV&V if the test was properly structured and executed. FINAL THOUGHTS Final thoughts are similar to initial thoughts. Computer systems large and small, or anywhere in between, are approved for use and are certified either by conscious and deliberate effort or blindly by default. It would be better to make an informed decision rather than rely on luck or probabilities. Granted, there is a possibility that the system will never be subject to attacks, whether physical or electronic. Taking that chance leaves one exposed to the associated legal, civil, or criminal liabilities. Security professionals should insist on some type of certification, formal or informal, before putting any computer system into production and exposing it to the communication world. ABOUT THE AUTHOR Kevin J. Davidson, CISSP, is a senior staff systems engineer with Lockheed Martin Mission Systems in Gaithersburg, Maryland. He earned a B.S. in computer science from Thornewood University in Amsterdam, the Netherlands. He has developed and performed certification tests for the U.S. Department of Defense and the U.S. Department of Justice. 539



Chapter 32

Malicious Code: The Threat, Detection, and Protection Ralph Hoefelmeyer, CISSP Theresa E. Phillips, CISSP

Malicious code is logically very similar to known biological attack mechanisms. This analogy is critical; like the evolution of biological mechanisms, malicious code attack mechanisms depend on the accretion of information over time. The speed of information flow in the Internet is phenomenally faster than biological methods, so the security threat changes on a daily if not hourly basis. One glaring issue in the security world is the unwillingness of security professionals to discuss malicious code in open forums. This leads to the hacker/cracker, law enforcement, and the anti-virus vendor communities having knowledge of attack vectors, targets, and methods of prevention; but it leaves the security professional ignorant of the threat. Trusting vendors or law enforcement to provide information on the threats is problematic and is certainly not due diligence. Having observed this, one must stress that, while there is an ethical obligation to publicize the potential threat, especially to the vendor, and observe an embargo to allow for fixes to be made, exploit code should never be promulgated in open forums. Macro and script attacks are occurring at the rate of 500 to 600 a month. In 2001, Code Red and Nimda caused billions of dollars of damage globally in remediation costs. The anti-virus firm McAfee.com claims that the effectiveness of the new wave of malicious codes was due to a one-two punch of traditional virus attributes combined with hacking techniques. Industry has dubbed this new wave of attacks the hybrid threat. 0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



Exhibit 32-1. Viruses, 1986–2001.

Virus | First Observed | Type
Brain | 1986 | .com infector
Lehigh | 1987 | Command.com infector
Dark Avenger | 1989 | .exe infector
Michelangelo | 1991 | Boot sector
Tequila | 1991 | Polymorphic, multipartite file infector
Virus Creation Laboratory | 1992 | A virus builder kit; allowed non-programmers to create viruses from standard templates
Smeg.Pathogen | 1994 | Hard drive deletion
Wm.Concept | 1995 | Macro virus
Chernobyl | 1998 | Flash BIOS rewrite
Explore.zip | 1999 | File erasure
Magistr | 2001 | E-mail worm; randomly selects files to attach and mail

The goals in this chapter are to educate the information security practitioner on the current threat environment, future threats, and preventive measures. CURRENT THREATS Viruses The classic definition of a virus is a program that can infect other programs with a copy of the virus. These are binary analogues of biological viruses. When these viruses insert themselves into a program — the program being analogous to a biological cell — they subvert the control mechanisms of the program to create copies of themselves. Viruses are not distinct programs — they cannot run on their own and need to have some host program, of which they are a part, executed to activate them. Fred Cohen clarified the meaning of virus in 1987 when he defined a virus as “a program that can ‘infect’ other programs by modifying them to include a possibly evolved copy of itself.” Cohen earned a Ph.D. proving that it was impossible to create an accurate virus-checking program. One item to note on viruses is the difference between damage as opposed to infection. A system may be infected with a virus, but this infection may not necessarily cause damage. Infected e-mail that has viral attachments that have not been run are referred to as latent viruses. Exhibit 32-1 describes some examples of viruses released over the years. (Note: This is not an exhaustive list — there are arguably 60,000 known viruses.) 542


Malicious Code: The Threat, Detection, and Protection Worms Worms are independent, self-replicating programs that spread from machine to machine across network connections, leveraging some network medium — e-mail, network shares, etc. Worms may have portions of themselves running on many different machines. Worms do not change other programs, although they may carry other code that does (e.g., a virus). Worms illustrate attacks against availability, where other weapons may attack integrity of data or compromise confidentiality. They can deny legitimate users access to systems by overwhelming those systems. With the advent of the blended threat worm, worm developers are building distributed attack and remote-control tools into the worms. Worms are currently the greatest threat to the Internet. Morris Worm. Created by Robert T. Morris, Jr. in 1988, the Morris worm was the first active Internet worm that required no human intervention to spread. It attacked multiple types of machines, exploited several vulnerabilities (including a buffer overflow in fingered and debugging routines in sendmail), and used multiple streams of execution to improve its speed of propagation. The worm was intended to be a proof of concept; however, due to a bug in the code, it kept reinfecting already infected machines, eventually overloading them. The heavy load crashed the infected systems, resulting in the worm’s detection. It managed to infect some 6200 computers — 10 percent of the Internet at that time — in a matter of hours. As a result of creating and unleashing this disruptive worm, Morris became the first person convicted under the Computer Fraud and Abuse Act. Code Red Worm. The Code Red worm infected more than 360,000 computers across the globe on July 19, 2001. This action took less than 14 hours. The intention of the author of Code Red was to flood the White House with a DDoS attack. The attack failed, but it still managed to cause significant outages for other parties with infected systems. This worm used the ida and idq IIS vulnerabilities. The patch to correct this known vulnerability had been out for weeks prior to the release of the worm. Nimda. Nimda also exploited multimode operations: it was an e-mail worm, it attacked old bugs in Explorer and Outlook, and it spread through Windows shares and an old buffer overflow in IIS. It also imitated Code Red 2 by scanning logically adjacent IP addresses. The net result was a highly virulent, highly effective worm that revealed that exploiting several old bugs can be effective, even if each hole is patched on most machines: all patches must be installed and vulnerabilities closed to stop a Nimda-like worm. Such a worm is also somewhat easier to write because one can use many well-known exploits to get wide distribution instead of discovering new attacks. 543


Exhibit 32-2. Trojan horses and payloads.

Trojan Horse | "Legitimate" Program | Trojan
PrettyPark | Screen Saver | Auto e-mailer; tries to connect to specific IRC channel to receive commands from attacker
Back Orifice | Program | Allows intruders to gain full access to the system
Goner | Screen Saver | Deletes AV files; installs DDoS client
W32.DIDer | Lottery game "ClickTilUWin" | Transmits personal data to a Web address

Trojan Horses A Trojan horse, like the eponymous statue, is a program that masquerades as a legitimate application while containing another program or block of undesired, malicious, destructive code, deliberately disguised and intentionally hidden in the block of desirable code. The Trojan Horse program is not a virus but a vehicle within which viruses may be concealed. Exhibit 32-2 lists some Trojan horses, their distribution means, and payloads. Operating System-Specific Viruses DOS. DOS viruses are checked for by current anti-virus software. They are a threat to older machines and systems that are still DOS capable. DOS viruses typically affect either the command.com file, other executable files, or the boot sector. These viruses spread by floppy disks as well as e-mail. They are a negligible threat in today’s environment. Windows. Macro viruses take advantage of macros — commands that are embedded in files and run automatically. Word-processing and spreadsheet programs use small executables called macros; a macro virus is a macro program that can copy itself and spread from one file to another. If you open a file that contains a macro virus, the virus copies itself into the application’s start-up files. The computer is now infected. When you next open a file using the same application, the virus infects that file. If your computer is on a network, the infection can spread rapidly; when you send an infected file to someone else, they can also become infected.

Visual Basic Script (VBS) is often referred to as Virus Builder Script. It was a primary method of infection via e-mail attachments. Now, many network or system administrators block these attachments at the firewall or mail server. UNIX/Linux/BSD. UNIX, Linux, and BSD were not frequently targeted by malicious code writers. This changed in 2001, with new Linux worms targeting


systems by exploiting flaws in daemons that automatically perform network operations. Examples are the Linux/Lion, which exploits an error in the bind program code and allows for a buffer overflow. Another example of a UNIX worm is SadMind. This worm uses a buffer overflow in Sun Solaris to infect the target system. It searches the local network for other Solaris servers, and it also searches for Microsoft IIS servers to infect and deface. Many of the UNIX variant exploits also attempt to download more malicious code from an FTP server to further corrupt the target system. The goal of UNIX attacks involves placing a root kit on the target system; these are typically social engineering attacks, where a user is induced to run a Trojan, which subverts system programs such as login. Macintosh. Main attack avenues are bootable Macintosh disks, Hyper-

Card stacks, and scripts. An example is the Scores virus, first detected in early 1988. This virus targeted EDS and contained code to search for the code words ERIC and VULT. It was later ascertained that these were references to internal EDS projects. This is notable in that this is the first example of a virus targeting a particular company. Scores infected applications and then scanned for the code words on the target system. Resources that were so identified were terminated or crashed when they were run. As cross-platform attacks become more common, Macintosh platforms will become increasingly vulnerable. Cross-Platform. An example of cross-platform malicious code is the Lindose/Winux virus. This virus can infect both Linux Elf and Windows PE executables. Many installations of Linux are installed on dual-boot systems, where the system has a Linux partition and a Windows partition, making this a particularly effective attack mechanism.

Other attacks target applications that span multiple platforms, such as browsers. A good source of information on cross-platform vulnerabilities is http://www.sans.org/newlook/digests/SAC/cross.htm.

Polymorphic Viruses

Virus creators keep up with the state-of-the-art in antiviral technology and improve their malicious technology to avoid detection. Because the order in which instructions are executed can sometimes be changed without changing the ultimate result, the order of instructions in a virus may be changed to bypass the anti-virus signature. Another method is to randomly insert null operation (no-op) instructions, mutating the sequence of instructions that the anti-virus software recognizes as malicious. Such changes result in viruses that are polymorphic — they constantly change the structural characteristics that would have enabled their detection.
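To see why these changes defeat detection, consider the simplest form of signature scanning, an exact byte-pattern search. The sketch below is a toy illustration; the signature shown is an arbitrary placeholder rather than a real virus pattern, and production scanners use far more sophisticated techniques.

```python
# Toy illustration of exact-match signature scanning. A polymorphic virus that
# reorders instructions or pads itself with no-ops changes its bytes, so a
# fixed pattern like this no longer matches. The signature is a placeholder.
from pathlib import Path

SIGNATURES = {
    "EXAMPLE.TOY": bytes.fromhex("deadbeef0105"),  # hypothetical pattern
}


def scan_file(path: Path) -> list:
    """Return the names of any known signatures found in the file's raw bytes."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]


if __name__ == "__main__":
    for p in Path(".").iterdir():
        if p.is_file():
            hits = scan_file(p)
            if hits:
                print(f"{p}: matched {', '.join(hits)}")
```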


APPLICATION PROGRAM SECURITY Script Attacks Java and JavaScript. Java-based attacks exploit flaws in the implementation of Java classes in an application. A known early attack was the BrownOrifice applet. This applet exploited flaws in Netscape’s Java class libraries.

JavaScript has been used in the Coolnow-A worm to exploit vulnerabilities in Microsoft Internet Explorer. ActiveX. ActiveX controls have more capabilities than tools that run strictly in a sandbox. Because ActiveX controls are native code that run directly on a physical machine, they are capable of accessing services and resources that are not available to code that runs in a restricted environment. There are a few examples of ActiveX attack code as of this writing. There is example code called Exploder, which crashed Windows 95 systems. There is also a virus, the HTML.bother.3180, that uses ActiveX controls to perform malicious activity on the target system.

FUTURE THREATS: WHO WILL WRITE THEM? The Script Kiddie Threat There are automated hacking tools on the Internet, readily available at many hacker sites. These tools are of the point-and-click genre, requiring little to no programming knowledge. The security practitioner must visit these hacker sites to understand the current threat environment. Fair warning: these sites often have attack scripts, and many hackers use pornography to prevent or limit official perusal of their sites by legitimate authorities. The script kiddies are a serious threat due to their numbers. The recent goner worm was the work of three teenagers in Israel; other malicious code has been created by untrained people in Brazil, Finland, and China. Criminal Enterprises The amount of commerce moving to the Internet is phenomenal, in the multibillion-dollar range. Wherever there are large transactions, or high transaction volumes, the criminal element will attempt to gain financial advantage. Malicious code introduced by criminals may attempt to gain corporate financial information, intellectual property, passwords, access to critical systems, and personnel information. Their goals may be industrial espionage, simple theft by causing goods and services to be misdelivered, fraud, or identity theft. Ideologues Small groups of ideologues may use the Internet and malicious code to punish, hinder, or destroy the operations of groups or governments they find objectionable. Examples are the anti-WTO groups, which have 546


Malicious Code: The Threat, Detection, and Protection engaged in hacking WTO systems in Europe, and various anti-abortion groups in the United States. Also, individual citizens may take action, as recently seen in the Chinese fighter striking the American surveillance plane; many Chinese citizens, with tacit government approval, have launched attacks on American sites. Terrorist Groups Terrorist groups differ from ideologues in that they are generally better funded, better trained, and want to destroy some target. Since September 11, the seriousness of the terrorist threat cannot be stressed enough. The goals of a terrorist group may be to use malicious code to place root kits on systems responsible for dam control, electrical utilities’ load balancing, or nuclear power plants. A speedy propagating worm, such as the Warhol, would be devastating if not quickly contained. Additionally, terrorist groups may use malicious code to manipulate financial markets in their favor; attacked companies may lose stock value over a short time, allowing for puts and calls to be made with foreknowledge of events. Terrorists generally fall into two categories: (1) well-educated and dedicated, and (2) highly motivated Third- or Fourth-World peasants. An example of the first would be the Bader Meinhoff group; for the second, the Tamil Tigers of Sri Lanka. Government Agencies The Internet has allowed many government and corporate entities to place their functions and information to be readily accessible from the network. The flip side of this is that, logically, one can “touch” a site from anywhere in the world. This also means that one can launch attacks using malicious code from anywhere on the planet. Intelligence agencies and military forces have already recognized that the Internet is another battlefield. The U.S. National Security Agency, FBI, and U.K. MI5 and MI6 all evince strong interest in Internet security issues. The U.S. Air Force has in place a cyber-warfare center at Peterson Air Force Base, Colorado Springs, Colorado. Its Web site is http://www.spacecom. af.mil/usspacecom/jtf-cno.htm. Note that their stated mission is: Subject to the authority and direction of USCINCSPACE, JTF-CNO will, in conjunction with the unified commands, services and DoD agencies, coordinate and direct the defense of DoD computer systems and networks; coordinate and, when directed, conduct computer network attack in support of CINCs and national objectives.

The intelligence and military attackers will be well-educated professionals with the financial and technical backing of nation-states. Their attacks will not fail because of bad coding. 547


APPLICATION PROGRAM SECURITY Warhol Nimda was the start of multiple avenues and methods of attack. After Code Red, researchers began to investigate more efficient propagation or infection methods. One hypothetical method is described in a paper by Nicholas Weaver of the University of California, Berkeley; the paper can be obtained at http://www.cs.berkeley.edu/~nweaver/warhol.html. Weaver named this attack methodology the Warhol Worm. There are several factors affecting malicious code propagation: the efficiency of target selection, the speed of infection, and the availability of targets. The Warhol method first builds a list of potentially vulnerable systems with high-speed Internet connections. It then infects these target systems because they are in the best position to propagate the malicious code to other systems. The newly infected system then receives a portion of the target list from the infecting system. Computer simulations by Weaver indicate that propagation rates across the Internet could reach one million computers in eight minutes. His initial assumptions were to start with a 10,000-member list of potentially vulnerable systems; the infecting system could perform 100 scans per second; and infecting a target system required one second. Cross-Platform Attacks: Common Cross-Platform Applications A very real danger is the monoculture of applications and operating systems (OS) across the Internet. Identified flaws in MS Windows are the targets of malicious code writers. Applications that span platforms, such as MS Word, are subject to macro attacks that will execute regardless of the underlying platform; such scripts may contain logic to allow for cross-platform virulence. Intelligent Scripts These scripts detect the hardware and software on the target platform, and they have different attack methods scripted specifically for a given platform/OS combination. Such scripts can be coded in Java, Perl, and HTML. We have not seen an XML malicious code attack method to date; it is really only a matter of time. Self-Evolving Malicious Code Self-evolving malicious code will use artificial neural networks (ANNs) and genetic algorithms (GA) in malicious code reconstruction. These platforms will change their core structures and attack methods in response to the environment and the defenses encountered. We see some of this in Nimda, where multiple attack venues are used. Now add an intelligence capability to the malicious code, where the code actively seeks information on new vulnerabilities; an example would be scanning the Microsoft patch site for patches, creation of exploits that take advantage of these 548


Malicious Code: The Threat, Detection, and Protection patch fixes, and release of the exploit. These will have far larger payloads than current attacks and may require a home server site for evolution. As networks evolve, these exploits may live in the network. The development of distributed computing has led to the idea of parasitic computing. This model would allow the intelligent code to use the resources of several systems to analyze the threat environment using the distributed computing model. The parasitic model also allows exploits to steal cycles from the system owner for whatever purpose the exploit builder desires to use them for, such as breaking encryption keys. Router Viruses or Worms Attack of routers and switches is of great concern; successful cross-platform attacks on these devices could propagate across the Internet in a manner akin to the aforementioned Warhol worm. Analysis of Formal Protocol Description. This attack method requires a formal analysis of the protocol standard and the various algorithms used to implement the protocol. We have seen an example of this with the SNMP v1 vulnerability, released publicly in February 2002. The flaw is not in the protocol but in the implementation of the protocol in various applications.

Further research of protocols such as the Border Gateway Protocol (BGP), Enhanced Interior Gateway Routing Protocol (EIGRP), testing the implementation versus the specification, may lead to other vulnerabilities. Test against Target Implementations. The malicious code builders simply gain access to the target routing platform and the most prevalent version of the routing software and proceed to test various attack methods until they succeed. Also, with privileged access to a system, attackers may reverse-engineer the implementation of the target protocols underlying software instance. An analysis of the resulting code may show flaws in the logic or data paths of the code.

The primary target of router attacks will be the BGP. This protocol translates routing tables from different vendors’ routing platforms for interoperability. It is ubiquitous across the Internet. By targeting ISPs’ routers, the attackers can potentially take down significant portions of the Internet, effectively dropping traffic into a black hole. Other methods use packetflooding attacks to effect denial-of-service to the network serviced by the router. Router or switch operating system vulnerabilities are also targeted, especially because these network devices tend not to be monitored as closely as firewalls, Web servers, or critical application servers. Wireless Viruses Phage is the first virus to be discovered that infects hand-held devices running the PalmOS. There were no confirmed reports of users being 549


APPLICATION PROGRAM SECURITY affected by the virus, and it is considered a very low threat. It overwrites all installed applications on a PalmOS handheld device. Wireless phones are another high-risk platform. An example is the Short Messaging Service (SMS) exploit, where one sends malformed data headers to the target GSM phone from an SMS client on a PC, which can crash the phone. In June of 2001, the Japanese I-mode phones were the targets of an e-mail that caused all I-mode phones to dial 110, the Japanese equivalent of 911. Flaws in the software allowed embedded code in the e-mail to be executed. The growing wireless market is sure to be a target for malicious code writers. Additionally, the software in these mobile devices is not implemented with security foremost in the minds of the developers, and the actual infrastructures are less than robust. Active Content Active content, such as self-extracting files that then execute, will be a great danger in the future. The security and Internet communities have come to regard some files as safe, unlike executable files. Many organizations used Adobe PDF files instead of Microsoft Word, because Adobe was perceived as safe. We now see exploits in PDF files. Additionally, there is now a virus, SWF/LFM-926, which infects Macromedia Flash files. PROTECTION Defense-in-Depth A comprehensive strategy to combat malicious code encompasses protection from, and response to, the variety of attacks, avenues of attack, and attackers enumerated above. Many companies cocoon themselves in secure shells, mistakenly believing that a perimeter firewall and anti-virus software provide adequate protection against malicious code. Only when their systems are brought to a halt by a blended threat such as the Code Red worm do they recognize that, once malicious code penetrates the first line of defense, there is nothing to stop its spread throughout the internal network and back out to the Internet. Malicious code has multiple ways to enter the corporate network: e-mail, Web traffic, instant messenger services, Internet chat (IRC), FTP, handheld devices, cell phones, file sharing programs such as Napster, peer-to-peer programs such as NetMeeting, and unprotected file shares through any method by which files can be transferred. Therefore, a sound protection strategy against malicious code infiltration requires multiple overlapping approaches that address the people, policies, technologies, and operational processes of information systems. 550


Exhibit 32-3. Safe computing practices for the Windows user community.

1. Install anti-virus software. Make sure the software is set to run automatically when the system is started, and do not disable real-time protection.
2. Keep anti-virus software up-to-date. Configure systems to automatically download updated signature files from the company-approved server or vendor site on a regular basis.
3. Install the latest operating system and application security patches.
4. Do not share folders or volumes with other users. If drive sharing is necessary, do not share the full drive and do password-protect the share with a strong password.
5. Make file extensions visible. Windows runs with the default option to "hide file extensions for known file types." Multiple e-mail viruses have exploited hidden file extensions; the VBS/LoveLetter worm contained an e-mail attachment, a malicious VBS script, named "LOVE-LETTER-FOR-YOU.TXT.vbs"; the .vbs extension was hidden. (A minimal check for this setting is sketched after this list.)
6. Do not forward or distribute non-job-related material (jokes, animations, screen savers, greeting cards).
7. Do not activate unsolicited e-mail attachments and do not follow the Web links quoted in advertisements.
8. Do not accept unsolicited file transfers from strangers in online peer-to-peer computing programs such as Instant Messaging or IRC.
9. Beware of virus hoaxes. Do not forward these messages, and do not follow the instructions contained therein.
10. Protect against infection from macro viruses: If Microsoft Word is used, write-protect the global template. Consider disabling macros in MS Office applications through document security settings. Consider using alternate document formats such as rtf (Rich Text Format) that do not incorporate executable content such as macros.
11. Check ALL attachments with anti-virus software before launching them. Scan floppy disks, CDs, DVDs, Zip disks, and any other removable media before using them.
12. Turn off automatic opening of e-mail attachments or use another mail client. BadTrans spread through Microsoft Internet Explorer-based clients by exploiting a vulnerability in auto-execution of embedded MIME types.
13. Establish a regular backup schedule for important data and programs and adhere to it.
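The sketch below shows how practice 5 (visible file extensions) might be verified on a single Windows desktop. It assumes the setting is stored in the per-user Explorer registry value HideFileExt (1 = hidden, 0 = visible); treat it as an illustrative compliance check rather than prescribed tooling.

```python
# Minimal check of safe computing practice 5 (visible file extensions).
# Assumes the Windows Explorer setting lives in the HideFileExt value of the
# current user's Advanced key; adjust if the environment stores it elsewhere.
import winreg

ADVANCED_KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"


def file_extensions_visible() -> bool:
    """Return True if Explorer is configured to show file extensions."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ADVANCED_KEY) as key:
        hide_ext, _value_type = winreg.QueryValueEx(key, "HideFileExt")
    return hide_ext == 0


if __name__ == "__main__":
    if file_extensions_visible():
        print("OK: file extensions are visible.")
    else:
        print("WARNING: file extensions are hidden; see practice 5 above.")
```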

Policy An organization’s first step in the battle against malicious code is the development and implementation of a security policy addressing the threat to information systems and resources (see Exhibit 32-3). The policy describes proactive measures the organization has taken to prevent infection; safe computing rules and prevention procedures that users must follow; tools and techniques to implement and enforce the rules; how to recognize and report incidents; who will deal with an outbreak; and the consequences of noncompliance. The policy should make employees assume responsibility and accountability for the maintenance of their computers. 551


APPLICATION PROGRAM SECURITY When users understand why procedures and policies are implemented, and what can happen if they are not followed, there tends to be a higher level of compliance. Suggested Policy Areas. Require the use of company-provided, up-todate anti-virus software on all computing devices that access the corporate network, including handheld and wireless devices. Inform users that removing or disabling protection is a policy violation. Address remote and mobile Windows users by specifying that they must have up-to-date protection in order to connect to the network. Consider establishing virus protection policies for guest users, such as vendors and consultants, and for protecting Linux, UNIX, and Macintosh operating systems as well.

Weaknesses in software programs are routinely discovered and exploited; therefore, a sound anti-virus policy must address how and when patching will be done, as well as the means and frequency for conducting backups. The information security practitioner needs to recognize that users with Web-based e-mail accounts can circumvent the carefully constructed layers of protection at the firewall, e-mail gateway, and desktop by browsing to a Web-based e-mail server. Policy against using external e-mail systems is one way to prevent this vector, but it must be backed up with an HTTP content filter and firewall rules to block e-mail traffic from all but approved servers or sources. Finally, include a section in the policy about virus warnings. Example: “Do not forward virus warnings of any kind to anyone other than the incident handling/response team. A virus warning that comes from any other source should be ignored.” Education and Awareness Security policy must be backed up with awareness and education programs that teach users about existing threats, explain how to recognize suspicious activity, and how to protect the organization and their systems from infection. The information security practitioner must provide the user community with safe computing practices to follow, and supply both the tools (e.g., anti-virus software) and techniques (e.g., automatic updates) to protect their systems. Awareness training must include the social engineering aspects of viruses. The AnnaKournikova and NakedWife viruses, for example, took advantage of human curiosity to propagate; and communications-enabled worms spread via screen savers or attachments from known correspondents whose systems had been infected. 552


Malicious Code: The Threat, Detection, and Protection The awareness program should reiterate policy on how to recognize and deal with virus hoaxes. E-mail hoaxes are common and can be as costly in terms of time and money as the real thing. Tell users that if they do forward the “notify everyone you know” warnings to all their colleagues, it can create a strain on mail servers and make them crash — having the same effect as the real thing. Protection from Malicious Active Code Protect against potentially malicious scripts by teaching users how to configure their Internet browsers for security by disabling or limiting automatic activation of Java or ActiveX applets. Teach users how to disable Windows Scripting Host and to disable scripting features in e-mail programs — many email programs use the same code as Web browsers to display HTML; therefore, vulnerabilities that affect ActiveX, Java, and JavaScript are often applicable to e-mail as well as Web pages. System and Application Protection. Consider using alternative applications and operating systems that are less vulnerable to common attacks. The use of the same operating system at the desktop or in servers allows one exploit to compromise an entire enterprise. Similarly, because virus writers often develop and test code on their home computers, corporate use of technologies and applications that are also popular with home users increases the threat to the corporation from malicious code designed to exploit those applications. If trained support staff is available in-house, the organization may decide to run services such as DNS, e-mail, and Web servers on different operating systems or on virtual systems. With this approach, an attack on one operating system will have less chance of affecting the entire network.
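As one concrete example of the active-code controls described above, Windows Scripting Host can be disabled through a registry setting. The sketch below checks for that setting; the key path and the treatment of a missing value as "enabled" reflect common Windows behavior and should be verified against the Windows versions actually deployed.

```python
# Check whether Windows Scripting Host (WSH) has been explicitly disabled.
# WSH honors an "Enabled" value under the Windows Script Host\Settings key;
# setting it to 0 disables script execution. A missing value normally means
# WSH is enabled, so this check treats absence as "not disabled."
import winreg

WSH_SETTINGS = r"Software\Microsoft\Windows Script Host\Settings"


def wsh_disabled(hive=winreg.HKEY_LOCAL_MACHINE) -> bool:
    """Return True if the Enabled value exists and is set to 0."""
    try:
        with winreg.OpenKey(hive, WSH_SETTINGS) as key:
            enabled, _value_type = winreg.QueryValueEx(key, "Enabled")
    except FileNotFoundError:
        return False  # no explicit setting: WSH is still enabled
    return str(enabled) == "0"


if __name__ == "__main__":
    machine = wsh_disabled(winreg.HKEY_LOCAL_MACHINE)
    user = wsh_disabled(winreg.HKEY_CURRENT_USER)
    print(f"WSH disabled machine-wide: {machine}; for current user: {user}")
```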

Regardless of which operating system or application is used, it is critical to keep them up-to-date with the latest security patches. Worms use known vulnerabilities in the OS or application to take over systems. Frequently, vendors have released patches months in advance of the first exploitation of a weakness. Rather than being in the reactive mode of many system administrators who were caught by the Code Red worm, be proactive about testing and applying patches as soon as possible after receiving notification from the vendor. Use scripts or other tools to harden the operating system and disable all unnecessary services. Worms have taken advantage of default installations of Web server and OS software.

Layered Anti-virus Protection. Because malicious code can enter the enterprise through multiple avenues, it is imperative that protective controls be applied at multiple levels throughout the enterprise. In the time prior to macro viruses, there was little benefit to be gained by using anti-virus controls anywhere but the desktop. However, when macro viruses became prevalent, placing controls at the file server helped reduce infection. In today’s environment of communication-enabled worms and viruses, a thorough protection strategy involves integrated anti-virus solutions at the desktop, file and application servers, groupware servers, and Internet e-mail gateway and firewall; and inspection of all traffic flowing between the external gateway and internal network.

Protect the Desktop. Desktop protection remains a crucial component of an effective protection strategy. The information security practitioner must ensure that the organization has an enterprise license for anti-virus software, along with a procedure to automate installation and updates. Anti-virus software should be part of the standard build for desktops, laptops, and workstations, backed up by policy that makes it a violation to disable or uninstall the real-time scanning. It is prudent to give remote users a license for company-approved anti-virus software to enable them to run it on their end systems, regardless of whether the company owns those nodes.

Because current viruses and worms can spread worldwide in 12 hours or less (and new ones may propagate much faster), the ability to quickly update systems during an outbreak can limit the infection. However, the heavy traffic caused by thousands or millions of users trying to simultaneously update their definition files will hamper the ability to obtain an update from the vendor’s site during an outbreak. Instead, the enterprise anti-virus administrator can provide a local site for updating. The anti-virus administrator can download once from the vendor site, allowing the entire network to be updated locally. This approach avoids network congestion and reduces the risk of infection from users who are unable to obtain a timely update from the vendor.

Server Protection. Although infection via macro viruses is no longer widespread, protection for network file and print servers can prevent infection from old or infrequently used files. Regardless of policies or training, there are always some users without up-to-date anti-virus protection — whether from naïveté, deliberately disabling the software, or because of system problems that prevent the anti-virus software from starting. One unprotected system can infect many files on the network server if server-side protection is not installed.

Fortify the Gateway. The speed of infection and the multiple vectors through which malicious code can enter the enterprise provide the impetus to protect the network at the perimeter. Rather than trying to keep current on the list of ports known to be used by malicious programs, configure firewalls to use the default deny all approach of closing all ports and only opening those ports that are known to be needed by the business. Virus writers are aware of this approach, so they attack ports that are usually open, such as HTTP, e-mail, and FTP. Because e-mail is the current method of choice for malicious code propagation, the information security practitioner must implement gateway or network-edge protection. This protection is available as anti-virus software for a particular brand of e-mail server, as gateway SMTP systems dedicated to scanning mail before passing the messages to the corporate e-mail servers, or as anti-virus and malicious code services provided by an e-mail service provider. To protect against infection via Web and FTP, gateway virus protection is available for multiple platforms. The software can scan both incoming and outgoing FTP traffic, and it scans HTTP traffic for hostile Java, JavaScript, or ActiveX applets.

Protect the Routing Infrastructure. As companies learn to patch their systems, block certain attachments, and deploy malicious code-detection software at the gateway, attackers will turn to other vectors. As mentioned earlier, routers are attractive targets because they are part of the network infrastructure rather than end systems, and they are often less protected by security policy and monitoring technology than computer systems, enabling intruders to operate with less chance of discovery.

To protect these devices, practice common-sense security: change the default passwords, set up logging to an external log server, use AAA with a remote server, and require access through SSH or VPNs.

Vulnerability Scans. A proactive security program includes running periodic vulnerability scans on systems; results of the scans can alert the information security practitioner to uninstalled patches or security updates, suddenly opened ports, and other vulnerabilities. System administrators can proactively apply patches and other system changes to close identified vulnerabilities before they are exploited by attackers using the same tools. There are a number of commercial and open-source scanning tools, such as SATAN, SAINT, and Nessus.

Handhelds. As IP-enabled handhelds such as PDAs, palmtops, and smart phones become more popular, they will be targeted by attackers. To keep these computing devices from infecting the network, provide a standard anti-virus software package for mobile devices and instruct users on how to download updates and how to run anti-virus software when synching their handheld with their PC.

Personal Firewalls. Personal firewalls offer another layer of protection, especially for remote users. Properly configured personal firewalls can monitor both incoming and outgoing traffic, detect intrusions, block ports, and provide application (e-mail, Web, chat) controls to stop malicious code. The firewalls function as an agent on the desktop, intercepting and inspecting all data going into or out of the system. To facilitate enterprise management, the personal firewall software must be centrally managed so that the administrator can push policy to users, limit the ability of users to configure the software, and check for the presence of correctly configured and active firewalls when the remote user connects to the network. The firewall logging feature should be turned on to log security-relevant events such as scans, probes, and detected viruses, and to send the logs to a central server.

Research

If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.
— Sun Tzu, 6th-century BC Chinese general, Author of The Art of War

Knowing what direction virus development is taking, and knowing and eliminating potential vulnerabilities before they can be exploited, is one of the most positive steps an organization can take toward defense. Virus creators keep up with the state of the art in anti-viral technology and improve their malicious technology to avoid detection. The information security practitioner must do likewise. Monitor hacker and black-hat sites (following the precautions listed earlier) to keep abreast of the threat environment. Visit anti-virus vendor and research sites: EICAR (the European Institute for Computer Antivirus Research), Virus Bulletin, and the WildList of viruses known to be spreading, at www.wildlist.org. Other sources to monitor are the Honeynet Project and SecurityFocus’ ARIS (Attack Registry and Intelligence Services) predictor service (fee based). These sites monitor exploits and develop statistical models that can predict attacks.

DETECTION AND RESPONSE

Virus and Vulnerability Notification

Monitor sites such as BugTraq and SecurityFocus that publish vulnerability and malicious code information. Subscribe to mailing lists, alert services, and newsgroups to be notified of security patches. Subscribe to alerts from anti-virus vendors and from organizations such as SANS, Carnegie Mellon’s CERT, NIPC (National Infrastructure Protection Center), Mitre’s CVE (Common Vulnerabilities and Exposures), and BugTraq. Monitor the anti-virus vendor sites and alerts for information about hoaxes as well, and proactively notify end users about hoaxes before they start flooding the corporate e-mail server.

Anti-virus (AV) software vendors rely on customers and rival AV companies for information on the latest threats. Typically, if a corporation thinks that an as-yet unidentified virus is loose on its network, it sends a sample to the AV vendor to be analyzed. This sample is then passed on to other AV vendors so that all work in concert to identify the virus and develop signature updates. This cooperative effort ensures that end users receive timely protection, regardless of which AV vendor is used. Virus researchers also spend time visiting underground virus-writing sites where some authors choose to post their latest code. This allows AV companies to develop methods to detect new techniques or potential threats before they are released.

Current Methods for Detecting Malicious Code

The propagation rate of malware attacks is rapidly reaching the point of exceeding human ability for meaningful reaction. The Code Red and Nimda worms were virulent indicators of the speed with which simple active worms can spread. By the time humans detected their presence, through firewall probes or monitoring of IP ranges, the worms had spread almost worldwide.

Signature Scanning. Signature scanning, the most common technique for virus detection, relies on pattern-matching methods. This technique searches for an identifiable sequence or string in suspect files or traffic samples and uses this virus fingerprint, or signature, to detect infection. While this method is acceptable for detecting file and macro viruses or scripts that require activation to spread, it is not very effective against worms or polymorphic viruses. This reactive method also allows a new virus a window of opportunity between its initial appearance and the time it takes for the industry to analyze the threat, determine the virus signature, and rush to deploy updates to detect the signature.
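To make the pattern-matching idea concrete, the following sketch (written in Python, which the chapter itself does not use) scans files for fixed byte signatures. The signature names and patterns are invented for illustration only; a production engine uses a far larger signature database, wildcard and checksum matching, and unpacking of compressed or encoded content, none of which is shown here.

import os

# Toy signature database: name -> byte pattern. These values are made up
# for the example and do not correspond to any real virus.
SIGNATURES = {
    "EXAMPLE.Alpha": bytes.fromhex("deadbeef4f4b"),
    "EXAMPLE.Beta":  b"this-is-a-test-pattern",
}

def scan_file(path, chunk_size=1 << 20):
    """Return the names of any signatures found in the file at `path`."""
    hits = []
    overlap = max(len(sig) for sig in SIGNATURES.values()) - 1
    with open(path, "rb") as fh:
        previous_tail = b""
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            window = previous_tail + chunk
            for name, pattern in SIGNATURES.items():
                if pattern in window and name not in hits:
                    hits.append(name)
            # Keep the tail so a pattern straddling two chunks is not missed.
            previous_tail = window[-overlap:]
    return hits

def scan_tree(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            try:
                found = scan_file(path)
            except OSError:
                continue  # unreadable file; a real scanner would log this
            if found:
                print(f"{path}: {', '.join(found)}")

if __name__ == "__main__":
    scan_tree(".")

Even with such refinements, the reactive weakness described above is unchanged: the scanner can only find what is already in its signature list.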

The response time to worm outbreaks is shrinking to a few hours. Worms can spread faster than virus updates can be created. Even faster infection strategies have been postulated, such as the Warhol and Flash worms, which theoretically could allow a worm to infect all vulnerable machines in minutes. Firewall and anti-virus development must move in the direction of detecting and automatically responding to new attacks.

Client or Desktop AV to Detect and Remove Viral Code. Client AV programs can detect and often disinfect viruses, and they must provide both on-access and static virus checking. Static file scanning checks a file or file volume for viruses; on-access, real-time virus checking scans files before they are fully opened. Suspect files are treated according to configurable rules — they may be repaired, disinfected, quarantined for later treatment, or deleted.

Anti-virus software generally uses virus signatures to recognize virus threats. Most viruses that arrive via e-mail have been released within the previous year or more recently; therefore, virus software containing old signatures is essentially useless. It is vital to ensure that virus software is updated on a regular basis — weekly at a minimum for desktops. To ensure that desktop protection is up-to-date, the information security practitioner should provide an automated update mechanism. The client software can be configured to periodically check for new AV signatures and automatically install them on the desktop. Desktop anti-virus software must be able to scan compressed and encoded formats to detect viruses buried in multiple levels of compression.

Because laptops and notebooks are frequently used without being connected to the network, some mechanism needs to be in place to detect when an unprotected machine attaches to the network and either force the installation or update of anti-virus software, or force the computer to disconnect. Another way to check a laptop system is to run a vulnerability scan each time a remote desktop authenticates to the network in order to ensure it has not already been compromised. Many of the enterprise Code Red infections occurred not through Internet-facing MS Internet Information Services (IIS) servers but through infected notebook computers or systems connecting via VPNs. Once Code Red entered the internal network, it infected unpatched systems running IIS, even though those systems were inaccessible from the Internet.

Recently, anti-virus vendors have recommended that companies update their virus software every day instead of weekly. With the arrival of viruses such as Nimda, some customers pull software updates every hour. Besides detection through technology, user observation is another way to detect worm activity. The “goner” worm disabled personal firewall and anti-virus software; users should recognize this, if through no other means than by the missing icons in their Windows system tray, and notify the incident handling team.

Server Detection. Server administrators must regularly review their system and application logs for evidence of viral or Trojan activity, such as new user accounts and new files (rootkits or root.exe in the scripts directory), and remove these files and accounts. Remove worm files and Trojans using updated virus scanners to detect their presence. Discovery of warez directories on FTP servers is proof that systems have been compromised. Performance of real-time anti-virus scanners may impact servers; not all files need to be scanned, but at a minimum critical files should be scanned. Server performance monitoring will also provide evidence of infection, either through reduced performance or denial of service.

File Integrity Checkers. File integrity tools are useful for determining whether any files have been modified on a system. These tools help protect systems against computer viruses and do not require updated signature files. When an integrity checker is installed, it creates a database of checksums for a set of files. The integrity checker can determine if files have been modified by comparing the current checksum to the checksum it recorded when it was last run. If the checksums do not match, then the file has been modified in some manner. Some integrity checkers may be able to identify the virus that modified a file, but others may only alert that a change exists.

Real-Time Content Filtering. To prevent the entry of malicious code into the corporate network, implement content filtering at the gateways for Web, mail, and FTP traffic. Set the filters to block known vulnerable attachments at the gateway. Filter attachment types that have been delivery vehicles for malicious code, such as .exe, .com, .vbs, .scr, .shs, .bat, .cmd, .dll, .hlp, .pif, .hta, .js, .lnk, .reg, .vbe, .wsf, .wsh, and .wsc. Inform users that if they need to receive one of these files for legitimate purposes, they can have the sender rename the extension when sending the attachment. Many worms use double extensions, so block attachments with double extensions (e.g., .doc.vbs or .bmp.exe) at the gateway or firewall.
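As a rough illustration of the extension rules just described, the following Python sketch classifies attachment filenames. The extension list is taken from the paragraph above; the "quarantine" verdict and the length heuristic for inner extensions are assumptions added for the example, and a real gateway would inspect attachment content as well as names.

import os

# Extensions blocked at the gateway (drawn from the list above).
BLOCKED_EXTENSIONS = {
    ".exe", ".com", ".vbs", ".scr", ".shs", ".bat", ".cmd", ".dll", ".hlp",
    ".pif", ".hta", ".js", ".lnk", ".reg", ".vbe", ".wsf", ".wsh", ".wsc",
}

def attachment_verdict(filename):
    """Return 'block', 'quarantine', or 'allow' for an attachment filename."""
    name = filename.lower().strip()
    base, ext = os.path.splitext(name)
    if ext in BLOCKED_EXTENSIONS:
        return "block"                      # dangerous outer extension
    # Double extensions such as report.doc.vbs are a classic disguise.
    # The length check is a crude guard against dots that are simply part
    # of the name (e.g., john.smith.doc); tune or replace it for real use.
    inner = os.path.splitext(base)[1]
    if inner and 2 <= len(inner) <= 5:
        return "quarantine"
    return "allow"

if __name__ == "__main__":
    for sample in ("budget.xls", "invoice.pdf.exe", "photo.bmp.vbs",
                   "readme.txt.gif", "john.smith.doc"):
        print(sample, "->", attachment_verdict(sample))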

At the initial stages of an infection, when new signatures are not available, block attachments or quarantine e-mails that contain certain words in the subject line or text until the anti-virus vendor has a signature update. E-mail and HTML filtering products can examine file attachments and HTML pages. Objects such as executable files or code can be stripped out before passing them on, or they can be quarantined for later inspection. Deploy software that performs real-time virus detection and cleanup for all SMTP, HTTP, and FTP Internet traffic at the gateway. SMTP protection complements the mail server by scanning all inbound and outbound SMTP traffic for viruses. Set up scanning rules on the gateway SMTP system to optimize scanning of incoming e-mail. Some systems scan attachments only, and others scan both attachments and e-mail text — this distinction is important because some viruses, such as BubbleBoy, can infect without existing as an attachment. Be aware of the capabilities of the system selected. As with desktop software, gateway systems provide options to scan all attachments or only selected attachments. Handling of viruses is tunable as well — the attachment can be deleted, repair can be attempted, or it can be logged and forwarded. Files with suspect viruses can be quarantined until new updates are received, and repair can be attempted at that time. HTTP protection keeps infected files from being downloaded and allows the information security practitioner to set uniform, system-wide security standards for Java and Authenticode; it also affords users protection against malicious Java and ActiveX programs. FTP protection works to ensure that infected files are not downloaded from unsecured remote sites.
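The file integrity checking described a little earlier is also easy to sketch. The Python fragment below is a minimal illustration rather than a replacement for a hardened tool such as Tripwire: it records SHA-256 checksums for a set of files and later reports anything missing or modified. The baseline.json filename is an arbitrary choice for the example, and in practice the baseline must be stored where malicious code cannot rewrite it.

import hashlib
import json
import os

BASELINE = "baseline.json"   # hypothetical location for the checksum database

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(paths):
    """Record the current checksum of each monitored file."""
    baseline = {path: sha256_of(path) for path in paths}
    with open(BASELINE, "w") as fh:
        json.dump(baseline, fh, indent=2)

def check_baseline():
    """Report files that have disappeared or changed since the baseline."""
    with open(BASELINE) as fh:
        baseline = json.load(fh)
    for path, recorded in baseline.items():
        if not os.path.exists(path):
            print(f"MISSING  {path}")
        elif sha256_of(path) != recorded:
            print(f"MODIFIED {path}")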


Proactive Detection

Detecting Anomalous Activity: Sandboxing and Heuristics. Sandboxing is a proactive technique that monitors the behavior of certain attachments in real-time, blocking malicious content from running before it can negatively impact a system. It essentially places a barrier in front of operating system resources and lets the barrier determine what access programs and applications have to those resources. Programs are classified as low, medium, or highly restricted, and access to system resources is granted accordingly. An anti-virus package is still required to identify and disinfect known malicious code, but the threat is removed regardless of whether the anti-virus system reacts.

Heuristic scanning uses an algorithm to determine whether a file is performing unauthorized activities, such as writing to the system registry or activating its own built-in e-mail program. Both sandboxing and heuristic techniques at the desktop can be useful as the final layer of defense. Both examine the behavior of executed code to attempt to identify potentially harmful actions, and they flag the user for action should such behavior be identified. Because behavior-blocking tools do not need to be updated with signatures, layering traditional anti-virus solutions with these proactive solutions can create an effective approach to block both known and new malicious code. The drawback to both methods is the tendency to generate false positives; to get their work done, users often end up saying yes to everything, thus defeating the protection.

Worm Detection: Firewalls and Intrusion Detection Systems (IDSs). Hybrid firewalls (those that combine application proxies with stateful inspection technologies) can be used effectively to repel blended threats such as Code Red and Nimda. Application inspection technology analyzes HTTP and other protocol requests and responses to ensure they adhere to RFC standards.
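As a toy example of the kind of check such application inspection performs, the Python fragment below examines a single HTTP request line for malformed structure and for URI substrings associated with well-known worm probes (Code Red's default.ida buffer overflow, Nimda's cmd.exe and root.exe requests). The size limit, method list, and patterns are illustrative assumptions; a real proxy or hybrid firewall validates the full protocol exchange.

import re

MAX_REQUEST_LINE = 2048   # arbitrary sanity limit for this illustration

# URI substrings associated with well-known worm probes.
SUSPICIOUS_URI_PATTERNS = [
    re.compile(r"/default\.ida\?", re.I),   # Code Red probe
    re.compile(r"/(cmd|root)\.exe", re.I),  # Nimda probe
    re.compile(r"\.\./"),                   # directory traversal
]

def inspect_request_line(line):
    """Return a list of reasons to reject an HTTP request line, if any."""
    reasons = []
    if len(line) > MAX_REQUEST_LINE:
        reasons.append("request line exceeds sanity limit")
    parts = line.split()
    if len(parts) != 3 or not parts[2].startswith("HTTP/"):
        reasons.append("malformed request line")
        return reasons
    method, uri, _version = parts
    if method not in ("GET", "HEAD", "POST"):
        reasons.append(f"unexpected method {method!r}")
    for pattern in SUSPICIOUS_URI_PATTERNS:
        if pattern.search(uri):
            reasons.append(f"URI matches worm probe pattern {pattern.pattern!r}")
    return reasons

print(inspect_request_line("GET /index.html HTTP/1.0"))
print(inspect_request_line("GET /scripts/root.exe?/c+dir HTTP/1.0"))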

Worms can also be detected by their excessive scanning activity — network monitoring on the LAN should send alerts to the network operations staff when unusual scanning activity is detected, whether the activity is generated externally or internally. Monitoring the network for normal activity will allow operators to set thresholds and trip alarms when those thresholds are exceeded. A number of machines suddenly scanning all their neighbors should raise an alarm in fairly short order.

A network IDS that combines heuristics and signature technologies can provide monitoring staff with the first indication of a worm infection by matching anomalous network traffic against known worm signatures or unusual traffic patterns. The alert still requires analysis by humans to determine whether the traffic is malicious, but such systems can provide early warning of potential infection. Many modern firewalls and IDSs can detect certain types of virus and worm attacks such as Code Red and Nimda, alert network support personnel, and immediately drop the connection. Some intelligent routing and switching equipment also comes with the ability to foil certain types of attacks.

Deploy IDS at the network level to detect malicious code that passes the firewall on allowed ports. The information security practitioner should also consider deploying IDS on subnets that house critical servers and services to detect malicious code activity, such as unusual scanning activity or mailing patterns. Have alerts sent when unusual traffic is logged to or from the e-mail server; the LoveLetter e-mail virus, for example, sent out 100 infected e-mails per minute from one user. Possible responses to these communication-enabled viruses include blocking e-mail with the suspect subject line, automatically (based on thresholds) blocking the victim’s outbound mail queue, and contacting both the victim and the sender to notify them of the infection.

Tarpits. Tarpits such as LaBrea are a proactive method used to prevent worms from spreading. A tarpit installed on a network seeks out blocks of unused IP addresses and uses them to create virtual machines. When a worm hits one of the virtual machines, LaBrea responds and keeps the worm connected indefinitely, preventing it from continuing to scan and infect other systems.
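The tarpit idea can be illustrated with a few lines of Python. The sketch below simply accepts connections on one hypothetical port and holds them open without ever answering, so a worm that blocks while waiting for a response is stalled. LaBrea itself goes considerably further, answering ARP requests for unused addresses and shrinking the TCP window so that the connection can make no progress; none of that is reproduced here.

import socket
import threading
import time

LISTEN_PORT = 8080   # hypothetical port a scanning worm might probe

def hold(conn, addr):
    """Keep the connection open indefinitely without responding."""
    print(f"tarpitting {addr[0]}:{addr[1]}")
    try:
        while True:
            time.sleep(60)
    finally:
        conn.close()

def tarpit():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen(50)
    while True:
        conn, addr = server.accept()
        threading.Thread(target=hold, args=(conn, addr), daemon=True).start()

if __name__ == "__main__":
    tarpit()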

RESPONSE AND CLEANUP

If it appears that a system or network is under attack by a worm, it is prudent to sever the network connection immediately in order to isolate the local network. If the worm is already loose in the system, this act may limit its spread and may also prevent important data from being sent outside of the local area network. It may be appropriate to take the system offline until the breach has been repaired and any necessary patches installed. Critical servers should have backup systems that can be installed while the infected machine is rebuilt from fresh media. Worms seldom attack single systems, so the incident response team will need to inspect all systems on the network to determine if they have been affected. With the expanding use of extranets for customers and partners, and as Web services proliferate, responding to an intrusion or worm may involve contacting partners or customers who could lose their access to services or be compromised themselves. Such notification should be detailed in escalation procedures and incident response plans.

Incident Response and Disaster Recovery Plans

It is imperative that the information security practitioner create and test a rapid-response plan for malicious code emergencies. Infections will happen despite defense measures, so be prepared to wipe them out quickly. The recovery plan must include escalation levels, malicious code investigators, and repair teams equipped with the tools and techniques to recover lost data. A consistent, strong backup policy, for both users and system administrators, is essential for restoring lost or damaged data. Ensure that backup operators or system administrators have backups of all data and software, including operating systems. If the organization is affected by a virus, infected files and programs can be replaced with clean copies. For particularly nasty viruses, worms, and remote-access Trojans, the administrator may have no choice but to reformat and rebuild — this process can be simplified using a disk-imaging program such as GHOST.

SUMMARY

Practice defense-in-depth — deploy firewalls, proxy servers, intrusion detection systems, and on-demand and on-access scanners at the network gateway, mail, file, and application servers, and on the desktop. Employ proactive techniques such as integrity checkers, vulnerability scans, e-mail filters, behavior blockers, and tarpits to protect against incursions by malicious code. All of these tools and techniques must enforce a security policy and be clearly laid out and explained in procedures.

The enterprise is complex, with many operating systems and applications running simultaneously. To address this complexity, protection must be multi-layered — controlling all nodes, data transmission channels, and data storage areas. Expect that new vulnerabilities will emerge at least as fast as old ones are repaired, and that attackers will take advantage of any that are not yet repaired.

To fight malicious code, enterprises must take a holistic approach to protection. Every aspect of the enterprise should be examined for ways to reduce the impact of malicious code and allow the organization to fight infection in a coordinated fashion. Once effective measures are in place, the information security practitioner should maintain vigilance by researching new attack methodologies and devising strategies to deal with them. By doing this, the enterprise can remain relatively virus-free, and the end users can concentrate on the business.

References

1. F. Cohen, Trends in Computer Virus Research, http://all.net/books/integ/japan.html.
2. A. Chuvakin, Basic Security Checklist for Home and Office Users, November 2001, http://www.securityfocus.com.
3. P. Schmehl, Holistic Enterprise Anti-Virus Protection, January 2002, http://online.securityfocus.com/infocus/.
4. J. Martin, A Practical Guide to Enterprise Anti-Virus and Malware Prevention, August 2001, http://www.sans.org.
5. D. Banes, How to Stay Virus, Worm, and Trojan Free — Without Anti-Virus Software, May 2001, http://www.sans.org.
6. G. Hulme, Going the Distance, Information Week, November 2001.
7. R. Nichols, D. Ryan, and J. Ryan, Defending Your Digital Assets, McGraw-Hill, 2000.
8. G. Spafford and S. Garfinkel, Practical UNIX and Internet Security, 2nd ed., O’Reilly & Associates, 1996.
9. Responding to the Nimda Worm: Recommendations for Addressing Blended Threats, Symantec Enterprise Security, http://securityresponse.symantec.com.

ABOUT THE AUTHORS

Ralph S. Hoefelmeyer, CISSP, began his career as a U.S. Air Force officer and went on to defense work. He has more than 20 years of experience in operations, systems design, analysis, security, software development, and network design. Hoefelmeyer has earned a B.S. and M.S. in computer science and has one patent with several patents pending. He is currently a senior engineer with WorldCom in Colorado Springs, Colorado.

Theresa E. Phillips, CISSP, is a senior engineer with WorldCom. She has five years’ experience in information security engineering, architecture, design, and policy development. Prior to that, she held management positions in not-for-profit membership organizations dealing with open systems and quality engineering. Phillips earned a B.S. in social work, which provides her with the background to deal with people and policy issues related to information security.




Chapter 33

Malware and Computer Viruses

Robert Slade, CISSP

Malware is a relatively new term in the security field. It was created to address the need to discuss software or programs that are intentionally designed to include functions for penetrating a system, breaking security policies, or carrying malicious or damaging payloads. Because this type of software has started to develop a bewildering variety of forms such as backdoors, data diddlers, DDoS, hoax warnings, logic bombs, pranks, RATs, Trojans, viruses, worms, zombies, etc., the term malware has come to be used for the collective class of malicious software. The term is, however, often used very loosely simply as a synonym for virus, in the same way that virus is often used simply as a description of any type of computer problem. This chapter attempts to define the problem more accurately and to describe the various types of malware.

Viruses are the largest class of malware, both in terms of numbers of known entities and in impact on the current computing environment. Viruses will, therefore, be given primary emphasis in this chapter but will not be the only malware type examined.

Programming bugs or errors are generally not included in the definition of malware, although it is sometimes difficult to make a hard and fast distinction between malware and bugs. For example, if a programmer left a buffer overflow in a system and it creates a loophole that can be used as a backdoor or a maintenance hook, did he do it deliberately? This question cannot be answered technically, although we might be able to guess at it, given the relative ease of use of a given vulnerability.

In addition, it should be noted that malware is not only a collection of utilities for the attacker. Once launched, malware can continue an attack without reference to the author or user, and in some cases it will expand the attack to other systems. There is a qualitative difference between malware and the attack tools, kits, or scripts that have to operate under an attacker’s control and which are not considered to fall within the definition of malware. There are gray areas in this aspect as well, because RATs and DDoS zombies provide unattended access to systems but need to be commanded in order to deliver a payload.

POTENTIAL SECURITY CONCERNS

Malware can attack and destroy system integrity in a number of ways. Viruses are often defined in terms of the ability to attach to programs (or to objects considered to be programmable) and so must, in some way, compromise the integrity of applications. A number of viruses attach themselves to the system in ways that either keep them resident in the system or invoke them each time the system starts, and they compromise the overall system even if individual applications are not touched. RATs (remote-access Trojans/tools, basically remotely installed backdoors) are designed to allow a remote user or attacker to completely control a system, regardless of local security controls or policies.

The fact that viruses modify programs is seen as evidence that viruses inherently compromise systems, and therefore the concept of a good or even benign virus is a contradiction in terms. The concept of good viruses will be discussed more in the detailed section concerning virus functions. Many viruses or other forms of malware contain payloads (such as data diddlers) that may either erase data files or interfere with application data over time in such a way that data integrity is compromised and data may become completely useless.

In considering malware, there is an additional type of attack on integrity. As with attacks where the intruder takes control of your system and uses it to explore or assail further systems in order to hide his own identity, malware (viruses and DDoS zombies in particular) is designed to use your system as a platform to continue further assaults, even without the intervention of the original author or attacker. This can create problems within domains and intranets where equivalent systems trust each other, and it can also create bad will when those with whom you do business find out that your system is sending viruses or probes to theirs.

As noted, malware can compromise programs and data to the point where they are no longer available. In addition, malware generally uses the resources of the system it has attacked; and it can, in extreme cases, exhaust CPU cycles, available processes (process numbers, tables, etc.), memory, communications links and bandwidth, open ports, disk space, mail queues, etc. Sometimes this can be a direct denial-of-service (DoS) attack, and sometimes it is a side effect of the activity of the malware.

Malware, such as backdoors and RATs, is intended to make intrusion and penetration easier. Viruses such as Melissa and SirCam send data files from your system to others (in these particular cases, seemingly as a side effect of the process of reproduction and spread). Malware can be written to do directed searches and send confidential data to specific parties, and it can also be used to open covert channels of other types. The fact that you are infected with viruses, or compromised by other types of malware, can become quite evident to others. This compromises confidentiality by providing indirect evidence of your level of security, and it may also create seriously bad publicity.

THE COMPUTING ENVIRONMENT IN REGARD TO MALWARE

In the modern computing environment, everything — including many supposedly isolated mainframes — is next to everything else. Where older Trojans relied on limited spread for as long as users on bulletin board systems could be fooled, and early-generation viruses required manual disk and file exchange, current versions of malware use network functions. For distribution of contemporary malware, network functions used can include e-mail of executable content in file attachments, compromise of active content on Web pages, and even direct attacks on server software. Attack payloads can attempt to compromise objects accessible via the Net, can deny resource services by exhausting them, can corrupt publicly available data on Web sites, or can spread plausible misinformation.

It has long been known that the number of variants of viruses or other forms of malware is directly related to the number of instances of a given platform. The success of a given piece of malware is also associated with the relative proportion of a given platform in the overall computing environment. Attacks are generally mounted at least semi-randomly; attacks on incompatible targets are wasted and, conversely, attacks on compatible targets are successful and may help to escalate the attack. Although it may not seem so to harried network administrators, the modern computing environment is one of extreme consistency. The Intel platform has severe dominance in hardware, and Microsoft has a near monopoly of operating systems and applications on the desktop. In addition, compatible application software (and the addition of functional programming capabilities in those applications) can mean that malware from one hardware and operating system environment works perfectly well in another.

The functionality added to application macro and script languages has given them the capability either to directly address computer hardware and resources or to easily call upon utilities or processes that have such access. This means that objects previously considered to be data, and therefore immune to malicious programming, must now be checked for malicious functions or payloads.


In addition, these languages are very simple to learn and use; and the various instances of malware carry their own source code, in plaintext and sometimes commented, making it simple for individuals wanting to learn how to craft an attack to gather templates and examples of how to do so — without even knowing how the technology actually works. This enormously expands the range of authors of such software.

OVERVIEW AND HISTORY

We are faced with a rapid evolution of computer viruses, and we are experiencing difficulties in addressing the effects of these viruses, just as in the biological world. IBM’s computer virus research team has extensively examined the similarities and differences between biological and computer viruses and epidemiology. Many excellent papers are available through their Web site at http://www.research.ibm.com/antivirus/. The evolution of computer viruses is dramatically accelerated when compared to the development of their biological counterparts. This is easy to understand when you examine the rapid development of computer technology as well as the rapid homogenization of computers, operating systems, and software.

Many claims have been made for the existence of viruses prior to the 1980s, but so far these claims have either been unaccompanied by proof or have referred to entities that can be considered viruses only under the broadest definition of the term. The Core Wars programming contests did involve self-replicating code, but usually within a structured and artificial environment. Examples of other forms of malware have been known almost since the advent of computing. At least two Apple II viruses are known to have been created in the early 1980s. Fred Cohen’s pioneering academic research was undertaken during the middle of that decade, and there is some evidence that the first viruses to be successful in the normal computing environment were created late in the 1980s. However, it was not until the end of the decade (and 1987 in particular) that knowledge of real viruses became widespread, even among security experts.

For many years, boot-sector infectors and file infectors were the only types of common viruses. These programs spread relatively slowly, primarily distributed on floppy disks, and were thus slow to disseminate geographically. However, the viruses tended to remain in the environment for a long time. During the early 1990s, virus writers started experimenting with various functions intended to defeat detection. (Some forms had seen limited trials earlier.) Among these were polymorphism, to change code strings in order to defeat scanners, and stealth, to attempt to confound any type of detection. None of these virus technologies had a significant impact. Most viruses using these advanced technologies were easier to detect because of a necessary increase in program size.

Although demonstration programs had been created earlier, the middle 1990s saw the introduction of macro and script viruses in the wild. These were initially confined to word-processing files, particularly files associated with the Microsoft Office suite. However, the inclusion of programming capabilities eventually led to script viruses in many objects that would normally be considered to contain data only, such as Excel spreadsheets, PowerPoint presentation files, and e-mail messages. This fact led to greatly increased demands for computer resources among anti-viral systems because many more objects had to be tested, and Windows OLE (Object Linking and Embedding) format data files presented substantial complexity to scanners. Macro viruses also produced new variant forms very quickly because the viruses carried their own source code, and anyone who obtained a copy could generally modify it and create a new member of the virus family.

E-mail viruses became the major new form in the late 1990s and early 2000s. These viruses may use macro capabilities, scripting, or executable attachments to create e-mail messages or attachments sent out to e-mail addresses harvested from the infected machine or other sources. E-mail viruses spread with extreme rapidity, distributing themselves worldwide in a matter of hours. Some versions create so many copies of themselves that corporate and even service provider mail servers are flooded and cease to function. Prolific e-mail viruses are very visible and thus tend to be identified within a short space of time, but many are macros or scripts and generate many variants.

With the strong integration of the Microsoft Windows operating system with its Internet Explorer browser, Outlook mailer, Office suite, and system scripting, recent viruses have started to blur the normal distinctions. A document sent as an e-mail file attachment can make a call to a Web site that starts active content, which installs a remote-access tool acting as a portal for the client portion of a distributed denial-of-service network. This convergence of technologies is not only making discussion more difficult but is also leading to the development of much more dangerous and (from the perspective of an attacker) effective forms of malware.

Because the work has had to deal with detailed analyses of low-level code, virus research has led to significant advances in the field of forensic programming. However, to date computer forensic work has concentrated on file recovery and decryption, so the contributions in this area still lie in the future.

Many computer pundits, as well as some security experts, have proposed that computer viruses are the result of the fact that currently popular desktop operating systems have only nominal security provisions. They further suggest that viruses will disappear as security functions are added to operating systems. This thesis ignores the facts — well established by Cohen’s research and subsequently confirmed — that viruses use the most basic of computer functions, and that a perfect defense against viruses is impossible. This is not to say that an increase in security measures by operating system vendors could not reduce the risk of viruses — the current danger could be drastically reduced with relatively minor modifications to system functions.

It is going too far to say (as some have) that the very existence of viral programs, and the fact that both viral strains and the numbers of individual infections are growing, means that computers are finished. At the present time, the general public is not well informed about the virus threat, so more copies of viral programs are being produced than are being destroyed. Indeed, no less an authority than Fred Cohen has championed the idea that viral programs can be used to great effect. An application using a viral form can improve performance in the same way that computer hardware benefits from parallel processors. It is, however, unlikely that viral programs can operate effectively and usefully in the current computer environment without substantial protective measures built into them.

MALWARE TYPES

Viruses are not the only form of malicious software. Other forms include worms, Trojans, zombies, logic bombs, and hoaxes. Each of these has its own characteristics, and we will discuss each of the forms below. Some forms of malware combine characteristics of more than one class, and it can be difficult to draw hard and fast distinctions with regard to individual examples or entities; but it is important to keep the specific attributes in mind.

It should be noted that we are increasingly seeing convergence in malware. Viruses and Trojans are used to spread and plant RATs, and RATs are used to install zombies. In some cases, hoax virus warnings are used to spread viruses. Virus and Trojan payloads may contain logic bombs and data diddlers.

VIRUSES

A computer virus is a program written with functions and intent to copy and disperse itself without the knowledge and cooperation of the owner or user of the computer. Researchers have not yet agreed upon a final definition. A common definition is “a program that modifies other programs to contain a possibly altered version of itself.” This definition is generally attributed to Fred Cohen from his seminal research in the middle 1980s, although Dr. Cohen’s actual definition is in mathematical form. (The term computer virus was first defined by Dr. Cohen in his graduate thesis in 1984. Cohen credits a suggestion from his advisor, Leonard Adleman [of RSA fame], for the use of the term.) Another possible definition is an entity that uses the resources of the host (system or computer) to reproduce itself and spread without informed operator action.

Cohen’s definition is specific to programs that attach themselves to other programs as their vector of infection. However, common usage now holds viruses to consist of a set of coded instructions that are designed to attach to an object capable of containing the material, without knowledgeable user intervention. This object may be an e-mail message, program file, document, floppy disk, CD-ROM, short message system (SMS) message on a cellular telephone, or any similar information medium.

A virus is defined by its ability to reproduce and spread. A virus is not merely anything that goes wrong with a computer, and a virus is not simply another name for malware. Trojan horse programs and logic bombs do not reproduce themselves. A worm, which is sometimes seen as a specialized type of virus, is currently distinguished from a virus because a virus generally requires an action on the part of the user to trigger or aid reproduction and spread. (There will be more on this distinction in the section on worms later in this chapter.) The actions on the part of the users are generally common functions, and the users generally do not realize the danger of their actions or the fact that they are assisting the virus.

The only requirement that defines a program as a virus is that it reproduces. There is no necessity that a virus carry a payload, although a number of viruses do. In many cases (in most cases of successful viruses), the payload is limited to some kind of message. A deliberately damaging payload, such as erasure of the disk or system files, usually restricts the ability of the virus to spread because the virus uses the resources of the host system. In some cases, a virus may carry a logic bomb or time bomb that triggers a damaging payload on a certain date or under a specific, often delayed, condition.

Because a virus spreads and uses the resources of the host, it affords the kind of power to software that parallel processors provide to hardware. Therefore, some have theorized that viral programs could be used for beneficial purposes, similar to the experiments in distributed processing that are testing the limits of cryptographic strength. (Various types of network management functions and updating of system software are seen as candidates.) However, the fact that viruses change systems and applications is seen as problematic in its own right. Many viruses that carry no overtly damaging payload still create problems with systems. A number of virus and worm programs have been written with the obvious intent of proving that viruses could carry a useful payload, and some have even had a payload that could be said to enhance security. Unfortunately, all such viruses have created serious problems. The difficulties of controlling viral programs have been addressed in theory, but the solutions are also known to have faults and loopholes. (One of the definitive papers on this topic is available at http://www.frisk.is/~bontchev/papers/goodvir.html.)

Types of Viruses

There are a number of functionally different types of viruses, such as file infectors, boot-sector infectors (BSIs), system infectors, e-mail viruses, multipartites, macro viruses, and script viruses. These terms do not necessarily indicate a strict division. A file infector may also be a system infector. A script virus that infects other script files may be considered a file infector — although this type of activity, while theoretically possible, is unusual in practice. There are also difficulties in drawing a hard distinction between macro and script viruses. Later in this chapter there is a section enumerating specific examples of malware, where the viruses noted in the next few paragraphs are discussed in detail. We have tried to include examples that explain and expand on these different types.

File Infectors. A file infector infects program (object) files. System infectors that infect operating system program files (such as COMMAND.COM in DOS) are also file infectors. File infectors can attach to the front of the object file (prependers), attach to the back of the file and create a jump at the front of the file to the virus code (appenders), or overwrite the file or portions of it (overwriters). A classic example is Jerusalem. A bug in early versions caused it to add itself over and over again to files, making the increase in file length detectable. (This has given rise to the persistent myth that it is a characteristic of a virus that it will fill up all disk space eventually; by far, the majority of file infectors add minimally to file lengths.)

Boot-Sector Infectors. Boot-sector infectors (BSIs) attach to or replace the master boot record, system boot record, or other boot records and blocks on physical disks. (The structure of these blocks varies, but the first physical sector on a disk generally has some special significance in most operating systems and usually is read and executed at some point in the boot process.) BSIs usually copy the existing boot sector to another unused sector, and then copy themselves into the physical first sector, ending with a call to the original programming. Examples are Brain, Stoned, and Michelangelo.


System Infectors. System infector is a somewhat vague term. The phrase is often used to indicate viruses that infect operating system files, or boot sectors, in such a way that the virus is called at boot time and has, or may have, preemptive control over some functions of the operating system. (The Lehigh virus infected only COMMAND.COM on MS-DOS machines.) In other usage, a system infector modifies other system structures, such as the linking pointers in directory tables or the MS Windows system Registry, in order to be called first when programs are invoked on the host computer. An example of directory table linking is the DIR virus family. Many e-mail viruses target the Registry: MTX and Magistr can be very difficult to eradicate.

Companion Virus. Some viral programs do not physically touch the target file at all. One method is quite simple and takes advantage of precedence in the system. In MS-DOS, for example, when a command is given, the system checks first for internal commands, then for .COM, .EXE, and .BAT files, in that order. A .EXE file can be “infected” by writing a .COM file with the same filename in the same directory; the .COM file will then be run in place of the .EXE file. This type of virus is most commonly known as a companion virus, although the term spawning virus is also used.

E-Mail Virus. An e-mail virus specifically, rather than accidentally, uses the e-mail system to spread. While virus-infected files may be accidentally sent as e-mail attachments, e-mail viruses are aware of e-mail system functions. They generally target a specific type of e-mail system (Microsoft’s Outlook is the most commonly used), harvest e-mail addresses from various sources, and may append copies of themselves to all e-mails sent or generate e-mail messages containing copies of themselves as attachments. Some e-mail viruses may monitor all network traffic and follow up legitimate messages with messages that they generate. Most e-mail viruses are technically considered to be worms because they do not often infect other program files on the target computer, but this is not a hard and fast distinction. There are known examples of e-mail viruses that are file infectors, macro viruses, script viruses, and worms. Melissa, LoveLetter, Hybris, and SirCam are all widespread current examples, and the CHRISTMA exec is an older example of the same type of activity.

E-mail viruses have made something of a change to the epidemiology of viruses. Traditionally, viruses took many months to spread but stayed around for many years in the computing environment. Many e-mail viruses have become “fast burners” that can spread around the world, infecting hundreds of thousands or even millions of machines within hours. However, once characteristic indicators of these viruses become known, they die off almost immediately when users stop running the attachments.
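The precedence rule that companion viruses exploit (described above) also suggests a simple administrative check. The Python sketch below, an illustration only, lists .COM files that shadow a .EXE of the same base name in a directory; legitimate pairings are possible, so any hit is a candidate for inspection rather than proof of infection.

import os

def companion_candidates(directory):
    """Return .COM files that shadow a .EXE of the same base name."""
    names = {n.lower() for n in os.listdir(directory)}
    suspects = []
    for name in sorted(names):
        base, ext = os.path.splitext(name)
        if ext == ".com" and base + ".exe" in names:
            suspects.append(os.path.join(directory, name))
    return suspects

if __name__ == "__main__":
    for path in companion_candidates("."):
        print("possible companion:", path)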


Multipartite. Originally the term multipartite was used to indicate a virus that was able to infect both boot sectors and program files. (This ability is the origin of the alternate term dual infector.) Current usage tends to mean a virus that can infect more than one type of object or that infects or reproduces in more than one way. Examples of traditional multipartites are Telefonica, One Half, and Junkie, but these programs have not been very successful.

Macro Virus. A macro virus uses the macro programming of an application such as a word processor. (Most known macro viruses use Visual Basic for Applications in Microsoft Word; some are able to cross between applications and functions in, for example, a PowerPoint presentation and a Word document, but this ability is rare.) Macro viruses infect data files and tend to remain resident in the application by infecting a configuration template such as MS Word’s NORMAL.DOT. Although macro viruses infect data files, they are not generally considered to be file infectors; a distinction is generally made between program and data files. Macro viruses can operate across hardware or operating system platforms as long as the required application platform is present. (For example, many MS Word macro viruses can operate on both the Windows and Macintosh versions of MS Word.) Examples are Concept and CAP. Melissa is also a macro virus, in addition to being an e-mail virus; it mailed itself around as an infected document.

Script Virus. Script viruses are generally differentiated from macro viruses in that script viruses are usually stand-alone files that can be executed by an interpreter, such as Microsoft’s Windows Script Host (.vbs files). A script virus file can be seen as a data file in that it is generally a simple text file, but it usually does not contain other data and generally has some indicator (such as the .vbs extension) that it is executable. LoveLetter is a script virus.

Virus Examples and Encyclopedias

Examples of recent viruses, in very brief form, can be found at http://www.osborne.com/virus_alert/. More comprehensive information on a much greater number of viruses can be found at the various virus encyclopedia sites. Two of the best are:

• F-Secure: http://www.f-secure.com/v-descs/
• Sophos: http://www.sophos.com/virusinfo/analyses/

Others can be found at:

• http://www.viruslist.com/eng/viruslist.asp
• http://www.symantec.com/avcenter/vinfodb.html
• http://www.antivirus.com/vinfo/virusencyclo/
• http://www.cai.com/virusinfo/encyclopedia/
• http://antivirus.about.com/library/blency.htm
• http://vil.mcafee.com/
• http://www.pandasoftware.com/library/defaulte.htm

Virus Structure

In considering computer viruses, three structural parts are considered important: the replication or infection mechanism, the trigger, and the payload.

Infection Mechanism. The first and only necessary part of the structure is the infection mechanism. This is the code that allows the virus to reproduce and thus to be a virus. The infection mechanism has a number of parts to it.

The first function is to search for, or detect, an appropriate object to infect. The search may be active, as in the case of some file infectors that take directory listings in order to find appropriate programs of appropriate sizes; or it may be passive, as in the case of macro viruses that infect every document as it is saved. There may be some additional decisions taken once an object is found. Some viruses may actually try to slow the rate of infection to avoid detection. Most will check to see if the object has already been infected.

The next action will be the infection itself. This may entail the writing of a new section of code to the boot sector, the addition of code to a program file, the addition of macro code to the Microsoft Word NORMAL.DOT file, the sending of a file attachment to harvested e-mail addresses, or a number of other operations. There are additional subfunctions at this step as well, such as the movement of the original boot sector to a new location or the addition of jump codes in an infected program file to point to the virus code. There may also be changes to system files, to try to ensure that the virus will be run every time the computer is turned on. This can be considered the insertion portion of the virus.

At the time of infection, a number of steps may be taken to try to keep the virus safe from detection. The original file creation date may be conserved and used to reset the directory listing to avoid a change in date. The virus may have its form changed in some kind of polymorphism. The active portion of the virus may take charge of certain system interrupts in order to make false reports when someone tries to look for a change to the system. There may also be certain prompts or alerts generated in an attempt to make any odd behavior noticed by the user appear to be part of a normal, or at least innocent, computer error.
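Because the infection step modifies a file or sector while trying to hide the evidence (preserved date stamps, intercepted system calls), integrity checking is one of the more robust generic defenses: it does not depend on knowing any particular virus. The following is a minimal sketch of the idea, not any specific product's implementation; the monitored file list and the database filename are assumptions for illustration only.

```python
# Minimal integrity-checker sketch: record cryptographic hashes of files and
# report any later change. A preserved date stamp will not hide a modified
# file from this kind of check. Paths and the database filename are
# illustrative assumptions, not a real product's layout.
import hashlib
import json
import os
import sys

DB_FILE = "integrity.json"  # assumed location for the baseline database

def file_hash(path):
    """Return the SHA-256 hash of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    """Record current hashes of the monitored files."""
    db = {p: file_hash(p) for p in paths if os.path.isfile(p)}
    with open(DB_FILE, "w") as f:
        json.dump(db, f, indent=2)

def check():
    """Compare current hashes against the recorded baseline."""
    with open(DB_FILE) as f:
        db = json.load(f)
    for path, old in db.items():
        if not os.path.isfile(path):
            print(f"MISSING  {path}")
        elif file_hash(path) != old:
            print(f"CHANGED  {path}")

if __name__ == "__main__":
    if sys.argv[1:2] == ["baseline"]:
        baseline(sys.argv[2:])
    else:
        check()
```

Run it once with the word baseline and a list of files to protect, then run the check periodically; a changed hash indicates that the file's contents were altered, whatever its directory entry claims.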

Trigger. The second major component of a virus is the payload trigger. The virus may look for a specific number of infections, a certain date or time, or a particular piece of text. A section of code does not have to contain either a trigger or a payload to be defined as a virus.

Payload. If a virus does have a trigger, then it usually has a payload. The payload can be pretty much anything, from a simple one-time message, to a complicated display, to reformatting the hard disk. However, the bigger the payload, the more likely it is that the virus will get noticed. A virus carrying a very destructive payload will also eradicate itself when it wipes out its target. Therefore, while you may have seen lists of payload symptoms to watch for, such as text messages, ambulances running across the screen, letters falling down, and such, checking for these payloads is not a very good way to keep free of viruses. The successful ones keep quiet.

STEALTH

A great many people misunderstand the term stealth. It is often misused as the name of a specific virus. At other times, there are references to stealth viruses as if they were a class such as file infectors or macro viruses. In fact, stealth refers to technologies that can be used by any virus and by other forms of malware as well, and often it is used as a reference to all forms of anti-detection technology. Stealth is used inconsistently even within the virus research community.

A specific usage of the term refers to an activity also known as tunneling, which (in opposition to the usage in virtual private networks) describes the act of tracing interrupt links and system calls in order to intercept calls to read the disk, or any other operations that could be used to determine that an infection exists. A virus using this form of stealth would intercept a call to display information about a file (such as its size) and return only information appropriate to the uninfected object. This type of stealth was present in one of the earliest MS-DOS viruses, Brain. (If you gave commands on an infected system to display the contents of the boot sector, you would see the original boot sector and not the infected one.)

Polymorphism (literally many forms) refers to a number of techniques that attempt to change the code string on each generation of a virus. These vary from using modules that can be rearranged to encrypting the virus code itself, leaving only a stub of code that can decrypt the body of the virus program when invoked. Polymorphism is sometimes also known as self-encryption or self-garbling, but these terms are imprecise and not recommended. Examples of viruses using polymorphism are Whale and Tremor. Many polymorphic viruses use standard mutation engines such as MtE. These pieces of code actually aid detection because they have a known signature.
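The observation that even mutation-engine code leaves recognizable traces is the basis of signature scanning, the oldest and still most common anti-virus technique. The sketch below is a deliberately simplified illustration, not a description of any commercial engine; the signature bytes shown are invented placeholders, and real scanners use far more sophisticated matching (wildcards, entry-point tracing, and emulation of decryptor stubs).

```python
# Toy signature scanner: report files containing any known byte pattern.
# The "signatures" here are invented placeholders purely for illustration;
# real products ship databases of carefully chosen patterns and use
# wildcarding and code emulation to cope with polymorphic decryptors.
import os
import sys

SIGNATURES = {
    "Example.Virus.A": bytes.fromhex("deadbeef0102"),   # placeholder pattern
    "Example.MtE.stub": bytes.fromhex("cafebabe0304"),  # placeholder pattern
}

def scan_file(path):
    try:
        with open(path, "rb") as f:
            data = f.read()
    except OSError:
        return
    for name, sig in SIGNATURES.items():
        if sig in data:
            print(f"{path}: matches signature for {name}")

def scan_tree(root):
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            scan_file(os.path.join(dirpath, fname))

if __name__ == "__main__":
    scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
```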

A number of viruses also demonstrate some form of active detection avoidance, which may range from disabling on-access scanners in memory to deletion of anti-virus and other security software (ZoneAlarm is a favorite target) from the disk.

WORMS

A worm reproduces and spreads, like a virus and unlike other forms of malware. Worms are distinct from viruses, although they may have similar results. Most simply, a worm may be thought of as a virus with the capacity to propagate independently of user action. That is, a worm does not rely on the (usually human-initiated) transfer of data between systems for propagation; instead, it spreads across networks of its own accord, primarily by exploiting known vulnerabilities in common software. Originally, the distinction was made that worms used networks and communications links to spread and that a worm, unlike a virus, did not directly attach to an executable file. In early research into computer viruses, the terms worm and virus tended to be used synonymously because it was felt that the technical distinction was unimportant to most users.

The technical origin of the term worm program matched that of modern distributed processing experiments: a program with segments working on different computers, all communicating over a network (Shoch and Hupp, 1982). In fact, the use and origin of the term worm in relation to computer programs is rather cloudy. There are references in early computing to wormhole programs that escaped from their assigned partitions. The wormhole reference may note the similarity that random damage bears to the characteristic patterns of holes in worm-eaten wood, or relate to the supposition in science fiction stories that wormholes may carry you to random places. The Shoch and Hupp article contains a quote from John Brunner’s novel, The Shockwave Rider, that describes a tapeworm program, although this entity bears little resemblance to modern malware. The first worm to garner significant attention was the Internet Worm of 1988.

Recently, many of the most prolific virus infections have not been strictly viruses, but have used a combination of viral and worm techniques to spread more rapidly and effectively. LoveLetter was an example of this convergence of reproductive technologies. While infected e-mail attachments were perhaps the most widely publicized vectors of infection, LoveLetter also spread by actively scanning attached network drives and infecting a variety of common file types. This convergence of technologies will be an increasing problem in the future. Code Red and a number of Linux programs (such as Lion) are modern examples of worms. (Nimda is an example of a worm, but it also spreads in a number of other ways, so it could be considered to be an e-mail virus and multipartite as well.)

HOAXES

Hoax virus warnings or alerts have an odd double relation to viruses. First, hoaxes are usually warnings about “new” viruses — new viruses that do not, of course, exist. Second, hoaxes generally carry a directive to the user to forward the warning to all addresses available to them. Thus, these descendants of chain letters form a kind of self-perpetuating spam.

Hoaxes use an odd kind of social engineering, relying on the naturally gregarious nature of people and their desire to communicate a matter of urgency and importance, and on the human ambition to be the first to provide important new information. Hoaxes do, however, have common characteristics that can be used to determine whether their warnings are valid (a simple filter based on these characteristics is sketched below):

• Hoaxes generally ask the reader to forward the message.
• Hoaxes make reference to false authorities such as Microsoft, AOL, IBM, and the FCC (none of which issue virus alerts), or to completely false entities.
• Hoaxes do not give specific information about the individual or office responsible for analyzing the virus or issuing the alert.
• Hoaxes generally state that the new virus is unknown to authorities or researchers.
• Hoaxes often state that there is no means of detecting or removing the virus.
• Many of the original hoax warnings stated only that you should not open a message with a certain phrase in the subject line. (The warning, of course, usually contained that phrase in the subject line. Subject-line filtering is known to be a very poor method of detecting malware.)
• Hoaxes often state that the virus does tremendous damage and is incredibly virulent.
• Hoax warnings very often contain A LOT OF CAPITAL-LETTER SHOUTING AND EXCLAMATION MARKS!!!!!!!!!!
• Hoaxes often contain technical-sounding nonsense (technobabble), such as references to nonexistent technologies like “nth complexity binary loops.”

It is wisest in the current environment to doubt all virus warnings unless they come from a known and historically accurate source, such as a vendor with a proven record of providing reliable and accurate virus alert information, or preferably an independent researcher or group. It is best to check any warnings received against known virus encyclopedia sites. It is best to check more than one such site: in the initial phases of a fast burner attack, some sites may not have had time to analyze samples to their own satisfaction, and the better sites will not post unverified information.
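As a rough illustration of how the characteristics above can be turned into an automated first-pass filter, the following sketch scores an incoming message against a few of them. It is a toy heuristic under obvious assumptions (English text, crude keyword matching, invented thresholds), not a substitute for checking a reputable virus encyclopedia.

```python
# Toy hoax-warning heuristic: score a message against a few of the
# characteristics listed above. Keyword lists and thresholds are invented
# for illustration; real filtering needs human judgment and good sources.
import re

FALSE_AUTHORITIES = ["microsoft", "aol", "ibm", "fcc"]
TECHNOBABBLE = ["nth complexity", "binary loop"]

def hoax_score(text):
    t = text.lower()
    score = 0
    if "forward this" in t or "send this to everyone" in t:
        score += 2                                   # chain-letter directive
    if any(name in t for name in FALSE_AUTHORITIES):
        score += 1                                   # appeal to a false authority
    if "no cure" in t or "cannot be detected" in t:
        score += 1                                   # "undetectable, unremovable"
    if re.search(r"!{3,}", text):
        score += 1                                   # runs of exclamation marks
    caps_words = [w for w in text.split() if len(w) > 3 and w.isupper()]
    if len(caps_words) > 5:
        score += 1                                   # capital-letter shouting
    if any(phrase in t for phrase in TECHNOBABBLE):
        score += 2                                   # technobabble
    return score

if __name__ == "__main__":
    sample = ("WARNING!!!! Forward this to everyone: a new virus with "
              "nth complexity binary loops cannot be detected!")
    print(hoax_score(sample), "(anything above, say, 3 deserves suspicion)")
```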

A recent example of a hoax, referring to SULFNBK.EXE, got a number of people to clear this legitimate utility off their machines. The origin was likely the fact that the Magistr virus targets Windows system software, and someone with an infection did not realize that the file is actually present on all Windows 98 systems.

TROJANS

Trojans, or Trojan horse programs, are the largest class of malware aside from viruses. However, use of the term is subject to much confusion, particularly in relation to computer viruses. A Trojan is a program that pretends to do one thing while performing another, unwanted action. The extent of the pretense may vary greatly. Many of the early PC Trojans merely used the filename and a description on a bulletin board. Log-in Trojans, popular among university student mainframe users, mimicked the screen display and the prompts of the normal log-in program and could, in fact, pass the username and password along to the valid log-in program at the same time as they stole the user data. Some Trojans may contain actual code that does what it is supposed to be doing while performing additional nasty acts.

Some data security writers consider that a virus is simply a specific example of the class of Trojan horse programs. There is some validity to this usage because a virus is an unknown quantity that is hidden and transmitted along with a legitimate disk or program, and any program can be turned into a Trojan by infecting it with a virus. However, the term virus more properly refers to the added, infectious code rather than the virus/target combination. Therefore, the term Trojan refers to a deliberately misleading or modified program that does not reproduce itself.

An additional confusion with viruses involves Trojan horse programs that may be spread by e-mail. In years past, a Trojan program had to be posted on an electronic bulletin board system or a file archive site. Because of the static posting, a malicious program would soon be identified and eliminated. More recently, Trojan programs have been distributed by mass e-mail campaigns, by posting on Usenet newsgroup discussion groups, or through automated distribution agents (bots) on Internet relay chat (IRC) channels. Because source identification in these communications channels can be easily hidden, Trojan programs can be redistributed in a number of disguises, and specific identification of a malicious program has become much more difficult.

Social Engineering

A major aspect of Trojan design is the social engineering component. Trojan programs are advertised (in some sense) as having a positive component. The term positive can be in dispute, because a great many Trojans promise pornography or access to pornography — and this still seems to be depressingly effective. However, other promises can be made as well. A recent e-mail virus, in generating its messages, carried a list of a huge variety of subject lines, promising pornography, humor, virus information, an anti-virus program, and information about abuse of the recipient’s e-mail account. Sometimes, the message is simply vague and relies on curiosity.

It is instructive to examine some classic social engineering techniques. Formalizing the problem makes it easier to move toward effective solutions and to make use of realistic, pragmatic policies. Effective implementation of such policies, however good they are, is not possible without a considered user education program and cooperation from management.

Social engineering really is nothing more than a fancy name for the type of fraud and confidence games that have existed since snakes started selling apples. Security types tend to prefer a more academic-sounding definition, such as the use of nontechnical means to circumvent security policies and procedures. Social engineering can range from simple lying (such as a false description of the function of a file), to bullying and intimidation (in order to pressure a low-level employee into disclosing information), to association with a trusted source (such as the username from an infected machine), to dumpster diving (to find potentially valuable information people have carelessly discarded), to shoulder-surfing (to find out personal identification numbers and passwords).

REMOTE-ACCESS TROJANS (RATs)

Remote-access Trojans are programs designed to be installed, usually remotely, after systems are installed and working (and not in development, as is the case with logic bombs and backdoors). Their authors would generally like to have the programs referred to as remote administration tools so as to convey a sense of legitimacy. All networking software can, in a sense, be considered remote-access tools — we have file transfer sites and clients, World Wide Web servers and browsers, and terminal emulation software that allows a microcomputer user to log on to a distant computer and use it as if on-site. The RATs considered to be in the malware camp tend to fall somewhere in the middle of the spectrum. Once a package such as Back Orifice, Netbus, Bionet, or SubSeven is installed on the target computer, the controlling computer is able to obtain information about the target computer. The master computer will be able to download files from, and upload files to, the target.

The control computer will also be able to submit commands to the victim, which basically allows the distant operator to do pretty much anything to the prey. One other function is quite important: all of this activity goes on without any alert given to the owner or operator of the targeted computer.

When a RAT program has been run on a computer, it will install itself in such a way as to be active every time the computer is started subsequent to the installation. Information is sent back to the controlling computer (sometimes via an anonymous channel such as IRC) noting that the system is active. The user of the command computer is now able to explore the target, escalate access to other resources, and install other software, such as DDoS zombies, if so desired.

Once more, it should be noted that remote access tools are not viral. When the software is active, the master computer can submit commands to have the installation program sent on, via network transfer or e-mail, to other machines. In addition, RATs can be installed as a payload from a virus or Trojan. Rootkits, containing software that can subvert or replace normal operating system software, have been around for some time. RATs differ from rootkits in that a working account must be either subverted or created on the target computer in order to use a rootkit. RATs, once installed by a virus or Trojan, do not require access to an account.

DDoS ZOMBIES

DDoS (distributed denial-of-service) is a modified denial-of-service (DoS) attack. Denial-of-service attacks do not attempt to destroy or corrupt data; rather, they attempt to use up a computing resource to the point where normal work cannot proceed. The structure of a DDoS attack requires a master computer to control the attack, a target of the attack, and a number of computers in the middle that the master computer uses to generate the attack. These computers in between the master and the target are variously called agents or clients, but are usually referred to as running zombie programs.

Again, note that DDoS programs are not viral. It is, however, still in your best interest to ensure that no zombie programs are active: checking for zombie software protects not only your own system but also helps prevent attacks on others. If your computers are used to launch an assault on some other system, you could be liable for damages. The efficacy of this platform was demonstrated in early 2000 when a single teenager successfully paralyzed various prominent online players in quick succession, including Yahoo, Amazon, and eBay.
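Because RATs and DDoS zombies typically listen on the network for instructions, one simple (and far from complete) check is to look for listeners on the default ports historically associated with well-known packages. The sketch below assumes the commonly cited defaults for two TCP-based tools (12345 for Netbus and 27374 for SubSeven); Back Orifice's usual default, UDP port 31337, is harder to probe this way and is omitted. The check only inspects the local machine, and tools can be reconfigured to other ports, so a clean result here proves nothing by itself.

```python
# Quick local check for listeners on ports historically used by some RATs.
# The port numbers are commonly cited defaults and may be wrong for
# reconfigured or newer tools; treat any hit as a prompt for a proper
# investigation, not as proof either way.
import socket

DEFAULT_PORTS = {
    12345: "Netbus (TCP, common default)",
    27374: "SubSeven (TCP, common default)",
}

def tcp_listener_present(port, host="127.0.0.1", timeout=0.5):
    """Return True if something accepts TCP connections on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for port, label in DEFAULT_PORTS.items():
        if tcp_listener_present(port):
            print(f"Port {port} is accepting connections: possibly {label}")
        else:
            print(f"Port {port}: no listener")
```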

LOGIC BOMBS

Logic bombs are software modules set up to run in a quiescent state, but to monitor for a specific condition or set of conditions and to activate their payloads under those conditions. A logic bomb is generally implanted in or coded as part of an application under development or maintenance. Unlike a RAT or Trojan, it is difficult to implant a logic bomb after the fact. There are numerous examples of this type of activity, usually based upon actions taken by a programmer to deprive a company of needed resources in the event of employment termination. A Trojan or a virus may contain a logic bomb as part of the payload. A logic bomb involves no reproduction and no social engineering.

A persistent legend in regard to logic bombs involves what is known as the salami scam. According to the story, this involves siphoning off small amounts of money (in some versions, fractions of a cent) and crediting it to the account of the programmer over a very large number of transactions. Despite the fact that these stories appear in a number of computer security texts, this author has a standing challenge to anyone to come up with a documented case of such a scam. Over a period of eight years, the closest anyone has come is a story about a fast-food clerk who diddled the display on a drive-through window and collected an extra dime or quarter from most customers.

PRANKS

Pranks are very much a part of the computer culture — so much so that you can now buy commercially produced joke packages that allow you to perform “Stupid Mac (or PC, or Windows) Tricks.” There are countless pranks available as shareware. Some make the computer appear to insult the user; some use sound effects or voices; some use special visual effects. A fairly common thread running through most pranks is that the computer is, in some way, nonfunctional. Many pretend to have detected some kind of fault in the computer (and some pretend to rectify such faults, of course making things worse). One entry in the virus field is PARASCAN, the paranoid scanner. It pretends to find large numbers of infected files, although it does not actually check for any infections.

Generally speaking, pranks that create some kind of announcement are not malware; viruses that generate a screen or audio display are actually quite rare. The distinction between jokes and Trojans is harder to make, but pranks are intended for amusement. Joke programs may, of course, result in a denial-of-service if people find the prank message frightening.

One specific type of joke is the easter egg, a function hidden in a program and generally accessible only by some arcane sequence of commands. These may be seen as harmless but they do consume resources, even if only disk space, and also make the task of ensuring program integrity much more difficult.

MALWARE AND VIRUS EXAMPLES

It is all very well to provide academic information about the definitions and functions of different types of malware. It may be difficult to see how all this works in practice. In addition, it is often easier for people to understand how a particular technology works when presented with an actual example. Here, then, are specific examples of viruses and malware. All of these have been seen and been successful, to an extent, in the wild (outside of research situations). One benefit of looking at malware in this way is that the discussion is removed from the realms of the possible to the actual. For example, there has been a great deal of debate over the years about whether a virus can do damage to hardware. Theoretically, it is possible. In actual fact, it has not happened.

Viruses do dominate in this section, and there are reasons for this. First, there are more examples of viruses to draw from. This chapter is not meant to, and cannot, be an encyclopedia of the tens of thousands of viruses; but it is important to give examples of the major classes of viruses. Second, the possible range of Trojans is really only limited by what can be done with software. People generally do not feel that there is much difference between a Trojan that reformats the hard disk and one that only erases all the files. From the user’s perspective, the effect is pretty much the same; and the defensive measure that should have been taken (do not run unknown software) is also identical.

This material not only provides technical details but also looks at the history and some social factors involved. Social engineering is often involved in malware, and it is instructive to look at strategies that have been successful to determine policies that will protect users.

BOOT-SECTOR INFECTORS

Brain

Technically, the Brain family (Pakistani, Pakistani Brain, Lahore, and Ashar), although old and seldom seen anymore, raises a number of interesting points. Brain itself was the first known PC virus, aside from those written by Fred Cohen for his thesis. Unlike Cohen’s file viruses, however, Brain is a boot-sector infector.

Brain has been described as the first stealth virus. A request to view the boot sector of an infected disk on an infected system will result in a display of the original (pre-infection) boot sector.

However, the volume label of an infected diskette is set to “©Brain,” “©Ashar,” or “Y.C.1.E.R.P,” depending on the variant. Every time a directory listing is requested, the volume label is displayed; so it is difficult to understand why the virus uses stealth in dealing with the display of the boot sector.

In one of the most common Brain versions, there is unencrypted text giving the name, address, and telephone numbers of Brain Computer Services in Pakistan. The virus is copyrighted by “Ashar and Ashars” or “Brain & Amjads.” Brain is not intentionally or routinely destructive, and it is possible that the virus was intended to publicize the company. This was the earliest known PC virus, and viruses did not inspire the same revulsion that they tend to do today. Even some time after the later and more destructive viruses, Lehigh and Jerusalem, viruses were still seen as possibly neutral or even in some way beneficial. It may be that the author saw a self-reproducing program that lost, at most, three kilobytes of disk space as simply a novelty. In a way, such a virus as this would not be dissimilar to the easter egg applet pranks used by programmers working for major application publishers to express their individuality.

Fridrik Skulason, whose F-Prot has provided the engine for a number of anti-virus products over the years, exhaustively analyzed the later Ohio and Den Zuk versions of the Brain virus. The Ohio (Den Zuk 1) and Den Zuk (Venezuelan, Search) variants contain some of the same code as Brain in order to prevent overlaying by Brain. However, Ohio and Den Zuk identify and overwrite Brain infections with themselves. They can be described as single-shot anti-virus utilities targeting the Brain virus (at the expense, however, of causing the Ohio and Den Zuk infections). Skulason also found that the Den Zuk version would overwrite an Ohio infection. (This seeking activity gives rise to one of Den Zuk’s aliases: search.) It was also suspected that denzuko might have referred to the search for Brain infections. Extensive searches for the meaning of the words den zuk and denzuko in a number of languages, as an attempt to find clues to the identity of the virus author, turned up closely related words meaning sugar and knife as well as search. However, these turned out to be quite beside the point.

There is text in both Den Zuk and Ohio that suggests they were written by the same author. Ohio contains an address in Indonesia (and none in Ohio — the name derives from Ohio State University, where it was first identified). Both contain a ham-radio license number issued in Indonesia. Both contain the same programming bug: the FAT (file allocation table) and data areas are overwritten if a floppy disk with a higher capacity than 360 kB is infected. Den Zuk is a more sophisticated exercise in programming. Skulason concluded, therefore, that Ohio was in fact an earlier version of Den Zuk.

The virus’s author, apparently a college student in Indonesia, confirmed Skulason’s hypotheses. There had been attempts to trace the virus’s origins through the words denzuk and denzuko. In fact, Den Zuko turned out to be the author’s nickname, derived from John Travolta’s character in the movie Grease.

Stoned (and Variants)

The Stoned virus seems to have been written by a high school student in New Zealand — hence its other main alias, New Zealand. All evidence suggests that he wrote the virus only for study and that he took precautions against the release of the code. These safeguards proved insufficient, as it turned out. It is reported that his brother stole a copy and decided to infect the machines of friends.

The original version of Stoned is said to have been restricted to infecting floppy disks. The current, most common version of Stoned, however, infects all disks. It is an example of a second class of boot-sector-infecting viral programs in that it places itself in the master boot record or partition boot record of a hard disk instead of the boot sector (as it does on floppy disks). In common with most BSIs, Stoned moves the original sector into a new location on the disk. On hard disks and double-density floppies, this movement is not usually a problem. On high-density floppies, however, system information can be overwritten, resulting in loss of data. One version of Stoned reportedly does not infect 3½-inch diskettes; this version may well be the template for Michelangelo, which does not infect 720 kB disks either.

Michelangelo, Monkey, and Other Stoned Variants. Stoned has spawned a large number of mutations ranging from minor variations in the spelling of the payload message to the functionally different Empire, Monkey, and NoInt variations.

Michelangelo is generally believed by researchers to have been built on or mutated from the Stoned virus. The similarity of the replication mechanism, down to the inclusion of the same bugs, puts this theory beyond any reasonable doubt. Any successful virus is likely to be copied. Michelangelo is unusual only in the extent to which the payload has been modified.

Roger Riordan reported and named the virus in Australia in February of 1991. He suspected that the virus had entered the victim company on disks of software from Taiwan, but this hypothesis remains unproven. The date indicates the existence of the virus prior to March 6, 1991. This demonstrates that the virus can survive its own deletion of disk information every March 6, even though it destroys itself along with the system tracks of disks overwritten on that date.

This resiliency is not really surprising — few computer users understand that boot viruses can, in principle, infect any disk from any other disk, regardless of whether the disk is bootable, contains any program files, or contains any files at all.

Riordan determined that March 6 was the trigger date. It is often assumed from the name of the virus that it was intended to trigger on March 6 because that is the birthday of Michelangelo Buonarroti, the Renaissance artist, sculptor, and engineer. However, there is no text in the body of the virus, no reference to Michelangelo, and no evidence of any sort that the author of the virus was aware of the significance of that particular date. The name is simply the one that Riordan chose to give it, based on the fact that a friend with the same birth date knew that it was also Michelangelo’s.

By the beginning of 1992, commercial production software was being shipped on Michelangelo-infected floppies, and at least one company was shipping infected PC systems. It has been suggested that, by the end of February of that year, when the general public was becoming aware of the problem, the number of infected floppies out in the field may have been in the millions. Fortunately, most infected machines were checked and diagnosed before March 6 of that year.

The replication mechanism of Michelangelo is basically that of Stoned. It replaces the original boot sector on a floppy disk with a copy of itself. The virus moves the original boot sector to sector 3 (for 360 kB diskettes) or 14 (for 1.2 MB or 1.44 MB diskettes), and the virus contains a “loader” that points to this location. After the virus loads itself into memory, the original boot sector is run; to the user, the boot process appears to proceed normally. On hard disks, the original partition sector is moved to (0,0,7).

Michelangelo is no stealth virus. Examination of the boot blocks shows a clear difference between a valid sector and the one that is infected. (The absence of the normal system messages should be a tip-off — Michelangelo contains no text whatsoever.) In addition, Michelangelo reserves itself 2 kB at the top of memory. A simple run of DOS’s CHKDSK utility will show total conventional memory on the system; and if a 640 kB machine shows 655,360 bytes, then the computer is not infected with Michelangelo. (If the number is less, there may still be reasons other than a virus; and if the number is 655,360, that does not, of course, prove that no virus is present or active.)
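The numbers behind the CHKDSK check work out as follows. This is simply the arithmetic from the description above, not output from any particular machine; other resident software can also reserve conventional memory, so a low figure is a clue rather than a diagnosis.

```python
# Conventional-memory arithmetic for the CHKDSK check described above.
# 640 kB of conventional memory is 655,360 bytes; Michelangelo reserves the
# top 2 kB, so an infected 640 kB machine reports 2,048 bytes less. Other
# memory-resident software can lower the figure as well.
KB = 1024
clean_total = 640 * KB            # 655,360 bytes on an uninfected 640 kB PC
virus_reserved = 2 * KB           # memory Michelangelo reserves at the top
infected_total = clean_total - virus_reserved

print(clean_total)                # 655360
print(infected_total)             # 653312
```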

Removal is a simple matter of placing the original sector back where it belongs, thus wiping out the infection. This can be done with sector-editing utilities, or even with DEBUG, although it would normally be easier and safer to simply use an anti-virus utility. There have been many cases where a computer has been infected with both Stoned and Michelangelo. In this situation, the boot sector cannot be recovered, because both Stoned and Michelangelo use the same “landing zone” for the original sector; and the infection by the second virus overwrites the original boot sector with the contents of the first virus.

When an infected computer boots up, Michelangelo checks the date via Interrupt 1Ah. If the date is March 6, the virus then overwrites the first several cylinders of the disk with the contents of memory. Interrupt 1Ah is not usually available on the earliest PCs and XTs (with some exceptions). However, the disk that is overwritten is the disk from which the system is booting; a hard disk can be saved simply by booting from a floppy. Also, the damage is triggered only at boot time, although this is not altogether a positive. The fact that the damage occurs during the boot process means that the payload, like the infection mechanism, is no respecter of operating systems — it can and does trash non-DOS operating systems such as UNIX.

A number of suggestions were made in early 1992 as to how to deal with Michelangelo without using anti-virus software. Because so many anti-viral programs — commercial, shareware, and freeware — identified the virus, it seems odd that people were so desperate to avoid this obvious step of using a scanning program to find the virus. Some people recommended backing up data, which is always a good idea. And, given that Michelangelo is a boot-sector infector, it would not be stored on a tape backup. However, diskettes are a natural target for BSIs. Today, diskettes are much less favored for major backup purposes; Zip disks, tapes, and other high-capacity writeable media are cheap and highly available. At that time, however, many popular backup programs used proprietary non-DOS disk formats for reasons of speed and additional storage. These, if infected by Michelangelo, would become unusable.

Changing the computer clock was also a popular suggestion. Because Michelangelo was set to go off on March 6, theoretically you could just set the computer clock to make sure that it never reached March 6. However, many people did not understand the difference between the MS-DOS clock and the system clock read by Interrupt 1Ah. The MS-DOS DATE command did not always alter the system clock. Network-connected machines often have time-server functions, so the date would be reset to conform to the network. The year 1992 was a leap year, and many clocks did not deal with it properly; thus, for many computers, March 6 came on Thursday, not Friday. This suggestion comes up time and again for dealing with viruses with a known trigger date (CIH, for example) and was trotted out again for dealing with the Millennium Bug.

An even sillier suggestion was to test for Michelangelo by setting the date to March 6 and then rebooting the computer. This strategy became known as Michelangelo roulette.

One vendor actually reported an incident where a customer switched on a machine on the fatal morning and, when the machine promptly died, switched on the other machines in the office to see if the same thing happened. It did.

Many people suggested a modem avoidance strategy. Such a strategy is, of course, no defense worth mentioning against any boot-sector virus. Neither the master/partition boot record nor the boot sector is an identifiable, transferable file. Neither can be transmitted by an everyday user as a file over a modem or Ethernet connection, although an infected disk can be transferred over a network connection as a binary image. Although dropper programs are theoretically possible, they are rarely used as a means of disseminating a virus through unsuspecting users. The danger of getting a Michelangelo infection from a BBS was, therefore, so small that for all practical purposes it did not exist. Warning against bulletin boards, or, more recently, Web sites, merely proscribes a major source of advice and utility software.

Unlike the Columbus Day/Datacrime hypefest of 1989, the epidemic of Michelangelo in the spring of 1992 had its basis in fact. Vendors were making unsubstantiated claims for the numbers of infections, which, in retrospect, turned out to have been surprisingly accurate. More importantly, the research community as a whole was seeing large numbers of infections. The public was seeing them as well. No fewer than 15 companies shipped commercial products that turned out to be infected with the Michelangelo virus.

Two producers of commercial anti-viral programs released crippled freeware versions of their scanners. The programs did briefly mention that they checked only for Michelangelo, but certainly gave users the impression that they were checking the whole system. Happily, the trend over recent years has been to produce small, single-shot programs for dealing urgently with high-profile viruses rather than a crippled freeware version of a commercial package. Even this approach has its drawbacks — recently, there was an instance where a Hybris infection was almost overlooked because the freeware program used could detect only a single variant. Oddly, it was a later variant than the one actually found on the machine in question. It seems that the vendor assumed that anyone using it would already have updates of their product for the previous versions. Because the vendor in question was also responsible for one of the free Michelangelo scanners, perhaps the average vendor’s sense of ethical responsibility has not been raised as far as one could hope.

Because of the media attention, a number of checks were made that would not have been done otherwise. Hundreds and even thousands of copies of Michelangelo were found within single institutions. Because many copies had been found and removed, the number of hits on March 6 was not spectacular.

Predictably, perhaps, media reports on March 6 started to dismiss the Michelangelo scare as another over-hyped rumor, completely missing the reality that millions of machines had possibly been struck.

FILE INFECTORS

Lehigh

Lehigh only infects COMMAND.COM, the operating system interpreter program in MS-DOS, which rather restricts its capacity to spread because bootable floppy disks became much less common with the rise of hard disk drives and almost completely vanished with the advent of Windows. (The target of infection means that Lehigh can be considered a system infector under the more recent definition of that term.) Nevertheless, it received a great deal of publicity and had a direct impact on the anti-virus scene. Ken van Wyk, who was working at Lehigh at the time (and went on to join CERT, Carnegie Mellon University’s Computer Emergency Response Team), set up the VIRUS-L/comp.virus mailing list and newsgroup. Unfortunately, VIRUS-L seems to have disappeared, but it was for a number of years the primary source of accurate virus information and, in large measure, responsible for ensuring that the anti-virus research community did in fact become a community.

The Lehigh virus overwrote the slack space at the end of the COMMAND.COM file. This meant that the virus did not increase the size of infected files. (A later report of a 555-byte increase in file size was due to confusion over the size of the overwriting code.) When an infected COMMAND.COM was run (usually upon booting from an infected disk), the virus stayed resident in memory. When any access was made to another disk, via the TYPE, COPY, DIR, or other normal DOS commands, COMMAND.COM files on that disk would be infected. The virus kept a counter of infections: after four infections, the virus would overwrite the boot and FAT areas of disks with bytes copied from BIOS.

Lehigh (the virus, not the campus) is remarkably stealth-free. The primary defense of the virus was that, at the time, no one would have been looking for it. The virus altered the date stamp of infected COMMAND.COM files. If attempting an infection on a write-protected disk, the virus would not trap the WRITE PROTECT ERROR message, which is a serious giveaway if seen as a result of typing DIR — generating the directory listing should not require writing to the diskette (unless output is redirected).

The virus was limited in its target population to those disks that had a COMMAND.COM file and, more particularly, those that contained a full operating system. The virus was also self-limiting in that it would destroy itself once activated, and it would activate after only four reproductions. The Lehigh virus never did spread beyond the campus in that initial attack. Although it is found in a number of private virus collections and may be released into the wild from time to time, the virus has no real chance of spreading.

Jerusalem

In terms of the number of infections (copies or reproductions) that a virus produces, boot-sector viral programs long held an advantage in the microcomputer environment. Among file-infecting viral programs, however, the Jerusalem virus was the clear winner. It has another claim to fame as well: it almost certainly has the largest number of variants of any virus program known to date, at least in its class of parasitic file infectors.

Initially known to some as the Israeli virus, the version reported by Y. Radai in early 1988 (also sometimes referred to as 1813 or Jerusalem-B) was the most commonly encountered version. Although it was the first to be widely disseminated and was the first to be discovered and publicized, analysis suggests that it was the outcome of previous viral experiments.

A few things are common to pretty much all of the Jerusalem family. They usually infect both .COM and .EXE files. When an infected file is executed, the virus “goes TSR (Terminate and Stay Resident)” — that is, it installs itself into memory. Thus, it remains active even after the originally infected program is terminated. The .EXE programs executed after the program goes resident are infected by appending the virus code to the end of the file; .COM files are infected by prepending the code. Most variants carry some kind of date logic-bomb payload, often triggered on Friday the 13th. Sometimes the logic bomb is simply a message; often, it deletes programs as they are accessed.

Although Jerusalem tends to work well with .COM files, the differing structure of .EXE files has presented Jerusalem with a number of problems. Early versions of Jerusalem, not content with one infection, will reinfect .EXE files again and again so that they continually grow in size. This growth renders pointless the attempt at stealth that the programmer built in when he ensured that the file creation date was conserved and unchanged in an infected file. Also, .EXE programs that use internal loaders or overlay files tend to be infected in the wrong place and have portions of the original program overwritten. Although the virus was reported to slow down systems that were infected, it seems to have been the continual growth of .EXE files that led to the detection of the virus.

The great number of variants has contributed to severe naming and identification problems. Because a number of the variants are based on the same code, the signatures for one variant often match another, thus generating even more naming confusion.

This confusion is not unique to the Jerusalem family, of course, and is an ongoing concern in the anti-virus research community, while systems administrators are growing increasingly forceful and vociferous in their demands for a unified nomenclature.

An early infection was found in an office belonging to the Israeli defense forces, giving rise to the occasional synonym IDF. This synonym was actually problematical because it was more often used as a synonym for the unrelated Frodo virus. The common Jerusalem payload of file deletion on Friday the 13th (yet another alias) raised the question of why the logic bomb had not gone off on Friday, November 13, 1987. Subsequent analysis has shown that the virus will activate the payload only if the year is not 1987. The next Friday the 13th was May 13, 1988. Because the last day that Palestine existed as a nation was May 13, 1948, it was felt that the virus might have been an act of political terrorism. This supposition led to another alias, the PLO virus. However, Israel celebrates its holidays according to the Jewish calendar (no surprises there), and the independence celebrations were slated for three weeks before May 13, 1988. These facts, and the links between Jerusalem and the sURIV family, suggest that there is no intentional political link.

It is almost certain that the Jerusalem virus is, in fact, two viral programs combined. The two viruses, and others in the development family, have been found. sURIV 1.01 is a .COM-file infector — .COM is the easier file structure and therefore the easier program to infect. sURIV 2 is an .EXE-only infector and has considerably longer and more complex code. sURIV 3 infects both types of program files and has considerable duplication of code; it is, in fact, simply the first two versions concatenated together. Although the code in the sURIV programs and the 1813 version of Jerusalem is not absolutely identical, all the same features are duplicated. The payload date for sURIV is April 1, and the year has to be later than 1988. Although this seems to suggest that sURIV is a descendant of Jerusalem, the reverse is probably the case. Certainly the code is less sophisticated in the sURIV variants.

More recent viruses that infect Windows portable executable (PE) files, as well as Lindose/Winux, which infects both Windows PE and Linux ELF files, are considered to be an advance in virus technology. In fact, they are simply following in the footsteps of Jerusalem.

The Jerusalem virus was immensely successful as a template for variants. The code is reasonably straightforward and, for those with some familiarity with assembly programming, an excellent primer for writing viral programs affecting both .COM and .EXE files. It has a number of annoying bugs, however. It can misinfect some .EXE files. It can conflict with Novell NetWare, which requires the use of Interrupt 21h subfunctions that are also used by the virus. One of the Sunday variants is supposed to delete files on the seventh day of the week; the author did not realize that computers start counting from zero and that Sunday is actually the zero day of the week — so there is no seventh day, and the file deletions never actually happen.
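The off-by-one error just described is easy to reproduce. The sketch below uses zero-based day-of-week numbering with Sunday as day 0 (the convention described above for the DOS date service); it illustrates the logic error only and is not a reconstruction of the virus's actual code.

```python
# Zero-based day-of-week numbering, Sunday = 0 (as described above for the
# DOS date service). A check for "day == 7" can never be true, so a payload
# gated on it never fires. Purely an illustration of the logic error.
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

for day_number, name in enumerate(DAYS):       # day_number runs 0..6
    if day_number == 7:                        # intended "seventh day" test
        print(f"Payload would trigger on {name}")
    else:
        print(f"{name} is day {day_number}; no trigger")

# The loop prints "no trigger" for every day: there is no day numbered 7.
```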

E-MAIL VIRUSES

CHRISTMA exec

CHRISTMA exec, the Christmas Tree Virus/Worm, sometimes referred to as the BITNET chain letter, was probably the first major malware attack across networks. It was launched on December 9, 1987, and spread widely on BITNET, EARN, and IBM’s internal network (VNet). It has a number of claims to a small place in history. It was written, unusually, in REXX. It was mainframe-hosted (on VM/CMS systems) rather than microcomputer-hosted — quaint as that distinction sounds today, when the humblest PC can run UNIX.

CHRISTMA presented itself as a chain letter inviting the recipient to execute its code. This involvement of the user is why it is classed as the first e-mail virus rather than as a worm. When it was executed, the program drew a Christmas tree and mailed a copy of itself to everyone in the account holder’s equivalent of an address book, the user files NAMES and NETLOG. Conceptually, there is a direct line of succession from this worm to the social engineering worm/Trojan hybrids of today.

W97M/Melissa (Mailissa)

She came from alt.sex. Now, as the old joke goes, that I have your attention …

In this instance, however, the lure of sex was certainly employed to launch the virus into the wild. The source of the infestation of the Melissa Word macro virus (more formally identified as some variation on W97M/Melissa) was a posting on the Usenet newsgroup alt.sex. The message had a Word document attached. (More details of macro viruses are given later in regard to the Concept virus.) The posting suggested that the document contained account names and passwords for Web sites carrying salacious material. As one might expect in such a newsgroup, a number of people read the document. It carried a macro that used the functions of Microsoft Word and the Microsoft Outlook mailer program to reproduce and spread itself — rather successfully, as it turns out. Melissa is not the fastest-burning e-mail-aware malware to date, but it certainly held the record for a while.

Many mail programs, in the name of convenience, are becoming more automated.

Much of this automation has focused on running attached files, or scripting functions included in HTML-formatted messages, without requiring the intervention of the victim. To be susceptible to the effects of Melissa, a victim needed to be running Microsoft Word 97 or later, or Microsoft Outlook 98 or later. It was also necessary to receive an infected file and read it into Word without disabling the macro capability. However, all of these conditions are normal for many users. Receiving infected documents has never been a problem, from WM/Concept onward. Melissa increased the likelihood that any given individual user would eventually receive an infected document by the sheer weight of numbers. However, by judicious social engineering, the virus also increased the chances of persuading a victim to open an infected document. Many mail programs will now detect the type of a file from its extension and start the appropriate program automatically.

On execution, the virus first checks to see whether an infectable version of Word is running. If so, Melissa reduces the level of security on Word so that no future warnings of macro content are displayed. Under Word 2000, the virus blocks access to the menu item that allows you to raise your security level and sets your macro virus detection to the lowest level — that is, to none. Restoring the security level requires the deletion of the NORMAL.DOT file and the consequent loss of legitimate macros and customizations.

The virus checks for the Registry key HKEY_CURRENT_USER\Software\Microsoft\Office\Melissa?\ with a value of “… by Kwyjibo.” (The “Kwyjibo” entry seems to be a reference to the “Bart the Genius” episode of The Simpsons television cartoon program wherein Bart Simpson used this word to win a Scrabble match.) If that key is not found, the macro starts up Outlook and sends itself as an attachment to the top 50 names in each of your address lists. Most people have only one (the default is Contacts); but if there is more than one, then Outlook will send more than 50 copies of the message. Outlook also sorts address lists so that other mailing lists are at the top of the list. In addition, under a Microsoft Exchange Server, the macro can send copies out to the global address lists on the server. Therefore, a single infected machine may distribute far more than 50 copies of the message/virus in the next “hop.”

Like most macro viruses, Melissa worked by infecting the global template and infecting all documents thereafter. Each document created or reviewed was infected when closed. Each infected document activated the macro when the file was opened. Avoiding Outlook did not offer protection from the virus; it only meant that the 50 copies would not be sent out automatically. If Microsoft Word was used, but not Outlook, the machine would still be infected, and infected documents could still be sent out in the normal course of operations.
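Because Melissa leaves the marker described above in the registry, one quick detection check is simply to look for it. The sketch below is a hedged illustration for Windows systems: descriptions differ on whether the marker is a subkey or a value named Melissa? under HKEY_CURRENT_USER\Software\Microsoft\Office, so the sketch tries both interpretations. Absence of the marker does not prove a machine is clean, and any hit should be followed up with a proper scanner.

```python
# Check for the "Melissa?" marker described above in the current user's
# registry hive. Runs on Windows only (winreg is in the standard library
# there). Descriptions vary on whether the marker is a value or a subkey
# under Software\Microsoft\Office, so both forms are tried.
import winreg

OFFICE_KEY = r"Software\Microsoft\Office"

def melissa_marker_present():
    # Interpretation 1: a value named "Melissa?" under the Office key.
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, OFFICE_KEY) as key:
            value, _type = winreg.QueryValueEx(key, "Melissa?")
            if "Kwyjibo" in str(value):
                return True
    except OSError:
        pass
    # Interpretation 2: a subkey named "Melissa?" under the Office key.
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, OFFICE_KEY + r"\Melissa?")
        winreg.CloseKey(key)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    if melissa_marker_present():
        print("Melissa marker found: run a full anti-virus scan.")
    else:
        print("No marker found (this alone does not prove the machine is clean).")
```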

The virus cannot invoke the mass-mailer dispersal mechanism on Macintosh systems, but it can be stored and resent from Macs. As with any Word macro virus, the source code travels with the infection, and it was very easy to create modifications to Melissa. Many Melissa variants with different subjects and messages started to appear shortly after the original virus appeared. The first similar Excel macro virus was called Papa, although this and its progeny never had the same global impact as Melissa.

In fact, the source code was published more widely than usual in newsgroups, on the Web, and elsewhere. In one distressing instance, a major security organization issued a flash advisory including a range of information of varying quality and relevance. Unfortunately, it also included the entire source code, trivially modified so that it would not run without some tweaking.

As with many more recent mail-borne nuisances, a number of fixes such as sendmail and procmail recipes for mail servers and mail filtering systems were devised very quickly. However, these fixes were often not fully tested or debugged. One version would trap most of the legitimate warning messages about Melissa. Mail filters can, of course, become problems themselves: when the author’s initial report on the virus was mailed out, it bounced from one system because of an automated filter that interpreted the message as a hoax virus warning.

W95.Hybris

The Hybris worm started to make its mark in late September 2000. It is disseminated by an e-mail message that is often, but by no means always, sent from hahaha@sexyfun.net. This address is forged to make it harder to trace the infected source. However, the sexyfun.net domain was later set up and used as a Hybris information resource. The worm may check the language settings of the host computer and select a “story” relating to Snow White and the Seven Dwarfs in English, French, Spanish, or Portuguese; this story is used as the message text accompanying the copy of the worm when it is mailed out, and implies that the attached file is a kind of pornographic screen saver.

When the worm attachment is executed, the WSOCK32.DLL file is modified or replaced so that it can track e-mail and other Internet traffic. When the worm detects an e-mail address, it sends infected e-mail to that address. It also connects to alt.comp.virus and uploads encrypted plug-in modules to the group. If it finds newer plug-ins, the worm downloads them for its own use. For several months, alt.comp.virus was almost unusable because of the sheer numbers of plug-ins clogging the group.

WORMS

The Morris Worm (Internet Worm)

In the autumn of 1988, most people were blissfully ignorant of viruses and the Internet. However, I recall that VIRUS-L had been established and was very active. At that time the list was still an exploder re-mailer, rather than a digest, but postings were coming out pretty much on a daily basis. However, there were no postings on November 3 or on November 4. It was not until November 5, actually, that I found out why.

The Morris Worm did not actually bring the Internet in general, and e-mail in particular, to the proverbial grinding halt. It was able to run and propagate only on machines running specific versions of the UNIX operating system on specific hardware platforms. However, given that the machines connected to the Internet also comprise the transport mechanism for the Internet, a “minority group” of server-class machines, thus affected, degraded the performance of the Net as a whole. Indeed, it can be argued that, despite the greater volumes of mail generated by Melissa and LoveLetter and the tendency of some types of mail servers to achieve meltdown when faced with the consequent traffic, the Internet as a whole has proved to be somewhat more resilient in recent years.

During the 1988 mailstorm, a sufficient number of machines had been affected to impair e-mail and distribution-list mailings. Some mail was lost, either by mailers that could not handle the large volumes that backed up or by mail queues being dumped in an effort to disinfect systems. Most mail was substantially delayed. In some cases, mail would have been rerouted via a possibly less efficient path after a certain time. In other cases, backbone machines, affected by the problem, were simply much slower at processing mail. In still others, mail-routing software would crash or be taken out of service, with a consequent delay in mail delivery. Ironically, electronic mail was the primary means of communication of the various parties attempting to deal with the trouble. By Sunday, November 6, mail was flowing, distribution lists and electronic periodicals were running, and the news was getting around. However, an enormous volume of traffic was given over to one topic — the Internet Worm.

In many ways, the Internet Worm is the story of data security in miniature. The Worm used trusted links, password cracking, security holes in standard programs, standard and default operations, and, of course, the power of viral replication.

“Big Iron” mainframes and other multi-user server systems are generally designed to run constantly, and they execute various types of programs and procedures in the absence of operator intervention. Many hundreds of functions and processes may be running all the time, expressly designed to neither require nor report to an operator.


APPLICATION PROGRAM SECURITY each other; others run independently. In the UNIX world, such small utility programs are referred to as daemons, after the supposedly subordinate entities that take over mundane tasks and extend the power of the wizard, or skilled operator. Many of these utility programs deal with the communications between systems. Mail, in the network sense, covers much more than the delivery of text messages between users. Network mail between systems may deal with file transfers, the routing of information for reaching remote systems, or even upgrades and patches to system software. When the Internet Worm was well established on a machine, it would try to infect another. On many systems this attempt was all too easy — computers on the Internet were meant to generate activity on each other, and some had no protection in terms of the type of access and activity allowed. The finger program is one that allows a user to obtain information about another user. The server program fingerd is the daemon that listens for calls from the finger client. The version of fingerd common at the time of the Internet Worm had a minor problem: it did not check how much information it was given. It would take as much as it could hold and leave the rest to overflow. The rest, unfortunately, could be used to start a process on the computer, and this process was used as part of the attack. This kind of buffer overflow attack continues to be very common, taking advantage of similar weaknesses in a wide range of applications and utilities. The sendmail program is the engine of most mail-oriented processes on UNIX systems connected to the Internet. In principle, it should only allow data received from another system to be passed to a user address. However, there is a debug mode that allows commands to be passed to the system. Some versions of UNIX were shipped with the debug mode enabled by default. Even worse, the debug mode was often enabled during installation of sendmail for testing and then never turned off. When the Worm accessed a system, it was fed with the main program from the previously infected site. Two programs were used, one for each infected platform. If neither program could work, the Worm would erase itself. If the new host was suitable, the Worm looked for further hosts and connections. The program also tried to break into user accounts on the infected machine. It used standard password-cracking techniques such as simple variations on the name of the account and the user. It carried a dictionary of words likely to be used as passwords, and would also look for a dictionary on the new machine and attempt to use that as well. If an account were cracked, the Worm would look for accounts that this user had on other computers, using standard UNIX tools. The Worm did include a means of checking for copies already running on a target computer. However, it took some time to terminate the program; and 596


Malware and Computer Viruses the Worm regularly produced copies of itself that would not respond to the request for termination at all. The copies of the Worm did destroy themselves — having first made a new copy. In this way, the identifying process ID number would continually change. The Worm was not intentionally destructive. However, the mere presence of the program had implications for the infected systems and for those associated with them. The multiple copies of the program that ran on the host machines had a serious impact on other processes. Also, communications links and processes were used to propagate the Worm rather than to support the legitimate work for which they were intended. Linux Worms By spring 2001, a number of examples of Linux malware had been seen. Interestingly, while the Windows viruses generally followed the CHRISTMA exec style of having users run the scripts and programs, the new Linux worms were similar to the Internet/Morris/UNIX worms in that they rely primarily on bugs in automatic networking software. Ramen. The Ramen worm makes use of security vulnerabilities in default installations of Red Hat Linux 6.2 and 7.0 using specific versions of the wu-ftp, rpc.statd, and LPRng programs. The worm defaces Web servers by replacing index.html and scans for other vulnerable systems. It does this initially by opening an ftp connection and checking the remote system’s ftp banner message. If the system is vulnerable, the worm uses one of the exploitable services to create a working directory; it then downloads a copy of itself from the local (attacking) system.
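Ramen's banner check can be turned around for defensive use. The following minimal sketch, an illustration only and not the worm's actual code, grabs the FTP greeting banner from hosts you administer and flags version strings you consider suspect; the host address and the banner fragments are hypothetical placeholders rather than authoritative vulnerability data.

# banner_check.py -- illustrative defensive sketch; host and version
# strings below are example placeholders only.
import socket

SUSPECT_STRINGS = [b"wu-2.6.0", b"wu-2.5"]       # example banner fragments

def grab_ftp_banner(host, port=21, timeout=5):
    """Return the first data the FTP service sends on connect (its banner)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        return sock.recv(512)

def looks_vulnerable(banner):
    """True if the banner contains any fragment from the watch list."""
    return any(fragment in banner for fragment in SUSPECT_STRINGS)

if __name__ == "__main__":
    for host in ["192.0.2.10"]:                  # replace with hosts you administer
        try:
            banner = grab_ftp_banner(host)
        except OSError as exc:
            print(host, "no banner:", exc)
            continue
        flag = "CHECK" if looks_vulnerable(banner) else "ok"
        print(host, flag, banner.decode(errors="replace").strip())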

Compromised systems send out e-mail messages to two Hotmail and Yahoo accounts, and ftp services are disabled. Ramen's SYN scanning may disrupt network services if multicasting is supported by the network.

Lion. Lion uses a buffer overflow vulnerability in the bind program to spread. When it infects, Lion sends a copy of the output of the ifconfig command, together with /etc/passwd and /etc/shadow, to an e-mail address in the china.com domain. Next, the worm adds an entry to /etc/inetd.conf and restarts inetd. This entry would allow Lion to download components from a (now closed) Web server located in China. Subsequently, Lion scans random class B subnets in much the same way as Ramen, looking for vulnerable hosts. The worm may install a rootkit onto infected systems. This backdoor disables the syslogd daemon and adds a Trojanized ssh (secure shell) daemon.

The worm replaces several system executables with modified versions. The /bin/in.telnetd and /bin/mjy files provide additional backdoor functionality and attempt to conceal the rootkit's presence by hiding files and processes.
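Replaced system binaries of this kind are exactly what the change-detection approach described later in this chapter is meant to catch. The sketch below is a minimal illustration of that idea, assuming a small, hand-picked list of paths; a kernel-level rootkit can subvert any user-space check of this sort, so it is a teaching aid rather than a defense.

# baseline.py -- minimal change-detection sketch; the watched paths are
# example placeholders, not a recommended configuration.
import hashlib, json, os, sys

WATCHED = ["/bin/ps", "/usr/sbin/in.telnetd", "/usr/sbin/sshd"]   # example paths
BASELINE_FILE = "baseline.json"

def sha256_of(path):
    """Hash a file in chunks so large binaries do not load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    """Return {path: digest} for every watched file that exists."""
    return {p: sha256_of(p) for p in WATCHED if os.path.exists(p)}

if __name__ == "__main__":
    if sys.argv[1:] == ["init"]:
        with open(BASELINE_FILE, "w") as f:
            json.dump(snapshot(), f, indent=2)
        print("baseline written")
    else:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
        current = snapshot()
        for path, digest in current.items():
            if baseline.get(path) != digest:
                print("CHANGED:", path)
        for path in set(baseline) - set(current):
            print("MISSING:", path)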


Adore (Linux/Red). Adore is a Linux worm similar to Linux/Ramen and Linux/Lion. It uses vulnerabilities in wu-ftpd, bind, lpd, and rpc.statd that enable an intruder to gain root access and run unauthorized code. The worm attempts to send IP configuration data, information about running processes, and copies of /etc/hosts and /etc/shadow to e-mail addresses in China. It also scans for class B IP addresses.

Adore drops a script called 0anacron into the /etc/cron.daily directory so that the script runs as a daily cron job. The cron utility executes scheduled tasks at predetermined times. This script removes the worm from the infected host. A modified version of the system program /bin/ps that conceals the presence of the worm’s processes replaces the original. Code Red Code Red uses a known vulnerability to target Microsoft IIS (Internet Information Server) Web servers. Despite the fact that a patch for the loophole had been available for five months prior to the release of Code Red, the worm managed to infect 350,000 servers within nine to thirteen hours. When a host gets infected, it starts to scan for other hosts to infect. It probes random IP addresses, but the code is flawed by always using the same seed for the random number generator. Therefore, each infected server starts probing the same addresses that have been done before. (It was this bug that allowed the establishment of such a precise count for the number of infections.) During a certain period of time the worm only spreads, but then it initiates a denial-of-service (DoS) attack against www1.whitehouse.gov. However, because this particular machine name was only an overflow server, it was taken offline prior to the attack and no disruptions resulted. The worm changed the front page of an infected server to display certain text and a background color of red — hence the name of the worm. Code Red definitely became a media virus. Although it infected at least 350,000 machines within hours, it had probably almost exhausted its target population by that time. Despite this, the FBI held a rather ill-informed press conference to warn of the worm. Code Red seems to have spawned quite a family, each variant improving slightly on the random probing mechanism. In fact, there is considerable evidence that Nimda is a descendent of Code Red. Nimda variants all use a number of means to spread. Like Code Red, Nimda searches random IP addresses for unpatched Microsoft IIS machines. Nimda will also alter Web pages in order to download and install itself on computers browsing an infected Web site using a known exploit in Microsoft Internet Explorer’s handling of Java. Nimda will also mail itself as 598


Malware and Computer Viruses a file attachment and will install itself on any computer on which the file attachment is executed. Nimda is normally e-mailed in HTML format and may install automatically when viewed using a known exploit in Microsoft Internet Explorer. Nimda will also create e-mail and news files on network shares and will install itself if these files are opened. MACRO VIRUSES Concept WM/Concept was by no means the first macro virus. HyperCard viruses were already commonplace in the Macintosh arena when WM/Concept appeared, and a number of anti-virus researchers had explored WordBasic and other malware-friendly macro environments (notably Lotus 1–2–3) long before the virus appeared in 1995. However, WM/Concept was the first macro virus to be publicly described as such, and certainly the most successful in terms of spread. For awhile, it was easily the most widely found virus in the world. Oddly enough, however, its appearance was greeted with disbelief in some quarters. After all, a Word file is usually thought of as data rather than a program file. People cling to the belief that, because executable files run programs and data files contain data, there is a clear-cut distinction between the two file types. In fact, this has never been true; and the von Neumann architecture makes such a differentiation impossible. What may be perceived as a data file may be, in reality, a program. A PostScript file is, in fact, a program read and acted upon by a PostScript interpreter program. A printer normally executes this program, but a program such as GhostView can also interpret a PostScript file and print it to the screen on the host computer. The first in-the-wild examples specifically targeted Microsoft Word version 6, but code for viruses infecting Excel and Ami Pro also appeared very quickly. All versions of Word for Windows and Word 6 and later for the Macintosh include a sophisticated macro language (WordBasic in older versions, and later Visual Basic for Applications, or VBA). Such applications are capable of all the functions normally associated with a high-level programming language such as BASIC. In fact, macro languages used by Windows applications are based on Microsoft’s Visual Basic. Concept spread far and (for its time) rapidly. It got something of a boost when two companies accidentally shipped it in infected documents on CD-ROM. The first instance was a Microsoft CD called MicroSoft Windows ’95 Software Compatibility Test. The CD was shipped to a number of large original equipment manufacturing (OEM) companies in the summer of 1995 as a means of checking compatibility with Windows 95, which was due for imminent release. However, the CD contained a document called 599


APPLICATION PROGRAM SECURITY OEMLTR.DOC, which was infected with Concept. A few months later, Microsoft UK distributed the virus on another CD, The Microsoft Office 95 and Windows 95 Business Guide, in a document called HELPDESK.DOC. Concept was fairly obvious and could be forestalled and even fixed (with patience) without the aid of anti-virus software. When a Concept-infected file was opened, a message box appeared containing the number 1 and an OK button. You could also detect the virus, presence by checking the Tools/Macros submenu for the presence of macros. A WM/Concept.A infection is characterized by the presence of the macros AAAZFS, AAAZAO, AutoOpen, Payload, and FileSaveAs. Any document might legitimately use AutoOpen or FileSaveAs. However, macros with the names Payload, AAAZFS, and AAAZAO are something of a giveaway. The macros are not encrypted, so it is easy to spot the virus. On the other hand, this lack of encryption also made it easy to modify the code. Virus writers learned almost immediately to conceal the internals of their macros by implementing them as execute-only macros, which cannot be edited or easily viewed. Although Concept.A has a Payload macro, it has no actual payload. Famously, it contains the string “That’s enough to prove my point,” which explains the name Concept (as in “proof of concept”). Concept.A was a fairly harmless affair, as viruses go: it tampered with Word 6’s global template (normally NORMAL.DOT, or Normal on a Macintosh) so that files were saved as templates and ran the infective AutoOpen macro. This gave Mac users an additional advantage in that template files on the Mac have a different icon to document files. As long as the virus infected only template files, this icon was a frequently found heads-up to Mac users that they might have a virus problem. However, in later versions of Word, the distinction between documents and templates is less absolute; and that particular heuristic has become less viable. In a sense, the main importance of Concept was that the code could be altered very quickly to incorporate a destructive payload, alternative infection techniques, and evasion of the first attempts at detecting it. This virus has been described as the first cross-platform virus in that it works on any platform. However, this description is not altogether accurate: it only infected systems running Word 6 or Word 95, although versions are known that can infect Word 97 and later. SCRIPT VIRUSES VBS/LoveLetter LoveLetter first hit the nets on May 3, 2000. It spread rapidly, arguably faster than Melissa had the previous year. 600


Malware and Computer Viruses The original LoveLetter came in an e-mail with a subject line of “I LOVE YOU.” The message consisted of a short note urging you to read the attached love letter. The attachment filename, LOVE-LETTER-FORYOU.TXT.vbs, was a fairly obvious piece of social engineering. The .TXT bit was supposed to make people think that the attachment was a text file and thus safe to read. At that point, many people had no idea what the .VBS extension signified; and in any case they might have been unaware that, if a filename has a double extension, only the last filename extension has any special significance. Putting vbs in lower case was likely meant to play down the extension’s significance. However Windows, like DOS before it, is not case sensitive when it comes to filenames, and the .vbs extension indicates a Visual Basic script. If Windows 98, Windows 2000, Internet Explorer 5, Outlook 5, or a few other programs are installed, then so is Windows Script Host (WSH); and there is a file association binding the .vbs extension to wscript.exe. In that case, double-clicking on the file attachment is enough to start WSH and interpret the contents of the “love letter.” The infection mechanism included the installation of some files in the Windows and System directories. These files were simply copies of the original .vbs file — in one case keeping the name of LOVE-LETTER-FORYOU.TXT.vbs, but in other cases renaming files to fool people into thinking that they were part of the system (MSKERNEL32.vbs and WIN32DLL.vbs). The virus made changes to the Registry so that these files would be run when the computer started up. Today, many organizations routinely quarantine or bounce files with a .VBS extension (especially a double extension) at the mail gateway. LoveLetter infects files with the extensions .VBS, .VBE, .JS, .JSE, .CSS, .WSH, .SCT, .HTA, .JPG, .JPEG, .MP2, and .MP3. The infection routine searches local drives and all mounted network drives, so shared directories can be an additional source of infection. The routines overwrite most of these files with a copy of the script (that is, the original file is not preserved anywhere, although the new file has a different name) and change the filenames from (for example) picture.jpg to picture.jpg.vbs. In some cases, the virus simply deletes the original file. MPEGs, however, are not overwritten. The original file, say song.mp3, is marked as hidden; and a new file, song.mp3.vbs, is created with a copy of the virus. The .vbs extension must, of course, be added for the virus to be effective. Once the virus has copied itself all over a host machine, it starts to spread to other machines. If Outlook is present, the virus will use any addresses associated with the mail program to send copies of itself (but once only). As with Melissa, this means that when a copy of LoveLetter was received, it would appear to come from someone known to the recipient. In 601


APPLICATION PROGRAM SECURITY addition, the program tries to make a connection to IRC, using the mIRC chat program, and spread that way. The Love Bug (as it was also known) creates another copy of the file, LOVE-LETTER-FOR-YOU.HTM, in the Windows System directory, and then sends that copy to any user who joins the IRC channel while the session is active. When a system is infected, the worm attempts to download a Trojan application from a Web site in the Philippines by changing the start-up URL in Internet Explorer. The file, named WIN-BUGSFIX.exe, will try to collect various password files and e-mail them to an address in the Philippines. If the file is executed, the Trojan also creates a hidden window called BAROK and remains resident and active in memory. However, this site was probably overloaded in the early hours of the LoveLetter infection, and was quickly taken down. A very large number of LoveLetter “cleaners” were made available. Interestingly, most of them were Visual Basic scripts themselves. Unfortunately, at least two variants of the virus pretended to be disinfecting tools and did more damage than the original virus. Because the virus is an unencrypted script file, it carries its own source code with it. This means that variants started appearing within hours. Over a dozen were reported in the weekend after the virus first struck, and many more have been observed since. One of the more successful of these thanked the recipient for the order of a Mother’s Day gift and claimed that the recipient’s credit card had been charged $326.92 as per the attached invoice. Obviously, this ruse relied on people being too angry to think about how anybody could charge their credit card when they had not given the number to a vendor. Certainly, the variants showed a certain amount of innovation in the field of social engineering, if not in the actual code. One derivative targets UNIX systems using shell scripts but uses a very similar mechanism. There have been estimates of damage stemming from LoveLetter in the billions of dollars. It is very difficult to justify those figures. Certainly, a number of e-mail systems were clogged, including those of some very large organizations. Many administrators shut down mail entirely rather than turn to filtering. In addition, the resetting of Registry entries is likely to be somewhat time-consuming. Text in the virus includes the string “Manila, Philippines.” There are also the two Philippine e-mail addresses in the code and the Web site’s URL. However, all charges against the individual long thought to have been the culprit were eventually dropped by the Manila Department of Justice. 602
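Much of LoveLetter's success came down to the double extension. A mail gateway or desktop script can flag that trick with a few lines of code; the sketch below is one such illustration, and the extension lists are an example policy rather than a complete or recommended configuration.

# attachment_check.py -- illustrative filename filter; extension sets are
# example policy only.
import os

EXECUTABLE_EXTS = {".vbs", ".vbe", ".js", ".jse", ".wsh", ".sct", ".hta",
                   ".exe", ".scr", ".pif"}       # extensions Windows may execute
DISGUISE_EXTS = {".txt", ".jpg", ".jpeg", ".doc", ".mp3"}   # "safe-looking" extensions

def is_suspicious(filename):
    """Flag names like LOVE-LETTER-FOR-YOU.TXT.vbs: a harmless-looking
    extension immediately followed by an executable one."""
    root, last = os.path.splitext(filename.lower())
    _, second_last = os.path.splitext(root)
    return last in EXECUTABLE_EXTS and second_last in DISGUISE_EXTS

if __name__ == "__main__":
    for name in ["LOVE-LETTER-FOR-YOU.TXT.vbs", "report.doc", "song.mp3.vbs"]:
        print(name, "->", "quarantine" if is_suspicious(name) else "pass")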


Malware and Computer Viruses COMBINATIONS AND CONVERGENCE BadTrans BadTrans is a Win32 e-mail virus with backdoor functionality. It was found in the wild in April 2001. The worm uses MAPI functions to access and respond to unread messages. The Trojan component is a version of Hooker, a password-stealing Trojan, and mails system information to [email protected]. On infection, the worm copies itself to \Windows as INETD.EXE and drops the HKK32.EXE Trojan, also to the Windows folder. The password stealer is executed and then moved to the system director y as KERN32.EXE, dropping a keystroke logging DLL (dynamic link library) at the same time. The worm modifies WIN.INI (Windows 9x) or the Registry (Windows NT/2000) so that it is run on start-up. When infective mail is sent, the worm randomly selects the attachment filename from a number of variants, some of them obviously influenced by previous worms. The subject field in worm messages is the same as in the original message, preceded by “Re:” so that it appears to be a response to that message. The message body also looks like a reply to the original message, which the body quotes in full. At the end of the quote, there is a single line, “Take a look to the attachment.” The worm attempts to avoid answering the same mail twice or answering its own messages from other victim systems by adding two spaces to the end of the subject field and not responding to any mail with such a subject line. This mechanism is unreliable, however, because mail servers are likely to discard trailing spaces. In this event, an infective message received on a machine already infected will generate a response from the local instance of the worm, thus initiating a potential loop. A loop can also be initiated if the worm is unable to mark answered messages, as can happen with certain mail clients. Such a loop could result in a mail server meltdown. HOAXES Good Times Good Times is probably the most famous of all false alerts, and it was certainly the earliest that got widely distributed. Some controversy persists over the identity of the originators of the message, but it is possible that it was a sincere, if misguided, attempt to warn others. The hoax probably started in early December of 1994. In 1995, the FCC variant of the hoax began circulating. It seems most likely that the Good Times alert was started by a group or an individual who had seen a computer failure without understanding the cause and associated it with an e-mail message that had Good Times in the 603


APPLICATION PROGRAM SECURITY subject line. (In fact, there are indications that the message started out on the AOL system, and it is known that there are bugs in AOL’s mail software that can cause the program to hang.) The announcement states that there was a message identified by the title of Good Times that, when read, would crash a computer. The message was said to be a virus, although there was nothing viral about that sort of activity (even if it were possible). At the time of the original Good Times message, e-mail was almost universally text based. Suffice it to say that the possibility of a straightforward text message carrying a virus in an infective form is remote. The fact that the warning contained almost no details at all should have been an indication that the message was not quite right. There was no information on how to detect, avoid, or get rid of the virus, except for its warning not to read messages with Good Times in the subject line. (The irony of the fact that many of the warnings contained these words seems to have escaped most people.) Pathetically (and far from uniquely), a member of the vx community (Virus eXchange, those who write and spread viruses) produced a Good Times virus. Like the virus named after the older Proto-T hoax, the real Good Times was an uninteresting specimen, having nothing in common with the original alert. It is generally known as GT-Spoof by the anti-virus community, and was hardly ever found in the field. Hoaxes are depressingly common and tend to have a number of common characteristics. Here is an annotated version of one: There is a virus out now sent to people via e-mail … it is called the A.I.D.S. VIRUS.

There are, in fact, one or two AIDS viruses, but they are simple file-infecting viruses that have nothing to do with e-mail. It will destroy your memory, sound card and speakers, drive.

Many hoaxes suggest this kind of massive damage, including damage to hardware. And it will infect your mouse or pointing device as well as your keyboards.

Hoaxes also tend to state that the new virus has extreme forms of infection. In this case, it would be impossible for a virus to infect pointing devices or keyboards unless those pieces of equipment have memory and processing capabilities. None of these hoax warnings really detail how the virus is supposed to pass itself along. Making what you type not able to register on the screen. It self-terminates only after it eats 5MB of hard drive space 604


Malware and Computer Viruses More damage claims … It will come via e-mail called “OPEN: VERY COOL! :)”

And the virus has no other characteristics, according to this alert. PASS IT ON QUICKLY & TO AS MANY PEOPLE AS POSSIBLE!!

This, of course, is the real virus, getting the user to spread it. TROJAN The AIDS Trojan Extortion Scam In the fall of 1989, approximately 10,000 copies of an “AIDS Information” package were sent out from a company calling itself PC Cyborg. Some were received at medical establishments; a number were received at other types of businesses. The packages appeared to have been professionally produced. Accompanying letters usually referred to them as sample or review copies. However, the packages also contained a very interesting license agreement: In case of breach of license, PC Cyborg Corporation reserves the right to use program mechanisms to ensure termination of the use of these programs. These program mechanisms will adversely affect other program applications on microcomputers. You are hereby advised of the most serious consequences of your failure to abide by the terms of this license agreement.

Further in the license is the sentence: “Warning: Do not use these programs unless you are prepared to pay for them.” The disks contained an installation program and a very simple AIDS information file and risk assessment. The installation program appeared to only copy the AIDS program onto the target hard disk, but in reality did much more. A hidden directory was created with a nonprinting character name, and a hidden program file with a nonprinting character in the name was installed. The AUTOEXEC.BAT file was renamed and replaced with one that called the hidden program and then the original AUTOEXEC. The hidden program kept track of the number of times the computer was rebooted and, after a certain number, encrypted the hard disk. The user was then presented with an invoice and a demand to pay the license fee in return for the encryption key. Two major versions were found to have been shipped. One, which waited for 90 reboots, was thought to be the real attempt; an earlier version, which encrypted after one reboot, alerted authorities and was thought to be an error on the part of the principals of PC Cyborg. The Panamanian address for PC Cyborg, thought by some to be a fake, turned out to be real. Four principals were identified, as well as an American accomplice who seems to have had plans to send 200,000 copies to 605


APPLICATION PROGRAM SECURITY American firms if the European test worked. The trial of the American, Joseph Popp, was suspended in Britain because his bizarre behavior in court was seen as an indication that he was unfit to plead. An Italian court, however, found him guilty and sentenced him in absentia. RATS BackOrifice BackOrifice was developed by the hacker group Cult of the Dead Cow in order to take control of Windows 95 and 98 systems. A newer version, BackOrifice2000 (BO2K), was created in July 1999 in order to control Windows NT and 2000 systems. As with all RATs, the BackOrifice2000 backdoor has two major parts: client and server. The server part needs to be installed on a computer system to gain access to it with the client part. The client part connects to the server part via network and is used to perform a wide variety of actions on the remote system. The client part has a dialogue interface that eases the process of hacking the remote computer. In the same package there is also a configuration utility that is used to configure the server part of BO2K. It asks the user to specify networking type (TCP or UDP); port number (1-65535); connection encryption type, simple (XOR) or strong (3DES); and password for encryption that will be the password for the server access also. The configuration utility allows flexibility in configuring the server part. It can add or remove plug-ins (DLLs) from the server application, configure file transfer properties, TCP and UDP settings, built-in plug-in activation, encryption key, and start-up properties. The start-up properties setup allows configuration of automatic installation to systems, server file names, process names, process visibility, and also NT-specific properties (NT service and host process names). The file from which the server part started can be deleted. After that, BO2K will be active in memory each time Windows starts and will provide access to the infected system for hackers who have the client part and the correct password. The active server part can hide its process or prevent its task from being killed from the Task Manager (on NT). The backdoor uses a smart trick on NT by constantly changing its PID (process ID) and by creating the additional process of itself that will keep the backdoor alive even if one of the processes is killed. The server part adds a random (but large) number of spaces and ‘e’ at the end of its name; thus, the server part file cannot be deleted from Windows (invalid or long name error). The server file can be only deleted from DOS. 606
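The trailing-space trick in the server's filename is itself a useful indicator. The following sketch is a simple illustration, with the starting directory left as a placeholder: it walks a directory tree and reports names that end in, or hide, whitespace, printing them with repr() so the padding is visible.

# oddnames.py -- sketch that flags file names padded with whitespace, the
# trick described above for making a file awkward to delete or notice from
# the Windows shell. The starting directory is an example placeholder.
import os

def odd_names(root):
    """Yield paths whose names end in whitespace or hide whitespace
    just before the extension (e.g., 'server   .exe')."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            stem, _ext = os.path.splitext(name)
            if name != name.rstrip() or stem != stem.rstrip():
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    for path in odd_names("."):       # point this at the directory to audit
        print(repr(path))             # repr() makes the padding visible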


Malware and Computer Viruses DDoS ZOMBIES Trin00 Also known as Trinoo, this is a distributed tool used to launch coordinated UDP flood DoS attacks from many sources. An intruder can actually communicate with a Trinoo master computer by communicating with port 27665, typically by Telnet. The master sends UDP packets to daemons on destination port 27444. The daemons send UDP flood packets to the target. The binary for the trinoo daemon contains IP addresses for one or more trinoo master systems. When the trinoo daemon is executed, the daemon announces its availability by sending a UDP packet containing the string HELLO to its programmed trinoo master IP addresses on port 31335. The trinoo master stores a list of known daemons. The trinoo master can be instructed to send a broadcast request to all known daemons to confirm availability. Daemons receiving the broadcast respond to the master with a UDP packet containing the string PONG. The trinoo master then communicates with the daemons, giving instructions to attack one or more IP addresses for a specified period of time. All communications to the master on port 27665/tcp require a password, with a default of betaalmostdone, which is stored in the daemon binary in encrypted form. All UDP communications with the daemon on port 27444 require the UDP packet to contain the string l44 (that is a lower-case letter L, not a one). Tribe Flood Network (TFN) TFN, much like Trinoo, is a distributed tool used to launch coordinated DoS attacks from many sources against one or more targets. In addition to the ability to generate UDP flood attacks, a TFN network can generate TCP SYN flood, ICMP echo request flood, and ICMP directed broadcast (e.g., smurf) DoS attacks. TFN has the capability to generate packets with spoofed source IP addresses. A TFN master is executed from the command line to send commands to TFN daemons. The master communicates with the daemons using ICMP echo reply packets with 16-bit binary values embedded in the ID field and any arguments embedded in the data portion of the packet. The binary values, which are definable at compile time, represent the various instructions sent between TFN masters and daemons. 607
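The fixed ports and keywords described above make trinoo traffic straightforward to recognize once packet data is available. The sketch below is a toy classifier for those indicators; it assumes you supply the protocol, destination port, and payload (pulled, for example, from a packet capture) and is a teaching aid, not a substitute for a real intrusion detection system.

# ddos_triage.py -- toy classifier for the trinoo indicators described in
# the text: master control on 27665/tcp, daemon commands on 27444/udp, and
# daemon registration ("HELLO") to 31335/udp.
TRINOO_TCP_PORTS = {27665}
TRINOO_UDP_PORTS = {27444, 31335}
TRINOO_STRINGS = (b"HELLO", b"PONG", b"l44")

def classify(protocol, dst_port, payload=b""):
    """Return a short label if the supplied fields match a trinoo indicator."""
    if protocol == "tcp" and dst_port in TRINOO_TCP_PORTS:
        return "possible trinoo master control session"
    if protocol == "udp" and dst_port in TRINOO_UDP_PORTS:
        return "possible trinoo master/daemon traffic"
    if protocol == "udp" and any(s in payload for s in TRINOO_STRINGS):
        return "payload contains a trinoo keyword"
    return "no trinoo indicator"

if __name__ == "__main__":
    print(classify("udp", 31335, b"HELLO"))
    print(classify("tcp", 27665))
    print(classify("udp", 53, b"dns query"))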


APPLICATION PROGRAM SECURITY DETECTION/PROTECTION When dealing with malware, the only safe assumption is that everything that can go wrong will go wrong, and at the worst possible time. Until the need for this level of security diligence is accepted as the general business case, the information security practitioner will have an uphill battle. However, training and explicit policies can greatly reduce the danger to users. Some guidelines that can really help in the current environment are: • Do not double-click on attachments. • When sending attachments, provide a clear and specific description as to the content of the attachment. • Do not blindly use Microsoft products as a company standard. • Disable Windows Script Host. Disable ActiveX. Disable VBScript. Disable JavaScript. Do not send HTML-formatted e-mail. • Use more than one scanner, and scan everything. Whether these guidelines are acceptable in a specific environment is a business decision based upon the level of acceptable risk. But remember: whether risks are evaluated, and whether policies are explicitly developed, every environment has a set of policies (some are explicit, while some are implicit), and every business accepts risk. The distinction is that some companies are aware of the risks that they choose to accept. Protective tools in the malware area are generally limited to anti-virus software. To this day there are three major types, first discussed by Fred Cohen in his research. These types are known as signature scanning, activity monitoring, and change detection. These basic types of detection systems can be compared with the common intrusion detection system (IDS) types, although the correspondence is not exact. A scanner is like a signature-based IDS. An activity monitor is like a rule-based IDS or an anomalybased IDS. A change detection system is like a statistical-based IDS. These software types will be examined very briefly. SCANNERS Scanners examine files, boot sectors, and memory for evidence of viral infection, and many may detect other forms of malware. They generally look for viral signatures, sections of program code that are known to be in specific malicious programs but not in most other programs. Because of this, scanning software will generally detect only known malware and must be updated regularly. (Currently, with fast-burner e-mail viruses, this may mean daily or even hourly.) Some scanning software has resident versions that check each file as it is run. Scanners have generally been the most popular form of anti-viral software, probably because they make a specific identification. In fact, scanners offer 608


Malware and Computer Viruses somewhat weak protection because they require regular updating. Scanner identification of a virus may not always be dependable: a number of scanner products have been known to identify viruses based on common families rather than definitive signatures. In addition, scanners fail “open;” if a scanner does not trigger an alert when scanning an object, that does not mean the object is not infected or that it is not another type of malware. It is currently popular to install anti-viral software as a part of filtering firewalls or proxy servers. It should be noted that such automatic scanning is demonstrably less effective than manual scanning and subject to a number of failure conditions. Activity Monitors An activity monitor performs a task very similar to an automated form of traditional auditing; it watches for suspicious activity. It may, for example, check for any calls to format a disk or attempts to alter or delete a program file while a program other than the operating system is in control. It may be more sophisticated, and check for any program that performs “direct” activities with hardware, without using the standard system calls. Activity monitors represent some of the oldest examples of anti-viral software, and are usually effective against more than just viruses. Generally speaking, such programs followed in the footsteps of the earlier antiTrojan software, such as BOMBSQAD and WORMCHEK in the MS-DOS arena, which used the same “check what the program tries to do” approach. This tactic can be startlingly effective, particularly given the fact that so much malware is slavishly derivative and tends to use the same functions over and over again. It is, however, very hard to tell the difference between a word processor updating a file and a virus infecting a file. Activity monitoring programs may be more trouble than they are worth because they can continually ask for confirmation of valid activities. The annals of computer virus research are littered with suggestions for virus-proof computers and systems that basically all boil down to the same thing: if the operations that a computer can perform are restricted, viral programs can be eliminated. Unfortunately, so is most of the usefulness of the computer. Heuristic Scanners A recent addition to scanners is intelligent analysis of unknown code, currently referred to as heuristic scanning. It should be noted that heuristic scanning does not represent a new type of anti-viral software. More closely akin to activity monitoring functions than traditional signature scanning, this looks for suspicious sections of code that are generally found in viral programs. While it is possible for normal programs to try to “go resident,” look for other program files, or even modify their own code, 609


APPLICATION PROGRAM SECURITY such activities are telltale signs that can help an informed user come to some decision about the advisability of running or installing a given new and unknown program. Heuristics, however, may generate a lot of false alarms, and may either scare novice users or give them a false sense of security after “wolf” has been cried too often. CHANGE DETECTION Change detection software examines system and program files and configurations, stores the information, and compares it against the actual configuration at a later time. Most of these programs perform a checksum or cyclic redundancy check (CRC) that will detect changes to a file even if the length is unchanged. Some programs will even use sophisticated encryption techniques to generate a signature that is, if not absolutely immune to malicious attack, prohibitively expensive, in processing terms, from the point of view of a piece of malware. Change detection software should also note the addition of completely new entities to a system. It has been noted that some programs have not done this and allowed the addition of virus infections or malware. Change detection software is also often referred to as integrity-checking software, but this term may be somewhat misleading. The integrity of a system may have been compromised before the establishment of the initial baseline of comparison. A sufficiently advanced change-detection system, which takes all factors including system areas of the disk and the computer memory into account, has the best chance of detecting all current and future viral strains. However, change detection also has the highest probability of false alarms because it will not know whether a change is viral or valid. The addition of intelligent analysis of the changes detected may assist with this failing. GRATUITOUS SUMMARY OPINION Malware is a problem that is not going away. Unless systems are designed with security as an explicit business requirement, which current businesses are not supporting through their purchasing decisions, malware will be an increasingly significant problem for networked systems. It is the nature of networks that a problem for a neighboring machine may well become a problem for local systems. To prevent this, it is critical that the information security professional help business leaders recognize the risks incurred by their decisions and help mitigate those risks as effectively and economically as possible. With computer viruses and similar phenomena, each system that is inadequately protected increases the risk to all systems to which it is connected. Each system that is compromised 610


Malware and Computer Viruses can become a system that infects others. If you are not part of the solution in the world of malware, you are most definitely part of the problem. GLOSSARY This glossary is not a complete listing of malware-related terms. Many others can be found in the security glossary posted at http://victoria.tc.ca/ techrev/secgloss.htm and mirrored at http://sun.soci.niu.edu/~rslade/ secgloss.htm. Activity monitor: A type of anti-viral software that checks for signs of suspicious activity, such as attempts to rewrite program files, format disks, etc. Some versions of activity monitor will generate an alert for such operations, while others will block the behavior. ANSI bomb: Use of certain codes (escape sequences, usually embedded in text files or e-mail messages) that remap keys on the keyboard to commands such as DELETE or FORMAT. ANSI (the American National Standards Institute) is a short form that refers to the ANSI screen formatting rules. Many early MS-DOS programs relied on these rules and required the use of the ANSI.SYS file, which also allowed keyboard remapping. The use of ANSI.SYS is very rare today. Anti-viral: Although an adjective, frequently used as a noun as a short form for anti-virus software or systems of all types. AV: An abbreviation used to distinguish the anti-viral research community (AV) from those who call themselves virus researchers but who are primarily interested in writing and exchanging viral programs (vx). Also an abbreviation for anti-virus software. See also vx. Backdoor: A hidden software or hardware mechanism that can be triggered to permit system protection mechanisms to be circumvented. The function will generally provide unusually high, or even full, access to the system either without an account or from a normally restricted account. Synonymous with trap door, which was formerly the preferred usage. Usage back door is also very common. BSI: A boot-sector infector; a virus that replaces the original boot sector on a disk, which normally contains executable code. Change detection: Anti-viral software that looks for changes in the computer system. A virus must change something, and it is assumed that program files, disk system areas, and certain areas of memory should not change. This software is very often referred to as integrity checking software, but it does not necessarily protect the integrity of data, nor does it always assess the reasons for a possibly valid change. Change detection using strong encryption is sometimes also known as authentication software. 611


APPLICATION PROGRAM SECURITY Companion virus: A type of viral program that does not actually attach to another program, but which interposes itself into the chain of command so that the virus is executed before the infected program. Most often, this is done by using a similar name and the rules of program precedence to associate itself with a regular program. Also referred to as a spawning virus. DDoS: Distributed denial-of-service. A form of network denial-of-service (DoS) attack in which a master computer controls a number of client computers to flood the target (or victim) with traffic, using backdoor agent, client, or zombie software on a number of client machines. Disinfection: In virus work, the term can mean either the disabling of a virus’s ability to operate, the removal of virus code, or the return of the system to a state identical to that prior to infection. Because these definitions can differ substantially in practice, discussions of the ability to disinfect an infected system can be problematic. Disinfection is the means users generally prefer to use in dealing with virus infections, but the safest means of dealing with an infection is to delete all infected objects and replace with safe files from backup. Dropper: A program, not itself infected, that will install a virus on a computer system. Virus authors sometimes use droppers to seed their creations in the wild, particularly in the case of boot-sector infectors. The term injector may refer to a dropper that installs a virus only in memory. False negative: There are two types of false reports from anti-viral or anti-malware software. A false negative report is when an anti-viral reports no viral activity or presence when there is a virus present. References to false negatives are usually only made in technical reports. Most people simply refer to an anti-viral missing a virus. In general security terms, a false negative is called a false acceptance, or Type II error. False positive: The second kind of false report that an anti-viral can make is to report the activity or presence of a virus when there is, in fact, no virus. False positive has come to be very widely used among those who know about viral and anti-viral programs. Very few use the analogous term, false alarm. In general security terms, a false positive is known as a false rejection, or Type I error. File infector: A virus that attaches itself to, or associates itself with, a file, usually a program file. File infectors most often append or prepend themselves to regular program files, or they overwrite program code. The file infector class is often also used to refer to programs that do not physically attach to files but associate themselves with program filenames. (See system infector, companion.) 612


Malware and Computer Viruses Heuristic: In general, heuristics refer to trial-and-error or seat-of-thepants thinking rather than formal rules. In anti-viral jargon, however, the term has developed a specific meaning regarding the examination of program code for functions or opcode strings known to be associated with viral activity. In most cases, this is similar to activity monitoring but without actually executing the program; in other cases, code is run under some type of emulation. Recently, the meaning has expanded to include generic signature scanning meant to catch a group of viruses without making definite identifications. Infection: In a virus, the process of attaching to or associating with an object in such a way that, when the original object is called, or the system is invoked, the virus will run in addition to or in place of the original object. Kit: Usually refers to a program used to produce a virus from a menu or a list of characteristics. Use of a virus kit involves no skill on the part of the user. Fortunately, most virus kits produce easily identifiable code. Packages of anti-viral utilities are sometimes referred to as toolkits, occasionally leading to confusion of the terms. Logic bomb: A resident computer program that triggers the perpetration of an unauthorized act when particular states of the system are realized. Macro virus: A macro is a small piece of programming in a simple language, used to perform a simple, repetitive function. Microsoft’s Word Basic and VBA macro languages can include macros in data files and have sufficient functionality to write complete viruses. Malware: A general term used to refer to all forms of malicious or damaging software, including viral programs, Trojans, logic bombs, and the like. Multipartite: Formerly a viral program that infects both boot sector/MBRs and files. Possibly now a virus that will infect multiple types of objects or reproduces in multiple ways. Payload: Used to describe the code in a viral program that is not concerned with reproduction or detection avoidance. The payload is often a message but is sometimes code to corrupt or erase data. Polymorphism: Techniques that use some system of changing the form of the virus on each infection to try to avoid detection by signature-scanning software. Less sophisticated systems are referred to as self-encrypting. RAT (Remote-Access Trojan): A program designed to provide access to, and control over, a network-attached computer from a remote computer or location, in effect providing a backdoor. 613


APPLICATION PROGRAM SECURITY Scanner: A program that reads the contents of a file looking for code known to exist in specific viral programs. Script virus: It is difficult to make a strong distinction between script and macro programming languages, but generally a script virus is a standalone object contained in a text file or e-mail message. A macro virus is generally contained in a data file, such as a Microsoft Word document. Social engineering: Attacking or penetrating a system by tricking or subverting operators or users rather than by means of a technical attack. More generally, the use of fraud, spoofing, or other social or psychological measures to get legitimate users to break security policy. Stealth: Various technologies used by viral programs to avoid detection on disk. The term properly refers to the technology and not a particular virus. System infector: A virus that redirects system pointers and information in order to infect a file without actually changing the infected program file. (This is a type of stealth technology.) Or, a virus that infects objects related to the operating system. Trojan horse: A program that either pretends to have, or is described as having, a (beneficial) set of features but that, either instead or in addition, contains a damaging payload. Most frequently, the usage is shortened to Trojan. Virus, computer: Researchers have not yet agreed upon a final definition. A common definition is “a program that modifies other programs to contain a possibly altered version of itself.” This definition is generally attributed to Fred Cohen, although Dr. Cohen’s actual definition is in mathematical form. Another possible definition is “an entity that uses the resources of the host (system or computer) to reproduce itself and spread, without informed operator action.” vx: An abbreviated reference to the “Virus eXchange” community; those people who consider it proper and right to write, share, and release viral programs, including those with damaging payloads. Probably originated by Sara Gordon, who has done extensive studies of the virus exchange and security-breaking community and who has an aversion to using the SHIFT key. Wild, in the: A jargon reference to those viral programs that have been released into, and successfully spread in, the normal computer user community and environment. It is used to distinguish those viral programs that are written and tested in a controlled research environment, without escaping, from those that are uncontrolled in the wild. 614


Worm: A self-reproducing program that is distinguished from a virus by copying itself without being attached to a program file, or that spreads over computer networks, particularly via e-mail. A recent refinement is the definition of a worm as spreading without user action, for example by taking advantage of loopholes and trapdoors in software.

Zombie: A specialized type of backdoor or remote access program designed as the agent, or client (middle layer) component of a DDoS (Distributed Denial of Service) network.

Zoo: Jargon reference to a set of viral programs of known characteristics used to test anti-viral software.

ACKNOWLEDGMENTS

The author would like to thank David Harley and Lee Imrey for their valuable contributions to this chapter.

References

1. Cohen, Fred, 1994, A Short Course on Computer Viruses, 2nd ed., Wiley, New York.
2. Ferbrache, David, 1992, A Pathology of Computer Viruses, Springer-Verlag, London.
3. Gattiker, Urs, Harley, David, and Slade, Robert, 2001, Viruses Revealed, McGraw-Hill, New York.
4. Highland, Harold Joseph, 1990, Computer Virus Handbook, Elsevier Advanced Technology, New York.
5. Hruska, Jan, 1992, Computer Viruses and Antivirus Warfare, 2nd ed., Ellis Horwood, London.
6. Kane, Pamela, 1994, PC Security and Virus Protection Handbook, M&T Books, New York.
7. Slade, Robert Michael, 1996, Robert Slade's Guide to Computer Viruses, 2nd ed., Springer-Verlag, New York.
8. Slade, Robert Michael, 2002, Computer viruses, Encyclopedia of Information Systems, Academic Press, San Diego.
9. Solomon, Alan, 1991, PC Viruses: Detection, Analysis, and Cure, Springer-Verlag, London.
10. Solomon, Alan, 1995, Dr. Solomon's Virus Encyclopedia, S&S International PLC, Aylesbury, U.K.
11. Vibert, Robert S., 2000, The Enterprise Anti-Virus Book, Segura Solutions Inc., Braeside, Canada.
12. Virus Bulletin, 1993, Survivor's Guide to Computer Viruses, Abingdon, U.K.

ABOUT THE AUTHOR

Robert Slade, CISSP, is a security consultant and educator. A long-time virus researcher, he is the author of Robert Slade's Guide to Computer Viruses and co-author of Viruses Revealed.


Domain 5

Cryptography


CRYPTOGRAPHY The Cryptography domain for this volume consists of two sections. The first deals with cryptographic concepts, methodologies, and practices. Chapter 34 discusses the art of steganography — now you see it, now you don’t. Steganography is simply the art of hiding messages in otherwise normal-looking files, pictures, etc. The focus is on hiding information in graphic images. This chapter covers steganalysis and several attacks that can be used against it. Media conjecture after the broadcast of Osama Bin Laden’s TV comments subsequent to the September 11 terrorist attacks was that the broadcast contained secret messages he was sending to his supporters in the field. Studies conducted by the University of Michigan have failed to find traces of steganography in his comments. However, that is the kind of capability that this art provides. The chapter also covers the history and current use of steganography as well as restrictions to its use. Chapter 35 presents the basic ideas behind cryptography and the promising technologies and algorithms that information security practitioners might encounter. The history of cryptography is reviewed and its relationship to integrity, confidentiality, and accountability is explained. Stream and block ciphers are described, and both symmetric and asymmetric types of cr yptography are covered. This comprehensive chapter addresses the uses and attacks against crypto systems and explains digital signatures as well as PKI technology pros and cons. Chapter 36 is devoted to the topic of hash algorithms — from message digests to signatures. The author explains why they are needed, what they are, their properties, and how they work. Included is a discussion on keyed-hash algorithms for better security. Finally, how hash algorithms are used in modern cryptographic systems is explained. The second section in this domain focuses on public key infrastructure (PKI), which is the current “cash cow” for consultants. There is only one chapter and it is devoted to PKI registration, which involves almost every one of the many components of PKI. Both automated and manual registration processes are described, including when each would provide the best, most cost-effective level of control. The more we learn about the use of PKI, the sooner we will be able to conduct E-business safely.


Chapter 34

Steganography: The Art of Hiding Messages

Mark Edmead, CISSP

In the past year or so, there has been an increased interest in steganography (also called stego). We have recently seen this technology mentioned during the investigation of the September 11 attacks, where the media reported that the terrorists used it to hide their attack plans, maps, and activities in chat rooms, bulletin boards, and Web sites. Steganography had been widely used long before these attacks and, as with many other technologies, its use has increased due to the popularity of the Internet.

The word steganography comes from the Greek, and it means covered or secret writing. As defined today, it is the technique of embedding information into something else for the sole purpose of hiding that information from the casual observer. Many people know a distant cousin form of steganography called watermarking — a method of hiding trademark information in images, music, and software. Watermarking is not considered a true form of steganography. In stego, the information is hidden in the image; watermarking actually adds something to the image (such as the word Confidential), and therefore it becomes part of the image.

Some people might consider stego to be related to encryption, but they are not the same thing. We use encryption — the technology to translate something from readable form to something unreadable — to protect sensitive or confidential data. In stego, the information is not necessarily encrypted, only hidden from plain view. One of the main drawbacks of using encryption is that with an encrypted message — while it cannot be read without decrypting it — it is recognized as an encrypted message.



encrypted might raise suspicion. The person monitoring the traffic may investigate why, and use various tools to try to figure out the message’s contents. In other words, encryption provides confidentiality but not secrecy. With steganography, however, the information is hidden; and someone looking at a JPEG image, for instance, would not be able to determine if there was any information within it. So, hidden information could be right in front of our eyes and we would not see it.
In many cases, it might be advantageous to use encryption and stego at the same time. This is because, while we can hide information within another file so that it is not visible to the naked eye, someone can still (with a lot of work) determine a method of extracting this information. Once this happens, the hidden or secret information is visible for them to see. One way to circumvent this situation is to combine the two — by first encrypting the data and then using steganography to hide it. This two-step process adds additional security. If someone manages to figure out the steganographic system used, he would not be able to read the data he extracted because it is encrypted.

HIDING THE DATA
There are several ways to hide data, including data injection and data substitution. In data injection, the secret message is directly embedded in the host medium. The problem with embedding is that it usually makes the host file larger; therefore, the alteration is easier to detect. In substitution, however, the normal data is replaced or substituted with the secret data. This usually results in very little size change for the host file. However, depending on the type of host file and the amount of hidden data, the substitution method can degrade the quality of the original host file.
In the article “Techniques for Data Hiding,” Walter Bender outlines several restrictions to using stego:
• The data that is hidden in the file should not significantly degrade the host file. The hidden data should be as imperceptible as possible.
• The hidden data should be encoded directly into the media and not placed only in the header or in some form of file wrapper. The data should remain consistent across file formats.
• The hidden (embedded) data should be immune to modifications from data manipulations such as filtering or resampling.
• Because the hidden data can degrade or distort the host file, error-correction techniques should be used to minimize this condition.
• The embedded data should still be recoverable even if only portions of the host image are available.



Exhibit 34-1. Eight-bit pixel: 1 1 0 0 1 1 0 1

STEGANOGRAPHY IN IMAGE FILES
As outlined earlier, information can be hidden in various formats, including text, images, and sound files. In this chapter we limit our discussion to hidden information in graphic images. To better understand how information can be stored in images, we need to do a quick review of the image file format.
A computer image is an array of points called pixels (which are represented as light intensity). Digital images are stored in either 24-bit or 8-bit pixel files. In a 24-bit image there is more room to hide information, but these files are usually very large and not the ideal choice for posting on Web sites or transmitting over the Internet. For example, a 24-bit image that is 1024 × 768 pixels would have a size of about 2 MB. A possible solution to the large file size is image compression. The two forms of image compression to be discussed are lossy and lossless compression. Each of these methods has a different effect on the hidden information contained within the host file. Lossy compression provides high compression rates, but at the expense of image integrity; the image might lose some of its quality. An example of a lossy compression format is JPEG (Joint Photographic Experts Group). Lossless compression, as the name implies, does not lose image integrity and is the favored compression for steganography. GIF and BMP files are examples of lossless compression formats.
A pixel’s makeup is the image’s raster data. A common image, for instance, might be 640 by 480 pixels and use 256 colors (eight bits per pixel). In an eight-bit image, each pixel is represented by eight bits, as shown in Exhibit 34-1. The four bits to the left are the most significant bits (MSB), and the four bits to the right are the least significant bits (LSB). Changes to the MSB will result in a drastic change in the color and the image quality, while changes in the LSB will have minimal impact. The human eye cannot usually detect changes to only one or two bits of the LSB. So if we hide data in any two bits of the LSB, the human eye will not detect it. For instance, if we have a bit pattern of 11001101 and change it to 11001100, the two colors will look the same. This is why the art of steganography uses these LSBs to store the hidden data.
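To make the least-significant-bit substitution concrete, here is a small Python sketch of ours; a synthetic byte array stands in for the raw pixel bytes of a real image, and the function names are illustrative rather than taken from any stego tool.

def hide_bits(pixels, message):
    """Hide message bytes in the least significant bit of each pixel byte."""
    bits = []
    for byte in message:
        bits.extend((byte >> shift) & 1 for shift in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this message")
    stego = bytearray(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0b11111110) | bit   # replace only the LSB
    return stego

def extract_bits(pixels, length):
    """Recover `length` bytes previously hidden by hide_bits()."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 10          # stand-in for raw pixel bytes
stego = hide_bits(cover, b"secret")
assert extract_bits(stego, 6) == b"secret"

Because only the lowest bit of each pixel byte changes, the file size stays the same and the visual difference is imperceptible, exactly the property the chapter describes.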



Exhibit 34-2. Unmodified image.

A PRACTICAL EXAMPLE OF STEGANOGRAPHY AT WORK
To best demonstrate the power of steganography, Exhibit 34-2 shows the host file before a hidden file has been introduced. Exhibit 34-3 shows the image file we wish to hide. Using a program called Invisible Secrets 3, by NeoByte Solution, Exhibit 34-3 is inserted into Exhibit 34-2. The resulting image file is shown in Exhibit 34-4. Notice that there are no visual differences to the human eye. One significant difference is in the size of the resulting image: the size of the original Exhibit 34-2 is 18 kB, the size of Exhibit 34-3 is 19 kB, and the size of the resulting stego-file is 37 kB. If the size of the original file were known, looking at the size of the new file would be a clear indication that something made the file size larger. In reality, unless we know what the sizes of the files should be, looking at the size of the file is not the best way to determine if an image is a stego carrier. A practical way to determine if files have been tampered with is to use available software products that can take a snapshot of the appropriate images and calculate a hash value. This baseline value can then be periodically checked for changes. If the hash value of the file changes, it means that tampering has occurred.

PRACTICAL (AND NOT SO LEGAL) USES FOR STEGANOGRAPHY
There are very practical uses for this technology. One use would be to store password information on an image file on a hard drive or Web page. In applications where encryption is not appropriate (or legal), stego can be



Exhibit 34-3. Image to be hidden into Exhibit 34-2.

Exhibit 34-4. Image with Exhibit 34-3 inserted into Exhibit 34-2.

used for covert data transmissions. While this technology has been used mainly for military operations, it is now gaining popularity in the commercial marketplace. As with every technology, there are illegal uses for stego


as well. As we discussed earlier, it was reported that terrorists use this technology to hide their attack plans. Child pornographers have also been known to use stego to illegally hide pictures inside other images.

DEFEATING STEGANOGRAPHY
Steganalysis is the technique of discovering and recovering the hidden message. Several terms in steganography are closely associated with the same terms in cryptography. For instance, a steganalyst, like its counterpart the cryptanalyst, applies steganalysis in an attempt to detect the existence of hidden information in messages. One important — and crucial — difference between the two is that in cryptography, the goal is not to detect whether something has been encrypted; the fact that we can see the encrypted information already tells us that it has been. The goal in cryptanalysis is to decode the message. In steganography, the main goal is first to determine if the image has a hidden message and then to determine the specific steganography algorithm used to hide the information.
There are several known attacks available to the steganalyst: stego-only, known cover, known message, chosen stego, and chosen message. In a stego-only attack, the stego host file is analyzed. A known cover attack is used if both the original (unaltered) media and the stego-infected file are available. A known message attack is used when the hidden message is revealed. A chosen stego attack is performed when the algorithm used is known and the stego host is available. A chosen message attack is performed when a stego-media is generated using a predefined algorithm; the resulting media is then analyzed to determine the patterns generated, and this information is used to compare it to the patterns used in other files. This technique will not extract the hidden message, but it will alert the steganalyst that the image in question does have embedded (and hidden) information.
Another attack method is to use dictionary attacks against steganographic systems. This will test to determine if there is a hidden image in the file. All of the steganographic systems used to create stego images use some form of password validation, and an attack could be perpetrated on this file to try to guess the password and determine what information had been hidden. Much like cryptographic dictionary attacks, stego dictionary attacks can be performed as well. In most steganographic systems, information is embedded in the header of the image file that contains, among other things, the length of the hidden message. If the size of the image header embedded by the various stego tools is known, this information could be used to verify the correctness of the guessed password.
Protecting yourself against steganography is not easy. If the hidden text is embedded in an image, and you have the original (unaltered) image, a file comparison could be made to see if they are different. This comparison would not be to determine if the size of the image has changed — remember,


in many cases the image size does not change. However, the data (and the pixel level) does change. The human eye usually cannot easily observe such subtle changes — detection beyond visual observation requires extensive analysis. Several techniques are used to do this. One is the use of stego signatures. This method involves analysis of many different types of untouched images, which are then compared to the stego images. Much like the analysis of viruses using signatures, comparing the stego-free images to the stego-images may make it possible to determine a pattern (signature) of a particular tool used in the creation of the stego-image.

SUMMARY
Steganography can be used to hide information in text, video, sound, and graphic files. There are tools available to detect steganographic content in some image files, but the technology is far from perfect. A dictionary attack against steganographic systems is one way to determine if content is, in fact, hidden in an image. Variations of steganography have been in use for quite some time. As more and more content is placed on Internet Web sites, the more corporations — as well as individuals — are looking for ways to protect their intellectual property. Watermarking is a method used to mark documents, and new technologies for the detection of unauthorized use and illegal copying of material are continuously being improved.

References
W. Bender, D. Gruhl, N. Morimoto, and A. Lu, Techniques for data hiding, IBM Syst. J., Vol. 35, Nos. 3–4, pages 313–336, February 1996.

Additional Sources of Information
• http://www.cs.uct.ac.za/courses/CS400W/NIS/papers99/dsellars/stego.html — Great introduction to steganography by Duncan Sellars.
• http://www.jjtc.com/Steganography/ — Neil F. Johnson’s Web site on steganography. Has useful links to other sources of information.
• http://stegoarchive.com/ — Another good site with reference material and software you can use to make your own image files with hidden information.
• http://www.sans.org/infosecFAQ/covertchannels/steganography3.htm — Article by Richard Lewis on steganography.
• http://www.sans.org/infosecFAQ/encryption/steganalysis2.htm — Great article by Jim Bartel on steganalysis.



ABOUT THE AUTHOR
Mark Edmead, CISSP, SSCP, TICSA, is president of MTE Software, Inc. (www.mtesoft.com) and has more than 25 years of experience in software development, product development, and network/information systems security. Fortune 500 companies have often turned to Mark to help them with projects related to Internet and computer security. Mark previously worked for KPMG Information Risk Management Group and IBM’s Privacy and Security Group, where he performed network security assessments, security system reviews, development of security recommendations, and ethical hacking. Other projects included helping companies develop secure and reliable network system architecture for their Web-enabled businesses. Mark was managing editor of SANS Digest (Systems Administration and Network Security) and contributing editor to the SANS Step-by-Step Windows NT Security Guide. He is co-author of Windows NT: Performance, Monitoring and Tuning, and he developed the SANS Business Continuity/Disaster Recovery Plan Step-by-Step Guide.



Chapter 35

An Introduction to Cryptography
Javek Ikbal, CISSP

This chapter presents some basic ideas behind cryptography. It is intended for an audience of evaluators, recommenders, and end users of cryptographic algorithms and products rather than implementers; hence, the mathematical background is kept to a minimum. Only widely adopted algorithms are described with some mathematical detail. We also present promising technologies and algorithms that information security practitioners might encounter and may have to choose or discard.

THE BASICS
What Is Cryptography?
Cryptography is the art and science of securing messages so unintended audiences cannot read, understand, or alter that message.

Related Terms and Definitions
A message in its original form is called the plaintext or cleartext. The process of securing that message by hiding its contents is encryption or enciphering. An encrypted message is called ciphertext, and the process of turning the ciphertext back to cleartext is called decryption or deciphering. Cryptography is often shortened to crypto.
Practitioners of cryptography are known as cryptographers. The art and science of breaking encryptions is known as cryptanalysis, which is practiced by cryptanalysts. Cryptography and cryptanalysis are covered in the theoretical and applied branch of mathematics known as cryptology, and practiced by cryptologists.
A cipher or cryptographic algorithm is the mathematical function or formula used to convert cleartext to ciphertext and back. Typically, a pair of algorithms is used to encrypt and decrypt.




Exhibit 35-1. Encryption and decryption with restricted algorithms: Plaintext -> Encryption -> Ciphertext -> Decryption -> Plaintext.

Exhibit 35-2. Encryption and decryption with keys: Plaintext -> Encryption -> Ciphertext -> Decryption -> Plaintext, with the key supplied to both the encryption and the decryption function.

An algorithm that depends on keeping the algorithm secret in order to keep the ciphertext safe is known as a restricted algorithm. Security practitioners should be aware that restricted algorithms are inadequate in the current world; unfortunately, they are quite popular in some settings. Exhibit 35-1 shows the schematic flow of restricted algorithms. This can be mathematically expressed as E(M) = C and D(C) = M, where M is the cleartext message, E is the encryption function, C is the ciphertext, and D is the decryption function.
A major problem with restricted algorithms is that a changing group cannot use them; every time someone leaves, the algorithm has to change. Because of the need to keep the algorithm secret, each group has to build its own algorithms and software to use it.
These shortcomings are overcome by using a variable known as the key or cryptovariable. The range of possible values for the key is called the keyspace. With each group using its own key, a common and well-known algorithm may be shared by any number of groups. The mathematical representation now becomes Ek(M) = C and Dk(C) = M, where the subscript k refers to the encryption and decryption key. Some algorithms will utilize different keys for encryption and decryption. Exhibit 35-2 illustrates that the key is an input to the algorithm. Note that the security of all such algorithms depends on the key and not the algorithm itself.
We submit to the information security practitioner that any algorithm that has not been publicly discussed, analyzed, and withstood attacks (i.e., zero restriction) should be presumed insecure and rejected.

A Brief History
Secret writing probably came right after writing was invented. The earliest known instance of cryptography occurred in ancient Egypt 4000 years


Exhibit 35-3. Caesar cipher (Shift-3) and ROT-13.
English Alphabet:  A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Caesar Cipher (3): D E F G H I J K L M N O P Q R S T U V W X Y Z A B C
ROT-13:            N O P Q R S T U V W X Y Z A B C D E F G H I J K L M
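A minimal Python sketch of the shift ciphers shown in Exhibit 35-3 (our own illustration, not taken from the chapter's references); applying the ROT-13 shift twice returns the original text, which is why it is self-reversing.

import string

ALPHABET = string.ascii_uppercase  # A..Z

def shift_cipher(text, shift):
    """Monoalphabetic substitution: shift each letter by a fixed amount."""
    shifted = ALPHABET[shift:] + ALPHABET[:shift]
    table = str.maketrans(ALPHABET, shifted)
    return text.upper().translate(table)

print(shift_cipher("ATTACK AT DAWN", 3))             # Caesar (Shift-3): DWWDFN DW GDZQ
print(shift_cipher(shift_cipher("HELLO", 13), 13))   # ROT-13 applied twice: HELLO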

ago, with the use of hieroglyphics. These were purposefully cryptic; hiding the text was probably not the main purpose — it was intended to impress. In ancient India, government spies communicated using secret codes. Greek literature has examples of cryptography going back to the times of Homer. Julius Caesar used a system of cryptography that shifted each letter three places further through the alphabet (e.g., A shifts to D, Z shifts to C, etc.). Regardless of the amount of shift, all such monoalphabetic substitution ciphers (MSCs) are also known as Caesar ciphers. While extremely easy to decipher if you know how, a Caesar cipher called ROT-13 (N = A, etc.) is still in use today as a trivial method of encryption. Why ROT-13 and not any other ROT-N? By shifting down the middle of the English alphabet, ROT-13 is self-reversing — the same code can be used to encrypt and decrypt. How this works is left as an exercise for the reader. Exhibit 35-3 shows the alphabet and the corresponding Caesar cipher and ROT-13.
During the seventh century AD, we see the first treatise on cryptanalysis. The technique involves counting the frequency of each ciphertext letter. We know that the letter E occurs the most in English. So if we are trying to decrypt a document written in English where the letter H occurs the most, we can assume that H stands for E. Provided we have a large enough sample of the ciphertext for the frequency count to be statistically significant, this technique is powerful enough to cryptanalyze any MSC and is still in use.
Leon Battista Alberti invented a mechanical device during the 15th century that could perform a polyalphabetic substitution cipher (PSC). A PSC can be considered an improvement of the Caesar cipher because each letter is shifted by a different amount according to a predetermined rule. The device consisted of two concentric copper disks with the alphabet around the edges. To start enciphering, a letter on the inner disk is lined up with any letter on the outer disk, which is written as the first character of the ciphertext. After a certain number of letters, the disks are rotated and the encryption continues. Because the cipher is changed often, frequency analysis becomes less effective. The concept of rotating disks and changing ciphers within a message was a major milestone in cryptography.
The public interest in cryptography dramatically increased with the invention of the telegraph. People wanted the speed and convenience of


the telegraph without disclosing the message to the operator, and cryptography provided the answer.
After World War I, U.S. military organizations poured resources into cryptography. Because of the classified nature of this research, there were no general publications that covered cryptography until the late 1960s, and public interest went down again. During this time, computers were also gaining ground in nongovernment areas, especially the financial sector, and the need for a nonmilitary cryptosystem was becoming apparent. The organization currently known as the National Institute of Standards and Technology (NIST), then called the National Bureau of Standards (NBS), requested proposals for a standard cryptographic algorithm. IBM responded with Lucifer, a system developed by Horst Feistel and colleagues. After adopting two modifications from the National Security Agency (NSA), this was adopted as the federal Data Encryption Standard (DES) in 1976.1 NSA’s changes caused major controversy, specifically because it suggested DES use 56-bit keys instead of 112-bit keys as originally submitted by IBM.
During the 1970s and 1980s, the NSA also attempted to regulate cryptographic publications but was unsuccessful. However, general interest in cryptography increased as a result. Academic and business interest in cryptography was high, and extensive research led to significant new algorithms and techniques.
Advances in computing power have made 56-bit keys breakable. In 1998, a custom-built machine from the Electronic Frontier Foundation costing $210,000 cracked DES in four and a half days.2 In January 1999, a distributed network of 100,000 machines cracked DES in 22 hours and 15 minutes. As a direct result of these DES-cracking examples, NIST issued a Request for Proposals to replace DES with a new standard called the Advanced Encryption Standard (AES). On November 26, 2001, NIST selected Rijndael as the AES.

The Alphabet-Soup Players: Alice, Bob, Eve, and Mike
In our discussions of cryptographic protocols, we will use an alphabet soup of names that are participating in (or are trying to break into) a secure message exchange. They are:

• Alice: first participant
• Bob: second participant
• Eve: eavesdropper
• Mike: masquerader


Ties to Confidentiality, Integrity, and Authentication
Cryptography is not limited to confidentiality only — it can perform other useful functions.
• Authentication. If Alice is buying something from Bob’s online store, Bob has to assure Alice that it is indeed Bob’s Web site and not Mike’s, the masquerader pretending to be Bob. Thus, Alice should be able to authenticate Bob’s Web site, or know that a message originated from Bob.
• Integrity. If Bob is sending Alice, the personnel manager, a message informing her of a $5000 severance pay for Mike, Mike should not be able to intercept the message in transit and change the amount to $50,000. Cryptography enables the receiver to verify that a message has not been modified in transit.
• Non-repudiation. Alice places an order to sell some stocks at $10 per share. Her stockbroker, Bob, executes the order, but then the stock goes up to $18. Now Alice claims she never placed that order. Cryptography (through digital signatures) will enable Bob to prove that Alice did send that message.

Section Summary
• Any message or data in its original form is called plaintext or cleartext.
• The process of hiding or securing the plaintext is called encryption (verb: to encrypt or to encipher).
• When encryption is applied on plaintext, the result is called ciphertext.
• Retrieving the plaintext from the ciphertext is called decryption (verb: to decrypt or to decipher).
• The art and science of encryption and decryption is called cryptography, and its practitioners are cryptographers.
• The art and science of breaking encryption is called cryptanalysis, and its practitioners are cryptanalysts.
• The process and rules (mathematical or otherwise) to encrypt and decrypt are called ciphers or cryptographic algorithms.
• The history of cryptography is over 4000 years old.
• Frequency analysis is an important technique in cryptanalysis.
• Secret cryptographic algorithms should not be trusted by an information security professional.
• Only publicly available and discussed algorithms that have withstood analysis and attacks may be used in a business setting.
• Bottom line: do not use a cryptographic algorithm developed in-house (unless you have internationally renowned experts in that field).
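Before moving on, here is a small illustration of the frequency analysis mentioned in the summary above. The ciphertext sample is our own (an English sentence encrypted with the Shift-3 Caesar cipher); Python's standard library does the counting.

from collections import Counter

def letter_frequencies(ciphertext):
    """Count ciphertext letters, most common first."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return Counter(letters).most_common()

# With a large enough sample, the most frequent ciphertext letter
# probably stands for E, the most common letter in English.
sample = "WKLV LV D VHFUHW PHVVDJH SURWHFWHG ZLWK D FDHVDU FLSKHU"
print(letter_frequencies(sample)[:3])   # H leads here, and H is indeed E shifted by 3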



Exhibit 35-4. Stream cipher operation: a keystream generator, driven by the key, produces the keystream 00010010, which is XORed with the plaintext Z = 01011010 to produce the ciphertext 01001000 = H.

SYMMETRIC CRYPTOGRAPHIC ALGORITHMS
Algorithms or ciphers that use the same key to encrypt and decrypt are called symmetric cryptographic algorithms. There are two basic types: stream and block.

Stream Ciphers
This type of cipher takes messages in a stream and operates on individual data elements (characters, bits, or bytes). Typically, a random-number generator is used to produce a sequence of characters called a key stream. The key stream is then combined with the plaintext via exclusive-OR (XOR) to produce the ciphertext. Exhibit 35-4 illustrates this operation of encrypting the letter Z, the ASCII value of which is represented in binary as 01011010. Note that in an XOR operation involving binary digits, only XORing 0 and 1 yields 1; all other XORs result in 0. Exhibit 35-4 shows how a stream cipher operates.
Before describing the actual workings of a stream cipher, let us examine how shift registers work, because they have been the mainstay of electronic cryptography for a long time. A linear feedback shift register (LFSR) is very simple in principle. For readers not versed in electronics, we present a layman’s representation. Imagine a tube that can hold four bits with a window at the right end. Because the tube holds four bits, we will call it a four-bit shift register. We shift all bits in the tube and, as a result, the bit showing through the window changes. Here, shifting involves pushing from the left so the right-most bit falls off; and to keep the number of bits in the tube constant, we place the output of some addition operation as the new left-most bit. In the following example, we will continue with our four-bit LFSR, and the new left-most bit will be the result of adding bits three and four (the feedback) and keeping the right-most bit (note that in binary mathematics, 1 + 1 = 10, with 0 being the right-most bit, and 1 + 0 = 1). For every shift that occurs, we


Exhibit 35-5. 4-bit LFSR output.
1111 -> 0111 -> 0011 -> 0001 -> 1000 -> 0100 -> 0010 -> 1001 -> 1100 -> 0110 -> 1011 -> 0101 -> 1010 -> 1101 -> 1110 -> 1111
Keystream: 111100010011010 (right-most bit through the window before repetition).
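The following Python sketch is ours, for illustration only (real stream ciphers such as RC4 are more involved); it reproduces the four-bit LFSR of Exhibit 35-5 and uses its output bits as the keystream for XOR encryption in the style of Exhibit 35-4.

def lfsr_keystream(state=0b1111):
    """Four-bit LFSR from Exhibit 35-5: the right-most bit (the 'window') is
    emitted, and the new left-most bit is the XOR of bits three and four."""
    while True:
        out = state & 1                          # bit visible through the window
        feedback = ((state >> 1) ^ state) & 1    # XOR of the two right-most bits
        state = (feedback << 3) | (state >> 1)   # push feedback in from the left
        yield out

def xor_stream(data, keystream):
    """Encrypt or decrypt: XOR each plaintext bit with a keystream bit."""
    out = bytearray()
    for byte in data:
        k = 0
        for _ in range(8):
            k = (k << 1) | next(keystream)
        out.append(byte ^ k)
    return bytes(out)

ciphertext = xor_stream(b"Z", lfsr_keystream())
plaintext = xor_stream(ciphertext, lfsr_keystream())   # the same keystream decrypts
assert plaintext == b"Z"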

look through the window and note the right-most bit. As a result, we will see the sequence shown in Exhibit 35-5.
Note that after 2^N – 1 = 2^4 – 1 = 15 iterations, we will get a repetition. This is the maximum number of unique sequences (also called the period) when dealing with a four-bit LFSR (since we have to exclude 0000, which will always produce a sequence of 0000s). Choosing a different feedback function may have reduced the period, and the longest unique sequence is called the maximal length. The maximal length is important because repeating key streams mean the same plaintext will produce the same ciphertext, and this will be vulnerable to frequency analysis and other attacks.
To construct a simple stream cipher, take an LFSR (or take many of different sizes and different feedback functions). To encrypt each bit of the plaintext, take a bit from the plaintext, XOR it with a bit from the key stream to generate the ciphertext (refer to Exhibit 35-4), and so on. Of course, other stream ciphers are more complex and involve multiple LFSRs and other techniques.4
We will discuss RC4 as an example of a stream cipher. First, let us define the term S-box. An S-box is also known as a substitution box or table and, as the name implies, it is a table or system that provides a substitution scheme. Shift registers are S-boxes; they provide a substitution mechanism. RC4 uses an output feedback mechanism combined with 256 S-boxes (numbered S0…S255) and two counters, i and j. A random byte K is generated through the following steps:
i = (i + 1) mod 256
j = (j + Si) mod 256
swap (Si, Sj)
t = (Si + Sj) mod 256
K = St
Now, K XOR Plaintext = Ciphertext, and K XOR Ciphertext = Plaintext.

Block Ciphers
A block cipher requires the accumulation of some amount of data or multiple data elements before ciphering can begin. Encryption and decryption


happen on chunks of data, unlike stream ciphers, which operate on each character or bit independently.
DES. The Data Encryption Standard (DES) is over 25 years old; and because of its widespread implementation and use, it will probably coexist with the new Advanced Encryption Standard (AES) for a few years.

Despite initial concern about NSA’s role in crafting the standard, DES generated huge interest in cryptography; and vendors and users alike were eager to adopt the first government-approved encryption standard that was released for public use. The DES standard calls for a reevaluation of DES every five years. Starting in 1987, the NSA warned that it would not recertify DES because it was likely that it would soon be broken, and it proposed secret algorithms available on tamper-proof chips only. Users of DES, including major financial institutions, protested; and DES got a new lease on life until 1992. Because no new standards became available in 1992, it lived on to 1998 and then until the end of 2001, when AES became the standard.
DES is a symmetric block cipher that operates in blocks of 64 bits of data at a time, with 64-bit plaintext resulting in 64-bit ciphertext. If the data is not a multiple of 64 bits, then it is padded at the end. The effective key length is 56 bits with 8 bits of parity. All security rests with the key. A simple description of DES would be as follows:1
1. Take the 64-bit block of message (M).
2. Rearrange the bits of M (initial permutation, IP).
3. Break IP down the middle into two 32-bit blocks (L and R).
4. Shift the key bits, and take a 48-bit portion from the key.
5. Save the value of R into Rold.
6. Expand R via a permutation to 48 bits.
7. XOR R with the 48-bit key and transform it via eight S-boxes into a new 32-bit chunk.
8. Now, R takes on the value of the new R XORed with L, and L takes on the value of Rold.
9. Repeat this process 15 more times (total 16 rounds).
10. Join L and R.
11. Reverse the permutation IP (final permutation, FP).
There are some implementations without IP and FP; because they do not match the published standard, they should not be called DES or DES-compliant, although they offer the same degree of security. Certain DES keys are considered weak, semi-weak, or possibly weak: a key consisting of all 1s or all 0s is considered weak, or if half the key bits are 1s and the other half are 0s.5
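To see the 64-bit block and the 8-byte key (56 key bits plus parity) in practice, here is a short sketch assuming the third-party pycryptodome package is installed; ECB mode is used only to show a single-block operation and should not be used for real data.

from Crypto.Cipher import DES

key = bytes.fromhex("133457799BBCDFF1")   # 8 bytes: 56 key bits plus 8 parity bits
cipher = DES.new(key, DES.MODE_ECB)       # ECB chosen only to illustrate block size

block = b"8bytes!!"                       # exactly one 64-bit block
ciphertext = cipher.encrypt(block)        # also exactly 64 bits
assert DES.new(key, DES.MODE_ECB).decrypt(ciphertext) == block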


Conspiracy theories involving NSA backdoors and EFF’s DES-cracking machine notwithstanding, DES lives on in its original form or in a multiple-iteration form popularly known as Triple-DES.
Triple-DES is DES done thrice, typically with two 56-bit keys. In the most popular form, the first key is used to DES-encrypt the message. The second key is used to DES-decrypt the encrypted message. Because this is not the right key, the attempted decryption only scrambles the data even more. The resultant ciphertext is then encrypted again with the first key to yield the final ciphertext. This three-step procedure is called Triple-DES. Sometimes, three keys are used. Because this follows an Encryption > Decryption > Encryption scheme, it is often known as DES-EDE. ANSI standard X9.52 describes Triple-DES encryption with keys k1, k2, k3 as:
C = Ek3(Dk2(Ek1(M)))

where Ek and Dk denote DES encryption and DES decryption, respectively, with the key k. Another variant is DES-EEE, which consists of three consecutive encryptions. There are three keying options defined in ANSI X9.52 for DES-EDE:
1. The three keys k1, k2, and k3 are different (three keys).
2. k1 and k2 are different, but k1 = k3 (two keys).
3. k1 = k2 = k3 (one key).
The third option makes Triple-DES backward-compatible with DES and offers no additional security.
AES (Rijndael). In 1997, NIST issued a Request for Proposals to select a symmetric-key encryption algorithm to be used to protect sensitive (unclassified) federal information. This was to become the Advanced Encryption Standard (AES), the DES replacement. In 1998, NIST announced the acceptance of 15 candidate algorithms and requested the assistance of the cryptographic research community in analyzing the candidates. This analysis included an initial examination of the security and efficiency characteristics for each algorithm.

NIST reviewed the results of this preliminary research and selected MARS, RC6™, Rijndael, Serpent, and Twofish as finalists. After additional review, in October 2000, NIST proposed Rijndael as AES. For research results and rationale for selection, see Reference 5. Before discussing AES, let us quote the most important answer from the Rijndael FAQ: “If you’re Dutch, Flemish, Indonesian, Surinamer or South African, it’s pronounced like you think it should be. Otherwise, you could


pronounce it like Reign Dahl, Rain Doll, or Rhine Dahl. We’re not picky. As long as you make it sound different from Region Deal.”6
Rijndael is a block cipher that can process blocks of 128-, 192-, and 256-bit length using keys 128, 192, and 256 bits long. All nine combinations of block and key lengths are possible. The AES standard specifies only 128-bit data blocks and 128-, 192-, and 256-bit key lengths. Our discussions will be confined to AES and not the full scope of Rijndael. Based on the key length, AES may be referred to as AES-128, AES-192, or AES-256.
We will present a simple description of Rijndael. For a mathematical treatment, see References 8 and 9. Rijndael involves an initial XOR of the state and a round key, nine rounds of transformations (or rounds), and a round performed at the end with one step omitted. The input to each round is called the state. Each round consists of four transformations: SubBytes, ShiftRow, MixColumn (omitted from the tenth round), and AddRoundKey. In the SubBytes transformation, each of the state bytes is independently transformed using a nonlinear S-box. In the ShiftRow transformation, the state is processed by cyclically shifting the last three rows of the state by different offsets. In the MixColumn transformation, data from all of the columns of the state are mixed (independently of one another) to produce new columns. In the AddRoundKey step in the cipher and inverse cipher transformations, a round key is added to the state using an XOR operation. The length of a round key equals the size of the state.

Weaknesses and Attacks
A well-known and frequently used encryption is the stream cipher available with PKZIP. Unfortunately, there is also a well-known attack involving known plaintext against this — if you know part of the plaintext, it is possible to decipher the file.10 For any serious work, information security professionals should not use PKZIP’s encryption.
In 1975, it was theorized that a customized DES cracker would cost $20 million. In 1998, EFF built one for $220,000.2 With the advances in computing power, the time and money required to crack DES have come down even further. While DES is still being used, use AES or Triple-DES if possible.

Section Summary
• Symmetric cryptographic algorithms or ciphers are those that use the same key to encrypt and decrypt.
• Stream ciphers operate on a bit at a time.


• Stream ciphers use a key stream generator to continuously produce a key stream that is used to encrypt the message.
• A repeating key stream weakens the encryption and makes it vulnerable to cryptanalysis.
• Shift registers are often used in stream ciphers.
• Block ciphers operate on a block of data at a time.
• DES is the most popular block cipher.
• DES keys are sometimes referred to as 64-bit, but the effective length is 56 bits with 8 parity bits; hence, the actual key length is 56 bits.
• There are known weak DES keys; ensure that those are not used.
• DES itself has been broken, and it should be assumed that it is not secure against attack.
• Make plans to migrate away from DES; use Triple-DES or Rijndael instead of DES, if possible.
• Do not use the encryption offered by PKZIP for nontrivial work.

ASYMMETRIC (PUBLIC KEY) CRYPTOGRAPHY
Asymmetric is the term applied to a cryptographic system in which one key is used to encrypt and another is used to decrypt.

Background
This concept was invented in 1976 by Whitfield Diffie and Martin Hellman11 and independently by Ralph Merkle. The basic theory is quite simple: is there a pair of keys so that if one is used to encrypt, the other can be used to decrypt — and given one key, finding the other would be extremely hard? Luckily for us, the answer is yes, and this is the basis of asymmetric (often called public key) cryptography. There are many algorithms available, but most of them are either insecure or produce ciphertext that is larger than the plaintext. Of the algorithms that are both secure and efficient, only three can be used for both encryption and digital signatures.4 Unfortunately, these algorithms are often slower by a factor of 1000 compared to symmetric key encryption.
As a result, hybrid cryptographic systems are popular: Suppose Alice and Bob want to exchange a large message. Alice generates a random session key, encrypts it using asymmetric encryption, and sends it over to Bob, who has the other half of the asymmetric key to decode the session key. Because the session key is small, the overhead to asymmetrically encipher/decipher it is not too large. Now Alice encrypts the message with the session key and sends it over to Bob. Bob already has the session key and deciphers the message with it. Because the large message is enciphered/deciphered using much faster symmetric encryption, the performance is acceptable.
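A sketch of the hybrid approach just described, assuming the third-party pycryptodome package; the key sizes, cipher mode, and message are illustrative choices of ours, not a recommendation.

from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.Random import get_random_bytes

bob_key = RSA.generate(2048)                 # Bob's asymmetric key pair

# Alice: encrypt a random session key with Bob's public key, then encrypt
# the large message with the much faster symmetric cipher.
session_key = get_random_bytes(16)
wrapped_key = PKCS1_OAEP.new(bob_key.publickey()).encrypt(session_key)
alice_aes = AES.new(session_key, AES.MODE_EAX)
ciphertext, tag = alice_aes.encrypt_and_digest(b"a large message" * 1000)
# wrapped_key, alice_aes.nonce, ciphertext, and tag travel to Bob.

# Bob: recover the session key with his private key, then the message.
recovered_key = PKCS1_OAEP.new(bob_key).decrypt(wrapped_key)
bob_aes = AES.new(recovered_key, AES.MODE_EAX, nonce=alice_aes.nonce)
plaintext = bob_aes.decrypt_and_verify(ciphertext, tag)
assert plaintext.startswith(b"a large message")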


RSA
We will present a discussion of the most popular of the asymmetric algorithms — RSA, named after its inventors, Ron Rivest, Adi Shamir, and Leonard Adleman. Readers are directed to Reference 12 for an extensive treatment. RSA’s patent expired in September 2000, and RSA has put the algorithm in the public domain, enabling anyone to implement it at zero cost.
First, a mathematics refresher:
• If an integer P cannot be divided (without remainders) by any number other than itself and 1, then P is called a prime number. Examples of prime numbers are 2, 3, 5, and 7.
• Two integers are relatively prime if there is no integer greater than one that divides them both (their greatest common divisor is 1). For example, 15 and 16 are relatively prime, but 12 and 14 are not.
• The mod is defined as the remainder. For example, 5 mod 3 = 2 means divide 5 by 3 and the result is the remainder, 2.
Note that RSA depends on the difficulty of factoring very large numbers (the product of two large primes). If there is a sudden leap in computer technology or mathematics that changes that, the security of such encryption schemes will be broken. Quantum and DNA computing are two fields to watch in this arena.
Here is a step-by-step description of RSA:
1. Find P and Q, two large (e.g., 1024-bit or larger) prime numbers. For our example, we will use P = 11 and Q = 19, which are adequate for this example (and more manageable).
2. Calculate the product PQ, and also the product (P – 1)(Q – 1). So PQ = 209, and (P – 1)(Q – 1) = 180.
3. Choose an odd integer E such that E is less than PQ, and such that E and (P – 1)(Q – 1) are relatively prime. We will pick E = 7.
4. Find the integer D so that (DE – 1) is evenly divisible by (P – 1)(Q – 1). D is called the multiplicative inverse of E. This is easy to do: let us assume that the result of evenly dividing (DE – 1) by (P – 1)(Q – 1) is X, where X is also an integer. So we have X = (DE – 1)/((P – 1)(Q – 1)); and solving for D, we get D = (X(P – 1)(Q – 1) + 1)/E. Start with X = 1 and keep increasing its value until D is an integer. For our example, D works out to be 103.
5. The public key is (E and PQ); the private key is D. Destroy P and Q (note that given P and Q, it would be easy to work out E and D; but given only PQ and E, it would be hard to determine D). Give out your public key (E, PQ) and keep D secure and private.
6. To encrypt a message M, we raise M to the Eth power, divide it by PQ, and the remainder (the mod) is the ciphertext. Note that M must be less than PQ. A mathematical representation will be ciphertext = M^E


mod PQ. So if we are encrypting 13 (M = 13), our ciphertext = 13^7 mod 209 = 29.
7. To decrypt, we take the ciphertext, raise it to the Dth power, and take the mod with PQ. So plaintext = 29^103 mod 209 = 13.
Compared to DES, RSA is about 100 times slower in software and 1000 times slower in hardware. Because AES is even faster than DES in software, the performance gap will widen in software-only applications.

Elliptic Curve Cryptosystems (ECC)
As we saw, solving RSA depends on a hard math problem: factoring very large numbers. There is another hard math problem: reversing exponentiation (logarithms). For example, it is possible to easily raise 7 to the 4th power and get 2401; but given only 2401, reversing the process and obtaining 7^4 is more difficult (at least as hard as performing large factorizations). The difficulty in performing discrete logarithms over elliptic curves (not to be confused with an ellipse) is even greater;13 and for the same key size, it presents a more difficult challenge than RSA (or presents the same difficulty/security with a smaller key size). There is an implementation of ECC that uses the factorization problem, but it offers no practical advantage over RSA.
An elliptic curve has an interesting property: it is possible to define a point on the curve as the sum of two other points on the curve. Following is a high-level discussion of ECC. For details, see Reference 13.
Example: Alice and Bob agree on a nonsecret elliptic curve and a nonsecret fixed curve point F. Alice picks a secret random integer Ak as her secret key and publishes the point AP = Ak*F as her public key. Bob picks a secret random integer Bk as his secret key and publishes the point BP = Bk*F as his public key. If Alice wants to send a message to Bob, she can compute Ak*BP and use the result as the secret key for a symmetric block cipher like AES. To decrypt, Bob can compute the same key by finding Bk*AP, because Bk*AP = Bk*(Ak*F) = Ak*(Bk*F) = Ak*BP.
ECC has not been subject to the extensive analysis that RSA has and is comparatively new.

Attacks
It is possible to attack RSA by factoring large numbers, or by guessing all possible values of (P – 1)(Q – 1) or D. These are computationally infeasible, and users should not worry about them. But there are chosen ciphertext attacks against RSA that involve duping a person into signing a message (provided by the attacker). This can be prevented by signing a hash of the message, or by making minor cosmetic changes to the document before signing it.
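Returning to the worked RSA example above (P = 11, Q = 19, E = 7, D = 103), the arithmetic can be checked with a few lines of Python (3.8 or later for the modular inverse). This is a toy illustration only; real RSA uses very large primes and padding schemes.

P, Q, E = 11, 19, 7
n = P * Q                      # 209, the public modulus PQ
phi = (P - 1) * (Q - 1)        # 180

D = pow(E, -1, phi)            # multiplicative inverse of E mod (P-1)(Q-1)
assert D == 103 and (D * E) % phi == 1

message = 13
ciphertext = pow(message, E, n)      # 13^7 mod 209
plaintext = pow(ciphertext, D, n)    # 29^103 mod 209
assert ciphertext == 29 and plaintext == message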


For a description of attacks against RSA, see Reference 14. Hash functions are described later in this chapter.

Real-World Applications
Cryptography is often a business enabler. Financial institutions encrypt the connection between the user’s browser and Web pages that show confidential information such as account balances. Online merchants similarly encrypt the link so customer credit card data cannot be sniffed in transit. Some even use this as a selling point: “Our Web site is protected with the highest encryption available.” What they are really saying is that this Web site uses 128-bit SSL.
As an aside, there are no known instances of theft of credit card data in transit; but many high-profile stories of customer information theft, including theft of credit card information, are available. The theft was possible because enough safeguards were not in place, and the data was usable because it was in cleartext, that is, not encrypted. Data worth protecting should be protected in all stages, not just in transit.
SSL and TLS. Normal Web traffic is cleartext — your ISP can intercept it easily. Secure Sockets Layer (SSL) provides encryption between the browser and a Web server to provide security and identification. SSL was invented by Netscape15 and submitted to the Internet Engineering Task Force (IETF). In 1996, IETF began with SSL v3.0 and, in 1999, published TLS v1.0 as a proposed standard.16 TLS is a term not commonly used, but we will use TLS and SSL interchangeably.

Suppose Alice, running a popular browser, wants to buy a book from Bob’s online book store at bobsbooks.com, and is worried about entering her credit card information online. (For the record, SSL/TLS can encrypt connections between any two network applications and not only Web browsers and servers.) Bob is aware of this reluctance and wants to allay Alice’s fears — he wants to encrypt the connection between Alice’s browser and bobsbooks.com. The first thing he has to do is install a digital certificate on his Web server.
A certificate contains information about the owner of the certificate: e-mail address, owner’s name, certificate usage, duration of validity, and resource location or distinguished name (DN), which includes the common name (CN) (Web site address or e-mail address, depending on the usage), and the certificate ID of the person who certifies (signs) this information. It also contains the public key and, finally, a hash to ensure that the certificate has not been tampered with.
Anyone can create a digital certificate with freely available software, but just like a person cannot issue his own passport and expect it to be accepted at a border, browsers will not recognize self-issued certificates.


Digital certificate vendors have spent millions to preinstall their certificates into browsers, so Bob has to buy a certificate from a well-known certificate vendor, also known as a root certificate authority (CA). There are certificates available with 40-bit and 128-bit encryption. Because it usually costs the same amount, Bob should buy a 128-bit certificate and install it on his Web server. As of this writing, there are only two vendors with wide acceptance of certificates: Verisign and Thawte. Interestingly, Verisign owns Thawte, but Thawte certificate prices are significantly lower.
So now Alice comes back to the site and is directed toward a URL that begins with https instead of http. That is the browser telling the server that an SSL session should be initiated. In this negotiation phase, the browser also tells the server what encryption schemes it can support. The server will pick the strongest of the supported ciphers and reply back with its own public key and certificate information. The browser will check if it has been issued by a root CA. If not, it will display a warning to Alice and ask if she still wants to proceed. If the server name does not match the name contained in the certificate, it will also issue a warning. If the certificate is legitimate, the browser will:

• Generate a random symmetric encryption key
• Encrypt this symmetric key with the server’s public key
• Encrypt the URL it wants with the symmetric key
• Send the encrypted key and encrypted URL to the server

The server will:

• Decrypt the symmetric key with its private key
• Decrypt the URL with the symmetric key
• Process the URL
• Encrypt the reply with the symmetric key
• Send the encrypted reply back to the browser

In this case, although encryption is two-way, authentication is one-way only: the server’s identity is proven to the client but not vice versa. Mutual authentication is also possible and performed in some cases. In a high-security scenario, a bank could issue certificates to individuals, and no browser would be allowed to connect without those individual certificates identifying the users to the bank’s server.
What happens when a browser capable of only 40-bit encryption (older U.S. laws prohibited export of 128-bit browsers) hits a site capable of 128 bits? Typically, the site will step down to 40-bit encryption. But CAs also sell Super or Step-up certificates that, when encountered with a 40-bit browser, will actually temporarily enable 128-bit encryption in those browsers. Step-up certificates cost more than regular certificates.
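The negotiation described above can be observed with Python's standard ssl module; the host name below is just a placeholder, and the fields printed depend on the certificate the server actually presents.

import socket
import ssl

host = "www.example.com"                  # hypothetical server to inspect
context = ssl.create_default_context()    # trusted root CAs, hostname checking

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())   # e.g., TLSv1.3
        print("Cipher suite:", tls.cipher())
        cert = tls.getpeercert()                       # the server's certificate
        print("Subject:", cert.get("subject"))
        print("Valid until:", cert.get("notAfter"))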


Note that the root certificates embedded in browsers sometimes expire; the last big one was Verisign’s in 1999. At that time, primarily financial institutions urged their users to upgrade their browsers. Finally, there is another protocol called Secure HTTP that provides similar functionality but is very rarely used.

CHOOSING AN ALGORITHM
What encryption algorithm, with what key size, would an information security professional choose? The correct answer is: it depends. What is being encrypted, who do we need to protect against, and for how long are the questions that need to be answered. If it is stock market data, any encryption scheme that will hold up for 20 minutes is enough; in 20 minutes, the same information will be on a number of free quote services. Your password to the New York Times Web site? Assuming you do not use the same password for your e-mail account, SSL is overkill for that server. Credit card transactions, bank accounts, and medical records need the highest possible encryption, both in transit and in storage.

Export and International Use Issues
Until recently, exporting 128-bit Web browsers from the United States was a crime according to U.S. law. Exporting software or hardware capable of strong encryption is still a crime. Some countries have outlawed the use of encryption, and some other countries require a key escrow if you want to use encryption. Some countries have outlawed use of all but certain approved secret encryption algorithms. We strongly recommend that information security professionals become familiar with the cryptography laws of the land, especially if working in an international setting.17

Section Summary
• In asymmetric cryptography, one key is used to encrypt and another is used to decrypt.
• Asymmetric cryptography is often also known as public key cryptography.
• Asymmetric cryptography is up to 1000 times slower than symmetric cryptography.
• RSA is the most popular and well-understood asymmetric cryptographic algorithm.
• RSA’s security depends on the difficulty of factoring very large (>1024-bit) numbers.
• Elliptic curve cryptography depends on the difficulty of finding discrete logarithms over elliptic curves.


• Smaller elliptic curve keys offer similar security as comparatively larger RSA keys.
• It is possible to attack RSA through chosen ciphertext attacks.
• SSL is commonly used to encrypt information between a browser and a Web server.
• Choosing a cipher and key length depends on what needs to be encrypted, for how long, and against whom.
• There are significant legal implications of using encryption in a multinational setting.

KEY MANAGEMENT AND EXCHANGE
In symmetric encryption, what happens when one person who knows the keys goes to another company (or to a competitor)? Even with public key algorithms, keeping the private key secret is paramount: without it, all is lost. For attackers, the reverse is true; it is often easier to attack the key storage instead of trying to crack the algorithm. A person who knows the keys can be bribed or kidnapped and tortured to give up the keys, at which time the encryption becomes worthless. Key management describes the problems and solutions to securely generating, exchanging, installing and storing, verifying, and destroying keys.

Generation
Encryption software typically generates its own keys (it is possible to generate keys in one program and use them in another); but because of the implementation, this can introduce weaknesses. For example, DES software that picks a known weak or semi-weak key will create a major security issue. It is important to use the largest possible keyspace: a 56-bit DES key can be picked from the 256-character ASCII set, the first 128 characters of ASCII, or the 26 letters of the alphabet. Guessing the 56-bit DES key (an exhaustive search) involves trying out all 56-bit combinations from the keyspace. Common sense tells us that an exhaustive search over a 256-character set will take much longer than one over 26 characters. With a large keyspace, the keys must also be random enough so as to not be guessable.

Exchange
Alice and Bob are sitting on two separate islands. Alice has a bottle of fine wine, a lock, its key, and an empty chest. Bob has another lock and its key. An islander is willing to transfer items between the islands but will keep anything that he thinks is not secured, so you cannot send a key, an unlocked lock, or a bottle of wine on its own. How does Alice send the wine to Bob? See the answer at the end of this section.


This is actually a key exchange problem in disguise: how does Alice get a key to Bob without its being compromised by the messenger? For asymmetric encryption, it is easy — the public key can be given out to the whole world. For symmetric encryption, a public key algorithm (like SSL) can be used; or the key may be broken up and each part sent over different channels and combined at the destination.
Answer to our key/wine exchange problem: Alice puts the bottle into the chest and locks it with her lock, keeps her key, and sends the chest to the other island. Bob locks the chest with his lock and sends it back to Alice. Alice takes her lock off the chest and sends it back to Bob. Bob unlocks the chest with his key and enjoys the wine.

Installation and Storage
How a key is installed and stored is important. If the application does no initial validation before installing a key, an attacker might be able to insert a bad key into the application. After the key is installed, can it be retrieved without any access control? If so, anyone with access to the computer would be able to steal that key.

Change Control
How often a key is changed determines its effectiveness. If a key is used for a long time, an attacker might have sufficient samples of ciphertext to be able to cryptanalyze the information. At the same time, each change brings up the exchange problem.

Destruction
A key no longer in use has to be disposed of securely and permanently. In the wrong hands, recorded ciphertext may be decrypted and give an enemy insights into current ciphertext.

Examples and Implementations
PKI. A public key infrastructure (PKI) is the set of systems and software required to use, manage, and control public key cryptography. It has three primary purposes: publish public keys, certify that a public key is tied to an individual or entity, and provide verification as to the continued validity of a public key. As discussed before, a digital certificate is a public key with identifying information for its owner. The certificate authority (CA) “signs” the certificate and verifies that the information provided is correct. Now all entities that trust the CA can trust that the identity provided by a certificate is correct. The CA can revoke the certificate and put it in the certificate revocation list (CRL), at which time it will not be trusted anymore.
An extensive set of PKI standards and documentation is available.18 Large companies run their own CA for intranet/extranet use. In Canada and Hong


Kong, large public CAs are operational. But despite the promises of the "year of the PKI," market acceptance and implementation of PKIs are still in the future.

Kerberos. From the comp.protocols.kerberos FAQ:

Kerberos; also spelled Cerberus. n. The watchdog of Hades, whose duty it was to guard the entrance — against whom or what does not clearly appear; it is known to have had three heads. — Ambrose Bierce, The Enlarged Devil's Dictionary

Kerberos was developed at MIT in the 1980s and publicly released in 1989. The primary purposes were to prevent cleartext passwords from traversing the network and to ease the log-in process to multiple machines.19 The current version is 5; there are known security issues with version 4. The three heads of Kerberos comprise the key distribution center (KDC), the client, and the server that the client wants to access. Kerberos 5 is built into Windows 2000 and later, which will probably result in wider adoption of Kerberos (notwithstanding some compatibility issues in the Microsoft implementation of the protocol).20 The KDC runs two services: the authentication service (AS) and the ticket granting service (TGS). A typical Kerberos session (shown in Exhibit 35-6) proceeds as follows when Alice wants to log on to her e-mail and retrieve it:

1. She requests a ticket granting ticket (TGT) from the KDC, where she already has an account. The KDC has a hash of her password, so she does not have to provide it. (The KDC must be extremely secure to protect all these passwords.)
2. The AS on the KDC sends Alice a TGT encrypted with her password hash. Without knowing the password, she cannot decrypt the TGT.
3. Alice decrypts the TGT; then, using the TGT, she sends another request to the KDC for a service ticket to access her e-mail server. The service ticket will not be issued without the TGT and will work only for the e-mail server.
4. The TGS on the KDC grants Alice the service ticket.
5. Alice accesses the e-mail server.

Note that both the TGT and the service ticket (ST) have expiration times (the default is ten hours), so even if one or both tickets are captured, the exposure lasts only until the ticket expires. The clocks of all computers participating in a Kerberos system, including the KDC, the clients, and all services that grant access, must be within five minutes of each other. Finally, the e-mail server must be kerberized (support Kerberos).


Exhibit 35-6. Kerberos in operation. [Figure: Alice, the KDC (running the Authentication Service and Ticket Granting Service), and a kerberized e-mail server exchange the following messages: (1) "I am Alice, and I need a TGT." (2) "Here's your encrypted TGT, but you need Alice's password to decrypt it." (3) and (4) "Here's my TGT. Give me a Service Ticket." / "Here's the Service Ticket." (5) "I am Alice, and here is my Service Ticket." (6) "OK, you can access now."]

Section Summary

• Weaknesses in key management (generating/exchanging/storing/installing/destroying keys) can compromise security.
• Public key cryptography is often the best solution to key distribution issues.
• A public key infrastructure (PKI) is a system that can manage public keys.
• A certificate authority (CA) is the component of a PKI that validates public keys.
• Digital certificates are essentially public keys that also include key owner information. The key and information are verified by a CA.
• If an entity trusts a CA, it can also trust digital certificates that the CA signs (authenticates).
• Kerberos is a protocol for eliminating cleartext passwords across networks.
• A ticket granting ticket (TGT) is issued to the user, who uses it to request a service ticket. All tickets expire after a certain time.
• Under Kerberos, tickets are encrypted and cleartext passwords never cross the network.



HASH FUNCTIONS

A hash function is defined as a process that can take an arbitrary-length message and return a fixed-length value from that message. For practical use, we require further qualities:

• Given a message, it should be easy to find the hash.
• Given the hash, it should be hard to find the message.
• Given the message, it should be hard to find another (specific or random) message that produces the same hash.

Message Digests

A message digest is the product of a one-way hash function applied to a message: it is a fingerprint, a summary that can uniquely identify the message.

MD2, MD4, and MD5. Ron Rivest (the R in RSA) designed all of these. All three produce 128-bit hashes. MD4 has been successfully attacked. MD5 has been found weak in certain cases; it is possible to find another random message that will produce the same hash. MD2 is slower, although no known weaknesses exist.

SHA. The secure hash algorithm (SHA) was designed by NIST and the NSA and is used in the Digital Signature Standard. It is specified in the Secure Hash Standard (SHS), available as FIPS 180-1.21

The current SHA produces a 160-bit hash and is also known as SHA-1. There are additional standards undergoing public comment and review that will offer 256-, 384-, and 512-bit hashes; the proposed standards will offer security matching the level of AES. The draft is available as FIPS 180-2.22

Applications of Message Digests

Message digests are useful and should be used to provide message integrity. Suppose Alice wants to pay $2000 to Eve, a contract network administrator. She types an e-mail to Bob, her accountant, to that effect. Before sending the message, Alice computes the message digest (SHA-1 or MD5) of the message and then sends the message followed by the message digest. Eve intercepts the e-mail and changes $2000 to $20,000; but when Bob computes the message digest of the e-mail, it does not match the one from Alice, and he knows that the e-mail has been tampered with. But how do we ensure that the e-mail to Bob indeed came from Alice, since faking an e-mail source address is notoriously easy? This is where digital signatures come in.
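The Alice/Bob/Eve scenario above can be sketched in a few lines of Python. The message text and the use of SHA-1 through the standard hashlib module are illustrative assumptions only, not part of any particular mail system.

import hashlib

def digest(message: str) -> str:
    # One-way hash of the message; any change to the text changes the digest.
    return hashlib.sha1(message.encode("utf-8")).hexdigest()

original = "Bob, please pay Eve $2000 for the contract network work. -- Alice"
sent_digest = digest(original)                     # Alice sends this along with the e-mail

tampered = original.replace("$2000", "$20,000")    # Eve alters the amount in transit

# Bob recomputes the digest over the message he received and compares it.
print(digest(tampered) == sent_digest)   # False: the alteration is detected
print(digest(original) == sent_digest)   # True: an unmodified message verifies

As the sketch shows, the digest detects tampering but says nothing about who sent the message; that gap is what digital signatures address.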


Digital Signatures

Digital signatures were designed to provide the same features as a conventional ("wet") signature. The signature must be non-repudiatable, and it must be nontransferable (it cannot be lifted and reused on another document). It must also be irrevocably tied back to the person who owns it. It is possible to use symmetric encryption to digitally sign documents using an intermediary who shares a key with each party, even though the two parties do not share a common key; but this is cumbersome and not practical. Using public key cryptography solves this problem neatly. Alice will encrypt a document with her private key, and Bob will decrypt it with Alice's public key. Because it could have been encrypted only with Alice's private key, Bob can be sure it came from Alice. But there are two issues to watch out for: (1) the rest of the world may also have Alice's public key, so there will be no privacy in the message; and (2) Bob will need a trusted third party (a certificate authority) to vouch for Alice's public key. In practice, signing a long document may be computationally costly. Typically, a one-way hash of the document is generated first, the hash is signed, and then both the signed hash and the original document are sent. The recipient also creates a hash and compares the decrypted signed hash to the generated one. If both match, the signature is valid.

Digital Signature Algorithm (DSA)

NIST proposed DSA in 1991 for use in the Digital Signature Standard; the standard was issued in May 1994, and in January 2000 NIST announced the latest version as FIPS PUB 186-2.23 As the name implies, this is purely a signature standard and cannot be used for encryption or key distribution. The operation is pretty simple. Alice creates a message digest using SHA-1, uses her private key to sign it, and sends the message and the signed digest to Bob. Bob also uses SHA-1 to generate the message digest from the message and uses Alice's public key to decrypt the received signed digest. Then the two message digests are compared. If they match, the signature is valid. Finally, digital signatures should not be confused with the horribly weakened "electronic signature" law passed in the United States, where a touchtone phone press could be considered an electronic signature and enjoy legal standing equivalent to an ink signature.

Message Authentication Codes (MACs)

MACs are one-way hash functions that include a key. People with the identical key will be able to verify the hash. MACs provide authentication of files between users and may also provide file integrity to a single user to


ensure that files have not been altered (for example, by a Web site defacement). On a Web server, the MAC of all files could be computed and stored in a table. With only a one-way hash, new values could have been inserted in the table and the user would not notice. But with a MAC, because the attacker will not know the key, the table values will not match; and an automated process could alert the owner (or automatically replace files from backup). A one-way hash function can be turned into a MAC by encrypting the hash using a symmetric algorithm and keeping the key secret. A MAC can be turned into a one-way hash function by disclosing the key.

Section Summary

• Hash functions can create a fixed-length digest of arbitrary-length messages.
• One-way hashes are useful: given a hash, finding the message should be very hard.
• Two messages should not generate the same hash.
• MD2, MD4, and MD5 all produce 128-bit hashes.
• SHA-1 produces a 160-bit hash.
• Encrypting a message digest with a private key produces a digital signature.
• Message authentication codes are one-way hashes with the key included.

OTHER CRYPTOGRAPHIC NOTES

Steganography

Steganography comes from Greek and means covered writing. It is a method that attempts to hide the existence of a message or communication. In February 2001, USA Today reported that terrorists were using steganography to hide their communication in images on the Internet,24 and various other news organizations also circulated this story. A University of Michigan study25 examined this claim by analyzing two million images downloaded from the Internet and failed to find a single instance. In its basic form, steganography is simple. For example, every third letter of a memo could hide a message. And it has an added advantage over encryption in that it does not arouse suspicion: often, the presence of encryption could set off an investigation, but a message hidden in plain sight would be ignored. The medium that hides the message is called the cover medium, and it must have parts that can be altered or used without damaging or noticeably changing the cover medium. In the case of digital cover media, these alterable parts are called redundant bits. These redundant bits, or a subset of them, can be replaced with the message we want to hide.
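To illustrate the idea of replacing redundant bits, the short Python sketch below hides the bits of a text message in the least significant bit of each byte of a cover buffer. The raw byte buffer and the one-bit-per-byte scheme are simplifying assumptions made for illustration; real tools work on specific image or audio formats.

def hide(cover: bytearray, secret: bytes) -> bytearray:
    # Replace the least significant bit of each cover byte with one bit of the secret.
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover medium too small"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def reveal(stego: bytearray, length: int) -> bytes:
    # Reassemble the hidden bytes from the least significant bits.
    bits = [b & 1 for b in stego[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n * 8 : n * 8 + 8]))
        for n in range(length)
    )

cover = bytearray(range(256)) * 4               # stand-in for pixel or sample data
stego = hide(cover, b"meet at dawn")
print(reveal(stego, len(b"meet at dawn")))      # b'meet at dawn'

Because only the lowest bit of each byte changes, the altered cover data is visually and statistically close to the original, which is exactly why hidden messages are hard to notice.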


Interestingly, steganography in digital media is very similar to digital watermarking, where a song or an image can be uniquely identified to prevent theft or unauthorized use.

Digital Notary Public

Digital notary service is a logical extension of digital signatures. Without this service, Alice could send a digitally signed offer to Bob to buy a property; but after property values drop the next day, she could claim she lost her private key and call the message a forgery. Digital notaries could be trusted third parties that will also time-stamp Alice's signature and give Bob legal recourse if Alice tries to back out of the deal. There are commercial providers of this type of service. With time-sensitive offers, this becomes even more important. Forging the date on a paper document is a difficult if not impossible task, and an expert can easily detect the attempt. With electronic documents, time forgeries are easy and detection is almost impossible (a system administrator can change the time stamp of an e-mail on the server). One do-it-yourself time-stamping method suggests publishing the one-way hash of the message in a newspaper (as a commercial notice or advertisement). From then on, the date of the message will be time-stamped and available for everyone to verify.

Backdoors and Digital Snake Oil

We will reiterate our warnings about not using in-house cryptographic algorithms or a brand-new encryption technology that has not been publicly reviewed and analyzed. It may promise speed and security or low cost, but remember that only algorithms that have withstood documented attacks are worthy of serious use; others should be treated as unproven technology, not ready for prime time. Also, be careful before using specific software that a government recommends. For example, Russia mandates use of certain approved software for strong encryption. It has been mentioned that the government certifies all such software after behind-the-scenes key escrow. To operate in Russia, a business may not have any choice in this matter, but knowing that the government could compromise the encryption may allow the business to adopt other safeguards.

References

1. Data Encryption Standard (DES): http://www.itl.nist.gov/fipspubs/fip46-2.htm.
2. Specialized DES cracking computer: http://www.eff.org/descracker.html.
3. Advanced Encryption Standard (AES): http://csrc.nist.gov/publications/fips/fips197/fips197.pdf.
4. Bruce Schneier, Applied Cryptography, 2nd edition, ISBN 0-471-11709-9.
5. Weak DES keys: http://www.ietf.org/rfc/rfc2409.txt, Appendix A.
6. AES selection report: http://csrc.nist.gov/encryption/aes/round2/r2report.pdf.



7. Rijndael developer's site: http://www.esat.kuleuven.ac.be/~rijmen/rijndael/.
8. Rijndael technical overview: http://www.baltimore.com/devzone/aes/tech_overview.html.
9. Rijndael technical overview: http://www.sans.org/infosecFAQ/encryption/mathematics.htm.
10. PKZIP encryption weakness: http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/1994/CS/CS0842.ps.gz.
11. Diffie and Hellman paper on public key crypto: http://cne.gmu.edu/modules/acmpkp/security/texts/NEWDIRS.PDF.
12. RSA algorithm: http://www.rsasecurity.com/rsalabs/rsa_algorithm/index.html.
13. Paper on elliptic curve cryptography: ftp://ftp.rsasecurity.com/pub/cryptobytes/crypto1n2.pdf.
14. Attacks on RSA: http://crypto.stanford.edu/~dabo/abstracts/RSAattack-survey.html.
15. SSL 3.0 protocol: http://www.netscape.com/eng/ssl3/draft302.txt.
16. TLS 1.0 protocol: http://www.ietf.org/rfc/rfc2246.txt.
17. International encryption regulations: http://cwis.kub.nl/~frw/people/koops/lawsurvy.htm.
18. IETF PKI working group documents: http://www.ietf.org/html.charters/pkix-charter.html.
19. Kerberos documentation collection: http://web.mit.edu/kerberos/www/.
20. Kerberos issues in Windows 2000: http://www.nrl.navy.mil/CCS/people/kenh/kerberos-faq.html#ntbroken.
21. Secure Hash Standard (SHS): http://www.itl.nist.gov/fipspubs/fip180-1.htm.
22. Improved SHS draft: http://csrc.nist.gov/encryption/shs/dfips-180-2.pdf.
23. Digital Signature Standard (DSS): http://csrc.nist.gov/publications/fips/fips186-2/fips186-2-change1.pdf.
24. USA Today story on steganography: http://www.usatoday.com/life/cyber/tech/2001-02-05binladen.htm#more.
25. Steganography study: http://www.citi.umich.edu/techreports/reports/citi-tr-01-11.pdf.

ABOUT THE AUTHOR Javed Ikbal, CISSP, works at a major financial services company as director, IT security, where he is involved in security architecture, virus/cyber incident detection and response, policy development, and building custom tools to solve problems. A proponent of open-source security tools, he is a believer in the power of PERL.




Chapter 36

Hash Algorithms: From Message Digests to Signatures Keith Pasley, CISSP

There are many information-sharing applications in use on modern networks today. Concurrently, there are a growing number of users sharing data of increasing value to both sender and recipient. As the value of data increases among users of information-sharing systems, the risks of unauthorized data modification, user identity theft, fraud, unauthorized access to data, data corruption, and a host of other business-related problems, mainly dealing with data integrity and user authentication, are introduced. The issues of integrity and authentication play an important part in the economic systems of human society. Few would do business with companies and organizations that do not prove trustworthy or competent. For example, the sentence "I owe Alice US$500" has a hash result of "gCWXVcL3fPV8VrJNajm8JKA==," while the sentence "I owe Alice US$5000" has a hash of "DSAyXRTza2bHLH46IPMrSq==." As can be seen, there is a big difference in hash results between the two sentences. If an attacker were trying to misappropriate the $4500 difference, hashing would allow detection. Why are hash algorithms needed, and which problems do they solve? Consider the following questions:

• Is the e-mail you received really from who it says it is?
• Can you ensure the credit card details you submit are going to the site you expected?
• Can you be sure the latest anti-virus, firewall, or operating system software upgrade you install is really from the vendor?
• Do you know if the Web link you click on is genuine?
• Does the program hash the password when performing authentication, or does it just pass it in the clear?



Exhibit 36-1. The hash function.

4 ∗ 3 = 12
Drop the first digit (1), leaves 2
2 ∗ next number (3) = 6
6 ∗ next number (7) = 42
Drop the first digit, leaves 2
2 ∗ next number (3) = 6
6 ∗ next number (8) = 48
Drop the first digit (4), leaves 8

• Is there a way to know who you are really dealing with when disclosing your personal details over the Internet?
• Are you really you?
• Has someone modified a Web page or file without authorization?
• Can you verify that your routers are forwarding data only to authorized peer routers?
• Has any of the data been modified en route to its destination?
• Can hash algorithms help answer these questions?

WHAT ARE HASH ALGORITHMS?

A hash algorithm is a one-way mathematical function that is used to compress a large block of data into a smaller, fixed-size representation of that data. To understand the concept of hash functions, it is helpful to review some underlying mathematical structures. One such structure is called a function. When hash functions were first introduced in the 1950s, the goal was to map a message into a smaller message called a message digest. This smaller message was used as a sort of shorthand of the original message. The digest was used originally for the detection of random and unintended errors in processing and transmission by data processing equipment.

Functions

A function is a mathematical structure that takes one or more variables and outputs a variable. To illustrate how scientists think about functions, one can think of a function in terms of a machine (see Exhibit 36-1). The machine in this illustration has two openings. In this case the input opening is labeled x and the output opening is labeled y. These are considered traditional names for input and output. The following are the basic processing steps of mathematical functions:

1. A number goes in.
2. Something is done to it.
3. The resulting number is the output.


The same thing is done to every number input into the function machine. Step 2 above describes the actual mathematical transformation done to the input value, or hashed value, which yields the resulting output, or hash result. In this illustration, Step 2 can be described as a mathematical rule as follows: x + 3 = y. In the language of mathematics, if x is equal to 1, then y equals 4. Similarly, if x is equal to 2, then y equals 5. In this illustration the function, or mathematical structure called an algorithm, is: for every number x, add 3 to the number. The result, y, is dependent on what is input, x. As another example, suppose that, to indicate an internal company product shipment, the number 43738 is exchanged. The hash function, or algorithm, is described as: multiply each number from left to right, and the first digit of any multiplied product above 9 is dropped. The hash function could be illustrated in mathematical notation as: x ∗ the number to the right = y (see Exhibit 36-1). The input into a hash algorithm can be of variable length, but the output is usually of fixed length and somewhat shorter than the original message. The output of a hash function is called a message digest. In the case above, the hash input was of arbitrary (and variable) length, but the hash result, or message digest, was of a fixed length of 1 digit, 8. As can be seen, a hash function provides a shorthand representation of the original message. This is also the concept behind error checking (checksums) done on data transmitted across communications links. Checksums provide a nonsecure method to check for message accuracy or message integrity. It is easy to see how the relatively weak mathematical functions described above could be manipulated by an intruder to change the hash output. Such weak algorithms could result in the successful alteration of message content, leading to inaccurate messages. If you can understand the concept of what a function is and does, you are on your way to understanding the basic concepts embodied in hash functions. Providing data integrity and authentication for such applications requires reliable, secure hash algorithms.

Secure Hash Algorithms

A hash algorithm was defined earlier as a one-way mathematical function that is used to compress a large block of data into a smaller, fixed-size representation of that data. An early application for hashing was in detecting unintentional errors in data processing. However, due to the critical nature of their use in the high-security environments of today, hash algorithms must now also be resilient to deliberate and malicious attempts to break secure applications by highly motivated human attackers, more so than by erroneous data processing. The one-way nature of hash algorithms is one of the reasons they are used in public key cryptography.


Exhibit 36-2. Output bit lengths.

Hash Algorithm    Output Bit Length
SHA-1             160
SHA-256           256
SHA-384           384
SHA-512           512

A one-way hash function processes a bit stream in a manner that makes it highly unlikely that the original message can be deduced from the output value. This property of a secure hash algorithm has significance in situations where there is zero tolerance for unauthorized data modification or where the identity of an object needs to be validated with a high assurance of accuracy. Applications such as user authentication and financial transactions are made more trustworthy by the use of hash algorithms. Hash algorithms are called secure if they have the following properties:

• The hash result should not be predictable. It should be computationally impractical to recover the original message from the message digest (one-way property).
• No two different messages, over which a hash algorithm is applied, will result in the same digest (collision-free property).

Secure hash algorithms are designed so that any change to a message will have a high probability of resulting in a different message digest. As such, message alteration can be detected by comparing the hash results computed before and after transmission. The receiver can tell that a message has suspect validity by the fact that the message digest computed by the sender does not match the message digest computed by the receiver, assuming both parties are using the same hash algorithm. The most common hash algorithms as of this writing are based on secure hash algorithm-1 (SHA-1) and message digest 5 (MD5).

Secure Hash Algorithm

SHA-1, part of the Secure Hash Standard, was one of the earliest hash algorithms specified for use by the U.S. federal government (see Exhibit 36-2). SHA-1 was developed by NIST and the NSA and was published as a federal government standard in 1995. SHA-1 was an update to the SHA, which was published in 1993.

How SHA-1 Works

Think of SHA-1 as a hash machine that has two openings, input and output. The input value is called the hashed value, and the output is called the


hash result. The hashed values are the bit streams that represent an electronic message or other data object. The SHA-1 hash function, or algorithm, transforms the hashed value by performing a mathematical operation on the input data. The length of the message is the same as the number of bits in the message. The SHA-1 algorithm processes blocks of 512 bits in sequence when computing the message digest. SHA-1 produces a 160-bit message digest. SHA-1 has a limitation on input message size of less than 18 quintillion (that is, 2^64 or 18,446,744,073,709,551,616) bits in length. SHA-1 has five steps to produce a message digest:

1. Append padding to make the message length 64 bits less than a multiple of 512.
2. Append a 64-bit block representing the length of the message before padding.
3. Initialize the message digest buffer with five hexadecimal numbers. These numbers are specified in the FIPS 180-1 publication.
4. Process the message in 512-bit blocks. This process consists of 80 steps of processing (four rounds of 20 operations), reusing four different hexadecimal constants, and some shifting and adding functions.
5. Process the output blocks into a 160-bit message digest.

MD5

SHA was derived from the secure hash algorithms MD4 and MD5, developed by Professor Ronald L. Rivest of MIT in the early 1990s. As can be expected, SHA and MD5 work in a similar fashion. While SHA-1 yields a 160-bit message digest, MD5 yields a 128-bit message digest. SHA-1, with its longer message digest, is considered more secure than MD5 by modern cryptography experts, due in part to the longer output bit length and the resulting increased collision resistance. However, MD5 is still in common use as of this writing.

Keyed Hash (HMAC)

Modern cryptographers have found the hash algorithms discussed above to be insufficient for extensive use in commercial cryptographic systems or in private electronic communications, digital signatures, electronic mail, electronic funds transfer, software distribution, data storage, and other applications that require data integrity assurance, data origin authentication, and the like. The use of asymmetric cryptography and, in some cases, symmetric cryptography has extended the usefulness of hashing by associating identity with a hash result. The structure used to convey the property of identity (data origin) along with a data object's integrity is the hashed message authentication code (HMAC), or keyed hash.
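Before moving on to keyed hashes, the difference in digest sizes just described can be seen with a few lines of Python; the use of the standard hashlib module and the sample message are assumptions made purely for illustration.

import hashlib

message = b"Transfer 100 units to account 42"
md5_digest = hashlib.md5(message).digest()
sha1_digest = hashlib.sha1(message).digest()

# MD5 always yields 16 bytes (128 bits); SHA-1 always yields 20 bytes (160 bits),
# regardless of how long the input message is.
print(len(md5_digest) * 8)    # 128
print(len(sha1_digest) * 8)   # 160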


For example, how does one know if the message and the message digest have not been tampered with? One way to provide a higher degree of assurance of identity and integrity is by incorporating a cryptographic key into the hash operation. This is the basis of the keyed hash or hashed message authentication code (HMAC). The purpose of a message authentication code (MAC) is to provide verification of the source of a message and integrity of the message without using additional mechanisms. Other goals of HMAC are as follows:

• To use available cryptographic hash functions without modification
• To preserve the original performance of the selected hash without significant degradation
• To use and handle keys in a simple way
• To have a well-understood cryptographic analysis of the strength of the mechanism based on reasonable assumptions about the underlying hash function
• To enable easy replacement of the hash function in case a faster or stronger hash is found or required

To create an HMAC, an asymmetric (public/private) or a symmetric cryptographic key can be appended to a message and then processed through a hash function to derive the HMAC. In mathematical terms: if x = (key + message) and f = SHA-1, then f(x) = HMAC. Any hash function can be used, depending on the protocol defined, to compute the type of message digest called an HMAC. The two most common hash functions are based on MD5 and SHA. The message data and the HMAC (the message digest of a secret key and message) are sent to the receiver. The receiver processes the message and the HMAC using the shared key and the same hash function as that used by the originator. The receiver compares the results with the HMAC included with the message. If the two results match, then the receiver is assured that the message is authentic and came from a member of the community that shares the key. Other examples of HMAC usage include challenge-response authentication protocols such as the Challenge Handshake Authentication Protocol (CHAP, RFC 1994). CHAP is defined as a peer entity authentication method for the Point-to-Point Protocol (PPP), using a randomly generated challenge and requiring a matching response that depends on a cryptographic hash of the challenge and a secret key. The Challenge-Response Authentication Mechanism (CRAM, RFC 2195), which specifies an HMAC using MD5, is a mechanism for authenticating Internet Mail Access Protocol (IMAP4) users. Digital signatures, used to authenticate data origin and integrity, employ HMAC functions as part of the "signing" process. A digital signature is created as follows:

1. A message (or some other data object) is input into a hash function (e.g., SHA-1, MD5, etc.).
2. The hash result is encrypted by the private key of the sender.


Exhibit 36-3. Other hash algorithms.

Hash Algorithm           Output Bit Length    Country
RIPEMD (160, 256, 320)   160, 256, 320        Germany, Belgium
HAS-160                  160                  Korea
Tiger                    128, 160, 192        United Kingdom

The result of these two steps yields what is called a digital signature of the message or data object. The properties of a cryptographic hash ensure that, if the data object is changed, the digital signature will no longer match it. There is a difference between a digital signature and an HMAC. An HMAC uses a shared secret key (symmetric cryptography) to "sign" the data object, whereas a digital signature is created by using a private key from a private/public key-pair (asymmetric cryptography) to sign the data object. The strengths of digital signatures lend themselves to use in high-value applications that require protection against forgery and fraud. See Exhibit 36-3 for other hash algorithms.

HOW HASH ALGORITHMS ARE USED IN MODERN CRYPTOGRAPHIC SYSTEMS

While in the past hash algorithms were used for rudimentary data integrity and user authentication, today hash algorithms are incorporated into other protocols: digital signatures, virtual private network (VPN) protocols, software distribution and license control, Web page file modification detection, database file system integrity, and software update integrity verification are just a few. Hash algorithm use in hybrid cryptosystems will be discussed next.

Transport Layer Security (TLS)

Transport Layer Security (TLS) is a network security protocol that is designed to provide data privacy and data integrity between two communicating applications. TLS was derived from the earlier Secure Sockets Layer (SSL) protocol developed by Netscape in the early 1990s. TLS is defined in IETF RFC 2246. TLS and SSL do not interoperate due to differences between the protocols. However, TLS 1.0 does have the ability to drop down to the SSL protocol during initial session negotiations with an SSL client. Deference is given to TLS by developers of most modern security applications. The security features designed into the TLS protocol include hashing. The TLS protocol is composed of two layers:

1. TLS Record Protocol
2. TLS Handshake Protocol (really a suite of three subprotocols)


The Record Protocol provides in-transit data privacy by specifying that symmetric cryptography be used in TLS connections. Connection reliability is accomplished by the Record Protocol through the use of HMACs. The Handshake Protocol is encapsulated within the Record Protocol. The TLS Handshake Protocol handles connection parameter establishment. The Handshake Protocol also provides for peer identity verification in TLS through the use of asymmetric (public/private) cryptography. TLS uses HMAC in a conservative fashion: the TLS specification calls for the use of both HMAC-MD5 and HMAC-SHA-1 during the Handshake Protocol negotiation, and throughout the protocol two hash algorithms are used to increase the security of various parameters. There are several uses of keyed hash algorithms (HMAC) within the TLS protocol:

• As the pseudorandom number function
• To protect record payload data
• To protect symmetric cryptographic keys (used for bulk data encryption/decryption)
• As part of the mandatory cipher suite of TLS

If any of the above parameters were not protected by security mechanisms such as HMACs, an attacker could thwart the electronic transaction between two or more parties. The TLS protocol is the basis for most Web-based in-transit security schemes. As can be seen by this example, hash algorithms provide an intrinsic security value to applications that require secure in-transit communication using the TLS protocol.

IPSec

The Internet Protocol Security (IPSec) protocol was designed as the packet-level security layer included in IPv6. IPv6 is a replacement TCP/IP protocol suite for IPv4. IPSec itself is flexible and modular in design, which allows the protocol to be used in current IPv4 implementations. Unlike the session-level security of TLS, IPSec provides packet-level security. VPN applications such as intranet and remote access use IPSec for communications security. Two protocols are used in IPSec operations, the Authentication Header (AH) and the Encapsulating Security Payload (ESP). Among other things, ESP is used to provide data origin authentication and connectionless integrity. Data origin authentication and connectionless integrity are joint services and are offered as an option in the implementation of the ESP. RFC 2406, which defines the ESP used in IPSec, states that either HMAC or one-way hash algorithms may be used in implementations. The authentication algorithms are used to create the integrity check value (ICV) used to authenticate an ESP


packet of data. HMACs ensure the rapid detection and rejection of bogus or replayed packets. Also, because the authentication value is passed in the clear, HMACs are mandatory if the data authentication feature of ESP is used. If data authentication is used, the sender computes the integrity check value (ICV) over the ESP packet contents minus the authentication data. After receiving an IPSec data packet, the receiver computes and compares the ICV of the received datagrams. If they are the same, then the datagram is authentic; if not, then the data is not valid, it is discarded, and the event can be logged. MD5 and SHA-1 are the currently supported authentication algorithms. The AH protocol provides data authentication for as much of the IP header as possible. Portions of the IP header are not authenticated due to changes to the fields that are made in the course of routing the packet to its destination. The use of HMAC by the ESP has, according to IPSec VPN vendors, negated the need for AH.

Digital Signatures

Digital signatures serve a purpose similar to that of written signatures on paper: to prove the authenticity of a document. Unlike a pen-and-paper signature, a digital signature can also prove that a message has not been modified. HMACs play an important role in providing the property of integrity to electronic documents and transactions. Briefly, the process for creating a digital signature is very much like creating an HMAC. A message is created, and the message and the sender's private key (asymmetric cryptography) serve as inputs to a hash algorithm. The hash result is attached to the message. The sender creates a symmetric session encryption key to optionally encrypt the document. The sender then encrypts the session key with the sender's private key, re-encrypts it with the receiver's public key to ensure that only the receiver can decrypt the session key, and attaches the signed session key to the document. The sender then sends the digital envelope (keyed hash value, encrypted session key, and the encrypted message) to the intended receiver. The receiver performs the entire process in reverse order. If the results match when the receiver decrypts the document and combines the sender's public key with the document through the specified hash algorithm, the receiver is assured that (1) the message came from the original sender and (2) the message has not been altered. The first case is due to the use of the sender's private key as part of the hashed value. In asymmetric cryptography, a mathematical relationship exists between the public and private keys such that either can encrypt and either can decrypt, but the same key cannot both encrypt and decrypt the same item. The private key is known only to its owner. As such, only the owner of the private key could have used it to develop the HMAC.
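By contrast, a shared-secret HMAC of the kind used by the ESP and the TLS Record Protocol can be sketched in a few lines of Python with the standard hmac and hashlib modules; the key, the message, and the choice of SHA-1 are illustrative assumptions only.

import hmac, hashlib

shared_key = b"community-shared-secret"     # known only to the sender and receiver
message = b"payload of an ESP packet or any other data object"

# Sender computes the HMAC and transmits it alongside the message.
tag = hmac.new(shared_key, message, hashlib.sha1).hexdigest()

# Receiver recomputes the HMAC over what arrived and compares in constant time.
received_tag = hmac.new(shared_key, message, hashlib.sha1).hexdigest()
print(hmac.compare_digest(tag, received_tag))   # True only if key and message both match

Anyone who holds the shared key can produce a matching tag, which is why an HMAC proves membership in the key-sharing community, whereas a digital signature, tied to a single private key, identifies one specific signer.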


Other Applications

HMACs are useful when there is a need to validate software that is downloaded from download sites. HMACs are used in logging onto various operating systems, including UNIX. When the user enters a password, the password is usually run through a hash algorithm, and the hashed result is compared to a user database or password file. An interesting use of hash algorithms to prevent software piracy is in the Windows XP registration process. SHA-1 is used to develop the installation ID used to register the software with Microsoft. During installation of Windows XP, the computer hardware is identified, reduced to binary representation, and hashed using MD5. The hardware hash is an eight-byte value that is created by running ten different pieces of information from the PC's hardware components through the MD5 algorithm. This means that the resultant hash value cannot be backward-calculated to determine the original values. Further, only a portion of the resulting hash value is used in the hardware hash in order to ensure complete anonymity. Detecting unauthorized file modification (such as Web page defacement or system file modification), verifying virus signature updates, signing XML documents, and signing database keys are all applications for which various forms of hashing can increase security levels.

PROBLEMS WITH HASH ALGORITHMS

Flaws have been discovered in various hash algorithms. One basic flaw is exploited by what is called the birthday attack.

Birthday Attack

This attack's name comes from the world of probability theory: out of any random group of 23 people, it is probable that at least two share a birthday. Finding two inputs that have the same hash result is known as a birthday attack. If a hash function f maps into message digests of length 60 bits, then an attacker can expect to find a collision using only about 2^30 inputs (2^(n/2) for an n-bit digest). Differential cryptanalysis has proven to be effective against one round of MD5. (There are four rounds of transformation defined in the MD5 algorithm.) When choosing a hash algorithm, speed of operation is often a priority. For example, in asymmetric (public/private) cryptography, a message may be hashed into a message digest as a data integrity enhancement. However, if the message is large, it can take some time to compute a hash result. In consideration of this, a review of speed benchmarks would give a basis for choosing one algorithm over another. Of course, implementation in hardware is usually faster than a software-based algorithm.
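For a sense of scale, the 2^(n/2) birthday bound mentioned above can be tabulated with a short Python sketch; the digest lengths chosen below are illustrative, and the figures are rough work-factor estimates rather than exact collision probabilities.

# Approximate work factor to find a collision for an n-bit digest: about 2^(n/2).
for n in (60, 128, 160, 256):
    print(f"{n}-bit digest: about 2^{n // 2} ({2 ** (n // 2):.2e}) hash computations")

The jump from roughly a billion computations for a 60-bit digest to astronomically large figures for 160- and 256-bit digests is the reason longer hash outputs are preferred.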


LOOKING TO THE FUTURE

SHA-256, -384, and -512

In the summer of 2001, NIST published for public comment a proposed update to the Secure Hash Standard (SHS) used by the U.S. Government. Although SHA-1 appears to still be part of the SHS, the update includes the recommendation to use hash algorithms with longer hash results. Longer hash results increase the work factor needed to break cryptographic hashing. This update of the Secure Hash Standard coincides with another NIST update: the selection of the Rijndael symmetric cryptography algorithm for U.S. Government use for encrypting data. According to NIST, it is thought that the cryptographic strength of Rijndael requires the higher strength of the new SHS algorithms. The new SHS algorithms feature similar functions but different structures. Newer and more secure algorithms, such as SHA-256, -384, and -512, may be integrated into the IPSec specification in the future to complement the Advanced Encryption Standard (AES), Rijndael. As of this writing, NIST has proposed updating SHA-1 as the government standard with the similarly designated SHA-256, -384, and -512, which differ in output word size. The newer iterations of the SHA allow for larger message digests, which increase the resilience of the properties of a secure hash algorithm discussed earlier.

SUMMARY

Hash algorithms have existed in many forms at least since the 1950s. As a result of the increased value of data interactions and the increased motivation of attackers seeking to exploit electronic communications, the requirements for hash algorithms have changed. At one time, hashing was used to detect inadvertent errors generated by data processing equipment and poor communication lines. Now, secure hash algorithms are used to associate source of origin with data integrity, thus tightening the bonds between data and the originator of the data. So-called HMACs facilitate this bonding through the use of public/private cryptography. Protocols such as TLS and IPSec use HMACs extensively. Over time, weaknesses in algorithms have been discovered, and hash algorithms have improved in reliability and speed. The present digital economy finds that hash algorithms are useful for creating message digests and digital signatures.

Further Reading

http://www.deja.com/group/sci.crypt.

ABOUT THE AUTHOR

Keith Pasley, CISSP, is a senior security technologist with Ciphertrust in Atlanta, Georgia.



Chapter 37

PKI Registration Alex Golod, CISSP

A PKI comprises many components: technical infrastructure, policies, procedures, and people. Initial registration of subscribers (users, organizations, hardware, or software) for a PKI service has many facets, pertaining to almost every one of the PKI components. There are many steps between the moment when subscribers apply for PKI certificates and the final state, when keys have been generated and certificates have been signed and placed in the appropriate locations in the system. These steps are described either explicitly or implicitly in the PKI Certification Practice Statement (CPS). Some of the companies in the PKI business provide all services: hosting Certificate and Registration Authorities (CAs and RAs); registering subscribers; issuing, publishing, and maintaining the current status of all types of certificates; and supporting a network of trust. Other companies sell their extraordinarily powerful software, which includes CAs, RAs, gateways, connectors, toolkits, etc. These components allow buyers (clients) to build their own PKIs to meet their business needs. In all these scenarios, the processes for registration of PKI subscribers may be very different. This chapter does not claim to be a comprehensive survey of PKI registration. We will simply follow a logical flow. For example, when issuing a new document, we first define the type of document, the purpose it will serve, and the policy by which the document will abide. Second, we define policies by which all participants will abide in the process of issuing that document. Third, we define the procedures that the parties will follow and which standards, practices, and technologies will be employed. Having this plan in mind, we will try to cover most of the aspects and phases of PKI registration.

CP, CPS, AND THE REGISTRATION PROCESS

The process of registration of subjects, as well as a majority of the aspects of PKI, is regulated by its Certificate Policies (CP) and Certification Practice Statement (CPS). The definitions of CP and CPS are given in RFC 2527, which provides a framework for the implementation of PKIs:



Certificate Policy: a named set of rules indicating the applicability of a certificate to a particular community or class of application with common security requirements. For example, a particular certificate policy might indicate the applicability of a type of certificate to the authentication of electronic data interchange transactions for the trading of goods within a given price range.

Certification Practice Statement (CPS): a statement of the practices that a certification authority employs in issuing certificates.

In other words, the CP says where and how a relying party will be able to use the certificates. The CPS says which practices the PKI (and in many cases its supporting services) will follow to guarantee to all the parties, primarily relying parties and subscribers, that the issued certificates may be used as declared in the CP. The relying parties and subscribers are guided by the paradigm that a certificate "… binds a public key value to a set of information that identifies the entity (such as person, organization, account, or site), associated with use of the corresponding private key (this entity is known as the 'subject' of the certificate)."1 The entity or subject in this quote is also called an end entity (EE) or subscriber. CPSs are expressed in a set of provisions. In this chapter we focus only on those provisions that pertain to the process of registration, which generally include:

• Identification and authentication
• Certificate issuance
• Procedural controls
• Key-pair generation and installation
• Private key protection
• Network security in the process of registration
• Publishing

A reference to the CP and CPS associated with a certificate may be presented in the X.509 v3 certificate extension called "Certificate Policies." This extension may give a relying party a great deal of information, identified by the Policy Identifier attribute, in the form of an Abstract Syntax Notation One Object ID (ASN.1 OID), and by the Policy Qualifier attribute. One type of Policy Qualifier is a reference to the CPS, which describes the practice employed by the issuer to register the subscriber (the subject of the certificate). See Exhibit 37-1.

REGISTRATION, IDENTIFICATION, AND AUTHENTICATION

For initial registration with a PKI, a subscriber usually has to go through the processes of identification and authentication. Among the rules and elements that may comprise these processes in a CPS are:



Exhibit 37-1. Certificate policies.

1. Types of names assigned to the subject
2. Whether names have to be meaningful
3. Rules for interpreting various name forms
4. Whether names have to be unique
5. How name claim disputes are resolved
6. Recognition, authentication, and role of trademarks
7. If and how the subject must prove possession of the companion private key for the public key being registered
8. Authentication requirements for organizational identity of the subject (CA, RA, or EE)


9. Authentication requirements for a person acting on behalf of a subject (CA, RA, or EE), including:
- The number of pieces of identification required
- How a CA or RA validates the pieces of identification provided
- Whether the individual must appear in person before the authenticating CA or RA
- How an individual acting as an organizational person is authenticated

The first six items of the list are more a concern of legal and naming conventions. They are beyond the scope of this chapter. The other items basically focus on three issues:

1. How the subject proves its organizational entity (above)
2. How the person acting on behalf of the subject authenticates himself in the process of requesting a certificate (above)
3. How the certificate issuer can be sure that the subject whose name is in the certificate request is really in possession of the private key whose public key is presented in the certificate request along with the subject name (above)

Another important component is the integrity of the process. Infrastructure components and subscribers should be able to authenticate themselves and support data integrity in all the transactions during the process of registration.

How the Subject Proves Its Organizational Entity

Authentication requirements in the process of registration with a PKI depend on the nature of the applying EE and on the CP, which states the purpose of the certificate. Among end entities there can be individuals, organizations, applications, elements of infrastructure, etc. Organizational certificates are usually issued to the subscribing organization's devices, services, or individuals representing the organization. These certificates support authentication, encryption, data integrity, and other PKI-enabled functionality when relying parties communicate with the organization. Among organizational devices and services may be:

• Web servers with SSL enabled, which support server authentication and encryption
• WAP gateways with WTLS enabled, which support gateway authentication
• Services and devices signing content (software code, documents, etc.) on behalf of the organization
• VPN gateways


• Devices, services, and applications supporting authentication, integrity, and encryption of electronic data interchange (EDI), B2B, or B2C transactions

Among the procedures enforced within applying organizations (before a certificate request is issued) are:

• An authority inside the organization should approve the certificate request.
• After that, an authorized person within the organization will submit a certificate application on behalf of the organization.
• The organizational certificate application will be submitted for authentication of the organizational identity.

Depending on the purpose of the certificate, a certificate issuer will try to authenticate the applying organization, which may include some but not all of the following steps, as in the example below:2

• Verify that the organization exists.
• Verify that the certificate applicant is the owner of the domain name that is the subject of the certificate.
• Verify the employment of the certificate applicant and whether the organization has authorized the applicant to represent the organization.

There is always a correlation between the level of assurance provided by the certificate and the strength of the process of validation and authentication of the EE registering with the PKI and obtaining that certificate.

How the Person, Acting on Behalf of the Subject, Authenticates Himself in the Process of Requesting a Certificate (Case Study)

Individual certificates may serve different purposes, for example, for e-mail signing and encryption, for user authentication when connecting to servers (Web, directory, etc.) to obtain information, or for establishing a VPN encryption channel. These kinds of certificates, according to their policy, may be issued to anybody who is listed as a member of a group (for example, an employee of an organization) in the group's directory and who can authenticate himself. Additional authorization for an organizational person may or may not be required for PKI registration. An individual who does not belong to any organization can register with some commercial certificate authorities with or without direct authentication and with or without presenting personal information. As a result, the individual receives a general use certificate. Different cases are briefly described below.

Online Certificate Request without Explicit Authentication. As in the example of a VeriSign Class 1 certificate, a CA can issue an individual certificate


Exhibit 37-2. Certificate request via e-mail or Web with no authentication. [Figure: the user's laptop, the user's ISP and e-mail account, and a certificate service provider (CSP) on the Internet, with the following flow: (1) The user enters the CSP's Web site and enters his name and e-mail address. (2) The CSP validates the name. (3) Keys are generated and the certificate is issued by the CSP. (4) The CSP sends the client an e-mail with a reference number and a URL to visit with that number. (5) The user receives the e-mail from the CSP, opens the suggested URL, and enters the reference number.]

(a.k.a. digital ID) to any EE with an unambiguous name and e-mail address. In the process of submitting the certificate request to the CA, the keys are generated on the user's computer, and the initial data for the certificate request entered by the user (username and e-mail address) is encrypted with the newly generated private key and sent to the CA. Soon the user receives by e-mail a PIN and the URL of a secure Web page at which to enter that PIN in order to complete the process of issuing the user's certificate. As a consequence, the person's e-mail address and the ability to log into this e-mail account may serve as indirect, minimal proof of authenticity. However, nothing prevents person A from registering in public Internet e-mail as person B and requesting, receiving, and using person B's certificate (see Exhibit 37-2).

Authentication of an Organizational Person. The ability of the EE to authenticate in the organization's network (e.g., e-mail, domain) or with the organization's authentication database may provide an acceptable level of authentication for PKI registration. Even the person's organizational e-mail authentication alone is much stronger, from a PKI registration perspective, than authentication with public e-mail. In this case, user authentication for PKI registration is basically delegated to e-mail or domain user authentication. In addition to corporate e-mail and domain controllers, an organization's HR database, directory servers, or other databases can be used for the user's authentication and authorization for PKI registration. In each case, an integration of the PKI registration process and the process of user authentication with corporate resources needs to be done (see Exhibit 37-3).


Exhibit 37-3. Certificate request via corporate e-mail or Web or GUI interface. [Figure: the user's laptop, the corporate PKI RA and PKI CA, and corporate authentication sources (domain controller, HR database), with the following flow: (1) The user enters the corporate PKI RA via an intranet Web or GUI client. (2) The user enters his name, e-mail address, and other information pertaining to his authentication within the corporate network. (3) The PKI RA uses the data to authenticate the user against a corporate source, in accordance with the CPS policy. (4) Upon successful authentication, an initialization request is forwarded to the PKI CA. (5) The PKI CA initiates user registration and issues authentication codes bound to the user's name. (6) The user receives the code, initiates key generation, and sends his certificate data to the PKI CA to complete the certificate issuing.]

A simplified case occurs when a certificate request is initiated by a Registration Authority upon management authorization. In this case, no initial user authentication is involved.

Individual Authentication

In the broader case, a PKI registration will require a person to authenticate potentially with any authentication bases defined in accordance with the CPS. For example, to obtain a purchasing certificate from a CA that is integrated into a B2C system, a person will have to authenticate with financial institutions, which will secure the person's Internet purchasing transactions. In many cases, an authentication gateway or server will do this, using the user's credentials (see Exhibit 37-4).

Dedicated Authentication Bases. In rare cases, when a PKI CPS requires user authentication that cannot be satisfied by the existing authentication bases, a dedicated authentication base may be created to meet all CPS requirements. For example, for this purpose a pre-populated PKI directory may be created, where each person eligible for PKI registration will be presented with a password and personal data attributes (favorite drink and


Exhibit 37-4. Certificate request via gateway interfaces.
1. User enters the third-party PKI RA via intranet Web or GUI client.
2. User enters his credentials to authenticate with his financial/payment institutions.
3. Third-party PKI RA uses the data to authenticate the user with those institutions via authentication gateways, IAW CP and CPS policy.
4. Upon successful authentication, an initialization request is forwarded to the third-party PKI CA.
5. Third-party PKI CA initiates user registration and issues authentication codes bound to the user's name.
6. User receives the code, initiates key generation, and sends his certificate data to the third-party PKI CA to complete the certificate issuing.
7. Later, in transactions with the merchant's Web site, the user uses his certificate as a credential.

Possible authentication schemes with dedicated or existing authentication bases include personal entropy, biometrics, and others.

Face-to-Face. The most reliable but most expensive method to authenticate an EE for PKI registration is face-to-face authentication. It is applied when the issued certificate will secure either high-risk, high-responsibility transactions (certificates for VPN gateways, CA and RA administrators) or transactions of high value, especially when the subscriber will authenticate and sign transactions on behalf of an organization. To obtain this type of certificate, the individual must appear in person at the dedicated corporate registration security office, present a badge and other valid identification, and sign a document obliging him to use the certificate only for its assigned purposes. Another example is a healthcare application (e.g., Baltimore-based Healthcare eSignature Authority). All the procedures, and the sets of IDs and documents that must be presented to an authentication authority, are described in the CPS.

CERTIFICATE REQUEST PROCESSING

So far we have looked at the process of EE authentication that may be required by the CPS; but from the perspective of PKI transactions, this process consists of out-of-bound transactions. Whether the RA is contacting an authentication database online or the EE is going through face-to-face authentication, there are still no PKI-specific messages. The RA only carries out the function of personal authentication of an EE before the true PKI registration of the EE can be initialized. This step can also be considered the first part of the process of initial registration with PKI. Another part of initial registration is the step of EE initialization, when the EE requests information about the PKI-supported functions and acquires the CA public key. The EE also makes itself known to the CA, generates the EE key-pairs, and creates a personal secure environment (PSE). The initial PKI registration process, among other functions, should provide assurance that the certificate request is really coming from the subject whose name is in the request, and that the subject holds the private keys that are the counterparts of the public keys in the certificate request. These and other PKI functions in many cases rely on the PKI Certificate Management Protocols3 and the Certificate Request Message Format.4 PKIX-CMP establishes a framework for most aspects of PKI management. It is implemented as a message-handling system with a general message format as presented below:3

PKIMessage ::= SEQUENCE {
    header      PKIHeader,
    body        PKIBody,
    protection  [0] PKIProtection OPTIONAL,
    extraCerts  [1] SEQUENCE SIZE (1..MAX) OF Certificate OPTIONAL
}

The various messages used in implementing PKI management functions are presented in the PKI message body3 (see Exhibit 37-5).

Initial Registration

In the PKIX-CMP framework, the first PKI message related to the EE may be considered the start of the initial registration, provided that the required out-of-bound EE authentication and CA public key installation have been successfully completed by this time. All the messages that are sent from PKI to the EE must be authenticated. The messages from the EE to PKI may or may not require authentication, depending on the implemented scheme, which includes the location of key generation and the requirements for confirmation messages.

Exhibit 37-5. Messages used in implementing PKI management functions.

PKIBody ::= CHOICE {      -- message-specific body elements
    ir       [0]  CertReqMessages,        -- Initialization Request
    ip       [1]  CertRepMessage,         -- Initialization Response
    cr       [2]  CertReqMessages,        -- Certification Request
    cp       [3]  CertRepMessage,         -- Certification Response
    p10cr    [4]  CertificationRequest,   -- PKCS #10 Cert. Req. (the PKCS #10 certification request*)
    popdecc  [5]  POPODecKeyChallContent, -- pop Challenge
    popdecr  [6]  POPODecKeyRespContent,  -- pop Response
    kur      [7]  CertReqMessages,        -- Key Update Request
    kup      [8]  CertRepMessage,         -- Key Update Response
    krr      [9]  CertReqMessages,        -- Key Recovery Request
    krp      [10] KeyRecRepContent,       -- Key Recovery Response
    rr       [11] RevReqContent,          -- Revocation Request
    rp       [12] RevRepContent,          -- Revocation Response
    ccr      [13] CertReqMessages,        -- Cross-Cert. Request
    ccp      [14] CertRepMessage,         -- Cross-Cert. Response
    ckuann   [15] CAKeyUpdAnnContent,     -- CA Key Update Ann.
    cann     [16] CertAnnContent,         -- Certificate Ann.
    rann     [17] RevAnnContent,          -- Revocation Ann.
    crlann   [18] CRLAnnContent,          -- CRL Announcement
    conf     [19] PKIConfirmContent,      -- Confirmation
    nested   [20] NestedMessageContent,   -- Nested Message
    genm     [21] GenMsgContent,          -- General Message
    genp     [22] GenRepContent,          -- General Response
    error    [23] ErrorMsgContent         -- Error Message
}

* RSA Laboratories, The Public-Key Cryptography Standards (PKCS), RSA Data Security Inc., Redwood City, California, November 1993 Release.

Source: RFC 2510.

• In the centralized scheme, initialization starts at the CA, and key-pair generation also occurs on the CA. Neither EE message authentication nor confirmation messages are required. Basically, the entire initial registration job is done on the CA, which may send to the EE a message containing the EE's PSE.
• In the basic scheme, initiation and key-pair generation start at the EE's site. As a consequence, its messages to the RA and CA must be authenticated. This scheme also requires a confirmation message from the EE to the RA/CA when the registration cycle is complete. Issuing an authentication key or reference value to the EE facilitates authentication of any message from the EE to the RA/CA. The EE will use the authentication key to encrypt its certificate request before sending it to the CA/RA.
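The protection of EE messages in the basic scheme can be illustrated with a short sketch in Python. This is not the PKIX-CMP PasswordBasedMac construction itself; it is a simplified stand-in showing how a shared initial authentication key (IAK) and reference value could be used to bind integrity protection to a certificate request before it is sent to the RA/CA. The message field names are hypothetical.

import hmac, hashlib, json

def protect_request(cert_request: dict, reference_value: str, iak: bytes) -> dict:
    # Serialize the request deterministically and compute a keyed MAC over it.
    body = json.dumps(cert_request, sort_keys=True).encode()
    protection = hmac.new(iak, body, hashlib.sha256).hexdigest()
    # The reference value lets the CA/RA look up which IAK to verify with.
    return {"senderRef": reference_value, "body": cert_request, "protection": protection}

def verify_request(message: dict, iak: bytes) -> bool:
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(iak, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["protection"])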


Proof of Possession

A group of the key PKIX-CMP messages sent by the EE in the process of initial registration includes the "ir," "cr," and "p10cr" messages (see the PKI message body above). The full structure of these messages is described in RFC 2511 and RSA Laboratories' The Public-Key Cryptography Standards (PKCS). Certificate request messages, among other information, include "publicKey" and "subject" name attributes. The EE has authenticated itself out-of-bound with the RA in the initialization phase of initial registration (see the above section on registration, identification, and authentication). Now an additional proof is required: that the EE, or the subject, is in possession of a private key that is the counterpart of the publicKey in the certificate request message. It is a proof of binding, or so-called proof of possession (POP), which the EE submits to the RA. Depending on the types of requested certificates and public/private key-pairs, different POP mechanisms may be implemented:

• For encryption certificates, the EE can simply provide the private key to the RA/CA, or the EE can be required to decrypt with its private key a value that is sent back by the RA/CA:
  — In the direct method, it will be a challenge value, generated, encrypted, and sent to the EE by the RA. The EE is expected to decrypt the value and send it back.
  — In the indirect method, the CA will issue the certificate, encrypt it with the given public encryption key, and send it to the EE. The subsequent use of the certificate by the EE will demonstrate its ability to decrypt it, hence possession of the private key.
• For signing certificates, the EE merely signs a value with its private key and sends it to the RA/CA.

Depending on implementation and policy, PKI parties may employ different schemes of PKIX-CMP message exchange in the process of initial registration (see Exhibit 37-6). An initialization request ("ir") contains, as the PKIBody, a CertReqMessages data structure that specifies the requested certificate. This structure is represented in RFC 2511 (see Exhibit 37-7). A registration/certification request ("cr") may also use a CertReqMessages data structure as the PKIBody, or alternatively ("p10cr"), a CertificationRequest.5
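For signing keys, the POP idea described above can be sketched in a few lines of Python using the widely available cryptography package. The challenge exchange shown here is a simplified illustration, not the formal POP structures of RFC 2511.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# EE side: generate the key-pair and sign a challenge supplied by the RA/CA.
ee_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
challenge = os.urandom(32)  # in practice, bound to the certificate request content
signature = ee_private_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# RA/CA side: verify the signature with the public key from the certificate request.
# verify() raises InvalidSignature if the EE does not hold the matching private key.
ee_private_key.public_key().verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())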

Exhibit 37-6. Different schemes of PKIX-CMP message exchange. The exhibit contrasts the options for initial registration:
• Out-of-bound processes (i.e., personal authentication, authorization, verification) precede the request of PKI functions and the acquisition of the CA public key (EE initialization).
• EE/PKI messages: messages from PKI to the EE are always authenticated; messages from the EE to the CA are either authenticated (using the authentication and reference codes from the CA) or not authenticated.
• Initialization of the initial registration starts at the EE, the RA, or the CA.
• Possible location of key generation: EE, RA, or CA.
• Confirmation of successful certification: the EE either confirms successful receipt of the message indicating creation of the certificate, or does not confirm.
Two possible scenarios are shown: the centralized scheme (in which the registration job is performed at the CA) and the basic authenticated scheme (in which the EE generates the keys, protects its certificate request with the initial authentication key (IAK) and reference value (REF) received from the CA, and returns a confirmation after the certificate response).

ADMINISTRATIVE AND AUTO-REGISTRATION

As we saw above, the rich PKIX-CMP messaging framework supports the inbound initial certificate request and reply, message authentication, and POP. However, it does not support some important out-of-bound steps of PKI initial registration, such as:

• Authentication of an EE and binding of its personal identification attributes to the name that is part of the registration request
• Administrative processes, such as managers' approval for PKI registration

Exhibit 37-7. Data structure specifying the requested certificate.

CertReqMessages ::= SEQUENCE SIZE (1..MAX) OF CertReqMsg

CertReqMsg ::= SEQUENCE {
    certReq   CertRequest,
    pop       ProofOfPossession OPTIONAL,  -- content depends upon key type
    regInfo   SEQUENCE SIZE(1..MAX) OF AttributeTypeAndValue OPTIONAL }

CertRequest ::= SEQUENCE {
    certReqId     INTEGER,       -- ID for matching request and reply
    certTemplate  CertTemplate,  -- Selected fields of cert to be issued
    controls      Controls OPTIONAL }  -- Attributes affecting issuance

CertTemplate ::= SEQUENCE {
    version      [0] Version               OPTIONAL,
    serialNumber [1] INTEGER               OPTIONAL,
    signingAlg   [2] AlgorithmIdentifier   OPTIONAL,
    issuer       [3] Name                  OPTIONAL,
    validity     [4] OptionalValidity      OPTIONAL,
    subject      [5] Name                  OPTIONAL,
    publicKey    [6] SubjectPublicKeyInfo  OPTIONAL,
    issuerUID    [7] UniqueIdentifier      OPTIONAL,
    subjectUID   [8] UniqueIdentifier      OPTIONAL,
    extensions   [9] Extensions            OPTIONAL }

OptionalValidity ::= SEQUENCE {
    notBefore [0] Time OPTIONAL,
    notAfter  [1] Time OPTIONAL }  -- at least one must be present

Time ::= CHOICE {
    utcTime      UTCTime,
    generalTime  GeneralizedTime }
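Where the "p10cr" alternative mentioned earlier is used, the PKIBody carries a PKCS #10 CertificationRequest instead of the CRMF structure shown in Exhibit 37-7. The sketch below, again in Python with the cryptography package, builds such a request; because the CSR is signed with the subject's own private key, the signature also serves as POP for a signing key. The subject name values are only examples.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"Example EE"),          # example value
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Corp"),  # example value
    ]))
    .sign(key, hashes.SHA256())
)
# csr.is_signature_valid corresponds to the POP check an RA/CA performs on receipt.
pem = csr.public_bytes(serialization.Encoding.PEM)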

To keep the PKIX-CMP framework functioning, the EE can generally communicate either directly with the CA or via the RA, depending on the specific implementation. However, the CA cannot support the out-of-bound steps of initial registration. That is where the role of the RA is important. In addition to the two functions above, the RA also assumes some CA or EE functionality, such as initializing the whole process of initial registration and completing it by publishing a new certificate in the directory. In the previous section on "Certificate Request Processing," we briefly mentioned several scenarios of user authentication. In the following analysis, we will not consider the first scenario (online certificate request without explicit authentication) because certificates issued in this way have very limited value.

Case Study

The following are examples of initial registration that require explicit EE authentication.

Administrative Registration.

1. An EE issues an out-of-bound request to become a PKI subscriber (either organizational or commercial third party).
2. An authorized administrator or commercial PKI clerk authenticates the EE and verifies its request. Upon successful authentication and verification, the authorized administrator submits the request to the RA administrator.
3. The RA administrator enters the EE subject name and, optionally, additional attributes into the RA to pass them to the CA. The CA will verify that the subject name is unambiguous and will issue a reference number (RN) to associate the forthcoming certificate request with the subject and an authentication code (AC) to encrypt forthcoming communications with the EE.
4. The RA administrator sends the AC and RN in a secure out-of-bound way to the EE.
5. The EE generates a signing key-pair and, using the AC and RN, establishes an inbound "ir" PKIX-CMP exchange.
6. As a result, the EE's verification and encryption certificates, along with the signing and decryption keys, are placed in the EE PSE. The EE's encryption certificate is also placed in the public directory.
7. If the keys are compromised or destroyed, the PKI administrator should start a recovery process, which quite closely repeats the steps of initial registration described here.

As we see, most of the out-of-bound steps in each individual case of administrative PKI registration are handled by administrators and clerks. Moreover, the out-of-bound distribution of the AC/RN requires high confidentiality.
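The CA-side issuance of the RN and AC in step 3 can be sketched as follows in Python; the lengths and formats are illustrative assumptions, not values mandated by any standard.

import secrets

def issue_registration_codes(subject_name: str) -> dict:
    # RN: associates the forthcoming certificate request with this subject.
    reference_number = secrets.token_hex(8)
    # AC: shared secret used to protect (e.g., MAC or encrypt) the EE's messages.
    authentication_code = secrets.token_urlsafe(16)
    # A real CA would persist this binding; returning it here stands in for that step.
    return {"subject": subject_name, "rn": reference_number, "ac": authentication_code}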

Auto-Registration.

1. Optionally (depending on the policy), an EE may have to issue an out-of-bound application to become a PKI subscriber (either organizational or commercial third party). An authorized administrator or commercial PKI clerk will evaluate the request. Upon evaluation, the EE will be defined in the organizational or commercial database as a user authorized to become a PKI subscriber.
2. The EE enters his authentication attributes online in the predefined GUI form.
3. The form processor (the background process of the GUI form) checks whether the EE is authorized to become a PKI subscriber and then tries to authenticate the EE based on the entered credentials.
4. Upon successful authentication of the EE, the subsequent registration steps, like the previous one, are performed automatically.

5. As a result, the EE's verification and encryption certificates, along with the signing and decryption keys, are placed in the EE PSE. The EE's encryption certificate is also placed in the public directory.
6. If the keys are compromised or destroyed, the EE can invoke a recovery process via a GUI form without any administrator's participation.

Comparing the two scenarios, we can see an obvious advantage to auto-registration. It is substantially a self-registration process. From an administration perspective, it simply requires authorizing the EE to become a PKI subscriber. After that, only exceptional situations may require a PKI administrator's intervention.

Authentication Is a Key Factor

We may assume that in both scenarios described above, all the inbound communications follow the same steps of the same protocol (PKIX-CMP). The difference is in the out-of-bound steps and, more specifically, in the user (EE) authentication. Generally, possible authentication scenarios are described in the section on "Registration, Identification, and Authentication." Most of those scenarios (except face-to-face scenarios) may be implemented in either the administrative or the auto-registration stage. The form, sources, and quality of authentication data should be described in the CPS. The stronger the authentication criteria for PKI registration, the more trust the relying parties or applications can place in the issued certificates.

There may be explicit and implicit authentication factors. In the administrative registration case above, authentication of the organizational user may be totally implicit, because his PKI subscription may have been authorized by his manager, and the AC/RN data may have been delivered via organizational channels with good authentication mechanisms and access control. On the other hand, registration with a commercial PKI may require an EE to supply personal information (SSN, DOB, address, bank account, etc.), which may be verified by a clerk or administrator.

Auto-registration generally accommodates verification of all the pieces of personal information. If it is implemented correctly, it may help to protect subscribers' privacy, because no personal information will be passed via clerks and administrators. In both the organizational and commercial PKI registration cases, it may even add additional authentication factors: the ability of the EE/user to authenticate himself online with his existing accounts with one or many authentication bases within one or many organizations.
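A minimal sketch, in Python, of the form-processor logic from step 3 of the auto-registration scenario, assuming the corporate authentication base is an LDAP directory reachable with the ldap3 package. The eligibility attribute name is a hypothetical placeholder, not a standard schema element.

from ldap3 import Server, Connection

def process_registration_form(ldap_host: str, user_dn: str, password: str) -> bool:
    server = Server(ldap_host, use_ssl=True)
    try:
        # A successful bind with the supplied credentials authenticates the EE.
        conn = Connection(server, user=user_dn, password=password, auto_bind=True)
    except Exception:
        return False  # authentication failed; no registration is initiated
    # Authorization: check a directory attribute flagging PKI eligibility (hypothetical name).
    conn.search(user_dn, "(objectClass=*)", attributes=["pkiSubscriberEligible"])
    attrs = conn.entries[0].entry_attributes_as_dict if conn.entries else {}
    values = attrs.get("pkiSubscriberEligible") or ["FALSE"]
    conn.unbind()
    # If True, the subsequent registration steps would be triggered automatically.
    return str(values[0]).upper() == "TRUE"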

CONCLUSION

For most common-use certificates, which do not assume top fiscal or the highest legal responsibility, an automated process of PKI registration may be the best option, especially for large-scale PKI applications and for a geographically dispersed subscriber base. Improvements to this technology (mitigating possible security risks, enlarging online authentication bases, refining the methods of online authentication, and making the entire automated process more reliable) will allow an organization to rely on it when registering subscribers for more expensive certificates that assume more responsibility.

For user registration for certificates carrying very high responsibility and liability, the process will probably remain manual, with face-to-face appearance of the applicant in front of the RA and more than one proof of his identity. It will be complemented by application forms (from the applicant and his superior) and verification (both online and offline) with appropriate authorities. The number of certificates of this type is not high and thus does not create a burden for the RA or another agency performing its role.

References

1. Chokhani, S. and Ford, W., Internet X.509 Public Key Infrastructure: Certificate Policy and Certification Practices Framework, RFC 2527, March 1999.
2. VeriSign Certification Practice Statement, Version 2.0, August 31, 2001.
3. Adams, C. and Farrell, S., Internet X.509 Public Key Infrastructure: Certificate Management Protocols, RFC 2510, March 1999.
4. Myers, M., Adams, C., Solo, D., and Kemp, D., Certificate Request Message Format, RFC 2511, March 1999.
5. RSA Laboratories, The Public-Key Cryptography Standards (PKCS), RSA Data Security Inc., Redwood City, CA, November 1993 Release.

ABOUT THE AUTHOR

Alex Golod, CISSP, is an infrastructure specialist for EDS in Troy, Michigan.


Domain 6

Computer, System, and Security Architecture

Domain 6 in this volume contains only one section, comprising three chapters. Section 6.1 addresses the principles of computer, system, and security architecture.

Chapter 38 considers intrusion detection systems as they relate to an overall security architecture. First, intrusion detection systems are defined. Then the important concept of defense-in-depth is discussed. The characteristics of a good intrusion detection system are described, along with a methodology for choosing and implementing one. The appendices present valuable information, such as definitions and how to conduct a cost/benefit analysis.

Chapter 39 focuses directly on security architecture. It examines the components of information security architectures and why all the technology is required in today's enterprise. Security architecture is, of course, based on a security policy. This leads to the development of an infrastructure that can implement the policy. What comprises this infrastructure and how it is created is thoroughly discussed in this comprehensive chapter.

Chapter 40 touches on an interesting aspect of architecture: virtual computers. Specifically, the use of virtual network computing (VNC) provides a bridge that makes it possible to access a dissimilar system where the usual options are limited or nonexistent. The author describes what VNC is and how its services are provided. The security provisions of VNC are thoroughly discussed; and although some of the description tends to be fairly technical, the explanations are clearly presented. Although there are several security issues related to the use of VNC, solutions to address them are outlined in detail.


Chapter 38

Security Infrastructure: Basics of Intrusion Detection Systems

Ken Shaurette, CISSP, CISA

An intrusion detection system (IDS) inspects all inbound and outbound network activity. Using signatures and system configuration, it can be set up to identify suspicious patterns that may indicate a network or system attack. Unusual patterns, or patterns that are generally known to be attack signatures, can signify someone attempting to break into or compromise a system. The IDS can be a hardware- or software-based security service that monitors and analyzes system events for the purpose of finding, and providing real-time or near-real-time warning of, events that are identified by the configuration to be attempts to access system resources in an unauthorized manner (see Exhibit 38-1).

There are many ways that an IDS can be categorized:

• Misuse detection. In misuse detection, the IDS analyzes the information it gathers and compares it to databases of attack signatures. To be effective, this type of IDS depends on attacks that have already been documented. Like many virus detection systems, misuse detection software is only as good as the database of attack signatures against which it compares packets.
• Anomaly detection. In anomaly detection, a baseline, or normal, is established. This consists of things such as the state of the network's traffic load, breakdown, protocol, and typical packet size. With anomaly detection, sensors monitor network segments to compare their present state against the baseline in order to identify anomalies.


Exhibit 38-1. Definitions.

To better understand the requirements and benefits of an intrusion detection system, it is important to understand and be able to differentiate between some key terms. Some of that terminology is outlined below.

Anomaly — This is a technique used for identifying intrusion. It consists of determining deviations from normal operations. First, normal activity is established that can be compared to current activity. When current activity varies sufficiently from previously set normal activity, an intrusion is assumed.

Audit Logs — Most operating systems can generate logs of activity, often referred to as audit logs. These logs can be used to obtain information about authorized and unauthorized activity on the system. Some systems generate insufficient or difficult-to-obtain information in their audit logs and are supplemented with third-party tools and utilities (e.g., Top Secret for MVS). The term audit as it pertains to these logs is generally associated with the process of assessing the activity contained in the logs. Procedures should exist to archive the logs for future review, as well as to review security violations in the logs for appropriateness. As it pertains to intrusion detection, an audit approach to detection is usually based on batch processing of after-the-fact data.

False Negative/Positive — These are the alerts that may not be desired. Not identifying an activity that actually was an intrusion would be classified as a false negative. Crying wolf on activity that is not an actual intrusion would be a false positive.

File Integrity Checking (FIC) — File integrity checking employs a cryptographic mechanism to create a signature of each file to be monitored. The signature is stored for later matching against future signatures of the same file. When a mismatch occurs, the file has been modified or deleted, and it must be determined whether intrusive activity has occurred. FIC is valuable for establishing a "golden" unmodified version of critical software releases or system files.

Hackers — The popular press has established this term to refer to individuals who gain unauthorized access to computer systems for the purpose of stealing and corrupting data. It is used to describe a person who misuses someone else's computer or communications technology. Hackers maintain that the proper term for such individuals is cracker, and they reserve the term hacker for people who look around computer systems to learn, with no intent to damage or disrupt.

Honeypot — A honeypot is a system or file designed to look like a real system or file. It is designed to be attractive to the attacker in order to learn the attacker's tools and methods. It can also be used to help track the hacker, to determine the hacker's identity, and to help find vulnerabilities. It is used to help keep an attacker off the real production systems.

Intrusion Detection Systems — By definition, an intrusion detection system consists of the process of detecting unauthorized use of, or attack upon, a computer or network. An IDS is software or hardware that can detect such misuse. Attacks can come from the Internet, authorized insiders who misuse privileges, and insiders attempting to gain unauthorized privileges. There are basically two kinds of intrusion detection, host-based and network-based, described below. Some products have become hybrids that combine features of both types of intrusion detection.


Exhibit 38-1. Definitions (Continued).

IDS System Types

Host Based — This intrusion detection involves installing pieces of software on the host to be monitored. The software uses log files and system auditing agents as sources of data. It looks for potentially malicious activity on the specific computer on which it is installed. It involves not only watching traffic in and out of the system but also integrity checking of the files and watching for suspicious processes and activity. There are two major types: application specific and OS specific.
  OS Specific — Based on monitoring OS log files and audit trails.
  Application Specific — Designed to monitor a specific application server, such as a database server or Web server.

Network Based — This form of intrusion detection monitors and captures traffic (packets) on the network. It uses the traffic on the network segment as its data source. It involves monitoring the packets on the network as they pass by the intrusion detection sensor. A network-based IDS usually consists of several single-purpose hosts that "sniff" or capture network traffic at various points in the network and report on attacks based on attack signatures.

Incident Response Plan — This is the plan that has been set up to identify what is to be done when a system is suspected of being compromised. It includes the formation of a team that will provide the follow-up on the incident and the processes that are necessary to capture forensic evidence for potential prosecution of any criminal activity.

Penetration Testing — Penetration testing is the act of exploiting known vulnerabilities of systems and users. It focuses on the security architecture, system configuration, and policies of a system. Penetration tests are often purchased as a service from third-party vendors to regularly test the environment and report findings. Companies can purchase the equivalent software used by these service organizations to perform the penetration tests themselves. Penetration testing and vulnerability analysis (see below) are often confused and used by people to mean the same thing; they are differentiated technically by whether you are attempting to penetrate (access) versus simply reporting on vulnerabilities (testing for existence), such as the presence or absence of security-related patches. Some penetration test software can identify an apparent vulnerability and provide the option of attempting to exploit it for verification.

Vulnerability Scanner — This tool collects data and identifies potential problems on hosts and network components. Scanners are the tools often used to do a vulnerability analysis and detect system and network exposures. A scanner can identify such things as systems that do not have current patch levels, software and installation bugs, or poor configuration of topology and protocols. A scanner does not enforce policy or fix exposures; it purely identifies and reports on them.

Vulnerability Analysis (also called vulnerability assessment) — Vulnerability analysis is the act of checking networks or hosts to determine whether they are susceptible to attack, without attempting to exploit the vulnerability. The process consists of scanning servers or networks for known vulnerabilities or attack signatures to determine whether security mechanisms have been implemented with proper security configuration, or whether poor security design can be identified. A form of vulnerability assessment would be to use a product to scan sets of servers for exposures that it can detect.


• Network-based system. In a network-based system, or NIDS, the IDS sensors evaluate the individual packets that are flowing through a network. The NIDS detects malicious packets that are designed by an attacker to be overlooked by the simplistic filtering rules of many firewalls.
• Host-based system. In a host-based system, the IDS examines the activity on each individual computer or host. The kinds of items that are evaluated include modifications to important system files, abnormal or excessive CPU activity, and misuse of root or administrative rights.
• Passive system. In a passive system, the IDS detects a potential security breach, logs the information, and signals an alert. No direct action is taken by the system.
• Reactive system. In a reactive system, the IDS can respond in several ways to the suspicious activity, such as by logging a user off the system, closing down the connection, or even reprogramming the firewall to block network traffic from the suspected malicious source.

DEFENSE-IN-DEPTH

Hacking is so prevalent that it is wrong to assume that it will not happen. Similar to insurance statistics, "the longer we go without being compromised, the closer we are to an incident." You do not buy flood insurance in the 99th year before the 100-year flood. Although keeping hackers away from your company data is virtually impossible, much can be done to reduce vulnerabilities. A hacker has the easiest task; they need to find only one open door. As the defenders, a company must check every lock and monitor every hallway.

A company will implement a variety of sound security mechanisms such as authentication, firewalls, and access control; but there is still the potential that systems are unknowingly exposed to threats from employees and non-employees (from inside and from outside). Layering security, or using generally accepted practices for what is today often called defense-in-depth, requires more. The complexity of the overall corporate environment and the disparity of knowledge among security professionals subject implemented protection mechanisms to improper configuration, poor security design, or malicious misuse by trusted employees or vendor/contract personnel.

Today's intrusions are attacks that exploit the vulnerabilities inherent in operating systems such as NT or UNIX. Vulnerabilities in network protocols and operating system utilities (e.g., telnet, ftp, traceroute, snmp, smtp) are used to perform unauthorized actions such as gaining system privileges, obtaining access to unauthorized accounts, or rerouting network traffic. The hacker preys on systems that:

• Do not lock out users after unsuccessful log-in attempts
• Allow users to assign dictionary words as passwords
• Lack basic password content controls

• Define generic user IDs and assign password defaults that do not get changed
• Do not enforce password aging

Two-factor authentication is still expensive and slow to gain widespread adoption in large organizations. Using two factors, something you have and something you know, is one of the best methods to improve basic access control and thwart many simple intrusions.

A company that does not have a comprehensive view of where its network and system infrastructure stands in terms of security lacks the essentials to make informed decisions. This is something that should be resolved with the cooperation and support of all of a company's IS technology areas. A baseline identifying gaps or places for improvement must be created. An IDS requirements proposal, or any other security improvement proposal, will require coordination with all infrastructure technicians to be effective. Companies need to have a dynamic information security infrastructure.

While no organization relishes the idea of a system intrusion, there is some comfort that, with the right tools, it is possible to reduce exposures and vulnerabilities, but not necessarily eliminate all of the threats. There will always be some exposure in the environment. It is virtually impossible to remove them all and still have a functional system. However, measures to reduce the impact of compromise can be put in place, such as incident response (what to do when), redundancy, traps (honeypots), prosecution (forensic evidence), and identification (logging). For it to be easier to track a hacker's activity, proper tools are needed to spot and plug vulnerabilities as well as to capture forensic evidence that can be used to prosecute the intruder.

Intrusion detection systems are complex to implement, especially in a large environment. They can generate enormous quantities of data and require significant commitments in time to configure and manage properly. As such, an IDS has limitations that must be considered when undertaking selection and deployment. Even so, intrusion detection is a critical addition to an organization's security framework; but do not bother without also planning at least rudimentary incident response.

WHAT TO LOOK FOR IN AN IDS

Vendors are searching for the next generation, a predictive IDS: one that can flag an attack without being burdened by the weight of its own logs and can operate worry-free with minimal false alarms. There are many shapes, sizes, and ways to implement an IDS. A rule-based model relies on preset rules and attack signatures to identify what to alert on and review. Anomaly-based systems build their own baselines over time by generating a database of usage patterns; when usage is outside the identified norm, an alert or alarm is set off.
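The baseline idea can be made concrete with a small Python sketch: collect a history of a usage metric (connections per minute, CPU utilization, etc.) and raise an alarm when the current observation drifts too far from the norm. The three-standard-deviation threshold is an illustrative choice, not a recommendation from any particular product.

from statistics import mean, stdev

def build_baseline(history):
    # Normal activity is summarized by its average and spread.
    return mean(history), stdev(history)

def is_anomalous(current, baseline, threshold=3.0):
    mu, sigma = baseline
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Example: connections per minute observed over a quiet period vs. right now.
baseline = build_baseline([112, 98, 105, 120, 101, 99, 118, 107])
print(is_anomalous(240, baseline))  # True -- worth an alert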

In addition, placement of an IDS is important, especially when it comes to determining host- or network-based detection or the need for both. A typical weakness of rule-based systems is that they require frequent updates and risk missing new or yet-unidentified attack patterns. An anomaly system attempts to solve this but tends to be plagued by false alarms. Often, companies install and maintain the host-based IDS on only production systems. Test hosts are often the entry point for an attacker and, as such, require monitoring for intrusion as well.

The next generation of IDS will correlate the fact that an intrusion has occurred, is occurring, or is likely to occur. It will use indicators and warnings, network monitoring and management data, known vulnerabilities, and threats to arrive at a recommended recovery process.

Some intrusion detection systems introduce the ability to keep a real-time eye on what is happening on the network and operating systems. Many of the leading products offer similar features, so the choice of product can boil down to the fine details of how well the product will integrate into a company's environment as well as meet the company's incident response procedures. For example, one vendor's product may be a good fit for network detection in a switched network but not provide any host intrusion detection, or it may miss traffic on other segments of the network.

For intrusion detection to be a useful tool, the network and all of the hosts under watch should have a known security state. A company must first be willing to apply patches for known vulnerabilities. Most of the vulnerability assessment tools can find the vulnerabilities, and these are what the intrusion detection tools monitor for exploitation. The anomaly-based system relies on the fact that most attacks fit a known profile. Usually this means that, by the time the IDS can detect an attack, the attack is preventable and patches are available. Security patches are a high priority among most if not all product vendors, and they appear rapidly if vulnerabilities are actively exploited. Therefore, it might be more effective to first discover the security posture of the network and hosts, bring them up to a base level of security, and identify maintenance procedures to stay at that desired level of security. Once that is accomplished, IDS can more effectively contribute to the overall security of the environment. It becomes a layer of the defense that has value.

GETTING READY

Although many organizations are not aware of them, there are laws to address intrusion and hacking. There are an even greater number of organizations that are not prepared to take advantage of the laws. For example, the Federal Computer Fraud and Abuse Act was updated in 1996 to reflect problems such as viruses sent via e-mail (Melissa, Bubble-Boy).

In fact, the law was used to help prosecute the Melissa virus author. In addition, this same law addresses crimes of unauthorized access to any computer system, which would include non-virus-related intrusions. DoS (denial-of-service) attacks have become very common, but they are no joking matter. In the United States, they can be a serious federal crime under the National Infrastructure Protection Act of 1996, with penalties that include years of imprisonment. Many countries have similar laws. For more information on computer crimes, refer to www.usdoj.gov/criminal/cybercrime/compcrime.html.

Laws are of little help if a company is unable to recognize that an event is occurring, react to it, and produce forensic evidence of the crime. Forensic computer evidence is required for prosecution of a crime. Not every system log is appropriate as forensic evidence. Logs must maintain very specific qualities and should document system activity and log-in/log-out type activity for all computers on the network. These allow a prosecutor to identify who has accessed what and when. Also important is the process for gathering and protecting any collected information (the chain of custody) in order for the information to retain forensic value. This process should be part of a comprehensive incident response plan. An IDS without intrusion response, including an incident response plan, is essentially reduced in value; the IDS effectively becomes merely another set of unused log data.

Even more important than prosecution as a reason for maintaining forensic data, the company's network technicians would use the forensic evidence to determine how a hacker gained access in order to close the hole. The data can also be necessary to determine what was done while the attacker was inside the network, and it can be used to help mitigate the damage. In many cases, companies are still rarely interested in the expense, effort, and publicity involved in prosecution.

A company must perform a thorough requirements analysis before selecting an intrusion detection system strategy and product. A return on investment (ROI) would be difficult to calculate; but in any case, costs and benefits need to be identified and weighed. Refer to Exhibit 38-2 for a discussion of cost/benefit analyses (CBA) and ROI. A solution must be compatible with the organization's network infrastructure, host servers, and overall security philosophy and security policies. There can be a big variance in resource (especially human) requirements among the different tools and methodologies. Both network and server teams must work together to analyze the status of an organization's security posture (i.e., systems not patched for known vulnerabilities, weak password schemes for access control, poor control over root or administrative access). There may be many areas of basic information security infrastructure that require attention before IDS cost can be justified. The evaluations could indicate that simply selecting and implementing another security technology (IDS) would be wasted money.

Exhibit 38-2. Cost/Benefit analysis — return on investment.

Risk Management to Improve Enterprise Security Infrastructure

Effective protection of information assets identifies the information used by an area and assigns primary responsibility for its protection to the management of the respective functional area that the data supports. These functional area managers can accept the risk to data that belongs to them, but they cannot accept exposures that put the data of other managers at risk.

Every asset has value. Performing an analysis of business assets, and of the impact of any loss or damage resulting from their loss, is necessary to determine the benefits of any actual dollar or human time expenditures to improve the security infrastructure. A formal quantitative risk analysis is not necessary, but generally assessing the risks and taking actions to manage them can pay dividends. It will never be possible to eliminate all risks; the trick is to manage them. Sometimes it may be desirable to accept the risks, but it is a must to identify acceptance criteria. The most difficult part of any quantifiable risk management is assigning value and annual loss expectancy (ALE) to intangible assets like a customer's lost confidence, potential embarrassment to the company, or various legal liabilities.

To provide a risk analysis, a company must consider two primary questions:
• What is the probability that something will go wrong (the probability of one event)?
• What is the cost if something does go wrong (the exposure of one event)?

Risk is determined by getting answers to the above questions for various vulnerabilities and assessing the probability and impact of each vulnerability. A quantifiable way to determine the risk and justify the cost associated with the purchase of an IDS or any other security software, or costs associated with mitigating risks, is as follows:

• Risk is the probability times the exposure (risk = probability × exposure). Cost justification is the risk minus the cost to mitigate the vulnerabilities (justification = risk minus cost of security solution). If the justification is a positive number, then the expenditure is cost-justified. For example, if the potential loss (exposure) on a system is $100,000, and the chance that the loss will be realized (probability) is about once in every ten years, the annual frequency rate (AFR) would be 1/10 (0.10). The risk (ALE) would be $100,000 × 0.10 = $10,000 per year. If the cost is $5000 to minimize the risk by purchasing security software, the cost justification would be $10,000 less $5000 = $5000, with payback in six months.
• Using a less quantifiable method, it would be possible to adopt baseline security measures used in other similar-sized companies, including other companies in the same industry. Setting levels of due diligence that are accepted in the industry would then require implementation of controls that are already proven, generally used, and founded on the "standard of due care." For example, for illustration purposes, say that 70 percent of other companies the size of your company are implementing intrusion detection systems and creating incident response teams. Management would be expected to provide similar controls as a "standard of due care." Unless it can be clearly proven that the implementation costs of such measures are above the company's expected risks and loss expectancies, management would be expected to provide due diligence in purchasing and implementing similar controls.
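The arithmetic in Exhibit 38-2 is simple enough to capture in a few lines of Python; the figures below repeat the exhibit's own example and are not real loss data.

def annualized_loss_expectancy(exposure, annual_frequency):
    # Risk (ALE) = cost if the event occurs x expected occurrences per year.
    return exposure * annual_frequency

def cost_justification(ale, annual_cost_of_control):
    # A positive result means the control is cost-justified.
    return ale - annual_cost_of_control

ale = annualized_loss_expectancy(exposure=100_000, annual_frequency=0.10)  # $10,000 per year
print(cost_justification(ale, annual_cost_of_control=5_000))               # 5000.0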


A company may already own technologies that are not fully implemented or properly supported, that could provide compensating controls, and for which cost could be more easily justified. When it comes to a comprehensive IDS, integration between the server and network environments will be critical. A simple decision, such as whether the same tool should provide both network and host IDS, is critical in the selection process and eliminates from consideration many tools that are unable to provide both. Even simply identifying integration requirements between operating systems will place limitations and requirements on technology selection.

Does a company want to simply detect an intrusion, or is it desirable to also track the activity, such as in a honeypot? Honeypots are designed to be compromised by an attacker. Once compromised, they can be used for a variety of purposes, such as an alert, an intrusion detection mechanism, or a deception. Honeypots were first discussed in a couple of very good books: Cliff Stoll's Cuckoo's Egg1 and Bill Cheswick's An Evening with Berferd.2 These two accounts used a capture-type technology to gather an intruder's sessions. The sessions were then monitored in detail to determine what the intruder was doing.

STEPS FOR PROTECTING SYSTEMS

To continue improving the process of protecting the company systems, three fundamental actions are required.

Action 1

The company must demonstrate a willingness to commit resources (money, people, and time) to patching the basic vulnerabilities in current systems and networks, as well as to prioritize security for networks and hosts. Making use of an IDS goes way beyond simply installing the software and configuring the sensors and monitors. It means having the necessary resources, both technical and human, to customize, react, monitor, and correct.

Nearly all systems should meet basic levels of security protection. Simple standards such as password aging, improved content controls, and elimination of accounts with fixed passwords or default passwords are a step in that direction. It is also critical that all network and operating systems have current security patches installed to address known vulnerabilities and that maintenance procedures exist to keep systems updated as new alerts and vulnerabilities are found.
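Several of the Action 1 items can be checked mechanically. The Python sketch below scans account records for the basic weaknesses mentioned above; the record format, the sample default-password list, and the 90-day aging limit are assumptions for illustration, not values from any standard or product.

KNOWN_DEFAULT_PASSWORDS = {"password", "changeme", "admin", "welcome1"}  # illustrative list

def audit_account(account, max_password_age_days=90):
    findings = []
    if account.get("password", "").lower() in KNOWN_DEFAULT_PASSWORDS:
        findings.append("default or fixed password still in use")
    if account.get("password_age_days", 0) > max_password_age_days:
        findings.append("password aging not enforced")
    if account.get("generic", False):
        findings.append("generic (shared) user ID")
    if not account.get("lockout_enabled", True):
        findings.append("no lockout after failed log-in attempts")
    return findings

print(audit_account({"user": "oper1", "password": "changeme", "password_age_days": 400,
                     "generic": True, "lockout_enabled": False}))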

Action 2

All systems and network administrators must demonstrate the security skills and focus to eliminate basic vulnerabilities by maintaining and designing basic secure systems, which, when done poorly, account for the majority of attacks. Nearly all system and network administrators want to know how to secure their systems, but in many cases they have never received actual security training or been given security as a priority in their system design. Often, security is never identified as a critical part of job responsibility. It should be included in employee job descriptions and referenced during employee performance reviews. However, before this can be used as a performance review measurement, management must provide staff with the opportunity (time away from the office) and the priority to make security training part of job position expectations. Training should be made available in such topics as system security exposures, vulnerability testing, common attacks and solutions, and firewall design and configuration, as well as other general security skills. For example, the effectiveness of any selected IDS tool depends on who monitors the console: a skilled security expert or an inexperienced computer operator. Even a fairly seasoned security expert may not know how to respond to every alert.

Action 3

Once security expectations are in place, tasks must be given proper emphasis. Staff members must recognize that security is part of their job and that they must remain properly trained in security. Security training should receive the same attention as the training they receive on the system and network technologies they support. Security must be given time and resources similar to other aspects of the job, especially defining and following maintenance procedures so that systems remain updated and secure. Network and system administrators will need to stay current with the technology they support. Often they will attend training to stay current, but not to understand security, because it is not sufficiently recognized as important to their job responsibilities.

These tasks will not stop all attacks, but they will make a company a lot less inviting to any criminal looking for easy pickings. Typical attackers first case their target. When they come knocking, encourage them to go knocking on your neighbor's door, someone who has not put security measures in place. Putting the fundamentals in place to monitor and maintain the systems will discourage and prevent common external intrusion attempts as well as reduce internal incidents.

TYPES OF INTRUSION

Intrusions can be categorized into two main classes:

• Misuse intrusions are well-defined attacks on known weak points of a system. They can be detected by watching for certain actions performed on certain objects. A set of rules determines what is considered misuse.
• Anomaly intrusions are based on the observation of deviation from normal system activity. An anomaly is detected by building a profile of the monitored system and then using some methodology for detecting significant deviations from this profile.

Misuse intrusions can be detected by doing pattern matching on audit-trail information because they follow well-defined patterns. For example, examining log messages of password failures can catch an attempt to log on or set user ID (su) to root from unauthorized accounts or addresses.

Anomalous intrusions are a bit more difficult to identify. The first difficulty is identifying what is considered normal system activity. The best IDS would be able to learn system and network traffic, correlate it to the time of day and day of week, and recognize changes. Exploitation of a system's vulnerabilities usually involves the hacker making abnormal use of the system; therefore, certain kinds of system activity would deviate from normal patterns of system usage and be flagged as potential intrusion situations. To detect an anomaly intrusion, it is necessary to observe significant deviations of system behavior from the baseline set in a profile. A quantitative measure of normal activity can be identified over a period of time by measuring the daily activity of a system or network. For example, the average or a range of normal CPU activity can be measured and matched against daily activity. Significant variations in the number of network connections, an increase or decrease in the average number of processes running in the system per minute, or a sudden sustained spike in CPU utilization when it does not normally occur could signify intrusion activity. Each anomaly or deviation may signal the symptoms of a possible intrusion. The challenge is mining the captured data, correlating one element of data to other captured data, and determining what the two together might signify.
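The misuse (pattern-matching) side of this can be sketched in Python as a scan of audit-trail lines against a small signature set; the log formats and signatures below are illustrative, not those of any particular operating system or IDS product. The anomaly side was sketched earlier in the chapter with the baseline example.

import re

# Illustrative signatures for the password-failure example in the text.
SIGNATURES = {
    "failed su to root": re.compile(r"su: FAILED su for root by (?P<user>\S+)"),
    "repeated log-in failures": re.compile(r"authentication failure.*user=(?P<user>\S+)"),
}

def scan_audit_trail(lines):
    alerts = []
    for line in lines:
        for name, pattern in SIGNATURES.items():
            match = pattern.search(line)
            if match:
                alerts.append((name, match.group("user"), line.strip()))
    return alerts

sample = [
    "Nov 14 07:55:01 host su: FAILED su for root by mallory",
    "Nov 14 07:55:20 host sshd: authentication failure; user=mallory",
]
print(scan_audit_trail(sample))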

CHARACTERISTICS OF A GOOD INTRUSION DETECTION SYSTEM

There are several issues an IDS should address. Regardless of the mechanism on which it is based, it should include the following:

• Run continually with minimal human interaction. It should run in the background. The internal workings should be able to be examined from outside, so that it is not a black box.
• Fault tolerance is necessary so that it can survive a system crash and not require that its knowledge base be rebuilt at restart.
• It must be difficult to sabotage. The system should be self-healing in the sense that it should be able to monitor itself for suspicious activities that might signify attempts to weaken the detection mechanism or shut it off.
• Performance is critical. If it creates performance problems, it will not get used.
• Deviations from normal behavior need to be observed.
• The IDS must be easy to configure to the system it is monitoring. Every system has a different usage pattern, and the defense mechanism should adapt easily to these patterns.
• It should be like a chameleon, adapting to its environment and staying current with the system as it changes: new applications added, upgrades, and any other modifications. The IDS must adapt to the changes of the system.
• To be effective, an IDS must have built-in defense mechanisms, and the environment around it should be hardened to make it difficult to fool.

Watch Out for Potential Network IDS Problems

ACIRI (the AT&T Center for Internet Research at the International Computer Science Institute) does research on the Internet architecture and related networking issues. Its research has identified that a problem for a NIDS is its ability to detect a skilled attacker who desires to evade detection by exploiting the uncertainty or ambiguity in the traffic's data stream. One way to address this problem introduces a network-forwarding element called a traffic normalizer. The normalizer needs to sit directly in the path of traffic coming into a site. Its purpose is to modify the packet stream to eliminate potential ambiguities before the monitor sees the traffic. Doing this removes evasion opportunities.

There are a number of trade-offs in designing a normalizer. Mark Handley and Vern Paxson discuss these in more detail in their paper titled "Network Intrusion Detection: Evasion, Traffic Normalization, and End-to-End Protocol Semantics." In the paper they emphasize the important question of the degree to which normalizations can undermine end-to-end protocol semantics. Also discussed are the key practical issues of "cold start" and attacks on the normalizer. The paper shows how to develop a methodology for systematically examining the ambiguities present in a protocol based on walking the protocol's header. Refer to the notes at the end of this chapter to find more information on the paper.

METHODOLOGY FOR CHOOSING AND IMPLEMENTING AN IDS

To choose the best IDS, it is necessary to evaluate how well the tool can recognize the two main classes of intrusion. Specific steps should be followed to make the best selection. Some of the steps are:

1. Form a team representing impacted areas, including network and server teams.
2. Identify a matrix of intrusion detection requirements and prioritize them, including platform requirements, detection methodology (statistical or real-time), cost, resource commitments, etc.
3. Determine preferences for purchasing IDS software versus using a managed service.
4. Determine if the same product should provide both network- and host-based IDS.
5. Formulate questions that need to be answered about each product.
6. Diagram the network to understand what hosts, subnets, routers, gateways, and other network devices are part of the infrastructure.
7. Establish priority for security actions such as patching known vulnerabilities.
8. Identify IDS sensor locations (critical systems and network segments).
9. Identify and establish monitoring and maintenance policies and procedures.
10. Create an intrusion response plan, including creation of an incident response team.

SUSPICION OF COMPROMISE

Before doing anything, define an incident. Incident handling can be very tricky, politically charged, and sensitive. The IDS can flag an incident, but the next step is determining what first-level support will do when an alert is received, or identifying what to do in case of a real incident. This is critical to the system reaching its full value. An IDS can be configured to take an action based on the different characteristics of the types of alerts, their severity, and the targeted host.

In some cases it may be necessary to handle an incident like a potential crime. The evidence must be preserved much as at a police crime scene. Like a police crime scene that is taped off to prevent evidence contamination, any logs that prove unauthorized activity and what was actually done must be preserved. Inappropriate actions by anyone involved can cause the loss of valuable forensic evidence, perhaps even tip off the intruder, and cause a bigger problem.

An incident response program can be critical to proper actions and provide consistency when reacting to intrusion activity. Without documented procedures, the system and network administrators risk taking the wrong actions when trying to fix what might be broken, and contaminating or even eliminating evidence of the incident. The following outlines considerations for incident response:

• Scream loudly and get hysterical — your system has been compromised.
• Brew up a few pots of strong coffee.
• Actually, you need to remain calm — don't hurry.
• Create a documented incident handling procedure, including options if possible.
• Notify management and legal authorities as outlined in the incident response plan.
• Apply the need-to-know security principle — only inform those personnel with a need to know. The fewer people who are informed about the incident, the better; but be sure to prevent rumors by supplying enough information to the right people.
• Use out-of-band communications and avoid e-mail and other network-based communication channels — they may be compromised.
• Determine the items you need to preserve as forensic evidence (i.e., IDS log files, the attacked system's hard drive, a snapshot of system memory, and protection and safety logs).
• Take good notes — the notes may be needed as evidence in a court of law. Relying on your memory is not a good idea. This will be a stressful time, and facts may become fuzzy after everything calms down.
• Back up the systems; collect forensic evidence and protect it from modification. Ensure a chain of custody for the information.
• Contain the problem. Pull the network cable? Is shutting off the system appropriate at this point? Is rebooting the system appropriate? It might not be!
• Eradicate the problem and get back to business.
• Use what has been learned from the incident to apply modifications to the process and improve the incident response methodology for future situations.

SUMMARY

Before doing anything, define an incident. Know what you are detecting in order to know what you are handling. Every year thousands of computers are illegally accessed because of weak passwords. How many companies have users who are guilty of any of the following?

• Writing down a password on a sticky note placed on or near the computer
• Using a word found in a dictionary; that's right, a dictionary — any dictionary!
• Using a word from a dictionary followed by two or fewer numerals
• Using the names of people, places, pets, or other common items
• Sharing your password with someone else

• Using the same password for more than one account, and for an extended period of time
• Using the default password provided by the vendor

Chances are, like the majority of companies, the answer is yes to one or more of the above. This is a more basic flaw in the overall security infrastructure and requires attention. The problem is that hackers are aware of these weaknesses as well and target those who do not take the correct precautions. This makes systems very vulnerable, and more than simple technology is necessary to correct these problems.

If a company's current security posture (infrastructure) is unacceptable, it must be improved before additional security technology can provide much added benefit. Performing an assessment of the present security posture provides the information necessary to adequately determine a cost–benefit analysis or return on investment. Implementing all the best technology does not eliminate the exposure introduced by the basic problems described above. A team should be created to identify current protection mechanisms as well as other measures that could be taken to improve the overall security infrastructure for the company. Immediate benefits could be realized through enhancements to procedures, security awareness, and better implementation of existing products (access control and password content) with minimum investment.

The overall security improvement assessment could include a project to select and implement an intrusion detection system (IDS) and an incident response (IR) program. IDS without IR is essentially worthless. The first steps are for management to identify a team to look into necessary security infrastructure improvements. From this team, recommendations will be made for security improvements and for the requirements against which products can be judged in order to help reduce security vulnerabilities while being an enabler of company business objectives.

Now that you have the IDS deployed and working properly, is it possible to kick back and relax? Not yet — in fact, the cycle has just begun. IDS, while a critical component of the defense-in-depth of an organization's security infrastructure, is just that — only a component.

References

1. C. Stoll, The Cuckoo's Egg: Tracking a Spy through the Maze of Computer Espionage, New York: Pocket Books, 1990.
2. B. Cheswick, An Evening with Berferd in which a Cracker Is Lured, Endured, and Studied, http://www.securityfocus.com/library/1793.
3. Intrusion Detection Pages, http://www.cerias.purdue.edu/coast/intrusion-detection/.
4. M. Handley and V. Paxson, Network Intrusion Detection: Evasion, Traffic Normalization, and End-to-End Protocol Semantics, http://www.aciri.org/vern/papers/norm-usenixsec-01-html/norm.html.
5. http://www.aciri.org/vern/papers/norm-usenix-sec-01.ps.gz.
6. http://www.aciri.org/vern/papers/norm-usenix-sec-01.pdf.
7. http://www.usenix.org/events/sec01/handley.html.

ABOUT THE AUTHOR

Ken M. Shaurette, CISSP, CISA, NSA IAM, is a Senior Information Security Analyst for Omni Tech Corporation in Waukesha, Wisconsin. With over 23 years of IT experience, Ken has provided information security and audit advice and vision for companies building information security programs for over 17 of those years. Ken is the President of the Western Wisconsin Chapter of InfraGard, Vice President of ISSA–Milwaukee, a member of the Wisconsin Association of Computer Crime Investigators (WACCI), and a founder of and participant in the CASPR Project (www.caspr.org). For questions or comments, contact Ken at [email protected].


Chapter 39

Firewalls, Ten Percent of the Solution: A Security Architecture Primer

Chris Hare, CISSP, CISA

A solid security infrastructure consists of many components that, through proper application, can reduce the risk of information loss to the enterprise. This chapter examines the components of an information security architecture and why all the technology is required in today's enterprise.

A principal responsibility of the management team in any organization is the protection of enterprise assets. First and foremost, the organization must commit to securing and protecting its intellectual property. This intellectual property provides the organization's competitive advantage. When an enterprise loses that competitive advantage, it loses its reason for being an enterprise.

Second, management must make decisions about what its intellectual property is, who it wants to protect this property from, and why. These decisions form the basis for a series of security policies to fulfill the organization's information protection needs. However, writing the policies is only part of the solution. In addition to developing the technical capability to implement these policies, the organization must remain committed to them and include regular security audits and other enforcement components in its operating plan. This is similar to installing a smoke alarm: if you do not check the batteries, how will you know it will work when you need it?

There are many reasons why a corporation should be interested in developing a security architecture. These include:

• Telecommunications fraud
• Internet hacking
• Viruses and malicious code
• War dialing and modem hacking
• Need for enhanced communications
• Globalization
• Cyber-terrorism
• Corporate espionage
• E-commerce and transaction-based Web sites

Telecommunications fraud and Internet and modem hacking are still at the top of the list of external methods of attacking an organization. Sources of attack are becoming more sophisticated and know no geographical limits. Consequently, global attacks are more predominant due to the increased growth in Internet connectivity and usage.

With business growth has come the need for enhanced communications. No longer is remote dial-up sufficient. Employees want and need high-speed Internet access and other forms of services to get their jobs done, including videoconferencing, multimedia services, and voice conferencing. Complicating the problem is that many corporate networks span the globe and provide a highly feature-rich, highly connected environment both for their employees and for hackers.

The changes in network requirements and services have meant that corporations are more dependent on technologies that are easily intercepted, such as e-mail, audio conferencing, videoconferencing, cellular phones, remote access, and telecommuting. Employees want to access their e-mail and corporate resources through wireless devices, including their computers, cell phones, and personal digital assistants such as the PalmPilot and Research in Motion (RIM) BlackBerry.

With the Information Age, more and more of the corporation's knowledge and intellectual capital are being stored electronically. Information technology is even reported as an asset on the corporation's financial statements. Without the established and developed intellectual capital, which is often the distinguishing factor between competitors, the competitive advantage may be lost.

Unfortunately, the legal mechanisms are having difficulty dealing with this transnational problem, which affects the effectiveness and value of the legislation — and the expertise of law enforcement, investigators, and prosecutors alike. This legal ineffectiveness means that companies must be more diligent at protecting themselves because these legal deficiencies limit effective protection. Add to this legal problem the often limited training and education investment made to maintain corporate security and investigative personnel in the legal and information technology areas. Frequently, the ability of the hacker far surpasses the ability of the investigator.

Considering the knowledge and operational advantages that a technology infrastructure provides, the answer is this: the corporation requires a security infrastructure because the business needs one.

Over the past 15 years, industry has experienced significant changes in the business environment. Organizations of all sizes are establishing and building new markets. Globalization has meant expanding corporate and public networks and computing facilities to support marketing, sales, and support staff. In addition to the geographical and time barriers, enterprises are continually faced with cultural, legal, language, and ethical issues never before considered. In this time frame, we have also seen a drive toward electronic exchange of information with suppliers and customers, with E-commerce and transaction-based Web sites being the growth leader in this area.

This very competitive environment has forced the enterprise to seek efficiencies to drive down product costs. The result has been the outsourcing of non-core activities and legacy systems, consolidation of workforces, and a reduction in non-essential programs. The mobile user community reflects the desire to get closer to the customer for improved responsiveness (e.g., an automated salesforce). In addition, legislation and the high cost of real estate have played a role in providing employees with the ability to work from home. The result of these trends is that information is no longer controlled within the confines of the data center, thereby making it easier to access and less likely that this access would be noticed.

WHERE ARE THE RISKS?

The fact is that firewalls provide the perimeter security needed by today's organizations. However, left on their own, they provide little more than false assurance that the enterprise is protected. Indeed, many organizations believe the existence of a firewall at their perimeter is sufficient protection. It is not!

The number of risks in today's environment grows daily. Recent documented incidents involving third parties such as outsourced consultants have demonstrated that these relationships carry more risk than some organizations are prepared to handle. For example, Information Week has reported cases where outsourced consultants have injected viruses into the corporate network. A few of the many risks in today's environment include:

• Inter-enterprise networking with business partners and customers
• Outsourcing
• Development partners
• Globalization
• Open systems
• Access to business information
• Research and development activities
• Industrial and economic espionage
• Labor unrest
• Hacking
• Malicious code
• Inadvertent release or destruction of information
• Fraud

These are but a few of the risks to the enterprise that the security architecture must contend with. Once the organization recognizes that the risk comes from both internal and external sources, the corporation can put its energy into the development of technologies to protect its intellectual property. As one legitimate user community after another has been added to the network, it becomes necessary to identify who can see what and to provide a method of enforcing it.

Most enterprises have taken measures to address many of the external exposures, such as hacking and inadvertent leaks, but the internal exposures, such as industrial or economic espionage, are far more complex to deal with. If a competitor really wants to obtain valuable information, it is easier and far more effective to plant someone in the organization or engage a business partner who knows where the information can be found. Consider this: the U.S. FBI estimates that one out of every 700 employees is actively working against the company.

ESTABLISHING THE SECURITY ARCHITECTURE

The architecture of the security infrastructure must be aligned with the enterprise security policy. If there is no security policy, there can be no security infrastructure. As security professionals, we can lead the best technologists to build the best and most secure infrastructure; however, if it fails to meet the business goals and objectives, we have failed. We are, after all, here to serve the interests of the enterprise — not the other way around.

The security architecture and resulting technology implementation must, at the very least, meet the following objectives:

• It must not impede the flow of authorized information or adversely affect user productivity.
• It must protect information at the point of entry into the enterprise.

• It must protect the information throughout its useful life.
• It must enforce common processes and practices throughout the enterprise.
• It must be modular to allow new technologies to replace existing ones with as little impact as possible.

Enterprises and their employees often see security as a business impediment. Consequently, security measures are circumvented in due course. For security measures to work effectively, they must be built into operating procedures and practices in such a way that they do not represent an "extra effort." From personal experience, this author has seen people spend up to ten times the effort and expense to avoid implementing security. The moment the security infrastructure and technology are seen, or perceived, to impact information flow, system functionality, or efficiencies, they will be questioned, and there will be those who will seek ways to avoid the process in the interest of saving time or effort. Consequently, the infrastructure must be effective, yet virtually transparent to the user.

Once data has entered the system, it must be assumed that it may be input to one or more processes. It is becoming impractical to control the use of all data elements at the system layer; therefore, any data that is considered sensitive, or can only be "seen" by a particular user community, must be appropriately protected at the point of entry to the network or system and, most importantly, wherever it is subsequently transferred. This involves the integration of security controls at all levels of the environment: the network, the system, the database, and the application.

A centralized security administration system provides numerous benefits, both in terms of efficiency and consistency. Perhaps the most significant advantage is knowing who has access to what and, if for whatever reason access privileges are to be withdrawn, being able to accomplish that for all systems expeditiously.

Quite clearly, it is not economically feasible to rewrite existing applications or replace existing systems. Therefore, an important aspect of the security architecture must be the ability to accommodate the existing infrastructure. Along the same lines of thinking, the size of existing systems and the population using them precludes a one-time deployment plan. A modular approach is an operational necessity.

The infrastructure resulting from the architecture must also provide specific services and meet additional objectives, including:

• Access controls
• Authorization
• Information classification
• Data integrity

Achieving these goals is not only desirable, it is possible with the technology that exists today. It is highly desirable to have one global user authentication and authorization system or process, a single encryption tool, and a digital signature methodology that can be used consistently across the enterprise for all applications. Authenticating the user does not necessarily address the authorization criteria; it may prove that you are who you say you are, but it does not dictate what information can be accessed and what can be done with it.

Given the inter-enterprise electronic information exchange trend, one can no longer be certain that the data entering the corporate systems is properly protected and stored at the points of creation. Data that is submitted from unsecured areas presents a number of problems, primarily related to integrity, the potential for information to be modified (e.g., the possibility of the terminal device being "spoofed," collecting data, modifying it, and retransmitting it as if from the original device), and confidentiality (e.g., "shoulder surfing").

Unfortunately, one cannot ignore the impact of government on our infrastructure. In some way or another, domestic and foreign policies regarding what one can and cannot use do have an effect. Consider one of the major issues today: the use of encryption. The United States limits the export of encryption based on key length, whereas other governments (e.g., France) have strict rules regarding the use of encryption and when they require a copy of the encryption key. In addition, governments also impose import and export restrictions on corporations to control the movement of technology to and from foreign countries. These import/export regulations are often difficult to deal with due to the generalities in the language the government uses, but they cannot be ignored. Doing so may result in the corporation not being able to trade with some countries, or losing its ability to operate.

AN INFRASTRUCTURE MODEL

The security infrastructure must be concerned with all aspects of the information and the technology used to create and access it. This includes:

• Physical security for the enterprise and security devices
• Monitoring tools
• Public network connectivity
• Perimeter access controls
• Enterprise WAN and LAN
• Operating systems
• Applications
• Databases
• Data

Exhibit 39-1. The infrastructure model (layers shown: users, data, application, operating systems, local area network, wide area network, alarms and monitoring, perimeter, public network connectivity, physical security, policy).

This does not discount the need for proper policies and an awareness program, as discussed earlier. The protection objects listed above, viewed in reverse order (see Exhibit 39-1), provide an outside-in view of protecting the data. What this model also does is incorporate the elements of physical security and awareness, including user training, which are often overlooked. Without the user community understanding what is expected of them, the security model will be difficult — if not impossible — to maintain. The remainder of this chapter focuses on the technology components and how to bring them together in a sample architecture model.

ESTABLISHING THE PERIMETER

The 1980s brought the development of the microcomputer and, despite its cost, many enterprises that were mainframe oriented could now push work throughout the enterprise onto these lower-cost devices. Decentralization of the computing infrastructure brought several benefits and, consequently, several challenges. As connectivity to the Internet increased, a new security model was developed. This consisted of a "moat," where the installation of a firewall provided protection against unauthorized access. Many organizations then, as today, took the approach that information contained within the network was available for any authorized employee to access.

Exhibit 39-2. Perimeter access point (external filter router, firewall, internal filter router, and IDS in front of the protected network).

However, this open approach meant that the enterprise was dependent upon other technology, such as network encryption devices, to protect the information and infrastructure. The consequence many organizations have witnessed with this model is that few internal applications and services made any attempt to operate in a secure fashion. As the number of external organizations connected to the enterprise network increases, the likelihood of the loss of intellectual property also increases. With the knowledge that the corporate network and intellectual property were at risk, it was evident that a new infrastructure was required to address the external access and internal information security requirements. Security professionals around the globe have embarked on new technologies and combinations of them. Consequently, it is not uncommon for the network perimeter to include:

• Screening or filter routers
• Firewalls
• Protected external networks
• Intrusion detection systems

When assembled, the perimeter access point resembles the diagram in Exhibit 39-2. The role of the screening or filter router between the external network and the firewall is to limit the types of traffic allowed through, thereby reducing the quantity of network traffic visible to the firewall. This establishes the first line of defense. The firewall can then respond more effectively to the traffic that is allowed through by the filter router. This first filter router performs the ingress traffic filtering, meaning it limits the traffic inbound to your network based on the filter rules.

Traditionally, enterprises have placed their external systems such as Web and FTP servers outside their firewall, in what is typically known as the DMZ (demilitarized zone). However, placing the systems in this manner exposes them to attack from the external network. An improved approach is to add additional networks to the firewall for these external systems. Doing so creates a protected network, commonly known as a service network or screened subnet.

The filters on the external filter router should be written to allow external connections to systems in the protected network, but only on the allowed service ports. For example, if there is a Web server in the protected network, the filter router can be designed to send all external connection requests destined for the Web server to only the Web server. This prevents any connections into the internal network due to an error on the firewall.

Note: The overuse of filters on routers can impact the overall performance of the device, increasing the time it takes to move a packet from one network to another. For example, adding a single filter rule can add ten percent to the processing load on the router CPU. Consequently, router filter rules, while recommended, must be carefully engineered so as not to impede network performance.
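A filter rule set of this kind can be pictured as a default-deny table of protocol, destination, and port entries. The following Python sketch illustrates the idea; the subnet, host address, and ports are invented for the example, and a real deployment would express the same rules in the router's own filter language rather than in code like this.

    from ipaddress import ip_address, ip_network

    # Hypothetical screened subnet and Web server used for illustration.
    SERVICE_NET = ip_network("192.0.2.0/24")
    INGRESS_RULES = [
        # (protocol, destination host, destination port)
        ("tcp", ip_address("192.0.2.10"), 80),    # Web server, HTTP only
        ("tcp", ip_address("192.0.2.10"), 443),   # Web server, HTTPS only
    ]

    def permit_ingress(proto, dst, dport):
        """Default deny: permit only traffic explicitly allowed into the service network."""
        dst = ip_address(dst)
        if dst not in SERVICE_NET:          # nothing is forwarded toward the internal network
            return False
        return any(proto == p and dst == h and dport == port
                   for p, h, port in INGRESS_RULES)

    print(permit_ingress("tcp", "192.0.2.10", 443))   # True
    print(permit_ingress("tcp", "192.0.2.10", 23))    # False - telnet not allowed
    print(permit_ingress("tcp", "10.1.1.5", 80))      # False - internal destination, dropped

The important design point is the default-deny behavior: anything not explicitly listed for the screened subnet never reaches the firewall, let alone the internal network.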

The firewall is used to create the screened or protected subnet. A screened subnet allows traffic from the external network into the screened subnet, but not directly into the corporate network. Additionally, firewall rules are used to further limit the types of traffic allowed into the screened subnet or into the internal network. Should a system in the protected network require access into the internal network, the firewall provides the rules to do so and limits the protocols or services available into the internal network.

The second filter router, between the firewall and the internal network, is used to limit outbound traffic to the external network. This is particularly important to prevent network auto-discovery systems such as HP OpenView from trying to use their auto-discovery features to map the entire Internet. This filter router can also be used to block other traffic that the enterprise does not want sent out to the Internet. This is egress filtering, or using the router to limit the traffic types being sent to the external network. Some enterprises combine both filters on one router, which is acceptable depending on the ultimate architecture implemented.

The final component is an intrusion detection system (IDS) to identify connection attempts or other unauthorized events and information. Additionally, content filtering systems can be used to scan for undesirable content in various protocols such as Web and e-mail.

Exhibit 39-3. Local area network with security domains.

for both, including those that can prevent the distribution of specific types of attachments in e-mail messages. E-mail attachment scanning should also be implemented in the enterprise to prevent the distribution of attachments such as malicious code within the enterprise. THE NETWORK LAYER The network layer addresses connectivity between one user, or system, and another for the purposes of information exchange. In this context, information may be in the form of data, image, or sound and may be transmitted using copper, fiber, or wireless technologies. This layer will include specific measures to address intra- and inter-enterprise information containment controls, the use of private or public services, protocols, etc. Almost all enterprises will have some level of connectivity with a public data network, be it the Internet or other value-added networks. The security professional must not forget to examine all network access points and connectivity with the external network points and determine what level of protection is needed. At the very least, a screening router must be used. However, in some cases, external legislation determines what network access control devices are used and where they must be located. The enterprise wide area network (WAN) is used to provide communications between offices and enterprise sites. Few enterprises actually maintain the WAN using a leased line approach due to the sheer cost of the service and associated management. Typically, WAN services are utilized through public ATM or Frame Relay networks. While these are operated and managed by the public telecommunications providers, the connectivity is private due to the nature of the ATM and Frame Relay services. Finally, the local area network (LAN) used within each office provides network connectivity to each desktop and workstation within the enterprise. Each office or LAN can be used to segregate users and departments through security domains (see Exhibit 39-3). In this case, the security professional works with the network engineering teams to provide the best location for firewalls and other network 708

AU1518Ch39Frame Page 709 Thursday, November 14, 2002 7:54 PM

Firewalls, Ten Percent of the Solution: A Security Architecture Primer access devices such as additional filter routers. Utilizing this approach can prevent sensitive traffic from traveling throughout the network and only be visible to the users who require it. Additionally, if the information in the security domain requires it, network and host-based IDSs should be used to track and investigate events in this domain. Finally, the security professional should recommend the use of a switched network if a shared media such as coaxial or twisted-pair media is used. Traditional shared media networks allow any system on the network to see all network traffic. This makes it very easy for a sniffer to be placed on the network and packets collected, including password and sensitive application data. Use of a switched network makes it much more difficult, although not impossible. Other controls should be used in the design of the LAN. If the enterprise is using DHCP, any person who connects to the LAN and obtains an IP address can gain access to the enterprise network. For large enterprises, it is unrealistic to attempt to implement MAC-level controls due to the size of the network. However, public areas such as lobbies and conference rooms should be set up in one of the following manners: • No live network jacks • DHCP on a separate subnet and security domain • Filtered traffic The intent of these controls is to prevent a computer in a conference room from being able to participate fully on the network, and only offer limited services. In this context, security domains can be configured to specifically prevent access to other parts of the network or specific systems based on the source IP address. Other LAN-based controls for network analysis and reporting, such as Nicksun Probe and NetVCR, provide network diagnostics, investigation, and forensics information. However, on large, busy networks, these provide an additional challenge, that being the disk space to store the information for later analysis. Each of the foregoing layers provides the capability to monitor activities within that layer. Monitoring systems will be capable of collecting information from one or more layers, which will trigger alarm mechanisms when certain undesirable operational or security criteria are met. The alarm and monitoring tools layer will include such things as event logging, system usage, exception reporting, and clock synchronization. PHYSICAL SECURITY Physical security pertains to all practices, procedures, and measures relating to the operating environment, the movement of people, equipment 709
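As a simple illustration of the source-address restrictions just described, the sketch below grants a host in the conference-room DHCP range only a limited set of services. The subnets and service names are assumptions made for the example.

    from ipaddress import ip_address, ip_network

    # Hypothetical address plan for the example.
    CONFERENCE_ROOM_NET = ip_network("10.50.0.0/24")   # separate DHCP subnet and security domain
    LIMITED_SERVICES = {"web-proxy", "guest-printing"}
    FULL_SERVICES = LIMITED_SERVICES | {"file-shares", "intranet-apps"}

    def allowed_services(source_ip):
        """Guests in public areas get limited services; office hosts get the full set."""
        if ip_address(source_ip) in CONFERENCE_ROOM_NET:
            return LIMITED_SERVICES
        return FULL_SERVICES

    print(allowed_services("10.50.0.23"))    # limited set for a conference-room host
    print(allowed_services("10.10.4.7"))     # full set for an office workstation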

Other LAN-based controls for network analysis and reporting, such as the Nicksun Probe and NetVCR, provide network diagnostics, investigation, and forensics information. On large, busy networks, however, these present an additional challenge: the disk space needed to store the information for later analysis.

Each of the foregoing layers provides the capability to monitor activities within that layer. Monitoring systems will be capable of collecting information from one or more layers and will trigger alarm mechanisms when certain undesirable operational or security criteria are met. The alarm and monitoring tools layer will include such things as event logging, system usage, exception reporting, and clock synchronization.

PHYSICAL SECURITY

Physical security pertains to all practices, procedures, and measures relating to the operating environment, the movement of people, equipment, or goods, building access, wiring, system hardware, etc. Physical security elements are used to ensure that the corporate assets are not subjected to unwarranted security risks. Items addressed at this layer include secure areas, security of equipment off-premises, movement of equipment, and secure disposal of equipment.

The physical security of the network access control devices, including the

• Firewall
• IDS
• Filter routers
• Hubs
• Switches
• Cabling
• Security systems

is paramount to ensuring the ongoing protection of the network and enterprise data. Should these systems not be adequately protected, a device could be installed and no one would notice. Physical security controls for these devices should include locked cabinets and cable conduits, to name only two.

SYSTEM CONTROLS

Beyond the network are the systems and applications that users rely upon on a daily basis to fulfill enterprise business objectives. The protection of the operating system, the application proper, and the data is just as important as that of the network.

Fundamentally, information security is in the hands of the users. Regardless of the measures that may be implemented, carelessness on the part of individuals involved in the preparation, consolidation, processing, recording, or movement of information can compromise any or all security measures. This layer therefore looks at the human-related processes, procedures, and knowledge related to developing a secure environment, such as user training, information security training and awareness, and security policies and procedures.

Access to the environment must be controlled through a coordinated access control program, as discussed later in this chapter. Access control provides the control mechanisms to limit access to systems, applications, data, or services to authorized people or systems. It includes, for example, identification of the user, their authorization, and security practices and procedures. Examples of items that would be included in access control systems include identification and authentication methods, privilege management, and user registration. One could argue that privilege management is part of authorization; however, it should be closely coupled to the authentication system.

The operating system controls provide the functionality for applications to be executed and for management of system peripheral units, including connectivity to network facilities. A heterogeneous computing environment cannot be considered homogeneous from a security perspective because each manufacturer has addressed the various security issues in a different manner. However, within the architecture, the security professional should establish consistent operating system baselines and configurations to maintain the overall environment. Just as the security professional will likely install a network-based intrusion detection system, so too should host-based systems be considered for the enterprise's critical systems and data. Adding the host-based element provides the security professional with the ability to monitor for specific events on the system itself that may not be monitored by or captured through a network-based intrusion detection system.

The data aspect of the architecture addresses the measures taken to ensure data origination authenticity, integrity, availability, non-repudiation, and confidentiality. This layer will address such things as database management, data movement and storage, backup and recovery, and encryption. Depending on the applications in use, a lot of data is moved between applications. These data transfers, or interfaces, must be developed appropriately to ensure that there is little possibility of data compromise or loss while in transit.

The application and services layer addresses the controls required to ensure the proper management of information processing, including inputs and outputs, and the provision of published information exchange services.

ESTABLISHING THE PROGRAM

The security architecture must not only include the elements discussed so far, but also extend into all areas to provide an infrastructure that protects from the perimeter to the data. This is accomplished by linking security applications and components in a tightly integrated structure to implement a security control infrastructure (see Exhibit 39-4). The security control infrastructure includes security tools and processes that sit between the application and the network. It augments or, ideally, replaces some of the control features in the applications — mostly user authentication. This means that the application does not maintain its own view of authentication, but relies on the security control infrastructure to perform the authentication. The result is that the user can authenticate once and let the security control infrastructure take over. This allows for the eventual implementation of a single sign-on capability.

Exhibit 39-4. Security control infrastructure (labels from the original diagram: user privilege management function, applications with an automated interface, application security controls, corporate privilege management tool, security control infrastructure with X.500, authentication, and encryption services, and the network).

A centralized tool for the management of individual user and process privileges is required to enable the security control infrastructure to achieve this goal. The centralized user management services interact with the control infrastructure to determine what the user is allowed to do. The control infrastructure and other services within it depend on the existence of an enterprisewide privilege database containing the access and application rights for every user. The result is a security infrastructure that has the ability to deliver encryption, strong authentication, and a corporate directory, with the ability to add single sign-on and advanced privilege management in the future.

THE CORPORATE DIRECTORY

The corporate directory, which is a component of the security control infrastructure, contains elements such as:

• Employee number, name, department, and other contact information
• Organizational information such as the employee's manager and reporting structure
• Systems assigned to the employee
• User account data
• E-mail addresses
• Authorized application access
• Application privileges
• Authentication information, including method, passwords, and access history
• Encryption keys
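Because such a directory is typically reached over LDAP (discussed further below), a per-user lookup of contact data and application privileges can be as small as the sketch below. It assumes the third-party ldap3 Python package, and the host name, bind DN, and attribute names are placeholders; the real schema and naming would be defined by the enterprise.

    from ldap3 import Server, Connection, ALL

    # Host, bind DN, and attribute names below are placeholders for the example.
    server = Server("directory.example.com", get_info=ALL)
    conn = Connection(server, "cn=svc-lookup,ou=service,dc=example,dc=com",
                      "service-password", auto_bind=True)

    # Pull the elements the chapter describes - contact data, account data,
    # and application privileges - for a single employee.
    conn.search("ou=people,dc=example,dc=com",
                "(uid=jsmith)",
                attributes=["mail", "employeeNumber", "manager", "memberOf"])

    for entry in conn.entries:
        # Group memberships are one common way to represent application privileges.
        print(entry.mail, list(entry.memberOf))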

Exhibit 39-5. Authentication information for network, system, and application access (an X.500 directory server provides directory information, e-mail addresses, and encryption public keys to internal users, and limited access to indirectly connected users through the firewall).

All of this information is managed through the enterprise user and privilege management system to provide authentication information for network, system, and application access on a per-user basis (see Exhibit 39-5). With the wide array of directory products available today, most enterprises will not have to develop their own technology, but are best served using X.500 directory services, as these provide Lightweight Directory Access Protocol (LDAP) services that can be used by many of today's operating systems, including Windows 2000. The enterprise directory can also be used to provide the necessary details for environments that cannot access the directory directly, such as NIS and non-LDAP-ready Kerberos implementations.

Using the enterprise privilege management applications, a new user can be added in a few minutes, with all the necessary services configured. New applications and services can be added at any time. Should an employee no longer require access to specific applications or application privileges, the same tool can be used to remove them from the enterprise directory, and subsequently from the application itself.

Exhibit 39-6. Authentication systems (scenarios shown: remote users logging in via terminal servers with SecurID, internal users accessing a secure Web page on an Apache SSL server, internal users accessing protected nodes such as a secure HR node through the firewall, and an unauthorized user stopped at a dropbox).

A major challenge for many enterprises is removing user access when that user's employment ends. The enterprise directory removes this problem because the information can be removed or invalidated within the directory, thereby preventing the possibility of the employee's access remaining active and exposing the company beyond the user's final day of work.

AUTHENTICATION SYSTEMS

There are many different identification and authentication systems available, including passwords, secure tokens, biometrics, and Kerberos, to name a few (see Exhibit 39-6). The enterprise must ultimately decide what authentication method makes sense for its own business needs, and it may require multiple systems for different information types within the enterprise. However, the common thread is that in today's environment the simple password is just not good enough anymore.

When a user authenticates to a system or application, his credentials are validated against the enterprise directory, which then makes the decision to allow or deny the user's access request. The directory can also provide authorization information to the requesting application, thereby limiting the access rights for that user. Using this methodology, the exact authentication method is irrelevant and could be changed at any time.

For example, using a password today could be replaced with a secure token, biometrics, or Kerberos at any time, and multiple authentication technologies can easily coexist within the enterprise. However, one must bear in mind that user authentication is only one aspect. A second aspect concerns authentication of the information. This is achieved through the use of a digital signature, which provides authentication and integrity of the original message. It is important to remember: no authentication method is perfect. As security professionals, we can only work to establish ever greater levels of trust in the authenticating users.

ENCRYPTION SERVICES

Encryption is currently the only way to ensure the confidentiality of electronic information. In today's business environment, the protection of enterprise and strategic information has become a necessity. Consequently, the infrastructure requirements include encryption and digital signatures (see Exhibit 39-7). Encryption of files before sending them over the Internet is essential, given the amount of business and intellectual property stolen over the Internet each year. The infrastructure must provide for key management, as well as the ability to handle keys of varying size. For example, global companies may require key management abilities for multiple key sizes.

Encryption of enterprise information may be required within applications. However, without a common application-based encryption method, this is difficult to achieve. Through the use of virtual private network (VPN) technologies, however, one can construct a VPN within the enterprise network for the protection of specific information, regardless of the underlying network technologies. Virtual private networking is also a critical service when sessions are carried over insecure networks such as the Internet. In addition, the mobile user community must be able to protect the integrity and confidentiality of its data in the event a computer is stolen. This level of protection requires more than encryption alone, including such measures as disk and system locking tools.

Exhibit 39-7. Encryption services (file protection on laptops for travelers, remote users accessing encrypted information, protection of sensitive information on internal nodes, secured transmission of data and e-commerce traffic across the Internet, encrypted WWW files on a secure Web server, and exchange of secured files across the Internet).

CUSTOMER AND BUSINESS PARTNER ACCESS

The use of the security infrastructure allows for the creation of secure environments for information exchange. One such example is the customer access network (see Exhibit 39-8): those entry points where non-enterprise employees such as customers and suppliers can access the enterprise network and specific resources. In our global community, the number of networks being connected every day continues to grow. However, connecting one's corporate network to "theirs" also exposes one to all of the other networks "they" are connected to. Through the deployment of customer access networks, the ability to provide connectivity with security is achieved.

The customer access network is connected to the customer network and to one's corporate network, is configured to prevent access between connected partners, and includes a firewall between it and the corporate network. In fact, the customer may also want a firewall between its own network and the access point. With VPN technologies, the customer access network need not be extremely complicated, but it does result in a VPN endpoint and specific rules within the VPN device for restricting the protocol types and destinations that the customer is permitted to access.
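The per-customer restrictions enforced at the VPN endpoint can be pictured as a small rule table keyed by partner, as in the sketch below. The partner names, protocols, and destination subnets are invented for the example; in practice the rules would be stored in the enterprise directory, as noted next, and pushed to the VPN device.

    from ipaddress import ip_address, ip_network

    # Hypothetical partner rules: which protocols and which internal destinations
    # each connected customer may reach through the customer access network.
    PARTNER_RULES = {
        "acme-supply": {"protocols": {"https"}, "destinations": [ip_network("10.20.5.0/28")]},
        "widgetco":    {"protocols": {"https", "sftp"}, "destinations": [ip_network("10.20.9.16/29")]},
    }

    def partner_may_connect(partner, protocol, destination):
        rules = PARTNER_RULES.get(partner)
        if rules is None:                      # unknown partner: deny by default
            return False
        return (protocol in rules["protocols"] and
                any(ip_address(destination) in net for net in rules["destinations"]))

    print(partner_may_connect("acme-supply", "https", "10.20.5.3"))   # True
    print(partner_may_connect("acme-supply", "sftp", "10.20.5.3"))    # False - protocol not permitted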

Exhibit 39-8. Customer access network (customer networks and customer access nodes connect through terminal servers to a customer access network, which is separated from the enterprise network by a firewall; an authentication server, internal database server, and other internal servers sit inside, and employees reach the Internet through a separate firewall).

The rules associated with the individual customer should be stored in the enterprise directory to allow easy setup and removal of the VPN access rules and keys. The real purpose behind the customer access network is not only to build a bridge between the two networks, but to build a secure bridge.

CONCLUSION

This chapter focused on the technologies and concepts behind a security infrastructure. There are other elements that ideally should be part of the security infrastructure, including:

• Desktop and server anti-virus solutions
• Web and e-mail content filtering
• Anti-spam devices

At the same time, however, the infrastructure must be designed at the conceptual level from the business processes and needs, and not be driven by the available technology.

The adage that "the business must drive the technology" is especially true. Many security and IT professionals forget that their jobs are dependent upon the viability and success of the enterprise — they exist to serve the enterprise, and not the other way around!

Many infrastructure designers are seduced by the latest and greatest technology. This can have dire consequences for the enterprise due to unreliable code or hardware. Additionally, one never knows when one has something that works because one is constantly changing it. To make matters worse, because the users will not know what the "flavor of the week" is, they will simply refuse to use it.

Through the development of a security infrastructure that is global in basis and supported by the management structure, the following benefits are realized:

• Developers are encouraged to include security in the early stages of their new products or business processes.
• The risks and costs associated with new ventures or business partners are reduced by an order of magnitude compared with reactive processes.
• Planning and operations are centralized, with an infrastructure responsive to meeting business needs.
• Business application developers can deliver stronger controls over stored intellectual capital.
• The risks associated with loss of confidentiality are minimized.
• Security capabilities are strengthened within the installed backbone applications (e.g., e-mail, servers, WWW).
• The privacy and integrity associated with the corporation's intellectual capital are increased.
• The risks and costs associated with security failures are reduced.

In short, we have created a security infrastructure that protects the enterprise assets, is manageable, and is a business enabler. Above all, the infrastructure must allow the network users, developers, and administrators to contribute to the corporation's security by allowing them to "do the right thing."

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.


Chapter 40

The Reality of Virtual Computing

Chris Hare, CISSP, CISA

A major issue in many computing environments is accessing the desktop or console display of a graphical-based system different from the one you are using. If you are in a homogeneous environment, meaning you want to access a Microsoft Windows system from a Windows system, you can use applications such as Timbuktu, pcAnywhere, or RemotelyPossible. In today's virtual enterprise, many people have a requirement to share their desktops or to allow others to view or manipulate them. Many desktop-sharing programs exist aside from those mentioned, including Microsoft NetMeeting and online conferencing tools built into various applications. The same is true for UNIX systems, which typically use the X Windows display system as the graphical user interface. It is a simple matter of running the X Windows client on the remote system and displaying it on the local system.

However, if you must access a dissimilar system (e.g., a Windows system from a UNIX system), the options are limited. It is difficult to find an application under UNIX allowing a user to view an online presentation from a Windows system using Microsoft PowerPoint. This is where Virtual Network Computing, or VNC, from AT&T's United Kingdom research labs, enters the picture.

This chapter discusses what VNC is, how it can be used, and the security considerations surrounding VNC. The information presented does get fairly technical in a few places to illustrate the protocol, programming techniques, and weaknesses in the authentication scheme. However, the corresponding explanations should address the issues for the less technical reader.

WHAT IS VNC?

The Virtual Network Computing system, or VNC, was developed at the AT&T Research Laboratories in the United Kingdom.

VNC is a very simple graphical display protocol allowing connections from heterogeneous or homogeneous computer systems. VNC consists of a server and a viewer, as illustrated in Exhibit 40-1. The server accepts connection requests to display its local display on the viewer.

Exhibit 40-1. The VNC components (a VNC viewer connected to a VNC server using the VNC protocol).

The VNC services are based upon what is called a remote framebuffer, or RFB. The framebuffer protocol simply allows a server to update the framebuffer or graphical display device on the remote viewer. With total independence from the graphical device driver, it is possible to represent the local display from the server on the client or viewer. The portability of the design means the VNC server should function on almost any hardware platform, operating system, windowing system, and application.

Support for VNC is currently available for a number of platforms, including:

• Servers:
  — UNIX (X Window System)
  — Microsoft Windows
  — Macintosh
• Viewers:
  — UNIX (X Window System)
  — Microsoft Windows
  — Macintosh
  — Java
  — Microsoft Windows CE

VNC is described as a thin-client protocol, making very few requirements on the viewer. In this manner, the client can run on the widest range of hardware. There are a number of factors distinguishing VNC from other remote display systems, including:

• VNC is stateless, meaning you can terminate the session and reconnect from another system and continue right where you left off. When you connect to a remote system using an application such as a PC X server and the PC crashes or is restarted, the X Window System applications that were running terminate. Using VNC, the applications remain available after the reboot.
• The viewer is a thin client and has a very small memory footprint.
• VNC is platform independent, allowing a desktop on one system to be displayed on any other type of system, including Java-capable Web browsers.
• It can be shared, allowing multiple users the ability to view and share a single desktop at the same time. This can be useful when needing to perform presentations over the network.
• And, best of all, VNC is free and distributed under the standard GNU General Public License (GPL).

These are some of the benefits available with VNC. However, despite the clever implementation to share massive amounts of video data, there are a few weaknesses, as presented in this chapter.

HOW IT WORKS

Accessing the VNC server is done using the VNC client and specifying the IP address or node name of the target VNC server, as shown in Exhibit 40-2.

Exhibit 40-2. The X Windows VNC client.

The window shown in Exhibit 40-2 requests the node name or IP address for the remote VNC server. It is also possible to add a port number with the address. The VNC server has a password to protect against unauthorized access to the server. After providing the target host name or IP address, the user is prompted for the password to access the server, as seen in Exhibit 40-3.


Exhibit 40-3. Entering the VNC server password.

Exhibit 40-4. The UNIX VNC client displays the password.

The Microsoft Windows VNC viewer does not display the password as the user enters it. However, the VNC client included with Linux systems does not hide the password when the user enters it, as shown in Exhibit 40-4. This is an issue because it exposes the password for the server to public view. However, because there is no user-level authentication, one could say there is no problem. Just in case you missed it: there is no user-level authentication. This is discussed again later in this chapter in the section entitled "Access Control."

The VNC client prompts for the password after the connection is initiated with the server and requests authentication using a challenge–response scheme. The challenge–response system used is described in the section entitled "Access Control." Once the authentication is successful, the client and server then exchange a series of messages to negotiate the desktop size, pixel format, and the encoding schemes. To complete the initial connection setup, the client requests a full update for the entire screen and the session commences. Because the client is stateless, either the server or the client can close the connection with no impact to either side.


Exhibit 40-5. The Windows desktop from Linux.

In fact, this chapter was written while logged into a Linux system, using VNC to access a Microsoft Windows system running Microsoft Word. When using VNC on the UNIX- or Linux-based client, the user sees the Windows desktop as illustrated in Exhibit 40-5. The opposite is also true — a Windows user can access the Linux system and see the UNIX or Linux desktop, as well as use the features and functionality offered by the UNIX platform (see Exhibit 40-6). However, VNC is not limited to these platforms, as mentioned earlier and demonstrated later.

This may not be exactly what the Linux user was expecting, however. The VNC sessions run as additional displays on the X server, which on Red Hat Linux systems default to the TWM window manager. This can be changed; however, that is outside the topic area of this chapter.

NETWORK COMMUNICATION

All network communication requires the use of a network port. VNC is a connection-based TCP/IP application requiring the use of network ports. The VNC server listens on two ports. The values of these ports depend upon the access method and the display number.


Exhibit 40-6. The TWM Window Manager from Windows.

The VNC server listens on port 5900 plus the display number. WinVNC for Microsoft Windows defaults to display zero, so the port is 5900. The same is true for the Java-based HTTP port, which listens at port 5800 plus the display number. This small and restrictive Web server is discussed more in the section entitled "VNC and the Web."

If there are multiple VNC servers running on the same system, they will have different port numbers because their display numbers are different, as illustrated in Exhibit 40-7. A separate VNC server is executed for each user who wishes to have one. Because there is no user authentication in the VNC server, the authentication is essentially port based. In the following example, user chare is running a VNC server, which is set up on display 1 and therefore port 5901. Because the VNC server is running as user chare, anyone who learns or guesses the password for that server can access chare's VNC server and have all of chare's privileges. Looking back to Exhibit 40-6, the session running on the Linux system belonged to root, as shown here:

[chare@rhlinux chare]$ ps -ef | grep vnc
root 20368 1 0 23:21 pts/1 00:00:00 Xvnc :1 -desktop X -httpd/usr/s


Exhibit 40-7. Multiple VNC servers. (Diagram: three VNC viewers connecting to displays 1, 2, and 3 on a single host, using ports 5901, 5902, and 5903.)

chare 20476 20436 0 23:25 pts/3 00:00:00 grep vnc
[chare@rhlinux chare]$

In this scenario, any user who knows the password for the VNC server on display 1, which is port 5901, can become root with no additional password required. Because of this access control model, good-quality passwords must be used to control access to the VNC server; and they must be kept absolutely secret.

As mentioned previously, the VNC server also runs a small Web server to support access through the Java client. The Web server listens on port 58xx, where xx is the display number for the server. The HTTP port on the Web server is only used to establish the initial HTTP connection and download the applet. Once the applet is running in the browser, the connection uses port 59xx. The section entitled "VNC and the Web" describes using the VNC Java client.

There is a third mode, where the client listens for a connection from the server rather than connecting to a server. When this configuration is selected, the client listens on port 5500 for the incoming connection from the server.
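The port arithmetic is simple enough to capture in a few lines. The following is a minimal sketch (not taken from the VNC sources) of how the display number maps onto the ports discussed above:

#include <stdio.h>

/* Port numbering used by VNC, as described above:
 * RFB (viewer) port = 5900 + display number
 * HTTP (Java) port  = 5800 + display number
 * A viewer in "listen" mode always uses the fixed port 5500. */
static int rfb_port(int display)  { return 5900 + display; }
static int http_port(int display) { return 5800 + display; }

int main(void)
{
    int display;
    for (display = 0; display <= 3; display++)
        printf("display %d: RFB port %d, HTTP port %d\n",
               display, rfb_port(display), http_port(display));
    return 0;
}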

ACCESS CONTROL

As mentioned previously, the client and server exchange a series of messages during the initial connection setup. These protocol messages consist of:

• ProtocolVersion
• Authentication
• ClientInitialization
• ServerInitialization

Once the ServerInitialization stage is completed, the client can send additional messages when it requires and receive data from the server.

The protocol version number defines what level of support both the client and server have. It is expected that some level of backward compatibility is available, because the version reported should be the latest version the client or server supports. When starting the VNC viewer on a Linux system, the protocol version is printed on the display (standard output) if not redirected to a file. Using a tool such as tcpdump, we can see the protocol version passed from the server to the client; the final 12 bytes of the payload decode to the ASCII string "RFB 003.003":

22:39:42.215633 eth0 < alpha.5900 > rhlinux.chare-cissp.com.1643: P 1:13(12) ack 1 win 17520
4500 0040 77f0 0000 8006 4172 c0a8 0002
c0a8 0003 170c 066b 38e9 536b 7f27 64fd
8018 4470 ab7c 0000 0101 080a 0000 9455
02d2 854f 5246 4220 3030 332e 3030 330a

and then the client replies with its own version string:

22:39:42.215633 eth0 > rhlinux.chare-cissp.com.1643 > alpha.5900: P 1:13(12) ack 13 win 5840 (DF)
4500 0040 e1b5 4000 4006 d7ac c0a8 0003
c0a8 0002 066b 170c 7f27 64fd 38e9 5377
8018 16d0 d910 0000 0101 080a 02d2 854f
0000 9455 5246 4220 3030 332e 3030 330a

Again, the payload ends with the 12-byte version string "RFB 003.003".
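For readers who want to see how little is involved, the following is a minimal sketch (not taken from the VNC sources) of reading and parsing that fixed-length 12-byte ProtocolVersion string on an already-connected TCP socket:

#include <stdio.h>
#include <unistd.h>

/* Read the 12-byte "RFB xxx.yyy\n" version string and extract the
 * major and minor protocol numbers. Returns 0 on success, -1 on error. */
int read_rfb_version(int sock, int *major, int *minor)
{
    char buf[13];
    size_t got = 0;

    while (got < 12) {
        ssize_t n = read(sock, buf + got, 12 - got);
        if (n <= 0)
            return -1;              /* connection closed or read error */
        got += (size_t)n;
    }
    buf[12] = '\0';

    /* The trace above shows the string "RFB 003.003". */
    if (sscanf(buf, "RFB %3d.%3d", major, minor) != 2)
        return -1;
    return 0;
}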


Exhibit 40-8. The VNC authentication challenge–response. (Diagram: the VNC server sends a 16-byte challenge to the viewer; the viewer sends back the encrypted response; the server answers OK, Fail, or Too Many.)

With the protocol version established, the client attempts to authenticate to the server. The password prompt shown in Exhibit 40-3 is displayed on the client, where the user enters the password. There are three possible authentication messages in the VNC protocol:

1. Connection Failed. The connection cannot be established for some reason. If this occurs, a message indicating the reason the connection could not be established is provided.
2. No Authentication. No authentication is needed. This is not a desirable option.
3. VNC Authentication. Use VNC authentication.

The VNC authentication challenge–response is illustrated in Exhibit 40-8. The VNC authentication protocol uses a challenge–response method with a 16-byte (128-bit) challenge sent from the server to the client. The challenge is sent from the server to the client in the clear. The challenge is random, based upon the current time when the connection request is made. In the following packet, the challenge is the final 16 bytes of the payload (0456 b197 31f3 ad69 a513 151b 195d 8620):

14:36:08.908961 < alpha.5900 > rhlinux.chare-cissp.com.2058: P 17:33(16) ack 13 win 17508
4500 0044 aa58 0000 8006 0f06 c0a8 0002
c0a8 0003 170c 080a ae2b 8b87 f94c 0e34
8018 4464 1599 0000 0101 080a 000c 355a
0083 1628 0456 b197 31f3 ad69 a513 151b
195d 8620


The client then encrypts the 16-byte challenge using Data Encryption Standard (DES) symmetric cryptography with the user-supplied password as the key. The VNC DES implementation is based upon a public domain version of Triple-DES, with the double and triple length support removed. This means VNC is only capable of using standard DES for encrypting the response to the challenge. The 16-byte encrypted response is carried in the payload of the following packet:

14:36:11.188961 < rhlinux.chare-cissp.com.2058 > alpha.5900: P 13:29(16) ack 33 win 5840 (DF)
4500 0044 180a 4000 4006 a154 c0a8 c0a8
0002 080a 170c f94c 0e34 ae2b 8018 16d0
facd 0000 0101 080a 0083 000c 355a 7843
ba35 ff28 95ee 1493 0410 8b86 0003 8b97
170c caa7
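To make the mechanics concrete, the following is a minimal sketch of the client-side response computation. It uses OpenSSL's DES API rather than the d3des code shipped with VNC, and it uses the padded password bytes directly as the DES key; the real VNC key setup has its own quirks, so treat this as illustrative only, not as the AT&T implementation.

#include <string.h>
#include <openssl/des.h>

/* Compute the 16-byte response to a 16-byte challenge, using the
 * (at most eight-character) password as the DES key.               */
void vnc_response(const unsigned char challenge[16],
                  const char *password,
                  unsigned char response[16])
{
    DES_cblock key = {0};
    DES_key_schedule sched;
    size_t i;

    for (i = 0; i < 8 && password[i] != '\0'; i++)
        key[i] = (unsigned char)password[i];   /* truncate/zero-pad to 8 bytes */

    DES_set_key_unchecked(&key, &sched);

    /* The 16-byte challenge is encrypted as two independent 8-byte DES blocks. */
    DES_ecb_encrypt((const_DES_cblock *)challenge,
                    (DES_cblock *)response, &sched, DES_ENCRYPT);
    DES_ecb_encrypt((const_DES_cblock *)(challenge + 8),
                    (DES_cblock *)(response + 8), &sched, DES_ENCRYPT);
}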

The server receives the response and, if the password on the server is the same, the server can decrypt the response and find the value issued as the challenge. As discussed in the section "Weaknesses in the VNC Authentication System" later in this chapter, the approach used here is vulnerable to a man-in-the-middle attack, or to a cryptographic attack to find the key, which is the password for the server. Once the server receives the response, it informs the client whether the authentication was successful by providing an OK, Failed, or Too Many response. After five authentication failures, the server responds with Too Many and does not allow immediate reconnection by the same client.

The ClientInitialization and ServerInitialization messages allow the client and server to negotiate the color depth, screen size, and other parameters affecting the display of the framebuffer.

As mentioned in the "Network Communication" section, the VNC server runs on UNIX as the user who started it. Consequently, there are no additional access controls in the VNC server. Is the server safe as long as the password is known to no one else? Yes and no. Because the password is used as the key for the DES-encrypted response, the password is never sent across the network in the clear. However, as we will see later in the chapter, the challenge–response method is susceptible to a man-in-the-middle attack.


The VNC Server Password

The server password is stored in a password file on the UNIX file system in the ~/.vnc directory. The password is always stored using the same 64-bit key, meaning the password file should be protected using the local file system permissions. Failure to protect the file exposes the password, because the key is consistent across all VNC servers. The password protection system is the same on the other supported server platforms; however, the location of the password is different. The VNC source code provides the consistent key:

/*
 * We use a fixed key to store passwords, since we assume
 * that our local file system is secure but nonetheless
 * don't want to store passwords as plaintext.
 */
unsigned char fixedkey[8] = {23,82,107,6,35,78,88,7};

This fixed key is used as input to the DES functions to encrypt the password; however, the password must be unencrypted at some point to verify authentication.

The VNC server creates the ~/.vnc directory using the standard default file permissions as defined with the UNIX system's umask. On most systems, the default umask is 022, making the ~/.vnc directory accessible to users other than the owner. However, the password file is explicitly set to force read/write permissions only for the file owner; so the chance of an attacker discovering the password is minimized unless the user changes the permissions on the file, or the attacker has gained elevated user or system privileges. If the password file is readable to unauthorized users, the server password is exposed because the key is consistent and publicly available. The attacker does not require much more information, because the functions to encrypt and decrypt the password in the file are included in the VNC source code. With knowledge of the VNC default password key and access to the VNC server password file, an attacker can obtain the password using roughly 20 lines of C language source code. A sample C program, here called attack.c, can be used to decrypt the VNC server password should the password file be visible:






/*
 * attack.c -- decrypt a VNC server password file.
 * The include directives in the original listing lost their header names
 * in reproduction; stdio.h, stdlib.h, and the vncauth.h header from the
 * VNC libvncauth sources are an assumed minimal set.
 */
#include <stdio.h>
#include <stdlib.h>
#include "vncauth.h"        /* declares vncDecryptPasswdFromFile() */

int main(int argc, char **argv)
{
    char *passwd;

    if (argc <= 1) {
        printf("specify the location and name of a VNC password file\n");
        exit(1);
    }

    /* we might have a file */
    passwd = vncDecryptPasswdFromFile(argv[1]);

    printf("passowrd file is%s\n", argv[1]);
    printf("password is%s\n", passwd);
    exit(0);
}

Note: Do not use this program for malicious purposes. It is provided for education and discussion purposes only.

Running the attack.c program with the location and name of a VNC password file displays the password:

[chare@rhlinux libvncauth]$ ./attack $HOME/.vnc/passwd
passowrd file is/home/chare/.vnc/passwd
password is holycow
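The prompt above suggests the program was built inside the libvncauth directory of the VNC source tree. A plausible build line (an assumption, since the exact command is not given in the chapter and the file names come from the AT&T source distribution) would be:

cc -I. -o attack attack.c vncauth.c d3des.c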

The attacker can now gain access to the VNC server. Note, however, that this scenario assumes the attacker already has access to the UNIX system.

For the Microsoft Windows WinVNC, the configuration is slightly different. While the methods to protect the password are the same, WinVNC uses the Windows registry to store the server's configuration information, including passwords. The WinVNC registry entries are found at:

• Local machine-specific settings: HKEY_LOCAL_MACHINE\Software\ORL\WinVNC3\
• Local default user settings: HKEY_LOCAL_MACHINE\Software\ORL\WinVNC3\Default
• Local per-user settings: HKEY_LOCAL_MACHINE\Software\ORL\WinVNC3\
• Global per-user settings: HKEY_CURRENT_USER\Software\ORL\WinVNC3


Exhibit 40-9. WinVNC Windows registry values.

The WinVNC server password will be found in the local default user settings area, unless a specific user defines his own server. The password is stored as an individual registry key value as shown in Exhibit 40-9. Consequently, access to the registry should be as controlled as possible to prevent unauthorized access to the password. The password stored in the Windows registry uses the same encryption scheme to protect it as on the UNIX system. However, looking at the password shown in Exhibit 40-9, we see the value: 48 a0 ef f3 4a 92 96 e5

and the value stored on UNIX is: a0 48 f3 ef 92 4a e5 96

Comparing these values, we see that the byte ordering is different. However, knowing that the ordering is different, we can use a program to create a binary file on UNIX with the values from the Windows system and then use the attack.c program above to determine the actual password. Notice that because the password values shown in this example are the same, and the encryption used to hide the passwords is the same, the passwords are the same.
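A minimal sketch of that conversion follows, using the eight bytes shown above. The output file name is an arbitrary choice for illustration; the resulting file can then be fed to attack.c:

#include <stdio.h>

int main(void)
{
    /* Password bytes as displayed in the WinVNC registry (Exhibit 40-9). */
    unsigned char win[8] = {0x48, 0xa0, 0xef, 0xf3, 0x4a, 0x92, 0x96, 0xe5};
    unsigned char nix[8];
    int i;
    FILE *fp;

    for (i = 0; i < 8; i += 2) {   /* swap each adjacent pair of bytes */
        nix[i]     = win[i + 1];
        nix[i + 1] = win[i];
    }

    fp = fopen("passwd.win", "wb");
    if (fp == NULL) { perror("fopen"); return 1; }
    fwrite(nix, 1, sizeof(nix), fp);
    fclose(fp);
    return 0;
}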


Additionally, the VNC password is limited to eight characters. Even if the user enters a longer password, it is truncated to eight. Assuming a good-quality password with 63 potential characters in each position, this represents only 63^8 (roughly 2.5 × 10^14) possible passwords. Even with this fairly large number, the discussion thus far has demonstrated the weaknesses in the authentication method.

RUNNING A VNC SERVER UNDER UNIX

The VNC server running on a UNIX system uses the X Window System to interact with the X-based applications on UNIX. The applications are not aware that there is no physical screen attached to the system. Starting a new VNC server is done by executing the command:

vncserver

on the UNIX host. Because the vncserver program is actually written in Perl, most common problems with starting vncserver are associated with the Perl installation or directory structures.

Any user on the UNIX host can start a copy of the VNC server. Because there is no user authentication built into the VNC server or protocol, running a separate server for each user is the only method of providing limited access. Each vncserver has its own password and port assignment, as presented earlier in the chapter. The first time users run the VNC server, they are prompted to enter a password for the VNC server. Each VNC server started by the same user will have the same password. This occurs because the UNIX implementation of VNC creates a directory called .vnc in the user's home directory. The .vnc directory contains the log files, PID files, password, and X startup files. Should users wish to change the password for their VNC servers, they can do so using the vncpasswd command.

VNC Display Names

Typically, the main display for a workstation using the X Window System is display 0 (zero). This means on a system named ace, the primary display is ace:0. A UNIX system can run as many VNC servers as the users desire, with the display number incrementing for each one. Therefore, the first VNC server is display ace:1, the second ace:2, etc. Individual applications can be executed and, with the DISPLAY environment variable set accordingly, send their output to the display corresponding to the desired VNC server. For example, sending the output of an xterm to the second VNC server on display ace:2 is accomplished using the command:

xterm -display ace:2 &

Normally, the vncserver command chooses the first available display number and informs the user what that display is; however, the display number can be specified on the command line to override the calculated default:

vncserver :2

No visible changes occur when a new VNC server is started, because only a viewer connected to that display can actually see the resulting output from that server. Each time a connection is made to the VNC server, information on the connection is logged to the corresponding server log file found in the $HOME/.vnc directory of the user executing the server. The log file contents are discussed in the "Logging" section of this chapter.

VNC as a Service

Instead of running individual VNC servers, there are extensions available to provide support for VNC under the Internet super-daemons inetd and xinetd. More information on this configuration is available from the AT&T Laboratories Web site.

VNC AND MICROSOFT WINDOWS

The VNC server is also available for Microsoft Windows, providing an alternative to other commercial solutions and integration between heterogeneous operating systems and platforms. The VNC server under Windows runs as a separate application or as a service. Unlike the UNIX implementation, the Windows VNC server can only display the existing desktop of the PC console to the user. This is a limitation of Microsoft Windows, and not of WinVNC. WinVNC does not make the Windows system a multi-user environment: if more than one user connects to the Windows system at the same time, they will all see the same desktop.

Running WinVNC as a service is the preferred mode of operation because it allows a user to log on to the Windows system, perform his work, and then log off again. When WinVNC is running, an icon as illustrated in Exhibit 40-10 is displayed. When a connection is made, the icon changes color to indicate there is an active connection. The WinVNC Properties dialog shown in Exhibit 40-11 allows the WinVNC user to change the configuration of WinVNC. All the options are fully discussed in the WinVNC documentation.

With WinVNC running as a service, a user can connect from a remote system even when no user is logged on at the console. Changing the properties for WinVNC when it is running as a service has the effect of changing


Exhibit 40-10. WinVNC system tray icons.

Exhibit 40-11. The WinVNC Properties dialog.

the service configuration, also known as the default properties, rather than the individual user properties. However, running a nonservice mode WinVNC means a user must have logged in on the console and started WinVNC for it to work correctly. Exhibit 40-12 illustrates accessing WinVNC from a Linux system while in service mode. Aside from the specific differences for configuring the WinVNC server, the password storage and protocol-level operations are the same, regardless of the platform. Because there can be only one WinVNC server running


Exhibit 40-12. Accessing WinVNC in service mode.

at a time, connections to the server are on ports 5900 for the VNC viewer and 5800 for the Java viewer.

VNC AND THE WEB

As mentioned previously, each VNC server listens not only on the VNC server port but also on a second port to support Web connections using a Java applet and a Web browser. This is necessary to support Java because a Java applet can only make a connection back to the machine from which it was served. Connecting a Java-capable Web browser to the VNC server at:

http://ace:5802/

loads the Java applet and presents the log-in screen where the password is entered. Once the password is provided, the access controls explained earlier prevail. Once the applet has connected to the VNC server port, the user sees a display resembling that shown in Exhibit 40-13. With the Java applet, the applications displayed through the Web browser can be manipulated as if they were displayed directly through the VNC client or on the main display of the workstation.


Exhibit 40-13. A VNC connection using a Java-capable Web browser.

LOGGING

As with any network-based application, connection and access logs provide valuable information regarding the operation of the service. The log files from the VNC server provide similar information for debugging or later analysis. A sample log file resembles the following. The first part of the log always provides information on the VNC server, including the listening ports, the desktop name, the display, and the URL.

26/10/01 23:25:47 Xvnc version 3.3.3r2
26/10/01 23:25:47 Copyright © AT&T Laboratories Cambridge.
26/10/01 23:25:47 All Rights Reserved.
26/10/01 23:25:47 See http://www.uk.research.att.com/vnc for information on VNC
26/10/01 23:25:47 Desktop name 'X' (rhlinux.chare-cissp.com:1)
26/10/01 23:25:47 Protocol version supported 3.3
26/10/01 23:25:47 Listening for VNC connections on TCP port 5901
26/10/01 23:25:47 Listening for HTTP connections on TCP port 5801
26/10/01 23:25:47 URL http://rhlinux.chare-cissp.com:5801


The following sample log entry shows a connection received on the VNC server. We know the connection came in through the HTTPD server from the log entry. Notice that there is no information regarding the user who is accessing the system — only the IP address of the connecting system.

26/10/01 23:28:54 httpd: get '' for 192.168.0.2
26/10/01 23:28:54 httpd: defaulting to 'index.vnc'
26/10/01 23:28:56 httpd: get 'vncviewer.jar' for 192.168.0.2
26/10/01 23:29:03 Got connection from client 192.168.0.2
26/10/01 23:29:03 Protocol version 3.3
26/10/01 23:29:03 Using hextile encoding for client 192.168.0.2
26/10/01 23:29:03 Pixel format for client 192.168.0.2:
26/10/01 23:29:03   8 bpp, depth 8
26/10/01 23:29:03   true colour: max r 7 g 7 b 3, shift r 0 g 3 b 6
26/10/01 23:29:03   no translation needed
26/10/01 23:29:21 Client 192.168.0.2 gone
26/10/01 23:29:21 Statistics:
26/10/01 23:29:21   key events received 12, pointer events 82
26/10/01 23:29:21   framebuffer updates 80, rectangles 304, bytes 48528
26/10/01 23:29:21   hextile rectangles 304, bytes 48528
26/10/01 23:29:21   raw bytes equivalent 866242, compression ratio 17.850354
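Because these entries are plain text, a quick connection history can be pulled with standard tools. A hedged example follows; the log file naming (host:display.log under ~/.vnc) is the convention used by the vncserver script, so adjust the pattern to your installation:

grep "Got connection from client" $HOME/.vnc/*.log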

The log file contains information regarding the connection with the client, including the color translations. Once the connection is terminated, the statistics from the connection are logged for later analysis, if required. Because no authentication information is logged, the value of the log details for a security analysis is limited to knowing when and from where a connection was made to the server. Because many organizations use DHCP for automatic IP address assignment, and IP addresses may be spoofed, the actual value of knowing the IP address is reduced.

WEAKNESSES IN THE VNC AUTHENTICATION SYSTEM

We have seen thus far several issues that will concern the security professional. However, these can be alleviated, as discussed later in the chapter. There are two primary concerns with the authentication. The first is the man-in-the-middle attack, and the second is a cryptographic attack to uncover the password.


Exhibit 40-14. Attacker opens connection to VNC server. (Diagram: the attacker initiates a connection request to the VNC server; the legitimate VNC viewer is not yet involved.)

The Random Challenge

The random challenge is generated using the rand(3) function in the C programming language. The random number generator is seeded using the current system time. However, the 16-byte challenge is created by successive calls to the random number generator, decreasing the level of randomness on each call. (Each call contributes 1 byte, or 8 bits, of data.) This makes the challenge predictable and increases the chance an attacker could establish a session by storing all captured responses and their associated challenges. Keeping track of each challenge–response pair can be difficult and, as discussed later, not necessary.

The Man-in-the-Middle Attack

For the purposes of this illustration, we will make use of several diagrams to facilitate understanding this attack method. The server is system S, the client is C, and the attacker, or man in the middle, is A. (This discussion ignores the possibility that the network connection may be across a switched network, or that there are ways of defeating the additional security provided by switched network technology.)

The attacker A initiates a connection to the server, as seen in Exhibit 40-14. The attacker connects, and the two systems negotiate the protocols supported and what will be used. The attacker observes this by sniffing packets on the network. We know both the users at the client and server share the DES key, which is the password. The attacker does not know the key. The password is used for the DES encryption in the challenge–response.

The server then generates the 16-byte random challenge and transmits it to the attacker, as seen in Exhibit 40-15. Now the attacker has a session established with the server, pending authorization.


Exhibit 40-15. Server sends challenge to attacker. (Diagram: the VNC server sends the 16-byte challenge to the attacker.)

Exhibit 40-16. Attacker captures and replaces challenge. (Diagram: the server's 16-byte challenge to the legitimate viewer is intercepted and replaced by the attacker.)

At this point, the attacker simply waits, watching the network for a connection request to the same server from a legitimate client. This is possible because there is no timeout in the authentication protocol; consequently, the connection will wait until it is completed. When the legitimate client attempts a connection, the server and client negotiate their protocol settings, and the server sends the challenge to the client, as illustrated in Exhibit 40-16. The attacker captures the authentication request and changes the challenge to match the one provided to him by the server. Once the attacker has modified the challenge, he forges the source address and retransmits it to the legitimate client. As shown in Exhibit 40-17, the client then receives the challenge, encrypts it with the key, and transmits the response to the server. The server receives two responses: one from the attacker and one from the legitimate client. However, because the attacker replaced the challenge


Exhibit 40-17. Attacker and client send encrypted response. (Diagram: both the attacker and the legitimate viewer return an encrypted response to the VNC server.)

sent to the client with his own challenge, the response sent by the client to the server does not match the challenge. Consequently, the connection request from the legitimate client is refused. However, the response sent does match the challenge sent by the server to the attacker; and when the response received from the attacker matches the calculated response on the server, the connection is granted. The attacker has gained unauthorized access to the VNC server.

Cryptographic Attacks

Because the plaintext challenge and the encrypted response can both be retrieved from the network, it is possible to launch a cryptographic attack to determine the key used, which is the server's password. This is easily done through a brute-force or known-plaintext attack. A brute-force attack is the most effective, albeit time-consuming, method of attack. Both linear cryptanalysis, developed by Mitsuru Matsui, and differential cryptanalysis, developed by Biham and Shamir, are considered the two strongest analytic (shortcut) methods for breaking modern ciphers; and even these have been shown to be not very practical, even against single DES. The known-plaintext attack is the most advantageous method because a sample of ciphertext (the response) is available as well as a sample of the plaintext (the challenge). Publicly available software such as crack could be modified to try a dictionary and brute-force attack by repeatedly encrypting the challenge until a match for the response is found. The details of carrying out the attack are beyond the scope of this chapter.

Finding VNC Servers

The fastest method of finding VNC servers in an enterprise network is to scan for them on the network devices. For example, the popular nmap


scanner can be configured to scan only the ports in the VNC range to locate the systems running it.

[root@rhlinux chare]# nmap -p "5500,5800-5999" 192.168.0.1-5

Starting nmap V. 2.54BETA29 (www.insecure.org/nmap/)
All 201 scanned ports on gateway (192.168.0.1) are: filtered
Interesting ports on alpha (192.168.0.2):
(The 199 ports scanned but not shown below are in state: closed)
Port       State     Service
5800/tcp   open      vnc
5900/tcp   open      vnc

Interesting ports on rhlinux.chare-cissp.com (192.168.0.3):
(The 199 ports scanned but not shown below are in state: closed)
Port       State     Service
5801/tcp   open      vnc
5901/tcp   open      vnc-1

Nmap run completed -- 5 IP addresses (3 hosts up) scanned in 31 seconds
[root@rhlinux chare]#

There are other tools available to find and list the VNC servers on the network; however, nmap is fast and will identify not only whether VNC is available on the system at the default ports but also all VNC servers on that system.

Improving Security through Encapsulation

To this point we have seen several areas of concern with the VNC environment:

• There is no user-level authentication for the VNC server.
• The challenge–response system is vulnerable to man-in-the-middle and cryptographic attacks.
• There is no data confidentiality built into the client and server.

Running a VNC server provides the connecting user with the ability to access the entire environment at the privilege level of the user running the server. For example, assuming root starts the first VNC server on a UNIX system, the server listens on port 5901. Any connection to this port where the remote user knows the server password results in a session with root privileges. We have seen how it could be possible to launch a man-in-the-middle or cryptographic attack against the authentication method used in VNC.


Additionally, once the authentication is completed, all the session data is unencrypted and could, in theory, be captured, replayed, and watched by malicious users. However, because VNC uses a simple TCP/IP connection, it is much easier to add encryption support with Secure Sockets Layer (SSL) or Secure Shell (SSH) than for, say, a Telnet, rlogin, or X Window session.

Secure Shell (SSH) is likely the more obvious choice for most users, given that there are clients for most operating systems. SSH encrypts all the data sent through the tunnel and supports port redirection; thus, it can easily be used with VNC. Furthermore, while VNC uses a very efficient protocol for carrying the display data, additional benefits can be achieved at slower network link speeds because SSH can also compress the data.

There are a variety of SSH clients and servers available for UNIX, although if you need an SSH server for Windows, your options are very limited and may result in the use of a commercial implementation. However, SSH clients for Windows and the Apple Macintosh are freely available. Additionally, Mindbright Technology offers a modified Java viewer supporting SSL. Because UNIX is commonly the system of choice for operating a server, this discussion focuses on configuring VNC with SSH using a UNIX-based system. Similar concepts are applicable for Windows-based servers, once you have resolved the SSH server issue. However, installing and configuring the base SSH components are not discussed in this chapter.

Aside from the obvious benefit of using SSH to protect the data while it travels across the insecure network, SSH can compress the data as well. This is significant if the connection between the user and the server is slow, such as a PPP link. Performance gains are also visible on faster networks, because the compression can make up for the time it takes to encrypt and decrypt the packets on both ends.

A number of extensions are available to VNC, including support for connections through the Internet superserver inetd or xinetd. These extensions mean additional controls can be implemented using the TCP Wrapper library. For example, the VNC X Window server, Xvnc, has been compiled with direct support for TCP Wrappers. More information on configuring SSH, inetd, and TCP Wrappers is available on the VNC Web site listed in the "References" section of this chapter.
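As an illustration of the tunneling approach, the following commands forward a local port to a remote VNC server through SSH and then point the viewer at the tunnel. The host and account names are examples only, and the -C option enables the SSH compression discussed above:

ssh -C -L 5901:localhost:5901 user@vnchost
vncviewer localhost:1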


SUMMARY

The concept of thin-client computing will continue to grow and develop, pushing more and more processing to centralized systems. Consequently, applications such as VNC will be with the enterprise for some time. However, the thin-client application is intended to be small, lightweight, and easy to develop and transport. The benefits are obvious — a smaller footprint on the client hardware and network, including support for many more devices such as handheld PCs and cell phones, to name a few. However, the thin-client model has a price; and in this case it is security.

While VNC has virtually no security features in the protocol, add-on services such as SSH, TCP Wrappers, or xinetd provide extensions to the basic VNC services, adding access control lists based on allowable network addresses as well as data confidentiality and integrity. Using VNC within an SSH tunnel can provide a small, lightweight, and secured method of access to that system 1000 miles away from your office.

For enterprise or private networks, there are many advantages to using VNC because the protocol is smaller and more lightweight than distributing the X Window System on Microsoft Windows, and it has good response time even over a slower TCP/IP connection. Despite the security considerations mentioned in this chapter, there are solutions to address them; so you need not totally eliminate the use of VNC in your organization.

References

1. CORE SDI advisory: weak authentication in AT&T's VNC, http://www.uk.research.att.com/vnc/archives/2001-01/0530.html.
2. VNC Computing Home Page, http://www.uk.research.att.com/vnc/index.html.
3. VNC Protocol Description, http://www.uk.research.att.com/vnc/rfbproto.pdf.
4. VNC Protocol Header, http://www.uk.research.att.com/vnc/rfbprotoheader.pdf.
5. VNC Source Code, http://www.uk.research.att.com/vnc/download.html.

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.


Domain 7

Operations Security


The Operations Security domain in this volume contains one section and a single very important chapter. Its focus is on directory security, which has, up to now, been an area of security that is largely overlooked. The dilemma created by the lack of directory and file permission security, with the threats it poses, is explained; and potential solutions are reviewed, along with a discussion of several operating system utilities that can play a role in managing permissions.

The threats can impact all three of the goals of security — integrity, confidentiality, and availability. You must read Chapter 41 to see how these threats can be addressed. The point is made that, until we take the time to properly identify file and directory security permissions, we have not completely done our job of creating an overall network security strategy. In the world of information security, there is no place for incomplete work; you are either secure or insecure — and you cannot afford the latter.


Chapter 41

Directory Security

Ken Buszta, CISSP

Many organizations have invested in a wide variety of security technologies and appliances to protect their business assets. Some of these projects have taken their toll on the organization's IT budget in the form of time, money, and the number of personnel required to implement and maintain them. While each of these projects may be critical to an organization's overall security plan, IT managers and administrators continue to overlook one of the most fundamental and cost-effective security practices available — directory and file permission security. This chapter addresses the dilemma created by this issue and the threats it poses, offers potential solutions, and then discusses several operating system utilities that can aid the practitioner in managing permissions.

UNDERSTANDING THE DILEMMA

Today, people desire products that are quick to build and even easier to use, and the information technology world is no different. The public's clamor for products that support such buzzwords as user friendly and feature-enriched has been heard by a majority of the vendors. We can press one button to power on a computer, automate signing into an operating system, and have a wide variety of services automatically commence when we start up our computers. In the past, reviews referring to these as ease-of-use features have generally led to increased market share and revenues for these vendors. While the resulting products have addressed the public's request, vendors have failed to address the business requirements for these products, including:

• Vendors have failed to understand the growing business IT security model: protect the company's assets. Vendors have created operating systems with lax permissions on critical operating files and thereby placed the organization's assets at risk. By configuring the operating system permissions to conform to a stricter permission model, we could reduce the amount of time a practitioner spends in a reactive role and increase the time spent in proactive roles, such as performance management and implementing new technologies that continue to benefit the organization.


• Vendors fail to warn consumers of the potential pitfalls created by using the default installation configuration. Operating system file permissions are associated with user and group memberships and are among the largest pitfalls within the default installation. The default configuration permissions are usually excessive for the average user; and as a result, they increase the potential for unauthorized access to the system.
• Vendors fail to address the average user's lack of computer knowledge. Many engineers work very diligently to fully understand the operating system documentation that arrives with the software. Even with their academic backgrounds and experience, many struggle and are forced to invest in third-party documentation to understand the complex topics. How can vendors then expect the average user to decipher their documentation and configure their systems correctly?

THREATS AND CONSEQUENCES

For experienced security practitioners, it is essential to identify all potential threats to an environment and their possible consequences. When we perform a business impact analysis on data, we must take into consideration two threats that arise from our file and directory permissions — user account privilege escalation and group membership privilege escalation. User account privileges refer to the granting of permissions to an individual account. Group membership privileges refer to the granting of permissions to a group of individuals. Improperly granted permissions, whether they are overly restrictive or unnecessarily liberal, pose a threat to the organization. The security practitioner recognizes both of these threats as direct conflicts with the principle of least privilege. The consequences of these threats can be broken into three areas:

1. Loss of confidentiality. Much of our data is obtained and maintained through sensitive channels (i.e., customer relationships, trade secrets, and proprietary methodologies). A disgruntled employee with unnecessarily elevated privileges could easily compromise the system's confidentiality. Such a breach could result in a loss of client data, trust, market share, and profits.
2. Loss of integrity. Auditing records, whether they are related to the financial, IT, or production environments, are critical for an organization to prove to its shareholders and various government agencies that it is acting with the level of integrity bestowed upon it. Improper permissions could allow for accidental or deliberate data manipulation, including the deletion of critical files.
3. Loss of availability. If permissions are too restrictive, authorized users may not be able to access data and programs in a timely


manner. However, if permissions are too lenient, a malicious user may manipulate the data or change the permissions of others, rendering the information unavailable to personnel.

ADDRESSING THE THREAT

Before we can address the threats associated with file and directory permissions, we must address our file system structure. In this context, we are referring to the method utilized in the creation of partitions. File allocation tables (FAT or FAT32), the Microsoft NT File System (NTFS), and Network File Systems (NFS) are examples of the more commonly used file systems. If practitioners are heavily concerned about protecting their electronic assets, they need to be aware of the capabilities of these file systems. While we can set permissions in a FAT or FAT32 environment, these permissions can be easily bypassed. On the other hand, both NTFS and NFS allow us to establish the owners of files and directories. This ownership allows us to obtain tighter control over the files and directories. Therefore, InfoSec best practices recommend establishing and maintaining all critical data on non-FAT partitions.

Once we have addressed our file systems, we can address the permission threat. Consider the following scenario. Your team has been charged with creating the administration scheme for all of KTB Corporation's users and the directory and file permissions. KTB has a centralized InfoSec department that provides support to 10,000 end users. Conservative trends have shown that 25 new end users are added daily, and 20 are removed or modified due to terminations or job transitions. The scheme should take into account heavier periods of activity and be managed accordingly. What would be the best way to approach this dilemma?

As we have stated earlier, operating systems associate files with users and group memberships. This creates two different paths for the practitioner to manage permissions — by users or by groups. After applying some thought to the requirements, part of your team has developed Plan A to administer the permissions strictly with user accounts. In this solution, the practitioner provides the most scrutiny over the permissions because he or she is delegating permissions on an individual case-by-case basis. The team's process includes determining the privileges needed, determining the resources needed, and then assigning permissions to the appropriate users. The plan estimates that with proper documentation, adding a user and assigning appropriate permissions will take approximately five minutes, and a deletion or modification will take ten minutes. The additional time for deletions and modifications can be attributed to the research required to ensure all of the user permissions have been removed or changed. Under this plan, our administrator will need roughly five and a half hours of time each day to complete this primary function. This would


allow us to utilize the administrator in other proactive roles, such as implementation projects and metric collection.

Another part of your team has developed Plan B. Under this plan, the administrator will use a group membership approach. The team's process for this approach includes determining the privileges needed, determining the resources needed, examining the default groups to determine if they meet the needs, creating custom groups to address the unmet needs, assigning permissions to the appropriate groups, and then providing groups with the permissions required to perform their tasks. The team estimates that an administrator will spend approximately five minutes configuring each new user and only two minutes removing or modifying user permissions. The difference in the removal times is attributed to having to remove the user only from a group, as opposed to removing the user from each file or directory. Under Plan B, the administrator will need slightly over four hours to perform these primary duties.

Up until now, both plans could be considered acceptable by management. Remember: there was a statement in the scenario about "heavier periods of activity." What happens if the company goes through a growth spurt? How will this affect the availability of the administrator under each plan? On the other hand, what happens if the economy suffered a downturn and KTB was forced to lay off ten percent, or 2000 members, of its workforce? What type of time would be required to fulfill all of the additional tasking? Under Plan A, the administrator would require over 330 tech hours (or over eight weeks) to complete the tasking, while Plan B would only require 67 hours.

As one can see, individual user permissions might work well in a small environment, but not for a growing or large organization. As the number of users increases, the administration of the permissions becomes more labor intensive and sometimes unmanageable. It is easy for a practitioner to become overwhelmed in this scenario. However, managing through group memberships has demonstrated several benefits. First, it is scalable. As the organization grows, the administrative tasking grows but remains manageable. The second benefit is ease of use. Once we have invested the time to identify our resources and the permissions required to access those resources, the process becomes templated. When someone is hired into the accounts payable department, we can create the new user and then place the user into the accounts payable group. Because the permissions are assigned to the group and not the individual, the user will inherit the permissions of the group throughout the system. Likewise, should we need to terminate an employee, we simply remove that person from the associated group. (Note: The author realizes there will be more account maintenance involved, but it is beyond the scope of this discussion.)
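For the layoff scenario, the figures quoted above follow directly from the per-task estimates (assuming a 40-hour work week):

Plan A: 2,000 departures x 10 minutes = 20,000 minutes, or about 333 hours (roughly 8.3 weeks)
Plan B: 2,000 departures x  2 minutes =  4,000 minutes, or about 67 hours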


The key to remember in this method is for the practitioner to create groups that are based on either roles or rule sets. Users are then matched against these standards and placed in the appropriate groups. This method requires some planning on the front end by the practitioner; but over time, it will create a more easily managed program than administering by user. When developing your group management plan, remember to adhere to the following procedure:

• Determine the privileges needed.
• Determine the resources needed.
• Examine the default groups to determine if they meet the needs.
• Create custom groups to address unmet needs.
• Assign users to the appropriate groups.
• Give groups the privileges and access necessary to perform their tasks.
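On a Linux or UNIX host, the custom-group steps reduce to a handful of commands. The following is a hedged sketch; the group, account, and directory names are invented for illustration:

groupadd acctpay                 # create a custom group for accounts payable
usermod -G acctpay jsmith        # place an existing user in that group
chgrp acctpay /data/acctpay      # assign the group to its shared resource
chmod 770 /data/acctpay          # grant access to the owner and the group only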

Because each network's design is unique to the organization, careful consideration should be given to the use of custom groups. In 1998, Trusted Systems Services, Inc. (TSSI) addressed this very issue in its Windows NT Security Guidelines study for NSA Research. In this study, TSSI recommends removing most of the permissions applied to the public (Everyone) group except for Read and Execute. TSSI then suggested the formation of a custom group called Installers that would take on all of these stripped permissions. The purpose of this group is to provide the necessary permissions for technicians who are responsible for the installation of new applications. While this group would not enjoy the privileges of the administrators group, it is still an excellent example of supporting the principle of least privilege through group memberships.

ESTABLISHING CORRECT PERMISSIONS

When establishing the correct permissions, it is important to understand not only the need to correctly identify the permissions at the beginning of the process but also that the process is an ongoing cycle. Regular audits of the permissions should be performed, including at least once a year by an independent party. This will help address any issues related to collusion and help ensure the integrity of the system. Account maintenance is also a piece of the ongoing cycle. Whether an employee is transferred between departments or is terminated, it is essential for the practitioner to ensure that permissions are redefined for the affected user in a timely manner. Failure to act in such a manner could result in serious damage to the organization.

PERMISSIONS SETTINGS

For demonstration purposes in this chapter, we examine the permission settings of two of the more popular operating systems — Microsoft and


Linux. The practitioner will notice that these permissions apply to the server as well as to the client workstations.

Windows-based permissions are divided into two categories — file and directory. The Windows-based file permissions include Full Control, Modify, Read & Execute, Read, and Write. Each of these permissions consists of a logical group of special permissions. Exhibit 41-1 lists each file permission and specifies which special permissions are associated with that permission. Note that groups or users granted Full Control on a folder can delete any files in that folder, regardless of the permissions protecting the file.

Exhibit 41-1. Windows-based file permissions.

Special Permission              Full Control  Modify  Read & Execute  Read  Write
Traverse Folder/Execute File         x          x           x
List Folder/Read Data                x          x           x           x
Read Attributes                      x          x           x           x
Read Extended Attributes             x          x           x           x
Create Files/Write Data              x          x                              x
Create Folders/Append Data           x          x                              x
Write Attributes                     x          x                              x
Write Extended Attributes            x          x                              x
Delete Subfolders and Files          x
Delete                               x          x
Read Permissions                     x          x           x           x      x
Change Permissions                   x
Take Ownership                       x
Synchronize                          x          x           x           x      x

The Windows-based folder permissions include Full Control, Modify, Read & Execute, List Folder Contents, Read, and Write. Each of these permissions consists of a logical group of special permissions. Exhibit 41-2 lists each folder permission and specifies which special permissions are associated with it. Although List Folder Contents and Read & Execute appear to have the same special permissions, these permissions are inherited differently. List Folder Contents is inherited by folders but not files, and it should only appear when you view folder permissions. Read & Execute is inherited by both files and folders and is always present when you view file or folder permissions.

For the Linux-based operating systems, the file permissions of Read, Write, and Execute are applicable to both the file and directory structures. However, these permissions may be set on three different levels: User ID, Group ID, or the sticky bit. The sticky bit is largely used on publicly writeable directories to ensure that users do not overwrite each other's files.


Exhibit 41-2. Windows-based folder permissions.

Special Permission              Full Control  Modify  Read & Execute  List Folder Contents  Read  Write
Traverse Folder/Execute File         x          x           x                  x
List Folder/Read Data                x          x           x                  x              x
Read Attributes                      x          x           x                  x              x
Read Extended Attributes             x          x           x                  x              x
Create Files/Write Data              x          x                                                    x
Create Folders/Append Data           x          x                                                    x
Write Attributes                     x          x                                                    x
Write Extended Attributes            x          x                                                    x
Delete Subfolders and Files          x
Delete                               x          x
Read Permissions                     x          x           x                  x              x      x
Change Permissions                   x
Take Ownership                       x
Synchronize                          x          x           x                  x              x      x

Exhibit 41-3. Permission management utilities.

Utility    Operating Environment
cacls      Windows
chmod      Linux/UNIX
chown      Linux/UNIX
usermod    Linux/UNIX

When the sticky bit is turned on for a directory, users can have read and/or write permissions for that directory; but they can only remove or rename files that they own. The sticky bit on a file tells the operating system that the file will be executed frequently. Only the administrator (root) is permitted to turn the sticky bit on or off. In addition, the sticky bit applies to anyone who accesses the file.

PERMISSION UTILITIES

To effectively manage permissions, the practitioner should understand the various tools made available to them by the vendors. Both vendors provide a graphical user interface (GUI) and a command line interface (CL). While there are several high-profile third-party tools available, we will concentrate on the CL utilities provided by the operating system vendors. Exhibit 41-3 lists the various CL tools within the Windows- and Linux-based operating systems. A brief discussion of each utility follows.


You can use cacls to display or modify access control lists (ACLs) of files or folders in a Windows-based environment. This includes granting, revoking, and modifying user access rights. If you already have permissions set for multiple users or groups on a folder or file, be careful using the different parameters: an improper setting will remove all user permissions except for the user and permissions specified on the command line. It is therefore recommended that the practitioner use the edit parameter (/e) whenever using this command line utility. The several parameters associated with the cacls command can be viewed by simply entering cacls at the command prompt; entering cacls with a file specification displays the permissions set on each matching file in the current directory.

The chmod command is used to change the permission mode of a file or directory. The chown command changes the owner of a file specified by the file parameter to the user specified in the owner parameter. The value of the owner parameter can be a user ID or a log-in name found in the password file. Optionally, a group can also be specified. Only the root user can change the owner of a file. You can change the group only if you are the root user or own the file; if you own the file but are not the root user, you can change the group only to a group of which you are a member.

The usermod command is used to modify a user's log-in definition on the system. It changes the definition of the specified log-in and makes the appropriate log-in-related system file and file system changes. The groupmod command modifies the definition of the specified group by modifying the appropriate entry in the /etc/group file.

SPECIFIC DIRECTORY PERMISSIONS

As we consider directory permissions, there are three different types of directories — data directories, operating system directories, and application directories. While the permission standards may differ among these directory types, two common threads are shared by all of them: the system administrator group and the system itself must maintain inclusive permissions to each. (Note: The administrator's group does not refer to a particular operating system but to a resource level in general; we could easily substitute root for the administrator's title.) Because the administrator is responsible for the network, including the resources and data associated with it, he must maintain the highest permission levels attainable through the permission structure. The system refers to the computer and its requirements for carrying out the tasking entered by the user. Failure to provide this level of permission to the system could result in the unit crashing and a potential loss of data. Otherwise,


unless explicitly stated, all other parties will maintain no permissions in the following discussions.

The data directories may be divided into home directories and shared directories. Home directories provide a place on the network for end users to store the data they create or to perform their tasking. These directories should be configured to ensure adequate privacy and confidentiality from other network users and services. As such, the individual user assigned to the directory shall maintain full control of it. If the organization has defined a need for a dedicated user data manager resource, this individual should also have full control of the directory.

Shared directories are placed on the network to allow a group of individuals access to a particular set of data. These directories should not be configured with individual permissions but with group permissions. For example, accounts payable data may be kept in a shared directory; a custom group could be created and assigned the appropriate permissions. The user permissions are slightly different from home directories. Instead of providing the appropriate user with full control, it has been recommended to provide the group with Read, Write, Execute, and Delete. This will only allow the group to manipulate the data within the file; they cannot delete the file itself. Additionally, these permissions should be limited to a single directory and not passed along to the subdirectories.

The second division is the application directories. Security is often an afterthought in application design, especially in proprietary applications developed in-house. As unfortunate as this is, it is still a common practice, and we must be careful to check the directory permissions of any newly installed application — whether it is developed within the organization or purchased from a third party — because users are often given a full set of permissions in the directory structure. Generally, application users will not need more than read permissions on these directories, unless a data directory has been created within the application directory structure. If this is the case, the data directory should be treated according to the shared data directory permissions previously discussed. Additionally, the installers group should have the ability to implement changes to the directory structure; this would allow them to apply service patches and upgrades to the application.

The third division is the operating system directories. It is critical for the practitioner to have a proper understanding of the operating system directory and file structure before beginning any installation. Failure to understand the potential vulnerabilities, whether they are in the directory structure or elsewhere, will result in a weak link and an opportunity for the E-criminal. As stated earlier in the chapter, vendors often create default installations to be user friendly. This provides for the most lenient permissions


and the largest vulnerabilities to our systems. To minimize the vulnerability, establish read-only permissions for the average user. There will be situations in which these permissions are insufficient, and they should be dealt with on a case-by-case basis. Personnel who provide desktop and server support may fall into this category; in this case, create a custom group to support the specific activities and assign permissions equivalent to read and add. Additionally, all operating system directories should be owned by the administrator only. This will limit the amount of damage an E-criminal could cause to the system.

SENSITIVE FILE PERMISSIONS

Until now, we have only looked at directory permissions. While this approach addresses many concerns, it is only half of our battle. Several different file types within a directory require special consideration based on their roles. The particular file types are executable/binary compiled files, print drivers, scripting files, and help files.

Executable/binary files are dangerous because they direct the system or application to perform certain actions. Examples of these file extensions are DLL, EXE, BAT, and BIN. The average user should be restricted to read and execute permissions and should not have the ability to modify these files.

Print drivers are often run with a full permission set. Manipulation of these files could allow the installation of a malicious program that runs at the elevated privilege. The average user should be limited to a read and execute permission set.

Improperly set permissions on scripting files, such as Java and ActiveX, could allow for two potential problems. By providing elevated privileges on these files, the user has the ability to modify them to place a call to run a malicious program or to promote program masquerading. Program masquerading is the act of having one program run under the pretext that it is actually another program. For these reasons, these files should also have a read and execute permission set.

Help files often contain executable code. To prevent program masquerading and other spoofing opportunities, these files should not be writeable.

MONITORING AND ALERTS

After we have planned and implemented our permission infrastructure, we need to establish a methodology to monitor and audit it. This is key to ensuring that unauthorized changes are identified in a timely manner and to limiting the potential damage that can be done to


our networks. This process will also take careful planning and administration. The practitioner could implement a strategy that would encompass all of the permissions, but such a strategy would become time-consuming and ineffective. The more effective approach is to identify the directories and files that are critical to business operations. Particular attention should be given to sensitive information, executables that run critical business processes, and system-related tools.

While designing the monitoring process, practitioners should be keenly aware of how they will be notified in the event a monitoring alarm is activated and what type of actions will be taken. At a minimum, a log entry should be created for each triggered event. Additionally, a mechanism should be in place to notify the appropriate personnel of these events; the mechanism may be in the form of an e-mail, pager alert, or telephone call. Unfortunately, not all operating systems have these features built in, so the practitioner may need to invest in a third-party product. Depending upon the nature of the organization's business, the practitioner may consider outsourcing this role to a managed services partner. These partnerships are designed to quickly identify a problem area for the client and implement a response in a very short period.

Once a response has been mounted to an alert, it is also important for the team to review the events leading up to the alert and attempt to minimize the event's recurrence. One can take three definitive actions as a result of these reviews:

1. Review the present standards and make changes accordingly. If we remember that security is a business enabler and not a disabler, we understand that security must be flexible. Our ideal strategy may need slight modifications to support the business model. Such changes should be documented for all parties to review and approve and to provide a paper trail to help restore the system in the event of a catastrophic failure.
2. Educate the affected parties. Often, personnel may make changes to the system without notifying everyone. Of course, those who were not notified are the ones affected by the changes. The practitioner may avoid a repeat of the same event by educating the users on why a particular practice is in place.
3. Escalate the issue. Sometimes, neither educating users nor modifying standards is the correct solution. The network may be under siege from either an internal or external source, and it is the practitioner's duty to escalate these issues to upper management and possibly law enforcement officials. For further guidance on handling this type of scenario, one should contact one's legal department and conduct further research on the CERT and SANS Web sites.
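The following is an illustrative sketch only (not from the chapter) of the minimum response described above: record a baseline for a small set of business-critical files, log a warning when their permissions or ownership change, and leave e-mail or pager notification to be layered on top. The paths, file names, and scheduling are assumptions; in practice, native operating system auditing or a third-party product would usually fill this role.

```python
# Hypothetical baseline-and-compare monitor for permission changes on
# business-critical files. Paths and file names below are examples only.
import json
import logging
import os
import stat

BASELINE_FILE = "perm_baseline.json"                    # assumed location
CRITICAL_PATHS = ["/etc/passwd", "/srv/accounting"]     # example targets

logging.basicConfig(filename="perm_monitor.log", level=logging.INFO)

def snapshot(paths):
    """Record the mode, owner, and group of each monitored path."""
    return {p: {"mode": stat.filemode(os.stat(p).st_mode),
                "uid": os.stat(p).st_uid,
                "gid": os.stat(p).st_gid} for p in paths}

def check(paths):
    """Compare the current state to the stored baseline and log any change."""
    current = snapshot(paths)
    if not os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE, "w") as f:
            json.dump(current, f, indent=2)       # first run: create the baseline
        return
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for path, state in current.items():
        if baseline.get(path) != state:
            # A log entry is the minimum response; e-mail or pager
            # notification would be triggered from the same event.
            logging.warning("Permission change on %s: %s -> %s",
                            path, baseline.get(path), state)

if __name__ == "__main__":
    check(CRITICAL_PATHS)
```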


AUDITING

Auditing will help ensure that file and directory systems adhere to the organization's accepted standards. While an organization may perform regular internal audits, it is recommended that the file and directory structure be audited by an external company annually. This process will help validate the internal results and limit any collusion that may be occurring within the organization.

CONCLUSION

While most businesses are addressing the market's calls for user-friendly and easy-to-use operating systems, they are overlooking the security needs of most of the corporate infrastructure. This has led to unauthorized accesses to sensitive file structures and, as a result, is placing organizations in a major dilemma. Until we take the time to properly identify the file and directory security permissions that best fit our organization's business charter, we cannot begin to feel confident in our overall network security strategy.

ABOUT THE AUTHOR

Ken Buszta, CISSP, is Chief Information Security Officer for the City of Cincinnati, Ohio, and has more than ten years of IT experience and six years of InfoSec experience. He served in the U.S. Navy's intelligence community before entering the consulting field in 1994. Should you have any questions or comments, he can be reached at [email protected].



Domain 8

Business Continuity Planning


This domain contains two chapters that provide special insight into this availability-oriented area of information security. The first focuses on the changing requirements for business continuity planning, and the other on ideas for implementing business continuity planning.

The first chapter recognizes the rather frustrating fact that continuity planning is an ever-changing and evolving practice. It discusses the primary factors that must be considered by continuity planning professionals who are trying to advance their skills and approaches in order to stay current in view of rapidly evolving, almost daily events. Some lessons learned as a result of the 9/11 terrorist attacks are reviewed with the purpose of avoiding the same mistakes or inadequate assumptions. One of the most obvious shortfalls was the lack of attention paid to training, education, and awareness. Continuous availability and high availability are today's key terms, and how to achieve this level of availability is the problem addressed. The key point, in summary, is that understanding the evolution and future focus of continuity planning as it supports information security responsibilities will be the key to success. This chapter shows the way to that success goal.

The second chapter addresses thoughts about a collaborative approach to business continuity planning. The many factors involved in choosing the best course of action really force the appropriate use of all organizational resources, and the available resources vary greatly from organization to organization. How to build a workable continuity plan is described in detail. The chapter then goes on to describe the challenge of how to accomplish plan implementation. This process requires close collaboration with many organizational functions and key people. It is not an easy or inexpensive project, but when completed it can provide the element of insurance necessary for the organization to survive. In the final analysis, it is hoped that the business continuity plan is never executed; but if it is needed, it is certainly worth doing right.



Chapter 42

The Changing Face of Continuity Planning

Carl Jackson, CISSP, CBCP

To one degree or another, the information security professional has always had responsibility for ensuring the availability and continuity of enterprise information. While this is still the case, specialization within the availability discipline has resulted in the growth of the continuity planning (CP) profession and the evolution of many former information security specialists into full-time continuity planners. Aside from the growth of and reliance upon E-business by most major worldwide companies, the events of September 11, 2001, and even the Enron meltdown have served to heighten awareness of the need for increased planning and advanced arrangements for ensuring availability. The reality is that continuity planning has a changing face, and it is simply no longer recovery planning as usual. This chapter focuses on some of the factors to be considered by continuity planning professionals who must advance their skills and approaches to keep up with swiftly evolving current events.

REVOLUTION

Heraclitus once wrote, "There is nothing permanent except change." The continuity planning profession has evolved from the time when disaster recovery planning (DRP) for mainframe data centers was the primary objective. Following the September 11 attacks and the subsequent calls for escalating homeland security in the United States, the pace of change for the CP profession has increased dramatically from just a few months prior to the attacks.

In looking back, some of us who have been around awhile may reminisce about the good ol' days when identification of critical applications was the order of the day. These applications could be easily plucked from a production environment to be plopped down in a hot site somewhere, all in the name of preventing denial of access to information assets. In retrospect, things were so simple then — applications stood alone, hard-wired coax connectivity was limited and limiting, centralized change control ruled, physical security for automated spaces solved a multitude of



sins, and there were fewer than half a dozen vendors out there that could provide assistance. Ah, those were the days!

The kind of folks who performed disaster recovery tasks in those times were fairly technical and were usually associated with the computer operations side of the house. They tended to understand applications and disk space and the like, and usually began their disaster recovery planning projects by defining, or again redefining, critical applications. Of course, the opinion of the computer operations staff about what constituted a critical application and that of the business process owner many times turned out to be two different things.

Of late, especially since September 11, we have seen the industry shift from a focus strictly on computer operations and communications recovery planning to one where business functionality and processes are considered the start and endpoint for proper enterprisewide availability. This is the point where many continuity planners began to lose their technical focus to concentrate on understanding business process flow and functional interdependencies so that they could map them back to supporting resources that included IT and communications technologies. Some of us simply lost our technological edge, due to the time it took to understand business processes and interdependencies, but we became good at understanding business value-chain interrelationships, organizational change management, and process improvement/reengineering.

Exhibit 42-1 depicts the evolution of industry thinking relative to the passage from technical recovery to business process recovery. It also reflects the inclination of continuity planners to again focus on technologies in support of Internet-based business initiatives. As organizations move operations onto the Web, they must ensure the reliability and availability of Web-based processes and technologies. This includes the assurance that trading partners, vendors, customers, and employees have the ability to access critical B2B (business-to-business) and B2C (business-to-customer) resources. Recent security surveys (sources include Gartner Research, IDC, and Infonetics) suggest that the worldwide marketplace for Internet security solutions will reach somewhere around $20 billion by 2004. Included within the scope of the security solutions marketplace are myriad products that facilitate detection, avoidance, mitigation of, and recovery from adverse events.

THE LESSONS OF SEPTEMBER 11

For the past decade or so, continuity planners have been shifting the emphasis to business process planning as the starting point for any meaningful continuity planning exercise. The pace has accelerated within the


Exhibit 42-1. Evolution from technical recovery to business process recovery.

• Mainframe disaster recovery planning (technical focus): application recovery priority; hot site/cold site focus; no or little business impact assessment; little or no user involvement; limited testing; data center management responsible.
• Business continuity planning (business function focus): client/server and distributed processing; business unit management involvement; technology recovery linked to business function needs; crisis management infrastructure tied into business continuity planning functions; shared responsibilities between IT and business owners.
• Continuity planning (time-critical business process focus): the availability business process incorporates IT continuity, business operations continuity, crisis management strategies, and continuous or high availability of Web-based technologies in support of 24/7 business imperatives; primary responsibility of business owners with IT support.

The transitions between these stages reflect significant disaster recovery process reengineering and continuity planning process improvement.

past five years, with E-business considerations driving shorter and shorter recovery time windows. But something happened following the September 11 attacks in the United States that appeared to redouble the speed of this shifting focus for many of us.

We have all lived through much since the attacks of September 11. Our horror turned to shock and then grief for those souls lost on that day, and it continues through the military and related activities the world has undertaken in response to these atrocities. As continuity planning professionals, we have a unique view of events such as these because our careers so closely relate to mitigation of and recovery from disruptions and disasters.

Call to Arms

The September 11 attacks raised awareness of the need for appropriate recovery planning in the United States and indeed the rest of the world. The U.S. Attorney General's call for companies to revisit their security programs in light of the terrorist attacks on U.S. properties should also serve to put executive management on notice — as if they needed any more incentives — that it may be time to rethink investments in their security and continuity planning programs.

There are no signs that the potential for disruptions caused by terrorist activities will be over anytime soon. In fact, it was recently made public


that the U.S. Government has activated its own continuity plans by establishing off-site operations for all three branches of government at secret locations outside of the Washington, D.C. area. These contingency plans were originally prepared during the Eisenhower administration in anticipation of nuclear attack during the Cold War, but they were thankfully never needed — until now. It is more than interesting to think that these long-prepared contingency plans had to be activated some 50 years later! I wonder if the folks who suggested that these plans be developed in the first place had to worry about cost justification or return on investment?

A Look at the Aftermath

The extent of the damage to the WTC complex alone was staggering. Even six months following the attacks, companies displaced by them continue to struggle. The Wall Street Journal reported on March 15, 2002, that many of the large companies impacted by September 11 remain either undecided about moving back or have decided not to move back into the same area (see Exhibit 42-2). The graphic illustrates the destroyed and damaged buildings and lists some of the large companies located there. The event displaced well over 10,000 employees of the hundreds of companies involved, and it is estimated that in excess of 11 million square feet of space were impacted.

There were many lessons learned from these tragic events. Two areas stick in my mind as most significant: first, the bravery of the people who reacted to the event initially and in the short period that followed; and second, the people on the many recovery teams who had to execute under duress to help their organizations survive. It was the people who made it all happen, not just the hot sites or the extra telecommunications circuits. That lesson, above all, must be remembered and used as a building block of future leading practices.

The Call for Homeland Security

From the mailroom to the executive boardroom, calls abound for increased preparation around your organization's responsibility in ensuring homeland security. Following September 11, continuity planners must be able to judge the risk of similar incidents within their own business environments. This includes ensuring that continuity planning considerations are built into the company's policies for dealing with homeland security. Planners cannot neglect homeland security issues for their own organization, but they must also now be aware of the preparations of public- and private-sector partner organizations. Once understood, planners must interleave these external preparations with their own continuity and crisis management planning actions. In addition, continuity planners may want to


Exhibit 42-2. Plans to move back to Ground Zero. (Source: The Wall Street Journal, March 15, 2002.)




consider adoption, for crisis management purposes, of an alert system similar to the one offered by the Office of Homeland Security (see Exhibit 42-3).

Exhibit 42-3. Alert system offered by the Office of Homeland Security. (Source: Office of Homeland Security.)

RED ALERT: The Bush administration unveiled a color-coded, five-level warning system for potential terrorist attacks. In the future, Attorney General Ashcroft will issue higher states of alert for regions, industries, and businesses that may be the specific targets of terrorists.
• Severe (Red): Severe risk of terrorist attacks
• High (Orange): High risk of terrorist attacks
• Elevated (Yellow): Significant risk of terrorist attacks
• Guarded (Blue): General risk of terrorist attacks
• Low (Green): Low risk of terrorist attacks

The Importance of Education, Training, and Awareness

The results of the 2000/2001 CPM/KPMG Business Continuity Study Benchmark Report show that many companies have paid dismal attention to training, education, and awareness. When asked, "Do employees get sufficient disaster recovery/business continuity planning training?", 75 percent of those answering the survey responded no for the year 1999, and 69.5 percent said no for the year 2000. Unfortunately, I doubt that these percentages have improved to any significant degree, even since September 11.

People Must Be the Focus

People are important! Whether it is a life safety issue or their participation in the recovery after the event, it is people who are most impacted by the disruption, and it is people who will have to recover following the disruption. All one has to do is look at case studies of the companies that had to recover following the attacks on the World Trade Center. For instance, in one sad case, all of the people who had participated in the most recent hot site test perished in the attack.


Planners simply should not allow haphazard education, training, and awareness programs to continue. These programs must be designed to teach people how to protect themselves and the organization and to periodically refresh the message. The single largest lesson that must be learned from September 11 is that people must be the focus of all crisis management and continuity planning activities — not technology. There is absolutely no question that technologies and their recovery requirements are vital, but technologies and processes are things that can be reconstructed or replaced. People cannot, as demonstrated by the loss of approximately 3000 souls on September 11.

What about Executive Protection and Succession Plans?

Although executive protection and succession planning are not typically considered part of the continuity planning responsibility, the events of September 11 call attention to the need for organizational management to revisit dated executive protection and succession plans and to test enterprise crisis management plans by challenging old assumptions based upon pre-September 11 thinking.

Business Process Continuity versus IT DRP

Another lesson learned was that, while many companies impacted by the events were able to recover automated operations, the vast majority of them were seriously disabled from a business process/operations standpoint. Their inability to physically transport people and supplies — given aircraft groundings — to off-site locations suitable for recovering business processes and supporting infrastructures (i.e., mail room operations, client/server configurations, purchasing, HR, back-office operations, etc.) illustrated that the practice of only preparing for IT recovery had resulted in a serious shortfall of preparations.

Security and Threats Shifting

There were many, many more companies seriously impacted than those located directly in the WTC buildings. Businesses all over the country, and indeed the world, that had critical dependencies upon the WTC-based companies were also injured by the event. Subsequent severe travel restrictions and the resulting economic downturn affected countless other organizations. Our highly interconnected world is much different than our world of just a few short non-Internet years ago. There are no islands in the global economy; and because the United States is the largest economic engine in that financial system, and because each U.S. company plays a role in that engine, it seems rather shortsighted for major companies not to be making availability-related investments. Our risks have changed and shifted focus in addition to the ones mentioned above.


Others include:

• Nuclear power plant security. Recent media reports indicate that the U.S. Nuclear Regulatory Commission is unsure how many foreign nationals or security guards are employed at nuclear reactors and does not require adequate background checks of nuclear reactor employees that would uncover terrorist ties. There are 21 U.S. nuclear reactors located within five miles of an airport, 96 percent of which were not designed to withstand the crash of a small airplane.
• Airport security. It was recently reported (Fox News, March 25, 2002) that, according to a confidential February 19, 2002, Transportation Department memo, the department ran security tests at 32 airports around the country and continued to find security lacking.
• Border security. There is focused attention on the increased security needs and staffing levels of border security staff along both the Canadian and Mexican borders of the United States, and President Bush is calling for consolidation of the INS and Customs Department.
• Food and water supply security. In connection with concerns over bioterrorism, Homeland Security is calling for consolidation of rival U.S. agencies responsible for food and water safety.
• Internet security. The U.S. Government is attempting to persuade industry to better protect the Internet from threats of cyber-crime and cyber-terrorism.
• Travel security. Key personnel residences and travel to unstable international destinations must be monitored and controlled appropriately.

Reassess Risk

As enterprise risk is assessed, through either traditional risk analysis/assessment mechanisms or business impact assessments, understanding the potential impacts from these expanded threats is essential and prudent. We must consider the impact of functionality loss that may occur either inside or outside our walls. These potential impacts include the direct ones, like those listed above, and those that might disrupt an external entity that our organization relies upon — a supply-chain partner, key vendor, outsourcer, parent or subsidiary company, etc. Now is the time to go back and seriously consider the last time your organization performed a comprehensive risk assessment/business impact assessment, and think about updating it. Organizations change over time and should be reevaluated frequently.

THE LESSONS OF ENRON

Speaking of reliance on key external relationships, the Enron situation and its repercussions among supply-chain partners, outsourcers, vendors, and supplier relationships continue to ripple through several industry


groups. Understanding your organization's reliance upon primary supply-chain partners and assorted others is crucial in helping you anticipate the breadth and scope of continuity and crisis management planning efforts — if for no other reason than for you to say that these issues were considered during preparations and not merely ignored. Granted, there is no question that, given the global scale of the Enron-related events, it would have been challenging for those with internal continuity planning responsibilities to anticipate the extent of the impacts and to appropriately prepare for all contingencies. But in hindsight, it will be incumbent upon those who have responsibility for preparing continuity and crisis management plans to be at least aware of the potential of such events and be prepared to demonstrate some degree of due diligence.

COMPUTER FORENSIC TEAMS

The composition of crisis management and continuity planning teams is changing as well. Virus infestations, denial-of-service attacks, spoofing, spamming, content control, and other analogous threats have called for the inclusion of computer forensic disciplines into the development of continuity planning infrastructures. Forensic preparations include understanding the procedures necessary to identify, mitigate, isolate, investigate, and prosecute following such events. It is necessary to incorporate enterprise forensic teams, legal resources, and public relations into continuity planning and crisis management response teams.

THE INTERNET AND ENTERPRISE CONTINUOUS AVAILABILITY

With growing Internet business process reliance on supporting technologies as the motivating force, continuity planners must once again become conversant and comfortable with working in a technical environment — or at least comfortable enough to ensure that the right technical or infrastructure personnel are involved in the process. The terminology currently used to describe this Internet resource availability focal point is continuous or high availability.

Continuous availability (CA) is a building-block approach to constructing resilient and robust technological infrastructures that support high-availability requirements. In preparing your organization for high availability, focusing on automated applications is only a part of the problem. On this topic, Gartner Research writes:

Replication of databases, hardware servers, Web servers, application servers, and integration brokers/suites help increase availability of the application services. The best results, however, are achieved when, in addition to the reliance on the system's infrastructure, the design of the application itself incorporates considerations for continuous availability. Users looking to achieve continuous availability


for their Web applications should not rely on any one tool but should include the availability considerations systematically at every step of their application projects.

— Gartner Group RAS Services COM-12-1325, 29 September 2000

Implementing CA is easier said than done. The key to achieving 24/7 or near-24/7 availability begins with the process of determining business process owner needs, vulnerabilities, and risks to the network infrastructure (e.g., Internet, intranet, extranet, etc.). As part of considering implementation of continuous availability, continuity planners should understand:

• The resiliency of network infrastructures as well as the components thereof
• The capability of their infrastructure management systems to handle network faults
• The network configuration and change control practices
• The ability to monitor network availability
• Infrastructure single points of failure
• The ability of individual network components to handle capacity requirements, among others

Among the challenges facing continuity planners in CA are:

• Ensuring that time-critical business processes are identified within the context of the organization's Web-based initiatives
• Making significant investments in terms of infrastructure hardware, software, management processes, and consulting
• Obtaining buy-in from organizational management in the development, migration, and testing of CA processes
• Keeping continuous availability processes in line with enterprise expectations for their organization's continuity and crisis management plans
• Ensuring CA processes are subjected to realistic testing to assure their viability in an emergency

FULL-SCOPE CONTINUITY PLANNING BUSINESS PROCESS

The evolution from preparing disaster recovery plans for mainframe data centers to performing full-scope continuity planning and, of late, to planning for the continuous operations of Web-based infrastructure begs the question of process improvement. Reengineering or improving continuity planning involves not only reinvigorating continuity planning processes but also ensuring that Web-based enterprise needs and expectations are identified and met through implementation of continuous availability disciplines. Today, the continuity planning professional must


possess the necessary skill set and expertise to be able to effectively manage a full-scope continuity planning environment that includes:

• IT continuity planning. This skill set addresses the recovery planning needs of the organization's IT infrastructures, including centralized and decentralized IT capabilities, and includes both voice and data communications network support services. This process includes:
— Understanding the viability and effectiveness of off-site data backup capabilities and arrangements
— Executing the most efficient and cost-effective recovery alternative, depending upon recovery time objectives of the IT infrastructure and the time-critical business processes it supports
— Development and implementation of a customized IT continuity planning infrastructure supported by appropriately documented IT continuity plans for each primary component of the IT infrastructure
— Execution of IT continuity planning testing, maintenance, awareness, training, and education programs to ensure long-term viability of the plans, and development of appropriate metrics that can be used to measure the value-added contribution of the IT infrastructure continuity plans to the enterprise people, process, technologies, and mission
• Business operations planning. This skill set addresses recovery of an organization's business operations (i.e., accounting, purchasing, etc.) should they lose access to their supporting resources (i.e., IT, communications network, facilities, external agent relationships, etc.). This process includes:
— Understanding the external relationships with key vendors, suppliers, supply-chain partners, outsourcers, etc.
— Executing the most efficient and cost-effective recovery alternative, depending upon recovery time objectives of the business operations units and the time-critical business processes they support
— Development and implementation of a customized business operations continuity plan supported by appropriately documented business operations continuity plans for each primary component of the business units
— Execution of business operations continuity plan testing, maintenance, awareness, training, and education programs to ensure long-term viability of the plans
— Development of appropriate metrics that can be used to measure the value-added contribution of the business operations continuity plans to the enterprise people, processes, technologies, and mission
• Crisis management planning. This skill set addresses development of an effective and efficient enterprisewide emergency/disaster response capability. This response capability includes forming appropriate


management teams and training their members in reacting to serious company emergency situations (i.e., hurricane, earthquake, flood, fire, serious hacker or virus damage, etc.). Key considerations for crisis management planning include identification of emergency operations locations for key management personnel to use in times of emergency. Also of importance is the structuring of crisis management planning components to fit the size and number of locations of the organization (many small plans may well be better than one large plan). As the September 11 attacks fade somewhat from recent memory, let us not forget that people responding to people helped save the day; and we must not ever overlook the importance of time spent on training, awareness, and education for those folks who will have responsibilities related to continuity following a disruption or disaster. As with IT and business operations plans, testing, maintenance, and development of appropriate measurement mechanisms are also important for long-term viability of the crisis management planning infrastructure.
• Continuous availability. This skill set acknowledges that the recovery time objective (RTO) for recovery of infrastructure support resources in a 24/7 environment has shrunk to zero time. That is to say that the organization cannot afford to lose operational capabilities for even a very short period of time without significant financial (revenue loss, extra expense) or operational (customer service, loss of confidence) disruptions. CA focuses on maintaining the highest possible uptime of Web-based support infrastructures, of 98 percent and higher.
• The importance of testing. Once developed and implemented, the individual components of the continuity plan business process must be tested. What is more important is that the people who must participate in the recovery of the organization must be trained and made aware of their roles and responsibilities. Failure of companies to do this properly was probably the largest lesson learned from the September 11 attacks. Continuity planning is all about people!
• Education, training, and awareness. Renewed focus on practical personnel education, training, and awareness programs is called for now. Forming alliances with other business units within your organization with responsibility for awareness and training, as well as utilizing continuity planning and crisis management tests and simulations, will help raise the overall level of awareness. Repetition is the key to ensuring that, as personnel turnover occurs, there will always be a suitable level of understanding among remaining staff.
• The need to measure results. The reality is that many executive management groups have difficulty getting to the bottom of the value-add question. What degree of value does continuity planning add to the enterprise people, processes, technology, and mission? Great question. Many senior managers do not seem to be able to get beyond the financial justification barrier. There is no question that justification of investment


in continuity plan business processes based upon financial criteria is important, but it is not usually the financial metrics that drive recovery windows. It is the customer service and customer confidence issues that drive short recovery time frames, which are typically the most expensive; financial justifications typically only provide support for them. Implementation of an appropriate measurement system is crucial to success. Companies must measure not only the financial metrics but also how the continuity planning business process adds value to the organization's people, processes, technologies, and mission. These metrics must be both quantitative and qualitative. Focusing on financial measures alone is a lopsided mistake!

CONCLUSION

The growth of the Internet and E-business, corporate upheavals, and the tragedy of September 11 and subsequent events have all contributed to the changing face of continuity planning. We are truly living in a different world today, and it is incumbent upon the continuity planner to change to fit the new reality. Continuity planning is a business process, not an event or merely a plan to recover. Included in this business process are highly interactive continuity planning components that exist to support time-critical business processes and to sustain one another. The major components include planning for:

• IT and communications (commonly referred to as disaster recovery planning)
• Business operations (commonly referred to as business continuity planning)
• Overall company crisis management
• And, finally, for those companies involved in E-business — continuous availability programs

In the final analysis, it is incumbent upon continuity planning professionals to stay constantly attuned to the changing needs of our constituents, no matter the mission or processes of the enterprise. The information security and continuity planning professional must possess the necessary skill set and expertise to effectively manage a full-scope continuity planning environment. Understanding the evolution and future focus of continuity planning as it supports our information security responsibilities will be key to future successes. As Jack Welch has said, "Change before you have to."


ABOUT THE AUTHOR

Carl Jackson, CISSP, CBCP, brings more than 25 years of experience in the areas of business continuity planning, information security, and IT internal control reviews and audits. As the vice president, continuity planning, for QinetiQ-Trusted Information Management Corporation, he is responsible for the continued development and oversight of QinetiQ-TIM (U.S.) methodologies and tools in the enterprisewide business continuity planning arena, including network and E-business availability and recovery.



Chapter 43

Business Continuity Planning: A Collaborative Approach

Kevin Henry, CISA, CISSP

Business continuity planning (BCP) has received more attention and emphasis in the past year than it has probably had cumulatively during the past several decades. This is an opportune time for organizations to leverage this attention into adequate resourcing, proper preparation, and workable business continuity plans. Business continuity planning is not glamorous, not usually considered to be fun, and often a little mundane. It can have all the appeal of planning how to get home from the airport at the end of an all-too-short vacation.

This chapter examines some of the factors involved in setting up a credible, useful, and maintainable business continuity program. From executive support through good leadership, proper risk analysis, and a structured methodology, business continuity planning depends on key personnel making business-oriented and wise decisions, involving user departments and supporting services.

Business continuity planning can be defined as preparing for any incident that could affect business operations. The objective of such planning is to maintain or resume business operations despite the possible disruption. BCP is a pre-incident activity, working closely with risk management to identify threats and risks and to reduce the likelihood or impact of any of these risks occurring. Many such incidents develop into a crisis, and the focus of the effort then turns to crisis management. It is at this time that the value of prior planning becomes apparent.



The format of this chapter is to outline the responsibilities of information systems security personnel and information systems auditors in the BCP process. A successful BCP program is one that will work when needed and is built on a process of involvement, input, review, testing, and maintenance. The challenge is that a BCP program is developed in times of relative calm and stability, yet it needs to operate in times of extreme stress and uncertainty. As we look further into the role of leadership in this chapter, we will see the key role that the leader has in times of crisis and the importance of the leader's ability to handle the extreme stress and pressures of a crisis situation. A significant role of the BCP program is to develop a trained and committed team to lead, manage, and direct the organization through the crisis.

Through this chapter we will examine the aspects of crisis development, risk management, information gathering, and plan preparation. We will not go into as much detail about the plan development framework because this is not normally a function of IT or security professionals; yet understanding the role and intent of the business continuity program coordinator will permit IT professionals to provide effective and valued assistance to the BCP team.

So what is the purpose of the BCP program? It is to be prepared to meet any potential disruption to a business process with an effective plan, the best decisions, and a minimization of interruption. A BCP program is developed to prepare a company to recover from a crisis — an event that may have serious impact on the organization, up to and including threatening the survival of the organization itself. Therefore, BCP is a process that must be taken seriously, must be thorough, and must be designed to handle any form of crisis that may occur. Let us therefore look at the elements of a crisis so that our BCP program will address it properly.

THE CRISIS

A crisis does not happen in isolation. It is usually the combination of a number of events or risks that, while not catastrophic in themselves, may have catastrophic results in combination. It has sometimes been said that it takes three mistakes to kill you, and any interruption in this series of events may prevent the catastrophe from taking place. These events can be the result of preexisting conditions or weaknesses that, when combined with the correct timing and business environment, initiate the crisis. This can be called a catalyst or crisis trigger. Once the crisis has begun, it evolves and grows, often impacting other areas beyond its original scope and influence. This growth of the crisis is the most stressful period for the people and the organization. This is the commencement of the crisis management phase and the transition from a


preparatory environment to a reactionary environment. Decisions must be made on incomplete information amidst demands and pressure from management and outside groups such as the media and customers. An organization with an effective plan will be in the best position to survive the disaster and recover; however, many organizations find that their plan is not adequate and are forced to make numerous decisions and consider plans of action not previously contemplated. Unfortunately, most people find that Rudin's Law begins to take effect:

When a crisis forces choosing among alternatives, most people will choose the worst possible one.
— Rudin's Law

Let us take a closer look at each of these phases of a crisis and how we can ensure that our BCP program addresses each phase in an effective and timely manner.

Preexisting Conditions

In a sporting event, the opposition scores; and when reviewing the videotapes later, the coach can clearly see the defensive breakdowns that led to the goal. A player out of position, a good "deke" by the opponent (used in hockey and soccer when an opposing player fools the goalie into believing that he is going in one direction when he actually goes in another, thereby pulling the goaltender out of position and potentially setting up a good opportunity to score), a player too tired to keep pace — each contributes to the ability of the unwanted event to occur.

Reviewing tapes is a good post-event procedure. A lot can be learned from previous incidents. Preparations can be made to prevent recurrence through improvements to the training of the players, reduction of weakness (perhaps through replacing or trading players), and knowledge of the techniques of the opponents.

In business we are in a similar situation. All too often organizations have experienced a series of minor breakdowns. Perhaps they never became catastrophes or crises, and in many cases they may have been covered up or downplayed. These are the best learning events available for the organization, and they need to be uncovered and examined. What led to the breakdown or near-catastrophe, what was the best response technique, who were the key players involved — who was a star, and who, unfortunately, did not measure up in times of crisis? These incidents uncover the preexisting conditions that may lead to a much more serious event in the future. Examining these events, documenting effective response techniques, and listing affected areas all provide input to a program that may reduce the preexisting conditions and thereby avert a catastrophe — or at least assist in the creation of a BCP that will be effective.


Other methods of detecting preexisting conditions are tests and audits, interviewing the people on the floor, and measuring the culture of the organization. We often hear of penetration tests — what are they designed to do? Find a weakness before a hostile party does. What can an audit do? Find a lack of internal control or a process weakness before it is exploited. Why do we talk to the people on the floor? In many cases, simply reading the policy and procedure manuals does not give a true sense of the culture of the organization.

One organization that recently received an award for its E-commerce site was immediately approached by several other organizations for a description of its procedure for developing the Web site. This was willingly provided — except that in conversation with the people involved, it was discovered that the process was in fact never followed. It looked good on paper, and a lot of administrative time and effort had gone into laying out this program; but the award-winning site was not based on it. The program was found to be too cumbersome, too theoretical, and, for all intents and purposes, useless. Merely reviewing the policy will never give the reader a sense of the true culture of the organization. For an effective crisis management program, and therefore a solid, usable BCP program, it is important to know the true culture, process, and environment — not only the theoretical, documented version.

One telecommunications organization was considering designing its BCP for the customer service area based on the training program given to the customer service representatives. In fact, even during the training the instructors would repeatedly say, "This may not be the way things will be done back in your business unit; this is the ideal or theoretical way to do things, but you will need to learn the real way things are done when you get back to your group." Therefore, a BCP program designed according to the training manual would not be workable if needed in a crisis. The BCP needs to reflect the group for which it is designed. This also highlighted another risk or preexisting condition: the lack of standardization meant that multiple BCP programs had to be developed, one for each business operation, and that personnel from one group might not be able to quickly assume the work or personnel of another group displaced by a crisis. Detecting this prior to a catastrophe may allow the organization to adjust its culture and reduce this threat through standardization and process streamlining.

One of the main ways to find preexisting conditions is through the risk analysis and management process. This is often done by other groups within and outside the organization as well — the insurance company, the risk management group, internal and external audit groups, security, and human resources. The BCP team needs to coordinate its efforts with each of these groups — a collaborative approach — so that as much information as possible is provided to design and develop a solid, workable BCP program. The human resources group in particular is often looking at risks such as


Business Continuity Planning: A Collaborative Approach labor difficulties, executive succession, adequate policy, and loss of key personnel. These areas also need to be incorporated into a BCP program. The IT group plays a key role in discovering preexisting conditions. Nearly every business process today relies on, and in many cases cannot operate without, some form of IT infrastructure. For most organizations this infrastructure has grown, evolved, and changed at a tremendous rate. Keeping an inventory of IT equipment and network layouts is nearly impossible. However, because the business units rely so heavily on this infrastructure, no BCP program can work without the assistance and planning of the IT group. From an IT perspective, there are many areas to be considered in detecting preexisting conditions: applications, operating systems, hardware, communications networks, remote access, printers, telecommunications systems, databases, Internet links, stand-alone or desktop-based systems, defense systems, components such as anti-virus tools, firewalls, and intrusion detection systems, and interfaces to other organizations such as suppliers and customers. For each component, the IT group must examine whether there are single points of failure, documented lists of equipment including vendors, operating version, patches installed, users, configuration tables, backups, communications protocols and setups, software versions, and desktop configurations. When the IT group has detected possible weaknesses, it may be possible to alert management to this condition as a part of the BCP process in order to gain additional support for new resources, equipment, or support for standardization or centralized control. The risk in many organizations is the fear of a “shoot the messenger” reaction from management when a potential threat has been brought to the attention of management. We all like to hear good news, and few managers really appreciate hearing about vulnerabilities and recommendations for increased expenditures in the few moments they have between budget meetings. For that reason, a unified approach using credible facts, proposals, solutions, and costs, presented by several departments and project teams, may assist the IT group in achieving greater standards of security and disaster preparedness. The unfortunate reality is that many of the most serious events that have occurred in the past few years could have been averted if organizations had fostered a culture of accurate reporting, honesty, and integrity instead of hiding behind inaccurate statistics or encouraging personnel to report what they thought management wanted to hear instead of the true state of the situation. This includes incidents that have led to loss of life or financial collapse of large organizations through city water contamination, misleading financial records, or qualityof-service reporting. It is important to note the impact that terrorist activity has had on the BCP process. Risks that had never before been seriously considered now 779


BUSINESS CONTINUITY PLANNING have to be contemplated in a BCP process. One of the weaknesses in some former plans involved reliance on in-office fireproof safes, air transit for key data and personnel, and proximity to high-risk targets. An organization not even directly impacted by the actual crisis may not be able to get access to its location because of crime-scene access limitations, clean-up activity, and infrastructure breakdowns. Since the terrorist actions in New York, several firms have identified the area as a high-risk location and chosen to relocate to sites outside the core business area. One firm had recently completed construction of a new office complex close to the site of the terrorist activity and has subsequently chosen to sell the complex and relocate to another area. On the other hand, there are several examples of BCP programs that worked properly during the September 11 crisis, including tragic incidents where key personnel were lost. A BCP program that is properly designed will operate effectively regardless of the reason for the loss of the facility, and all BCP programs should contemplate and prepare for such an event. Crisis Triggers The next step in a crisis situation is the catalyst that sets off the chain of events that leads to the crisis. The trigger may be anything from a minor incident to a major event such as a weather-related or natural disaster, a human error or malicious attack, or a fire or utility failure. In any event, the trigger is not the real problem. An organization that has properly considered the preconditions that may lead to a crisis will have taken all precautions to limit the amount of damage from the trigger and hopefully prevent the next phase of the crisis — the crisis expansion phase — from growing out of control. Far too often, in a post-mortem analysis of a crisis, it is too easy to focus on the trigger for the event and look for ways to prevent the trigger from occurring — instead of focusing on the preconditions that led to the extended impact of the crisis. When all attempts have been made to eliminate the weaknesses and vulnerabilities in the system, then attention can be given to preventing the triggers from occurring. Crisis Management/Crisis Expansion As the crisis begins to unfold, the organization transitions from a preparatory stage, where the focus is on preventing and preparing for a disaster, to a reactionary stage, where efforts are needed to contain the damage, recover business operations, limit corporate exposure to liability and loss, prevent fraud or looting, begin to assess the overall impact, and commence a recovery process toward the ultimate goal of resumption of normal operations. Often, the organization is faced with incomplete information, inadequate coordinating efforts, complications from outside agencies or organizations, 780


Business Continuity Planning: A Collaborative Approach queries and investigations by the media, unavailability of key personnel, interrupted communications, and personnel who may not be able to work together under pressure and uncertainty. During a time of crisis, key personnel will rise to the occasion and produce the extra effort, clarity of focus and thought, and energy and attitude to lead other personnel and the organization through the incident. These people need to be noticed and marked for involvement in future incident preparation handling. Leadership is a skill, an art, and a talent. Henry Kissinger defines leadership as the ability to “take people from where they are to places where they have never been.” Like any other talent, leadership is also a learned art. No one is born a perfect leader, just as no one is born the world’s best golfer. Just as every professional athlete has worked hard and received coaching and guidance to perfect and refine their ability, so a leader needs training in leadership style, attention to human issues, and project planning and management. One of the most commonly overlooked aspects of a BCP program is the human impact. Unlike hardware and software components that can be counted, purchased, and discarded, the employees, customers, and families impacted by the crisis must be considered. No employee is going to be able to provide unlimited support — there must be provisions for rest, nourishment, support, and security for the employees and their families. The crisis may quickly expand to several departments, other organizations, the stock market, and community security. Through all of this the organization must rapidly recognize the growth of the disaster and be ready to respond appropriately. The organization must be able to provide reassurance and factual information to the media, families, shareholders, customers, employees, and vendors. Part of this is accomplished through knowing how to disseminate information accurately, representing the organization with credible and knowledgeable representatives, and restricting the uncontrolled release of speculation and rumor. During any crisis, people are looking for answers, and they will often grasp and believe the most unbelievable and ridiculous rumors if there is no access to reliable sources of information. Working recovery programs have even been interrupted and halted by the spread of inaccurate information or rumors. Leadership is the ability to remain effective despite a stressful situation; remain composed, reliable, able to accept criticism (much of it personally directed); handle multiple sources of information; multitask and delegate; provide careful analysis and recommendations; and inspire confidence. Not a simple or small task by any means. 781


BUSINESS CONTINUITY PLANNING In many cases the secret to a good BCP program is not the plan itself, but the understanding of the needs of the business and providing the leadership and coordination to make the plan a reality. Some organizations have been dismayed to discover that the people who had worked diligently to prepare a BCP program, coordinating endless meetings and shuffling paperwork like a Las Vegas blackjack dealer, were totally unsuited to execute the very plans they had developed. The leader of a disaster recovery team must be able to be both flexible and creative. No disaster or crisis will happen “by the book.” The plan will always have some deficiencies or invalid assumptions. There may be excellent and creative responses and answers to the crisis that had not been considered; and, while this is not the time to rewrite the plan, accepting and embracing new solutions may well save the organization considerable expense, downtime, and embarrassment. One approach may be the use of wireless technology to get a LAN up and running in a minimal amount of time without reliance on traditional cable. Another example is the use of microwave to link to another site without the delay of waiting for establishment of a new T1 line. These are only suggestions, and they have limitations — especially in regard to security — but they may also provide new and rapid answers to a crisis. This is often a time to consider a new technological approach to the crisis — use of Voice-over-IP to replace a telecommunications switch that has been lost, or use of remote access via the Internet so employees can operate from home until new facilities are operational. Business resumption or business continuity planning can be described as the ability to continue business operations while in the process of recovering from a disaster. The ability to see the whole picture and understand hidden relationships among processes, organizations, and work are critical to stopping the expansion of the crisis and disaster. Determining how to respond is a skill. The leaders in the crisis must know who to call and alert, on whom to rely, and when to initiate alternate processing programs and recovery procedures. They need to accurately assess the extent of the damage and expansion rate of the crisis. They need to react swiftly and decisively without overreacting and yet need to ensure that all affected areas have been alerted. The disaster recovery team must be able to assure the employees, customers, management team, and shareholders that, despite the confusion, uncertainty, and risks associated with a disaster, the organization is competently responding to, managing, and recovering from the failure. 782


Business Continuity Planning: A Collaborative Approach Crisis Resolution The final phase of a crisis is when the issue is resolved and the organization has recovered from the incident. This is not the same as when normal operations have recommenced. It may be weeks or years that the impact is felt financially or emotionally. The loss of credibility or trust may take months to rebuild. The recovery of lost customers may be nearly impossible; and when data is lost, it may well be that no amount of money or effort will recover the lost information. Some corporations have found that an interruption in processing for several days may be nearly impossible to recover because there is not enough processing time or capacity to catch up. The crisis resolution phase is a critical period in the organization. It pays to reflect on what went well, what lessons were learned, who were the key personnel, and which processes and assumptions were found to be missed or contrarily invalid. One organization, having gone through an extended labor disruption, found that many job functions were no longer needed or terribly inefficient. This was a valuable learning experience for the organization. First, many unnecessary functions and efforts could be eliminated; but second, why was the management unable to identify these unnecessary functions earlier? It indicated a poor management structure and job monitoring. THE BUSINESS CONTINUITY PROCESS Now that we have examined the scenarios where we require a workable business continuity plan, we can begin to explore how to build a workable program. It is good to have the end result in mind when building the program. We need to build with the thought to respond to actual incidents — not only to develop a plan from a theoretical approach. A business continuity plan must consider all areas of the organization. Therefore, all areas of the organization must be involved in developing the plan. Some areas may require a very elementary plan — others require a highly detailed and precise plan with strict timelines and measurable objectives. For this reason, many BCP programs available today are ineffective. They take a standard one-size-fits-all approach to constructing a program. This leads to frustration in areas that are overplanned and ineffectiveness in areas that are not taken seriously enough. There are several excellent Web sites and organizations that can assist a corporation in BCP training, designing an effective BCP, and certification of BCP project leaders. Several sites also offer regular trade journals that are full of valuable information, examples of BCP implementations, and disaster recovery situations. Some of these include: 783


• Disaster Recovery Journal, www.drj.com
• Disaster Recovery Institute Canada, www.dri.ca
• Disaster Recovery Information Exchange, www.drie.org
• American Society for Industrial Security, www.asisonline.org
• Disaster Recovery Institute International, www.dr.org
• Business Continuity Institute, www.thebci.org
• International Association of Emergency Managers, www.nccem.org
• Survive — The Business Continuity Group, www.survive.com

There are also numerous sites and organizations offering tools, checklists, and software to assist in establishing or upgrading a BCP program. Regardless of the Web site accessed by a BCP team member, the underlying process in establishing a BCP program is relatively the same:
• Risk and business impact analysis
• Plan development
• Plan testing
• Maintenance

The Disaster Recovery Institute recommends an excellent ten-step methodology for preparing a BCP program. The Disaster Recovery Journal Web site presents a seven-step model based on the DRI model, and also lists the articles published in their newsletters that provide education and examples of each step. Regardless of the type of methodology an organization chooses to use, the core concepts remain the same. Sample core steps are:
• Project initiation (setting the groundwork)
• Business impact analysis (project requirements definition)
• Design and development (exploring alternatives and putting the pieces together)
• Implementation (producing a workable result)
• Testing (proving that it is a feasible plan and finding weaknesses)
• Maintenance and update (preserving the value of the investment)
• Execution (where the rubber meets the road — a disaster strikes)
As previously stated, the intent of this chapter is not to provide in-depth training in establishing a BCP program. Rather, it is to present the overall objectives of the BCP initiative so that, as information systems security personnel or auditors, we can provide assistance and understand our role in creating a workable and effective business continuity plan. Let us look at the high-level objectives of each step in a BCP program methodology.
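For teams that track where each business unit stands against these core steps, the planning phases can be recorded in a simple status structure. The Python sketch below is purely illustrative (the execution step is omitted because it occurs only when a disaster actually strikes); the status data and business unit names are hypothetical.

```python
# Hypothetical tracker of BCP planning phases completed per business unit.
BCP_PHASES = [
    "Project initiation",
    "Business impact analysis",
    "Design and development",
    "Implementation",
    "Testing",
    "Maintenance and update",
]

def phase_status_report(progress: dict) -> list:
    """Return (business_unit, next_phase) pairs for units with work remaining."""
    report = []
    for unit, completed in progress.items():
        remaining = [p for p in BCP_PHASES if p not in completed]
        if remaining:
            report.append((unit, remaining[0]))
    return report

progress = {
    "Customer service": ["Project initiation", "Business impact analysis"],
    "Claims": ["Project initiation"],
}
print(phase_status_report(progress))
# [('Customer service', 'Design and development'), ('Claims', 'Business impact analysis')]
```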


Project Initiation

Without clearly defined objectives, goals, and timelines, most projects flounder, receive reduced funding, are appraised skeptically by management, and never come to completion or delivery of a sound product. This is especially true in an administrative project like a BCP program. While the awareness has been raised about BCP due to recent events, this attention will only last as long as other financial pressures do not erode the confidence that management has in realizing worthwhile results from the project. A BCP project needs clearly defined mandates and deliverables. Does it include the entire corporation or only a few of the more critical areas to start with? Is the funding provided at a centrally based corporate level or departmentally? When should the plans be provided? Does the project have the support of senior management to the extent that time, resources, and cooperation will be provided on request as needed by the BCP project team? Without the support of the local business units, the project will suffer from lack of good foundational understanding of business operations. Therefore, as discussed earlier, it is doubtful that the resulting plan will accurately reflect the business needs of the business units. Without clearly defined timelines, the project may tend to take on a life of its own, with never-ending meetings, discussions, and checklists, but never providing a measurable result.

Security professionals need to realize the importance of providing good support for this initial phase — recommending and describing the benefits of a good BCP program and explaining the technical challenges related to providing rapid data or processing recovery. As auditors, the emphasis is on having a solid project plan and budget responsibility so that the project meets its objectives within budget and on time.

Business Impact Analysis

The business impact analysis (BIA) phase examines each business unit to determine what impact a disaster or crisis may have on its operations. This means the business unit must define its core operations and, together with the IT group, outline its reliance on technology, the minimum requirements to maintain operations, and the maximum tolerable downtime (MTD) for its operations. The results of this effort are usually unique to each business unit within the corporation. The MTD can be dependent on costs (costs may begin to increase exponentially as the downtime increases), reputation (loss of credibility among customers, shareholders, regulatory agencies), or even technical issues (manufacturing equipment or data may be damaged or corrupted by an interruption in operations).
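To make the MTD discussion concrete, some teams chart how an outage's cost grows over time and compare the elapsed downtime against the unit's stated tolerance. The sketch below is illustrative only; the cost model (a fixed hourly loss with a compounding escalation) and every number in it are assumptions, not figures from this chapter.

```python
def outage_cost(hours: float, hourly_loss: float, escalation: float) -> float:
    """Toy model: linear loss plus a compounding penalty as downtime drags on."""
    return hourly_loss * hours * (1 + escalation) ** hours

def exceeds_mtd(hours: float, mtd_hours: float) -> bool:
    """True once the outage has run past the unit's maximum tolerable downtime."""
    return hours > mtd_hours

# Hypothetical unit: $5,000/hour base loss, 10% escalation per hour, MTD of 24 hours
for h in (4, 12, 24, 48):
    print(h, round(outage_cost(h, 5_000, 0.10)), exceeds_mtd(h, 24))
```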


The IT group needs to work closely during this phase to understand the technological requirements of the business unit. From this knowledge, a list of alternatives for recovery processing can be established. The audit group needs to ensure that proper focus is placed on the importance of each function. Not all departments are equally critical, and not all systems within a department are equally important. E-mail or Internet access may not be as important as availability of the customer database. The accounting department — despite its loud objections — may not need all of its functionality prioritized and provided the same day as the core customer support group. Audit can provide some balance and objective input to the recovery strategy and time frames through analysis and review of critical systems, highest impact areas, and objective consideration.

Design and Development

Once the BCP team understands the most critical needs of the business from both an operational and technology standpoint, it must consider how to provide a plan that will meet these needs within the critical time frames of the MTD. There are several alternatives, depending on the type of disaster that occurs, but one alternative that should be considered is outsourcing of some operations. This can be the outsourcing of customer calls such as warranty claims to a call center, or outsourcing payroll or basic accounting functions. Many organizations rely on a hot site or alternate processing facility to accommodate their information processing requirements. The IT group needs to be especially involved in working together with the business units to ensure that the most critical processing is provided at such a site without incurring expense for the usage of unnecessary processing or storage capability. The audit group needs to ensure that the proper cost/benefit analysis has been done and that the provisions of the contract with the hot site are fulfilled and reasonable for the business needs. The development of the business continuity plan must be reviewed and approved by the managers and representatives in the local business groups. This is where the continuous involvement of key people within these groups is beneficial. The ideal is to prepare a plan that is workable, simple, and timely. A plan that is too cumbersome, theoretical, or unrelated to true business needs may well make recovery operations more difficult rather than expedite operational recovery. During this phase it is noticed that, if the BCP process does not have an effective leader, key personnel will begin to drop out. No one has time for meaningless and endless meetings, and the key personnel from the


Business Continuity Planning: A Collaborative Approach business units need to be assured that their investment of time and input to the BCP project is time well spent. Implementation of the Business Continuity Plan All of the prior effort has been aimed at this point in time — the production of a workable result. That is, the production of a plan that can be relied on in a crisis to provide a framework for action, decision making, and definition of roles and responsibilities. IT needs to review this plan to see their role. Can they meet their objectives for providing supporting infrastructures? Do they have access to equipment, backups, configurations, and personnel to make it all happen? Do they have the contact numbers of vendors, suppliers, and key employees in off-site locations? Does the business unit know who to call in the area for support and interaction? The audit group should review the finished product for consistency, completeness, management review, testing schedules, maintenance plans, and reasonable assumptions. This should ensure that the final product is reliable, that everyone is using the same version, that the plan is protected from destruction or tampering, and that it is kept in a secure format with copies available off-site. Testing the Plans Almost no organization can have just one recovery strategy. It is usual to have several recovery strategies based on the type of incident or crisis that affects the business. These plans need to be tested. Tests are verification of the assumptions, timelines, strategies, and responsibilities of the personnel tasked with executing a business continuity plan. Tests should not only consist of checks to see if the plan will work under ideal circumstances. Tests should stress the plan through unavailability of some key personnel and loss of use of facilities. The testing should be focused on finding weaknesses or errors in the plan structure. It is far better to find these problems in a sterile test environment than to experience them in the midst of a crisis. The IT staff should especially test for validity of assumptions regarding providing or restoring equipment, data links, and communications links. They need to ensure that they have the trained people and plans to meet the restoration objectives of the plan. Auditors should ensure that weaknesses found in the plans through testing are documented and addressed. The auditors should routinely sit in on tests to verify that the test scenario is realistic and that no shortcuts or compromises are made that could impair the validity of the test. 787
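The stressors and objectives described above can be captured in a simple scenario checklist so that each test deliberately exercises at least one stressor and every weakness found is tracked to closure. The sketch below is a hypothetical illustration; the field names and example scenarios are not from the chapter.

```python
# Hypothetical BCP test-scenario checklist entries.
scenarios = [
    {
        "name": "Data center power loss during month-end close",
        "stressors": ["loss of primary facility", "two key finance staff unavailable"],
        "objectives": ["restore general ledger within agreed timelines"],
        "weaknesses_found": [],  # filled in after the exercise
    },
    {
        "name": "Hot-site failover with degraded WAN link",
        "stressors": ["reduced bandwidth", "backup tapes delayed in transit"],
        "objectives": ["validate restore order for critical applications"],
        "weaknesses_found": ["contact list out of date"],
    },
]

# Weaknesses that still need to be documented and addressed after testing
open_items = [(s["name"], w) for s in scenarios for w in s["weaknesses_found"]]
print(open_items)  # [('Hot-site failover with degraded WAN link', 'contact list out of date')]
```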


BUSINESS CONTINUITY PLANNING Maintenance of the BCP (Preserving the Value of the Investment) A lot of money and time goes into the establishment of a good BCP program. The resulting plans are key components of an organization’s survival plan. However, organizations and personnel change so rapidly that almost any BCP is out of date within a very short time frame. It needs to be defined in the job descriptions of the BCP team members — especially the representatives from the business units — to provide continuous updates and modifications to the plan as changes occur in business unit structure, location, operating procedures, or personnel. The IT group is especially vulnerable to outdating plans. Hardware and software change rapidly, and procurement of new products needs to trigger an update to the plan. When new products are purchased, consideration must be given to ensuring that the new products will not impede recovery efforts through unavailability of replacements, lack of standardization, or lack of knowledgeable support personnel. Audit must review plans on a regular basis to see that the business units have maintained the plans and that they reflect the real-world environment for which the plans are designed. Audit should also ensure that adequate funding and support is given to the BCP project on an ongoing basis so that a workable plan is available when required. CONCLUSION Business continuity plans are a form of insurance for an organization — and, like insurance, we all hope that we never have to rely on them. However, proper preparation and training will provide the organization with a plan that should hold up and ease the pressures related to a crisis. A good plan should minimize the need to make decisions in the midst of a crisis and outline the roles and responsibilities of each team member so that the business can resume operations, restore damaged or corrupted equipment or data, and return to normal processing as rapidly and painlessly as possible. ABOUT THE AUTHOR Kevin Henry, CISA, CISSP, has over 20 years of experience in telecommunications, computer programming and analysis, and information systems auditing. Kevin is an accomplished and highly respected presenter at many conferences and training sessions, and he serves as a lead instructor for the (ISC)2 Common Body of Knowledge Review for candidates preparing for the CISSP examination.



Domain 9

Law, Investigation, and Ethics


The fear of future international terrorist activities unfortunately remains in the forefront of our minds. In this domain, we include chapters highlighting the global threat of cyber-crime, a far-reaching and potentially very damaging activity that must be taken seriously. As former Attorney General Janet Reno points out, “Because of … technological advancements, today’s criminals can be more nimble and more elusive than ever before. If you can sit in a kitchen in St. Petersburg, Russia, and steal from a bank in New York, you understand the dimensions of the problem.” In these chapters, the authors do a tremendous job of defining cyber-crime, providing statistics on the foreign hot spots, providing examples of traditional crimes that have made their way to the cyber-world, and offering ways in which the global community is banding together to fight it.

Within this domain we also feature a current look at what one healthcare organization is doing to prepare for compliance with the U.S. government regulation, the Health Insurance Portability and Accountability Act (HIPAA) of 1996. Given the far-reaching extent of its privacy and security rules and the challenges they pose to healthcare providers, payers, and clearinghouses, HIPAA is a formidable law to be addressed formally and earnestly because the rules require that healthcare organizations establish and maintain a comprehensive security program, entailing administrative and technical safeguards. One of the many security practices that HIPAA mandates is the ability to respond and react to security incidents. All organizations, however, should adopt best practices for responding and reacting to security incidents, which include establishing a computer incident response team (CIRT). Several authors in this domain provide us ample information on the methodology for creating a CIRT, gaining executive management support, establishing the required detection mechanisms, enabling technology, and leveraging proper processes and procedures.

Finally, we present a legal review of distributed denial-of-service (DDoS) attacks and the potential liability that organizations face when their weak security enables a hacker to exploit their environmental deficiencies in order to attack organizations elsewhere.



Chapter 44

Liability for Lax Computer Security in DDoS Attacks

Dorsey Morrow, CISSP, JD

In the middle of February 2000, Internet security changed dramatically when Amazon.com, CNN, Yahoo, E*Trade, ZDNet, and others fell victim to what has come to be known as a distributed denial-of-service attack or, more commonly, DDoS. While denial-of-service attacks can be found as far back as 1998, it was not until these sites were brought down through the use of distributed computing that the media spotlight focused on such attacks. No longer were the attackers few in number and relatively easy to trace.

A DDoS attack occurs when a targeted system is flooded with traffic by hundreds or even thousands of coordinated computer systems simultaneously. These attacking computer systems are surreptitiously commandeered by a single source well in advance of the actual attack. Through the use of a well-placed Trojan program that awaits further commands from the originating computer, the attacking computer is turned into what is commonly referred to as a zombie. These zombie computers are then coordinated in an assault against single or multiple targets. Zombie computers are typically targeted and utilized because of their lax security.

While a DDoS attack has two victims — the attacking zombie computer and the ultimate target — it is the latter of these two that suffers the most damage. Not only has the security and performance of the victim’s computer system been compromised, but economic damage can run into the millions for some companies. Thus, the question arises: does the attack by a zombie computer system, because of lax security, create liability on the part of the zombie system to the target? To address this issue, this chapter provides a jurisdiction-independent analysis of the tort of negligence and the duty that attaches upon connection to the Internet.
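Because the liability analysis that follows turns on whether a zombie's owner exercised reasonable care, it may help to see how simply an organization can watch for the symptom described above: a sudden flood of coordinated outbound traffic from one of its own hosts. The sketch below is illustrative only; it assumes a hypothetical flow log of (host, packets-per-minute) samples and arbitrary thresholds, and is not a substitute for real intrusion detection tooling.

```python
from collections import defaultdict

def flag_possible_zombies(flow_samples, baseline=200, multiplier=50):
    """Flag internal hosts whose outbound packet rate explodes past their own norm.

    flow_samples: iterable of (host, packets_per_minute) tuples from a flow log.
    A sample is suspicious when it exceeds both the flat baseline and
    `multiplier` times that host's average of earlier samples.
    The first sample seen for a host is never flagged (no history yet).
    """
    history = defaultdict(list)
    flagged = set()
    for host, pps in flow_samples:
        prior = history[host]
        if prior:
            avg = sum(prior) / len(prior)
            if pps > max(baseline, multiplier * avg):
                flagged.add(host)
        history[host].append(pps)
    return flagged

samples = [("hr-ws-07", 12), ("hr-ws-07", 15), ("hr-ws-07", 40_000)]
print(flag_possible_zombies(samples))  # {'hr-ws-07'}
```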


LAW, INVESTIGATION, AND ETHICS There is a universal caveat in tort law stating that, whenever you are out of a familiar element, a reasonable and prudent person becomes even more cautious. The Internet fits the profile of an unfamiliar element in every sense of the word, be it transactional, jurisdictional, or legal. There is no clear, concise, ecumenical standard for the Internet as it applies to business transactions, political borders, or legal jurisdictions and standards. Thus, every computer user, service provider, and business entity on the Internet should exercise extra caution in travels across the Internet. But, beyond such a general duty to be extra cautious, is there more expected of those who join the broad Internet community and become Netizens? Specifically, is there a duty to others online? Computer security is a dynamic field; and in today’s business and legal environments, the demands for confidentiality, integrity, and availability of computer data are increasing at fantastic rates. But at what level is computer security sufficient? For years we have looked to a 1932 case in the 2nd Circuit (see In re T.J. Hooper, 60 F.2d 737) that involved a tugboat caught up in a tremendous storm and was subsequently involved in an accident that resulted in the loss of property. Naturally, a lawsuit resulted; and the captain was found guilty of negligence for failing to use a device that was not industry-standard at the time, but was available nonetheless — a two-way radio. The court succinctly stated, “There are precautions so imperative that even their universal disregard will not excuse their omission.” In essence, the court stated that, despite what the industry might be doing, or more precisely, failing to do, there are certain precautions we must implement to avoid disaster and liability. What the courts look to is what the reasonable and prudent person (or member of industry) might do in such unfamiliar territory. Because computer security is so dynamic, instead of trying to define a universal standard of what to do, the more practical method would be to attempt to define what rises to the standard of negligence. Negligence has developed into a common law standard of three elements. First, there must be some duty owed between the plaintiff and the defendant; second, there must be a breach of that duty by the defendant; third, the breach of duty is a proximate cause of damages that result. (See City of Mobile v. Havard, 289 Ala. 532, 268 So.2d 805, [1972]. See also United States Liab. Ins. Co. v. Haidinger-Hayes, Inc. [1970] 1 Cal.3d 586, 594, 463 P.2d 770.) So it seems we must first address whether there is a duty between the plaintiff (the victim of a DDoS attack) and the defendant zombie computer in such an attack. Does being tied to the Internet impose a duty of security upon businesses? Do businesses have an implicit requirement to ensure their security is functional and that their systems will not harm others on the wild, wild Internet? It is important to remember that the theory of negligence does not make us insurers of all around us, but rather that we act as a reasonable 792


Liability for Lax Computer Security in DDoS Attacks and prudent person would in the same circumstances. We have already established that the Internet, despite being commercially viable for the past ten years, is still a new frontier. As such, it is challenging historical business and legal concepts. This, of course, creates a new paradigm of caution for the reasonable person or business. The Internet creates an unbridled connection among all who would join. It is undisputed that no one owns the Internet or is charged with regulating content, format, or acceptable use. However, there is a duty imposed upon all who connect and become part of the Internet. As in the physical world, we owe a duty to do no harm to those around us. While the ultimate determination of duty lies properly within the discretion of the courts as a matter of law, there are a number of duties that have been routinely recognized by the courts. Perhaps the duty from which we can draw the greatest inference is the duty of landowners to maintain their land. This general duty of maintenance, which is owed to tenants and patrons, has been held to include “the duty to take reasonable steps to secure common areas against foreseeable criminal acts of third parties that are likely to occur in the absence of such precautionary measures.” (See Frances T. v. Village Green Owners Assoc. [1986] 42 Cal.3d 490, 499–501, [229 Cal.Rptr 456, 723 P.2d 573, 59 A.L.R.4th 447].) Similarly, in Illinois, there is no duty imposed to protect others from criminal attacks by a third party, unless the criminal attack was reasonably foreseeable and the parties had a “special relationship.” (See Figueroa v. Evangelical Covenant Church, 879 F.2d 1427 [7th Cir. 1989].) And, in Comolli v. 81 And 13 Cortland Assoc., ___A.D.2d _____ (3d Dept. 2001), the New York Appellate Division, quoting Rivera v. Goldstein, 152 A.D.2d 556, 557, stated, “There will ordinarily be no duty imposed on a defendant to prevent a third party from causing harm to another unless the intervening act which caused the plaintiff’s injuries was a normal or foreseeable consequence of the situation created by the defendant’s negligence.” As a shop owner in a high-crime area owes a greater duty of security and safety to those who come to his shop because criminal action is more likely and reasonably foreseeable, thus a computer system tied to the Internet owes a duty of security to others tied to the Internet because of the reasonably foreseeable criminal actions of others. Similarly, if we live in an area where there have been repeated car thefts, and those stolen cars have been used to strike and assault those who walk in the area, it could be reasonably stated we have a duty to the walkers to secure our vehicles. It is reasonably foreseeable that our car would be stolen and used to injure someone if we left it in the open and accessible. The extent to which we left it accessible would determine whether we breached that duty and, pursuant to law, left to the decision of a jury. Whether it was parked in the street, unlocked, and the keys in it, or locked with an active alarm system would be factors the jury would consider in determining if we had been negligent in securing the 793


automobile. Granted, this is a rather extreme and unlikely scenario; but it nonetheless illustrates our duty to others in the digital community. Statistics that bolster the claim that computer crime is a reasonably foreseeable event include a study by the Computer Security Institute and the San Francisco Federal Bureau of Investigation Computer Intrusion Squad of various organizations on the issue of computer security compiled in March of 2001. In their study, 85 percent of respondents detected computer security breaches within the previous 12 months; 38 percent detected DoS attacks in 2001 versus 27 percent for 2000; and 95 percent of those surveyed detected computer viruses. These numbers clearly show a need for computer security and how reasonably foreseeable computer crime is when connected to the Internet. When viewed in the light of increasing numbers of viruses, Trojan horses, and security breaches, and the extensive media attention given them, computer crime on the Internet almost passes beyond “reasonably foreseeable” to “expected.”

A case in Texas, Dickinson Arms-Reo v. Campbell, 4 S.W.3d 333 (Tex.App. [1st Dist.] 1999), held that the element of “foreseeability” would require only that the general danger, not the exact sequence of events that produced the harm, be foreseeable. The court went further to identify specific factors in considering “foreseeability” to include: (1) the proximity of other crimes; (2) the recency and frequency of other crimes; (3) the similarity of other crimes; and (4) the publicity of other crimes. While this is not a ubiquitous checklist to be used as a universal standard, it does give a good reference point with which to measure whether a computer crime could be reasonably expected and foreseeable. Of course, in cyberspace, there is no physical land, tenants, or licensees. However, there is still a duty to secure systems against unauthorized use, whether mandated by statute (Health Insurance Portability and Accountability Act, Gramm-Leach-Bliley Act), by regulation, or by common sense. Because of the public nature of the recent DDoS attacks, we now have a better understanding of the synergistic and interconnected nature of the Internet and the ramifications of poor security. Perhaps the most striking argument for the duty of precaution comes from a 1933 Mississippi case in which the court stated:

Precaution is a duty only so far as there is reason for apprehension. Ordinary care of a reasonably prudent man does not demand that a person should prevision or anticipate an unusual, improbable, or extraordinary occurrence, though such happening is within the range of possibilities. Care or foresight as to the probable effect of an act is not to be weighed on jewelers’ scales, nor calculated by the expert mind of the philosopher, from cause to effect, in all situations. Probability arises in the law of negligence when viewed from the standpoint of the judgment of a reasonably prudent man, as a reasonable thing to be


Liability for Lax Computer Security in DDoS Attacks expected. Remote possibilities do not constitute negligence from the judicial standpoint. — Illinois Central RR Co. v. Bloodworth, 166 Miss. 602, 145 So. 333, (1933)

A 1962 Mississippi case (Dr. Pepper Bottling Co. v. Bruner, 245 Miss. 276, 148 So.2d 199) went further in stating that: As a general rule, it is the natural inherent duty owed by one person to his fellowmen, in his intercourse with them, to protect life and limb against peril, when it is in his power to reasonably do so. The law imposes upon every person who undertakes the performance of an act which, it is apparent, if not done carefully, will be dangerous to other persons, or the property of other persons — the duty to exercise his senses and intelligence to avoid injury, and he may be held accountable at law for an injury to person or property which is directly attributable to a breach of such duty…. Stated broadly, one who undertakes to do an act or discharge a duty by which conduct of others may be properly regulated and governed is under a duty to shape his conduct in such matter that those rightfully led to act on the faith of his performance shall not suffer loss or injury through his negligence.

We have established the requirement of a duty; but in the context of computer security, what rises to the level of a breach of such a duty? Assuming that a duty is found, a plaintiff must establish that a defendant’s acts or omissions violated the applicable standard of care. We must then ask, “What is the standard of care?” According to a 1971 case from the Fifth Circuit, evidence of the custom and practice in a particular business or industry is usually admissible as to the standard of care in negligence actions. (See Ward v. Hobart Mfg. Co., 460 F.2d 1176, 1185.) When a practice becomes so well defined within an industry that a reasonable person is charged with knowing that is the way it is done, a standard has been established. While computer security is an industry unto itself, its standards vary due to environmental constraints of the industry or business within which it is used. While both a chicken processing plant and a nuclear processing plant use computer security, the risks are of two extremes. To further skew our ability to arrive at a common standard, the courts have held that evidence of accepted customs and practices of a trade or industry does not conclusively establish the legal standard of care. (See Anderson v. Malloy, 700 F.2d 1208, 1212 [1983].) In fact, the cost justification of the custom may be considered a relevant factor by some courts, including the determination of whether the expected accident cost associated with the practice exceeded the cost of abandoning the practice. (See United States Fidelity & Guar. Co. v. Plovidba, 683 F.2d 1022, 1026 [7th Cir. 1982].) So if we are unable to arrive at a uniform standard of care for computer security in general, what do we look to? Clearly there must be a minimum standard for computer security with which we benchmark our duty to others on the 795


LAW, INVESTIGATION, AND ETHICS Internet. To arrive at that standard we must use a balancing test of utility versus risk. Such a test helps to determine whether a certain computer security measure ought to be done by weighing the risk of not doing it versus the social utility or benefit of doing it, notwithstanding the cost. In June of 2001, in Moody v. Blanchard Place, 34,587 (La.App. 2nd Cir. 6/20/01); ___ So.2d ___, the Court of Appeal for Louisiana held that, in determining the risk and utility of doing something, there are several factors to consider: (1) a determination of whether a thing presents an unreasonable risk of harm should be made “in light of all relevant moral, economic, and social considerations” (quoting Celestine v. Union Oil Co. of California, 94-1868 [La. 4/10/95], 652 So.2d 1299; quoting Entrevia v. Hood, 427 So.2d 1146 [La. 1983]); and (2) in applying the risk–utility balancing test, the fact finder must weigh factors such as gravity and risk of harm, individual and societal rights and obligations, and the social utility involved. (Quoting Boyle v. Board of Supervisors, Louisiana State University, 96-1158 [La. 1/14/97], 685 So.2d 1080.) So whether to implement a security measure may be considered in light of economical and social considerations weighed against the gravity and risk of harm. This in turn works to establish the standard of care. If the defendant failed to meet this standard of care, then the duty to the plaintiff has been breached. Finally, we must consider whether the breach of duty by the defendant to the plaintiff was the proximate cause of damages the plaintiff experienced. To arrive at such a claim, we must have damages. Over the years the courts have generally required physical harm or damages. In fact, economic loss, absent some correlating physical loss, has traditionally been unrecoverable. (See Pennsylvania v. General Public Utilities Corp. [1983, CA3 Pa] 710 F.2d 117.) Over the past two decades, however, the courts have been allowing for the recovery of purely economic losses. (See People Express Airlines v. Consol. Rail Corp., 194 N.J. Super. 349 [1984], 476 A.2d 1256.) Thus, while the computer and Internet are not physically dangerous machines (unless attached to some other equipment that is dangerous) and thus incapable of creating a physical loss or causing physical damage, they can produce far-reaching economic damage. This is especially true as more and more of our infrastructure and financial systems are controlled by computer and attached to the Internet. Hence, we arrive at the ability to have damages as the result of action by a computer. The final question is whether the action or inaction by the defendant to secure his computer systems is a proximate cause of the damages suffered by the plaintiff as the result of a DDoS attack by a third party. And, of course, this question is left to the jury as a matter of fact. Each case carrying its own unique set of circumstances and timelines creates issues that must be resolved by the trier of fact — the jury. However, in order to be a proximate cause, the defendant’s conduct must be a cause-in-fact. In other words, if the DDoS attack would not have occurred without the defendant’s 796


Liability for Lax Computer Security in DDoS Attacks conduct, it is not a cause-in-fact. Of course, in any DDoS there are a multitude of other parties who also contributed to the attack by their failure to adequately secure their systems from becoming zombies. But this does nothing to suppress the liability of the single defendant. It merely makes others suitable parties to the suit as alternatively liable. If the defendant’s action was a material element and a substantial factor in bringing about the event, regardless of the liability of any other party, their conduct was still a causein-fact and thus a proximate cause. In 1995, an Ohio court addressed the issue of having multiple defendants for a single proximate cause, even if some of the potential defendants were not named in the suit. In Jackson v. Glidden, 98 Ohio App.2d 100 (1995), 647 N.E.2d 879, the court, quoting an earlier case, stated: In Minnich v. Ashland Oil Co. (1984), 15 Ohio St.3d 396, 15 OBR 511, 473 N.E.2d 1199, the Ohio Supreme Court recognized the theory of alternative liability. The court held in its syllabus: “Where the conduct of two or more actors is tortious, and it is proved that harm has been caused to the plaintiff by only one of them, but there is uncertainty as to which one has caused it, the burden is upon each such actor to prove that he has not caused the harm. (2 Restatement of the Law 2d, Torts, Section 433[B][3], adopted.)” The court stated that the shifting of the burden of proof avoids the injustice of permitting proved wrongdoers, who among them have inflicted an injury upon an innocent plaintiff, to escape liability merely because the nature of their conduct and the resulting harm have made it difficult or impossible to prove which of them have caused the harm. The court specifically held that the plaintiff must still prove (1) that two or more defendants committed tortious acts, and (2) that plaintiff was injured as a proximate result of the wrongdoing of one of the defendants. The burden then shifts to the defendants to prove that they were not the cause of the plaintiff’s injuries. The court noted that there were multiple defendants but a single proximate cause.

This case does not create a loophole for a defendant in a DDoS attack to escape liability by denying his computer security created the basis for the attack; rather, it allows the plaintiff to list all possible defendants and then require them to prove they did not contribute to the injury. If a computer system was part of the zombie attack, it is a potential party and must prove otherwise that its computer security measures met the standard of care and due diligence required to avoid such a breach. In conclusion, we must look to the totality of circumstances in any attack to determine liability. Naturally, the ultimate responsibility lies at the feet of the instigator of the attack. It is imperative that the Internet community prosecute these nefarious and illegitimate users of computer resources to the fullest and reduce such assaults through every legitimate 797


LAW, INVESTIGATION, AND ETHICS and legal means available. However, this does not reduce the economic damages suffered by the victim. For that, we look to “deep pockets” and their roles in the attacks. Typically, the deep pockets will be the zombies. But the true determination of their liability is in their security. We must look to the standard of care in the computer security field, in the zombie’s particular industry, and the utility and risk of implementing certain security procedures that could have prevented the attack. Could this attack have been prevented or mitigated by the implementation of certain security measures, policies, or procedures? Was there a technological “silver bullet” that was available, inexpensive, and that the defendant knew or should have known about? Would a firewall or intrusion detection system have made a difference? Did the attack exploit a well-known and documented weakness that the defendant zombie should have corrected? Each of these questions will be raised and considered by a jury to arrive at the answer of liability. Each of these questions should be asked and answered by every company before such an attack even transpires. It is highly probable that those who allow their computer systems, because of weak security, to become jumping-off points for attacks on other systems will be liable to those that are the victims of such attacks. It is incumbent upon all who wish to become part of the community that is the Internet to exercise reasonable care in such an uncertain environment. Ensuring the security of one’s own computer systems inherently increases the security of all other systems on the Internet. ABOUT THE AUTHOR Dorsey Morrow, CISSP, JD, is operations manager and general counsel for the International Information Systems Security Certification Consortium, Inc. (ISC)2. He earned a B.S. degree in computer science and an M.B.A. with an emphasis in information technology. He has served as general counsel to numerous information technology companies and also served as a judge. He is licensed to practice in Alabama, Massachusetts, the 11th Federal Circuit, and the U.S. Supreme Court.

Copyright 2003. Dorsey Morrow. All Rights Reserved.



Chapter 45

HIPAA 201: A Framework Approach to HIPAA Security Readiness

David MacLeod, Ph.D., CISSP
Brian Geffert, CISSP, CISA
David Deckter, CISSP

The Health Insurance Portability and Accountability Act (HIPAA) has presented numerous challenges for most healthcare organizations, but through using a framework approach we have been able to effectively identify gaps and develop plans to address those gaps in a timely and organized manner. — Wayne Haddad, Chief Information Officer for The Regence Group

HIPAA SECURITY READINESS FRAMEWORK

Within the U.S. healthcare industry, increased attention is focusing on Health Insurance Portability and Accountability Act (HIPAA) readiness. For the past five years, healthcare organizations (HCOs) across the country have moved to prepare their environments for compliance with the proposed HIPAA security regulations. The past five years have also proved that HIPAA security readiness will not be a point-in-time activity for HCOs. Rather, organizations will need to ensure that HIPAA security readiness becomes a part of their operational processes that need to be maintained on a go-forward basis. To incorporate HIPAA security readiness into your organization’s operational processes, you must be able to functionally decompose your organization to ensure that you have effectively addressed all the areas within your



Exhibit 45-1. HIPAA security readiness framework:
• Phase 1: Current Design (Functional Decomposition; List HIPAA Security Requirements; Determine Applicability)
• Phase 2: Requirements Interpretation (Define Requirements Scope for Org; Develop Requirements Categories)
• Phase 3: Gap Assessment (Determine Gaps; Develop Projects; Prioritize Projects; Organizational Alignment; Develop Budget; Management Approval)
• Phase 4: Execution (Establish Program Management Office; Define PMO Activities; Utilize Standard Project Lifecycle Approach)

organization. You must also be able to interpret the proposed HIPAA security regulations1 as they relate to your organization, identify any gaps, develop plans to address any gaps within your current organization, and monitor your progress to ensure you are addressing the identified gaps. For most HCOs, the path to HIPAA security readiness will mean the development of a framework that will allow you to complete the tasks outlined in Exhibit 45-1. This chapter guides through the framework that will assist you in identifying and addressing your organization’s HIPAA security readiness issues. In doing so, we assume that your organization has already established a HIPAA security team and developed a plan to apply the framework (e.g., Phase 0 activities). Finally, we do not address HIPAA’s transactions, code sets, and identifiers (TCI) or privacy requirements, but you will need to consider both sets of requirements as you move through the phases of the framework. PHASE 1: CURRENT DESIGN2 The framework begins with the construction of a matrix that documents your organization’s current design. The matrix captures the nuances of the environment (both physical and logical), its business processes, and the initiatives that make your HCO unique. It also lists the HIPAA security requirements and determines the applicability of the requirements to your organization’s environment. Functional Decomposition of the Organization Organizations have typically approached HIPAA security readiness by starting with the HIPAA security requirements and applying those requirements to their information technology (IT) departments. By relying solely on 800


this approach, organizations have failed to recognize that security is cross-organizational, including business units and individual users alike. Today's Internet era is requiring ever more information sharing, further blurring the boundaries of internal access and external access. How, then, do you break down your organization to ensure you have adequately addressed all the areas of your organization concerning HIPAA security readiness? Organizations can functionally decompose themselves in a number of ways, including by IT environment, strategic initiatives, key business processes, or locations. To illustrate the idea of functionally decomposing your organization, we provide some examples of processes, applications, IT environment elements, strategic initiatives, and locations for a typical payer and provider in Exhibit 45-2.

List HIPAA Security Requirements

The next step in building the matrix is to list the requirements for the five categories of the HIPAA security regulations, as shown in Exhibit 45-3. These include the following (a small illustrative sketch of the resulting matrix follows the list):

• Administrative procedures
• Physical safeguards
• Technical security services
• Technical security mechanisms
• Electronic signatures
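The matrix these two steps produce can be thought of as a simple grid of functional areas against requirements. The following sketch (Python; the functional areas, requirement names, and data layout are illustrative assumptions, not part of the chapter's method) shows one minimal way such a grid could be represented before applicability is determined:

```python
# Illustrative sketch only: a current design matrix as a nested dictionary.
# The functional areas and requirements below are example entries, not a full list.

functional_areas = {
    "Processes": ["Claims administration", "Membership and enrollment", "Customer service"],
    "Applications": ["Claims", "Enrollment", "Sales management"],
    "IT Environment": ["Internet", "WAN", "LAN"],
    "Locations": ["Headquarters", "Remote sales office", "Data center"],
}

hipaa_requirements = [
    ".308(a)(1) Certification",
    ".308(a)(2) Chain of Trust Partner Agreement",
    ".308(a)(3) Contingency Plan",
    ".308(a)(4) Formal Mechanism for Processing Records",
    ".308(a)(5) Information Access Control",
]

# Each cell starts out undetermined; the applicability step marks it True or False.
current_design_matrix = {
    (dimension, area): {requirement: None for requirement in hipaa_requirements}
    for dimension, areas in functional_areas.items()
    for area in areas
}

# Example of the "determine applicability" step described in the next section.
current_design_matrix[("IT Environment", "Internet")][".308(a)(1) Certification"] = True
```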

Once you have completed the functional decomposition and listed the HIPAA security requirements, you will have created your organization's current design matrix.

Determine Applicability

The final step in the current design phase will be to determine the areas from the functional decomposition where the security requirements apply. The outcome of this exercise will be an initial list of areas on which to focus for developing the scope of the requirements. Exhibit 45-4 illustrates a partial current design matrix for a typical payer organization.

PHASE 2: REQUIREMENTS INTERPRETATION

The HIPAA security requirements were designed to be used as guidelines, which means that each organization needs to interpret how it will implement them. In this section, we provide some context for defining the scope of each requirement as it applies to your organization, categorizing the practices for the security requirements, and developing the approach for meeting the security requirements based on the practices. In addition, we develop one of the security requirements as an example to support each of the steps in the process.


Exhibit 45-2. HCO functional decomposition.

Provider (Hospital and Physician):
• Processes: Administration; Financial; Scheduling; Registration; Admission, discharge, and transfer; Billing and A/R; Insurance verification; Practice management
• Applications: AMR (EMR, CPR); Laboratory; Radiology; Pharmacy; Order entry; Nurse management; Financial
• IT Environment: Wireless; WAN; LAN; Dial-up; Web; Servers; Workstations; Facilities; Databases
• Strategic Initiatives: Integrating the healthcare enterprise (IHE); Electronic medical records; Web-enabling clinical applications; Electronic data interchange (EDI)
• Locations: Hospital; Outpatient clinic; Off-site storage

Payer:
• Processes: Membership and enrollment; Claims administration; Contract management; Medical management; Underwriting and actuarial; Provider network management; Financial management; Customer service
• Applications: Enrollment; Billing and A/R; Provider management; Sales management; Medical management; Claims; Financial
• IT Environment: Wireless; WAN; LAN; Dial-up; Web; Servers; Workstations; Facilities; Databases
• Strategic Initiatives: Customer relationship management (CRM); E-business; Electronic data interchange (EDI)
• Locations: Headquarters; Remote sales office; Data center

Define the Scope of the Security Requirements

The first step to define the scope of the security requirements is to understand the generally accepted practices and principles and where they apply for each of the requirements. To determine these generally accepted practices and their applications, you can use a number of different


Exhibit 45-3. HIPAA security requirements list.

Administrative Procedures
• .308(a)(1) Certification
• .308(a)(2) Chain of Trust Partner Agreement
• .308(a)(3) Contingency Plan: applications and data criticality analysis; data backup plan; disaster recovery plan; emergency mode operation plan; testing and revision
• .308(a)(4) Formal Mechanism for Processing Records
• .308(a)(5) Information Access Control: access authorization; access establishment; access modification

sources that are recognized as standards bodies for information security. The standards bodies typically fall into two categories: general practices and industry-specific practices. This is an important distinction because some industry-specific practices may be different from what is generally accepted across all industries (i.e., healthcare industry versus automotive industry). Utilizing industry standards may be necessary when addressing a very specific area of risk for the organization. Exhibit 45-5 provides a short list of standards bodies, while additional standards bodies can be located in the source listing of the HIPAA security regulations.

The next step is to evaluate the generally accepted practices against the description of each security requirement in the HIPAA security regulations, and then apply them to your environment to develop the scope of the requirements for your organization. For our example, we use the certification requirement. Generally accepted practices for certification include the review of a system or application during its design to ensure it meets certain security criteria. Once implemented, periodic reviews are conducted to ensure the system or application continues to meet those specified criteria. The certification requirement has been defined by the HIPAA security regulations as follows:

The technical evaluation performed is part of, and in support of, the accreditation process that establishes the extent to which a particular computer system or network design and implementation meet a prespecified set of security requirements. This evaluation may be performed internally or by an external accrediting agency.

Exhibit 45-4. Partial current design matrix. (The full exhibit is a grid: the Administrative Procedures requirements, from .308(a)(1) Certification through .308(a)(5) Information Access Control with their contingency plan and access control sub-requirements, form the rows; the payer's functional decomposition forms the column groups: Processes such as Claims/Encounters, Customer Service, and Membership; Applications such as Claims, Enrollment, and Sales Management; IT Environment elements such as Internet, WAN, and LAN; and Locations such as Headquarters, Remote Sales Office, and Data Center. An X marks each cell where a requirement applies.)


Exhibit 45-5. Generally accepted information security standards bodies.
• United States Department of Commerce — National Institute of Standards and Technology (NIST): General
• System Administration, Networking, and Security (SANS) Institute: General
• Critical Infrastructure Assurance Office (CIAO): General
• International Organization for Standardization (ISO) 17799: General
• Health Care Financing Administration (HCFA): Industry-specific (healthcare)

Exhibit 45-6. Certification scope and assumptions.
Scope: Network, operating systems, applications, databases, and middleware
Assumptions: None identified
Categories: Policy/standards; Procedures; Tools/infrastructure; Operational

To define the scope based on this definition, we focus on two key sets of wording: computer systems and network. The term computer system is generally accepted to include operating systems, applications, databases, and middleware. The term network is generally accepted to include the architecture, design, and implementation of the components of the wide area network (WAN), extranet, dial-in, wireless, and the local area network (LAN); and it typically addresses such items as networking equipment (e.g., routers, switches, cabling, etc.). To summarize the scope of our example, we apply the certification requirement to the following areas:

• Network
• Operating systems
• Applications
• Databases
• Middleware

In addition, we document any assumptions made during the scoping process, because they will be important inputs to the solution design and to the final compliance assessment, helping explain why some areas were addressed and others were not. Finally, we store this information in each cell containing an X in our current design matrix from the applicability task in the current design phase, as shown in Exhibit 45-6.
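Continuing the illustrative sketch started in Phase 1 (again a hedged example of one possible representation, not the authors' tooling), the scope, assumptions, and empty category placeholders from Exhibit 45-6 could be stored directly in each applicable cell so the interpretation travels with the matrix:

```python
# Illustrative sketch: the contents of one applicable cell after the scoping step.
# Values mirror Exhibit 45-6; the field names are an assumed structure.

certification_cell = {
    "applicable": True,
    "scope": ["Network", "Operating systems", "Applications", "Databases", "Middleware"],
    "assumptions": [],  # "None identified" in Exhibit 45-6
    "categories": {     # filled in by the next section, "Develop Requirements Categories"
        "Policy/standards": [],
        "Procedures": [],
        "Tools/infrastructure": [],
        "Operational": [],
    },
}

# Stored back into the matrix cell flagged with an X during the applicability task.
current_design_matrix = {
    ("IT Environment", "Internet"): {".308(a)(1) Certification": certification_cell}
}
```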


Develop Requirements Categories

Developing categories for each of the security requirements assists organizations in understanding what needs to be implemented to meet the requirements. Most organizations develop security controls in a technology vacuum, meaning that they see and understand how the technology fits into their organizations but do not understand the relationship of that technology to the policies, standards, procedures, or operations of their organizations and business. The technology-vacuum approach typically produces security solutions that deteriorate over time because the solution does not have the supporting operational processes to appropriately maintain itself. We define operations as those areas that support and maintain the technology within the organization, such as assigning owners who are responsible and accountable for the technology and its supporting processes. By taking a more holistic approach that includes policies/standards, procedures, technology, and operations, you will develop security solutions to address your gaps that can be more rapidly implemented and maintained over time.

Based on this approach, we typically use the following four categories for grouping the practices identified through defining the scope of requirements in the section above:

1. Policies or standards. Policies include senior management's directives to create a computer security function, establish goals for the function, and assign responsibilities for the function. Standards include specific security rules for particular information systems and practices.
2. Procedures. Procedures include the activities and tasks that dictate how the policies or supporting standards will be implemented in the organization's environment.
3. Tools or infrastructure. Tools or infrastructure includes the elements that are necessary to support implementation of the requirements within the organization, such as process, organizational structure, network and system-related controls, and logging and monitoring devices.
4. Operational. Operational includes all the activities and supporting processes associated with maintaining the solution or system and ensuring it is running as intended. Typically, an owner is assigned to manage the execution of the activities and supporting processes. Examples of activities and supporting processes include maintenance, configuration management, technical documentation, backups, software support, and user support.

In addition, the categories will be used to monitor your progress with implementing the practices related to each requirement. To continue with our certification requirement example, we have identified some practices


Exhibit 45-7. Practice categories — certification.

Administrative Procedures — Certification
• Policies or standards: written policy that identifies certification requirements; policy identifies individuals responsible for implementing that policy and defines what their duties are; policy identifies consequences of noncompliance; security standards for the configuration of networks, security services and mechanisms, systems, applications, databases, and middleware
• Procedures: identifying certification need review; precertification review; certification readiness; periodic recertification review
• Tools or infrastructure: precertification readiness tool; certification criteria tool (standards); certification compliance issue resolution tool
• Operational: operational when an owner, budget, charter, and certification plan are established

related to certification and placed them into categories as illustrated in Exhibit 45-7. Finally, we store this information in the current design matrix as illustrated in Exhibit 45-8.

By completing your organization's current design matrix, you have developed your organization's to-be state, which includes a minimum set of practices for each area of your organization based on your interpretation of the HIPAA security requirements. You can now use this to-be state to conduct your gap assessment.

PHASE 3: GAP ASSESSMENT

With interpretation of the HIPAA security requirements complete, you are ready to conduct your HIPAA security readiness or gap assessment. The time it will take to conduct the assessment will vary greatly, depending on a number of factors that include, at a minimum, the size of the organization, the number of locations, the number of systems/applications, and the current level of maturity of the security function within the organization. An example of a mature security organization would be an organization with a defined security policy, an established enterprise security architecture


Exhibit 45-8. Certification categories.

Scope: Network, operating systems, applications, databases, and middleware
Assumptions: None identified
Categories:
Policy/standards:
1. Written policy that identifies certification requirements
2. Policy identifies individuals responsible for implementing that policy and what their duties are
3. Policy identifies consequences of noncompliance
4. Security standards for the configuration of networks, security services and mechanisms, systems, applications, databases, and middleware
Procedures:
1. Identifying certification need review
2. Precertification review
3. Certification readiness
4. Periodic recertification review
Tools/infrastructure:
1. Precertification readiness tool
2. Certification criteria tool (standards)
3. Certification compliance issue resolution tool
Operational:
1. Operational when the following criteria are established:
   A. Owner, budget, charter, and certification plan

(ESA), documented standards, procedures with defined roles and responsibilities that are followed, established metrics that measure the effectiveness of the security controls, and regular reporting to management.

The outcome of the assessment provides you with gaps based on your previously defined scope and practices for each of the security requirements. Because the identified gaps will pose certain risks to your organization, an important point to keep in mind, as your organization reviews the assessment gaps, is that your organization will not be able to address all the gaps due to limited time and resources. Typically, the gaps that you can translate into business risks need to be addressed, particularly the ones that will affect your organization's HIPAA TCI and privacy initiatives. One way of determining if a particular gap poses a business risk to the organization is to answer the question, "So what?" (by which we mean that, if we do not address this risk, how will it adversely impact our business?). For example, application security access controls are lacking on extranet-accessible applications, allowing


for the compromise of sensitive health information and clearly having an adverse impact on your bottom line. If the gap does not adversely affect your business at this point in time, document the gap because it may become a business risk in the future. For example, consider an operating system that supports a nonsensitive application that has not been certified. The application, however, will be replaced in 30 days with a newer version that requires another operating system altogether. Therefore, there is no adverse impact on your bottom line. However, if the organization has resources available, then consider taking actions to mitigate the risk posed by the gap.

Once you have completed your assessment and identified your gaps, you need to define a set of projects to remediate the issues. After you have defined these projects, you need to determine the resources and level of effort required to complete the projects, prioritize them, and develop a budget. In addition, you need to obtain organizational alignment around the projects. Finally, you need to get management approval for the projects.

Defining Projects

Gaps are identified based on analysis of prior requirements and then reevaluated against strategic initiatives to determine a project assignment. That is, some gaps are dealt with as stand-alone HIPAA security projects, while others are bundled or packaged within projects that more directly support strategic goals. A typical set of projects developed from an assessment would include the following:

• High-Risk Mitigation. Address high-risk vulnerabilities and exposures to your bottom line that were discovered as part of your assessment.
• Security Management. Address the development of the core security plans and processes required to manage the day-to-day business operations at an acceptable level of risk, such as reporting and ownership, resources and skills, roles and responsibilities, risk management, data classification, operations, and maintenance for security management systems.
• Policy Development and Implementation. Address the development of security policies and standards with a supporting policy structure, a policy change management process, and a policy compliance function.
• Education and Awareness. Address areas such as new employee orientation to meet legal and HR requirements, ongoing user and management awareness programs, and ongoing user training and education programs.
• Security Baseline. Address development of an inventory of information assets, networking equipment, and entity connections to baseline your current environment.


• Technical Control Architecture. Address the development of a standards-based security strategy and architecture that is aligned with the organization's IT and business strategies and is applied across the organization.
• Identity Management Solution. Address the consistent use of authorization, authentication, and access controls for employees, customers, suppliers, and partners.
• Physical Safeguards. Address physical access controls and safeguards.
• Business Continuity Planning/Disaster Recovery Planning. Address an overall BCP/DRP program (backup and recovery plan, emergency mode operation plan, recovery plan, and restoration plan) to support the critical business functions.
• Logging and Monitoring. Address monitoring, logging, and reporting requirements, as well as developing and implementing the monitoring architecture, policies, and standards.
• Policy Compliance Function. Address the development of a policy compliance auditing and measurement process, which will also identify the process for coordinating with other compliance activities such as internal audit, regulatory, etc.
• HIPAA Security Readiness Support. Address the management of the overall SRAP and supporting compliance assessment activities.

Once you have defined the projects, you have to estimate the resources and level of effort required to complete each of the projects. In addition, following management approval, further refinement of the estimate will be necessary during the scoping and planning phase of the project life cycle.

Prioritizing Projects

For the identified projects, you need to prioritize them based on preselected criteria such as:


HIPAA 201: A Framework Approach to HIPAA Security Readiness • Improve customer service/experience: Will the project improve customer service/experience? For example, implementing user provisioning and Web access control solutions supports HIPAA security implementation requirements, as well as improves the customer experience by allowing for single sign-on (SSO) and the ability for end users to reset their own passwords with a challenge–response. • Foundation building: Does the project facilitate the execution of future projects, or is it in the critical path of other necessary projects? For example, an organization will need to execute the project to develop and implement policies before executing a project to facilitate compliance. Based on the prioritization, you can then arrange the projects into an initial order of completion or plan to present them for review by the organization. Develop Budget Once you have the proposed plan developed, you need to develop an initial budget, which should include: • • • • •

• Resources to be used to complete the project
• The duration of time needed to complete the project
• Hardware or software required to support the project's completion
• Training for new processes, and hardware or software additions
• Capitalization and accounting guidelines
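As referenced above, the following sketch illustrates one way the prioritization criteria could be turned into a simple weighted score. The weights, the 0–5 scores, and the two project names are illustrative assumptions only; an organization would substitute its own values.

```python
# Illustrative sketch: rank remediation projects against preselected criteria.
# Weights and scores below are invented for the example.

criteria_weights = {
    "HIPAA interdependencies": 3,
    "Strategic initiatives": 2,
    "Cost reduction": 1,
    "Customer service/experience": 1,
    "Foundation building": 2,
}

project_scores = {
    "Policy Development and Implementation": {
        "HIPAA interdependencies": 5, "Strategic initiatives": 2, "Cost reduction": 1,
        "Customer service/experience": 1, "Foundation building": 5,
    },
    "Identity Management Solution": {
        "HIPAA interdependencies": 4, "Strategic initiatives": 4, "Cost reduction": 2,
        "Customer service/experience": 5, "Foundation building": 3,
    },
}

def weighted_score(scores):
    # Sum of (criterion weight x project score) across all criteria.
    return sum(criteria_weights[criterion] * value for criterion, value in scores.items())

# Highest score first gives an initial order of completion to present for review.
ranked = sorted(project_scores, key=lambda name: weighted_score(project_scores[name]), reverse=True)
for name in ranked:
    print(f"{weighted_score(project_scores[name]):3d}  {name}")
```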

Organizational Alignment and Management Approval

The plan you present to the organization will consist of the projects you have defined based on the gaps in your assessment, the resources and time needed to complete the projects, and the order of the projects' completion based on prioritization criteria. Based on input from the organization, you can modify your plan accordingly. The outcome of this activity will be to gain organizational buy-in and approval of your plan, which is especially critical when you require resources from outside of your organizational area to complete the projects.

PHASE 4: EXECUTION

Execution deals with both the management of projects and the reporting of completion status to the organization.

Program Management Office

Due to the sheer number of projects, the amount of work required to complete those projects, and the need to manage the issues arising from the projects, a formal program management office (PMO) and supporting structure will be required for the successful completion of your projects on


time and within budget. You do not necessarily have to create your own security PMO, but instead you may wish to leverage an existing overall HIPAA or enterprise PMO to assist you with your project execution.

Define PMO Activities

Typically, a PMO performs the following activities:

• Provides oversight for multiple projects. Prioritize projects, manage project interdependencies and corresponding critical path items.
• Manages the allocation of resources. De-conflict resource constraints and shortages resulting from multiple project demands.
• Manages budget. Manage the budget for all related projects.
• Resolves issues. Facilitate resolution of issues both within projects and between cross-organizational departments.
• Reports status. Provide status reports on a periodic basis to oversight committees and management to report on the progress, issues, and challenges of the overall program.

Utilize a Standard Project Life-Cycle Approach

Organizations should utilize a project life-cycle approach with a standard set of project documentation. Using a standard project life-cycle approach will streamline the design and implementation activities and support consistent, high-quality standards among different project teams and, potentially, different locations.

SUMMARY

Addressing HIPAA security readiness may seem like an unmanageable task for most organizations. As outlined in this chapter, by applying a framework approach to break down the task into manageable pieces, you should be able to document your organization's current design, effectively identify your organization's gaps, develop an action plan to address those gaps, and execute that plan in an organized and systematic manner.

Notes

1. Department of Health and Human Services (HHS) 45 CFR, Part 142 — Security and Electronic Standards; Proposed Rule published in the Federal Register (August 12, 1998). Any reference to the HIPAA security regulations in this chapter refers to the proposed HIPAA security regulations.
2. The framework can be used for any organization to address information security readiness by simply modifying, adding, or changing the criteria (HIPAA security regulations, FDA regulations, ISO 17799, NIST, SANS, etc.).


References
1. Guttman, Barbara and Roback, Edward A., An Introduction to Computer Security: The NIST Handbook; NIST Special Publication 800-12; U.S. Department of Commerce, Technology Administration, National Institute of Standards and Technology.
2. Federal Register, Part III, Department of Health and Human Services 45 CFR Part 142 — Security and Electronic Signature Standards; Proposed Rule, August 12, 1998.
3. Scholtz, Tom, Global Networking Strategies — The Security Center of Excellence; META Group; April 19, 2001.
4. Practices for Securing Critical Information Assets; Critical Infrastructure Assurance Office, January 2000.
5. Rishel, W. and Frey, N., Strategic Analysis Report R-14-2030, Integration Architecture for HIPAA Compliance: From 'Getting It Done' to 'Doing It Right'; Gartner, August 23, 2001.
6. Guttman, Barbara and Swanson, Marianne, Generally Accepted Principles and Practices for Securing Information Technology Systems; NIST Special Publication 800-14; U.S. Department of Commerce, Technology Administration, National Institute of Standards and Technology.

ABOUT THE AUTHORS

David MacLeod, Ph.D., CISSP, is the chief information security officer for The Regence Group, based in Portland, Oregon. He holds a Ph.D. in computer science, has 23 years of experience in information technology, and is accredited by ISC2 as a CISSP. He is also accredited by the Healthcare Information Management and Systems Society (HIMSS) as a Certified Professional in Healthcare Information Management Systems (CPHIMS). MacLeod has worked in a variety of industries, including government, retail, banking, defense contracting, emerging technologies, biometrics, physical security, and health care. He is a member of the organizing committee for the Health Sector Information Sharing and Analysis Center (ISAC), part of the Critical Infrastructure Protection activities ordered by Presidential Decision Directive 63.

Brian Geffert, CISSP, CISA, is a senior manager for Deloitte & Touche's Security Services Practice and specializes in information systems controls and solutions. Geffert has worked on the development of HIPAA assessment tools and security services for healthcare industry clients to determine the level of security readiness with Health Insurance Portability and Accountability Act of 1996 (HIPAA) regulations. In addition, he has implemented solutions to assist organizations addressing their HIPAA security readiness issues. Finally, Geffert is a Certified Information Systems Security Professional (CISSP) and a Certified Information Systems Auditor (CISA).

David Deckter, CISSP, manager with Deloitte & Touche Enterprise Risk Services, has extensive experience in information systems security disciplines, controlled penetration testing, secure operating system, application and internetworking architecture and design, risk and vulnerability assessments, and project management. Deckter has obtained ISC2 CISSP certification. He has performed numerous network security assessments



for emerging technologies and electronic commerce initiatives in the banking, insurance, telecommunications, healthcare, and financial services industries, and has been actively engaged in projects requiring HIPAA security solutions.



Chapter 46

The International Dimensions of Cyber-Crime
A Look at the Council of Europe's Cyber-Crime Convention and the Need for an International Regime to Fight Cyber-Crime
Ed Gabrys, CISSP

It is Monday morning and you begin your pre-work ritual by going to the World Wide Web and checking the morning electronic newspapers. In the past you might have read the paper edition of The New York Times or The Wall Street Journal; but with free news services and robust search features available on the Internet, you have decided to spare the expense and now the Internet is your primary news source. Your browser automatically opens to the electronic edition of your favorite news site, where you see the latest headline, "Electronic Terrorist Group Responsible for Hundreds of Fatalities." Now wishing that you had the paper edition, you wonder if this news story is real or simply a teenage hacker's prank. This would not



be the first time that a major news service had its Web site hacked. You read further and the story unfolds. A terrorist group, as promised, has successfully struck out at the United States. This time, the group did not use conventional terrorist weapons such as firearms and explosives, but instead has attacked state infrastructure using computers. Electronically breaking into electric power plants, automated pipelines, and air-traffic-control systems, in one evening they have successfully caused havoc and devastation across the United States, including mid-air collisions over major U.S. city airports. To top it off, the U.S. Government is unable to locate the culprits. The only thing that authorities know for sure is that the perpetrators are not physically located in the United States.

Is this science fiction or a possible future outcome? As an information security specialist, you have probably heard variations on this theme many times; but now, in the light of both homegrown and foreign terrorism striking the United States, the probability needs to be given serious thought. Considering the growing trends in computer crime, world dependence on computers and communication networks, and the weaknesses in the world's existing laws, it may soon be history. Kenneth A. Minihan, Director of the National Security Agency, has called the Information Superhighway "the economic lifeblood of our nation."1 When you consider that in the New World order, economic prosperity is as important to state security as military power, an attack on a country's infrastructure may be as devastating as a military attack. This could be the next Pearl Harbor — an electronic Pearl Harbor!

To successfully combat the cyber-crime threat, a global solution must be addressed. To date, the only far-reaching and coordinated global response to the cyber-crime problem has been the Convention on Cybercrime developed by the Council of Europe (CoE). Unfortunately, the treaty has the potential to achieve its goals at the loss of basic human rights and innovation, and by extending state powers. Those who drafted the treaty have violated an important principle of regime theory, the participation of all relevant actors in decision making, by drafting a convention that represents only the voice of the actors in power.

To clarify the arguments outlined above, this chapter first defines the scale and extent of the growing global cyber-crime threat. The second section illustrates how organizations are currently responding and highlights the Council of Europe's solution. In the third section, regime theory is defined and applied to the global cyber-crime problem; then an argument is made for how the CoE's convention fails to embrace an important element of regime-theory principles. Finally, in the last section, an adjusted Council of Europe convention is offered as an alternative and is compared to a notable and successful international regime.


PART I: GLOBAL CYBER-CRIME

The Cyber-Crime Threat

Look at how many clueless admins are out there. Look at what kind of proprietary data they are tasked to guard. Think of how easy it is to get past their pathetic defenses…. 'The best is the enemy of the good.' — Voltaire2
Posted on The New York Times Web site by the computer hacking group, Hacking 4 Girliez

A New Age and New Risks

The human race has passed through a number of cultural and economic stages. Most of our progress can be attributed to the ideas and the tools we have created to develop them. Wielding sticks and stones, we began our meager beginnings on par with the rest of the animal kingdom, as hunters and gatherers. We then graduated on to agrarian life using our picks and shovels, through an industrial society with our steam engines and assembly lines, and have arrived in today's digital age. Computers and communication networks now dominate our lives.

Some may argue that a vast number of people in the world have been overlooked by the digital revolution and have never made a phone call, let alone e-mailed a friend over the Internet. The advent of computers has had far-reaching effects; and although some people may not have had the opportunity to navigate the digital highway, they probably have been touched in other ways. Food production, manufacturing, education, health care, and the spread of ideas have all been beneficiaries of the digital revolution. Even the process of globalization owes its far and rapid reach to digital tools.

For all of the benefits that the computer has brought us, like the tools of prior ages, we have paid little attention to the potential harm they bring until after the damage has been done. On one hand, the Industrial Age brought industrialized states greater production and efficiency and an increase in standards of living. On the other, it also produced mechanized warfare, sweatshops, and a depleting ozone layer, to name a few. Advocates of the digital age and its now most famous invention, the Internet, flaunt dramatic commercial growth, thriving economies, and the spread of democracy as only a partial list of benefits. The benefits are indeed great, but so are the costs. One such cost that we now face is a new twist on traditional crime — cyber-crime.

An International Threat

Because of its technological advancements, today's criminals can be more nimble and more elusive than ever before. If you can sit in a kitchen in St. Petersburg, Russia, and steal from a bank in New York, you understand the dimensions of the problem.3 — Former Attorney General Janet Reno


Cyber-crime is an extension of traditional crime, but it takes place in cyberspace4 — the nonphysical environment created by computer systems. In this setting, cyber-crime adopts the nonphysical aspects of cyberspace and becomes borderless, timeless, and relatively anonymous. By utilizing globally connected phone systems and the world's largest computer network, the Internet, cyber-criminals are able to reach out from nearly anywhere in the world to nearly any computer system, as long as they have access to a communications link. Most often, that only needs to be a reliable phone connection. With the spread of wireless and satellite technology, location will eventually become totally irrelevant. In essence, the global reach of computer networks has created a borderless domain for cyber-crimes. Add in automation, numerous time zones, and 24/7 access to computer systems, and now time has lost significance.

A famous New Yorker cartoon shows a dog sitting at a computer system speaking to his canine companion, saying, "On the Internet, nobody knows you're a dog."5 In this borderless and timeless environment, only digital data traverses the immense digital highway, making it difficult to know who or what may be operating a remote computer system. As of today there are very few ways to track that data back to a person, especially if they are skilled enough to conceal their tracks. Moreover, cyber-criminals are further taking advantage of the international aspect of the digital domain by networking with other cyber-criminals and creating criminal gangs. Being a criminal in cyberspace takes technical know-how and sophistication. By dividing up the work, cyber-gangs are better able to combat the sophistication and complexities of cyberspace. With computers, telecommunication networks, and coordination, the cyber-criminal has achieved an advantage over his adversaries in law enforcement. Cyber-crime, therefore, has an international aspect that creates many difficulties for nations that may wish to halt it or simply mitigate its effects.

Cyber-Crime Defined

Cyber-crime comes in many guises. Most often, people associate cyber-crime with its most advertised forms — Web hacking and malicious software such as computer worms and viruses, or malware as it is now more often called. Who can forget some of these more memorable events? Distributed denial-of-service attacks in early 2000 brought down E-commerce sites in the United States and Europe, including Internet notables Yahoo!, Amazon.com, and eBay. The rash of computer worms that are becoming more sophisticated spread around the world in a matter of hours and cost businesses millions — or by some estimates, billions — in damages related to loss and recovery. Also in 2000, a Russian hacker named "Maxus" stole thousands of credit card numbers from the online merchant CD Universe and held them for ransom at $100,000 (U.S.). When his demands were not met, he posted 25,000 of the numbers to a public Web site. These are just



Exhibit 46-1. CSI/FBI 2000 Computer Crime and Security Survey: percentage of respondents, 1997 through 2000, citing likely sources of attack (independent hackers, disgruntled employees, U.S. competitors, foreign corporations, and foreign governments). (Source: Computer Security Institute)

a sample of the more recent and widely publicized events. These types of cyber-crimes are often attributed to hackers — or, as the hacker community prefers them to be called, crackers or criminals. Most often, the hackers associated with many of the nuisance crimes such as virus writing and Web site defacements are what security experts refer to as script-kiddies. They are typically males between the ages of 15 and 25, of whom Jerry Schiller, the head of network security at the Massachusetts Institute of Technology, said, "… are usually socially maladjusted. These are not the geniuses. These are the misfits."6

Although these so-called misfits are getting much of the public attention, the threat goes deeper. The annual "CSI/FBI Computer Crime and Security Survey,"7 as shown in Exhibit 46-1, cited foreign governments and corporations, U.S. competitors, and disgruntled employees as other major players responsible for cyber-attacks.8

Because cyber-crime is not bound by physical borders, it stands to reason that cyber-criminals can be found anywhere around the world. They do, however, tend to concentrate in areas where education is focused on mathematics (a skill essential to hacking), computer access is available, and the country is struggling economically, such as Russia, Romania, or Pakistan. Although this does not preclude other countries such as the United Kingdom or United States from having their share of computer criminals, recent trends suggest that the active criminal hackers tend to center in these specific areas around the globe. This is an indication that, if their talented minds cannot be occupied and compensated as they may be in an economically prosperous country, then they will use their skills for other purposes. Sergie Pokrovsky, an editor of the Russian hacker magazine Khaker,


Exhibit 46-2. Ten foreign hot spots for credit card fraud (percent of fraudulent foreign orders).
• Bucharest, Romania: 12.76
• Minsk, Belarus: 8.09
• Lasi, Romania: 3.14
• Moscow, Russia: 2.43
• Karachi, Pakistan: 1.23
• Krasnogorsk, Russia: 0.78
• Cairo, Egypt: 0.74
• Vilnius, Lithuania: 0.74
• Padang, Indonesia: 0.59
• Sofia, Bulgaria: 0.56
Source: Internet World, Feb. 01, 1999.

said hackers in his circle "… have skills that could bring them rich salaries in the West, but they expect to earn only about $300 a month working for Russian companies."9 An online poll on a hacker-oriented Web site asked respondents to name the world's best hackers and awarded hackers in Russia top honors, with 82 percent of the vote. Compare that to the paltry five percent given to American hackers.10 Looking at online credit card fraud, a 1999 survey of Yahoo! stores (see Exhibit 46-2) reported that nearly a third of foreign orders placed with stolen credit cards could be traced to ten international cities, which is an indicator of the geographic centers of major international hacker concentrations.11

Cyber-crime is quite often simply an extension of traditional crimes; and, similarly, there are opportunities for everyone — foreign spies, disgruntled employees, fraud perpetrators, political activists, conventional criminals, as well as juveniles with little computer knowledge. It is easy to see how crimes like money laundering, credit card theft, vandalism, intellectual property theft, embezzlement, child pornography, and terrorism can exist both in and outside of the cyber-world. Just think about the opportunities that are available to the traditional criminal when you consider that cyber-crime promises the potential for a greater profit and a remote chance of capture. According to the FBI crime files, the average bank robbery yields $4000, while the average computer heist can turn around $400,000.12 Furthermore, the FBI states that there is less than a 1:20,000 chance of a cyber-criminal being caught. This is more evident when you take into consideration that employees — who, as you know, have access to systems, procedures, and passwords — commit 60 percent of the thefts.13 Adding insult to injury, in the event that a cyber-criminal is actually caught, there is still only a 1:22,000 chance that he will be sent to prison.14


Here are just a few examples of traditional crimes that have made their way to the cyber-world. In 1995, a Russian hacker, Vladimir Levin, embezzled more than $10 million from Citibank by transferring electronic money out of the bank's accounts.15 Copyright infringement or information theft has reached mass proportions with wildly popular file-sharing programs like Limewire, Morpheus, and the notorious Napster. Millions of copies of copyrighted songs are freely traded among these systems' users all over the globe, which the record companies are claiming cost them billions of dollars.16 In August 2000, three Kazakhs were arrested in London for allegedly breaking into Bloomberg L.P.'s computer system in Manhattan in an attempt to extort money from the company.17 A 15-year-old boy was arrested for making terrorist threats and possessing an instrument of crime after he sent electronic mail death threats to a U.S. judge. He demanded the release of three Arab men imprisoned in connection with the failed 1993 plot to blow up several New York City landmarks. If they were not released, he threatened that a jihad would be proclaimed against the judge and the United States. Beginning in 1985 until his capture in 2001, Robert Philip Hanssen, while working for the U.S. Federal Bureau of Investigation, used computer systems to share national secrets with Russian counterparts and commit espionage.18 In 1996, members of an Internet chat room called "KidsSexPics" executed a horrific offense involving child pornography and international computer crime. Perpetrators, who included citizens of the United States, Finland, Australia, and Canada, were arrested for orchestrating a child molestation that was broadcast over the Internet.19

Computers Go to War: Cyber-Terrorism

The modern thief can steal more with a computer than with a gun. Tomorrow's terrorist may be able to do more damage with a keyboard than with a bomb.20 — National Research Council, 1991

We are picking up signs that terrorist organizations are looking at the use of technology.21 — Ronald Dick, Head of the FBI's Anti-Cyber-Crime Unit

One of the most frightening elements of cyber-crime is a threat that has fortunately been relatively absent in the world — cyber-terrorism. Cyber-terrorism is, as one may expect, the marriage of terrorism and cyberspace. Dorothy Denning, a professor at Georgetown University and a recognized expert in cyber-terrorism, has described it as "unlawful attacks and threats of attack against computers, networks, and the information stored therein when done to intimidate or coerce a government or its people in furtherance of political or social objectives."22 Although there have been a number of cyber-attacks over the past few years of a political or social nature, none


have been sufficiently harmful or frightening to be classified by most authorities as cyber-terrorism. Most of what has occurred, such as threatening e-mails, e-mail bombs, denial-of-service attacks, and computer viruses, is more analogous to street protests and physical sit-ins. The threat, however, is still very real.

In a controlled study, the U.S. Department of Defense attacked its own machines. Of the 38,000 machines attacked, 24,700 (or 65 percent) were penetrated. Only 988 (or four percent) of the penetrated sites realized they were compromised; and only 267 (or 27 percent) of those reported the attack.23 Keep in mind that the Department of Defense has mandatory reporting requirements and a staff that recognizes the importance of following orders, which makes those numbers even more ominous. Although government systems may have deficiencies, a greater vulnerability may lie with critical infrastructures. Finance, utilities, and transportation systems are predominantly managed by the private sector and are far more prone to an attack because those organizations are simply unprepared. A survey by the U.K.-based research firm Datamonitor shows that businesses have been massively underspending on computer security. Datamonitor estimates that $15 billion is lost each year through E-security breaches, while global spending on defenses is only $8.7 billion. Moreover, even if business were to improve security spending habits and correct the weaknesses in computer systems, it is effectively impossible to eliminate all vulnerabilities. Administrators often ignore good security practices or are unaware of weaknesses when they configure systems. Furthermore, there is always the possibility that an insider with knowledge may be the attacker. In March 2000, Japan's Metropolitan Police Department reported that software used by the police department to track 150 police vehicles, including unmarked cars, was developed by the Aum Shinrikyo cult — the same group that gassed the Tokyo subway in 1995, killing 12 people and injuring 6000 others. At the time of the discovery, the cult had received classified tracking data on 115 vehicles.24

Experts believe that terrorists are looking at the cyber-world as an avenue to facilitate terrorism. The first way in which terrorists are using computers is as part of their infrastructure, as might any other business trying to take advantage of technological advancements. They develop Web sites to spread messages and recruit supporters, and they use the Internet to communicate and coordinate action.25 Clark State, executive director of the Emergency Response & Research Institute in Chicago, testified before a Senate judiciary subcommittee that "members of some Islamic extremist organizations have been attempting to develop a 'hacker network' to support their computer activities and may engage in offensive information warfare attacks in the future."26 This defines their second and more threatening use of computer systems — that of a


weapon. Militant and terrorist groups such as the Indian separatist group Harkat-ul-Ansar and The Provisional Irish Republican Army have already used computer systems to acquire classified military information and technology. In all of the related terrorist cases, there have luckily been no casualties or fatalities directly related to the attack.

For those who doubt that a computer attack may be fatal, consider the following real incident. A juvenile from Worcester, Massachusetts, took control of a local telephone switch. Given the opportunity, he disabled local phone service. That alone is not life-threatening. That switch, however, controlled the activation of landing lights for a nearby airport runway that were subsequently rendered inoperable.27 Luckily, it was a small airport. If it had been the Newark or Los Angeles airport, the effects could have been devastating.

It is believed that most terrorist groups are not yet prepared to stage a meaningful cyber-attack but that they can be in the near future. Understanding that these groups are preparing, that critical systems are and will be vulnerable to attack, and that a successful attack in the cyber-world will gain them immediate and widespread media attention, it should be expected that a cyber-terrorist attack is imminent.

The Threat Is Growing

Every one of us either has been or will be attacked in cyberspace. A threat against one is truly a threat against all.28 — Mary Ann Davidson, Security Product Manager at Oracle

It is difficult to determine what the real scope of the cyber-crime threat is. Most successful computer crimes go unreported to law enforcement or undetected by the victims. If a business has systems that are compromised by a cyber-criminal, they are hard-pressed to make that information public. The cost of the break-in may have been a few thousand, tens of thousands, or possibly hundreds of thousands of dollars. If that cost is not substantial enough, the cost associated with a loss of customer trust and negative public opinion can bankrupt a company. The statistics that are available illustrate that cyber-crime is undeniably on the rise. The number of Web sites that are reported vandalized each year is reaching numbers close to 1000 a month.29 ICSA.net reported that the rate of virus infections doubled annually from 1997 to 1999, starting at 21 incidents per month per 1000 computers up to 88.30 In the United Kingdom, there was a 56 percent increase in cyber-crime for 2000, with most cyber-criminals seeking financial gain or hacking for political reasons.31 In the first six months of 2000, cyber-crime accounted for half of all U.K. fraud. The FBI has approximately 1400 active investigations into cyber-crime, and there are at least 50 new computer viruses generated weekly that require attention from federal law enforcement or the private sector.32 According to a 823

AU1518Ch46Frame Page 824 Thursday, November 14, 2002 7:50 PM

LAW, INVESTIGATION, AND ETHICS Gartner Group study, smaller companies stand a 50–50 chance of suffering an Internet attack by 2003; and more than 60 percent of the victimized companies will not know that they have been attacked.33 In the event that an attack is undetected, a cyber-criminal can utilize the pirated system to gather information, utilize system capacity, launch further attacks internally or externally to the organization, or leave behind a logic bomb. A logic bomb is a computer program that will wait until triggered and then release a destructive payload. This can include destruction of data, capturing and broadcasting sensitive information, or anything else that a mischievous programmer may be able to devise. Beyond the increase in incidents, the costs of dealing with cyber-crime are rising as well. A joint study by the American Society for Industrial Security (ASIS) and consulting firm PricewaterhouseCoopers found that Fortune 1000 companies incurred losses of more than $45 billion in 1999 from the theft of proprietary information. That number is up from roughly $24 billion a year in the middle 1990s.34 Furthermore, the average Fortune 1000 company reported 2.45 incidents with an estimated loss per incident in excess of $500,000.35 If these numbers are truly accurate, that is a cost of over $1 trillion. INTERNATIONAL ISSUES We cannot hope to prevail against our criminal adversaries unless we begin to use the same interactive mechanisms in the pursuit of justice as they use in the pursuit of crime and wealth.36 — Former U.S. Attorney General Janet Reno

Cyber-criminals and cyber-terrorists are chipping away at the cyberworld, weakening the confidentiality, integrity, and availability of our communications channels, computer systems, and the information that traverses or resides in them. As illustrated, the costs are high in many ways. Moreover, if a nation cannot protect its critical infrastructure, the solvency of its businesses, or the safety of its citizens from this growing threat, then it is possible that the nations most dependent on the cyberworld are jeopardizing their very sovereignty. So what is preventing the world from eliminating or at least reducing the cyber-crime threat? The primary challenges are legal and technical. Whether a cyber-criminal is the proverbial teenage boy hacker or a terrorist, the borderless, timeless, and anonymous environment that computers and communication networks provide creates an international problem for law enforcement agencies. With most crimes, the physical presence of a perpetrator is necessary. This makes investigation of a crime and identification, arrest, and prosecution of a criminal much simpler. Imagine for a moment that a group of cyber-criminals located in a variety of countries including Brazil, Israel, Canada, and Chile decide to launch an attack to 824


The International Dimensions of Cybercrime break into an E-commerce Web site that is physically located in California but maintained for a company in New York City. In an attempt to foil investigators, the cyber-gang first takes control of a computer system in South Africa, which in turn is used to attack a system in France. From the system in France, the attackers penetrate the system in California and steal a listing of credit card numbers that they subsequently post to a Web site in England. If California law enforcement is notified, how are they able to investigate this crime? What laws apply? What technology can be used to investigate such a crime? Legal Issues Currently, at least 60 percent of INTERPOL membership lacks the appropriate legislation to deal with Internet/computer-related crime.37 — Edgar Adamson, Head of the U.S. Customs Service

Traditional criminal law is ill-prepared for dealing with cyber-crime in many ways. The elements that we have taken for granted, such as jurisdiction and evidence, take on a new dimension in cyberspace. Below are some of the more important legal issues concerning cyber-crime. This is not intended to be a comprehensive list but rather a highlight. Criminalizing and Coordinating Computer-Related Offenses. Probably the most important legal hurdle in fighting cyber-crime is the criminalizing and coordinating of computer-related offenses among all countries. Because computer crime is inherently a borderless crime, fighting cyber-criminals cannot be effective until all nations have established comprehensive cyber-crime laws. A report by Chief Judge Stein Schjolberg of Norway highlights a number of countries that still have “no special penal legislation.”38 According to a study that examined the laws of 52 countries and was released in December 2000, Australia, Canada, Estonia, India, Japan, Mauritius, Peru, Philippines, Turkey, and the United States are the top countries that have “fully or substantially updated their laws to address some of the major forms of cyber-crimes.”39 There are still many countries that have not yet adequately addressed the cyber-crime issue, and others are still just considering the development of cyber-security laws.40

An excellent example of this issue involves the developer of the “ILOVEYOU” or Love Bug computer worm, which was launched from the Philippines in May 2000 and subsequently caused damages to Internet users and companies worldwide calculated in the billions of dollars. A suspect was quickly apprehended, but the case never made it to court because the Philippines did not have adequate laws to cover computer crimes. Because the Philippines lacked such laws, the United States and other countries that did have them were unable to extradite the virus writer in order to prosecute
him for the damage done outside of the Philippines. Within six weeks after the Love Bug attack, the Philippines outlawed most computer crimes.

Investigations and Computer Evidence. Once an incident has occurred, the crime must be investigated. In most societies, the investigation of any crime deals with the gathering of evidence so that guilt or innocence may be proven in a court of law. In cyberspace, this often proves very difficult. Evidence is the “testimony, writings, material objects, or other things presented to the senses that are offered to prove the existence or nonexistence of a fact.”41 Without evidence, there really is no way to prove a case. The problem with electronic evidence, unlike evidence in many traditional crimes, is that it is highly perishable and can be removed or altered relatively easily from a remote location. The collection of useful evidence can be further complicated because it may not be retained for any meaningful duration, or at all, by involved parties. For example, Internet service providers (ISPs) may not maintain audit trails, either because their governments may not allow extended retention for privacy reasons or because the ISP may delete them for efficiency purposes. At this time, most countries do not require ISPs to retain electronic information for evidentiary purposes. These audit trails can be essential for tracing a crime back to a guilty party.

In instances where the investigation involves more than one country, the investigators have further problems because they now need to coordinate and cooperate with foreign entities. This often takes a considerable amount of time and a considerable amount of legal wrangling to get foreign authorities to continue with or cooperate in the investigation. Assuming that it is possible to locate evidence pertaining to a cybercrime, it is equally important to have the ability to collect and preserve it in a manner that maintains its integrity and undeniable authenticity. Because the evidence in question is electronic information, and electronic information is easily modified, created, and deleted, it becomes very easy to question its authenticity if strict rules concerning custody and forensics are not followed. Jurisdiction and Venue. After the evidence has been collected and a case is made, a location for trial must be chosen. Jurisdiction is defined as “the authority given by law to a court to try cases and rule on legal matters within a particular geographic area and/or over certain types of legal cases.”42 Because cyber-crime is geographically complex, jurisdiction becomes equally complex — often involving multiple authorities, which can create a hindrance to an investigation. The venue is the proper location for trial of a case, which is most often the geographic locale where the crime was committed. When cyber-crime is considered, jurisdiction and venue create a complex situation. Under which state or nation’s laws is a cyber-criminal prosecuted when the perpetrator was physically located in 826


The International Dimensions of Cybercrime one place and the target of the crime was in another? If a cyber-criminal in Brazil attacked a system in the United States via a pirated system in France, should the United States or France be the venue for the trial? They were both compromised. Or should Brazil hold the trial because the defendant was physically within its geographic boundaries during the crime? Extradition. Once jurisdiction is determined and a location for trial is set, if the defendant is physically located in a different state or nation than the venue for trial, that person must be extradited. Black’s Law Dictionary defines extradition as “the surrender by one state or country to another of an individual accused or convicted of an offense outside its own territory and within the territorial jurisdiction of the other, which being competent to try and punish him, demands the surrender.”43 As seen by the Love Bug case, extradition efforts can become unpredictable if cyber-crime laws are not criminalized and especially if extradition laws are not established or modified to take cyber-crime into consideration. As an example, the United States requires, by constitutional law, that an extradition treaty be signed and that these treaties must either list the specific crimes covered by it or require dual criminality, whereby the same law is recognized in the other country.44 Because the United States only has approximately 100 extradition treaties, and most countries do not yet have comprehensive computer crime laws, extradition of a suspected cyber-criminal to the United States may not be possible.

Technical Issues The technical roadblocks that may hinder the ability of nations to mitigate the cyber-crime threat primarily concern the tools and knowledge used in the electronic domain of cyberspace. Simply put, law enforcement often lacks the appropriate tools and knowledge to keep up with cybercriminals. The Internet is often referenced as the World Wide Web (WWW). However, information security professionals often refer to the WWW acronym as the Wild Wild Web. Although some countries do their best to regulate or monitor usage of the Internet, it is a difficult environment for any one country to exercise power over. For every control that is put in place, a workaround is found. One example exists for countries that wish to restrict access to the Internet. Saudi Arabia restricts access to pornography, sites that the government considers defamatory to the country’s royal family or to Islam, and usage of Yahoo! chat rooms or Internet telephone services on the World Wide Web.45 Reporters Without Borders, a media-rights advocacy group based in France, estimates that at least 20 countries significantly restrict Internet access.46 SafeWeb, a small Oakland, California, company, provides a Web site that allows Internet users to mask the Web site destination. SafeWeb is only one of many such companies; and although 827


LAW, INVESTIGATION, AND ETHICS the Saudi government has retaliated by blocking the SafeWeb site, other sites appear quickly that either offer the same service as SafeWeb or mirror the SafeWeb site so that it is still accessible. This is one example of a service that has legitimate privacy uses and is perfectly legal in its country of origin. However, it is creating a situation for Saudi Arabia and other countries whereby they are unable to enforce their own laws. Although some may argue that Saudi citizens should have the ability to freely access the Internet, the example given is not intended for arguing ethics but purely to serve as an example of the increasing inability of law enforcement to police what is within its jurisdiction. The same tool in the hands of a criminal can prevent authorities with legal surveillance responsibilities from monitoring criminal activity. SafeWeb is but one example of a large number of tools and processes used for eluding detection. Similarly, encryption can be used to conceal most types of information. Sophisticated encryption programs were once solely used by governments but are now readily available for download off of the Internet. If information is encrypted with a strong cryptography program, it will take authorities months or possibly years of dedicated computing time to reveal what the encryption software is hiding. Also available from the Internet is software that not only searches for system vulnerabilities, but also proceeds to run an attack against what it has found; and if successful, it automatically runs subsequent routines to hide traces of the break-in and to ensure future access to the intruder. These types of tools make investigation and the collection of evidence increasingly more difficult. Until more effective tools are developed and made available to facilitate better detection and deterrence of criminal activities, criminals will continue to become more difficult to identify and capture. PART 2: INTERNATIONAL EFFORTS TO MITIGATE CYBER-CRIME RISK The cyber-crime threat has received the attention of many different organizations, including national and local governments, international organizations such as the Council of Europe and the United Nations, and nongovernmental organizations dealing with issues such as privacy, human rights, and those opposed to government regulation. GENERAL GOVERNMENT EFFORTS We are sending a strong signal to would-be attackers that we are not going to let you get away with cyber-terrorism.47 — Norman Mineta, Former U.S. Secretary of Commerce 828


One thing that we can learn from the atomic age is that preparation, a clear desire and a clear willingness to confront the problem, and a clear willingness to show that you are prepared to confront the problem is what keeps it from happening in the first place.48
— Condoleezza Rice, U.S. National Security Advisor

Governments around the world are in an unenviable position. On one hand, they need to mitigate the risk imposed by cyber-crime in an environment that is inherently difficult to control — while on the other, nongovernmental organizations are demanding limited government interference. The first order of business for national governments is to take the lead in creating a cyber-crime regime that can coordinate the needs of all the world’s citizens and all of the nation’s interests in fighting the cyber-crime threat. To date, industry has taken the lead; and in effect, government has in large part ceded public safety and national security to markets.

Many efforts have been made by various nations to create legislation concerning computer crime. The first was a federal bill introduced in 1977 in the U.S. Congress by Senator Ribicoff, although the bill was not adopted.49 The United States later passed the 1984 Computer Fraud and Abuse Law, the 1986 Computer Fraud and Abuse Act, and Presidential Decision Directive 63 (PDD-63), all of which strengthened U.S. cyber-crime laws. Internationally, in 1983 the OECD made recommendations for its member countries to ensure that their penal legislation also applied to certain categories of computer crime. The Thirteenth Congress of the International Academy of Comparative Law in Montreal, the U.N.’s Eighth Criminal Congress in Havana, and a conference in Würzburg, Germany, all approached the subject in the early 1990s from an international perspective. The focus of these conferences included modernizing national criminal laws and procedures; improvement of computer security and prevention measures; public awareness; training of law enforcement and judiciary agencies; and collaboration with interested organizations on rules and ethics in the use of computers.50

In 1997, the High-Tech Subgroup of the G-8’s Senior Experts on Transnational Organized Crime developed Ten Principles and a plan of action for combating computer crime. This was followed in 1999 by the adoption of principles of transborder access to stored computer data by the G-8 countries. The Principles and action plan included:51
• A review of legal systems to ensure that telecommunication and computer system abuses are criminalized
• Consideration of issues created by high-tech crimes when negotiating mutual assistance agreements and arrangements
• Solutions for preserving evidence prior to investigative actions


LAW, INVESTIGATION, AND ETHICS • Creation of procedures for obtaining traffic data from all communications carriers in the chain of a communication and ways to expedite the passing of this data internationally • Coordination with industry to ensure that new technologies facilitate national efforts to combat high-tech crime by preserving and collecting critical evidence Around the globe, countries are slowly developing laws to address cyber-crime, but the organization that has introduced the most far-reaching recommendations has been the Council of Europe (CoE). The Convention on Cyber-Crime was opened for signature on November 23, 2001, and is being ratified by its 41 member states and the observing states — Canada, United States, and Japan — over a one- to two-year period. The treaty will be open to all countries in the world to sign once it goes into effect. The impact of the treaty has the potential to be significant considering that CoE members and observing countries represent about 80 percent of the world’s Internet traffic.52 COUNCIL OF EUROPE CONVENTION The objective of the Council of Europe’s Convention on Cyber-Crime is aimed at creating a treaty to harmonize laws against hacking, fraud, computer viruses, child pornography, and other Internet crimes and ensure common methods of securing digital evidence to trace and prosecute criminals.53 It will be the first international treaty to address criminal law and procedural aspects of various types of criminal behavior directed against computer systems, networks or data, and other types of similar misuse.54 Each member country will be responsible for developing legislation and other measures to ensure that individuals can be held liable for criminal offenses as outlined in the treaty. The Convention has been drafted by the Committee of Experts on Crime in Cyberspace (PC-CY) — a group that is reportedly made up of law enforcement and industry experts. The group worked in relative obscurity for three years, released its first public draft — number 19 — in April 2000, and completed its work in December 2000 with the release of draft number 25. The Convention was finalized by the Steering Committee on European Crime Problems and submitted to the Committee of Ministers for adoption before it was opened to members of the Council of Europe, observer nations, and the world at large. The Convention addresses most of the important issues outlined in this chapter concerning cyber-crime. As previously described, the major hurdles in fighting cyber-crime are the lack of national laws applicable to cyber-crime and the inability for nations to cooperate when investigating or prosecuting perpetrators. 830


The International Dimensions of Cybercrime National Law At a national level, all signatory countries will be expected to institute comprehensive laws concerning cyber-crime, including the following: • Criminalize “offenses against the confidentiality, integrity and availability of computer data and systems,” “computer-related offenses,” and “content-related offenses.” • Criminalize the “attempt and aiding or abetting” of computer-related offenses. • Adopt laws to expedite the preservation of stored computer data and “preservation and partial disclosure of traffic data.” • Adopt laws that empower law enforcement to order the surrender of computer data, computer systems, and computer data storage media, including subscriber information provided by an ISP. • Adopt laws that provide law enforcement with surveillance powers over “content data” and require ISPs to cooperate and assist. • Adopt legislation that establishes jurisdiction for computer-related offenses. International Cooperation The section of the Convention dealing with international cooperation concerns the development and modification of arrangements for cooperation and reciprocal legislation. Some of the more interesting elements include the following: • Acceptance of criminal offenses within the Convention as extraditable offenses even in the absence of any formal extradition treaties. If the extradition is refused based on nationality or jurisdiction over the offense, the “requested Party” should handle the case in the same manner as under the law of the “requesting Party.” • Adoption of legislation to provide for mutual assistance to the “widest extent possible for the purpose of investigations or proceedings concerning criminal offenses related to computer systems and data, or for the collection of evidence in electronic form of a criminal offense.”55 • In the absence of a mutual assistance treaty, the “requested Party” may refuse if the request is considered to be a political offense or that execution of the request may likely risk its “sovereignty, security or other essential interests.” NGO RESPONSES AND CRITICISMS We don’t want to pass a text against the people.56 — Peter Csonka, Deputy Head of the Council of Europe’s Economic Crime Division 831


LAW, INVESTIGATION, AND ETHICS The experts should be proud of themselves. They have managed during the past eight months to resist pernicious influence of hundreds if not thousands of individual computer users, security experts, civil liberties groups, ISPs, computer companies and others outside of their select circle of law enforcement representatives who wrote, faxed and e-mailed their concerns about the treaty.57 — David Banisar, Deputy Director of Privacy International We don’t have any comment regarding these protestings. Everyone is entitled to their own opinion, but we have no comment.58 — Debbie Weierman, FBI Spokeswoman

Within days of the CoE’s release of its first public draft of the Convention on Cyber-Crime, as well as the release of its subsequent versions, opposition groups rallied together and flooded the Council with requests urging the group to put a hold on the treaty. The 22nd draft received over 400 emails.59 The Global Internet Liberty Campaign, an organization consisting of 35 lobby groups ranging from Internet users to civil liberties activists and anti-censorship groups, wrote to the European Council stating that they “believe that the draft treaty is contrary to well-established norms for the protection of the individual (and) that it improperly extends the police authority of national governments.”60 Member organizations represent North America, Asia, Africa, Australia, and Europe, and include the American Civil Liberties Union, Privacy International (United Kingdom), and Human Rights Network (Russia). Other groups opposed to the proposed treaty are the International Chamber of Commerce, all the ISP associations, and data security groups that are concerned with some key areas regarding human rights, privacy, and the stifling of innovation. Lack of NGO Involvement The primary concern — and the problem from which all the others stem — is the fact that the PC-CY worked in seclusion without the involvement of important interest groups representing human rights, privacy, and industry. According to opposition sources, the PC-CY is comprised of “police agencies and powerful private interests.”61 A request by the author was made to the CoE for a list of PC-CY members; however, the request was declined, stating that they “are not allowed to distribute such a list.”62 Throughout the entire period during which the PC-CY was drafting the treaty, not a single open meeting was held. Marc Rotenberg of the Electronic Privacy Information Center called the draft a “direct assault on legal protections and constitutional protections that have been established by national governments to protect their citizens.”63 If the three years of work done by the PC-CY were more inclusive and transparent, many if not all of 832


The International Dimensions of Cybercrime the remaining issues could have already been addressed. Unfortunately, although opposition has been expressed, little has been done to address the issues raised; and the Council of Europe passed the Convention regardless. Overextending Police Powers and Self-Incrimination A chief concern of many opposition groups is that the Convention extends the power of law enforcement beyond reasonable means and does not provide adequate requirements to ensure that individual rights are preserved. The Global Internet Liberty Campaign points out that an independent judicial review is not required before a search is undertaken. Under Article 19 of the Convention, law enforcement is empowered to search and seize any computer system within its territory that it believes has data that is lawfully accessible or available to the initial system. With today’s operating systems and their advanced networking capabilities, it is difficult to find a computer system without a network connection that would make it accessible to any other system. The only question remaining is whether that access is “lawful.” If law enforcement draws the same conclusion, where might they stop their search? Such a broad definition of authority can implicate nearly any personal computer attached to the Internet. Furthermore, Article 19 gives law enforcement the power to order any person who has knowledge about the functioning of the computer system, or measures applied to protect the computer data therein, to provide any information necessary to grant access. This would easily include encryption keys or passwords used to encrypt information. To date, only Singapore and Malaysia are believed to have introduced such a requirement into law. The required disclosure of such information to some people might seem to be contrary to U.S. law and the Fifth Amendment, which does not require people to incriminate themselves. Privacy The Convention requires that ISPs retain records regarding the activities of their customers and to make that information available to law enforcement when requested. The Global Internet Liberty Campaign letter to the CoE stated, “these provisions pose a significant risk to the privacy and human rights of Internet users and are at odds with well-established principles of data protection such as the Data Protection Directive of the European Union.” They argue that such a pool of information could be used “to identify dissidents and persecute minorities.” Furthermore, for ISPs to be able to provide such information, the use of anonymous e-mailers and Web surfing tools such as SafeWeb would need to be outlawed because they mask much of the information that ISPs would be expected to provide. ISP organizations have also taken exception to the proposed requirements, which would place a heavy responsibility on them to manage burdensome record-keeping tasks as well as capture and maintain the informa833


LAW, INVESTIGATION, AND ETHICS tion. In addition, they would be required to perform the tasks necessary to provide the requested information. Mutual Assistance Under the Convention’s requirements, countries are not obligated to consider dual criminality to provide mutual assistance. That is, if one country believes that a law under the new Convention’s guidelines is broken and the perpetrator is in foreign territory, that foreign country, as the “requested nation,” is required to assist the “requesting nation,” regardless of whether a crime was broken in the requested nation’s territory. The “requested nation” is allowed to refuse only if they believe the request is political in nature. What will happen if there is a disagreement in definition? In November of 2001, Yahoo! was brought to trial in France because it was accused of allowing the sale of Nazi memorabilia on its auction site — an act perfectly legal in the United States, Yahoo!’s home country. Barry Steinhardt, associate director for the American Civil Liberties Union, asked, “Is what Yahoo! did political? Or a ‘crime against humanity,’ as the French call it?” Germany recently announced that anyone, anywhere in the world, who promotes Holocaust denial is liable under German law; and the Malaysian government announced that online insults to Islam will be punished.64 How will this impact national sovereignty over any country’s citizens when that country legally permits freedom of speech? Stifling of Innovation and Safety Article 6 of the Convention, titled “Misuse of Devices,” specifically outlaws the “production, sale, procurement for use, import, distribution or otherwise making available of, a device, including a computer program, designed or adapted primarily for the purpose of committing any of the offences established (under Title 1).” The devices outlawed here are many of the same devices that are used by security professionals to test their own systems for vulnerabilities. The law explains that the use of such devices is acceptable for security purposes provided the device will not be used for committing an offense established under Title 1 of the Convention. The problem with the regulation is that it may prohibit some individuals or groups from uncovering serious security threats if they are not recognized as authorities or professionals. The world may find itself in a position whereby it must rely on only established providers of security software. They, however, are not the only ones responsible for discovering system vulnerabilities. Quite often, these companies also rely on hobbyists and lawful hacker organizations for relevant and up-to-date information. Dan Farmer, the creator of the free security program “SATAN,” caused a tremendous uproar with his creation. Many people saw his program solely as a hacking device with a purpose of discovering system weaknesses so that hackers could exploit them. Today, many professionals use that tool and 834


The International Dimensions of Cybercrime others like it in concert with commercially available devices to secure systems. Under the proposed treaty, Dan Farmer could have been labeled a criminal and possession of his program would be a crime. Council of Europe Response Despite the attention that the draft Convention on Cyber-Crime has received, CoE representatives appear relatively unconcerned; and the treaty has undergone minimal change. Peter Csonka, the CoE deputy head, told Reuters, “We have learned that we have to explain what we mean in plain language because legal terms are sometimes not clear.”65 It is interesting to note that members of the Global Internet Liberty Campaign — and many other lobby groups that have opposed elements of the Convention — represent and include in their staff and membership attorneys, privacy experts, technical experts, data protection officials, and human rights experts from all over the world. The chance that they all may have misinterpreted or misread the convention is unlikely. PART 3: APPROACHES FOR INTERNET RULE The effects of globalization have increasingly challenged national governments. Little by little, countries have had to surrender their sovereignty in order to take advantage of gains available by global economic and political factors. The Council of Europe’s Convention on Cyber-Crime is a prime example. The advent of the Internet and global communications networks have been responsible for tearing down national borders and permitting the free flow of ideas, music, news, and possibly a common culture we can call cyber-culture. Saudi Arabia is feeling its sovereignty threatened and is attempting to restrict access to Web sites that it finds offensive. France and Germany are having a difficult time restricting access to sites related to Nazism. And all countries that are taking full advantage of the digital age and its tools are threatened by cyber-criminals, whether they are a neighborhood away or oceans away. Sovereign nations are choosing to control the threat through the CoE’s cyber-crime treaty. Is this the only option for governing the Internet? No, not necessarily. The following is a selection of possible alternatives. Anarchic Space The Internet has remained relatively unregulated. Despite government attempts, Saudis can still access defamatory information about the Saudi royal family; and U.S. citizens are still able to download copyrighted music regardless of restrictions placed on Napster. It is possible that the Internet could be treated as anarchical space beyond any control of nations. This, however, does not solve the cyber-crime problem and could instead lead to an increase in crime. 835


Supranational Space
On the opposite end of the spectrum, a theoretical possibility is that of the Internet as supranational space. Under this model, a world governing body would set legislation and controls. Because no world government actually exists, this is not a realistic option.

National Space
A more probable approach is the treatment of the Internet as national space, wherein individual nations would be responsible for applying their own territorial laws to the Internet. This approach, unfortunately, seems to be favored by the more powerful nations such as the United States, but it has little effect without coordination and cooperation from other nations and nongovernmental organizations (NGOs).

Epistemic Communities
Another option for Internet rule could be to establish an epistemic community — a “knowledge-based transnational community of experts with shared understandings of an issue or problem or preferred policy responses.”66 This has been a successful approach leading up to the Outer Space Treaty and the Antarctic Treaty. The Outer Space Treaty claims outer space as the “province of mankind”67 and the Antarctic Treaty “opens the area to exploration and scientific research, to use the region for peaceful purposes only, and to permit access on an equal, nondiscriminatory basis to all states.”68 Scientists specializing in space and ocean sciences have driven much of the decision making that has taken place. A similar approach was used in the computing environment when decisions were made on how to make the Internet handicap accessible. Experts gathered with an understanding of the issue and implemented systems to manage the problem. However, as has been discussed, national governments have an interest in controlling particular aspects of the Internet; and an epistemic community does not provide them the control they desire. Therefore, the success of an epistemic solution in resolving the cyber-crime threat is unlikely.

International Regimes
The most obvious choice for Internet rule — bearing in mind its borderless nature and the interest of states to implement controls and safeguards — is an international regime. According to the noted regime theory expert Stephen Krasner, a regime is defined as “sets of implicit or explicit principles, norms, rules, and decision-making procedures around which actors’ expectations converge in a given area of international relations.”69 In fact, it can be argued that a regime is already in the making concerning Internet rule and cyber-crime, and that the Council of Europe’s Convention on


The International Dimensions of Cybercrime Cyber-Crime represents the regime’s set of explicit “rules.” Regrettably, the rules outlined by the Convention do not represent the principles of all the actors. The actors concerning Internet rule extend beyond national governments and include all of the actors that have been described previously, including individual users, privacy and human rights advocates, corporations, ISPs, and, yes, national governments. The Convention was created solely by government representatives and therefore has ignored these other important actors. If a cyber-crime regime did exist that included all interested parties or actors, the principles, norms, rules, and decisionmaking procedures would be different than what is currently represented in the CoE cyber-crime treaty. The principles — “beliefs of fact, causation, and rectitude”70 — for a government-based regime as witnessed in the Convention are primarily concerned with preservation of sovereignty. The focus of the Convention is based on the needs of government-based law enforcement for pursuing and capturing the agent responsible for limiting state sovereignty — the cyber-criminal. A treaty drafted by a fully represented regime would include recommendations and regulations that consider the need for unhindered innovation and the preservation of privacy and basic human rights. Such a regime would also foster discussions that could take place concerning the detrimental effects of criminalizing hacking tools and maintaining communications records for all Internet users. The norms — “standards of behavior defined in terms of rights and obligations”71 — for the government-based regime once again center on the need to pursue and deter cyber-criminals. The articles addressing mutual assistance explicitly define the obligations and rights of states concerning jurisdiction, extradition, and extraterritoriality, while paying little respect to the rights of individuals under their own territorial laws. A fully represented regime could table issues concerning the need for dual criminality. The rules — “specific prescriptions or proscriptions for action” — that would be included in a government-based regime are now painfully evident. Although most of the convention rules are necessary for addressing the cyber-crime problem, their lack of sensitivity to nongovernmental interests is clear. Finally, the decision-making procedures — prevailing practices for making and implementing collective choice — are obviously absent of any representation outside of government interests. If it were possible to roll back time by three years — and instead of having closed-door sessions with minimal representation, have open meetings that practiced transparency in all of its dealings and invited representation of all actors involved in Internet activity — the Convention would most likely be a treaty that truly represented the opinions of the collective Internet community. 837


LAW, INVESTIGATION, AND ETHICS PART 4: FORMULA FOR SUCCESS It is surprising that the CoE, an organization that proclaims one of its primary aims to be “to protect human rights,”72 would ignore the basic principles of regime theory and the success factors of thriving international regimes, instead prescribing rules that primarily cater to the needs of law enforcement. One of the more obvious examples of a successful regime is based on the Montreal Protocol on Substances that Deplete the Ozone Layer signed in 1987. As a result of the Montreal Protocol, industries have developed safer, cleaner methods for handling ozone-depleting chemicals and pollutionprevention strategies.73 The success of this regime can be directly attributed to the cooperation and coordination among all relevant actors, including government, industry, and environmental sciences. The Convention on Cyber-Crime is open for signatures, the opposition has spoken, and it appears that the only thing standing in the way of the treaty becoming law is the final ratification and introduction of national laws by individual countries. It is now too late for the cyber-crime treaty to truly represent the opinions of all the primary actors, but it is still possible for individual nations to protect the interests of its citizenry. Pressure on the more powerful nations may be enough to make sure that what is adopted will include appropriate measures and safeguards. Unfortunately, many countries do not have a very good history of keeping the best interests of its citizens in mind when they create their laws. Regardless of the ultimate outcome of the treaty, a broadly represented regime is vital to future success in fighting the cyber-crime threat. Although the Convention may not be an ideal solution, it is possible that the introduction of the Convention on Cyber-Crime and the worldwide attention that it has brought to cyber-crime will be the catalyst for finally establishing an effective cyber-crime regime — one that truly represents all actors. Notes 1. Minihan, K.A. “Defending the Nation against Cyberattack: Information Assurance in the Global Environment.” USIA, U.S. Foreign Policy Agenda. Nov. 1998, p. 1. Feb. 27, 2001. 2. Excerpt from the source file posted by the computer hacking group “Hacking 4 Girliez.” The text was displayed on the defaced New York Times Web site, September 13, 1998. 3. “Hacking Around, A NewsHour Report on Hacking.” The NewsHour with Jim Lehrer. May 8, 1998. PBS Online. Apr. 16, 2001. 4. The term “cyberspace” was first used by author William Gibson in his 1984 science fiction novel Neuromancer. 5. Steiner, P. “A Dog, Sitting at a Computer Terminal, Talking to Another Dog.” Cartoon. The New Yorker, Jul. 5, 1993 6. Schiller, J. “Profile of a Hacker.” The NewsHour with Jim Lehrer. PBS Online. May 8, 1998. Transcript. Mar. 14, 2001, p. 1.



The International Dimensions of Cybercrime 7. The annual “CSI/FBI Computer Crime and Security Survey” for 2000 is based on the responses from 643 computer security practitioners in U.S. corporations and government agencies. 8. Power, R. “2000 CSI/FBI Computer Crime and Security Survey.” Computer Security Journal, XVI(2), 45, Spring 2000. 9. “Russia’s Hackers: Notorious or Desperate?” CNN.com. Nov. 20, 2000. . Jan. 25, 2001, p. 1. 10. “Russia’s Hackers: Notorious or Desperate?” CNN.com. Nov. 20, 2000. .Jan. 25, 2001, p. 1. 11. “10 Foreign Hot Spots for Credit Card Fraud.” Internet World. Feb. 1, 1999. Infotrac. Mar. 24, 2001, p. 1. 12. The London School of Economics and Political Science. “Cybercrime: The Challenge to Leviathan?” Feb. 27, 2001, p. 1. 13. The London School of Economics and Political Science. “Cybercrime: The Challenge to Leviathan?” Feb. 27, 2001, p. 1. 14. The London School of Economics and Political Science. “Cybercrime: The Challenge to Leviathan?” Feb. 27, 2001, p. 1. 15. Freeh, L.J. “Statement for the Record of Louis J. Freeh, Director, Federal Bureau of Investigation on Cybercrime before the Senate Committee on Judiciary Subcommittee for the Technology, Terrorism, and Government Information.” Department of Justice, Mar. 28, 2000. Jan. 26, 2002. 16. IMRG Interactive Media in Retail Group. “Napster Offers $1 Billion to Record Companies.” Feb. 21, 2001. April 1, 2001, p. 1. 17. Computer Crime and Intellectual Property Section (CCIPS) of the Criminal Division of the U.S. Department of Justice. Computer Intrusion Cases. Mar. 31, 2001. , p. 1. 18. The Affidavit for Robert Hanssen’s arrest is available online at . 19. Godoy, J. “Computers and International Criminal Law: High Tech Crimes and Criminals.” Lexis Nexis, 2000. New England International and Comparative Law Annual. Mar. 24, 2001. . 20. Minihan, K.A. “Defending the Nation against Cyberattack: Information Assurance in the Global Environment.” USIA, U.S. Foreign Policy Agenda. Nov. 1998, p. 1. Feb. 27, 2001. 21. Vise, D.A. “FBI Sees Rising Threat from Computer Crime.” Lexis Nexis, Mar. 21, 2001. International Herald Tribune, Mar. 24, 2001, p. 1. 22. Vise, D.A. “FBI Sees Rising Threat from Computer Crime.” Lexis Nexis, Mar. 21, 2001. International Herald Tribune, Mar. 24, 2001, p. 1. 23. Charney, S. “The Internet, Law Enforcement and Security.” Internet Policy Institute. Feb. 27, 2001, p. 1. . 24. Denning, D. “Reflections on Cyberweapons Controls.” Computer Security Journal. XVI(4), 1, Fall 2000. 25. Denning, D. “Reflections on Cyberweapons Controls.” Computer Security Journal. XVI(4), 1, Fall 2000. 26. Denning, D. “Reflections on Cyberweapons Controls.” Computer Security Journal. XVI(4), 1, Fall 2000. 27. U.S. Department of Justice, “Juvenile Computer Hacker Cuts Off FAA Tower at Regional Airport.” Press Release. Mar. 18, 1998, p. 1. Jan. 4, 2001. 28. Information Technology Association of America, “Industry Partnerships to Combat Cyber Crime Take on Bold Agendas.” InfoSec Outlook. Feb. 27, 2001, p. 1. . 29. Attrition.Org maintains defacement counts and percentages, by domain suffix for worldwide Internet Web site defacement . Attrition.Org. Defacement Counts and Percentages, by Domain Suffix. Mar. 31, 2001. .



LAW, INVESTIGATION, AND ETHICS 30. Denning, D. “Reflections on Cyberweapons Controls.” Computer Security Journal. XVI(4), 43, Fall 2000. 31. Ticehurst, J. “Cybercrime Soars in the UK.” Vnunet.com. Nov. 6, 2000, p. 1. Jan. 25, 2001. 32. Vise, D.A. “FBI Sees Rising Threat from Computer Crime.” Lexis Nexis, Mar. 21, 2001, p. 1. International Herald Tribune, Mar. 24, 2001. 33. Kelsey, D. “GartneróHalf of All Small Firms Will Be Hacked.” Newsbytes. Oct. 11, 2000, p. 1. Mar. 27, 2001. 34. Konrad, R. “Hack Attacks a Global Concern.” CNET New.com. Oct. 29, 2000, p. 1. Feb. 27, 2001. 35. Konrad, R. “Hack Attacks a Global Concern.” CNET New.com. Oct. 29, 2000, p. 1. Feb. 27, 2001. 36. “Reno Urges Crackdown on Cybercrime in The Americas.” Nov. 27, 1998, p. 1. Fox News Network. Feb. 27, 2001. 37. “Many Countries Said to Lack Computer Crime Laws.” CNN.com. Jul. 26, 2000, p. 1. Jan. 25, 2001. 38. Schjolberg, S. “Penal Legislation in 37 Countries.” Moss Bryett, Moss City Court Web site. Feb. 22, 2001, p. 1. April 14, 2001. 39. McConnell International with Support from WITSA. Cyber Crime … and Punishment? Archaic Laws Threaten Global Information. McConnell International LLC. Dec. 2000, p. 5. 40. McConnell International with Support from WITSA. Cyber Crime … and Punishment? Archaic Laws Threaten Global Information. McConnell International LLC. Dec. 2000, p. 6. 41. Black, H., Campbell, M.A., Nolan, J.R., and Connolly, M.J. Black’s Law Dictionary, fifth edition. St. Paul: West Publishing Co.,1979, p. 489. 42. Law.Com Legal Dictionary. Apr. 25, 2001, p. 1. . 43. Black, H., Campbell, M.A., Nolan, J.R., and Connolly, M.J. Black’s Law Dictionary, fifth edition. St. Paul: West Publishing Co.,1979, p. 528. 44. Godoy, J. “Computers and International Criminal Law: High Tech Crimes and Criminals.” Lexis Nexis, 2000. New England International and Comparative Law Annual. Mar. 24, 2001, p. 1. . 45. Lee, J. “Punching Holes in Internet Walls.” New York Times, Apr. 26, 2001, p. G1. 46. Lee, J. “Punching Holes in Internet Walls.” New York Times, Apr. 26, 2001, p. G1.

ABOUT THE AUTHOR
Ed Gabrys, CISSP, is information security manager for People’s Bank in Bridgeport, Connecticut.



Chapter 47

Reporting Security Breaches
James S. Tiller, CISSP

If you are involved with information systems within an organization — whether at the highest levels of technical management or as an end user in a remote office — you will ultimately be faced with a security incident. Managing a security breach life cycle encompasses many managerial, technical, communication, and legal disciplines. To survive an event, you need to completely understand the event and the impacts of properly measuring and investigating it. When reporting an incident, the information provided will be scrutinized as it rolls up the ranks of the organization. Ultimately, as the report gains more attention and nears the possibility of publication, the structure of the incident report and supporting information will be critical. This chapter touches upon the definition of an incident and response concepts, but its focus is on reporting the incident. It is assumed that incident response processes, policy, mitigation, and continuity are all existing characteristics — allowing us to focus on the reporting process and escalation.

SCHROEDINGER’S CAT

A quick discussion on the value of information in the world of incidents is in order. Quantum mechanics is an interesting code of thought that finds its way into the world of security more often than not. Erwin Schroedinger produced a paper in 1935, “Die gegenwärtige Situation in der Quantenmechanik,” that introduced the “Cat” and the theory of measurement. In general, a variable has no definite value before it is measured; then measuring it does not mean ascertaining the value that it has but rather the value it has been measured against. Using Schroedinger’s example, let us assume there is a cat in a box, a black box. You open the box and the cat is dead. How do you know the cat was dead before you actually made the observation by opening the box? Opening the box could have killed it for all you
know. In the most basic terms, the interaction of variables with measurement requirements will raise the question of how much of the value obtained was associated with the act and process of measurement. Of course, Schroedinger’s Cat is a theory that impacts quantum mechanics more so than measuring your waistline, but establishing control sets and clear measurement policy related to the technology is critical in the space between the ordinary and the extraordinary. This simple paradox lends itself to interesting similarities in the world of security incidents — albeit loosely. Your actions when determining an event, or how you have set the environment for detecting an event, can have ramifications on the interpretation of the event as it is escalated and reported. How does the “cat” apply? It is necessary to measure from multiple points in various ways to properly ascertain the event when reporting it as an incident. For example, if you have an intrusion detection system (IDS) at your perimeter and another on your DMZ with an identical configuration and an anomaly is detected, you have proven an anomaly on both sides of your firewall. With information from the logs of the alleged target server and the firewall, you now have more disparate information sources to state your case and clearly ascertain the scope of the incident. Additionally, this will demonstrate the attention to clarity and comprehensiveness of the detection and documentation process, furthering the credibility of the report.

Another application of the analogy is the incident response process and the actual collection of information. Although we are focusing on reporting incidents, it is important for the reader to understand the importance of the information to be shared. Collecting information in support of detailing the incident can be a sensitive process, depending on two fundamental directions decided upon at the initial onset of incident response: proceed and protect, or pursue and prosecute. Care should always be practiced when collecting evidence from impacted systems, but this is most true when the decision to pursue and prosecute has been made. It is here, gathering data for future analysis, reporting, or evidence, that Schroedinger’s Cat can become a lesson in forensics. Simply stated, the act of extracting data — no matter the perceived simplicity or interaction — can affect the value as well as the integrity of the information collected. Was that log entry there because you created it? Understandably, this is an oversimplified example, but the point is clear — every interaction with a system can inherently impact your ability to measure the incident in its purest state. Based on Schroedinger’s theory, simply the act of quantifying will inevitably and unavoidably influence the measured outcome. Understanding the consequences of data collection during and after an incident will help you to clearly detail and report an event, ultimately building efficiencies into the mitigation process.
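One practical way to blunt the observer effect described above is to record the state of collected evidence at the moment of acquisition, so that any later handling can be checked against it. The Python sketch below illustrates the idea only; the file names, manifest fields, and collector label are hypothetical and are not a procedure prescribed by this chapter.

```python
# Minimal sketch, assuming copies of logs have already been taken off the
# affected systems: record a hash and collection note for each evidence file
# so that later handling can be verified against its state at acquisition.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large images or logs do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(evidence_files, collector):
    """Produce one manifest entry per file: who collected it, when, and its hash."""
    manifest = []
    for name in evidence_files:
        path = Path(name)
        manifest.append({
            "file": str(path),
            "sha256": sha256_of(path),
            "collected_by": collector,
            "collected_at": datetime.now(timezone.utc).isoformat(),
        })
    return manifest


if __name__ == "__main__":
    # Hypothetical evidence copies; only files that actually exist are hashed.
    candidates = ["firewall_20021114.log", "dmz_ids_20021114.log", "server_syslog.txt"]
    files = [f for f in candidates if Path(f).exists()]
    print(json.dumps(build_manifest(files, collector="incident-handler-01"), indent=2))
```

Recording the hash and timestamp at collection time does not prevent later change, but it makes any change detectable, which is what preserves the evidentiary value of the data you eventually report.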


SECURITY REQUIREMENTS

At the risk of communicating an oversimplification, it is necessary to state that proper configuration and management of security is critical. Through the use of technology and defined processes, you can accurately and confidently identify incidents within the network and quickly determine what happened and the vulnerability that was exploited.

Security Policy
Every discussion on security has a section on security policies and their importance. Security policies define the desired security posture by communicating what is expected of employees and systems as well as the processes used to maintain those systems. Security policies are inarguably the core of any successful security program within an organization. However, with regard to incident management, the criticality of security policies cannot be overstated. Security policies provide an opportunity to understand the detailed view of security within an organization. In many cases, security policies reflect common activities practiced within the organization regularly and can be used as a training resource as well as a communication tool. However, incident response policies could be considered the most important section of any security policy, based on the criticality and uniqueness of the process combined with the simple fact that incidents are not typical occurrences (usually). In the event of a rare occurrence, no one will know exactly what to do — step by step — and in all cases a referenceable document defining what should be done in accordance with the desired security posture can be your lifeblood. In the day-to-day activities of a nuclear plant, there is always the underlying threat of a failure or event; but it does not permeate the daily tasks — the operators prepare for and avoid those events through regular management of the systems. In the rare times there is a significant occurrence, the proprietors will always reference a process checklist to assist in troubleshooting. Another example is a pilot’s checklist — a systematic process that could be memorized; but if one portion is exercised out of order or missed, the result could end in disaster. Therefore, a security policy that clearly defines the identification and classification of an event should also state the process for handling and reporting the incident. Without this significant portion of a security policy, it is almost assured that the unguided response procedures will be painful and inconsistent.

Security Technology
In the realm of digital information, security is realized and measured through technology. The configuration of that technology and the defined
interaction with other forms of technology will directly impact the ability to recognize an incident and its eventual investigation. Security-related technology comes in many forms, ranging from firewalls and IDSs to authentication systems. Additionally, security characteristics can emerge from other technologies that are traditionally not directly associated with security and provide services beyond the envelope of information security. However, these become the tools to identify events in addition to becoming collection points for gaining information about the incident.

As briefly mentioned above, more points within a network that have the ability to detect or log events will increase the quantity of information available that can be correlated to amplify the quality and accuracy of the incident description. In addition to the number of points in the network, the type of each point and the layer at which it interacts may become the defining factor in isolating the event. For example, a firewall may log traffic flow by collecting information about source and destination IP addresses and port numbers. Along with time stamps and various other data, the information can be used to identify certain characteristics of the incident. To obtain even more of the picture, the target operating system, located by the destination IP address from the firewall’s logs, may have logs detailing certain actions on the system that are suspicious in nature and fall within the time-of-attack window established by the firewall’s logs. The last piece of the puzzle is provided by a system-monitoring package, such as Tripwire — an application that essentially detects changes in files. Based on the information from Tripwire, it may appear that several files were changed during the time of the attack. A short search on the Internet may reveal that a Trojan version of the file is in the wild that can provide temporary administrative access using port 54321, which you have verified from the firewall and system logs. Additionally, the report continues to detail known implantation techniques to install the Trojan — replacing the valid file — by leveraging a weakness in the TCP/IP stack: sending overlapping packets that result in distorted IP headers. It was the “notification” log on the firewall that allowed you to initially determine the time frame of the attack; but without the other information, you would be hard-pressed to come to the same detailed conclusion.

The purpose of the example is to communicate the importance of disparate information points and types within the network. The firewall passed the packet because it was not denied by the rules and the header structure fell within limits, but the vulnerability exploited in the operating system could not survive those changes. The file implantation would normally go undetected without the added information from Tripwire.
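To make the correlation step in the example concrete, the sketch below lines up hypothetical firewall entries and Tripwire-style file-change reports against a shared time window. The record layouts, addresses, file paths, and the port number 54321 are invented for illustration; they are not a format produced by any particular firewall or by Tripwire itself.

```python
# Minimal sketch of the correlation described above: take the time window
# suggested by suspicious firewall entries and pull the file-change records
# that fall inside it. All data below is illustrative only.
from datetime import datetime, timedelta

firewall_log = [
    # (timestamp, source IP, destination IP, destination port)
    ("2002-11-14 02:11:05", "203.0.113.7", "192.0.2.20", 80),
    ("2002-11-14 02:13:42", "203.0.113.7", "192.0.2.20", 54321),  # suspicious port
    ("2002-11-14 02:14:10", "203.0.113.7", "192.0.2.20", 54321),
]

file_changes = [
    # (timestamp, path) as a Tripwire-style monitor might report them
    ("2002-11-14 02:13:55", "/usr/sbin/inetd"),
    ("2002-11-14 03:40:02", "/var/log/maillog"),
]


def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")


def correlate(port_of_interest, slack_minutes=5):
    """Return file changes inside the window framed by the suspicious
    firewall entries, padded by a small slack on either side."""
    hits = [parse(ts) for ts, _src, _dst, port in firewall_log if port == port_of_interest]
    if not hits:
        return []
    start = min(hits) - timedelta(minutes=slack_minutes)
    end = max(hits) + timedelta(minutes=slack_minutes)
    return [(ts, path) for ts, path in file_changes if start <= parse(ts) <= end]


print(correlate(54321))  # -> [('2002-11-14 02:13:55', '/usr/sbin/inetd')]
```

The value of the exercise is not the code but the habit: each additional, independent log source narrows the window and strengthens the credibility of the eventual report.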


Reporting Security Breaches It is clear that ample information is helpful, but the variety of data can be the defining factor. Therefore, how technology is configured in your environment today can dramatically affect the ability to detect and survive an incident in the future. Additionally, the example further demonstrates the need for incident response policies and procedures. Without a well-documented guide to follow, it is doubtful that anyone would be able to traverse the complicated landscape of technology to quickly ascertain an incident’s cause, scope, and remedy. REPORT REASONING There are many attributes of incident management that must be considered within the subject of reporting. This section discusses: • Philosophy. Simply stated, why report an incident at all? This question insinuates notifying the public, but it can be applicable for internal as well as partnership communications. What are the benefits and pitfalls of reporting an incident? • Audience. When reporting anything, there must be an audience or scope of the people who will be receiving or wanting the information. It is necessary to know your constituents and the people who may have a vested interest in your technical situation. • Content. As information is collected about an incident, there will certainly exist data that an organization would not want to share with some communities that make up the audience. It is necessary to determine the minimal information required to convey the message. • Timing. The point in time when an incident is reported can have dramatic impacts within and beyond an organization. This is especially true when the incident investigation reveals a vulnerability that affects many people, departments, or companies. Philosophy Reporting an incident will undoubtedly have ramifications internally; and based on the type, scope, and impact of the event, there could be residual effects globally. So, given the exposure and responsibility — why bother? What are the benefits of reporting that you have a weakness or that you were successfully attacked because you were simply negligent in providing even the basic security? In this light, it seems ridiculous to breathe a word that you were a victim. To add to the malaise, if you report an incident prior to assuring the vulnerability used for the attack is not rectified, you may be in for many more opportunities to refine your incidence response process. Finally, once attackers know you do not have a strong security program or do not perform sound security practices, they may attempt to attack you in hope of finding another vulnerability or simply slip 845


under the radar of confusion that runs rampant in most companies after an incident.

The answer, as one may expect, is not simple. Several factors are used to determine whether an incident should be reported and, ultimately, to whom, when, and what should be shared. The following are some of the factors that may need to be considered. Ultimately, it is a lesson in marketing.

Impact Crater. Essentially, how bad was the impact and who — or what — was affected by the debris? With certain events that stretch the imagination and have catastrophic results, it is usually best to be the reporter and provide your perspective, position, and mitigation before CNN drops the bomb on you publicly.

It is usually best to report your situation first rather than be put in the position of defending your actions. This is a reality for public reports in addition to internal reporting. For example, if the IS department makes an enormous security oversight and money is lost due to the exploited vulnerability, accepting responsibility prior to having an investigation uncover the real issue may be best. Who’s on First? Somewhat related to the impact crater, many organizations will be attacked and attempt to deal with it internally — or within the group. Unfortunately for these organizations, the attackers are usually trying to prove their capability in the hacking community. After some chest thumping on news groups, your demise will soon be public. Again, when faced with public interpretation of the event, it is typically better to be first. Customer Facing. If the attack affected customer systems or data, you may have no choice. You may not have to reveal the incident publicly; but in the event a customer or partner was affected, you must report the situation, history, plan for mitigation, recovery options, and future protection. If you do not, you run an extreme liability risk and might never recover from the loss of reputation.

The previous factors can be presented in many ways, but all cast a dark shadow on the concept of exposure and do not present any positive reason for reporting an incident. No one wants to be perceived as weak publicly or internally — to customers or partners. However, there are factors that, when properly characterized within the scope of the incident and business objectives, make it essential that a report evolve from an event. Following are some points of interest regarding reporting.

Well Done. There are many occasions where a vulnerability was exploited but there was little or no loss associated with the attack. Moreover, the vulnerability may have proved to be extreme in terms of industry


exposure; it just so happened that you experienced the attack on a system you had practically forgotten was still in the wire closet. Or better yet, your security awareness and vigilance allowed you to identify the incident in real time, mitigate the attack, and determine its structure and target. This, of course, is how it is supposed to work: detect, identify, eradicate, and learn — all without suffering from the attack. If this is the case, you could substantially benefit from letting people know how good you are at security.

Fix First. In some situations, it may be beneficial to report an incident to convey to your constituents that there is a new threat afoot and to demonstrate your agility and accuracy in handling the incident.

Good Samaritan. In some cases, you may simply be ethically drawn to

report the details of an incident for the betterment of the security community and vendors, who can learn and improve based on the information. Of course, all previous points may apply — mitigate the exposure and clearly identify the incident.

Truly, at the end of the day, if an event is detected — regardless of impact — there should be a report created and forwarded to a mediator who works within the organization's policy and the dynamics of the attack to properly determine the next step. If the vulnerability is like the recent SNMP vulnerability, it is generally accepted that working with the vendors first is the best plan of global mitigation. How you identify and react to an attack will relate to whom, what, and when you report.

Audience

For better or for worse, the decision has been made to report the incident; now the appropriate audience must be determined. You can report to one group or several, but assume the obvious leakage when dealing with people and sensitive information. For example, if you do not feel the employees need to know, it would be unwise to tell the partners, customers, or the public. Keeping this in mind, it is also necessary to understand the audience (for the purposes of this discussion); this is your primary audience, and others may be indirect recipients — purposely. For example, the managers should know that there was an incident that could impact operations temporarily. This should not be kept from the employees, but the managers could be advised to convey the announcement to their respective groups within a certain time frame.

To add to the complexity, the audience type is proportional to the impact of the incident and the philosophy, or mindset, behind the report. Essentially, a three-dimensional matrix should be constructed, with one axis being the impact or criticality of the event, another the response structure (speed, ethics-based or self-preservation, etc.), and the last a timeline of events. The matrix would then help determine who should know the details of the incident and when. Nevertheless, it is feasible to segment the different audience types with associated descriptions to help you assess the appropriate target based on the incident characteristics.
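As a rough illustration of such a matrix, the following sketch maps hypothetical impact levels and reporting postures to audiences and timing; every value shown is a placeholder to be replaced by the choices in your own disclosure policy.

```python
# Sketch of the audience matrix described above: impact level and reporting
# posture select the audiences and the rough timing. All values are placeholders.
AUDIENCE_MATRIX = {
    # (impact, posture): (audiences, timing)
    ("high", "self-preservation"): (["managers", "customers", "public"], "immediately"),
    ("high", "ethics-based"):      (["managers", "vendors", "CERT/CC"], "within 24 hours"),
    ("medium", "ethics-based"):    (["managers", "vendors"], "within 72 hours"),
    ("low", "ethics-based"):       (["managers"], "next scheduled report"),
}

def reporting_plan(impact, posture):
    """Return (audiences, timing) for an incident, or a safe default."""
    return AUDIENCE_MATRIX.get((impact, posture), (["managers"], "after triage"))

audiences, timing = reporting_plan("high", "ethics-based")
print(audiences, timing)
```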


Customers. Customers are people, groups, or companies to whom you provide a service or product. Depending on the incident type and scope, it may be necessary to notify them of the event. Customers are entities that invest in your organization through their utilization of your product or service. The greater the investment, the greater the expectation for a supportive and long relationship. If a customer's investment in your organization is affected, reporting may be critical.

As stated above in the section “Well Done,” properly responding to an attack and formulating a mitigation process to recover from the attack can offset the strain on the relationship between you and the customer and, in some circumstances, enhance the relationship. Vendors. One of the more interesting aspects of reporting incidents is the involvement of vendors. For example, if you only use Cisco routers and switches and suffer a breach that is directly associated to a vulnerability in their product, you want them to know about your discovery in order to fix it. In the event they already know, you can become more involved in the remedy process. Of course, you must first overcome the “if they knew, why did I have to get attacked” argument.

Another characteristic of vendor notification is the discovery of the vulnerability through a noncatastrophic incident and having to decide how long they have to fix the vulnerability prior to notifying the public. In many cases, this situation evolves from the discovery of a vulnerability through testing and not the exploitation via an active attack. In the event the vulnerability was determined through a recorded incident, the target organization usually is very patient in allowing the vendor to provide a fix. The patience is mostly due to the desire to let the vendor announce the vulnerability and the fix — making the vendor look good — relieving the victim of the responsibility and exposure and alleviating the vulnerability. If the vulnerability is detected through testing, the testers were usually looking for a weakness to discover. Therefore, in many scenarios, the testers want people to know their discovery; and waiting around for a vendor to provide patches runs against that desire. In all fairness, it is very common for a vulnerability to be discovered and shared with the vendor prior to letting the general public know. There have been occasions when it has taken the vendor a year to get the fix addressed due to its complexity. The person who discovered the vulnerability was 848


Reporting Security Breaches assured they were working on the fix and was ultimately hired to assist in the mitigation. For vendors that want to have a chance to fix something before the vulnerability is exposed and there are no protection options for their customers, it is necessary to communicate on all levels. Do not ignore the people who provided you the information. For someone who has expended effort in discovering a vulnerability, the feeling that they are not being taken seriously will definitely expedite the public’s awareness of your weakness. One example was a large organization that had a firewall product and received an e-mail detailing a vulnerability and a request for an audience to discuss rectifying the proposed serious hole. After many attempts to gain the much-desired attention, the person became frustrated and turned to the public to ensure that someone would know the existence of the vulnerability. The consequence of ignoring the first contact resulted in customers — some of whom had validated the vulnerability — flooding the vendor with demands for assistance, only to realize the vendor had accomplished very little to date. This entire fiasco reflected badly on the vendor by publicizing its incompetence and inability to meet customer demands with its product. Partners. Partners are usually companies that establish an alliance with your company to reach a similar objective or augment each other’s offerings to customers. Partners can be affected by incidents, especially when there are connections between the entities or the sharing of applications that were impacted. If an incident hinders business operations to a point where a partner’s success or safe operation is in jeopardy, a notification with details must be communicated.

It is a crucial priority to advise partners of increased exposure to threats because of an incident on your network. Reporting the incident to the partner, and the impact it may directly have on them, needs to be addressed in the incident response policies.

Employees. Employees (or contractors) are people who perform the necessary functions required by the company to accomplish the defined business objectives. In nearly every situation where an incident affects multiple users, employees are typically informed immediately with instructions. The reality is that word-of-mouth and rumor will beat you to it, but providing a comprehensive explanation of the incident and the procedures employees must follow to protect the company's information assets is necessary.

Managers. Managers are typically informed when the incident can lead to more serious business ramifications that may not be technically related. For example, if an attack is detected that results in the exposure of the entire payroll, employees may get very upset — understandably. It is necessary to control the exposure of information of this nature to the general


population to limit unfounded rumors. Additionally, it must be assumed that there is a strong probability the attacker is an employee. Communication of the incident to the general staff could alert the perpetrators and provide time to eliminate any evidence of their involvement. Obviously, it is necessary for the person or department responsible for the investigation to report to managers to allow them the opportunity to make informed decisions. This is especially critical when the data collected in preliminary investigations may provide evidence of internal misconduct.

Public. One of the more interesting aspects of reporting incidents is communicating to the public the exposure to new threats. In most circumstances, reporting security incidents to the public is not required. For example, a privately held company may experience an event that does not directly impact production, the quality of its product, or the customer's access to that product. Therefore, there is little reason to publicize the issue, generally speaking. However, it depends on the scope of your company. Following are some examples:

• Product vendor. Beyond debate, if a product vendor discovers a vulnerability with its implementation, the vendor is inescapably responsible to communicate this to its clientele. Granted, it is best to develop a solution — quickly — to provide something more than a warning when contacting customers. Sometimes, the general public represents the audience. A clear example is Microsoft and its reaction to security vulnerabilities that will virtually impact everyone.
• Service providers. Information service providers, such as application service providers (ASPs), Internet service providers (ISPs), etc., are responsible to their customers to make them aware of an exposure that may affect them. Some very large service providers must disseminate information to a global audience. In addition to the possible scope of a provider's clientele, other service providers can greatly benefit from knowing the impact and process associated with the incident in their attempt to avoid a similar incident. A perfect example is the distributed denial-of-service (DDoS) attack. Now that service providers as well as the developmental community understand the DDoS type of attack, it is easier to mitigate the risk, ultimately gaining more credibility for the industry from the customer's perception.
• Public companies. After the Enron and Arthur Andersen debacle, the sensitivity of disclosing information has reached a new peak. In a short time the trend moved from concern over information accuracy to include information breadth. Consequently, if an incident occurs in an organization that is publicly traded, the repercussions of not clearly reporting incidents could cause problems on many levels.


Reporting Security Breaches Content and Timing What you report and when are driven by the type of incident, scope, and the type of information collected. For internal incidents, ones that affect your organization only, it is typical to provide a preliminary report to management outlining the event and the current tasks being performed to mitigate or recover. The timing is usually as soon as possible to alert all those who are directly associated with the well-being of business operations. As you can see, the content and timing are difficult to detail due to the close relation to other attributes of the incident. Nevertheless, a rule of thumb is to notify management with as much information as practical to allow them to work with the incident team in formulating future communications. As time passes and the audience is more displaced from the effects of the incident, the information is typically more general and is disseminated once recovery is well on its way. COMMUNICATION In communications there should always be a single point within an organization that handles information management between entities. A marketing department is an example of a group that is responsible for interpreting information detailed from internal sources to formulate a message that best represents the information conveyed to the audience. With incident management, a triage team must be identified that serves as the single gateway of information coming into the team and controls what is shared and with whom based on the defined policies. The combination of a limited team, armed with a framework to guide them, ensures that information can be collected into a single point to create a message to the selected audience at the appropriate time. Reporting an incident, and determining the audience and the details to communicate, must be described in a disclosure policy. The disclosure policy should detail the recipients of a report and the classification of the incident. It should also note whether the report would span audiences and whether the primary audience should be another incident response group internally or a national group such as CERT/CC. The CERT/CC is a major reporting center for Internet security problems. The CERT/CC can provide technical assistance and coordinate responses to security compromises, identify trends in intruder activity, work with other security experts to identify solutions to security problems, and disseminate information to the broad community. The CERT/CC also analyzes product vulnerabilities, publishes technical documents, and presents training courses. Formerly known as the Computer Emergency Response Team of Carnegie Mellon University, it was formed 851


LAW, INVESTIGATION, AND ETHICS at the Software Engineering Institute (SEI) by the Defense Advanced Research Projects Agency (DARPA) in 1988. Incident response groups will often need to interact and communicate with other response groups. For example, a group within a large company may need to report incidents to a national group; and a national incident response team may need to report incidents to international teams in other countries to deal with all sites involved in a large-scale attack. Additionally, a response team will need to work directly with a vendor to communicate improvements or modifications, to analyze the technical problem, or to test provided solutions. Vendors play a special role in handling an incident if their products’ vulnerabilities are involved in the incident. Communication of information of this nature requires some fundamental security practices. The information and the associated data must be classified and characterized to properly convey the appropriate message. Classification Data classification is an important component of any well-established security program. Data classification details the types of information — in its various states — and defines the operational requirements for handling that information. A data classification policy would state the levels of classification and provide the requirements associated with the state of the data. For example, a sensitive piece of information may only exist on certain identified systems that meet rigorous certification processes. Additionally, it is necessary to provide the distinctive characteristics that allow people to properly classify the information. The data classification policy must be directly correlated with the incident management policy to ensure that information collected during investigation is assigned the appropriate level of security. Included in the policy is a declassification process for the information for investigative processes. For example, the data classification policy may state that operating system DLL files are sensitive and cannot have their security levels modified. If the DLL becomes a tool or target of an attack, it may be necessary to collect the data that may need to be reported. It is at this point the incident response management policy usually takes precedence. Otherwise, bureaucracy can turn the information collection of the incidence response team into an abyss, leading to communication and collaboration issues that could hinder the response process. 852


Identification and Authentication

Prior to sharing information, it should be considered a requirement to authenticate the recipient(s) of the information. Any response organization, including your own, should have some form of identification that can be authenticated. Certificates are an exceptional tool that can be utilized to identify a remote organization, group, individual, or role. Authentication can be provided by leveraging the supporting public key infrastructure (PKI) to authenticate via a trusted third party through digital signatures. Very similar to PKI — and also based on asymmetrical encryption — pretty good privacy (PGP) can authenticate based on the ability to decrypt information or to sign data, proving the remote entity is in possession of the private key.

Confidentiality

Once you have asymmetrical keys and algorithms established for authentication, it is a short step to use that technology to provide confidentiality. Encryption of sensitive data is considered mandatory, and the type of encryption will more than likely use large keys and advanced algorithms for increased security. Symmetrical as well as asymmetrical encryption can be used to protect information in transit. However, given the sensitivity, multiple forms of communication, and characteristics of information exchange, asymmetrical encryption is typically the algorithm of choice. (Defaulting to asymmetrical encryption also simplifies the communication process, because the same key pairs used for authentication can be used for encryption.)
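As a minimal sketch of this sign-and-encrypt exchange, the following example uses the third-party Python cryptography package (assumed to be installed); it stands in for whichever PKI or PGP tooling your organization actually deploys, and it omits certificate validation and key distribution entirely.

```python
# Sketch only: sign a report for authentication, encrypt it for confidentiality.
# Keys are generated in place here; real exchanges rely on certified key pairs.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

report = b"Incident summary for the response team"  # hypothetical payload

# Sender signs the report, proving possession of the private key (authentication).
signature = sender_key.sign(
    report,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Sender encrypts to the recipient's public key (confidentiality).
ciphertext = recipient_key.public_key().encrypt(
    report,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Recipient decrypts and verifies; verify() raises an exception on tampering.
plaintext = recipient_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
sender_key.public_key().verify(
    signature, plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified, report decrypted")
```

In practice the bulk of the data would be protected with a symmetric session key and only that key encrypted asymmetrically, which is how both PGP and most PKI-based messaging behave.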


CONCLUSION

Incident reporting is a small but critical part of a much more comprehensive incident management program. As with anything related to information security, the program cannot survive without detailed policies and procedures to provide guidance before, during, and after an incident occurs. Second only to the policy is the technology. Properly configured network elements that deliver the information required to understand the event and its scope are essential. Collecting the information from various sources and managing that information based on the policies are the preliminary steps to properly reporting the incident.

Reporting is the final frontier. Clearly understanding the content, and the audience that requires the different levels of information, is the core concern for the individuals responsible for sharing vital and typically sensitive information. Reporting incidents is not something that many organizations wish to perform outside the company, but this information is critical to the advancement and awareness of the security industry as a whole. Understanding what attacks people are experiencing will help many others, through the increased awareness of product vendors, developers, and the security community as a whole, to further reduce the seriousness of security incidents for the entire community.

ABOUT THE AUTHOR

James S. Tiller, CISSP, MCSE+I, is the Global Portfolio and Practice Manager for International Network Services in Tampa, Florida.


Chapter 48

Incident Response Management Alan B. Sterneckert, CISA, CISSP, CFE, CCCI

Incident response management is the most critical part of the enterprise risk management program. Frequently, organizations form asset protection strategies focused primarily on perceived rather than actual weaknesses, while failing to compare incident impact with continuing profitable operations. In the successful implementation of risk management programs, all possible contingencies must be considered, along with their impact on the enterprise and their chances of occurring. By way of illustration, in the 1920s and 1930s, France spent millions of francs on the construction of the Maginot Line defenses, anticipating an invasion similar to the German invasion of World War I. At that time, these fortifications were considered impregnable. During the 1940 invasion, the German army simply bypassed the Maginot Line, rendering these expensive fortifications ineffective. The Maginot planners failed to consider that invaders would take a route different from that of previous invasions, resulting in their defeat.

RISK MANAGEMENT PROJECT

Risk management is not a three-month project; it is not a project that, when completed, becomes shelved and never reviewed again. Rather, it is a continuous process requiring frequent review, testing, and revision. In the most basic terms, risk has two components: the probability of a harmful incident happening and the impact the incident will have on the enterprise.

TOP-DOWN RISK MANAGEMENT PROJECT PLANNING

Beginning at the end is a description of top-down planning. Information technology (IT) professionals must envision project results at the highest level by asking, what are my deliverables? Information risk management deliverables are simply defined: confidentiality, integrity, and availability (CIA). CIA, and the whole risk management process, must be first considered


in the framework of the organization's strategic business plans. A formula for success is to move the risk management program forward with a clear vision of the business deliverables and their effect on the organization's business plans.

The concept of risk management is relatively simple. Imagine that the organization's e-mail service is not functioning or that critical data has been destroyed, pilfered, or altered. How long would the organization survive? If network restoration is achieved, what was the business loss during the restoration period? It is a situation in which one hopes for the best but expects the worst. Even the best risk management plan deals with numerous what-if scenarios. What if a denial-of-service (DoS) attack is launched against our network? What if an employee steals our customer list? What if a critical incident happens — who is responsible and authorized to activate the incident response team? In the world of risk management, the most desirable condition is one in which risks are avoided. And if risks cannot be avoided, can their frequency be reduced and can their harmful effects be mitigated?

RISK MANAGEMENT KEY POINTS

These are general key points in developing a comprehensive risk management plan:

• Document the impact of an extended outage on profitable business operations in the form of a business impact analysis. Business impact analysis measures the effects of threats, vulnerabilities, and the frequency of their occurrence against the organization's assets.
• Remember that risk management only considers risks at a given moment. These risks change as the business environment changes, necessitating the constantly evolving role of risk management.
• Complete a gap analysis, resulting in the measured difference between perceived and actual weaknesses and their effects on key assets.

OVERALL PROJECT PLANNING

Incident response planning is no different than other planning structures. There are four basic key phases:

1. Assess needs for asset protection within the organization's business plan
2. Plan
3. Implement
4. Revise

In assessing needs, representatives of the affected departments should participate in the initial stage and should form the core of the project team.


Incident Response Management Additional experts can be added to the project team on an ad hoc basis. This is also a good time to install the steering committee that has overall responsibility for the direction and guidance of the project team. The steering committee acts as a buffer between the project team and the various departmental executives. The early stage is the time for hard and direct questions to be asked by the project team members in detailing the business environment, corporate culture, and the minimum organizational infrastructure required for continuing profitable operations. It becomes important to decide the project’s owners at the outset. Project ownership and accountability are based on two levels: one is the line manager who oversees the project team, and the other is the executive who handles project oversight. This executive–owner is a member of the steering committee and has departmental liaison responsibilities. Project scope, success metrics, work schedules, and other issues should be decided by the project team. Project team managers, acting in cooperation with the steering committee, should keep the project focused, staffed, and progressing. Planning is best conducted in an atmosphere of change control. The project team’s direction will become lost if formal change control procedures are not instituted and followed. Change controls decide what changes may be made to the plan, who may approve changes, why these changes are being made, and the effect of these changes. It is critical that change controls require approvals from more than one authority, and that these changes are made part of any future auditing procedure. Once changes are proposed, approved, and adopted, they must be documented and incorporated as part of the plan. With planning completed, implementation begins. Implementations do not usually fail because of poor planning; rather, they fail due to lack of accountability and ownership. Initial testing is conducted as part of the implementation phase. During the implementation step, any necessary modifications must be based on test results. Specific testing activities should include defining the test approach, structuring the test, conducting the test, analyzing the test results, and defining success metrics with modifications as required. In an organizational setting, the testing process should be executed in a quarantined environment, where the test is not connected to the work platforms and the data used for the test is not actual data. During testing, criteria should be documented so performance can be measured and a determination made as to where the test succeeded or failed. With the implementation and testing completed, the project moves toward final adjustments that are often tuned to the changing business environment. Remember to maintain change controls in this phase also. More than one engineer has been surprised to find two identical hosts offering the same services with different configurations. 857


ENTERPRISE RISK

Risk is the possibility of harm or loss. Risk analysis often describes the two greatest sources of risk as human causes and natural causes. Before a risk can be managed, consideration must be given to the symptom as well as the result. Any risk statement must include what is causing the risk and the expected harmful results of that risk.

KEY ASSETS

Key assets are those enterprise assets required to ensure that profitable operations continue after a critical incident. Define, prioritize, and classify the organization's key assets into four general areas: personnel, data, equipment, and physical facilities. Schedule, in the form of a table, the priority of the organization's key assets and their associated threats and vulnerabilities. This table will serve the purpose of identifying security requirements associated with different priority levels of assets.

In developing asset values, the asset value is multiplied by the asset exposure factor, with the resulting product being the single loss expectancy (SLE). The asset value is the replacement value of a particular asset, while the exposure factor is the measure of asset loss resulting from a specific harmful event. Multiplying this single loss expectancy by the annualized rate of occurrence yields the annualized loss expectancy (ALE). An example of this equation is as follows: assume the replacement value of a server facility, complete with building, equipment, data, and software, is $10 million. This facility is located in a geographic area prone to hurricanes, which have struck three times in the past ten years and resulted in total facility losses each time. The annualized rate of occurrence is therefore 0.3 (roughly one total loss every three years, or 30 percent annually), giving an annualized loss expectancy of $3 million.
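The arithmetic above can be made explicit. The short sketch below simply restates the single loss expectancy and annualized loss expectancy calculation with the hurricane example's figures; the exposure factor of 1.0 reflects the total losses described.

```python
# Worked version of the SLE/ALE figures from the hurricane example above.
def single_loss_expectancy(asset_value, exposure_factor):
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annualized_rate_of_occurrence):
    return sle * annualized_rate_of_occurrence

asset_value = 10_000_000     # replacement value of the server facility
exposure_factor = 1.0        # past hurricanes caused total facility losses
aro = 3 / 10                 # three occurrences in ten years = 0.3 per year

sle = single_loss_expectancy(asset_value, exposure_factor)
ale = annualized_loss_expectancy(sle, aro)
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f} per year")  # SLE = $10,000,000, ALE = $3,000,000
```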


Step two of our four-step process is a threat assessment. Threats are simply defined as things that can possibly bring harm upon assets. Threats should be ranked by type, the impact they have on the specific asset, and their probability of occurrence. Even the most effective risk management plan cannot eliminate every threat; but with careful deliberation, most threats can be avoided or their effects minimized.

Identify vulnerabilities (weaknesses) in the security of the enterprise's key assets. Vulnerable areas include physical access, network access, application access, data control, policy, accountability, regulatory and legal requirements, operations, audit controls, and training. Risk levels should be expressed as a comparison of assets to threats and vulnerabilities. Create a column in the table (Exhibit 48-1) providing a relative metric for threat frequency. Once completed, this table provides a measurement of the level of exposure for a particular key asset.

Exhibit 48-1. Measurement of the level of exposure.

Asset                      | Threat | Frequency  | Vulnerability                       | Impact
Name and Replacement Value | Type   | Annualized | Type and Ranking: High, Medium, Low | Ranking: High, Medium, Low

Avoidance and mitigation steps are processes by which analyses are put into action. Having identified the organization's key assets, threats to these assets, and potential vulnerabilities, there should be a final analytical step detailing how the specific risk can be avoided. If risk avoidance is not possible, can the interval between occurrences at least be extended? From the outset it is recommended to include auditors. Audits must be scheduled and auditors' workpapers amended, assuring compliance with laws, regulations, policies, procedures, and operational standards.

RISK MANAGEMENT BEST PRACTICES DEVELOPMENT

As part of risk management best practices, there are three principal objectives: avoiding risk, reducing the probability of risk, and reducing the impact of the risk. Initiate and foster an organizational culture that names every employee as a risk manager. Employee acceptance of responsibility and accountability pays short- and long-term dividends. In some circumstances, the creation of this risk manager culture is more important than developing and issuing extensive policies and procedures.

In a general sense, there are four key best practice areas that should be addressed: organizational needs, risk acceptance, risk management, and risk avoidance. Organizational needs determine the requirement for more risk study and more information in ascertaining the characteristics of risk before taking preventive or remedial action.


Risk acceptance is defined in these terms: if these risks occur, can the organization profitably survive without further action? Risk management is defined as efforts to mitigate the impact of the risk should it occur. Risk avoidance includes the steps taken to keep the risk from happening.

RISK CONTROLS

Avoidance controls are proactive in nature and attempt to remove, or at least minimize, the risk of accidental and intentional intrusions. Examples of these controls include encryption, authentication, network security architecture, policies, procedures, standards, and network services interruption prevention.

Assurance controls are actions, such as compliance auditing, employed to ensure the continuous effectiveness of existing controls. Examples of these controls include application security testing, standards testing, and network penetration testing.

Detection controls are tools, procedures, and techniques employed to ensure early detection, interception, containment, and response to unauthorized intrusions. Examples of these controls include intrusion detection systems (IDSs) and remotely managed security systems.

Recovery controls involve response-related steps in rapidly restoring secure services and investigating the circumstances surrounding information security breaches. Included are legal steps taken in the criminal, civil, and administrative arenas to recover damages and punish offenders. Examples of these recovery controls include business continuity planning, crisis management, recovery planning, formation of a critical incident response team, and forensic investigative plans.

CRITICAL INCIDENT RESPONSE TEAM (CIRT)

A CIRT is a group of professionals assembled to address network risks. A CIRT forms the critical core component of the enterprise's information risk management plan. Successful teams include management personnel having the authority to act; technical personnel having the knowledge to prevent and repair network damage; and communications experts having the skills to handle internal and external inquiries. They act as a resource and participate in all risk management phases. CIRT membership should be composed of particular job titles rather than specifically named individuals.

The time to form a CIRT, create an incident response plan, establish notification criteria, collect tools, train, and secure executive-level support is not the morning after a critical incident. Rather, the CIRT must be ready for deployment before an incident happens. Rapidly activating the CIRT can


mean the difference between an outage that costs an organization its livelihood and one that is a mere annoyance. Organizational procedures must be in place before an incident so the CIRT can be effective when deployed. This point is essential, because organizations fail to address critical incidents even when solid backup and recovery plans are in place. The problem is usually found to be that no one was responsible for activating the CIRT.

The CIRT plan must have clearly defined goals and objectives integrated into the organization's risk management plan. The CIRT's mission objectives are planning and preparation, detection, containment, recovery, and critique. As part of its pre-incident planning, the CIRT will need: information flowcharts, hardware inventory, software inventory, personnel directories, emergency response checklists, hardware and software tools, configuration control documentation, systems documentation, outside resource contacts, an organization chart, and CIRT activation and response plans. For example, when arriving on the scene, the CIRT should be able to review its documentation to ascertain the information flow and the relevant critical personnel of the organization's employee healthcare benefits processing unit.

Considering the nature, culture, and size of the organization, an informed decision must be made about when to activate the CIRT. What is the extent of the critical incident before the CIRT is activated? Who is authorized to make this declaration? Is it necessary for the whole CIRT to respond? Included in the CIRT activation plan should be the selection of team members needed for different types or levels of incidents.

If circumstances are sensitive or if they involve classified materials, then the CIRT activation plan must include out-of-band (OOB) communications. OOB communications take place outside the regular communications channels; methods include encrypted telephone calls, encrypted e-mail not transmitted through the organization's network, digital signatures, etc. The purpose of OOB communications is to ensure nothing is communicated through routine business channels that would alert someone having normal access to any unusual activity.

INCIDENT RESPONSE STEPS

The goals of incident response must serve a variety of interests, balancing the organization's business concerns with those of individual rights, corporate security, and law enforcement officials. An incident response plan will address the following baseline items:


• Determine if an incident has occurred and the extent of the incident.
• Select which CIRT members should respond.
• Assume control of the incident and involve appropriate personnel, as conditions require.
• Report to management for the decision on how to proceed.
• Begin interviews.
• Contain the incident before it spreads.
• Collect as much accurate and timely information as possible.
• Preserve evidence.
• Protect the rights of clients, employees, and others, as established by law, regulations, and policies.
• Establish controls for the proper collection and handling of evidence.
• Initiate a chain of custody of evidence.
• Minimize business interruptions within the organization.
• Document all actions and results.
• Restore the system.
• Conduct a post-incident critique.
• Revise response as required.

Pre-incident preparation is vital in approaching critical incidents. Contingency plans that are tested and revised will be invaluable in handling incidents where a few minutes can make the difference between disaster and a complete restoration of key services. Network administrators should be trained to detect critical incidents and contact appropriate managers so a decision can be made relative to CIRT deployment. Some of the critical details that administrators should note are the current date and time, the nature of the incident, who first noticed the incident, the hardware and software involved, symptoms, and results. Suspected incidents will usually be detected through several processes, including intrusion detection systems (IDSs), system monitors, and firewalls. Managers should decide whether the administrators should attempt to isolate the affected systems from the rest of the network. Trained, experienced administrators can usually perform these preliminary steps, thereby preventing damage from spreading (see Exhibit 48-2).


Exhibit 48-2. Immediate actions to be taken by administrators to contain an incident.

1. Extinguish power to the affected systems. This is a drastic but effective decision in preventing any further loss or damage.
2. Disconnect the affected equipment from the network. There should be redundant systems so users will have access to their critical services.
3. Disable specific services being exploited.
4. Take all appropriate steps to preserve activity and event logs.
5. Document all symptoms and actions by administrators.
6. Notify system managers. If authorized, notify the CIRT for response.
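A minimal sketch of the first-notice record implied by the preceding sections (the details administrators should note, plus the containment actions taken) might look like the following; the field names and sample values are illustrative only.

```python
# Sketch of a first-notice incident record capturing the details administrators
# are expected to note before the CIRT decision is made. Fields are illustrative.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentNotice:
    detected_at: datetime
    nature: str                 # e.g., "suspected intrusion", "DoS"
    reported_by: str            # who first noticed the incident
    systems_involved: list      # hardware and software involved
    symptoms: str
    actions_taken: list = field(default_factory=list)  # containment steps, in order
    cirt_notified: bool = False

notice = IncidentNotice(
    detected_at=datetime.now(),
    nature="suspected intrusion",
    reported_by="night-shift administrator",
    systems_involved=["hr-web-01", "payroll database"],
    symptoms="unexpected outbound traffic on port 54321",
)
notice.actions_taken.append("disconnected hr-web-01 from the network")
notice.cirt_notified = True
```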

CRITICAL INCIDENT INVESTIGATION

The goals of law enforcement officers and private investigators are basically the same. Both types of investigators want to collect evidence and preserve it for analysis and presentation at a later date. Evidence is simply defined as something, physical or testimonial, that is material to an act. It is incumbent upon the CIRT to establish liaison with the appropriate levels of law enforcement to determine the best means of evidence collection, preservation, and delivery. If there are circumstances where law enforcement officers are not going to be involved, then the CIRT members should consider the wisdom of either developing forensic analysis skills or contracting others to perform these functions.

Evidence collection and analysis are critical because incorrect crime scene processing and analysis can render evidence useless. Skilled technicians with specialized knowledge, tools, and equipment should accomplish the collecting, processing, and analyzing of evidence. Frequently, investigators want to be present during evidence collection and interviews; consequently, CIRT members should establish liaison with law enforcement and private investigators to establish protocols well in advance of a critical incident.

Evidence may be voluntarily surrendered, or obtained through the execution of a search warrant, through a court order or summons, or through subpoenas. It is a common practice for investigators to provide a receipt for evidence that has been delivered to them. This receipt documents the transfer of items from one party to another and supports the chain of custody. It is important to note that only law enforcement investigators use search warrants and subpoenas to obtain evidence. Once received, the investigator will usually physically mark the evidence for later identification. Marking evidence typically consists of the receiving investigators placing the date and their initials on the item. In the case of electronic media, the item is subjected to special software applications that compute a unique one-way identifier (such as a cryptographic hash) of the media, which is then recorded so that any subsequent change in the media's contents can be identified.
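As a small illustration of the one-way identifier and chain-of-custody practices just described, the sketch below hashes an evidence image with SHA-256 and appends a custody-transfer entry; the file path, parties, and record fields are hypothetical.

```python
# Sketch of the "unique one-way identifier" idea above: hash an evidence image
# and record each custody transfer. File names and fields are illustrative.
import hashlib
from datetime import datetime

def media_fingerprint(path, chunk_size=1024 * 1024):
    """Return the SHA-256 digest of a disk image; recompute it later to show
    the copy has not changed since it was received."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

custody_log = []

def record_transfer(item, from_party, to_party, reason, fingerprint):
    """Append one chain-of-custody entry: who held what, when, and why."""
    custody_log.append({
        "item": item, "from": from_party, "to": to_party,
        "reason": reason, "sha256": fingerprint,
        "time": datetime.now().isoformat(),
    })

fp = media_fingerprint("evidence/ws042-disk.img")   # hypothetical image path
record_transfer("workstation 42 disk image", "J. Smith (admin)",
                "CIRT evidence custodian", "forensic examination", fp)
```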


LAW, INVESTIGATION, AND ETHICS Does the investigator have the right to seize the computer and examine its contents? In corporate environments this right may be granted by policy. The enterprise should have a policy stating the ownership of equipment, data, and systems. It is a usual practice that organizations have policies requiring employees to waive any right to privacy as a condition of their employment. If the organization has such policies, it is important that its legal and human resources officers are consulted before any seizure takes place. Under current United States law and the Fourth Amendment to the U.S. Constitution, the government must provide a judge or magistrate with an affidavit detailing the facts and circumstances surrounding the alleged crime. Search warrants are two-part documents. The first part is the search warrant, which bears a statutory description of the alleged crime, a description of the place to be searched, and the items or persons to be seized. At the conclusion of the search warrant execution, a copy of this search warrant document must be deposited at the premises, regardless of whether it was occupied. Affidavits are the second part of the search warrant and are statements where the officer or agent, known as the affiant, swears to the truth of the matter. The law does not require the affiant to have first-hand knowledge of the statement’s details, merely that the affiant has reliable knowledge. Search warrants are granted based upon the establishment of probable cause. It is important to note that the affidavit must stand on its own; all relevant information must be contained within its borders. Questions surrounding search warrants are these: is it probable that a crime has been committed, and is it probable that fruits, instrumentalities, or persons connected to that crime are located at a given location now? Unless there are unusual circumstances, search warrants may only be executed in daylight hours from 6 a.m. to 10 p.m. If unusual circumstances exist, then these must be submitted to the court. Such circumstances include the possibility of extreme danger to the officers or the likelihood of evidence destruction. Search warrants must be announced, and authorities must declare their purpose. At the completion of the search warrant, the officers are required to deposit a copy of the search warrant and an inventory of the items seized at the searched premises. Under special circumstances, the search warrant will be sealed by the issuing court. This means the sworn statement is not public record until unsealed by the issuing court. If the affidavit is not sealed, then it is a public document and retrievable from the court’s office. At the conclusion of the search warrant, a return is completed and accompanied by an inventory of the seized items. This search warrant return is part of the original search warrant document and reflects the date, by whom, and where it was executed. Along with the search warrant return, an inventory of seized items is filed with the court, where it is available for public review. Law enforcement and non-law enforcement personnel, depending upon the nature of the investigation, may obtain court orders and summons. These documents are 864


Incident Response Management based upon applications made to the court of jurisdiction and may result in orders demanding evidence production by the judge or magistrate. Similar to search warrants, court orders are usually two-part documents with an application stating the reason the judge should issue an order to a party to produce items or testimony. The second part is the actual court order document. Court orders state the name of the case, the items to be brought before the court, the date the items are to be brought before the court, the location of the court, the name of the presiding judge, and the seal of the court. Summonses are similar to court orders and vary from jurisdiction to jurisdiction. Subpoenas are generally categorized as one of two types: one resulting from a grand jury investigation, and the second resulting from a trial or other judicial proceeding. Both documents carry the weight of the court — meaning these documents are demands that, if ignored, can result in contempt charges filed against persons or other entities. Grand juries are tasked with hearing testimony and reviewing evidence, hence their subpoenas are based upon investigative need. Their members are selected from the local community, and they are impaneled for periods of several months. Items or persons may be subpoenaed before a grand jury for examination. It is possible for a motion to quash the subpoena to be filed, causing the court to schedule a hearing where the subpoena’s merits are heard. Different than grand jury subpoenas, judicial subpoenas are issued for witnesses and evidence to be presented at trial or other hearings. Testimony is obtained through interviews, depositions, and judicial examinations. Interviewing someone is a conversation directed toward specific events. Interviews may be recorded in audio or video form, or the investigator may take carefully written notes. In the latter case, the interviewer’s notes are reduced to a report of the interview. This report serves as the best recollection of the investigator and is not generally considered a verbatim transcript of the interview. Depositions are more formal examinations and are attended by attorneys, witnesses, and persons who create a formal record of the proceedings. Usually, depositions are part of civil and administrative proceedings; however, in unusual circumstances they may be part of a criminal proceeding. Attorneys ask questions of the witnesses, with the plaintiff and defense attempting to ask questions that will cause the witness to provide an explanation favorable to their side. Judicial examinations are made before a judge or magistrate judge, and the witnesses are sworn to tell the whole truth while the proceedings are recorded. It is important to note that providing mischaracterizations, lies, or withholding information during interviews may be considered grounds for criminal prosecution. In a similar vein, the CIRT and others must be very careful interviewing potential subjects and collecting evidence. If interviews are conducted or evidence is collected through coercion, these actions could be considered as intimidating and may be considered for charges. 865


LAW, INVESTIGATION, AND ETHICS FORENSIC EXAMINATION There are several schools of thought in completing the forensic examination of evidence. Regardless, one rule remains steadfast — no examination should be conducted on original media; and the media, constituting evidence, must remain unchanged. There are several ways to obtain copies of media. There are forensic examination suites designed to perform exact bit-by-bit duplication; and there are specific software utilities used in duplicating media and hardware-copying devices that are convenient, but these are generally limited to the size and characteristics of the disks they can clone. There are also utilities that are part of some operating platforms that can produce bit-by-bit media duplications. It is important to remember that all forensic examination processes must be documented in the form of an activity log and, in the case of some very sensitive matters, witnessed by more than one examiner. Forensic examiners must ensure that their media is not contaminated with unwanted data; so many have a policy that, before any evidence is copied, media will be cleansed with software utilities or a degaussing device designed for such purposes. In this fashion, the examiner can testify that appropriate precautions were taken to prevent cross-contamination from other sources. As in the case of all evidence-handling practices, a chain of custody is prepared. Chain of custody is merely a schedule of the evidence, names, titles, reason for possession, places, times, and dates. From the time of the evidence seizure, the chain of custody is recorded and a copy attached to the evidence. The chain of custody documentation is maintained regardless of how the evidence was seized or whether the evidence is going to be introduced in criminal, civil, or administrative proceedings. A covert search is one targeting a specific console or system involving real-time monitoring, and it is usually conducted discreetly. In a practical example, an organization may suspect one of its employees of downloading inappropriate materials in violation of its use policy. After examining logs, an exact workstation cannot be identified. There are two ways to conduct a covert search after authorization is obtained. One method copies the suspected hard drive and replaces it with the copy, with the original considered as evidence. The second method duplicates the suspect’s hard drive while it remains in the computer. The duplicate is considered evidence and is duplicated again for examination. In either method it is important to ascertain that the organization has the right to access the equipment and that the suspect does not have any reasonable expectation of privacy. This topic must be fully addressed by the legal and human resources departments. 866


Incident Response Management After having seized the evidence, the examiner decides to either conduct an analysis on the premises or take the media to another location. The advantage of having the examination take place where the evidence is seized is obvious. If there is something discovered requiring action, it can be addressed immediately. However, if the examination takes place in the calm of a laboratory, with all the tools available, then the quality of the examination is at its highest. The CIRT and other investigators must consider the situation of sensitive or classified information that is resident on media destined for a courtroom. Sometimes, this consideration dissuades some entities from reporting criminal acts to the authorities. However, there are steps that can be legally pursued to mitigate the exposure of proprietary or sensitive information to the public. CRIMINAL, FORFEITURE, AND CIVIL PROCESSES Criminal acts are considered contrary to publicly acceptable behavior and are punished by confinement, financial fines, supervised probation, and restitution. Felonies are considered major crimes and are usually punished by periods of confinement for more than one year and fines of more than $1000. In some jurisdictions, those convicted of felonies suffer permanent loss of personal rights. Misdemeanors are minor crimes punishable by fines of less than $1000 and confinement of less than one year. Sentencing may include confinement, fines, or a period of probation. The length of sentence, fines, and victim restitution depends upon the value of the crime. If proprietary information is stolen and valued at millions of dollars, then the sentence will be longer with greater fines than for an act of Web page defacement. There are other factors that can lengthen sentencing. Was the defendant directing the criminal actions of others? Was the defendant committing a crime when he committed this crime? Has the defendant been previously convicted of other crimes? Was the defendant influencing or intimidating potential witnesses? There are also factors that can reduce a sentence. Has the defendant expressed remorse? Has the defendant made financial restitution to the victim? Has the defendant cooperated against other possible defendants? Under the laws of the United States, the length of sentence, the type of sentence, and fines are determined in a series of weighted numerical calculations and are codified in the Federal Sentencing Guidelines. At the time of sentencing, a report is usually prepared and delivered to the sentencing judge detailing the nature of the crime and the extent of the damage. It is at the judge’s discretion whether to order financial restitution to the victim; however, in recent times, more and more judges are inclined to order financial restitution as part of sentencing (see Exhibit 48-3). 867


Exhibit 48-3. Partial list of applicable federal criminal statutes.
• 18 United States Code Section 1030, Fraud Activities with Computers
• 18 United States Code Section 2511, Unlawful Interception of Communications
• 18 United States Code Section 2701, Unlawful Access to Stored Electronic Communications
• 18 United States Code Section 2319, Criminal Copyright Infringement
• 18 United States Code Section 2320, Trafficking in Counterfeit Goods or Services
• 18 United States Code Section 1831, Economic Espionage
• 18 United States Code Section 1832, Theft of Trade Secrets
• 18 United States Code Section 1834, Criminal Asset Forfeiture
• 18 United States Code Section 1341, Mail Fraud
• 18 United States Code Section 1343, Wire Fraud
• 18 United States Code Sections 2251–2253, Sexual Exploitation of Children Act
• 18 United States Code Section 371, Criminal Conspiracy

Frequently, organizations ask if there are statutory requirements for reports of criminal activities. The criminal code of the United States, Title 18, Section 4, states: "Whoever having knowledge of the actual commission of a felony cognizable by a court of the United States, conceals and does not as soon as possible make known the same to some judge or other person in civil or military authority under the United States, shall be fined under this title or imprisoned not more than three years or both." Many jurisdictions have similar statutes requiring the reporting of criminal activities.

Civil matters are disputes between parties that are resolved by the exchange of money or property. Civil suits may have actual, punitive, and statutory damages. In the case of actual damages, the plaintiff must prove by a preponderance of the evidence (51 percent) that they suffered specific losses. Punitive damages are amounts that punish the defendant for harming the plaintiff. Statutory damages are those prescribed by law.

Many jurisdictions have laws allowing the simultaneous criminal prosecution of a defendant, a civil suit naming the same defendant, and forfeiture proceedings. This type of multifaceted prosecution is known as parallel-track prosecution. Pursuant to criminal activities, many jurisdictions and the U.S. federal government file concurrent forfeiture actions against offending entities. These proceedings also impact the relationship between CIRT members and the court system. Depending upon the specific jurisdiction, these actions may take the form of the criminal's assets being indicted, civil suits filed against those assets, or those assets being administratively forfeited. An example of this type of parallel-track prosecution is illustrated with the person who unlawfully enters an organization's network and steals sensitive protected information that is subsequently sold to a competitor.


Investigators conduct a thorough investigation, and the perpetrator is indicted. In this same case, a seizure warrant is obtained; and the defendant's computer equipment, software, and the crime's proceeds are seized. Depending upon the laws, the perpetrator may suffer confinement, loss of money resulting from the information sale, the forfeiture of his equipment or other items of value, restitution to the victim, and fines. It is also a reasonable and acceptable process that the subject is civilly sued for damages while he is criminally prosecuted and his assets forfeited.

USE OF MONITORING DEVICES

The enterprise must have policies governing the use of its system resources and the conduct of its employees. Pursuant to those policies, the CIRT may monitor network use by suspected employee offenders. The use of monitoring techniques is governed by the employees' reasonable expectation of privacy and is defined by both policy and law. Techniques used to monitor employee activities should be made part of audit and executive-level review processes to make certain these monitoring practices are not abused. Before implementing computer monitoring, it is wise to consult the organization's human resources and legal departments because, if these policies are not implemented correctly, computer monitoring can run afoul of legal, policy, and ethical standards.

Under federal statutes, network administrators are granted the ability to manage their systems. They may access and control all areas of their network and interact with other administrators in the performance of their duties. Because unauthorized system intruders do not have an expectation of privacy, their activities are not subject to such considerations. If administrators discover irregularities, fraud, or unauthorized software such as hacking tools on their systems, they are allowed to take corrective actions and report the offenders. However, this is not the case for government agencies wanting access to network systems and electronic communications. Depending upon the state of the electronic communications, they may be required to obtain a court order, search warrant, or subpoena. It is important to note that most jurisdictions do not allow retributive actions. For example, if a denial-of-service attack causes the organization to suffer losses, it may be considered unlawful for the organization to return a virus to the offender.

NATURE OF CRIMINAL INCIDENTS

Viruses and worms have been in existence for many years. Since the introduction of the Morris Worm in 1988, managers and administrators have paid attention to their potential for harm. In years past, viruses and worms were ignored by law enforcement and treated as merely a nuisance. However, in more recent times, following the outbreaks of Melissa and the Love Bug, persons responsible for their creation and proliferation are being investigated and prosecuted.

Insider attacks usually consist of employees or former employees gaining access to sensitive information. Because they are already located inside the network, it is possible they have already bypassed many access barriers; and, by elevating their privileges, they may gain access to the organization's most valuable information assets. Among the insiders are those who utilize the organization's information assets for their own purposes. Downloading files in violation of use policies wastes valuable resources and, depending upon their content, may be a violation of law.

Outsider attacks are more than an annoyance. A determined outsider may hammer at the target's systems until an entry is discovered. Attackers may be malicious or curious. Regardless, their efforts have the same result: unauthorized entry is made. Often, their attacks cause serious damage to information systems and compromise sensitive data. Attackers do not need thorough systems knowledge because there are many Web sites that provide the necessary tools for intrusions and DoS attacks.

Unauthorized interception of communications may take place when an intrusion occurs and software is installed that allows the intruder to monitor keystrokes and communications traffic. Because this activity is performed without the permission of the system owners, it may have the same net effect as an illegal wiretap.

DoS attacks gained significant negative publicity recently as unscrupulous persons targeted high-profile Web sites, forcing them offline. In some cases, perpetrators were unwitting participants, wherein their broadband assets were compromised by persons installing software executing distributed DoS attacks. These attacks flood their target systems with useless data launched from single or multiple sources, causing the target's network to crash.

CONCLUSION

Risk management consists of careful planning, implementation, testing, and revision. The most critical part of risk management is critical incident response. The principal purpose of risk management is avoidance and mitigation of harm. Incident response, with the development of a solid response strategy, outside liaison, and a well-trained CIRT, can make the difference between a manageable incident and a disaster costing the organization its future.

ABOUT THE AUTHOR

Alan B. Sterneckert, CISA, CISSP, CFE, CCCI, is the owner and general manager of Risk Management Associates located in Salt Lake City, Utah. A retired Special Agent of the Federal Bureau of Investigation, Mr. Sterneckert is a professional specializing in risk management, IT system security, and systems auditing. In 2003, Mr. Sterneckert will complete a book about critical incident management, to be published by Auerbach.


Chapter 49

Managing the Response to a Computer Security Incident
Michael Vangelos

Organizations typically devote substantial information security resources to the prevention of attacks on computer systems. Strong authentication is used, with passphrases that change regularly, tokens, digital certificates, and biometrics. Information owners spend time assessing risk. Network components are kept in access-controlled areas. The least privilege model is used as a basis for access control. There are layers of software protecting against malicious code. Operating systems are hardened, unneeded services are disabled, and privileged accounts are kept to a minimum. Some systems undergo regular audits, vulnerability assessments, and penetration testing. Add it all up, and these activities represent a significant investment of time and money. Management makes this investment despite full awareness that, in the real world, it is impossible to prevent the success of all attacks on computer systems. At some point in time, nearly every organization must respond to a serious computer security incident. Consequently, a well-written computer incident response plan is an extremely important piece of the information security management toolbox. Much like disaster recovery, an incident response plan is something to be fully developed and practiced — although one hopes that it will never be put into action.

Management might believe that recovering from a security incident is a straightforward exercise that is part of an experienced system administrator's job. From a system administrator's perspective, that may be true in many instances. However, any incident may require expertise in a number


LAW, INVESTIGATION, AND ETHICS of different areas and may require decisions to be made quickly based on factors unique to that incident. This chapter discusses the nature of security incidents, describes how to assemble an incident response team (IRT), and explains the six phases of a comprehensive response to a serious computer security incident. GETTING STARTED Why Have an Incident Response Plan? All computer systems are vulnerable to attack. Attacks by internal users, attacks by outsiders, low-level probes, direct attacks on high-privilege accounts, and virus attacks are only some of the possibilities. Some attacks are merely annoying. Some can be automatically rejected by defenses built into a system. Others are more serious and require immediate attention. In this chapter, incident response refers to handling of the latter group of attacks and is the vehicle for dealing with a situation that is a direct threat to an information system. Some of the benefits of developing an incident response plan are: • Following a predefined plan of action can minimize damage to a network. Discovery that a system has been compromised can easily result in a state of confusion, where people do not know what to do. Technical staff may scurry around gathering evidence, unsure of whether they should disable services or disconnect servers from the network. Another potential scenario is that system administrators become aggressive, believing their job is to “get the hacker,” regardless of the effect their actions may have on the network’s users. Neither of these scenarios is desirable. Better results can be attained through the use of a plan that guides the actions of management as well as technicians during the life of an incident. Without a plan, system administrators may spend precious time figuring out what logs are available, how to identify the device associated with a specific IP address, or perform other basic tasks. With a plan, indecision can be minimized and staff can act confidently as they respond to the incident. • Policy decisions can be made in advance. An organization can make important policy decisions before they are needed, rather than in the heat of the moment during an actual incident. For example, how will decisions be made on whether gateways or servers will be taken down or users disconnected from the network? Will technicians be empowered to act on their own, or must management make those decisions? If management makes those decisions, what level of management? Who decides whether and when law enforcement is notified? If a system administrator finds an intruder with administrative access on a key server, should all user sessions be shut down immediately and log-ins prohibited? If major services are disrupted by an incident, how 874


Managing the Response to a Computer Security Incident are they prioritized so that technicians understand the order in which they should be recovered? Invariably, these and other policy issues are best resolved well in advance of when they are needed. • Details likely to be overlooked can be documented in the plan. Often, a seemingly unimportant event turns into a serious incident. A security administrator might notice something unusual and make a note of it. Over the next few days, other events might be observed. At some point, it might become clear that these events were related and constitute a potential intrusion. Unless the organization has an incident response plan, it would be easy for technical staff to treat the situation as simply another investigation into unusual activity. Some things may be overlooked, such as notifying internal audit, starting an official log of events pertaining to the incident, and ensuring that normal cleanup or routine activities do not destroy potential evidence. An incident response plan will provide a blueprint for action during an incident, minimizing the chance that important activities will fall through the cracks. • Nontechnical business areas must also prepare for an incident. Creation of an incident response plan and the act of performing walk-throughs or simulation exercises can prepare business functions for incident response situations. Business functions are typically not accustomed to dealing with computer issues and may be uncomfortable providing input or making decisions if “thrown into the fire” during an actual incident. For example, attorneys can be much better prepared to make legal decisions if they have some familiarity with the incident response process. Human resources and public relations may also be key players in an incident and will be better able to protect the organization after gaining an understanding of how they fit into the overall incident response plan. • A plan can communicate the potential consequences of an incident to senior management. It is no secret that, over time, companies are becoming increasingly dependent on their networks for all aspects of business. The movement toward the ability to access all information from any place at any time is continuing. Senior executives may not have an appreciation for the extent to which automation systems are interconnected and the potential impact of a security breach on information assets. Information security management can use periodic exercises in which potential dollar losses and disruption of services in real-life situations are documented to articulate the gravity of a serious computer security incident. Requirements for Successful Response to an Incident There are some key characteristics of effective response to a computer security incident. They follow from effective preparation and the 875


LAW, INVESTIGATION, AND ETHICS development of a plan that fits into an organization’s structure and environment. Key elements of a good incident response plan are: • Senior management support. Without it, every other project and task will drain resources necessary to develop and maintain a good plan. • A clear protocol for invoking the plan. Everyone involved should understand where the authority lies to distinguish between a problem (e.g., a handful of workstations have been infected with a virus because users disabled anti-virus software) and an incident (e.g., a worm is being propagated to hundreds of workstations and an anti-virus signature does not exist for it). A threshold should be established as a guide for deciding when to mobilize the resources called for by the incident response plan. • Participation of all the right players. Legal, audit, information security, information technology, human resources, protection (physical security), public relations, and internal communications should all be part of the plan. Legal, HR, and protection may play an important role, depending on the type of incident. For some organizations, public relations may be the most important function of all, ensuring that consistent messages are communicated to the outside world. • Clear establishment of one person to be the leader. All activity related to the incident must be coordinated by one individual, typically from IT or information security. This person must have a thorough knowledge of the incident response plan, be technical enough to understand the nature of the incident and its impact, and have the ability to communicate to senior management as well as technical staff. • Attention to communication in all phases. Depending on the nature of the incident, messages to users, customers, shareholders, senior management, law enforcement, and the press may be necessary. Bad incidents can easily become worse because employees are not kept informed and cautioned to refer all outside inquiries concerning the incident to public relations. • Periodic testing and updates. The incident response plan should be revisited regularly. Many organizations test disaster recovery plans annually or more frequently. These tests identify existing weaknesses in the plan and uncover changes in the automation environment that require corresponding adjustments for disaster recovery. They also help participants become familiar with the plan. The same benefits will be derived from simulation exercises or structured walk-throughs of an incident response plan. Defining an Incident There is no single, universally accepted definition of incident. The Computer Emergency Response Team Coordination Center (CERT/CC) at Carnegie Mellon University defines incident as “the act of violating an explicit or 876


Managing the Response to a Computer Security Incident implied security policy.”1 That may be a great way to describe all events that are bad for computer systems, but it is too broad to use as a basis for the implementation of an incident response plan. The installation of a packet sniffer without management authorization, for instance, may be a violation of policy but probably would not warrant the formality of invoking an incident response plan. However, the use of that sniffer to capture sensitive data such as passwords may be an incident for which the plan should be invoked. The U.S. Department of Energy’s Computer Incident Advisory Capability (CIAC) uses this definition for incident: Any adverse event that threatens the security of information resources. Adverse events may include compromises of integrity, denial-of-service attacks, compromise of confidentiality, loss of accountability, or damage to any part of the system. Examples include the insertion of malicious code (e.g., viruses, Trojan horses, or backdoors), unauthorized scans or probes, successful and unsuccessful intrusions, and insider attacks.2

This, too, is a good definition and one that is better aligned with the goal of identifying events that should trigger implementation of an incident response plan. To make this definition more useful in the plan, it should be complemented by guidelines for assessing the potential severity of an incident and a threshold describing the level of severity that should trigger invocation of the plan. Responding to an incident, as described in this chapter, involves focused, intense activity by multiple people in order to address a serious condition that may materially affect the health of an organization's information assets. Therefore, as the incident response plan is developed, an organization should establish criteria for deciding whether to invoke the plan.

Developing an Incident Response Team

There is no singularly correct makeup of an incident response team (IRT). However, it is generally agreed that if the following functional units exist in an organization, they should be represented: information security, information technology, audit, legal, public relations, protection (physical security), and human resources. In an ideal situation, specific individuals (preferably a primary and secondary contact) from each of these areas are assigned to the IRT. They will be generally familiar with the incident response plan and have an understanding of what kinds of assistance they may be called upon to provide for any incident. Exhibit 49-1 lists the participants and their respective roles.

Exhibit 49-1. Incident response team roles.
• Information security: Often has responsibility for the plan and leads the response; probably leads the effort to put preventive controls in place during preparation phase; staff may also be involved in the technical response (reviewing logs, cleaning virus-infected workstations, reviewing user definitions and access rights, etc.)
• Information technology: Performs most eradication and recovery activities; probably involved during detection phase; should be active during preparation phase
• Audit: Independent observer who reports to highest level of the organization; can provide valuable input for improving incident response capability
• Legal: May be a key participant if the incident was originated by an employee or agency hired by the victim organization; can also advise in situations where downstream liability may exist (e.g., there is evidence that a system was compromised and subsequently used to attack another company's network); may want to be involved any time a decision is made to contact law enforcement agencies; should have input to decisions on whether to prosecute criminal activity; would advise on any privacy issues
• Public relations: Should coordinate all communication with the outside world; probably creates the messages that are used
• Protection: May be necessary if the incident originated from within the organization and the response may involve confronting a potentially hostile employee or contractor; might also be the best entity to take custody of physical evidence
• Human resources: Provides input on how to deal with a situation in which an employee caused the incident or is actively hacking the system

Some organizations successfully manage incidents by effectively splitting an IRT into two distinct units. A technical team is made up of staff with responsibility for checking logs and other evidence, determining what damage if any has been done, taking steps to minimize damage if the incident is ongoing, and restoring systems to an appropriate state. A management team consists of representatives of the functional areas listed above and would act as a steering committee and decision-making body for the life of the incident. An individual leading the response to an incident would appoint leaders of each team or serve as chair of the management team. The two teams, of course, should be in frequent communication with each other, generally with the management team making decisions based on input from the technical team.

SIX PHASES OF INCIDENT RESPONSE

It is generally accepted that there are six phases to the discipline of incident response, and the cycle begins well before an incident ever occurs. In any one incident, some of these phases will overlap. In particular, eradication and recovery often occur concurrently. The phases are:
• Preparation
• Detection
• Containment


• Eradication
• Recovery
• Follow-up

Exhibit 49-2 briefly describes the goal of each phase.

Exhibit 49-2. Goal of each incident response phase.
• Preparation: Adopt policies and procedures that enable effective incident response
• Detection: Detect that an incident has occurred and make a preliminary assessment of its magnitude
• Containment: Keep the incident from spreading
• Eradication: Eliminate all effects of the incident
• Recovery: Return the network to a production-ready status
• Follow-up: Review the incident and improve incident-handling capabilities

Preparation Phase

If any one phase is more important than the others, it is the preparation phase. Before an incident occurs is the best time to secure the commitment of management at all levels to the development of an effective incident response capability. This is the time when a solid foundation for incident response is built. During this phase, an organization deploys preventive and detective controls and develops an incident response capability. Management responsible for incident response should do the following:
• Name specific individuals (and alternates) as members of the IRT. Each functional area described in the preceding section of this chapter (audit, legal, human resources, public relations, information security, information technology) should be represented by people with appropriate decision-making and problem-solving skills and authority.
• Ensure that there is an effective mechanism in place for contacting team members. Organizations have a similar need for contacting specific people in a disaster recovery scenario. It may be possible to use the same process for incident response.
• Include guidelines for deciding when the incident response plan is invoked. One of the key areas of policy to be considered prior to an incident is answering the question, "What are the criteria for declaring an incident?"
• Specify the relative priority of goals during an incident. For example:
  — Protect human life and safety (this should always be first).
  — Protect classified systems and data.
  — Ensure the integrity of key operating systems and network components.
  — Protect critical data.


• Commit to conducting sessions to exercise the plan, simulating different types of incidents. Exercises should be as realistic as possible without actually staging an incident. An exercise may, for example, prompt legal, human resources, and protection to walk through their roles in a situation where an employee and contractor have conspired to compromise a network and are actively hacking the system while on company premises. Exercises should challenge IT and information security staff to identify the logs and other forensic data or tools that would be used to investigate specific types of incidents.
• Decide on the philosophy to be used in response to an intrusion. Should an attacker successfully hack in, does the victim organization want to get rid of the intruder as quickly as possible and get back to business (protect and proceed)? Or does the organization want to observe the intruder's movements and potentially gather data for prosecution (pursue and prosecute)?
• Ensure that there is a reasonable expectation that the skills necessary to perform the technical tasks of the incident response plan are present in the organization. Enough staff should understand the applicable network components, forensic tools, and the overall plan so that when an incident occurs, it can be investigated in a full and competent manner.
• Make adjustments to the plan based on test scenario exercises and reviews of the organization's response to actual incidents.
• Review the organization's security practices to ensure that intrusion detection systems are functional, logs are activated, sufficient backups are taken, and a program is in place for regularly identifying system vulnerabilities and addressing those vulnerabilities.

Detection Phase

The goal of the detection phase is to determine whether an incident has occurred. There are many symptoms of a security incident. Some common symptoms are:
• New user accounts not created by authorized administrators
• Unusual activity by an account, such as an unexpected log-in while the user is known to be on vacation or use of the account during odd hours
• Unexpected changes in the lengths or time stamps of operating system files
• Unusually high network or server activity or poor system performance
• Probing activity such as port scans
• For Windows operating systems, unexplained changes in registry settings
• Multiple attempts to log in as root or administrator
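Several of the symptoms above, particularly unexpected changes to operating system files, are exactly what the data integrity checkers discussed below look for. The sketch that follows is a minimal, illustrative baseline comparison in Python; the watched file list, the hash algorithm, and the baseline location are assumptions made for the example rather than features of any particular product.

```python
import hashlib
import json
import os

BASELINE_FILE = "baseline_hashes.json"  # hypothetical location for the stored baseline

def hash_file(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(paths: list[str]) -> None:
    """Record current hashes for the monitored files (run on a known-good system)."""
    baseline = {p: hash_file(p) for p in paths if os.path.isfile(p)}
    with open(BASELINE_FILE, "w", encoding="utf-8") as f:
        json.dump(baseline, f, indent=2)

def check_integrity() -> list[str]:
    """Compare current hashes against the stored baseline and report discrepancies."""
    with open(BASELINE_FILE, encoding="utf-8") as f:
        baseline = json.load(f)
    findings = []
    for path, expected in baseline.items():
        if not os.path.isfile(path):
            findings.append(f"MISSING: {path}")
        elif hash_file(path) != expected:
            findings.append(f"CHANGED: {path}")
    return findings

if __name__ == "__main__":
    # Hypothetical watch list; a real deployment would monitor many more objects.
    build_baseline(["/etc/passwd", "/etc/hosts"])
    for finding in check_integrity():
        print(finding)
```

As the chapter notes, the hard part in practice is choosing objects that do not change during normal system activity, so that any discrepancy the check reports is actually worth investigating.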


Managing the Response to a Computer Security Incident Various tools are available to help detect activity that could indicate a security incident. First, there are system logs. Systems should be configured so that logs capture events such as successful and failed log-ins of administrator-level accounts. In addition, failed log-ins of all accounts should be logged. Because log data is relatively worthless unless someone analyzes it, logs should be reviewed on a regular basis. For many systems, the amount of data captured in logs is so great that it is impossible to review it without a utility that searches for and reports those records that might be of interest. Data integrity checkers exist for UNIX and Windows platforms. These utilities typically keep a database of hash values for specified files, directories, and registry entries. Any time an integrity check is performed, the hash value for each object is computed and compared to its corresponding value in the database. Any discrepancy indicates that the object has changed since the previous integrity check. Integrity checkers can be good indications of an intrusion, but it can take a great deal of effort to configure the software to check only those objects that do not change due to normal system activity. Intrusion detection systems (IDSs) claim to identify attacks on a network or host in real-time. IDSs basically come in two flavors — network based and host based. A network-based IDS examines traffic as it passes through the IDS sensor, comparing sequences of packets to a database of attack signatures. If it finds a match, the IDS reports an event, usually to a console. The IDS may also be able to send an e-mail or dial a pager as it detects specific events. In contrast, a host-based IDS examines log data from a specific host. As the system runs, the IDS looks at information written to logs in real-time and reports events based on policies set within the IDS. Organizations become aware of security incidents in many ways. In one scenario, technical staff probably notices or is made aware of an unusual event and begins to investigate. After some initial analysis, it is determined that the event is a threat to the network, so the incident response plan is invoked. If so, the IRT is brought together and formal logging of all activity related to this incident begins. It should be noted that early detection of an incident could mean a huge difference in the amount of damage and cost to the organization. In particular, this is true of malicious code attacks as well as intrusions. In this phase, the IRT is formally called into action. It is important that certain things occur at this time. Perhaps most importantly, one person should take charge of the process. A log of all applicable events should be initiated at this time and updated throughout the incident. Everyone involved in responding to the incident must be aware of the process. They should all be reminded that the incident will be handled in accordance with guidance provided by the plan, that technical staff should communicate all 881


LAW, INVESTIGATION, AND ETHICS new developments as quickly as possible to the rest of the team, that everyone must remember to observe evidence chain-of-custody guidelines, and that all communication to employees as well as the outside world should flow through official channels. Some organizations will specify certain individuals who should always be notified when the incident response plan is invoked, even if they are not members of the IRT. For example, the highest internal audit official, COO, the highest information security official, or, in the case where each division of an organization has its own incident response capability, corporate information security may be notified. Containment Phase The goal of the containment phase is to keep the incident from spreading. At this time, actions are taken to limit the damage. If it is a malicious code incident, infected servers and workstations may be disconnected from the network. If there is an intruder on the network, the attacker may be limited to one network segment and most privileged accounts may be temporarily disabled. If the incident is a denial-of-service attack, the sources may be able to be identified and denied access to the target network. If one host has been compromised, communication to other hosts may be disabled. There is much that can be done prior to an incident to make the job of containment easier. Putting critical servers on a separate subnet, for example, allows an administrator to quickly deny traffic to those servers from any other subnet or network known to be under attack. It is prudent to consider certain situations in advance and determine how much risk to take if faced with those situations. Consider a situation where information security staff suspects that a rogue NT/2000 administrator with privileges at the top of the tree is logged in to the company’s Active Directory (AD). In effect, the intruder is logged in to every Windows server defined to the AD. If staff cannot identify the workstation used by the intruder, it may be best to immediately disconnect all workstations from the network. On the other hand, such drastic action may not be warranted if the intrusion occurs on a less sensitive or less critical network segment. In another example, consider a devastating e-mail-borne worm spreading through an enterprise. At what point is the e-mail service disabled? The incident response plan should contain guidance for making this decision. The containment phase is also the time when a message to users may be appropriate. Communication experts should craft the message, especially if it goes outside the organization. Eradication Phase Conceptually, eradication is simple — this is the phase in which the problem is eliminated. The methods and tools used will depend on the 882


Managing the Response to a Computer Security Incident exact nature of the problem. For a virus incident, anti-virus signatures may have to be developed and applied; and hard drives or e-mail systems may need to be scanned before access to infected systems is allowed to resume. For an intrusion, systems into which the intruder was logged must be identified and the intruder’s active sessions must be disconnected. It may be possible to identify the device used by the intruder and either logically or physically separate it from the network. If the attack originated from outside the network, connections to the outside world can be disabled. In addition to the immediate effects of the incident, such as an active intruder or virus, other unauthorized changes may have been made to systems as a result of the incident. Eradication includes the examination of network components that may have been compromised for changes to configuration files or registry settings, the appearance of Trojan horses or backdoors designed to facilitate a subsequent security breach, or new accounts that have been added to a system. Recovery Phase During the recovery phase, systems are returned to a normal state. In this phase, system administrators determine (as well as possible) the extent of the damage caused by the incident and use appropriate tools to recover. This is primarily a technical task, with the nature of the incident determining the specific steps taken to recover. For malicious code, antivirus software is the most common recovery mechanism. For denial-of-service attacks, there may not even be a recovery phase. An incident involving unauthorized use of an administrative-level account calls for a review of (at least) configuration files, registry settings, user definitions, and file permissions on any server or domain into which the intruder was logged. In addition, the integrity of critical user databases and files should be verified. This is a phase where tough decisions may have to be made. Suppose, for example, the incident is an intrusion and an administrative account was compromised for a period of two days. The account has authority over many servers, such as in a Windows NT domain. Unless one can account for every action taken by the intruder (maybe an impossible task in the real world), one can never be sure whether the intruder altered operating system files, updated data files, planted Trojan horses, defined accounts that do not show up in directory listings, or left time bombs. The only ways to be absolutely certain that a server has been recovered back to its pre-incident state is to restore from backup using backup tapes known to be taken before the intrusion started, or rebuild the server by installing the operating system from scratch. Such a process could consume a significant amount of time, especially if there are hundreds of servers that could have been compromised. So if a decision is made not to restore from tape or rebuild servers, an organization takes on more risk that the problem will not be 883


LAW, INVESTIGATION, AND ETHICS fully eradicated and systems fully restored. The conditions under which an organization is willing to live with the added risk is a matter deserving of some attention during the preparation phase. Follow-Up Phase It should come as no surprise that after an incident has been detected, contained, eradicated, and all recovery activities have been completed, there is still work to do. In the follow-up phase, closure is brought to the matter with a thorough review of the entire incident. Specific activities at this time include: • Consolidate all documentation gathered during the incident. • Calculate the cost. • Examine the entire incident, analyzing the effectiveness of preparation, detection, containment, eradication, and recovery activities. • Make appropriate adjustments to the incident response plan. Documentation should be consolidated at this time. There may have been dozens of people involved during the incident, particularly in large, geographically dispersed organizations. If legal proceedings begin years later, it is highly unlikely that the documentation kept by each participant will still exist and be accessible when needed. Therefore, all documentation must be collected and archived immediately. There should be no question about the location of all information concerning this incident. Another potential benefit to consolidating all of the documentation is that a similar incident may occur in the future, and individuals handling the new incident should be able to review material from the earlier incident. The cost of the incident should be calculated, including direct costs due to data loss, loss of income due to the unavailability of any part of the network, legal costs, cost of recreating or restoring operating systems and data files, employee time spent reacting to the incident, and lost time of employees who could not access the network or specific services. All aspects of the incident should be examined. Each phase of the plan should be reviewed, beginning with preparation. How did the incident occur — was there a preventable breakdown in controls, did the attacker take advantage of an old, unpatched vulnerability, was there a serious virus infection that may have been prevented with more security awareness? Exhibit 49-3 shows questions that could apply at each phase of the incident. Appropriate adjustments should be made to the incident response plan and to information security practices. No incident response plan is perfect. An organization may be able to avoid future incidents, reduce the damage of future incidents, and get in a position to respond more effectively by 884


applying knowledge gained from a post-incident review. The review might indicate that changes should be made in any number of places, including the incident response plan, existing controls, the level of system monitoring, forensic skills of the technical staff, or the level of involvement of non-IT functions.

Exhibit 49-3. Sample questions for post-incident review.

Preparation
• Were controls applicable to the specific incident working properly?
• What conditions allowed the incident to occur?
• Could more education of users or administrators have prevented the incident?
• Were all of the people necessary to respond to the incident familiar with the incident response plan?
• Were any actions that required management approval clear to participants throughout the incident?

Detection
• How soon after the incident started did the organization detect it?
• Could different or better logging have enabled the organization to detect the incident sooner?
• Does the organization even know exactly when the incident started?
• How smooth was the process of invoking the incident response plan?
• Were appropriate individuals outside of the incident response team notified?
• How well did the organization follow the plan?
• Were the appropriate people available when the response team was called?
• Should there have been communication to inside and outside parties at this time; and if so, was it done?
• Did all communication flow from the appropriate source?

Containment
• How well was the incident contained?
• Did the available staff have sufficient skills to do an effective job of containment?
• If there were decisions on whether to disrupt service to internal or external customers, were they made by the appropriate people?
• Are there changes that could be made to the environment that would have made containment easier or faster?
• Did technical staff document all of their activities?

Eradication and Recovery
• Was the recovery complete — was any data permanently lost?
• If the recovery involved multiple servers, users, networks, etc., how were decisions made on the relative priorities, and did the decision process follow the incident response plan?
• Were the technical processes used during these phases smooth?
• Was staff available with the necessary background and skills?
• Did technical staff document all of their activities?


LAW, INVESTIGATION, AND ETHICS OTHER CONSIDERATIONS Common Obstacles to Establishing an Effective Incident Response Plan It may seem that any organization committed to establishing an incident response plan would be able to put one in place without much difficulty. However, there are many opportunities for failure as you address the issue of incident response. This section describes some of the obstacles that may arise during the effort. • There is a tendency to think of serious computer security incidents primarily as IT issues to be handled on a technical level. They are not. Security incidents are primarily business issues that often have a technical component that needs prompt attention. Organizations that consider security incidents to be IT issues are more likely to make the mistake of including only IT and information security staff on the IRT. • Technical staff with the skills to create and maintain an effective incident response plan may already be overworked simply trying to maintain and improve the existing infrastructure. There can be a tendency to have system administrators put together a plan in their spare time. Typically, these efforts lead to a lot of scurrying to get a plan thrown together in the last few days before a management-imposed deadline for its completion. • It can be difficult to get senior management’s attention unless a damaging incident has already occurred. Here is where it may help to draw parallels between business continuity/disaster recovery and incident response. By and large, executives recognize the benefits of investment in a good business continuity strategy. Pointing out the similarities, especially noting that both are vehicles for managing risk, can help overcome this obstacle. • One can think of a hundred reasons not to conduct exercises of the plan. Too many people are involved; it is difficult to stage a realistic incident to test the plan; everybody is too busy; it will only scare people; etc. Lack of testing can very quickly render an incident response plan less than adequate. Good plans evolve over time and are constantly updated as the business and technical environments change. Without periodic testing and review, even a well-constructed incident response plan will become much less valuable over time. The Importance of Training It is crucial that an organization conduct training exercises. No matter how good an incident response plan is, periodic simulations or walkthroughs will identify flaws in the plan and reveal where the plan has not kept pace with changes in the automation infrastructure. More importantly, it will keep IRT members aware of the general flow as an incident is 886


Managing the Response to a Computer Security Incident reported and the organization responds. It will give technical staff an opportunity to utilize tools that may not be used normally. Each exercise is an opportunity to ensure that all of the tools that might be needed during an incident are still functioning as intended. Finally, it will serve to make key participants more comfortable and more confident during a real incident. Benefits of a Structured Incident Response Methodology As this chapter describes, there is nothing trivial about preparing to respond to a serious computer security incident. Development and implementation of an incident response plan require significant resources and specialized skills. It is, however, well worth the effort for the following reasons. • An incident response plan provides structure to a response. In the event of an incident, an organization would be extremely lucky if its technicians, managers, and users all do what they think best and those actions make for an effective response. On the other hand, the organization will almost always be better served if those people acted against the backdrop of a set of guidelines and procedures designed to take them through each step of the way. • Development of a plan allows an organization to identify actions and practices that should always be followed during an incident. Examples are maintaining a log of activities, maintaining an evidentiary chain of custody, notifying specific entities of the incident, and referring all media inquiries to the public relations staff. • It is more likely that the organization will communicate effectively to employees if an incident response plan is in place. If not, messages to management and staff will tend to be haphazard and may make the situation worse. • Handling unexpected events is easier if there is a framework that is familiar to all the participants. Having critical people comfortable with the framework can make it easier to react to the twists and turns that sometimes occur during an incident. Years ago, security practitioners and IT managers realized that a good business continuity plan was a sound investment. Like business continuity, a computer incident response plan has become an essential part of a good security program. Notes 1. CERT/CC Incident Reporting Guidelines, available at http://www.cert.org/tech_tips/ incident_ reporting.html. 2. CIAC Incident Reporting Procedures, available at http://doe-is.llnl.gov/.


ABOUT THE AUTHOR

Michael Vangelos has over 23 years of IT experience, including 12 specializing in information security. He has managed the information security function at the Federal Reserve Bank of Cleveland for nine years and is currently the bank's information security officer. He is responsible for security policy development, security administration, security awareness, vulnerability assessment, intrusion detection, and information security risk assessment, as well as incident response. He holds a degree in computer engineering from Case Western Reserve University.


Chapter 50

Cyber-Crime: Response, Investigation, and Prosecution
Thomas Akin, CISSP

Any sufficiently advanced form of technology is indistinguishable from magic. — Arthur C. Clarke

As technology grows more complex, the gap between those who understand technology and those who view it as magic is getting wider. The few who understand the magic of technology can be separated into two sides — those who work to protect technology and those who try to exploit it. The first are information security professionals, the latter hackers. To many, a hacker's ability to invade systems does seem magic. For security professionals — who understand the magic — it is a frustrating battle where the numbers are in the hackers' favor. Security professionals must simultaneously protect every single possible access point, while a hacker only needs a single weakness to successfully attack a system. The life cycle in this struggle is:
• Protection
• Detection
• Response
• Investigation
• Prosecution

First, organizations work on protecting their technology. Because 100 percent protection is not possible, organizations realized that if they could not completely protect their systems, they needed to be able to detect when an attack occurred. This led to the development of intrusion detection systems (IDSs). As organizations developed and deployed IDSs, the inevitable occurred: "According to our IDS, we've been hacked! Now what?" This quickly led to the formalization of incident response. In the beginning, most organizations' response plans centered on getting operational again as quickly as possible. Finding out the identity of the attacker was often a low priority. But as computers became a primary storage and transfer medium for money and proprietary information, even minor hacks quickly became expensive. In attempts to recoup their losses, organizations are increasingly moving into the investigation and prosecution stages of the life cycle. Today, while protection and detection are invaluable, organizations must be prepared to effectively handle the response, investigation, and prosecution of computer incidents.

RESPONSE

Recovering from an incident starts with how an organization responds to that incident. It is rarely enough to have the system administrator simply restore from backup and patch the system. Effective response will greatly affect the ability to move to the investigation phase, and can, if improperly handled, ruin any chances of prosecuting the case. The high-level goals of incident response are to preserve all evidence, remove the vulnerability that was exploited, quickly get operational again, and effectively handle PR surrounding the incident. The single biggest requirement for meeting all of these goals is preplanning. Organizations must have an incident response plan in place before an incident ever occurs.

Incidents invariably cause significant stress. System administrators will have customers and managers yelling at them, insisting on time estimates. Executives will insist that they "just get the damn thing working!" Even the customer support group will have customers yelling at them about how they need everything operational now. First-time decisions about incident response under this type of stress always lead to mistakes. It can also lead to embarrassments such as bringing the system back online only to have it hacked again, deleting or corrupting the evidence so that investigation and prosecution are impossible, or ending up on the evening news as the latest casualty in the war against hackers.

To be effective, incident response requires a team of people to help recover from the incident. Technological recovery is only one part of the response process. In addition to having both IT and information security staff on the response team, there are several nontechnical people who should be involved. Every response should include a senior executive, general counsel, and someone from public relations. Additionally, depending on the incident, expanding the response team to include personnel from HR, the physical security group, the manager of the affected area, and even law enforcement may be appropriate.


Cyber-Crime: Response, Investigation, and Prosecution Once the team is put together, take the time to plan response priorities for each system. In a Web server defacement, the top priorities are often getting the normal page operational and handling PR and the media. If an online transaction server is compromised and hundreds of thousands of dollars are stolen, the top priority will be tracking the intruder and recovering the money. Finally, realize that these plans provide a baseline only. No incident will ever fall perfectly into them. If a CEO is embezzling money to pay for online sex from his work computer, no matter what the standard response plan calls for, the team should probably discreetly contact the organization’s president, board of directors, and general counsel to help with planning the response. Each incident’s “big picture” may require changes to some of the preplanned details, but the guidelines provide a framework within which to work. Finally, it is imperative to make sure the members of the response team have the skills needed to successfully respond to the incident. Are IT and InfoSec staff members trained on how to preserve digital evidence? Can they quickly discover an intruder’s point of entry and disable it? How quickly can they get the organization functional again? Can they communicate well enough to clearly testify about technology to a jury with an average education level of sixth grade? Very few system or network administrators have these skills — organizations need to make sure they are developed. Additionally, how prepared is the PR department to handle media inquiries about computer attacks? How will they put a positive spin on a hacker stealing 80,000 credit card numbers from the customer database? Next, general counsel — how up-to-date are they on the ever-changing computer crime case law? What do they know about the liability an organization faces if a hacker uses its system to attack others? Without effective response, it is impossible to move forward into the investigation of the incident. Response is more than “just get the damn thing working!” With widespread hacking tools, a volatile economy, and immature legal precedence, it is not enough to know how to handle the hacker. Organizations must also know how to handle customers, investors, vendors, competitors, and the media to effectively respond to computer crime. INVESTIGATION When responding to an incident, the decision of whether to formally investigate will have to be made. This decision will be based upon factors such as the severity of the incident and the effect an investigation will have on the organization. The organization will also have to decide whether to conduct an internal investigation or contact law enforcement. A normal investigation will consist of: 891


• Interviewing initial personnel
• A review of the log files
• An intrusion analysis
• Forensic duplication and analysis
• Interviewing or interrogating witnesses and suspects
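To make the log review step concrete, the short Python sketch below tallies failed-logon entries in a text log by source address and records a SHA-256 hash of the file so the reviewer can later show exactly which copy was examined. The file name, the "Failed password" marker, and the log format are assumptions made for this illustration (they resemble a typical UNIX authentication log); a real investigation would adapt the parsing to the systems involved.

    import hashlib
    import re
    from collections import Counter

    LOG_FILE = "auth.log"               # hypothetical copy of the log under review
    FAILED_MARKER = "Failed password"   # marker assumed for this illustration
    IP_PATTERN = re.compile(r"from (\d{1,3}(?:\.\d{1,3}){3})")

    # Hash the file first so the review can be tied to a specific copy of the evidence.
    with open(LOG_FILE, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    print("SHA-256 of", LOG_FILE + ":", digest)

    failures = Counter()
    with open(LOG_FILE, "r", errors="replace") as f:
        for line in f:
            if FAILED_MARKER in line:
                match = IP_PATTERN.search(line)
                if match:
                    failures[match.group(1)] += 1

    # Report sources with repeated failures as leads for further analysis.
    for source, count in failures.most_common():
        if count >= 5:
            print(source, "had", count, "failed logons")

The output is only a lead generator; any findings still have to be corroborated and documented as described below.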

Experienced investigators first determine that there actually was an intrusion by interviewing the administrators who discovered the incident, the managers to whom the incident was reported, and even users, to determine if they noticed deviations in normal system usage. Next, they will typically review system and network log files to verify the organization's findings about the intrusion. Once it is obvious that an intrusion has occurred, the investigator will move to a combination of intrusion analysis and forensic analysis. While they often overlap, intrusion analysis is most often performed on running systems, while forensic analysis is done offline on a copy of the system's hard drive. Next, investigators will use the information discovered to locate other evidence, systems to analyze, and suspects to interview.

If the attacker came from the outside, then locating the intruder will require collecting information from any third parties that the attacker passed through. Almost all outside organizations, especially ISPs, will require either a search warrant or subpoena before they will release logs or subscriber information. When law enforcement is involved, it can obtain the search warrant. Non-law enforcement investigators will have to get the organization to open a "John Doe" civil lawsuit in order to subpoena the necessary information. Finally, while the search warrant or subpoena is being prepared, investigators should contact the third party and request that it preserve the evidence that investigators need. Many ISPs delete their logs after 30 days, so it is important to contact them quickly.

Due to the volatility of digital evidence, the difficulty in proving who was behind the keyboard, and constantly changing technology, computer investigations are very different from traditional ones. Significant jurisdictional issues can come up that rarely arise in normal investigations. If an intruder resides in Canada, but hacks into the system by going first through a system in France and then a system in China, where and under which country's laws are search warrants issued, subpoenas drafted, or the case prosecuted? Because of these difficulties, international investigations usually require the involvement of law enforcement, typically the FBI. Few organizations have the resources to handle an international investigation. Corporate investigators can often handle national and internal investigations, contacting law enforcement only if criminal charges are desired.

Computer investigations always involve digital evidence. Such evidence is rarely the smoking gun that makes or breaks an investigation; instead, it often provides leads for further investigation or corroborates other evidence.


For digital evidence to be successfully used in court, it needs to be backed up by either physical evidence or other independent digital evidence such as ISP logs, phone company records, or an analysis of the intruder's personal computer. Even when the evidence points to a specific computer, it can be difficult to prove who was behind the keyboard at the time the incident took place. The investigator must locate additional proof, often through nontechnical means such as interviewing witnesses, to determine who used the computer for the attack.

Many technologies can be learned through trial and error; computer investigation is not one of them. Lead investigators must be experienced. No one wants a million-dollar suit thrown out because the investigator did not know how to keep a proper chain of custody. There are numerous opinions about what makes a good investigator. Some consider law enforcement officers trained in technology the best. Others consider IT professionals trained in investigation to be better. In reality, it is the person, not the specific job title, that makes the difference. Investigators must have certain qualities. First, they cannot be afraid of technology. Technology is not magic, and investigators need to have the ability to learn any type of technology. Second, they cannot be in love with technology. Technology is a tool, not an end unto itself. Those who are so in love with technology that they always have to be on the bleeding edge lack the practicality needed in an investigation.

An investigator's nontechnical talents are equally important. In addition to strong investigative skills, he or she must have excellent communications skills, a professional attitude, and good business skills. Without good oral communications skills, an investigator will not be able to successfully interview people or testify clearly in court if required. Without excellent written communications skills, the investigator's reports will be unclear, incomplete, and potentially torn apart by the opposing attorney. A professional attitude is required to maintain a calm, clear head in stressful and emotional situations. Finally, good business skills help make sure the investigator understands that sometimes getting an organization operational again may take precedence over catching the bad guy.

During each investigation, the organization will have to decide whether to pursue the matter internally or to contact law enforcement. Some organizations choose to contact law enforcement for any incident that happens. Other organizations never call them for any computer intrusion. The ideal is somewhere in between. The decision to call law enforcement should be made by the same people who make up the response team: senior executive management, general counsel, PR, and technology professionals. Many organizations do not contact law enforcement because they do not know what to expect. This often comes from an organization keeping its proverbial head in the sand and not preparing incident response plans ahead of time. Other reasons organizations may choose not to contact law enforcement include:


• They are unsure about law enforcement's computer investigation skills.
• They want to avoid publicity regarding the incident.
• They have the internal resources to resolve the investigation successfully.
• The incident is too small to warrant law enforcement attention.
• They do not want to press criminal charges.

The reasons many organizations will contact law enforcement are:

• They do not have the internal capabilities to handle the investigation.
• They want to press criminal charges.
• They want to use a criminal prosecution to help in a civil case.
• They are comfortable with the skills of law enforcement in their area.
• The incident is international in scope.

All of these factors must be taken into account when deciding whether to involve law enforcement. When law enforcement is involved, it will take over and use state and federal resources to continue the investigation. Law enforcement also has legal resources available to it that corporate investigators do not. However, law enforcement will still need the help of company personnel because those people are the ones who have an in-depth understanding of the policies and technology involved in the incident. It is also important to note that involving law enforcement does not automatically mean the incident will be on the evening news. Over the past few years, the FBI has successfully handled several large-scale investigations for Fortune 500 companies while keeping the investigation secret. This allowed the organizations to publicize the incident only after it had been successfully handled and to avoid damaging publicity.

Finally, law enforcement is overwhelmed by the number of computer crime cases it receives, which forces it to prioritize cases. Officially, the FBI will not open an investigation unless damages reach the $5000 threshold set by the Computer Fraud and Abuse Act; in practice, the figure is significantly higher. The reality is that a defaced Web site, unless there are quantifiable losses, will not get as much attention from law enforcement as the theft of 80,000 credit card numbers.

PROSECUTION

After the investigation, organizations have four options: ignore the incident, use internal disciplinary action, pursue civil action, or pursue criminal charges. Ignoring the incident is usually acceptable only for very minor infractions where there is very little loss and little liability from ignoring the incident. Internal disciplinary action can be appropriate if the intruder is an employee. Civil lawsuits can be used to attempt to recoup losses. Criminal charges can be brought against those violating local, state, or federal laws.


Civil cases require only a "preponderance of evidence" to show the party liable, while criminal cases require evidence to prove someone guilty "beyond a reasonable doubt."

When going to trial, not all of the evidence collected will be admissible in court. Computer evidence is very different from physical evidence. Computer logs are considered hearsay and therefore generally inadmissible in court. However, computer logs that are regularly used and reviewed during the normal course of business are considered business records and are therefore admissible. There are two points to be aware of regarding computer logs. First, if the logs are simply collected but never reviewed or used, then they may not be admissible in court. Second, if additional logging is turned on during the course of an investigation, those logs will not be admissible in court. That does not mean additional logging should not be performed, but such logging needs to lead to other evidence that will be admissible.

Computer cases have significant challenges during trial. First, few lawyers understand technology well enough to put together a strong case. Second, fewer judges understand technology well enough to rule effectively on it. Third, the average jury has little or no computer literacy. With these difficulties, correctly handling the response and investigation phases is crucial because any mistakes will confuse the already muddy waters. Success in court requires a skilled attorney and expert witnesses, all of whom can clearly explain complex technology to those who have never used a computer. These challenges are why many cases are currently plea-bargained before ever going to trial.

Another challenge organizations face is the financial insolvency of attackers. With the easy availability of hacking tools, many investigations lead back to teenagers. Teenagers with automated hacking tools have been able to cause billions of dollars in damage. How can such huge losses be recovered from a 13-year-old? Even if the attacker were financially successful, there is no way an organization could recoup billions of dollars in losses from a single person.

It is also important to accurately define the losses. Most organizations have great difficulty in placing a value on their information. How much is a customer database worth? How much would it cost if it were given to a competitor? How much would it cost if it were inaccessible for three days? These are the types of questions organizations must answer after an incident. It is easy to calculate hardware and personnel costs, but calculating intangible damages can be difficult. Undervalue the damages, and the organization loses significant money. Overvalue the damages, and the organization's credibility can be hurt, allowing opposing counsel to portray it as a money-hungry Goliath more interested in profit than the truth.
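To make the valuation problem concrete, the fragment below walks through a purely hypothetical loss calculation. Every figure is invented for illustration; a real case would substitute documented costs and defensible estimates, especially for the intangible items.

    # Hypothetical incident loss estimate (all figures invented for illustration).
    response_hours = 120          # staff time spent on response and investigation
    loaded_hourly_rate = 95.00    # fully loaded cost per staff hour
    replacement_hardware = 8500.00
    downtime_hours = 36
    revenue_per_hour = 1200.00    # estimated revenue lost per hour of outage

    tangible = response_hours * loaded_hourly_rate + replacement_hardware
    downtime_loss = downtime_hours * revenue_per_hour

    # Intangible items (customer confidence, exposed proprietary data) are the hard
    # part; here they are simply a documented estimate rather than a calculation.
    intangible_estimate = 25000.00

    total = tangible + downtime_loss + intangible_estimate
    print("Tangible:", tangible, "Downtime:", downtime_loss,
          "Intangible (est.):", intangible_estimate, "Total:", total)

The point of writing the estimate down this way is that every figure can be tied to a receipt, a time sheet, or a stated assumption when opposing counsel challenges it.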


Any trial requires careful consideration and preparation, and those involving technology even more so. Successful civil and criminal trials are necessary to keep computer crime from becoming even more rampant; however, a successful trial requires that organizations understand the challenges inherent in a case involving computer crime.

SUMMARY

For most people, technology has become magic: they know it works, but have no idea how. Those who control this magic fall into two categories, protectors and exploiters. Society uses technology to store and transfer more and more valuable information every day. It has become the core of our daily communications, and no modern business can run without it. This dependency and technology's inherent complexity have created ample opportunity for the unethical to exploit technology to their advantage. It is each organization's responsibility to ensure that its protectors not only understand protection but also know how to successfully respond to, investigate, and help prosecute the exploiters as they appear.

RESPONSE SUMMARY

• Preplan a response strategy for all key assets.
• Make sure the plan covers more than technological recovery alone; it must address how to handle customers, investors, vendors, competitors, and the media to be effective.
• Create an incident response team consisting of personnel from the technology, security, executive, legal, and public relations areas of the organization.
• Be flexible enough to handle incidents that require modifications to the response plan.
• Ensure that response team members have the appropriate skills required to effectively handle incident response.

INVESTIGATION SUMMARY

• Organizations must decide if the incident warrants an investigation.
• Who will handle the investigation — corporate investigators or law enforcement?
• Key decisions should be made by a combination of executive management, general counsel, PR, and technology staff members.
• Investigators must have strong skills in technology, communications, business, and evidence handling — skills many typical IT workers lack.
• Digital evidence is rarely a smoking gun and must be corroborated by other types of evidence or independent digital evidence.


• Knowing what computer an attack came from is not enough; investigators must be able to prove who was behind the keyboard during the attack.
• Corporate investigators can usually investigate national and internal incidents successfully. International incidents usually require the help of law enforcement.
• Law enforcement, especially federal, will typically require significant damages before dedicating resources to an investigation.

PROSECUTION SUMMARY

• Organizations can ignore the incident, use internal disciplinary action, pursue civil action, or pursue criminal charges.
• Civil cases require a "preponderance of evidence" to prove someone liable, while criminal cases require evidence "beyond a reasonable doubt."
• Most cases face the difficulties of financially insolvent defendants; computer-illiterate prosecutors, judges, and juries; and a lack of strong case law.
• Computer logs are inadmissible as evidence unless they are used in the "normal course of business."
• Due to the challenges of testifying about complex technology, many cases result in a plea bargain before they ever go to trial.
• Placing value on information is difficult, and overvaluing the information can be as detrimental as undervaluing it.
• Most computer attackers are financially insolvent and do not have the assets to allow organizations to recoup their losses.
• Successful cases require attorneys and expert witnesses to be skilled at explaining complex technologies to people who are computer illiterate.

ABOUT THE AUTHOR

Thomas Akin is a CISSP with a decade of experience in information security. He is the founding director of the Southeast Cybercrime Institute and is an active member of the Georgia Cybercrime Task Force. Thomas's other publications include Hardening Cisco Routers and several articles relating to information security. He can be reached at [email protected].




Domain 10

Physical Security


All of the controls available to us will not provide an adequate level of security if an employee inadvertently allows someone disguised as a pizza delivery person into the data center and within range to do harm. Physical security is as critical as other precautions in providing a standard of due care.

This domain begins with a grounding in the objectives of implementing physical access controls, including providing for a safe workplace. We then address the technology of closed-circuit television (CCTV) as a deterrent to insecure behavior and a mechanism for the enforcement of security policy. This interesting chapter describes CCTV technology, its purpose, and the proper placement and use of camera equipment to support a good security position. Finally, we present a striking discourse that offers an inclusive look at physical threats, the increased risk of terrorism, how to conduct physical security assessments, implementing comprehensive operational security processes and procedures, and lessons learned from the September 2001 attack on the World Trade Center towers. As with the other chapters in this Handbook, this one is a must-read.



Chapter 51

Computing Facility Physical Security

Allen Brusewitz, CISSP, CBCP

Most information security practitioners are experienced in and concentrate on the logical issues of computer and telecommunications security while leaving physical security to another department. However, most of us would agree that a knowledgeable person with physical access to a console could bypass most of our logical protective measures by simply rebooting the system or accessing a system in the computer room that is already turned on with root or administrator access. Additionally, an unlocked wiring closet could provide hidden access to a network or a means to sabotage existing networks. Physical access controls and protective measures for computing resources are key ingredients of a well-rounded security program.

However, protection of the entire facility is even more important to the well-being of employees and visitors within the workplace. Also, valuable data is often available in hard copy on the desktop, by access to applications, and by using machines that are left unattended. Free access to the entire facility during or after work hours would be a tremendous asset to competitors or people conducting industrial espionage. There is also a great risk from disgruntled employees who might wish to do harm to the company or to their associates.

As demonstrated in the September 11 attack on the World Trade Center, greater dangers now exist than we may have realized. External dangers seem more probable than previously thought. Physical access to facilities, lack of control over visitors, and lack of identification measures may place our workplaces and our employees in danger. Additionally, economic slowdowns that cause companies to downsize may create risks from displaced employees who may be upset about their loss of employment.

Physical security is more important than ever to protect valuable information and even more valuable employees.



It must be incorporated into the total information security architecture. It must be developed with several factors in mind, such as the cost of remedies versus the value of the assets, perceived threats in the environment, and protective measures that have already been implemented. The physical security plan must be developed and sold to employees as well as management to be successful. It must also be reviewed and audited periodically and updated with improvements developed to support the business of the organization.

COMPUTING CENTERS

Computing centers have evolved over the years, but they remain the area where critical computing assets are enclosed and protected from random or unauthorized access. They have varying degrees of protection and protective measures, depending on the perceptions of management and the assets they contain. Members of the technical staff often demand computing center access during off-hours, claiming that they might have to reboot systems. Members of management may also demand access because their position in the company requires that they have supervisory control over company assets. Additionally, computer room access is granted to non-employees such as vendors and customer engineers to service the systems. Keeping track of authorized access and ensuring that it is kept to a minimum is a major task for the information security department. Sometimes, the task is impossible when the control mechanisms consist of keys or combination locks.

Computing Center Evolution

In the days of large mainframes, computing centers often occupied whole buildings with some space left around for related staff. Those were the days of centralized computing centers where many people were needed to perform a number of required tasks. Operators were required to run print operations, mount and dismount tapes, and manage the master console. Production control staffs were required to set up and schedule jobs. In addition, these centers required staffs of system programmers and, in some cases, system developers. Computer security was difficult to manage, but some controls were imposed with physical walls in place to keep the functions separate. Some of these large systems still remain; however, physical computer room tasks have been reduced through automation and departmental printing.

As distributed systems evolved, servers were installed and managed by system administrators who often performed all system tasks. Many of these systems were built to operate in office environments without the need for stringent environmental controls over heat and humidity.


As a result, servers were located in offices where they might not be placed behind a locked door. That security was further eroded with the advent of desktop computing, when data became available throughout the office. In many cases, the servers were implemented and installed in the various departments that wanted control over their equipment and did not want control to go back to the computing staff with their bureaucratic change controls, charge-backs, and perceived slow response to end-user needs.

As the LANs and distributed systems grew in strategic importance, acquired larger user bases, and needed software upgrades and interconnectivity, it became difficult for end-user departments to manage and control the systems. Moreover, the audit department realized that there were security requirements that were not fulfilled in support of these critical systems. This resulted in the migration of systems back to centralized control and centralized computer rooms. While these systems could withstand environmental fluctuations, the sheer number of servers required some infrastructure planning to keep the heat down and to provide uninterruptible power and network connectivity. In addition, the operating system and user administration tasks became more burdensome and required an operations staff to support them. However, these systems no longer required the multitudes of specialized staffs in the computer rooms to support them. Print operations disappeared for the most part, with data either displayed at the desktop or sent to a local printer for hard copy.

In many cases, computer centers still support large mainframes, but these take up a much smaller footprint than the machines of old. Some of those facilities have been converted to support LANs and distributed UNIX-based systems. However, access controls, environmental protections, and backup support infrastructure must still be in place to provide stability, safety, and availability. The security practitioner must play a part in ensuring that physical security measures are in place and effective. As stated before, the computing center is usually part of a facility that supports other business functions. In many cases, that facility supports the entire business. Physical security must be developed to support the entire facility, with special considerations for the computing center that is contained within. In fact, protective measures that are applied in and around the entire facility provide additional protection to the computing center.

ENVIRONMENTAL CONCERNS

Most of us do not have the opportunity to determine where our facilities will be located because they probably existed prior to our appointment as an information security staff member.


However, that does not prevent us from trying to determine what environmental risks exist and taking action to reduce them. If lucky, you will have some input regarding relocation of the facilities to areas with reduced exposure to threats such as flight paths, earthquake faults, and floodplains.

Community

The surrounding community may contribute to computer room safety as well as risks. Communities that have strong police and fire services will be able to provide rapid response to threats and incidents. Low crime rates and strong economic factors provide safety for the computing facilities as well as a favorable climate for attracting top employees. It is difficult to find the ideal community, and in most cases you will not have the opportunity to select one. Other businesses in that community may present dangers such as explosive processes, chemical contaminants, and noise pollution. Community airports may have landing and takeoff flight paths that are near the facility. High crime rates could also threaten the computing facility and its inhabitants. Protective measures may have to be enhanced to account for these risks.

The security practitioner can enhance the value of community capabilities by cultivating a relationship with the local police and fire protection organizations. A good relationship with these organizations not only contributes to the safety of the facilities, but also will be key to the safety of the staff in the event of an emergency. They should be invited to participate in emergency drills and to critique the process. The local police should be invited to tour the facilities and understand the layout of the facilities and the protective measures in place. In fact, they should be asked to suggest improvements to the existing measures that you have employed. If you have a local guard service, it is imperative that the guards have a working relationship with the local police officials. The fire department will be more than happy to review fire protection measures and assist in improving them. In many cases, they will insist on inspecting such things as fire extinguishers and other fire suppression systems. It is most important that the fire department understand the facility layout and points of ingress and egress. They must also know about the fire suppression systems in use and the location of controls for those systems.

Acts of Nature

In most cases we cannot control the moods of Mother Nature or the results of her wrath. However, we can prepare for the most likely events and try to reduce their effects. Earthquake threats may require additional bracing and tie-down straps to prevent servers and peripheral devices from being destroyed by tipping or falling.


Flooding risks can be mitigated by installing sump pumps and locating equipment above the ground floor. Power outages resulting from tornadoes and thunderstorms may be addressed with uninterruptible power supply (UPS) systems and proper grounding of facilities. The key point with natural disasters is that, in most cases, they cannot be eliminated. Remedies must be designed based on the likelihood that an event will occur and with provisions for a proper response to it. In all cases, data backup with off-site storage or redundant systems are required to prepare for manmade or natural disasters.

Other External Risks

Until the events that occurred on September 11, 2001, physical security concerns related to riots, workplace violence, and local disruptions. The idea of terrorist acts within the country seemed remote but possible. Since that date, terrorism is not only possible, but also probable. Measures to protect facilities by use of cement barriers, no-parking zones, and guarded access gates have become understandable to both management and staff. The cost and inconvenience that these measures impose are suddenly more acceptable. Many of our facilities are located in areas that are considered outside the range of likely terrorist targets. However, the Oklahoma City bombing occurred in a low-target area. The anthrax problems caused many unlikely facilities to be vacated. The risks of bioterrorism or attacks on nuclear power plants are now considered real and possible, and could occur in almost any city. Alternate site planning must be considered in business continuity and physical security plans.

FACILITY

The facilities that support our computing environments are critical to the organization in providing core business services and functions. There are few organizations today that do not rely on computing and telecommunications resources to operate their businesses and maintain services to their customers. This requires security over both the physical and logical aspects of the facility. The following discussion concentrates on the physical protective measures that should be considered for use in the computing center and the facilities that surround it.

Layers of Protection

For many computing facilities, the front door is the initial protection layer that is provided to control access and entry to the facility. This entry point will likely be one of many others such as back doors, loading docks, and other building access points. A guard or a receptionist usually controls front-door access.


Beyond that, other security measures apply based on the value of the contents within. However, physical security of facilities may begin outside the building.

External Protective Measures. Large organizations may have protective fences surrounding the entire campus, with access controlled by a guard-activated or card-activated gate. The majority of organizations will not have perimeter fences around the campus but may have fences around portions of the building. In most of those cases, the front of the building is not fenced due to the need for entry by customers, visitors, and staff. These external protective measures may be augmented through the use of roving guards and closed-circuit television (CCTV) systems that provide a 360-degree view of the surrounding area.

Security practitioners must be aware of the risks and implement cost-effective measures that provide proper external protection. Measures to consider are:

• Campus perimeter fences with controlled access gates
• Building perimeter fences with controlled access gates
• Building perimeter fences controlling rear and side access to the building
• Cement barriers in front of the building
• Parking restricted to areas away from the building
• CCTV viewing of building perimeters

External Walls. Facilities must be constructed to prevent penetration by accidental or unlawful means. Windows provide people comforts for office areas and natural light, but they can be a means for unauthorized entry. Ground floors may be equipped with windows; however, they could be eliminated if that floor were reserved for storage and equipment areas. Loading docks may provide a means of unauthorized entry and, if possible, should be located in unattached buildings or be equipped with secured doors to control entry. Doors that are not used for normal business purposes should be locked and alarmed, with signs that prohibit their use except for emergencies.

Internal Structural Concerns. Critical rooms such as server and telecommunications areas should be constructed for fire prevention and access control. Exterior walls for these rooms should not contain windows or other unnecessary entry points. They should also be extended above false ceilings and below raised floors to prevent unlawful entry and provide proper fire protection. Additional entry points may be required for emergency escape or equipment movement. These entrances should be locked when not in use and should be equipped with alarms to prevent unauthorized entry.


Ancillary Structures (Wiring Cabinets and Closets). Wiring cabinets may be a source of unauthorized connectivity to computer networks and must be locked at all times unless needed by authorized personnel. Janitor closets should be reserved for that specific purpose and should not contain critical network or computing connections. They must be inspected on a regular basis to ensure that they do not contain flammable or other hazardous materials.

Facility Perils and Computer Room Locations

Computer rooms are subject to hazards that are created within the general facility. These hazards can be reduced through good facility design and consideration for critical equipment.

Floor Locations. Historically, computing equipment was added to facilities that were already in use for general business processes. Often, the only open area left for computing equipment was the basement. In many cases, buildings were not built to support heavy computers and disk storage devices on upper floors, so the computer room was constructed on the ground floor. In fact, organizations were so proud of the flashy computer equipment that they installed observation windows for public viewing, with large signs to assist visitors in getting there.

Prudent practices, along with a realization that computing resources were critical to the continued operation of the company, have caused computing facilities to be relocated to more protected areas with minimal notice of their special status. Computer rooms have been moved to upper floors to mitigate flooding and access risks. Freight elevators have been installed to facilitate installation and removal of computing equipment and supplies. Windows have been eliminated, and controlled doors have been added to ensure only authorized access.

Rest Rooms and Other Water Risks. Water hazards that are located above computer rooms could cause damage to critical computing equipment if flooding or leakage occurs. A malfunctioning toilet or sink that overflows in the middle of the night could be disastrous to computer operations. Water pipes that are installed in the flooring above the computer room could burst or begin to leak in the event of earthquakes or corrosion. A well-sealed floor will help, but the best prevention is to keep those areas clear of water hazards.

Adjacent Office Risks. Almost all computing facilities have office areas to support the technical staff or, in many cases, the rest of the business. These areas can present risks to the computing facility from fire, unauthorized access, or chemical spills. Adjacent office areas should be equipped with appropriate fire suppression systems that are designed to control flammable material and chemical fires.


Loading docks and janitor rooms can also be a source of risk from fire and chemical hazards. Motor-generator UPS systems should be located in a separate building due to their inherent risks of fire and carbon monoxide. The local fire department can provide assistance in reducing risks that may be contained in other offices as well as the computing center.

PROTECTIVE MEASURES

Entrances to computing facilities must be controlled to protect critical computing resources, but they must also be controlled to protect employees and sensitive business information. As stated before, valuable information is often left on desks and in unlocked cabinets throughout the facility. Desktop computers are often left on overnight with valuable information stored locally. In some cases, these systems are left logged on to sensitive systems. Laptops with sensitive data can be stolen at night and even during business hours. To protect valuable information resources, people, and systems, various methods and tools should be considered. Use of any of these tools must be justified according to the facility layout and the value of the resources contained within.

Guard Services

There are many considerations related to the use of guard services. The major consideration, other than whether to use them at all, is employee versus purchased services. The use of employee guards may be favored by organizations with the idea that employees are more loyal to the organization and will be trustworthy. However, there are training, company benefit, and insurance considerations that accompany that decision. Additionally, the location may not have an alternative guard source available. If the guards are to be armed, stringent controls and training must be considered.

There are high-quality guard services available in most areas that will furnish trained and bonded guards who are supervised by experienced managers. While cost is a factor in the selection of a contract guard service, it should not be the major one. The selection process should include a request for proposal (RFP) that requires references and stringent performance criteria. Part of the final selection process must include discussions with customer references and a visit to at least two customer sites. Obviously, the guard service company should be properly licensed and provide standard business documentation.

The guard service will be operating existing and planned security systems that may include CCTV, card access systems, central control rooms, and fire suppression systems.


Before contracting with an organization, that organization must demonstrate the capability to operate existing and planned systems. It should also be able to provide documented operating procedures that can be modified to support the facility's needs.

Intrusion Monitoring Systems

Closed-circuit television (CCTV) systems have been used for years to protect critical facilities. These systems have improved considerably over the years to provide digital images that take up less storage space and can be transmitted over TCP/IP-based networks. Their images can be combined with other alarm events to provide a total picture for guard response as well as event history. Digital systems that are activated in conjunction with motion detection or other alarms may be more effective because their activation signals a change to the guard who is assigned to watch them.

CCTV systems allow guards to keep watch on areas that are located remotely, are normally unmanned, or require higher surveillance, such as critical access points. These systems can reduce the need for additional manpower to provide control over critical areas. In many cases, their mere presence serves as a deterrent to unwanted behavior. They may also contribute to employee safety by providing surveillance over parking areas, low-traffic areas, and high-value functions such as cashier offices. A single guard in a central control center can spot problems and dispatch roving manpower to quickly resolve threats. In addition, stored images may be used to assist law enforcement in apprehending violators and as evidence in a court of law. Security requirements will vary with different organizations; however, CCTV may be useful in the following areas:

• Parking lots, for employee and property safety
• Emergency doors where access is restricted
• Office areas during nonworking hours
• Server and telecommunications equipment rooms during nonworking hours
• Loading docks and delivery gates
• Cashier and check-processing areas
• Remote facilities where roving guards would be too costly
• Executive office areas, in support of executive protection programs
• Mantrap gates, to ensure all entry cards have been entered
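The motion-activated recording described above is normally built into the CCTV equipment itself, but the underlying idea can be sketched in a few lines: compare successive frames and raise an alarm when too many pixels change. The Python fragment below is a simplified illustration that assumes frames are already available as 8-bit grayscale NumPy arrays; the thresholds are invented and would be tuned to the scene being watched.

    import numpy as np

    PIXEL_DELTA = 25         # per-pixel brightness change treated as "different" (0-255 scale)
    CHANGED_FRACTION = 0.02  # alarm if more than 2 percent of pixels changed

    def motion_detected(previous, current):
        """Return True when the difference between two grayscale frames suggests motion."""
        delta = np.abs(current.astype(np.int16) - previous.astype(np.int16))
        changed = np.count_nonzero(delta > PIXEL_DELTA)
        return changed / delta.size > CHANGED_FRACTION

    # Example with synthetic frames: a dark scene, then a bright object entering it.
    prev_frame = np.zeros((240, 320), dtype=np.uint8)
    next_frame = prev_frame.copy()
    next_frame[100:140, 150:200] = 200   # simulated intruder
    print(motion_detected(prev_frame, next_frame))   # True for this synthetic change

Commercial systems use far more sophisticated analysis, but the same principle lets a quiet camera feed announce itself only when something in the scene changes.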

Alarms and motion-detection systems are designed to signal the organization that an unusual or prohibited event has occurred. Doors that should not be used during normal business activity may be equipped with local sound alarms or with electronic sensors that signal a guard or activate surveillance systems. Motion detectors are often installed in areas that are normally unmanned.


In some systems, motion detection is activated during nonbusiness hours and can be disabled or changed to allow for activities that are properly scheduled in those areas. Many systems can be IP addressable over the backbone TCP/IP network, and alarm signals can be transmitted from multiple remote areas. It is important to note that IP-based systems may be subject to attack. Vendors of these systems must ensure that they are hardened against covert activities by unauthorized people.

Physical Access Control Measures

Physical access controls are as important as logical access controls to protect critical information resources. Multiple methods are available, including manual and automated systems. Often, cost is the deciding factor in their selection despite the risks inherent in those tools.

Access Policies. All good security begins with policies. Policies are the drivers of written procedures that must be in place to provide consistent best practices in the protection of people and information resources. Policies are the method by which management communicates its wishes. Policies are also used to set standards and assign responsibility for their enforcement. Once policies are developed, they should be published for easy access and be part of the employee awareness training program.

Policies define the process of granting and removing access based on need-to-know. If badges are employed, policies define how they are to be designed, worn, and used. Policies define who is allowed into restricted areas or how visitors are to be processed. There is no magic to developing policies, but they are required as a basic tool to protect information resources. Keys and Cipher Locks. Keys and cipher (keypad) locks are the simplest to use and hardest to control in providing access to critical areas. They do not provide a means of identifying who is accessing a given area, nor do they provide an audit trail. Keys provide a slightly better security control than keypad locks in that the physical device must be provided to allow use. While they can be copied, that requires extra effort to accomplish. If keys are used to control access, they should be inventoried and stamped with the words Do Not Duplicate.

Cipher locks require that a person know the cipher code to enter an area. Once given out, use of this code cannot be controlled, and the code may be passed throughout an organization by word of mouth. There is no audit trail for entry, nor is there authentication that it is used by an authorized user. Control methods consist of periodic code changes and shielding to prevent other people from viewing the authorized user's code entry.


Areas controlled by these methods of entry could be better protected through the use of CCTV.

Card Access Controls. Card access controls are considerably better tools than keys and cipher locks if they are used for identification and contain a picture of the bearer. Without pictures, they are only slightly better than keys because they are more difficult to duplicate. If given to another person to gain entry, the card must be returned for use by the authorized cardholder. Different types of card readers can be employed to provide ease of use (proximity readers) and different card identification technology. Adding biometrics to the process would provide added control, along with increased cost and inconvenience that might be justified to protect the contents within.

The most effective card systems use a central control computer that can be programmed to provide different access levels depending on need, time zone controls that limit access to certain hours of the day, and an audit trail of when the card was used and where it was entered. Some systems even provide positive in and out controls that require a card to be used for both entry and exit. If a corresponding entry/exit transaction is not in the system, future entry will be denied until management investigates and takes action.

Smart card technology is being developed to provide added security and functionality. Smart cards can have multiple uses that expand beyond mere physical access. Additional uses for this type of card include computer access authentication, encryption using digital certificates, and debit cards for employee purchases in the cafeteria or employee store. There is some controversy about multiple-use cards because a single device can be used to gain access to many different resources. On the other hand, if the employee smart card provides multiple access functions as well as purchasing functions, the cardholder will be less likely to loan the badge to an unauthorized person and will be more likely to report its loss.

Mantraps and Turnstiles. Additional controls can be provided through the use of mantraps and turnstiles. These devices prevent unauthorized tailgating and can be used to require inspection of parcels when combined with guard stations. These devices also force the use of a badge to enter through a control point and overcome the tendency of guards to allow entry because the person looks familiar to them. Mantraps and turnstiles can control this weakness if the badge is confiscated upon termination of access privileges. The use of positive entry/exit controls can be added to prevent card users from passing their card back through the control point to let a friend enter.
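The time zone restrictions and positive entry/exit (anti-passback) controls described above are ultimately just rules evaluated by the central control computer. The Python sketch below is a simplified, hypothetical model of that decision logic, not a representation of any particular vendor's product; the cardholder records and time windows are invented for illustration.

    from datetime import datetime

    # Hypothetical cardholder records: allowed doors, allowed hours, last direction used.
    CARDHOLDERS = {
        "1001": {"doors": {"lobby", "computer-room"}, "hours": (7, 19), "last_direction": "out"},
        "1002": {"doors": {"lobby"}, "hours": (0, 24), "last_direction": "out"},
    }

    AUDIT_TRAIL = []  # (timestamp, card, door, direction, granted)

    def request_access(card, door, direction, when):
        holder = CARDHOLDERS.get(card)
        granted = (
            holder is not None
            and door in holder["doors"]
            and holder["hours"][0] <= when.hour < holder["hours"][1]
            and holder["last_direction"] != direction   # anti-passback: entries and exits must alternate
        )
        if granted:
            holder["last_direction"] = direction
        AUDIT_TRAIL.append((when.isoformat(), card, door, direction, granted))
        return granted

    print(request_access("1001", "computer-room", "in", datetime(2002, 11, 14, 9, 30)))   # True
    print(request_access("1001", "computer-room", "in", datetime(2002, 11, 14, 9, 35)))   # False: no exit recorded

Every request, granted or denied, lands in the audit trail, which is what gives card systems their advantage over keys and cipher locks.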


Fire Controls

Different fire control mechanisms must be employed to match the risks that are present in protected areas. Fire control systems may be as simple as a hand-held fire extinguisher or be combined with various detection mechanisms to provide automated activation. Expert advice should be used to match the proper system to the existing threats. In some cases, multiple systems may be used to ensure that fires do not reignite and cause serious damage.

Detectors and Alarms. Smoke and water detectors can provide early warning and alert the guards that something dangerous may be happening. Alarms may also trigger fire prevention systems to activate. To be effective, they must be carefully placed and tested by experts in fire prevention.

Water-Based Systems. Water-based systems control fires by reducing temperatures below the combustion point. They are usually activated through overhead sprinklers to extinguish fires before they can spread. The problem with water-based systems is that they cause a certain amount of damage to the contents of the areas they are designed to protect. In addition, they may cause flooding in adjacent areas if they are not detected and shut off quickly following an event.

Water-based systems may be either dry pipe or wet pipe systems. Wet pipe systems are always ready to go and are activated when heat or accidental means open the sprinkler heads. There is no delay or shut-off mechanism that can be activated prior to the start of water flow. Water standing in the pipes that connect to the sprinkler heads may cause corrosion, leading to failure of the sprinkler heads to activate in an emergency. Dry pipe systems are designed to allow some preventive action before they activate. These systems employ a valve to prevent the flow of water into the overhead pipes until a fire alarm event triggers water release. Dry pipe systems will not activate and cause damage if a sprinkler head is accidentally broken off. They also allow human intervention to override water flow if the system is accidentally activated.

Gas-Based Fire Extinguishing Systems. Halon-type systems are different from water-based systems in that they control fires by interrupting the chemical reactions needed to continue combustion. They replaced older gas systems such as carbon dioxide, which controlled fires by displacing the oxygen with a gas (CO2) that did not support the combustion process. Oxygen replacement systems were effective, but they were dangerous to humans who might be in the CO2-flooded room because of the need for oxygen to survive.


Throughout the 1970s and 1980s, Halon systems were the preferred method of protecting computer and telecommunication rooms from fire damage because they extinguished the fire without damaging sensitive electronic equipment. Those systems could extinguish fires and yet allow humans to breathe and survive in the flooded room. The problem with Halon is that it proved unfriendly to the ozone layer and was banned from new implementations by an international agreement (the Montreal Protocol). There are numerous Clean Air Act and EPA regulations now in effect to govern the use of existing Halon systems and supplies. Current regulations and information can be obtained by logging onto www.epa.gov/docs/ozone/title6/snap/hal.html. This site also lists manufacturers of Halon substitute systems.

Today, Halon replacement systems are available that continue to extinguish fires, do not harm the ozone layer, and, most important, do not harm humans who may be in the gas-flooded room. While these systems will not kill human inhabitants, most system manufacturers warn that people should leave the gas-flooded area within one minute of system activation. Current regulations do not dictate the removal of Halon systems that are in place; however, any new or replacement system must employ one of the newer ozone-friendly gases (e.g., FM-200).

Utility and Telecommunication Backup Requirements

Emergency Lighting. As stated before, modern computer rooms usually lack windows or other sources of natural light. Therefore, when a power outage occurs, these rooms become very dark and exits become difficult to find. Even in normal offices, power outages may occur in areas that are staffed at night. In all of these cases, emergency lighting with exit signs must be installed to allow people to evacuate in an orderly and safe manner. Emergency lighting is usually provided by battery-equipped lamps that are constantly charged until activated.

UPS Systems. Uninterruptible power supply (UPS) systems ensure that a computing system can continue to run, or at least shut down in an orderly manner, if normal power is lost. Lower-cost systems rely on battery backup to provide an orderly shutdown, while motor-generator backup systems used in conjunction with battery backup can provide continuous power as long as the engines receive fuel (usually diesel). As usual, cost is the driver for choosing the proper UPS system. More enlightened management will insist on a business impact analysis prior to making that decision to ensure that critical business needs are met.
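When weighing battery-only UPS units against motor-generator backup, a rough runtime estimate helps frame the cost discussion. The calculation below is only a first-order approximation with invented numbers; it ignores battery aging, temperature, and discharge-curve effects, so vendor runtime tables and the business impact analysis should govern any real decision.

    # First-order UPS battery runtime estimate (illustrative numbers only).
    battery_voltage = 48.0       # volts, total battery string
    battery_capacity = 100.0     # amp-hours
    inverter_efficiency = 0.90   # fraction of stored energy delivered to the load
    load_watts = 1500.0          # protected computing load

    stored_energy_wh = battery_voltage * battery_capacity * inverter_efficiency
    runtime_minutes = stored_energy_wh / load_watts * 60
    print("Estimated runtime: about", round(runtime_minutes), "minutes")   # roughly 173 minutes here

If the required ride-through time exceeds what batteries alone can reasonably supply, the analysis points toward a motor-generator set.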

Regardless of the type of system employed, periodic testing is required to ensure that the system will work when needed.


Diesel systems should be tested weekly to ensure they work and to keep the engines properly lubricated.

Redundant Connections. Redundancy should be considered for facility electrical power, air conditioning, telecommunication connections, and water supplies. Certain systems such as UPS can be employed to mitigate the need for electrical redundancy. Telecommunication connectivity should be ensured with redundant connections. In this E-commerce world, telecommunications redundancy should also include connections to the Internet. Water is important to the staff, but environmental systems (cooling towers) may also depend on a reliable supply. In most cases, this redundancy can be provided with separate connections to the water main that is provided by the supporting community.

SUMMARY

Physical security must be considered to provide a safe working environment for the people who visit and work in a facility. Although physical access controls must be employed for safety reasons, they also should prevent unauthorized access to critical computing resources. Many tools are available to provide physical security, and they continue to be enhanced with current technology. Backbone networks and central control computers can support the protection of geographically separated facilities and operations. IP-supported systems can support the collection of large amounts of data from various sensors and control mechanisms and provide enhanced physical security while keeping manpower at a minimum.

The information security practitioner must become aware of existing physical security issues and be involved. If a separate department provides physical security, coordination with that department becomes important to a total security approach. If information security organizations are assigned to provide physical security, they must become aware of the tools that are available and determine where to employ them.

ABOUT THE AUTHOR

Allen Brusewitz, CISSP, CBCP, has more than 30 years of experience in computing in various capacities, including system development, EDP auditing, computer operations, and information security. He has continued his professional career leading consulting teams in cyber-security services with an emphasis on E-commerce security. He also participates in business continuity planning projects and is charged with developing that practice with his current company for delivery to commercial organizations.



Chapter 52

Closed-Circuit Television and Video Surveillance

David Litzau, CISSP

In June of 1925, Charles Francis Jenkins successfully transmitted a series of motion pictures of a small windmill to a receiving facility over five miles away. The image included 48 lines of resolution and lasted ten minutes. This demonstration would move the television from an engineer's lark to reality. By 1935, Broadcast magazine listed 27 different television broadcast facilities across the nation, some with as many as 45 hours of broadcast a week. Although the television set was still a toy for the prosperous, the number of broadcast facilities began to multiply rapidly.

On August 10, 1948, the American Broadcasting Company (ABC) debuted the television show Candid Camera. The premise of the show was to observe, by hidden camera, the behavior of people in awkward circumstances, much to the amusement of the viewing audience. This surreptitious observation of human behavior did not go unnoticed by psychologists and security experts of the time. Psychologists recognized the hidden camera as a way to study human behavior, and for security experts it became a tool of observation. Of particular note to both was the profound effect that the presence of a camera had on people's behavior once they became aware that they were being observed.

Security experts would have to wait for further advances before the emerging technology could be put to use. Television was based on vacuum tube technology and the use of extensive broadcast facilities. It would be the space race of the late 1950s and 1960s that would bring the television and its cameras into the realm of security. Two such advances were the mass production of transistors and the arrival of another new technology known as videotape.


The videotape machine meant that the images no longer had to be broadcast; the images could be collected through one or more video cameras and the data transmitted via a closed circuit of wiring to be viewed on a video monitor or recorded on tape. This technology became known as closed-circuit television, or CCTV. In the early 1960s, CCTV would be embraced by the Department of Defense as an aid to perimeter security. In the private sector, security experts for merchants were quick to see the value of such technology as an aid in the prevention of theft by customers and employees. Today, previously unimagined advances in camera and recording technology have brought CCTV into the home and workplace in miniature form.

WHY CCTV?

Information security is a multifaceted process, and the goal is to maintain security of the data processing facility and the assets within. Typically, those assets can be categorized as hardware, software, data, and people, along with the policies and procedures that govern the behavior of those people. With the possible exception of software, CCTV has the ability to provide defense of these assets on several fronts.

To Deter

The presence of cameras both internally and externally has a controlling effect on those who step into the field of view. In much the same way that a small padlock on a storage shed will keep neighbors from helping themselves to garden tools when the owner is not at home, the camera's lens tends to keep personnel from behaving outside of right and proper conduct. In the case of the storage shed, the lock sends the message that the contents are for the use of those with the key to access it, but it would offer little resistance to a determined thief. Likewise, the CCTV camera sends a similar message and will deter an otherwise honest employee from stepping out of line, but it will not stop someone determined to steal valuable assets. It becomes a conscious act to violate policies and procedures because the act itself will likely be observed and recorded. With cameras at the perimeter, those looking for easy targets will likely move on, just as employees within the facility will tend to conduct themselves in a manner that complies with corporate policies and procedures. With cameras trained on data storage devices, it becomes difficult to physically access the device unobserved, thereby deterring the theft of the data contained within. The unauthorized installation or removal of hardware can be greatly deterred by placing cameras in a manner that permits the observation of portals such as windows or doors. Overall, crime rates in the presence of CCTV cameras are dramatically reduced.


To Detect

Of particular value to the security professional is the ability of a CCTV system to provide detection. The eyes of a security guard can only observe a single location at a time, but CCTV systems can be configured in such a manner that a single pair of eyes can observe a bank of monitors. Further, each monitor can display the output of multiple cameras. The net effect is that the guard can observe dozens of locations from a single observation point. During periods of little or no traffic, a person walking into the view of a camera is easily detected. Placing the camera input from high-security and high-traffic locations in the center of the displays can further enhance the coverage, because an intruder entering the field of view on a surrounding monitor will be easily detected even though the focus of attention is at the center of the monitors. Technology is in use that will evaluate the image field; if the content of the image changes, an alarm can be sounded or the mode of recording changed to capture more detail of the image. Further, with the aid of recording equipment, videotape recordings can be reviewed in fast-forward or rewind to quickly identify the presence of intruders or other suspicious activities.

To Enforce

The human eyewitness has been challenged in the court of law more and more often in recent history. Lack of sleep, the age of the witness, emotional state, and similar factors can all come to bear on the validity of an eyewitness statement. The camera, on the other hand, does not get tired; video recording equipment is not susceptible to such human frailties. A video surveillance recording can vastly alter the outcome of legal proceedings and has an excellent track record in swaying juries as to the guilt or innocence of the accused. Often, disciplinary action is not even required once the alleged act is viewed on video by the accused, thereby avoiding the expense of a trial or arbitration. If an act that requires legal or disciplinary action is caught on tape, the tape ensures that there is additional evidence to support the allegations. With the combined abilities of deterrence, detection, and enforcement of policies and procedures over several categories of assets, CCTV becomes a very effective aid in the process of information security, clearly an aid that should be carefully considered when selecting countermeasures and defenses.

CCTV COMPONENTS

One of the many appealing aspects of CCTV is the relative simplicity of its component parts. As in any system, the configuration can only be as good as the weakest link. Inexpensive speakers on the highest-quality sound system will still produce inexpensive-sounding results.


Likewise, a poor-quality component in a CCTV system produces poor results. There are basically four groups of components:

1. Cameras
2. Transmission media
3. Monitors
4. Peripherals

The Camera

The job of the camera is to collect images of the desired viewing area; it is by far the component that requires the most consideration when configuring a CCTV system. In a typical installation, the camera relies on visible light to illuminate the target; the reflected light is then collected through the camera lens and converted into an electronic signal that is transmitted back through the system to be processed. The camera body contains the components to convert visible light to electronic signals. There are still good-quality, vacuum-tube cameras that produce an analog signal, but most cameras in use today are solid-state devices producing digital signal output.

The primary considerations when selecting a camera are the security objectives. The sensitivity of a camera refers to the number of receptors on the imaging surface and will determine the resolution of the output; the greater the number of receptors, the greater the resolution. If there is a need to identify humans with a high level of certainty, one should consider a color camera with a high level of sensitivity. On the other hand, if the purpose of the system is primarily to observe traffic, a simple black-and-white camera with a lower sensitivity will suffice. The size of cameras can range from the outwardly overt size of a large shoebox to the very covert size of a matchbox. Although the miniaturized cameras are capable of producing a respectable enough image to detect the presence of a human, most do not collect enough reflected light to produce an image quality that could be used for positive identification. This is an area of the technology that is seeing rapid improvement.

There are so many considerations in the placement of cameras that an expert should be consulted for the task. Some of those considerations include whether the targeted coverage is internal or external to the facility. External cameras need to be positioned so that all approaches to the facility can be observed, thereby eliminating blind spots. The camera should be placed high enough off the ground that it cannot be easily disabled, but not so high that the images from the scene show only the tops of people's heads and the camera is difficult to service. The camera mount can have motor drives that permit aiming left and right (panning) or up and down (tilting), commonly referred to as a pan/tilt drive.


Additionally, if the camera is on the exterior of the facility, it may require the use of a sunshade to prevent the internal temperature from reaching damaging levels. A mount that can provide heating to permit de-icing should be considered in regions of extreme cold so that snow and ice will not damage the pan/tilt drive. Internal cameras require an equal amount of consideration; again, the area to be covered and the ambient light will play a large part in the placement. Cameras may be overt or covert and will need to be positioned so that people coming or going from highly valued assets or portals can be observed.

Because the quality of the image relies in large part on the reflected light, the lens on the camera must be carefully selected to make good use of available light. The cameras should be placed in a manner that allows the evening lighting to work with the camera to provide front lighting (lights that shine in the same direction that the camera is aimed) to prevent shadowing of approaching people or objects. Constant adjustments must be made to lenses to accommodate the effects of a constantly changing angle of sunlight, changing atmospheric conditions, highly reflective rain or snowfall, and the transition to artificial lighting in the evening; all affect ambient light. This is best accomplished with the use of an automatic iris. The iris in a camera, just as in the human eye, opens and closes to adjust the amount of light that reaches the imaging surface. Direct exposure to an intense light source will result in blossoming of the image — where the image becomes all white and washes out the picture to the point where nothing is seen — and can also result in serious damage to the imaging surface within the camera.

The single most important element of the camera is the lens. There are basically four types of lenses: standard, wide-angle, telephoto, and zoom. When compared to human eyesight, the standard lens is the rough equivalent; the wide-angle takes in a scene wider than what humans can see; and the telephoto is magnified and roughly equivalent to looking through a telescope. The first three are fixed-focal-length lenses; the zoom lens combines their characteristics in a single adjustable lens.

The Transmission Media

Transmission media refer to how the video signal from the cameras will be transported to the multiplexer or monitor. This is typically, though not always, some type of wiring.

Coaxial Cable. By far the most commonly used media are coaxial cables. There are varying grades of coaxial cable, and the quality of the cable will have a profound effect on the quality of the video. Coaxial cable consists of a single center conductor surrounded by a dielectric insulator. The insulation is then encased in a foil wrap and further surrounded by a wire mesh.


A final coating of weather-resistant insulation is placed around the entire bundle to produce a durable cable that provides strong protection for the signal as it transits the center conductor. The center conductor can be a single solid wire or a single conductor made up of multiple strands of wire. Engineers agree that the best conductor for a video signal is pure copper. The amount of shielding will determine the level of protection for the center conductor. The shielding is grounded at both ends of the connection and thereby shunts extraneous noise from electromagnetic radiation to ground.

Although 100 percent pure copper is an excellent conductor of the electronic signal, there is still a level of internal resistance that will eventually degrade the signal's strength. To overcome the loss of signal strength, the diameter of the center conductor and the amount of shielding can be increased to obtain greater transmission lengths before an in-line repeater/amplifier is required. This aspect of the cable is expressed in an industry rating: the farther the signal must travel, the higher the grade of coaxial cable that should be used, or noticeable signal degradation will occur. Some examples are:

• RG59/U, rated to carry the signal up to distances of 1000 feet
• RG6/U, rated to carry a signal up to 1500 feet
• RG11/U, rated to carry a signal up to 3000 feet

One of the benefits of coaxial cable is that it is easy to troubleshoot the media should there be a failure. A device that sends a square-wave signal down the wire (a time domain reflectometer) can pinpoint the location of excessive resistance or a broken wire. Avoid using a solid center conductor on cameras mounted on a pan/tilt drive, because the motion of the camera can fatigue the wire and cause a failure; multi-strand wire should be used instead.
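Read as a selection rule, the ratings above map a required run length to a minimum cable grade. The following is a minimal illustrative sketch in Python; the grade names and distance thresholds are simply the figures quoted in the list above, and the function name and fallback behavior are assumptions for illustration, not an industry-standard calculation.

# Minimal sketch: choose a coaxial cable grade for a camera run based on
# the approximate distance ratings quoted above (RG59/U ~1000 ft,
# RG6/U ~1500 ft, RG11/U ~3000 ft). Thresholds are illustrative only.

COAX_RATINGS_FT = [
    ("RG59/U", 1000),
    ("RG6/U", 1500),
    ("RG11/U", 3000),
]

def select_coax(run_length_ft: float) -> str:
    """Return the lightest coax grade rated for the run, or flag the need
    for an in-line repeater/amplifier or fiber when the run is too long."""
    for grade, max_ft in COAX_RATINGS_FT:
        if run_length_ft <= max_ft:
            return grade
    return "run exceeds 3000 ft; consider a repeater/amplifier or fiber"

if __name__ == "__main__":
    for run in (250, 1200, 2800, 4500):
        print(f"{run:>5} ft -> {select_coax(run)}")

In practice, the manufacturer's attenuation specifications for the specific cable, connectors, and camera signal should govern the final choice.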


Fiber-Optic Cable. Fiber-optic cable is designed to transmit data in the form of light pulses. It typically consists of a single strand of highly purified silica (glass), smaller than a human hair, surrounded by another jacket of lower-grade glass. This bundle is then clad in a protective layer to prevent physical damage to the core. The properties of the fiber-optic core are such that the outer surface of the center fiber has a mirror effect, reflecting the light back into itself. This means that the cable can be curved with almost no effect on the light pulses within. This effect, along with the fact that the frequency spectrum spanning the range of light is quite broad, produces an outstanding medium for the transfer of a signal. There is very little resistance or degradation of the signal as it traverses the cable, and the end result is much greater transmission lengths and more available communication channels when compared to a metallic medium.

The reason that fiber optics has not entirely replaced its coaxial counterpart is that the cost is substantially higher. Because the fiber does not conduct any electrical energy, the output signal must be converted to light pulses. This conversion is known as modulation and is accomplished using a laser. Once converted to light pulses, the signal is transferred into the fiber-optic cable. Because the fiber of the cable is so small, establishing good connections and splices is critical. Any misalignment or damage to the fiber will result in reflected energy or complete termination of the signal. Therefore, a skilled technician with precision splicing and connection tools is required. This labor, along with the modulators/demodulators and the price of the medium, adds substantial cost to the typical CCTV installation.

For the additional cost, some of the benefits include generous gains in bandwidth. This means that more signals carrying a greater amount of data can be accommodated: audio from microphones, adjustment signals to control zoom lenses and automatic irises, and additional cameras can all be added. The medium is smaller and lighter and can carry a signal measured in miles instead of feet. Because there is no electromagnetic energy to create compromising emanations, and because a splice to tap the connection usually creates an easily detected interruption of the signal, there is the additional benefit of a high level of assurance of data integrity and security. In an environment of remote locations or a site containing highly valued assets, these benefits easily offset the additional cost of fiber-optic transmission.

Wireless Transmission. The option of not using wiring at all is also available for CCTV. The output signals from cameras can be converted to radio-frequency, light-wave, or microwave signals for transmission. This may be the only viable option for some remote sites and can range from neighboring buildings using infrared transceivers to a satellite link for centralized monitoring of remote sites around the globe. Infrared technology must be configured in a line-of-sight manner and has a limited range. Radio-frequency and microwave links can achieve substantially greater distances but will require the use of repeaters and substations to traverse distances measured in miles. The more obstacles that must be negotiated (buildings, mountains, etc.), the greater the degradation of the signal.

Two of the biggest drawbacks of utilizing wireless are that the signal is vulnerable to atmospheric conditions and, as in any wireless transmission, easily intercepted and inherently insecure. Everything from the local weather to solar activity can affect the quality of the signal. From a security standpoint, the transmission is vulnerable to interception, which could reveal to the viewer the activity within a facility and compromise other internal defenses. Further, the signal could be jammed or modified to render the system useless or to provide false images.


If wireless transmission is to be utilized, some type of signal scrambling or channel-hopping technology should be employed to enhance the confidentiality and integrity of the signal.

Some of the more recent trends in transmission media have been the use of existing telephone lines and computer networking media. The dial-up modem has been implemented in some installations with success, but the limited amount of data that can be transmitted results in slow image refreshing, and control commands to the camera (focus, pan, tilt, etc.) are slow to respond. The response times and refresh rates can be substantially increased through the use of ISDN phone line technology. Recent advances in data compression, and protocols that allow video over IP, have moved the transmission possibilities into existing computer network cabling.

The Monitor

The monitor is used to convert the signal from the cameras into a visible image. The monitor can be used for real-time observation or for the playback of previously recorded data. Color or black-and-white video monitors are available, but they differ somewhat from a standard television set. A television set comes with the electronics to convert signals broadcast on the UHF and VHF frequency spectrums and demodulate those signals into a visible display of the images. The CCTV monitor does not come with such electronics; it is designed to process a standard 75-ohm video signal into visible images. This does not mean that a television set cannot be used as a video monitor, but proper attenuation equipment will be needed to convert the video into a signal that the television can process.

The lines of resolution determine detail and the overall sharpness of the image. The key to reproducing a quality image is matching the resolution of the monitor to that of the camera as closely as possible; but it is generally accepted that, if a close match cannot be made, it is better to have a monitor with a greater resolution. The reason is that a 900-line monitor displaying an image of 300 lines of resolution will provide three available lines for each line of image. The image will be large and appear less crisp; but if at a later date the monitor is used in a split-screen fashion to display the output from several cameras at the same time, there will be enough resolution for each image. On the other hand, if the resolution of the monitor is lower than that of the camera, detail will be lost because the entire image cannot be displayed.
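The resolution-matching argument above is simple arithmetic, and it extends naturally to split-screen use. The sketch below is illustrative only; the square-grid layout and the function names are assumptions, not a description of any particular monitor or multiplexer.

# Illustrative arithmetic: check whether a monitor can still show full
# camera detail when its screen is split into a grid of equal tiles.

import math

def lines_per_tile(monitor_lines: int, cameras: int) -> float:
    """Vertical monitor lines available to each camera image when the
    screen is split into the smallest square grid that fits all cameras."""
    rows = math.ceil(math.sqrt(cameras))   # e.g., 4 cameras -> 2x2 grid
    return monitor_lines / rows

def detail_preserved(monitor_lines: int, camera_lines: int, cameras: int = 1) -> bool:
    """True if every camera line maps to at least one monitor line."""
    return lines_per_tile(monitor_lines, cameras) >= camera_lines

if __name__ == "__main__":
    # 900-line monitor, 300-line camera: three monitor lines per image line.
    print(900 / 300)
    print(detail_preserved(900, 300, cameras=1))   # True
    print(detail_preserved(900, 300, cameras=4))   # True: 450 lines per tile
    print(detail_preserved(600, 480, cameras=4))   # False: 300 lines per tile

The same check explains the chapter's advice to err on the side of a higher-resolution monitor: spare monitor lines are consumed harmlessly by a single image but become essential once the screen is divided.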


The size of the monitor to be used is based on several factors. The more images to be viewed, the greater the number of monitors. A single monitor is capable of displaying the output from several cameras on the same screen (see multiplexers), but this still requires a comfortable distance between the viewer and the monitor. Although not exactly scientific, a general rule of thumb is that the viewer's fist at the end of an extended arm should just cover the image. This places the viewer farther away from the monitor for a single image and closer if several images are displayed.

The Peripherals

A multiplexer is a hardware device that is capable of receiving the output signals from multiple cameras and processing those signals in several ways. The most common use is to combine the inputs from selected cameras into a single output such that the group of inputs is displayed on a single monitor. A multiplexer is capable of accepting from four to 32 separate signals and provides video enhancement, data compression, and storage or output to a storage device. Additional features available from a multiplexer include alarm modes that detect a change in an image scene and signal motion, and the ability to convert analog video signals into digital format. Some multiplexers have video storage capabilities, but most provide output that is sent to a separate storage device.

A CCTV system can be as simple as a camera, a transmission medium, and a monitor. This may be fine if observation is the goal of the system; but if the system is intended as part of a security program, storage of captured images should be a serious consideration. The output from cameras can be stored and retrieved to provide nearly irrefutable evidence for legal proceedings. There are several considerations in making a video storage decision. Foremost is the desired quality of the retrieved video: the higher the quality of the data, the greater the quantity of storage space required. The primary difference among storage devices is whether the data will be stored in analog or digital format.

The options for analog primarily consist of standard VHS tape or higher-quality one-inch tape. The measure of quantity for analog is time, where the speed of recording and the tape length determine the amount of time that can be recorded. To increase the amount of time that a recording spans, one of the best features available in tape is time-lapse recording. Time-lapse videocassette recorders (VCRs) reduce the number of frames per second (fps) that are recorded. This equates to greater spans of time on less tape, but the images will appear as a series of sequential still images when played back. There is the potential for a critical event to take place between frames and thereby lose its evidentiary value. This risk can be offset if the VCR works in conjunction with a multiplexer that incorporates motion detection; the fps can then be increased to record more data from the channel with the activity. Another consideration with analog storage media is their limited shelf life.


Usually, if there is no event of significance, tapes can simply be reused by recording over the existing data; but if there is a need for long-term storage, the quality of the video will degrade with time.

Another option for the storage of data is digital format. There are many advantages to utilizing digital storage media. The beauty of digital is that the signal is converted to binary ones and zeroes, and once converted the data is ageless. The data can then be stored on any data processing hardware, including hard disk drives, tapes, DVDs, magneto-optical disks, etc. By far the best-suited hardware is the digital video recorder (DVR). The capabilities of DVRs may include triplex functions (simultaneous video observation, playback, and recording), multiple camera inputs, multi-screen display outputs, unlimited recording time by adding multiple hard disk drives, hot-swappable RAID, multiple trigger events for alarms, and tape archiving of trigger events. Because the data can be indexed on events such as times, dates, and alarms, the video can be retrieved for playback almost instantly.

Whether analog or digital, the sensitivity of the cameras used, the number of frames recorded each second, whether the signal is black and white or color, and the length of time the video must be retained will all affect the amount of storage space required.
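Those factors can be turned into a rough capacity estimate. The sketch below is a back-of-the-envelope illustration only; the frame dimensions, color depth, and compression ratio are invented assumptions, not figures from the chapter or from any particular DVR.

# Back-of-the-envelope sketch of digital storage sizing. The chapter names
# the factors (image size, frames per second, color depth, retention time);
# the values used here are assumptions for illustration.

def storage_gb(width_px: int, height_px: int, bytes_per_pixel: int,
               fps: float, days: float, compression_ratio: float = 20.0) -> float:
    """Approximate storage in gigabytes for one continuously recorded camera."""
    raw_bytes_per_sec = width_px * height_px * bytes_per_pixel * fps
    total_bytes = raw_bytes_per_sec * days * 24 * 3600 / compression_ratio
    return total_bytes / 1e9

if __name__ == "__main__":
    # Color camera (3 bytes per pixel) at a time-lapse rate of 2 fps for 30 days.
    print(round(storage_gb(640, 480, 3, fps=2.0, days=30), 1))
    # Same camera at full motion (30 fps) needs roughly 15 times the space.
    print(round(storage_gb(640, 480, 3, fps=30.0, days=30), 1))

Even a crude estimate of this kind makes the trade-offs visible: halving the frame rate halves the storage, and recording in black and white rather than color cuts it to roughly one third for a given retention period.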


PUTTING IT ALL TOGETHER

By understanding the stages of implementation and how hardware components are integrated, the security professional will have a much higher likelihood of successfully integrating a CCTV system. There is no typical installation, and every site will have its unique characteristics to accommodate; but there is a typical progression of events from design to completion.

• Define the purpose. If observation of an entrance is the only goal, there will be little planning to consider. Will the quality of images be sufficient to positively identify an individual? Will there be a requirement to store image data, and what will be the retention period? Should the presence of a CCTV system be obvious with the presence of cameras, or will they be hidden? Ultimately, the question becomes: What is the purpose of implementing the system, and what is to be gained?
• Define the surveyed area. Complete coverage for the exterior and interior of a large facility or multiple facilities will require a substantial budget. If there are financial restraints, then decisions will have to be made concerning what areas will be observed. Some of the factors that will influence that decision may be the value of the assets under scrutiny and the security requirements in a particular location.
• Select appropriate cameras. At this point in the planning, a professional consultation should be considered. Internal surveillance is comparatively simpler than external because the light levels are consistent; but external surveillance requires an in-depth understanding of how light, lenses, weather, and other considerations will affect the quality of the images. Placement of cameras can make a substantial difference in the efficiency of coverage and the effectiveness of the images that will be captured.
• Selection and placement of monitors. Considerations that need to be addressed when planning the purchase of monitors include the question of how many camera inputs will have to be observed at the same time. How many people will be doing the observation simultaneously? How much room space is available in the monitoring room? Is there sufficient air conditioning to accommodate the heat generated by large banks of monitors?
• Installation of transmission media. Once the camera locations and the monitoring location have been determined, the installation of the transmission media can then begin. A decision should have already been made on the type of media that will be utilized and sufficient quantities ordered. Technicians skilled in installation, splicing, and testing will be required.
• Peripherals. If the security requirements are such that image data must be recorded and retained, then storage equipment will have to be installed. Placement of multiplexers, switches, uninterruptible power supplies, and other supporting equipment will have to be planned in advance. Personnel access controls are critical to areas containing such equipment.

SUMMARY

CCTV systems are by no means a guarantee of security, but the controlling effect they have on human behavior cannot be dismissed easily. The mere presence of a camera, regardless of whether it works, has proven to be invaluable in the security industry as a deterrent. Defense-in-depth is the mantra of the information security industry. It is the convergence of many layers of protection that will ultimately provide the highest level of assurance, and the physical security of a data processing facility is often the weakest layer. There is little else that can compare to a properly implemented CCTV system to provide security of the facility, data, and people, as well as enforcement of policies and procedures.

Works Cited
1. Kruegle, Herman, CCTV Surveillance: Video Technologies and Practices, 3rd ed., Butterworth-Heinemann, 1999.
2. Axiom Engineering, CCTV Video Surveillance Systems, http://www.axiomca.com/services/cctv.htm.
3. Kriton Electronics, Design Basics, http://shop.store.yahoo.com/kriton/secsysselrul.html.
4. Video Surveillance Cameras and CCTV Monitors, http://www.pelikanind.com/.
5. CCTV — Video Surveillance Cameras Monitors Switching Units, http://www.infosyssec.org/infosyssec/cctv.htm.


ABOUT THE AUTHOR

David A. Litzau, CISSP, with a foundation in electronics and audio/visual technology, moved into the computer sciences in 1994. David has been teaching information security in San Diego for the past six years.


Chapter 53

Physical Security: The Threat after September 11

Jaymes Williams

The day that changed everything began for me at 5:50 a.m. I woke up and turned on the television to watch some news. This was early Tuesday morning, September 11, 2001. My local news station had just interrupted its regular broadcast and switched over to CNN, so right away I knew something important had happened. I learned an airliner had crashed into one of the towers of the World Trade Center in New York. In disbelief, I made my way to the kitchen and poured myself a cup of coffee. I returned to the television and listened to journalists and airline experts debate the likely cause of this event. I thought to myself, "There isn't a cloud in the sky. How could an aircraft accidentally hit such a large structure?" Knowing, but not wanting to accept the answer, I listened while hoping the television would give me a better one.

While waiting for the answer that never came, I noticed an aircraft come from the right side of the screen. It appeared to be going behind the towers of the Trade Center, or perhaps I was only hoping it would. This was one of those instances where time appeared to dramatically slow down. In the split second it took to realize the plane should have already come out from behind the towers, the fireball burst out the side of the tower instead. It was now undeniable. This was no accident.

Later, after getting another cup of coffee, I returned to the television to see only smoke, the kind of smoke you only see when a building is imploded to make way for new construction. To my horror, I knew a tower had collapsed. Then, while the journalists were recovering from the shock and trying to maintain their on-air composure, they showed the top of the remaining tower. For some reason, it appeared that the camera had started to pan up. I started to feel a bit of vertigo. Then, once again, a horrible realization struck. The camera was not going up; the building was going down.


Within the span of minutes, the World Trade Center was no more; and Manhattan was totally obscured by smoke. I was in total disbelief. This had to be a movie; but it wasn't. The mind's self-defenses take over when things occur that it cannot fathom, and I felt completely numb. I had witnessed the deaths of untold thousands of people on live TV. Although I live 3000 miles away, it might as well have happened down the street. The impact was the same. Then the news of the crash at the Pentagon came, followed by the crash of the aircraft in Pennsylvania. I tried to compose myself to go to work, although work seemed quite unimportant at the moment. Somehow, I put myself together and made my way out the door. On the way to work, I thought to myself that this must be the Pearl Harbor of my generation. And, I realized, my country was probably at war — but with whom?

The preceding is my recollection of the morning of September 11. This day has since become one of those days in history where we all remember where we were and what we were doing. While we all have our own individual experiences from that horrible day, some people more affected than others, these individual experiences all form a collective experience that surprised and shocked us all. Security practitioners around the world, and especially in the United States, have to ask themselves some questions. Can this happen here? Is my organization a potential target? Now that a War on Terrorism has begun as a result of the September 11 attacks, the answer to both of these questions, unfortunately, is yes. However, there are some things that can be done to lessen the risk. This chapter examines why the risk of terrorism has increased, what types of organizations or facilities are at higher risk, and what can be done to lessen that risk.

WHY IS AMERICA A TARGET?

Just because you're not paranoid doesn't mean they're not out to get you!
— From the U.S. Air Force Special Operations Creed

There are many reasons terrorist groups target America. One reason is ideological differences. There are nations or cultures that do not appreciate the freedom and tolerance espoused by Americans. America is inarguably the world's leading industrial power and capitalist state. There are people in the world who may view America as a robber baron nation and hate Americans because of our perceived wealth. Another reason is religious differences. There are religiously motivated groups that may despise America and the West because of perceived nonconformance with their religious values and faith. A further reason is the perception that the U.S. Government has too much influence over the actions of other governments. Terrorists may think that, through acts of terror, the U.S. Government will negotiate and ultimately comply with their demands.


Exhibit 53-1. Terrorist objectives and tactics.3

Examples of Terrorist Objectives
Attract publicity for the group's cause
Demonstrate the group's power
Show the existing government's lack of power
Extract revenge
Obtain logistic support
Cause a government to overreact

Common Terrorist Tactics
Assassination
Arson
Bombing
Hostage taking
Kidnapping
Hijacking or skyjacking
Seizure
Raids or attacks on facilities
Sabotage
Hoaxes
Use of special weapons
Environmental destruction
Use of technology

However, our government has repeatedly stated it will not negotiate with terrorists. A final reason is that Americans are perceived as easy targets. The "open society" in America and many Western countries makes for easy movement and activities by terrorists. Whether working for charitable organizations or businesses, serving in governmental capacities, or traveling as tourists, Americans are all over the world. This makes targeting Americans quite easy for even relatively poorly trained terrorist groups. U.S. military forces stationed around the world are seen as visible symbols of U.S. power and, as such, are also appealing targets to terrorists.

WHY BE CONCERNED?

Terrorism can be defined as the calculated use of violence, or threat of violence, to inculcate fear, intended to coerce or intimidate governments or societies in the pursuit of goals that are generally political, religious, or ideological.3 Some examples of terrorist objectives and tactics can be seen in Exhibit 53-1. The increased threat of terrorism and cyber-terrorism is a new and important consideration for information security practitioners.


Previously, physical security threats included such things as unauthorized access, crime, environmental conditions, inclement weather, and earthquakes. The events of September 11 have shown us exactly how vulnerable we are. One of the most important lessons we security practitioners can take from that day is to recognize the need to reevaluate our physical security practices to include terrorism. Adding terrorism to the mix necessitates some fundamental changes in the way we view traditional physical security; these changes need to include protective measures against terrorism. Depending on the type of organization, it is quite possible that terrorists may target it. Whether they target facilities or offices for physical destruction or select an organization for a cyber-strike, prudent information security practitioners will assume they have been targeted and plan accordingly.

IS YOUR ORGANIZATION A POTENTIAL TARGET?

Many organizations may be potential targets of terrorists and have no idea they are even vulnerable. Government agencies, including federal, state, and local, and infrastructure companies may be primary targets. Other vulnerable organizations may be large multinational companies that market American products around the world and organizations located in well-known skyscrapers. Specific examples of these types of potential targets will not be named, to avoid the possibility of placing them at higher risk. See Exhibit 53-2 for different types of potential targets.

GOVERNMENT AGENCIES

There are many terrorists who hate the U.S. Government and those of many Western countries. In the minds of terrorists and their sympathizers, governments create the policies and represent the values with which they vehemently disagree. It does not take a rocket scientist, or an information security practitioner for that matter, to realize that agencies of the U.S. Government are prime targets for terrorists. This, of course, also includes the U.S. military. Other Western countries, especially those supporting the United States in the War on Terrorism, may also find themselves targets of terrorists. State and local governments may also be at risk.

• Infrastructure companies. Companies that comprise the infrastructure also face an increased risk of terrorism. Not only may terrorists want to hurt the U.S. and Western governments, but they may also want to disrupt normal life and the economies of the Western world. Disrupting the flow of energy, travel, finance, and information is one such way to accomplish this. The medical sector is also included here. One has to now consider the previously unthinkable, look beyond our usual mindsets, and recognize that, because medical facilities have not previously been targeted, it is conceivable they could be targeted in the future.


Exhibit 53-2. Potential terrorist targets.

Government Agencies
U.S. federal agencies
U.S. military facilities
State government
County government
Local government

Infrastructure
Energy
Transportation
Financial
Water
Internet
Medical

Location Based
Tall office buildings
National landmarks
Popular tourist destinations
Large events

Associated with America
Large corporations synonymous with the Western world
American or U.S. in the name
Companies that produce famous American brand products

• Location-based targets. There are also those targets that, by their location or function, are at risk. Just as the towers of the World Trade Center represented the power of the American economy to the September 11 terrorists, other landmarks can be interpreted as representing things uniquely American to those with hostile intent. Such landmarks can include skyscrapers in major cities or any of the various landmarks that represent American or Western interests. Popular tourist destinations or events with large numbers of people in attendance can also be at risk, either because they are uniquely American/Western or simply because they are heavily populated.
• Things that mean America. There is another category to consider. This category has some overlap with the above categories but still deserves mention. Large corporations that represent America or the West to the rest of the world can also be targeted. This includes companies whose products are sold around the world and represent America to the people of the world.

If an organization falls into one of the above categories, it may face a greater risk from terrorism than previously thought.


If an organization does not fit one of the above categories, information security practitioners are still well-advised to take as many antiterrorism precautions as feasible.

PARADIGM SHIFT: DETERRENCE TO PREVENTION

Business more than any other occupation is a continual dealing with the future; it is a continual calculation, an instinctive exercise in foresight.
— Henry R. Luce

The operating paradigm of physical security has been deterrence. The idea of a perpetrator not wanting to be caught, arrested, or even killed has become so ingrained in the way we think that we take it for granted. As we probably all know by now, there are people motivated by fervent religious beliefs or political causes who do not share this perspective; they may be willing, or even eager, to die to commit an act they believe will further their cause.

Most security protections considered industry standard today are based on the deterrence paradigm. Security devices such as cameras, alarms, x-ray, or infrared detection are all used with the intent to deter a perpetrator who does not want to be caught. While deterrence-based measures will provide adequate security against the overwhelming majority of physical security threats, these measures may be largely ineffective against someone who plans to die committing an act of terrorism. On the morning of September 11, 2001, we learned a painful lesson: deterrence does not deter those who are willing to die to perpetrate whatever act they have in mind. Unfortunately, this makes physical security much more difficult and expensive. Information security practitioners need to realize that commonly accepted standards such as security cameras, cipher-lock doors, and ID badges may only slow down a potential terrorist. Instead of working only to deter intruders, we now have to consider the previously unconsidered — the suicidal terrorist. This means considering what measures it will take to stop someone who is willing to die to commit a terrorist act.

The airline industry appears to have learned that much more stringent security measures are required to prevent a recurrence of what happened on September 11. Previously, an airline's worst nightmare was either the bombing of an aircraft or a hijacking followed by tense negotiations to release hostage passengers. No one had considered the threat of an airliner being used as a weapon of mass destruction. Anyone who has flown since then is familiar with the additional delays, searches, and ID checks. They are inconvenient and slow down the traveler; however, this is a small price to pay for better security.


Although there is still much more to be done, this serves as an example of using the prevention paradigm. The airlines have taken many security measures to prevent another such occurrence. Unfortunately, as with information security, there is no such thing as absolute physical security. There is always the possibility that something not previously considered will occur. Information security practitioners will also likely have to work within corporate/governmental budget constraints, risk assessments, etc. that may limit their ability to implement the needed physical security changes.

REDUCING THE RISK OF TERRORISM

The determination of these terrorists will not deter the determination of the American people. We are survivors and freedom is a survivor.
— U.S. Attorney General John Ashcroft, Press conference on September 11, 2001

Now that we have a better understanding of why we face a greater risk of terrorism and who may be a target, the issue becomes how to better protect our organizations and our fellow employees. There are many methods to reduce the risk of terrorism. These methods include reviewing and increasing the physical security of an organization using the previously discussed prevention paradigm; controlling sensitive information through operational security; developing terrorism incident handling procedures; and building security procedures and antiterrorism procedures for employees. Several of these methods rely on employee training and periodic drills to be successful.

Physical Security Assessments

The first step in reducing risk is to control the physical environment. In this section, the term standard means industry-standard practices for physical security, and the term enhanced refers to procedures that incorporate the prevention paradigm.

Verify Standard Physical Security Practices Are in Place. Conduct a standard physical security assessment and implement changes as required. It is important to have physical security practices at least at current standards. Doing this will also minimize the risk from most standard physical security threats. As the trend toward holding organizations liable continues to emerge in information security, it is also likely to occur with physical security in the foreseeable future.

Conduct an Enhanced Physical Security Assessment. Once the standard physical security is in place, conduct another assessment that is much more stringent. This assessment should include enhanced physical security methods.


Exhibit 53-3. Internet resources.

Professional Organizations
DRI International — http://www.drii.org
International Security Management Association — http://www.ismanet.com
The Terrorism Research Center — http://www.terrorism.com/index.shtml
Infosyssec.com's physical security resource listing — http://www.infosyssec.com/infosyssec/physfac1.htm
Infosyssec.com's Business Continuity Planning resource listing — http://www.infosyssec.net/infosyssec/buscon1.htm

Government Agencies
National Infrastructure Protection Center (NIPC) — http://www.nipc.gov
Federal Bureau of Investigation (FBI) — http://www.fbi.gov
Critical Infrastructure Assurance Office (CIAO) — http://www.ciao.gov
Office of Homeland Security — http://www.whitehouse.gov/homeland/
FBI's "War on Terrorism" page — http://www.fbi.gov/terrorinfo/terrorism.htm
Canadian Security Intelligence Service (CSIS) Fighting Terrorism page — http://canada.gc.ca/wire/2001/09/110901-US_e.html
Bureau of Alcohol, Tobacco & Firearms Bomb Threat Checklist — http://www.atf.treas.gov/explarson/information/bombthreat/checklist.htm

Military Agencies
Department of Defense — http://www.defenselink.mil/
Department of Defense's "Defend America" site — http://www.defendamerica.mil/
U.S. Army Physical Security Field Manual — http://www.adtdl.army.mil/cgi-bin/atdl.dll/fm/3-19.30/toc.htm

Unfortunately, there is not yet a set of industry standards to protect against the enhanced threat. Many excellent resources are available from the U.S. Government; although they are designed for protecting military or other government facilities, many of these standards can also be successfully implemented in the private sector. At this point, information security practitioners are essentially left to their own initiative to implement standards. Perhaps, in the near future, a set of standards will be developed that includes the enhanced threat. Currently, many of these resources are available on the Internet; however, at the time of this writing, the U.S. Government is becoming more selective about what information it makes available to the public via the Internet for security reasons. It is quite possible that these resources may disappear from the Internet at some point in the near future, so information security practitioners may wish to locate them before they do. A listing of Internet resources can be found in Exhibit 53-3.


Exhibit 53-4. Famous World War II security poster.

Implement Recommended Changes. Again, because there is no uniform set of standards for enhanced physical security for the private sector, we are left to our own devices for enhancing our physical security. Because we are not likely to have unlimited budgets for improving physical security, information security practitioners will have to assess the risk for their organizations, including the potential threat of terrorism, and make recommended changes based on the assessed risk. Ideally, these changes should be implemented in the most expeditious manner possible.
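One common way to make "recommended changes based on the assessed risk" concrete is to rank candidate countermeasures by the annual loss they are expected to avoid versus what they cost. The chapter does not prescribe a particular method; the annualized loss expectancy (ALE) sketch below, including every asset value, occurrence rate, and cost shown, is an illustrative assumption only.

# Hedged sketch of one common way to rank candidate physical security
# upgrades when budgets are limited: annualized loss expectancy (ALE).
# All figures below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Countermeasure:
    name: str
    asset_value: float          # single loss expectancy if the event occurs
    annual_rate_before: float   # estimated occurrences per year, unmitigated
    annual_rate_after: float    # estimated occurrences per year, mitigated
    annual_cost: float          # yearly cost of the countermeasure

    def net_benefit(self) -> float:
        ale_before = self.asset_value * self.annual_rate_before
        ale_after = self.asset_value * self.annual_rate_after
        return (ale_before - ale_after) - self.annual_cost

if __name__ == "__main__":
    candidates = [
        Countermeasure("Vehicle barriers at lobby", 5_000_000, 0.01, 0.002, 25_000),
        Countermeasure("Upgraded CCTV coverage", 500_000, 0.20, 0.05, 40_000),
        Countermeasure("Mail screening room", 1_000_000, 0.02, 0.005, 60_000),
    ]
    for c in sorted(candidates, key=Countermeasure.net_benefit, reverse=True):
        print(f"{c.name:30s} net annual benefit: ${c.net_benefit():>12,.0f}")

A ranking like this is only as good as the estimates fed into it, but it gives management a defensible basis for spending a limited physical security budget on the changes with the largest expected benefit.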

CONTROLLING SENSITIVE INFORMATION THROUGH OPERATIONAL SECURITY (OPSEC)

We have now successfully "circled the wagons" and improved physical access controls to our facilities. The next step is to better control our sensitive information. As illustrated by the famous World War II security poster depicted in Exhibit 53-4, the successful control of information can win or lose wars. The Allied capture of the Enigma encryption device proved a critical blow to the Germans during World War II. The Allies were then able to decipher critical codes, which gave them an insurmountable advantage.


Again, during the Gulf War, the vast technical advantage enjoyed by the Allied Coalition gave them information supremacy that translated into air supremacy. These lessons of history illustrate the importance of keeping sensitive information out of the hands of those who wish to do harm. In the days since September 11, this means keeping sensitive information from all who do not need access. First, we need to define exactly what information is sensitive. Then we need to determine how best to control that sensitive information.

• Defining sensitive information. Sensitive information can be defined as information that, if available to an unauthorized party, can disclose vulnerabilities or can be combined with other information to be used against an organization. For example, seemingly innocuous information on a public Web site can provide a hostile party with enough information to target that organization. Information such as addresses of facilities, maps to facilities, officer and employee names, and names and addresses of customers or clients can all be combined to build a roadmap. This roadmap can tell the potential terrorist not only where the organization is and what it does, but also who is part of the organization and where it is vulnerable.
• Controlling sensitive information. Prudent information security practitioners will first want to control the information source that leaves them the most vulnerable. There are several methods security practitioners can use to maintain control of their sensitive information: removing sensitive information from Web sites and corporate communications; destroying trash containing sensitive information; having a clean desk policy; and limiting contractor/vendor access to sensitive information.
— Remove sensitive information from publicly available Web sites. Removing physical addresses, maps, officer/employee names, etc. from these Web sites is highly advisable. They can either be removed entirely from the site or moved into a secured section of the site where access to this information is verified and logged. (A rough first-pass scanning sketch appears after this list.)
— On January 17, 2002, the National Infrastructure Protection Center released NIPC Advisory 02-001, Internet Content Advisory: Considering the Unintended Audience. See Exhibit 53-5 for a reprint of the advisory. This advisory can function as a set of standards for deciding what to place, and what not to place, on publicly available Internet sites. When raising the issue of removing information from Web sites with management, the information security practitioner may receive a response that echoes item number seven in the advisory: "Because the information is publicly available in many places, it is not worth an effort to remove it from our site." Although the information does exist elsewhere, the most likely and easiest place for terrorists to find it is on the target organization's Web site. This is also probably the first place they will look. Responsible information security practitioners, or corporate officers for that matter, should make it as difficult as possible for those with hostile intent to gain useful information from their Internet site.


— Remove sensitive information from all corporate communications. No corporate communications should contain any sensitive information. If an organization already has an information classification structure in place, this vulnerability should already be resolved. If there is no information classification structure in place, this is excellent justification for implementing such a program; such a program also brings with it the need for marking documents.
— Shred or destroy trash containing sensitive information. Do you really know who goes through your trash? Do you know your janitorial staff? Dumpster diving is a widely practiced social engineering method. Shredding is an excellent way to avoid this vulnerability and is already widely practiced. Many organizations have either on-site shredders or bins to collect sensitive documents, which are later shredded by contracted shredding companies.
— Create a clean desk policy. Information left unattended on a desktop is a favorite of social engineers. It is easier than dumpster diving (cleaner, too!) and will likely yield better results. While the definition of a clean desk may vary, the intent of such a policy is to keep sensitive information from being left unattended on desktops.
— Limit contractor/vendor access to sensitive information. This is a standard physical security practice, but it deserves special mention within the OPSEC category because it is fairly easy to implement controls on contractor/vendor access. Restricting access to proprietary information is also a good practice.
• Verify the identity of all building/office visitors. Many large organizations and office buildings are verifying the identity of all visitors, and some are checking identification for everyone who enters. This is an excellent practice because it greatly reduces the risk of unauthorized access.
• Report unusual visitors or activity to law enforcement agencies (LEAs). Visitors behaving in a suspicious or unusual manner should be reported to building security, if possible, and then to law enforcement authorities. Quick reporting may prevent undesired activities.
• Exercise safe mail handling procedures. Mail handling procedures took on greater importance during the anthrax scare in the autumn of 2001. See Exhibit 53-6 for a list of safe mail handling procedures.
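As a supplement to the manual review urged in the list above, a crude automated pass over an organization's public pages can flag obvious candidates for removal. The sketch below is an illustrative assumption, not a tool referenced by the chapter: the URL is a placeholder to be replaced with the organization's own pages, the patterns are deliberately simple, and any findings still require human judgment against the NIPC advisory questions reprinted in Exhibit 53-5.

# Crude first-pass scan of a public Web page for material that may warrant
# OPSEC review: e-mail addresses, phone numbers, and street-address cues.
# Patterns and URL are placeholders for illustration only.

import re
import urllib.request

PATTERNS = {
    "e-mail address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "street address cue": re.compile(
        r"\b\d{1,5}\s+\w+\s+(?:Street|Avenue|Boulevard|Road|Drive)\b", re.IGNORECASE),
}

def flag_sensitive(url: str) -> dict:
    """Fetch one public page and count which patterns appear in it."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return {label: len(pattern.findall(text))
            for label, pattern in PATTERNS.items() if pattern.search(text)}

if __name__ == "__main__":
    # Placeholder URL; point this at the organization's own public pages.
    for label, count in flag_sensitive("https://www.example.com/").items():
        print(f"{label}: {count} occurrence(s)")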


Exhibit 53-5. NIPC Advisory 02-001.

Internet Content Advisory: Considering the Unintended Audience
January 17, 2002

As worldwide usage of the Internet has increased, so too have the vast resources available to anyone online. Among the information available to Internet users are details on critical infrastructures, emergency response plans and other data of potential use to persons with criminal intent. Search engines and similar technologies have made arcane and seemingly isolated information quickly and easily retrievable by anyone with access to the Internet. The National Infrastructure Protection Center (NIPC) has received reporting that infrastructure related information, available on the Internet, is being accessed from sites around the world. While in and of itself this information is not significant, it highlights a potential vulnerability.

The NIPC is issuing this advisory to heighten community awareness of this potential problem and to encourage Internet content providers to review the data they make available online. A related information piece on "Terrorists and the Internet: Publicly Available Data should be Carefully Reviewed" was published in the NIPC's Highlights 11-01 on December 07, 2001 and is available at the NIPC web site http://www.nipc.gov/.

Of course, the NIPC remains mindful that, when viewing information access from a security point of view, the advantages of posting certain information could outweigh the risks of doing so. For safety and security information that requires wide dissemination and for which the Internet remains the preferred means, security officers are encouraged to include in corporate security plans mechanisms for risk management and crisis response that pertain to malicious use of open source information.

When evaluating Internet content from a security perspective, some points to consider include:
1. Has the information been cleared and authorized for public release?
2. Does the information provide details concerning enterprise safety and security? Are there alternative means of delivering sensitive security information to the intended audience?
3. Is any personal data posted (such as biographical data, addresses, etc.)?
4. How could someone intent on causing harm misuse this information?
5. Could this information be dangerous if it were used in conjunction with other publicly available data?
6. Could someone use the information to target your personnel or resources?
7. Many archival sites exist on the Internet, and information removed from an official site might nevertheless remain publicly available elsewhere.

The NIPC encourages the Internet community to apply common sense in deciding what to publish on the Internet. This advisory serves as a reminder to the community of how the events of September 11, 2001 have shed new light on our security considerations. The NIPC encourages recipients of this advisory to report computer intrusions to their local FBI office http://www.fbi.gov/contact/fo/fo.htm or the NIPC, and to other appropriate authorities. Recipients may report incidents online at http://www.nipc.gov/incident/cirr.htm, and can reach the NIPC Watch and Warning Unit at (202) 323-3205, 1-888-585-9078, or [email protected]


Exhibit 53-6. Safe mail handling checklist.
Suspicious Packages or Mail
Suspicious characteristics to look for include:
• An unusual or unknown place of origin
• No return address
• An excessive amount of postage
• Abnormal or unusual size
• Oily stains on the package
• Wires or strings protruding from or attached to an item
• Incorrect spelling on the package label
• Differing return address and postmark
• Appearance of foreign-style handwriting
• Peculiar odor (many explosives used by terrorists smell like shoe polish or almonds)
• Unusual heaviness or lightness
• Uneven balance or shape
• Springiness in the top, bottom, or sides
Handling precautions:
• Never cut tape, strings, or other wrappings on a suspect package or immerse a suspected letter or package in water; either action could cause an explosive device to detonate.
• Never touch or move a suspicious package or letter.
• Report any suspicious packages or mail to security officials immediately.

Develop Terrorism Incident Handling Procedures

Security Working Group. Many organizations have established security working groups. These groups may be composed of management, information security practitioners, other security specialists, and safety and facilities management people. Members of the group can also serve as focal points for networking with local, state, and federal authorities and professional organizations to receive intelligence/threat information. The group may meet regularly to review the organization's security posture and act as a body for implementing upgraded security procedures. It may also conduct security evaluations.

Establish Terrorism Incident Procedures. Just as it is important to have incident response plans and procedures for computer security incidents, it is also highly advisable to have incident response plans and procedures for terrorist threats or incidents. An integral part of any terrorism incident response is checklists for bomb threats and other terrorist threats. These checklists should contain numerous questions to ask the individual making the threatening call: where is the bomb, when is it going to explode, what does it look like, etc. The checklists should also contain blanks to fill in descriptions of the caller's voice — foreign accent, male or female, tone of voice, background noise, etc. Checklists should be located near all phones or, at a minimum, in company telephone directories. Many federal and state agencies have such checklists available for the general public.


Exhibit 53-7. BATF bomb threat checklist.
ATF BOMB THREAT CHECKLIST
Exact time of call: __________________________________________________________
Exact words of caller: __________________________________________________________
QUESTIONS TO ASK
1. When is the bomb going to explode?
2. Where is the bomb?
3. What does it look like?
4. What kind of bomb is it?
5. What will cause it to explode?
6. Did you place the bomb?
7. Why?
8. Where are you calling from?
9. What is your address?
10. What is your name?
CALLER'S VOICE (circle):
Calm, Slow, Stutter, Deep, Giggling, Accent, Stressed, Nasal, Disguised, Sincere, Crying, Loud, Angry, Lisp, Squeaky, Slurred, Broken, Rapid, Excited, Normal
If voice is familiar, whom did it sound like?
Were there any background noises?
Remarks:
Person receiving call:
Telephone number call received at:
Date:
Report call immediately to: (Refer to bomb incident plan)

The Bureau of Alcohol, Tobacco & Firearms has an excellent checklist that is used by many agencies and is shown in Exhibit 53-7. Again, as with computer incident response teams, training is quite important. Employees need to know how to respond in these types of high-stress situations. Recurring training on how to respond to threatening phone calls and complete the checklist contributes to reduced risk.

Safety Practices. Here is an excellent opportunity to involve organizational safety personnel or committees. Some practices to involve them with are:


• Review building evacuation procedures. This will provide the current and best method for evacuating buildings should the need arise. Also plan for secondary evacuation routes in the event the primary route is unusable.
• Conduct building evacuation drills. Periodic building evacuation drills, such as fire drills, provide training and familiarity with escape routes. In an emergency, it is far better to respond with training. These should be conducted without prior notification on all shifts. Drills should not be the same every time; periodically, vary the drill by blocking an escape route, forcing evacuees to alter their route.
• Conduct terrorism event drills. Other drills, such as responding to various terrorism scenarios, may be beneficial in providing the necessary training to respond quickly and safely in such a situation.
• Issue protective equipment. Many of the individuals who survived the World Trade Center disaster suffered smoke inhalation, eye injuries, etc. These types of injuries might be avoided if emergency equipment, such as hardhats, dust masks, goggles, flashlights, and gloves, is issued to employees.

Building Security Procedures.3 A determined terrorist can penetrate most office buildings. However, the presence and use of guards and physical security devices (e.g., exterior lights, locks, mirrors, visual devices) create a significant psychological deterrent. Terrorists are likely to shun risky targets for less protected ones. If terrorists decide to accept the risk, security measures can decrease their chance of success. Of course, if the terrorists are willing to die in the effort, their chance of success increases and the efforts to thwart them become much more complex and expensive.

Corporate and government executives should develop comprehensive building security programs and frequently conduct the security surveys that provide the basis for an effective building security program. These surveys generate essential information for the proper evaluation of security conditions and problems, available resources, and potential security policy. Security policy is only one facet of a complex structure and must be integrated with other important areas such as fire safety, normal police procedures, work environment, and work transactions. The building security checklist found in Exhibit 53-8 provides guidance when developing building security procedures.


Exhibit 53-8. Building security checklist.
Office Accessibility
• Buildings most likely to be terrorist targets should not be directly accessible to the public.
• Executive offices should not be located on the ground floor.
• Place the ingress door within view of the person responsible for screening personnel and objects passing through the door.
• Doors may be remotely controlled by installing an electromagnetic door lock.
• The most effective physical security configuration is to have doors locked from within and have only one visitor access door into the executive office area. Locked doors should also have panic bars.
• Depending upon the nature of the organization's activities, deception measures such as a large waiting area controlling access to several offices can be taken to draw attention away from the location and function of a particular office.
Physical Security Measures
• Consider installing the following security devices: burglar alarm systems (preferably connected to a central security facility), sonic warning devices or other intrusion systems, exterior floodlights, deadbolt locks on doors, locks on windows, and iron grills or heavy screens for windows.
• Depending on the nature of the facility, consider installing a 15–20 foot fence or wall and a comprehensive external lighting system. External lighting is one of the cheapest and most effective deterrents to unlawful entry.
• Position light fixtures to make tampering difficult and noticeable.
• Check grounds to ensure that there are no covered or concealed avenues of approach for terrorists and other intruders, especially near entrances.
• Deny exterior access to fire escapes, stairways, and roofs.
• Manhole covers near the building should be secured or locked.
• Cover, lock, or screen outdoor openings (e.g., coal bins, air vents, utility access points).
• Screen windows (particularly those near the ground or accessible from adjacent buildings).
• Consider adding a thin, clear plastic sheet to windows to degrade the effects of flying glass in case of explosion.
• Periodically inspect the interior of the entire building, including the basement and other infrequently used areas.
• Locate outdoor trash containers, storage bins, and bicycle racks away from the building.
• Book depositories or mail slots should not be adjacent to, or in, the building.
• Mailboxes should not be close to the building.
• Seal the top of voids and open spaces above cabinets, bookcases, and display cases.
• Keep janitorial closets, service openings, telephone closets, and electrical closets locked at all times. Protect communications closets and utility areas with an alarm system.
• Remove names from reserved parking spaces.
• Empty trash receptacles daily (preferably twice daily).
• Periodically check all fire extinguishers to ensure that they are in working order and readily available. Periodically check all smoke alarms to ensure that they are in working order.


Exhibit 53-8. Building security checklist (Continued).
Personnel Procedures
• Stress heightened awareness among personnel working in the building, because effective building security depends largely on the actions and awareness of people.
• Develop and disseminate clear instructions on personnel security procedures.
• Hold regular security briefings for building occupants.
• Personnel should understand security measures and appropriate responses, and should know whom to contact in an emergency.
• Conduct drills if appropriate.
• Senior personnel should not work late on a routine basis. No one should ever work alone.
• Give all personnel, particularly secretaries, special training in handling bomb threats and extortion telephone calls. Ensure a bomb threat checklist and a pen or pencil are located at each telephone.
• Ensure the existence of secure communications systems between senior personnel, secretaries, and security personnel with intercoms, telephones, and duress alarm systems.
• Develop an alternate means of communications (e.g., two-way radio) in case the primary communications systems fail.
• Do not open packages or large envelopes in buildings unless the sender or source is positively known. Notify security personnel of a suspicious package.
• Have mail room personnel trained in bomb detection handling and inspection.
• Lock all doors at night, on weekends, and when the building is unattended.
• Maintain tight control of keys. Lock cabinets and closets when not in use.
• When feasible, lock all building rest rooms when not in use.
• Escort visitors in the building and maintain complete control of strangers who seek entrance.
• Check janitors and their equipment before admitting them and observe while they are performing their functions.
• Secure official papers from unauthorized viewing.
• Do not reveal the location of building personnel to callers unless they are positively identified and have a need for this information.
• Use extreme care when providing information over the telephone.
• Do not give the names, positions, and especially the home addresses or phone numbers of office personnel to strangers or telephone callers.
• Do not list the addresses and telephone numbers of potential terrorist targets in books and rosters.
• Avoid discussing travel plans or timetables in the presence of visitors.
• Be alert to people disguised as public utility crews who might station themselves near the building to observe activities and gather information.
• Note parked or abandoned vehicles, especially trucks, near the entrance to the building or near the walls.
• Note the license plate number, make, model, year, and color of suspicious vehicles and the occupant's description, and report that information to your supervisor, security officer, or law enforcement agency.
Controlling Entry
• Consider installing a peephole, intercom, interview grill, or small aperture in entry doorways to screen visitors before the door is opened.


Exhibit 53-8. Building security checklist (Continued).
• Use a reception room to handle visitors, thereby restricting their access to interior offices.
• Consider installing metal detection devices at controlled entrances. Prohibit nonorganization members from bringing boxes and parcels into the building.
• Arrange building space so that unescorted visitors are under the receptionist's visual observation and to ensure that visitors follow stringent access control procedures.
• Do not make exceptions to the building's access control system.
• Upgrade access control systems to provide better security through the use of intercoms, access control badges or cards, and closed-circuit television.
Public Areas
• Remove all potted plants and ornamental objects from public areas.
• Empty trash receptacles frequently.
• Lock doors to service areas.
• Lock trapdoors in the ceiling or floor, including skylights.
• Ensure that construction or placement of furniture and other items would not conceal explosive devices or weapons.
• Keep furniture away from walls or corners.
• Modify curtains, drapes, or cloth covers so that concealed items can be seen easily.
• Box in the tops of high cabinets, shelves, or other fixtures.
• Exercise particular precautions in public rest rooms.
• Install springs on stall doors in rest rooms so they stand open when not locked. Equip stalls with an inside latch to prevent someone from hiding a device in a locked stall.
• Install a fixed covering over the tops of commode water tanks.
• Use open mesh baskets for soiled towels. Empty frequently.
• Guards in public areas should have a way to silently alert the office of danger and to summon assistance (e.g., a foot-activated buzzer).
Discovery of a Suspected Explosive Device
• Do not touch or move a suspicious object. If it is possible for someone to account for the presence of the object, then ask the person to identify it with a verbal description. This should not be done if it entails bringing evacuated personnel back into the area. Take the following actions if an object's presence remains inexplicable:
• Evacuate buildings and surrounding areas, including the search team.
• Evacuated areas must be at least 100 meters from the suspicious object.
• Establish a cordon and incident control point (ICP).
• Inform the ICP that an object has been found.
• Keep the person who located the object at the ICP until questioned.
• Cordon suspicious objects to a distance of at least 100 meters and cordon suspicious vehicles to a distance of at least 200 meters. Ensure that no one enters the cordoned area. Establish an ICP on the cordon to control access, and relinquish ICP responsibility to law enforcement authorities upon their arrival. Maintain the cordon until law enforcement authorities have completed their examination or state that the cordon may stand down. The decision to allow reoccupation of an evacuated facility rests with the individual in charge of the facility.


Antiterrorism Procedures for Employees2,3

Antiterrorism procedures can be defined as defensive measures used to reduce vulnerability to terrorist attacks. These defensive measures, or procedures, although originated by the U.S. Government, are certainly applicable to anyone living or working under a high terrorist threat condition. To some security practitioners, many of these procedures may seem on the verge of paranoia; however, they are presented with two intentions: (1) to illustrate the varying dangers that exist and methods to avoid them; and (2) to allow readers to determine for themselves which procedures to use. Many of the procedures are simply common sense. Others are generally known only to those who live and work in high terrorist threat environments. See Exhibit 53-9 for the personnel antiterrorism checklist.

LESSONS LEARNED FROM SEPTEMBER 11

Our plan worked and did what it was supposed to do. Our employees were evacuated safely.
— Paul Honey, Director of Global Contingency Planning for Merrill Lynch5

Many well-prepared organizations weathered the disaster of September 11. However, there were also many businesses caught unprepared; of those, many no longer exist. Organizations from around the United States and the world are benefiting from the lessons learned on that fateful day. One large and quite well-known organization that was well prepared and survived the event was Merrill Lynch.

When Paul Honey, director of global contingency planning for Merrill Lynch, arrived for work on the morning of September 11, he was met by the disaster unfolding at the World Trade Center. Honey then went to one of the company's emergency command centers, where his contingency planning staff was hard at work. Within an hour of the disaster, the crisis management team had already established communication with key representatives, and emergency procedures were well underway. Honey's team was able to facilitate the resumption of critical operations within one day and, within a week, the relocation of 8000 employees. This effort required the activation of a well-documented and robust business continuity program, an enormous communications effort, and a lot of teamwork.

BUSINESS CONTINUITY PLANS


Exhibit 53-9. Personnel antiterrorism checklist.
General Security Procedures
• Instruct your family and associates not to provide strangers with information about you or your family.
• Avoid giving unnecessary personal details to information collectors.
• Report all suspicious persons loitering near your residence or office; attempt to provide a complete description of the person and/or vehicle to police or security.
• Vary daily routines to avoid habitual patterns.
• If possible, fluctuate travel times and routes to and from work.
• Refuse to meet with strangers outside your workplace.
• Always advise associates or family members of your destination when leaving the office or home and the anticipated time of arrival.
• Do not open doors to strangers.
• Memorize key phone numbers — office, home, police, etc.
• Be cautious about giving out information regarding family travel plans or security measures and procedures.
• If you travel overseas, learn and practice a few key phrases in the native language, such as "I need a policeman, doctor," etc.
Business Travel
• Airport Procedures
  — Arrive early; watch for suspicious activity.
  — Notice nervous passengers who maintain eye contact with others from a distance. Observe what people are carrying. Note behavior not consistent with that of others in the area.
  — No matter where you are in the terminal, identify objects suitable for cover in the event of attack; pillars, trash cans, luggage, large planters, counters, and furniture can provide protection.
  — Do not linger near open public areas. Quickly transit waiting rooms, commercial shops, and restaurants.
  — Proceed through security checkpoints as soon as possible.
  — Avoid secluded areas that provide concealment for attackers.
  — Be aware of unattended baggage anywhere in the terminal.
  — Be extremely observant of personal carry-on luggage. Thefts of briefcases designed for laptop computers are increasing at airports worldwide; likewise, luggage not properly guarded provides an opportunity for a terrorist to place an unwanted object or device in your carry-on bag. As much as possible, do not pack anything you cannot afford to lose; if the documents are important, make a copy and carry the copy.
  — Observe the baggage claim area from a distance. Do not retrieve your bags until the crowd clears. Proceed to the customs lines at the edge of the crowd.
  — Report suspicious activity to the airport security personnel.
• On-Board Procedures
  — Select window seats; they offer more protection because aisle seats are closer to the hijackers' movements up and down the aisle.
  — Rear seats also offer more protection because they are farther from the center of hostile action, which is often near the cockpit.
  — Seats at an emergency exit may provide an opportunity to escape.
• Hotel Procedures
  — Keep your room key on your person at all times.


Exhibit 53-9. Personnel antiterrorism checklist (Continued).
  — Be observant for suspicious persons loitering in the area.
  — Do not give your room number to strangers.
  — Keep your room and personal effects neat and orderly so you will recognize tampering or strange out-of-place objects.
  — Know the locations of emergency exits and fire extinguishers.
  — Do not admit strangers to your room.
  — Know how to locate hotel security guards.
Keep a Low Profile
• Your dress, conduct, and mannerisms should not attract attention.
• Make an effort to blend into the local environment.
• Avoid publicity and do not go out in large groups.
• Stay away from civil disturbances and demonstrations.
Tips for the Family at Home
• Restrict the possession of house keys.
• Change locks if keys are lost or stolen and when moving into a previously occupied residence.
• Lock all entrances at night, including the garage.
• Keep the house locked, even if you are at home.
• Develop friendly relations with your neighbors.
• Do not draw attention to yourself; be considerate of neighbors.
• Avoid frequent exposure on balconies and near windows.
Be Suspicious
• Be alert to public works crews requesting access to the residence; check their identities through a peephole before allowing entry.
• Be alert to peddlers and strangers.
• Write down license numbers of suspicious vehicles; note descriptions of occupants.
• Treat with suspicion any inquiries about the whereabouts or activities of other family members.
• Report all suspicious activity to police or local law enforcement.
Security Precautions When You Are Away
• Leave the house with a lived-in look.
• Stop deliveries or forward mail to a neighbor's home.
• Do not leave notes on doors.
• Do not hide keys outside the house.
• Use a timer (appropriate to local electricity) to turn lights on and off at varying times and locations.
• Leave the radio on (best with a timer).
• Hide valuables.
• Notify the police or a trusted neighbor of your absence.
Residential Security
• Exterior grounds:
  — Do not put your name on the outside of your residence or mailbox.
  — Have good lighting.
  — Control vegetation to eliminate hiding places.


Exhibit 53-9. Personnel antiterrorism checklist (Continued).
• Entrances and exits should have:
  — Solid doors with deadbolt locks
  — One-way peepholes in the door
  — Bars and locks on skylights
  — Metal grating on glass doors and ground-floor windows, with interior release mechanisms that are not reachable from outside
• Interior features:
  — Alarm and intercom systems
  — Fire extinguishers
  — Medical and first-aid equipment
• Other desirable features:
  — A clear view of approaches
  — More than one access road
  — Off-street parking
  — High (six to eight feet) perimeter wall or fence
Parking
• Always lock your car.
• Do not leave it on the street overnight, if possible.
• Never get out without checking for suspicious persons. If in doubt, drive away.
• Leave only the ignition key with the parking attendant.
• Do not allow entry to the trunk unless you are there to watch.
• Never leave garage doors open or unlocked.
• Use a remote garage door opener if available. Enter and exit your car in the security of the closed garage.
On the Road
• Before leaving buildings to get into your vehicle, check the surrounding area to determine if anything of a suspicious nature exists. Display the same wariness before exiting your vehicle.
• Prior to getting into a vehicle, check beneath it. Look for wires, tape, or anything unusual.
• If possible, vary routes to work and home.
• Avoid late-night travel.
• Travel with companions.
• Avoid isolated roads or dark alleys when possible.
• Habitually ride with seatbelts buckled, doors locked, and windows closed.
• Do not allow your vehicle to be boxed in; maintain a minimum eight-foot interval between you and the vehicle in front; avoid the inner lanes. Be alert while driving or riding.
Know How to React if You Are Being Followed:
• Circle the block for confirmation of surveillance.
• Do not stop or take other actions that could lead to confrontation.
• Do not drive home.
• Get a description of the car and its occupants.
• Go to the nearest safe haven.
• Report the incident to police.


Exhibit 53-9. Personnel antiterrorism checklist (Continued).
Recognize Events that Can Signal the Start of an Attack:
• Cyclist falling in front of your car
• Flagman or workman stopping your car
• Fake police or government checkpoint
• Disabled vehicle/accident victims on the road
• Unusual detours
• An accident in which your car is struck
• Cars or pedestrian traffic that box you in
• Sudden activity or gunfire
Know What to Do if under Attack in a Vehicle:
• Without subjecting yourself, passengers, or pedestrians to harm, try to draw attention to your car by sounding the horn.
• Put another vehicle between you and your pursuer.
• Execute an immediate turn and escape; jump the curb at a 30–45 degree angle, 35 mph maximum.
• Ram the blocking vehicle if necessary.
• Go to the closest safe haven.
• Report the incident to police.
Commercial Buses, Trains, and Taxis
• Vary your mode of commercial transportation.
• Select busy stops.
• Do not always use the same taxi company.
• Do not let someone you do not know direct you to a specific cab.
• Ensure the taxi is licensed and has safety equipment (seatbelts at a minimum).
• Ensure the face of the driver and the picture on the license are the same.
• Try to travel with a companion.
• If possible, specify the route you want the taxi to follow.
Clothing
• Travel in conservative clothing when using commercial transportation overseas or if you are to connect with a flight at a commercial terminal in a high-risk area.
• Do not wear U.S.-identified items such as cowboy hats or boots, baseball caps, American logo T-shirts, jackets, or sweatshirts.
• Wear a long-sleeved shirt if you have a visible U.S.-affiliated tattoo.
Actions if Attacked
• Dive for cover. Do not run. Running increases the probability of shrapnel hitting vital organs or the head.
• If you must move, belly crawl or roll. Stay low to the ground, using available cover.
• If you see grenades, lie flat on the floor, with feet and knees tightly together and soles toward the grenade. In this position, your shoes, feet, and legs protect the rest of your body. Shrapnel will rise in a cone from the point of detonation, passing over your body.
• Place arms and elbows next to your ribcage to protect your lungs, heart, and chest. Cover your ears and head with your hands to protect your neck, arteries, ears, and skull.


Exhibit 53-9. Personnel antiterrorism checklist (Continued).
• Responding security personnel will not be able to distinguish you from the attackers. Do not attempt to assist them in any way. Lie still until told to get up.
Actions if Hijacked
• Remain calm, be polite, and cooperate with your captors.
• Be aware that all hijackers may not reveal themselves at the same time. A lone hijacker may be used to draw out security personnel for neutralization by other hijackers.
• Surrender your tourist passport in response to a general demand for identification.
• Do not offer any information.
• Do not draw attention to yourself with sudden body movements, verbal remarks, or hostile looks.
• Prepare yourself for possible verbal and physical abuse, lack of food and drink, and unsanitary conditions.
• If permitted, read, sleep, or write to occupy your time.
• Discreetly observe your captors and memorize their physical descriptions. Include voice patterns and language distinctions as well as clothing and unique physical characteristics.
• Cooperate with any rescue attempt. Lie on the floor until told to rise.

Honey has business continuity planning responsibility for all of Merrill Lynch's businesses. He runs a team of 19 planners who verify that the business follows the business continuity plan, or BCP. His team is not responsible for the technology recovery planning, and they do not write the plans. They are the subject matter experts in program management and set the standards through a complete BCP program life cycle. Planning involves many different departments within the company because of the comprehensive nature of the program. Each business and support group (i.e., the trading floor, operations, finance, etc.) assigns a planning manager who is responsible for that area.

Honey's team responds to nearly 70 emergencies, on average, during the course of a year. Facilities and retail branch offices around the globe experience a variety of incidents such as earthquakes, storms, power outages, floods, or bomb threats. When Honey's team plans for business interruption, the team instructs the business groups to plan for a worst-case scenario of six weeks without access to their facility and, naturally, at the worst possible time for an outage. The planning also includes having absolutely no access to anything from any building — computers, files, papers, etc. "That's how we force people to think about alternate sites, vital records, physical relocation of staff, and so on, as well as obviously making sure the technology is available at another site," says Honey.
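Planning at this level of detail ultimately reduces to structured records: which business units exist, how many seats they need, how long they can tolerate an outage, and where they would relocate. The following minimal Python sketch shows one hypothetical way such planning records could be modeled and queried; the field names, units, and sample values are illustrative assumptions, not Merrill Lynch's actual planning software or data.

# Hypothetical sketch of the kind of record a BCP planning tool might keep.
# Field names and sample values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class BusinessUnitPlan:
    name: str
    headcount: int            # staff needing seats at an alternate site
    max_outage_days: int      # longest tolerable loss of the primary facility
    alternate_site: str       # where the unit relocates in a worst case

PLANS = [
    BusinessUnitPlan("Trading Floor", 1200, 1, "Campus A"),
    BusinessUnitPlan("Operations", 800, 3, "Campus B"),
    BusinessUnitPlan("Finance", 300, 5, "Remote work"),
]

def seats_needed(plans, site: str) -> int:
    """Total seats a given alternate site must be able to absorb."""
    return sum(p.headcount for p in plans if p.alternate_site == site)

def units_at_risk(plans, outage_days: int):
    """Units whose outage tolerance is exceeded by an outage of the given length."""
    return [p.name for p in plans if outage_days > p.max_outage_days]

if __name__ == "__main__":
    print("Seats needed at Campus A:", seats_needed(PLANS, "Campus A"))
    print("Units at risk in a 42-day (six-week) outage:", units_at_risk(PLANS, 42))

Even a toy model like this makes the six-week worst case concrete: every unit shows up as at risk, which is exactly the planning posture Honey describes.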


UPGRADED PLANS AND PROCEDURES AFTER Y2K

Merrill Lynch must comply with standards mandated by regulatory agencies such as the Federal Reserve and the Federal Financial Institutions Examination Council. Honey says, "There's a market expectation that companies such as Merrill Lynch would have very robust contingency plans, so we probably attack it over and above any regulatory requirements that are out there." The BCP team's recent efforts to exceed regulatory standards placed Merrill Lynch in a good position to recover successfully from the September 11 attacks.

EXTENSIVE TESTING OF CONTINGENCY PLANS

All plans are tested twice annually, and once a year the large-scale, corporatewide plans are tested. Honey's team overhauled the headquarters evacuation plan earlier in the year and distributed nearly 8000 placards with the new procedures. These placards proved quite useful on the day of the attacks. Furthermore, the company's human resources database is downloaded monthly into the team's business continuity planning software program. This ensures that the BCP team has a frequently updated list of all current employees within each building. All this preparation resulted in effective execution of the business continuity plans on September 11.

RECENT TEST USING SCENARIO SIMILAR TO TERRORIST ATTACKS

In May 2001, Honey's team conducted a two-day planning scenario for the headquarters' key staff. The scenario, although different from September 11, covered an event of devastating impact — a major hurricane in New York City. "While the hurricane scenario doesn't compare to the tragedies of 9/11 in terms of loss of life, we actually put our company through a fairly extensive two-day scenario, which had more impact to the firm in terms of difficulties in transportation and actual damage in the region," says Honey. "So, we were really very well prepared; we had a lot of people who already thought through a lot of the logistical, technology, and HR-type issues."

The Evacuation

The corporate response team was activated at about 8:55 a.m., while Honey was en route to Canal Street. The team, composed of representatives from all business support groups (building management, physical security personnel, media relations, key technology resources, and key business units), is instrumental in assessing the situation. Despite a multitude of telecommunications troubles in the area, the team was finally able to establish a conference call at 9:30 a.m. to communicate with its other command center in Jersey City, New Jersey, to figure out what was happening.


"In hindsight it seems odd, but we really didn't know, apart from the planes hitting the buildings, whether this was an accident or a terrorist attack," says Honey. "So really, the challenge at that time was to account for our employees, and then to try and understand what had happened. The damage to our buildings also was a concern. How were our buildings? Were they still standing? What was the state of the infrastructure in them?"

Call trees were used to contact employees, and employees also knew how to contact their managers to let them know they got out of the area safely. (A minimal sketch of how a call tree can be represented appears at the end of this passage.) "In a typical evacuation of a building, employees go about 100 yards from the building and wait to get their names ticked off a list," says Honey. "The issue we faced here is that the whole of lower Manhattan was evacuated. So employees were going home or trying to get to other offices — so that was a challenge for us." Honey says the wallet cards key employees carried were extremely beneficial. "Everyone knew who to call and when," he says. "That was a real valuable planning aid to have."

Once the team had the call trees and other communications processes under way, they began to implement the predefined continuity plans and assess what critical business items they wanted to focus on and when.

The Recovery

Critical Management Functions Resumed within Minutes. Many of the company's recovery procedures were based on backup data centers at Merrill Lynch facilities outside the area. The data recovery procedures were followed through without incident. The company has a hot site provider, but they did not have to use that service.
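A call tree is simply a fan-out structure: each person who is reached is responsible for calling a small set of others, so no single caller has to reach everyone. The sketch below is a hypothetical illustration of that idea in Python; the names and the tree shape are invented, and a real notification process would also track acknowledgments, retries, and alternate contacts.

# Hypothetical call-tree sketch: each contact calls a short list of others.
# Names and tree shape are invented for illustration.
CALL_TREE = {
    "BCP coordinator": ["Manager A", "Manager B"],
    "Manager A": ["Employee 1", "Employee 2"],
    "Manager B": ["Employee 3", "Employee 4"],
}

def activate(tree: dict, start: str) -> list:
    """Walk the tree breadth-first and return the order in which calls are made."""
    order, queue = [], [start]
    while queue:
        caller = queue.pop(0)
        for callee in tree.get(caller, []):
            order.append(f"{caller} -> {callee}")
            queue.append(callee)
    return order

if __name__ == "__main__":
    for call in activate(CALL_TREE, "BCP coordinator"):
        print(call)

The same structure maps naturally onto the wallet cards Honey mentions: each card only has to list the handful of numbers its holder is responsible for calling.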

The company's preparedness efforts for Y2K resulted in near-routine recovery of critical data. "We had a very large IT disaster recovery program in place," says Honey, "and we've been working for a couple years now with the businesses to really strengthen the business procedures to use it. So backup data centers, mirroring over fiber channels, etc. — that all worked pretty well." Likewise for the recovery personnel at the command centers: "A lot of people already knew what a command center was, why they had to be there, and what they needed to do because we had gone through that during Y2K, and I'm very grateful that we did."

8000 Employees Back at Work within a Week. A major challenge for the BCP team was getting the displaced employees back to work. First, the company was able to utilize two campus facilities in New Jersey. The company also had its real estate department itemize every available space in the tri-state area and put it onto a roster. Honey's team collected requirements and coordinated the assignment of available space to each business unit. The company operates a fairly comprehensive alternate work arrangement program, so some employees were permitted to work from home.


Finally, the team was able to transfer some work abroad or to other Merrill Lynch offices, which relieved some of the workload from the affected employees.

Resuming Normal Operations. By the end of the week, the BCP team's priority shifted to making sure they could communicate with all employees. Workers needed to be assured that the company was handling the crisis and that space was allocated for displaced workers. Messages were sent instructing them on where to go for more information and what human resource hotlines were available for them to call.

Merrill Lynch's chairman, CEO, and senior business and technology managers recorded messages that were sent out automatically, through a special emergency communication system, to all employees impacted by the incident. This accounted for approximately 74,000 phone calls during the first week after the disaster. "That was a very key part," says Honey. "Getting accurate information to our employee base was a real challenge because of a lot of misinformation in the press, which makes the job very challenging. Plus, key business folks made a huge effort to call all our key customers and reassure them with the accurate information that Merrill Lynch was open for business."

A key logistical challenge was getting the thousands of displaced workers to their new work locations. The company ran a series of ferryboats and buses from various points within the city to other points. The company Web site was also used to communicate transportation information to the affected employees.

LESSONS LEARNED

Honey and his team will be reevaluating certain aspects of their plans in the coming months, even after their success in recovering from such a devastating event. "One of the things I think we'll concentrate on a lot more in the future is region-wide disasters. For example, not so much, 'Your building is knocked out and you can't get in,' but maybe, 'The city you're in is impacted in significant ways.' So, we'll be looking to see how we can make the firm a lot more robust in terms of instances where a city is impacted, rather than just the building."

Honey also believes that many companies will reevaluate their real estate strategies. "Do you really want to have all your operations in one building?" he asks. "Fortunately, for a company like Merrill Lynch, we have a number of real estate options we can utilize."

THE WORK AHEAD

The BCP team was busy working on backup plans for the backup facilities by the end of the second week, while primary sites were either cleaned up or acquired.


"Many of our operations are in backup mode," says Honey, "so we did a lot of work to try and develop backup plans for the backup plans. That was a big challenge." Now the team is in the planning stages for reoccupying the primary sites, which presents its own set of challenges. The switch back to the primary facilities will be undertaken only when it is perfectly safe for employees to reoccupy the damaged buildings.

One of the most important things for Honey and his team was that, by the Monday morning following the attack, everything was back to nearly 95 percent of normal operations. Their efforts over the past few years preparing for a disruption of this magnitude appear to have paid off. "Certainly from my perspective, I was very glad that we put the company through the training exercise in May," says Honey. "It enlightened an awful lot of the key managers on what they would have to do, so we were very prepared for that. Most folks knew what to do, which was very reassuring to me."

CONCLUSION

Reducing vulnerability to physical security threats became immensely more complex after September 11, 2001. Terrorism now needs to be included in all physical security planning. The events of September 11 showed us that procedures designed to deter those with hostile intent might be ineffective against suicidal terrorists. Physical security now needs to change its operating paradigm from deterrence to prevention in order to reduce the risk from terrorism. Taking the additional precautions to prevent hostile acts rather than merely deter them is much more difficult and costly, but necessary.

Protecting one's organization, co-workers, and family from terrorism is possible with training. Maintaining control of access to sensitive information that could be used by terrorists is paramount. Many government Web sites are awash with information that could be useful in combating terrorism. Unfortunately, many of these Web sites can also provide this information to potential terrorists who could use it to discover vulnerabilities.

Dedication

This chapter is respectfully dedicated to those whose lives were lost or affected by the events of September 11, 2001. It is the author's deepest hope that information presented in this chapter will aid in reducing the likelihood of another such event.

References
1. NIPC Advisory 02-001: Internet Content Advisory: Considering the Unintended Audience, National Infrastructure Protection Center, January 17, 2002.
2. Service Member's Personal Protection Guide: A Self-Help Handbook to Combating Terrorism, U.S. Joint Chiefs of Staff, Joint Staff Guide 5260, July 1996.


3. Joint Tactics, Techniques and Procedures for Antiterrorism, U.S. Joint Chiefs of Staff, Joint Pub 3-07.2, 17 March 1998, Appendix.
4. ATF Bomb Threat Checklist, ATF-F 1613.1, Bureau of Alcohol, Tobacco & Firearms, June 1997.
5. Merrill Lynch Resumes Critical Business Functions within Minutes of Attack, Janette Ballman, Disaster Recovery Journal, Volume 14, Issue 4, Fall 2001, p. 26.

ABOUT THE AUTHOR

Jaymes Williams, CISSP, is a security analyst for the PG&E National Energy Group and is currently the chapter secretary of the Portland, Oregon Chapter of ISSA. He has held security positions at other companies and served eight years in information security-related positions in the U.S. Air Force. The author's proceeds from this chapter will be donated to the Twin Towers fund to benefit those affected by the disaster of September 11, 2001.


Index


Index A AC, see Authentication code Access control(s), 344, 362, 524, 703 card, 911 list (ACL), 157, 754 mainframe, 461 mandatory, 470 matrix, 462 perimeter, 704 physical, 357, 358, 901 policy, 497 RACF, 462 systems, military, 461 point cost, 51 service provider, 110 Account identifier, attacker entering, 182 numbers, 44 Accounting controls, 11 Accreditation decision, 501 definition of, 485 elements of, 511 ACF2, 395, 418, 462 ACIRI, see AT&T Center for Internet Research at the International Computer Science Institute ACL, see Access control list Active Directory (AD), 882 Active OS fingerprinting, 56 ActiveX controls, capabilities of, 546 vulnerabilities affecting, 553 Activity monitor, 609 Acts of nature, 904 AD, see Active Directory Administrator/operator functions, 268 Adobe Acrobat, 470 Advanced Encryption Standard (AES), 630, 634, 635, 663 Advanced intelligent networks (AINs), 193 AES, see Advanced Encryption Standard AFR, see Annual frequency rate Agent-based firewalls, 156 AH, see Authentication Header

AIDS Trojan extortion scam, 605 AINs, see Advanced intelligent networks Air gap architecture, 147 considerations, 149 technology, application-level gateway technology and, 148 Airline industry, security measures of, 932 Airport security, 786 Airsnort, 54 Air space, privacy across, 197 Alarms design of, 909 false, 245 logging of, 245 multiple trigger events for, 924 ALE, see Annual loss expectancy American Broadcasting Company, 915 American Civil Liberties Union, 834 American Society for Industrial Security (ASIS), 824 Anderson v. Malloy, 795 ANNs, see Artificial neural networks Annual frequency rate (AFR), 690 Annual loss expectancy (ALE), 513 Anomaly definition of, 684 detection, 683 intrusions, 693 ANSIR, see Awareness of National Security Issues and Responses Antisniff, 223 Anti-spam devices, 717 Antiterrorism checklist, personal, 946–950 Anti-virus firm, 541 Anti-virus products, 260 Anti-virus protection, 321, 376 Anti-virus software, 349, 359, 410, 558 Anti-virus updates, 447 Anti-virus vendor communities, 541 Anti-WTO groups, 546 Apple II viruses, 568 Application(s) access, authorized, 712 coders, 390 development, outsourced, 388, 389

959

AU1518Index Page 960 Thursday, November 14, 2002 7:45 PM

Index information-sharing, 653 layer, proxies filtering packets at, 141 -level gateway, 140 considerations, 142 technology, air gap technology and, 148 -level security, technical safeguards settings contained in, 328 programming, 456 project, scope creep, 403 roles and privileges worksheet, 438 security, basic areas of, 482 service providers (ASPs), 850 software products, design of, 384 Application security, 475–483 commercial off-the-shelf software application security, 481 controls, 359, 362 development life cycle, 475 outsourced development services, 482 perception of as afterthought, 483 in production, 480–481 security controls in development life cycle, 476–480 access, 477–478 code reviews, 479–480 determination of sensitivity and criticality, 478 labeling sensitive information, 478 reporting security incidents, 477 security awareness, 477 separation of duties, 476 use of production data, 478–479 security requirements and controls, 475–476 Approving Authority, 517 Arbor Networks, 234 ARIS, see Attack Registry and Intelligence Services ARP redirect, 220 Arthur Andersen debacle, 850 Artificial neural networks (ANNs), 548 AS, see Authentication service ASIC-based firewalls, 149 ASIS, see American Society for Industrial Security ASN.1 language, expression of MIB using, 95 ASPs, see Application service providers Assets constant monitoring of, 345 failure of organization to protect, 263 financial information, 516 intangible, 405

960

key, 858 safeguarding of, 4 Assurance controls, 860 family, definition of, 286 package, EAL as, 288 Asta Networks, 234 Asymmetric cryptography, 659 ATM, 123, 124 transmission-path virtual circuits, 201 transport of ADSL data via, 107 AT&T Center for Internet Research at the International Computer Science Institute (ACIRI), 694 Clipper phone, 195 Attack(s) avenues, Macintosh, 545 birthday, 662 chosen message, 624 chosen stego, 624 covert channel, 134 cryptographic, 740 denial-of-service, 346, 598, 856 dictionary, 624 distributed denial-of-service, 91, 228, 581, 790, 850 ICMP flooding, 205 Java, 112, 546 known cover, 624 known message, 624 LAND, 137 man-in-the-middle, 738 mitigation, in VoIP designs, 205 name-dropping, 259 NetBIOS/IP, 116 password, 181 Ping of Death, 137 preventable, 688 Registry and Intelligence Services (ARIS), 556 rogue device, 207 scenario, 505 September 11, prominence of CISO position since, 318 sniffing backdoor, 64 social engineering, 186 steganalyst, 624 stego-only, 624 Tear Drop, 137 Tribal Flood, 230 Trinoo, 230 Trojan horse, 205 UNIX, goal of, 545

AU1518Index Page 961 Thursday, November 14, 2002 7:45 PM

Index vulnerability of passwords to brute-force, 222 Web-based, proliferation of, 260 wireless LAN, 51, 53 Attackers, OS types determined by, 55 Attorney–client privilege, 400 Audit compliance with security processes checked by, 319 logs, definition of, 684 organization, risks recognized by, 449 profession, controls commonly used in, 6 trails, 15 maintained, 826 monitoring of, 252 secure, 255 Auditors, security reports required by, 357 Authentication, 362 code (AC), 678 End Entity, 677 Header (AH), 660 individual, 671 open, 169 requirements, 668 scheme, 96 service (AS), 645 systems, 713 two-factor, 687 user-level, 722 VNC, 737 Author, data object, 468 Authorization, see Accreditation Awareness of National Security Issues and Responses (ANSIR), 82

B Backdoor(s), 565 BackOrifice2000, 606 commands, 64 creation of, 338 definition of, 62 nonpromiscuous sniffing, 63 promiscuous sniffing, 63, 64 sniffing, 51, 62 vulnerabilities regarded as, 75 Back Orifice, 544, 606 Backup logs, 395 tapes, erased, 436 BadTrans, 603 Banking industry, 521 Baselines, definition of, 298 Base station controller (BSC), 197

Base transceiver station (BTS), 196–197 Basic service set (BSS), 168 Batch totals, 16 B2B, see Business-to-business B2C, see Business-to-customer BCP, see Business continuity planning Best practices development, risk management, 859 BGP, see Border Gateway Protocol BIA, see Business impact analysis Biological attack mechanisms, similarity of malicious code to, 542 Biometrics, 222, 394 Birth certificates, 44 Birthday attack, 662 Black-hat sites, 556 Block ciphers, 633 Bomb threat checklist, 940 Boot-sector infectors (BSIs), 568 Border Gateway Protocol (BGP), 549 Border security, 786 Boyle v. Board of Supervisors, Louisiana State University, 796 Bragging rights, cyber-criminals hacking into sites for, 355 Brain virus, 542, 572, 576, 583 British Internet service provider, 348 Broadband Internet access users, security for, 107–117 broadband security risks, 110–111 increasing broadband security, 111–116 checking vulnerability, 112–113 NAT firewall, 114–115 personal firewall, 115–116 plugging holes in Windows, 113–114 BSC, see Base station controller BSIs, see Boot-sector infectors BSSS, see Basic service set BTS, see Base transceiver station BugTraq, 61, 556 Building evacuation drills, 941 security checklist, 942–944 security procedures, 941 Bureau of Alcohol, Tobacco & Firearms, 939–940 Burroughs, outsourcing services oriented around, 384 Business analyst, 252 -to-business (B2B), 320, 762 culture, 377 -to-customer (B2C), 762 function manager, 468

961

AU1518Index Page 962 Thursday, November 14, 2002 7:45 PM

Index impact analysis (BIA), 785 IT security model, 747 operations planning, 771 process controls, 323 owner, 442 recovery, evolution from technical recovery to, 763 Business continuity planning (BCP), 13, 775–788, 945 business continuity process, 783–788 business impact analysis, 785–786 design and development, 786–787 implementation of business continuity plan, 787 maintenance of BCP, 788 project initiation, 785 testing of plans, 787 crisis, 776–783 management/expansion, 780–782 preexisting conditions, 777–780 resolution, 783 triggers, 780 program life cycle, 950 Buy.com, DDoS attack launched against, 346 Buzzwords, 747

C CA, see Certificate authority C&A, see Certification and accreditation Cable modems (CMs), 108 television (CATV), 108 Caesar ciphers, 629 CALEA, see Communications Assistance for Law Enforcement Act Calling cards, 183, 185 Call trees, 952 Candid Camera, 915 Capability Maturity Model (CMM), 278 Captus Captio, 233 Networks, 234 Card access controls, 911 Carnegie Mellon Software Engineering Institute (CM-SEI), 103 Carrier Sense with Collision Detection (CSMA/CD), 211 CATV, see Cable television CA Unicenter, 93 CBA, see Cost/benefit analyses CBK, see Common Body of Knowledge CBQ, see Class-based queuing

962

CC, see Common Criteria CCIMB, see Common Criteria Implementation Management Board CCNA, see Cisco Certified Network Associate CCRA, see Common Criteria Recognition Agreement CCSA, see Check Point Certified Security Administrator CCTLs, see Common Criteria Testing Laboratories CCTV, see Closed-circuit television CDMA, see Code division multiple access CD Universe, 818 Celestine v. Union Oil Co. of California, 796 Cell phones, effectiveness of for alerting personnel, 246 Cellular networks, end-to-end security of, 197 CEM, see Common Methodology for Information Security Evaluation CEMEB, see Common Evaluation Methodology Editing Board Centralized scheme initialization, 674 Central office (CO), 107 Central processing unit (CPU), 497 Central procurement, 251 CEO, see Chief executive officer CE router, see Customer edge router CERT, see Computer Emergency Response Team CERT-CC, see Computer Emergency Response Team Coordination Center Certificate authority (CA), 641, 644, 646, 665 Authorizing Participant, 292 Policy, 665, 666, 667 Practice Statement (CPS), 665 definition of, 666 expression of, 666 request processing, 673 revocation list (CRL), 644 Certification abbreviated, 502 CISA, 273 CISSP, 271 definition of, 249, 485 extensive, 504 GIAC, 272 invalid, 538 moderate, 503 practice categories, 807 process, System Manager as part of, 518

AU1518Index Page 963 Thursday, November 14, 2002 7:45 PM

Index requirements, 411 SCCP, 272 Test Plan, 519, 532 Report, 536 types of, 501 Certification and accreditation (C&A), 281, 475, 485 annual assessment between, 496 effectiveness, processes supporting, 494 process, presentation of to management, 498 project management plan for, 491 Certification and accreditation methodology, 485–507 analysis and documentation of security controls and acceptance, 489–493 contingency/continuity of operations plan, 490 letter of acceptance/authorization agreement, 491 letter of deferral/list of system deficiencies, 491 project management plan for C&A, 491 risk management, 491 security plan/concept of operations, 491–492 security specifications, 492 security/technical evaluation and test results, 492 systems security architecture, 492 threats, vulnerabilities, and safeguards analysis, 490 user security rules, 493 verification and validation of security controls, 493 associated implementation factors, 498–499 documentation available in hard copy and online, 498 grouping of systems for C&A, 498 presentation of C&A process to management, 498–499 standardization of C&A procedures, templates, worksheets, and reports, 499 standardization of responses to report sections for enterprise use, 499 beginning, 487 C&A phases, 499–501 accreditation, 500–501 certification, 500

post-accreditation, 501 precertification, 499–500 components, 487 definitions, 485 identification of key personnel to support C&A effort, 487–489 authorizing official/designated approving authority, 488 certifier, 488 information systems security officer, 488–489 program manager/DAA representative, 489 system supervisor or manager, 489 user and user representative, 489 other processes supporting C&A effectiveness, 493–498 annual assessment between C&As, 496–497 applicable federal and state laws, regulations, policies, guidelines, and standards, 494 applicable organizational policies, guidelines, and standards, 494 assessment and recertification timelines, 496 configuration and change management, 494 incident response, 495 incorporation of security into system life cycle, 495 personnel background screening, 495 recertification required every three to five years, 497 security awareness training, 495 security management organization, 495–496 security safeguards and metrics, 496 security safeguards operating as intended, 498 significant change or event, 497 references for creating C&A process, 486–487 repeated process, 486 target, 486 types of certification, 501–505 abbreviated certification, 502–503 checklist, 502 extensive certification, 504–505 moderate certification, 503–504 Certification testing, 509–539 accreditation, 510–511 building certification test plan, 532–535 assumptions and constraints, 533

963

AU1518Index Page 964 Thursday, November 14, 2002 7:45 PM

Index background, 532–533 system description, 533 test objectives, 533 test results, 535 test scenario, 533–534 test script, 534–535 determining requirements, 520–531 audit, 526 functional, 521 legal, 520–521 local, 521 operational, 521 regulatory, 521 requirements decomposition, 521–522 requirements matrix, 522–525 security practices and objectives, 526 source, 526–531 system integrity, 526 dissenting opinions, 539 documentation, 518–520 plans, 519 policy, 519 procedures, 519–520 risk assessment, 520 documenting results, 536 completed requirements matrix, 536 report, 536 elements of accreditation, 511–514 certification, 514–515 configuration management, 514 contingency plan, 513–514 cost versus benefits, 515–516 physical security, 514 reason to certify, 516–517 risk assessment, 513 security plan, 512 security policy, 511 security procedures, 512-513 training, 514 vulnerability assessment, 513 recommendations, 536–539 areas to improve, 538 certify or not certify, 538 meets requirements but not secure, 538 recertification recommendations, 538–539 roles and responsibilities, 517–518 Certified Information System Auditor (CISA), 273 Certified Information Systems Security Professional (CISSP), 3, 243–244, 271, 453, 514 Certified Protection Professional (CPP), 514


Certified Public Accountant (CPA), 514 Certifying Authority, 517 CFOs, 427 CGL, see Comprehensive general liability Challenge Handshake Authentication Protocol (CHAP), 658 Change controls, 857 detection software, 610 management, 248, 250 effect of on security posture of system, 494 processes, 350 requirements, 249 CHAP, see Challenge Handshake Authentication Protocol Check Point Certified Security Administrator (CCSA), 515 Checkpoint Firewall-1, 75 Chernobyl virus, 542 Chief executive officer (CEO), 318, 450 Chief information officer (CIO), 247, 318, 427, 430, 450 Chief information security officer (CISO), 318, 443 Chief risk officer (CRO), 243 Chief security officer (CSO), 432–433, 435, 450 Child pornography, 820 Chosen message attack, 624 Chosen stego attack, 624 CIA, see Confidentiality, integrity, and availability CIACC, see U.S. Department of Energy Computer Incident Advisory Capability CIDR, see Classless Internet Domain Routing CIO, see Chief information officer Cipher(s), 631 block, 633 locks, 910 stream, 632 Circuit-level gateway, 138, 146 considerations, 139 function of stateful inspection as, 143 CIRT, see Computer incident response CISA, see Certified Information System Auditor Cisco Certified Network Associate (CCNA), 515 PIX, 156 router, 56 CISO, see Chief information security officer


Index CISSP, see Certified Information Systems Security Professional City of Mobile v. Havard, 792 Civil lawsuit, John Doe, 892 Civil liabilities, avoidance of, 10 Civil processes, 867 CL, see Command line interface CLASS, see Customer local area signaling services Class-based queuing (CBQ), 231 Classless Internet Domain Routing (CIDR), 25 Class-of-service (CoS), 201 Clearance checks, 83 Cleartext, 627, 631 ClientInitialization message, 728 Closed-circuit television (CCTV), 900, 906 camera, statistics of crimes in presence of, 916 components, 917 installation, 921 Closed-circuit television and video surveillance, 915–926 CCTV components, 917–924 camera, 918–919 monitor, 922–923 peripherals, 923–924 transmission media, 919–922 progression of events, 924–925 reason for CCTV, 916–917 to detect, 917 to deter, 916 to enforce, 917 CMM, see Capability Maturity Model CMs, see Cable modems CM-SEI, see Carnegie Mellon Software Engineering Institute CN, see Common name CNN, attack on, 226, 346 CO, see Central office Code cost of vulnerabilities in, 104 division multiple access (CDMA), 195 strings, change of to defeat scanners, 568 Code Red worm, 58, 60, 260, 541, 543, 550, 577, 598 CoE, see Council of Europe Cold War, end of, 68 Columbus Day/Datacrime hypefest of 1989, 588 Comdisco, 395 Command line interface (CL), 753 Commercial off-the-shelf (COTS) application, introduction of into production, 475

products, 280 vendors, 449 Common Body of Knowledge (CBK), 271 Common Criteria (CC), 275 CEM major components of, 282 stakeholders, roles and responsibilities of, 290 error in, 293 Implementation Management Board (CCIMB), 279 for Information Technology Security Evaluation, 515 ISO 15408, 475, 515 organization of SFRs by, 283 Recognition Agreement (CCRA), 289, 292 security rating system, 76 Testing Laboratories (CCTLs), 291 timeline of events leading to development of, 276–277 Common Evaluation Methodology Editing Board (CEMEB), 280 Common Methodology for Information Security Evaluation (CEM), 288 Common name (CN), 640 Common Open Policy Services (COPS), 203 Common Vulnerabilities and Exposures (CVE), 556 Communications links, assumptions regarding, 787 out-of-band, 861 security (COMSEC), 279 Communications Assistance for Law Enforcement Act (CALEA), 194 Community strings, 96, 102, 104 Compliance officer, 432 Comprehensive general liability (CGL), 353 COMPUSEC, see Computer security Computer facilities, equipment cost of, 383 far-reaching effects of, 817 Incident Advisory Capability, 877 incident response plan, 873 team (CIRT), 13, 790 processing initiative, deployment of new, 385 security (COMPUSEC), 279 features, 278 standards, 275 viruses, see Malware and computer viruses zombie, lax security of, 791



Index Computer Emergency Response Team (CERT), 235, 513, 556, 851, 876 Coordination Center (CERT-CC), 103, 134 recommendations provided by, 104 Computer Fraud and Abuse Act, 543, 829, 894 Computer security incident, managing response to, 873–888 getting started, 874–878 development of incident response team, 877–878 incident definition, 876–877 reason for having incidence response plan, 874–885 requirements for successful response to incident, 875–876 other considerations, 886–887 benefits of structured incident response methodology, 887 common obstacles, 886 importance of training, 886–887 phases of incident response, 878–885 containment, 882 detection, 880–882 eradication, 822–883 follow-up, 884–885 preparation, 879–880 recovery, 883–884 Computer Security Institute, 427, 450 Computing center evolution, 902 COMSEC, see Communications security Concept of operations (CONOPS), 491 Conditional toll deny (CTD), 184 Conference bridge on-demand, 182 security issues regarding, 183 Confidentiality expectation of, 385 integrity, and availability (CIA), 855 loss of, 748 risks associated with loss of, 718 Configuration management, 407, 408, 496, 514, 524 Conflicts of interest, 9, 371 Connection(s) denied, 29 table, proprietary methodology for building, 136 CONOPS, see Concept of operations Consultants, qualifications of, 315 Consumer inquiries, 50 Content filtering, real-time, 559 Contingency planning (CO), 407, 408, 513, 761, 951


Continuity planning, changing face of, 761–774 computer forensic teams, 769 lessons of Enron, 768–769 lessons of September 11, 762–768 aftermath, 764 business process continuity versus IT DRP, 767 call to arms, 763–764 call for homeland security, 764–766 executive protection and succession plans, 767 focus on people, 766–767 full-scope continuity planning business process, 770–773 importance of education, training, and awareness, 766 Internet and enterprise continuous availability, 769–770 risk reassessment, 768 security and threats shifting, 767–768 revolution, 761–762 Continuous availability, 769 challenges facing continuity planners in, 770 implementing, 770 Contract(s) fixed-price, 391 wiggle room in, 400 Control(s), 3–20 activities, 6 assurance, 860 characteristics, 6–9 components used to establish control, 5–6 data access, 17–18 definition of, 3–5 Detection, 860 discretionary access, 17 edit, 17 environment, 5 hash, 16 illusion,19 implementation, 15–17 batch totals, 16 edit, 17 hash, 16 logging, 16–17 sequence, 16 transmission controls, 15–16 mandatory access, 18 physical, implementation of, 17 placement of, 6 principle objectives for, 4


Index resistance to rigid, 18 segregation of duties, 10 sequence, 16 standards, 14–15 systems, failure of, 19 transmission, 15 types, 9–14 accounting, 11 compensating, 14 corrective, 12–13 detective, 12 deterrent, 13 directive/administrative, 9–11 internal, 9 preventive, 12 recovery, 13–14, 860 why controls do not work, 18–19 COPS, see Common Open Policy Services Copyright infringement, 346, 821 protection, 405 CORA, see Cost of Risk Analysis Corporate auditing, 416 Corporate directory, 712 Corporate e-mail, certificate request via, 671 Corporate espionage, 700 Corporate network, ways for malicious code to enter, 550 Corrective controls, 12 CoS, see Class-of-service Cost/benefit analyses (CBA), 689, 690 Cost of Risk Analysis (CORA), 334 COTS, see Commercial off-the-shelf Council of Europe (CoE), 816, 830 cyber-crime treaty, 835, 837 principles of regime theory ignored by, 838 Counter-economic espionage, 67–87 barriers encountered in attempts to address economic espionage, 83–87 end of Cold War, 68–69 history, 68 implications for information security, 71–76 players, 78–79 companies, 78–79 countries, 78 real-world examples, 80–81 hacking of systems internally, 80 using language to hide in plain sight, 80–81 role of information technology in economic espionage, 69–71

targets, 76–78 what information security professionals can do, 81–83 Countermeasure trade-off analysis, 505 Covert channel attack, 134 CP, see Continuity planning CPA, see Certified Public Accountant CPE, see Customer premise equipment CPP, see Certified Protection Professional CPS, see Certificate Practice Statement CPU, see Central processing unit Crackers, 819 CRC, see Cyclic redundancy check Credential sniffing, 217 Credit card theft, 347, 820 Criminal incidents, nature of, 869 Criminal liabilities, avoidance of, 10 Crisis management, 448, 780 definition of, 445 incident response, 446 planning, 771 resolution, 782 trigger, 776, 780 Critical incident investigation, 863 CRL, see Certificate revocation list CRO, see Chief risk officer Cross-platform malicious code, 545 Cryptographic attacks, 740 Cryptographic modules, programming of, 480 Cryptography, 627–651 algorithm choice, 642–643 asymmetric, 637–642, 659 attacks, 639–640 background, 637 elliptic curve cryptosystems, 639 real-world applications, 640–642 RSA, 638–639 basics, 627–631 confidentiality, integrity, and authentication, 631 definition of cryptography, 627 history, 628–630 names participating, 630 related terms and definitions, 627–628 domain, 618 hash functions, 647–649 applications of message digests, 647 digital signature algorithm, 648 message authentication codes, 648–649 message digests, 647



Index key management and exchange, 643–646 change control, 644 destruction, 644 examples and implementations, 644–646 exchange, 643–644 generation, 643 installation and storage, 644 notes, 649–650 backdoors and digital snake oil, 650 digital notary public, 650 steganography, 649–650 symmetric cryptographic algorithms, 632–637 block ciphers, 633–636 stream ciphers, 632–633 weaknesses and attacks, 636 Cryptovariable, 628 Cs3, Inc., 233, 235 CSI/FBI computer crime statistics, 428 CSMA/CD, see Carrier Sense with Collision Detection CSO, see Chief security officer CTD, see Conditional toll deny Custodian, definition of, 463, 464 Customer access network, 716, 717 confidence, 773 edge (CE) router, 125 lists, organization’s importance of keeping confidential, 385 local area signaling services (CLASS), 192 premise equipment (CPE), 192 Cutoff proxy, 145, 146 considerations, 147 filtering packets, 146 CVE, see Common Vulnerabilities and Exposures CyberArmor, 159, 161, 162 Cyber-consumers, 47 Cyber-crime, international dimensions of, 815–840 approaches for Internet rule, 835–837 anarchic space, 835 epistemic communities, 836 international regimes, 836–838 national space, 836 supranational space, 836 Council of Europe convention, 830–831 international cooperation, 831 national law, 831 formula for success, 838 global cyber-crime, 817–824 cyber-crime defined, 818–821


cyber-crime threat, 817 cyber-terrorism, 821–823 growing threat, 823–824 international threat, 817–818 new age and new risks, 817 government efforts, 828–830 international efforts to mitigate cybercrime risk, 828 international issues, 824–828 legal issues, 825–827 technical issues, 827–828 NGO responses and criticisms, 831–835 Council of Europe response, 835 lack of NGO involvement, 832–833 mutual assistance, 834 overextending police powers and selfincrimination, 833 privacy, 833–834 stifling of innovation and safety, 834–835 Cyber-crime, response, investigation, and prosecution, 889–897 investigation, 891–894 prosecution, 894–896 response, 890–891 Cyber-risk management, 341–364 insurance for cyber-risks, 353–356 finding right insurer, 356 loss prevention services, 355–356 specific cyber-liability and property loss policies, 354–355 risk management approach, 342–345 assess, 342 detect, 345 insure, 343–345 mitigate, 343 remediate, 345 technical controls, 356–363 application security controls, 359–361 data backup and archival, 361–363 network security controls, 357–359 physical access control, 357 types of security risks, 345–353 cyber-terrorism, 352–353 GLBA/HIPAA, 350–352 threats, 348–350 Cyber-terrorism, potential for, 352 Cyclic redundancy check (CRC), 610

D DAA, see Designated approving authority DAC, see Discretionary access control Daemons, 596


Index Dark Avenger virus, 542 DARPA, see Defense Advanced Research Projects Agency Data access controls, 17 accuracy standards, 14 ADSL, 107 backup, 361, 362 classification, 852 collection, opting in to, 47 communications facilities, independent auditor review of, 388 servicer, biggest reason to hire, 386 completeness, 14 compression, 396, 397, 403 compromised, 48 confidentiality, definition of, 524 denied attempts to access, 471 derived, 45, 46 diddlers, 565 distribution, 14 dynamic, 45 Encryption Standard (DES), 630, 634 comparison of to RSA, 639 symmetric cryptography, 728 enterprise, 467 information valuation, 72 integrity, 703 checkers, 881 definition of, 524 -link integrity, 128 management, 14 modeling, 396 object(s) author, 468 organization of, 466 owner, 256, 257, 266 private, 43–44, 128 types of, 45 vendor innovations, 49 sets, types of, 466 static, 44, 45 storage of in digital format, 924 substitution, 620 transmission, covert, 623 validation, 14 ways to hide, 620 Data, ownership and custody of, 461–472 access control, 470 background, 461–462 classification and labeling, 469–470 definitions, 463 information identification, 466–467

owner identification, 467–469 author, 468 business function manager, 468 line manager, 468 surrogate owners, 468–469 policy, 463 recommendations, 471 roles and responsibilities, 463–466 administrator, 465 custodian, 464–465 owner, 463–464 use manager, 465 user, 465–466 variance detection and control, 470–471 Database(s) entries, erased, 80 file system integrity, 659 management system, technical safeguards settings contained in, 328 programming, 456 security specialist, 406 shared functional data stored in, 469 Data Protection Directive, European Union, 833 DDoS, see Distributed DoS Deciphering, 627 Decryption, 627, 628 Default community strings, 102, 104 Default passwords, 117 Defense Advanced Research Projects Agency (DARPA), 227, 852 Defense-in-depth practices, 481, 686 Demilitarized zone (DMZ), 219, 359, 444, 707 architecture, 358 IDS on, 842 DEN, see Directory Enabled Networking Denial-of-service (DoS) attacks, 91, 102, 103, 346, 598, 856 Denial-of-service attacks, ISPs and, 225–236 DDoS as ISP problem, 227–228 importance of DDoS attacks, 226–227 resources, 235 what ISPs can do about DDoS attacks, 228–235 assessing DDoS technologies, 232–235 assisting customers during DDoS attack, 231–232 defending against DDoS attacks, 228–231 Department of Defense, 83, 226 Department of Energy, 83, 428 Derived data, 45, 46 DES, see Data Encryption Standard



Index Designated approving authority (DAA), 488 identification of, 499 options, 491 representative, 489 responsibility of ISSO to, 488 Desktop(s) anti-virus software, 558, 717 security configuration management of, 448 Detection anomaly, 683 controls, 12, 860 misuse, 683 Deterrent control, 13 Development life cycle, security controls in, 476 DHCP, see Dynamic Host Configuration Protocol Dial-up service, connect on-demand with, 111 Dickinson Arms-Reo v. Campbell, 794 Dictionary attacks, 624 DID, see Direct inward dial Differentiated Services (DiffServ), 119 DiffServ, see Differentiated Services Digital cellular architecture, 196 Digital certificates, Web browser-based, 120 Digital circuits, advantage of, 187 Digital notary service, 650 Digital signature, 648, 659, 661 logical extension of, 650 standard, 647 Digital subscriber line, 107 Digital video recorder (DVR), 924 Direct inward dial (DID), 178 Direct inward system access (DISA), 185 advantages, 185, 186 authorization codes, disabling of inactive, 186 Directive/administrative controls, 9 Directors and officers (D&Os), 347 Directory(ies) corporate, 712 Enabled Networking (DEN), 203 home, 755 nonretrievable, 39 operating system, 755 permissions, 754 security, 747–758 addressing threat, 749–751 auditing, 758 dilemma, 747–748 establishing correct permissions, 751 monitoring and alerts, 756–757


permissions settings, 751–753 permissions utilities, 753–754 sensitive file permissions, 756 specific directory permissions, 754–756 threats and consequences, 748–749 shared, 755 -specific messages, 28 Direct outward dial (DOD), 179 Direct Sequence Spread Spectrum (DSSS), 167 DISA, see Direct inward system access Disaster recovery, 274, 395, 397, 398 /business continuity (DR/BC), 443, 445 planning (DRP), 13, 378, 561, 761 team, leader of, 782 Disaster Recovery Institute, 784 Discretionary access control (DAC), 17, 18 Disk-imaging programs, 562 Distinguished name (DN), 640 Distributed DoS (DDoS), 226 Distributed DoS attack, 70, 91, 510, 581, 790, 850 defending against, 228 liability for lax computer security in, 791–798 Distributed systems, strategic importance of, 903 DLL, see Dynamic link library DMZ, see Demilitarized zone DN, see Distinguished name DNS, see Domain Name System DOD, see Direct outward dial Domain name, 25 Domain Name System (DNS), 63, 225 D&Os, see Directors and officers DoS attack, see Denial-of-service attack Dragging the wolf into the room, 245 DR/BC, see Disaster recovery/business continuity DRP, see Disaster recovery planning Dr. Pepper Bottling Co. v. Bruner, 795 DSSS, see Direct Sequence Spread Spectrum DTMF signaling, see Dual-tone multifrequency signaling Dual infector, 574 Dual-tone multifrequency (DTMF) signaling, 192 Dumpster driving, 81, 337 Duplo Manufacturing Corporation, prosecution of under Economic Espionage Act, 79 DVR, see Digital video recorder Dynamic data, 45


Index Dynamic Host Configuration Protocol (DHCP), 53, 207 Dynamic link library (DLL), 159 Dynamic packet filter(s), 135 considerations, 137 differences in, 136

E EALs, see Evaluation assurance levels Earthquake threats, 904 Easter egg, 582 Eavesdropping, 178 eBay attack, 226 E-business security specialist, 406 ECC, see Elliptic curve cryptosystems ECMA, see European Computer Manufacturers Association E-commerce customers, 480 Web applications addressing, 475 Economic espionage, 67, see also Countereconomic espionage employee resistance to specter of, 86 event, world’s number one, 82 Economic Espionage Act, 78 Economic intelligence, most active collectors of, 77, 78 EDI, see Electronic data interchange Edit controls, 17 EE, see End Entity EGP, see Exterior Gateway Protocol EICAR, see European Institute of Computer Anti-virus Researchers EIGRP, see Enhanced Interior Gateway Routing Protocol Electronic data interchange (EDI), 669, 802 Electronic signature law, 648 Elliptic curve cryptosystems (ECC), 639 E-mail addresses, 712 anonymizers, 70 certificate request via, 670, 671 content filtering, 717 effectiveness of for alerting personnel, 246 encrypted, 619 HTML-formatted, 608 infrastructure vendors, 121 /messaging operations, 448 security appliances, McAfee series of, 121 servers, 134, 552 sniffing, 218 virus(es), 561, 573, 592, 608

Web-based, 121 worm, 543 Embezzlement, 820, 891 Emergency change management, 250 lighting, 913 response checklists, 861 Empire, 585 Employee antiterrorism procedures for, 945 apathy, 19 behavior, control of, 10 cross-training of, 268 data, moving of, 8 identification number, 417 indoctrination process, 410 numbers, 44 planted, 78 practices, monitoring of, 417 ranks, organization’s importance of keeping confidential, 385 Encapsulating Security Payload (ESP), 660 Encapsulation, improving security through, 741 Encryption, 362 algorithm, 642 deployment of at network level, 221 drawback of using, 619 frequently used, 636 IPSec VPN, 149 keys, 712 PKZIP, 636 security, 173 services, 715, 716 software, 643 symmetric, 643 use of to conceal information, 828 WEP, 171 End-to-end service-level guarantees, 123 End Entity (EE), 666, 676, 677 End users, interaction of with IT systems, 442 Enhanced Interior Gateway Routing Protocol (EIGRP), 549 Enron, 850 case, mishandling of information in, 11 lessons of, 768 -related events, global level of, 769 Enterprise data, 467 firewall, 155 IP telephony security, 205 network, construction of VPN within, 715



Index resource planning (ERP), 313, 445 threats, 349 security architecture (ESA), 807–808 server operations, 448 EOB, see Explanation of benefits ERP, see Enterprise resource planning Error checking, 655 handling, 14, 253 ESA, see Enterprise security architecture ESP, see Encapsulating Security Payload Espionage, see also Counter-economic espionage categories of, 67 corporate, 700 economic, 67 functionality, divisions of, 70 industrial, 67 methods, traditional, 81 Ethereal, 212, 213 ETR, see Evaluation Technical Report European Computer Manufacturers Association (ECMA), 279 European Institute of Computer Anti-virus Researchers (EICAR), 556 Evaluation assurance levels (EALs), 283, 515 Technical Report (ETR), 291 Evidence-handling practices, 866 Executive override, 19 Explanation of benefits (EOB), 810 Explore.zip virus, 542 Exterior Gateway Protocol (EGP), 101 External extrusion, prevention of, 157

F Facility perils, 907 False alarms, 245 False alert, most famous, 603 FAT, see File allocation table Fat-client software, 119 Fax(es) encrypted, 188 misdirected, 187 FBI, 547 Awareness of National Security Issues and Responses, 82 collection of information by foreign agencies reported by, 77 definition of economic espionage by, 67 economic espionage summary by, 73 information sharing interpreted by, 83


opening of investigation by, 894 problem facing, 84 Federal Deposit Insurance Corporation, 350 Federal Financial Institutions Examination Council (FFIEC), 326, 951 Federal Reserve, 951 FFIEC, see Federal Financial Institutions Examination Council FHSS, see Frequency Hopping Spread Spectrum Fiber-optic cable, 920 FIC, see File integrity checking Figueroa v. Evangelical Covenant Church, 793 File allocation table (FAT), 584, 749 infectors, 572, 589 integrity checking (FIC), 684 nonretrievable, 39 sharing programs, 550 wrapper, 620 File Transfer Protocol (FTP), 95 access, denying, 23 controlling, 21–42 additional security features, 33–34 complete /etc/ftpaccess file, 37 controlling FTP access, 22–24 extending control, 24–33 logging capabilities, 34–37 scenarios, 21–22, 37–42 server creation of UNIX account on, 41 default action for, 40 preventing anonymous access to, 30 restricted command on, 35 timeouts for, 31 traffic, scanning of incoming, 359 Filtering packets, cutoff proxy, 146 Filter routers, 706 Financial operations, organization’s importance of keeping confidential, 385 Financial risk, 346 Financial statement auditors, 325 Finger program, 596 Fire controls, 912–913 detectors and alarms, 912 gas-based fire extinguishing systems, 912 water-based systems, 912 Firewall(s), 360, 509, 699–718 agent-based, 156 ASIC-based, 149 authentication systems, 714–715 configuration, use of packet sniffer to verify, 216


Index corporate directory, 712–714 covert channel initiated through, 71 customer and business partner access, 715–717 DMZ, 121 downloadable, 229 encryption services, 715 enterprise, 155 establishment of perimeter, 705–708 establishment of program, 711–712 establishment of security architecture, 702–704 false assurance provided by, 701 free, 229 full-featured, 115 hardware-based, 150 host-based categories of, 157 centralized configuration, 158 deployment of, 156 UNIX, 163 infrastructure model, 704–705 installation, 705 internal, 156 level of protection provided by, 129 logs, 13 monitoring of, 378 network address translation, 114 outsourced services for, 376 personal, 90, 115, 116, 155, 555 physical security, 709–710 proxy, 57 risks, 701–702 simple, 115 software-based, 156 stateful inspection-based, 143, 144 system controls, 710–711 table, default rule defined in, 139 technical safeguards settings contained in, 328 traffic flow logged by, 844 use of to create protected subnet, 707 voice, 204 VoIP, 206 VPN tunnels through, 260 Firewall architectures, 129–154 air gap, 147–149 application-level gateway, 140–142 circuit-level gateway, 138–140 cutoff proxy, 145–147 dynamic packet filter, 135–137 fundamentals, 129–130 network security, 130

OS hardening, 150–153 definition, 150–151 importance of, 152 OS vulnerabilities, 152 patched OS, 151, 152–153 product provided with hardened OS, 152 other considerations, 149–150 hardware-based firewalls, 150 SIC-based firewalls, 149–150 stateful inspection, 142–145 additional risks, 144–145 performance and security, 144 static packet filter, 130–134 Fiscal responsibility, 83 Fixed-price contract, 391 Flooding ICMP, 205 TCP SYN, 205 UDP fragment, 205 FNBDT, see Future narrow-band digital terminal Food and water supply security, 786 Forensic examination, 866 Four Pillars Company, prosecution of under Economic Espionage Act, 78 F-Prot, 584 Frame Relay, 123, 124 Frances T. v. Village Green Owners Assoc., 793 Fraud, 398, 505 avoiding, 420 cost of, 184 perpetrators, 820 separation of duties and, 436 suspicion of, 409 telecommunications, 700 Freedom of Information Act, 376 French intelligence agency, 79 Frequency Hopping Spread Spectrum (FHSS), 167 Front lighting, 919 FTP, see File Transfer Protocol Full-featured firewall, 115 Functional family, definition of, 285 Future narrow-band digital terminal (FNBDT), 198

G GA, see Genetic algorithms GAO, see General Accounting Office Gap analysis, 323, 324, 807 Garbage-in, garbage-out (GI-GO), 333



Index Gateway(s) application-level, 140, 142 circuit-level, 138, 139l, 146 considerations, 139 function of stateful inspection as, 143 interface, certificate request via, 672 SMP-based application-level, 144 SMTP system, scanning rules on, 559 virus protection, 555 VoIP trunking, 199 VPN, 668 WAP, 668 General Accounting Office (GAO), 421 General Public License (GPL), 721 Genetic algorithms (GA), 548 GHOST, 562 GIAC, see Global Information Assurance Certification GI-GO, see Garbage-in, garbage-out Gilbert & Jones, Inc., prosecution of under Economic Espionage Act, 79 GLBA, see Gramm-Leach-Bliley Act Global Information Assurance Certification (GIAC), 272–273 Global Internet Liberty Campaign, 833 Global positioning system (GPS), 52 Global System for Mobile Communications (GSM), 195 GNU General Public License, 212 Goner, 544, 546 Good Times, 603 Government agencies, as primary targets, 930 contractors, 474 entity, large-scale computer system acquired by, 510 Web sites, 954 GPL, see General Public License GPS, see Global positioning system Gramm-Leach-Bliley Act (GLBA), 264, 314, 347, 350, 794 Grand jury subpoenas, 865 Graphical user interface (GUI), 217, 753 certificate request via, 671 icons, 255 GSM, see Global System for Mobile Communications Guard services, 908 Guest user account, creation of, 41 GUI, see Graphical user interface Guidelines, definition of, 298


H Hacker(s), 71 advantage gained from anonymity, 137 definition, 684 first step of, 217 network, attempt to develop, 822 protection of sensitive information from, 365 teenage, 815 tools, automatic, 895 zombies coordinated by, 226 Hacker attacks and defenses, 51–65 active and passive operating system fingerprinting, 55–58 active OS fingerprinting, 55–56 defending against operating system fingerprinting, 57–58 passive OS fingerprinting, 56–57 recent worm advances, 58–62 more damaging attacks, 60 morphing and disguised worms, 59 multi-platform worms, 59 rapidly spreading worms, 58–59 worm defenses, 60–62 zero-day exploit worms, 60 sniffing backdoors, 62–65 defending against sniffing backdoor attacks, 64–65 nonpromiscuous sniffing backdoors, 63 promiscuous sniffing backdoors, 63–64 wireless LAN attacks, 51–55 defending against wireless LAN attacks, 53–55 network stumbling, war driving, and war walking, 52–53 Hard-coded passwords, 22, 41, 42 Hardened OS, 150, 151 Hardware -based firewall, 150 inventory, 861 Hash controls, 16 function, 654 definition of, 647 most common, 658 Hash algorithms, 653–663 definitions, 654–659 functions, 654–655 how SHA-1 works, 656–657 keyed hash, 657–659


Index MD5, 657 secure hash algorithms, 655–656 problems with hash algorithms, 662 secure, 655, 656 use of in modern cryptographic systems, 659–662 digital signatures, 661 IPSec, 660–661 Transport Layer Security, 659–660 Hashed message authentication code (HMAC), 657, 658, 660 HCOs, see Healthcare organizations Healthcare organizations (HCOs), 799, 802 Health Insurance Portability and Accountability Act (HIPAA), 264, 313, 326, 352, 793, 799, see also HIPAA security readiness, framework to approach compliance, 402 interdependencies, 810 security requirements, 800, 801, 803 Help desk, 456 Hewlett-Packard Openview, 93 OS hardening guidelines provided by, 361 outsourcing services oriented around, 384 High-risk mitigation, 809 Hijacking, 950 HIPAA, see Health Insurance Portability and Accountability Act HIPAA security readiness, framework to approach, 799–814 current design, 800–801 determining applicability, 801 functional decomposition of organization, 800–801 security requirements, 801 execution, 811–812 defining PMO activities, 812 program management office, 811–812 utilizing standard project life-cycle approach, 812 framework, 799–800 gap assessment, 807–811 defining projects, 809–810 development of budget, 811 organizational alignment and management approval, 811 prioritizing projects, 810–811 requirements interpretation, 801–807 defining scope of security requirements, 802–805

development of requirements categories, 806–807 HMAC, see Hashed message authentication code Hoaxes, 565, 570, 578, 603 Home directory(ies), 755 retrieving of files from, 40 uploading of files to, 36 Homeland security, call for, 764 Home location register, 197 Honeypot, definition, 684 Host-based DDoS protection, 232 Host-based firewalls, deployment of across enterprise, 155–165 configuration, 160–161 lessons learned, 161–162 product selection criteria, 157–159 semantic introduction, 155–157 stand-alone versus agent-based firewalls, 157 testing methodology, 159–160 UNIX, 163 HR, see Human resources HRIS, see Human resources information system HTML, attack scripts coded in, 548 HTTP, 207 connection, 725 content filter, 552 packets, 215 protection, 559 proxy, 140 traffic, 144 scanning of for hostile Java applets, 555 scanning of incoming, 359 Human error, 505 Human resources (HR) controls, 350 department, consequences of corporate policy noncompliance enforced by, 416 hiring and separation process, 443 information system (HRIS), 443 Human resources issues, 441–459 hiring of information security professionals, 451–455 job descriptions, 451–452 relevant experience, 453 selection process, 453–454 when employees and non-employees leave, 454–455 information security roles and responsibilities, 442–451



Index business process owner, information custodian, and end user, 442–443 distributed information security support in larger organizations, 448 information security functions, 443–448 information security options for smaller organizations, 448–449 internal and external audit, 449 outsourcing providers, 449–450 reporting of information security function, 450–451 separation of responsibilities, 455–457 Hybris, 541, 573, 594 HyperCard stacks, as attack avenue, 545 viruses, 599

I I&A, see Identification and authentication IBM, outsourcing services oriented around, 384 ICMP, see Internet Control Message Protocol ICV, see Integrity check value Identification and authentication (I&A), 470 IDS, see Intrusion detection system IEC, see International Electrotechnical Commission IEEE 802.11, 51, 90, 167, 169 IETF, see Internet Engineering Task Force IGMP, see Internet Group Membership Protocol IIS, see Internet Information Services Image blossoming of, 919 files, steganography in, 621 unmodified, 622 IMAP4, see Internet Mail Access Protocol Import/export regulations, 704 IN, see Intelligent network Incident(s), see also Computer security incident, managing response to criminal, nature of, 869 definition of, 695 handling, 246, 695, 696 management, reporting, 845 recovery, 890 reporting, 853 requirements for successful response to, 875


Incident response (IR), 392, 397, 407, 408, 446, 495, 695–696 management, 855–871 criminal, forfeiture, and civil processes, 867–869 critical incident investigation, 863–865 critical incident response team, 860–861 enterprise risk, 858 forensic examination, 866–867 incident response steps, 861–863 key assets, 858–859 nature of criminal incidents, 869–870 overall project planning, 856–857 risk controls, 860 risk management best practices development, 859–860 risk management key points, 856 risk management project, 855 top-down risk management project planning, 855–856 use of monitoring devices, 869 methodology, benefits of structured, 887 phases of, 878, 879 plan, 685 obstacles to establishing effective, 886 reason for having, 874–875 preparation, 61 team (IRT), 877, 878 tiered approach to, 246 Independent code review(s) failure to pass, 482 purpose of, 480 Independent Verification and Validation (IV&V), 518, 532, 539 Industrial Age, 817 Industrial espionage, 67 Industrial Revolution, 68, 69 Infection initial stages of, 559 mechanism, 575, 587 widely publicized vectors of, 577 Infector(s) boot-sector, 568 dual, 574 file, 572, 589 system, 573 InfoExpress CyberArmor Personal Firewall, 157 Information age, beginning of, 68 confidentiality, 505


Index custodian, 442 defining sensitive, 936 disclosure of sensitive, 420 electronic exchange of, 701 improper classification of, 478 labeling of sensitive, 478 propriety of, 4 recording of, 45 resources management (IRM), 435 service providers, 850 sharing applications, 653 FBI interpretation of, 83 Sharing and Analysis Centers (ISACs), 353 society, transition to, 69 system(s) auditor, 267 Security Policy for, 511 System Security Certification Test, 509 theft of proposal, 78 threat, 499 use of encryption to conceal, 828 valuation data, 72 volatile, 250 war, 69 Information protection, 415–439 administration manager, 435 department, mission of, 426 director, 435 executive management sponsoring and support of, 429–437 fitting of information security role within organization, 429–431 security positions, 431–436 separation of duties, 436–437 size of information protection/security department, 437 group, worst place for, 431 personnel within organization, 415–420 corporate auditing, 416 human resources, 416 information asset and systems owners, 419 information protection, 420 information security oversight committee, 416 IT administrators, 418–419 law, 417 managers, 417 role of, 420–428 awareness and training, 423–426

information protection budgeting, 427–428 risk management, 421–423 Information security administrator, 433–434 agenda, personal agendas placed ahead of, 265 best practices, 315 conflict between marketing/sales and, 84 delegate, 415 department, users administered by, 270 engineering, definition of, 445 function(s) budget challenges for, 442 exercises, 304 rumors, 80 management team layers of, 266 organization, 264 onion model of, 71 oversight committee, 436 preventive–detective-corrective cycle, 12 professional, hiring of, 451 programs, floundering, 241 response times, 270, 271 roles, 437, 444 specialist (ISS), 452 Internet Scanner, 335 RealSecure Desktop Protector, 157 standards bodies, generally accepted, 805 Information security, human side of, 239–261 anti-virus and Web-based attacks, 260 business analyst, 252–254 change management, 248–250 accreditation, 250 certification, 249–250 chief information officer, 247 exploiting strengths of personnel in regard to security program, 258–259 IT steering committee, 247–248 job rotation, 259–260 job segregation of duties, 260 librarian, 255–256 need for more policy, 240–243 operator, 256 organization chart, 240 programmer, 254–255 reacting to incidents, 245–247 role of people in information security, 239–240 roles and responsibilities, 244 security director, 243–244



Index security placement, 243 system owner and data owner, 256–257 systems analyst, 251–252 technical standards committee, 250–251 training and awareness, 244–245 user, 257–258 Information systems security officer (ISSO), 405, 488 Information systems security officer, roles and responsibilities of, 405–413 nontechnical role, 412 responsibilities, 406–411 access controls, 407–408 audits and reviews, 408 certification/accreditation, 411 configuration management, 408 contingency planning, 408 copyright, 408 exceptions, 411 incident response, 408–409 personnel security, 409 physical security, 409 policies, standards, guidelines, and rules, 407 reports, 409–410 risk management, 410 security software/hardware, 410 systems acquisition, 411 systems development, 411 testing, 410 training, 410 roles, 406 Information technology (IT), 44, 317, 800, see also IT security evaluation, common criteria for audit, 334 budgets scrutiny of, 263 weakening, 248 continuity planning, 771 crisis management expert, 446 detection of espionage using, 70 environment, separation of duties in, 456 evaluation types, 326–327 group, vulnerability of to outdating plans, 788 partner, view of outsourcing vendor as, 382 privacy concerns of, 44 product, confusion with term, 280 professionals, projects envisioned by, 855 risk assessment, 332 role of in economic espionage, 69 separation of responsibilities in, 441


steering committee, lack of follow-up by, 248 stolen, 70 systems, protection of data contained within, 325 Information technology environment, evaluating security posture of, 325–339 analyzing paired vulnerabilities, 338 elements of risk assessment methodologies, 329 evaluating identified vulnerabilities, 337–338 information technology audit, 334 network technical vulnerability assessment, 334–335 penetration testing, 336–337 platform technical vulnerability assessment, 335 qualitative risk assessment, 333 quantitative risk assessment, 333–334 residual risk analysis, 331 risk assessment methodologies, 331–332 safeguards, 330–331 security life-cycle model, 328–329 technical vulnerability assessment, 334 threats, 330 vulnerability, 330 InfraGard, 83 Infrastructure companies, risk of terrorism faced by, 930 In-house expertise, loss of, 371 Initialization vector (IV), 171 Insecure protocols, suggestions for mitigating risk associated with, 221 Instant messenger services, 550 Insurance coverage, cyber-risk, 363 cyber-risk, 353 as key risk transfer mechanism, 343 need for after completion of risk assessment, 342 Insurer, finding right, 356 Intangible assets, protection of, 405 Integrity check value (ICV), 660, 661 Intellectual property decisions about, 699 protection of, 399, 405 techniques used to steal, 81 theft, 78, 820 violations, 349 Intelligence agencies, Internet as battlefield recognized by, 547


Index Intelligent network (IN), 193 Intel platform, dominance of in hardware, 567 Interexchange carriers (IXCs), 193 Internal auditor, 435 Internal controls abilities provided by, 9 systems, monitoring of, 6 Internal extrusion, secure management of, 157 Internal firewalls, 156 International Chamber of Commerce, 832 International Electrotechnical Commission (IEC), 526 International Information Systems Security Certification Consortium [(ISC)2], 271 International Information Systems Security Certification Consortium Code of Ethics, 272 International Organization for Standards (ISO), 367, 526 International threat, 817 Internet access, broadband, 109 connections, broadband, 155 Control Message Protocol (ICMP), 56, 231 flooding attacks, 205 redirect, 220 DMZ, 359 encryption of data while connected to, 48 environment, typical threats in, 479 Group Membership Protocol (IGMP), 124 hacking, 700 Information Services (IIS), 558 Mail Access Protocol (IMAP4), 658 most popular protocol on, 144 philosophical foundation of, 227 professional liability, 354 Protocol Security (IPSec), 149, 222, 660 relay chat (IRC), 579 resources, 934 role of in economic stability of national infrastructure, 352 security, 786 model addressing risks of, 363 specialist, 406 watchdog, 226 service provider (ISP), 110, 348, 510, 850 audit trails not maintained by, 826 British, 348 problem, DDoS as, 227 VPN management offered by, 449 telephony directory, 200

wireless, 108, 109, 111 worm, first active, 543 Internet Engineering Task Force (IETF), 55, 93, 640 Kompella draft, 124, 125 Martini draft, 124, 125 Real-Time Protocol, 204 specification, 121 INTERPOL, 825 Intrusion(s) anomaly, 693 detection, 397 devices, monitoring of, 378 host-based, 362 outsourced services for, 376 software, 410 misuse, 693 monitoring systems, 909 types of, 692 Intrusion detection system (IDS), 13, 321, 560, 683–698, 842, 860 central security operations responsible for, 447 characteristics of good, 693–694 cyber-attack caught by, 345 defense-in-depth, 686–687 definition of, 684 deployment of at network level, 561 development of, 890 establishment of, 13 flavors of, 881 getting ready, 688–691 host-based, 156 methodology for choosing and implementing, 694–695 network, 216, 350, 560, 711 next generation of, 688 predictive, 687 procurement of, 74 requirements proposal, 687 selection of, 697 sensor locations, 695 steps for protecting systems, 691–692 suspected incidents detected through, 862 suspicion of compromise, 695–696 systems types, 685 technical safeguards settings contained in, 328 types, 608, 692–693 ways to categorize, 683 what to look for, 687–688 INVITE message, 203



Index IP address spoofing, 134 header segment, TCP header segment versus, 131 Packet structure, 131 phones, device authentication of, 207 IPSec, see Internet Protocol Security IR, see Incident response IRC, see Internet relay chat Irish Republican Army, 823 IRM, see Information resources management Iron Mountain, 395 IRT, see Incident response team ISACs, see Information Sharing and Analysis Centers (ISC)2, see International Information Systems Security Certification Consortium ISDN links, dial-on-demand, 110 User Part (ISUP), 201 Islamic extremist organizations, 822 ISO, see International Organization for Standards ISO 15408, 475 ISO 17799, 315, 342, 355, 367, 442, 454, 457 ISO 7498-2, 522 ISP, see Internet service provider ISS, see Information security specialist ISSA, association with, 244 ISSO, see Information systems security officer ISUP, see ISDN User Part IT, see Information technology IT security evaluation, common criteria for, 275–295 CC user community and stakeholders, 289–292 future of CC, 292–293 history, 275–280 major components of methodology, 281–289 purpose and intended use, 280–281 ITU vocoders, 200 IV, see Initialization vector IV&V, see Independent Verification and Validation IXCs, see Interexchange carriers

J Jackson v. Glidden, 797 Japanese Digital Cellular (JDC), 195


Java attack(s) Netscape browser vulnerable to, 112 scripts coded in, 548 -based attacks, 546 -capable Web browser, 735, 736 viewer, VNC support available for, 720 vulnerabilities affecting, 553 JavaScript, vulnerabilities affecting, 553 JDC, see Japanese Digital Cellular Jerusalem virus, 590 Jitter, 201 Job descriptions, example, 432–434 function, librarian as, 255 rotation cross-training provided by employees by, 268 possible fraudulent activity identified by, 259 scheduling, 456 -sequencing mistakes, 256 John Doe civil lawsuit, 892 Joint Photographic Experts Group (JPEG), 621 Joint Task Force–Central Network Operations, 228 JPEG, see Joint Photographic Experts Group

K KDC, see Key distribution center Kerberos, 222, 645 Key distribution center (KDC), 645 Known cover attack, 624 Known message attack, 624 Kompella draft, IETF, 124, 125

L Label Distribution Protocol (LDP), 202 switched path (LSP), 122, 126 switch router (LSR), 122, 202 Labor costs, 239 unrest, 702 LAN, see Local area network LAND attack, 137 Language ASN.1, 95 use of to hide in plain sight, 80


Index Laptop(s) system, way to check, 558 unauthorized network configuration on, 162 use of without being connected to network, 558 Laser Devices, Inc., prosecution of under Economic Espionage Act, 79 Last mile connectivity, 126 Latent viruses, 542 Law enforcement agencies (LEA), 937 Lawrence Livermore National Laboratory, 428 LDAP, see Lightweight Directory Access Protocol LDP, see Label Distribution Protocol LEA, see Law enforcement agencies Leadership, definition of, 781 LeakTest, 159 Least privilege, principle of, 267, 457 Least significant bits (LSB), 621 Lehigh virus, 542, 589 LFSR, see Linear feedback shift register Liability comprehensive general, 353 Internet professional, 354 Lighting emergency, 913 front, 919 Lightweight Directory Access Protocol (LDAP), 120, 203, 713 Limewire, 821 Lindose/Winux, 591 Linear feedback shift register (LFSR), 632 Line managers, 468 Linux, 212 notebooks, need for personal firewall protection, 158 system(s) creation of nonexecutable stack on, 61 IP personality patch for, 57 VNC viewer on, 726 virus protection policies for, 552 vulnerability of, 225 Windows desktop from, 723 worms, 59, 544–545, 597 Lion, 577, 597 Local area network (LAN), 708, see also Wireless LAN application protocol, 120 controls used in design of, 709 hard-coding of ARP entries on, 65 manager authentication, 217

strategic importance of, 903 virtual, 206 Location-based targets, 931 Logic bombs, 565, 570, 582, 590, 824 Log-on passwords, maintaining confidentiality of, 415 Loss of privacy, public outcry regarding, 43 Lossy compression, 621 LoveBug, 355, 825 LoveLetter, 242, 561, 573, 577, 600, 601, 602 LSB, see Least significant bits LSP, see Label switched path LSR, see Label switch router

M MAC, see Media access control Macintosh attack avenues, 545 server, VNC support available for, 720 virus protection policies for, 552 Macro virus, 544, 574, 575, 592, 593 Maginot Life defenses, 855 Mail-borne nuisances, 594 Mail handling checklist, 939 Mailing lists, name ending up on, 47 Mailstorm, 595 Mail-VPN, 121 Mainframe(s) access control, 461 Big Iron, 595 data centers, disaster recovery planning, for 761 platform diagram, 336 Malicious active code, protection from, 553 Malicious code, 447, 541–563 cross-platform, 545 current threats, 542–546 operating system-specific viruses, 544–545 polymorphic viruses, 545 script attacks, 546 Trojan horses, 544 viruses, 542 worms, 543 detection and response, 556–561 current methods for detecting malicious code, 557–559 proactive detection, 560–561 virus and vulnerability notification, 556–557 future threats, 546–550 active content, 550 criminal enterprises, 546



Index cross-platform attacks, 548 government agencies, 547 ideologues, 546–547 intelligent scripts, 548 router viruses or worms, 549 script kiddie threat, 546 self-evolving malicious code, 548–549 terrorist groups, 547 Warhol, 548 wireless viruses, 549–550 methods for detecting, 557 product having evidence of, 85 protection, 550–556 defense-in-depth, 550 education and awareness, 552–553 policy, 551–552 protection from malicious active code, 553–556 research, 556 response and cleanup, 561–562 ways to enter corporate network, 550 Malware and computer viruses, 565–615 boot-sector infectors, 583–589 Brain, 583–585 Stoned, 585–589 change detection, 610 combinations and convergence, 603 computing environment, 567–568 DDoS zombies, 581, 607 Tribe Flood Network, 607 Trin00, 607 detection/protection, 608 e-mail viruses, 592–594 CHRISTMA exec, 592 W95.Hybris, 594 W97M/Melissa, 592–594 file infectors, 589–592 Jerusalem, 590–592 Lehigh, 589–590 glossary, 611–615 history, 568–570 hoaxes, 578–579, 603–605 logic bombs, 582 macro viruses, 599–600 malware types, 570 potential security concerns, 566–567 pranks, 582–583 remote-access Trojans, 581 scanners, 608–610 activity monitors, 609 heuristic scanners, 609–610 script viruses, 600–602 stealth, 576–577 summary opinion, 610–611


Trojans, 579–580, 605–606 viruses, 570–576 examples, 574–575, 583 types, 572–574 virus structure, 575–576 worms, 577, 595–599 Code Red, 598–599 Linux worm, 597–598 Morris worm, 595–597 RATS, 606 Managed security service provider (MSSP), 246 companies, international branches of, 366 market, growth of, 380 services (MSS), 391 response of to new world priorities, 403 skills offered by, 404 Managed services partner, 757 Managed system security providers (MSSPs), 366 Management information base (MIB), 94, 95 Mandatory access control, 18 Man-in-the-middle attack, 738 Mantraps, 909, 911 Market segmentation, type of, 366 MARS, 635 Martini draft, IETF, 124, 125 Maximum tolerable downtime (MTD), 785, 786 Maxus, 818 Mazu Networks, 235 McAfee, 541 e-mail security appliances, 121 Personal Firewall, 156 virus scanning software, 229, 233 MCTs, see Military critical technologies Media Gateway Control Protocol (MGCP), 198 servers, 199 Media access control (MAC), 53 Media access control address(es) assignment of static IP addresses to known, 207 checking, 173 faked, 220 filtering, 53 monitoring tool, 207 wireless card, 55 Melissa virus, 566, 573, 592, 593 MELP, see Mixed excitation linear prediction vocoder Memoranda of agreement (MOA), 501


Index Merger, potential partners for, 325 Merrill Lynch, 951, 953 Message(s) authentication codes, 648, 658 digest, 647, 654 hiding of, see Steganography Metropolitan area networks, 126 MGCP, see Media Gateway Control Protocol MIB, see Management information base Michelangelo virus, 542, 572, 586 Microsoft Family Networking, 113 internal network, attack on, 156 Internet Explorer, 112, 569 Networking, 113 NT File System (NTFS), 749 OS hardening guidelines provided by, 361 Outlook 98, 593 server, VNC support available for, 720 Windows CE, VNC support available for, 720 file permissions, 752 identified flaws in, 548 NT, vulnerability of, 225 OLE, 569 Script Host, 574 VNC server, 733 XP, support for IEEE 802.1x standard under, 174 Word NORMAL.DOT file, 575 Midsourcing, 377 Military critical technologies (MCTs), 76, 77 Military forces, Internet as battlefield recognized by, 547 Military mandatory access control systems, 461 Military organizations, 402 Millennium Bug, 587 Minnich v. Ashland Oil Co., 797 Misdemeanors, 867 Mismanagement claim, 353 Misuse detection, 683 intrusions, 693 Mitigation definition of, 343 high-risk, 809 Mixed excitation linear prediction vocoder (MELP), 198 MLS, see Multilevel security MOA, see Memoranda of agreement Moat, 705

Mobile switching center (MSC), 197 Model business IT security, 747 Capability Maturity, 278 information security, 71 OSI, 130, 142 policy function–responsibility, 307 policy life-cycle, 306–311 risk-elimination, 341 security life-cycle, 328 Modem(s) connection of to enterprise, 187 hacking, 700 Money laundering, 820 Monitoring importance of, 6 process, design of, 757 Monkey, 585 Monoalphabetic substitution ciphers, 629 Moody v. Blanchard Place, 796 Morpheous, 821 Morris worm, 543, 595 Most significant bits (MSB), 621 Motion-detection systems, design of, 909 MP3, 115 MPLS, see Multi-Protocol Label Switching MSB, see Most significant bits MSC, see Mobile switching center MS-DOS viruses, 576 MSS, see Managed security services MSSP, see Managed security service provider MTD, see Maximum tolerable downtime Multilevel security (MLS), 148, 151, 328 Multiline responses, FTP problems with, 25 Multi-processor server, 136, 144 Multi-Protocol Label Switching (MPLS), 119, 121, 127, 198 equipment criteria, sample, 123 Label Distribution Protocol, 202 standards, 202 topologies, 122 VPNs, 123 Music download services, 115

N Name-dropping, 258, 259 NANP, see North American numbering plan Napoleon, wisdom of, 72 Napster, 821 NAT, see Network address translation National Bureau of Standards (NBS), 630 National defense organizations, 402



Index National Information Assurance Certification and Accreditation Process (NIACAP), 486 certification levels identified by, 501 standards, 487 National Infrastructure Protection Center (NIPC), 556, 936, 938 National Institute of Standards and Technology (NIST), 630, 635 cryptographic strength of Rijndael according to, 663 security plan template provided by, 492 SHA deigned by, 647 National Security Threat List (NSTL), 76, 77 NBS, see National Bureau of Standards NCOS, see Network class of service NCR, outsourcing services oriented around, 384 NDAs, see Nondisclosure agreements NeoByte Solution Invisible Secrets 3, 622 Neoteris, 121 Nessus, 335 NetBIOS /IP attacks, 116 protocol, 110, 111 support, 113 traffic, presence of on internal and external networks, 162 Windows-native, 117 Netilla, 121 NetMeeting, 550 Netscape browser, 112 NetStumbler, 52 NetVCR, 709 Network access control devices, 710 service devices, 199 address translation (NAT), 114, 203 firewall, 114 router, 115 administrators, critical incidents detected by, 862 -based system, 686 class of service (NCOS), 180 assignment of to phones, 181 definition, 184 levels, 180 communication, 723 configuration, unauthorized, 162 connection(s) blocking of, 156 rule compliance, 492 controller, failsafe, 386


customer access, 716, 717 design principles, secure, 360 devices, security configuration management of, 448 element, 93 enterprise, construction of VPN within, 715 environments, integration of IDS between server and, 691 external, 706 File Systems (NFS), 749 firewall reconfigurations, 119 hacker, 822 intelligent, 193 interface(s) card (NIC), 54, 211 failures, 94 router with multiple, 101 intrusion detection systems, 350 layer, static packet filter operating at, 132 load requirements, 144 local area, 708 management station, 93 manager, SNMP, 94 Mapper (NMAP), 335 metropolitan area, 126 MPLS-based, 121 name, role of, 168 neighborhood, 110 NetBIOS traffic on, 162 operations center (NOC), 108, 396 packet settings captured from, 56 performance requirements, 201 private, 114 public switched telephone, 191 security, 130 controls, 357, 360 coverage, types of, 354–355 holy grail of, 153 misunderstood terms in, 153 specialist, 406 shared media, 709 stumbling, 52 switched Ethernet, 219 technical vulnerability assessment, 334 third-generation, 196 topology information, 217 transport, packet-based, 201 wide area, 201, 708 wireless, 195, 220 worm, 59, 156 Network Associates PGP Desktop Security, 156 Sniffer Technologies, 211


Index Network Ice, Black Ice Defender, 116 Next plane out timing, 400 NFS, see Network File Systems NIACAP, see National Information Assurance Certification and Accreditation Process NIC, see Network interface card Nicksum Probe, 709 Nimda, 59, 260, 355, 541, 543, 548, 560 NIPC, see National Infrastructure Protection Center NIST, see National Institute of Standards and Technology NMAP, see Network Mapper NOC, see Network operations center No-Int, 585 Nondisclosure agreements (NDAs), 443 Nonpromiscuous sniffing backdoors, 63 Non-repudiation, definition of, 524 North American numbering plan (NANP), 193 Norton Internet Security 2000, 156 Notary service, digital, 650 Notebooks, use of without being connected to network, 558 NSA, see U.S. National Security Agency NTFS, see Microsoft NT File System NTL, see National Security Threat List Nuclear power plant security, 786 Nuisance protection, 427 Number analysis, 184

O Object Linking and Embedding (OLE), 569 reuse, 525 OECD, see Organization for Economic Cooperation and Development OEM, see Original equipment manufacturing Office of Homeland Security, 766 Okena, 235 OLE, see Object Linking and Embedding OOB communications, see Out-of-band communications Open authentication, 169 Open society, American, 929 Open-source software tools, 74 Open Systems Interconnection (OSI), 130 layer, 135, 138, 141 model, 130, 142 Operating system (OS), 55, 548 directories, 755 fingerprinting, 51, 55

active, 56 defending against, 57 passive, 56, 57 flash memory-based, 120 hardened, 150, 151 Inetd daemon, 142, 145 kernel level, 136 network-based, 335 patched, 151, 152 suppliers, OS hardening guidelines provided by, 361 technical safeguards settings contained in, 328 vulnerabilities, 152 Operational security, 935 Operations Security domain, 746 Organization(s) acceptability of SLA to, 399 audit, risks recognized by, 449 award for E-commerce site awarded to, 778 awareness of information protection issues throughout, 423 chart, review of, 240 communication of privacy throughout, 50 conflict of interest, 9, 371 control impeding operations of, 8 decision to outsource, 369 functions involved in policy development task recognized by, 297 healthcare, 799 importance of outsourcing to, 387 information owner, 415 security problem in smaller, 448 program, responsibility for, 310 Islamic extremist, 822 military, 402 national defense, 402 options of after investigation, 894 as potential target, 930 privacy coordinator, 50 security policies affecting entire, 309 Organization for Economic Cooperation and Development (OECD), 279, 829 Original equipment manufacturing (OEM), 599 OS, see Operating system Osama Bin Laden, 618 OSI, see Open Systems Interconnection Out-of-band (OOB) communications, 861 Outgoing connections, limit on, 61

Index Outsourcing, 702, see also Security service provider, working with managed arrangement, phases of, 368 contracts management control for, 373 problems arising with, 370–371 cost/benefit analysis of, 371 customer satisfaction with, 380 decision, justifying, 383 definition of, 365 vendor, view of as IT partner, 372 Outsourcing security, considerations for, 383–404 application development, 388–391 contracting issues, 390–391 control of strategic initiatives, 391 future of outsourced security, 402–404 industries most likely to outsource, 402 measurements of success, 403 response of MSS buyers to new world priorities, 404 response of MSS providers to new world priorities, 403 history, 383–385 contracting issues, 385 control of strategic initiatives, 385 data center operations, 383–385 network operations, 386–388 contracting issues, 387–388 control of strategic initiatives, 388 outsourcing security, 391–402 contracting issues, 400 defining security component to be outsourced, 392–397 establishing qualifications of provider, 399 incident response, 397–399 protecting intellectual property, 399–400 quality service level agreements, 401 retained, 401–402 Ownership, definition of, 463

P PABX, see Private area branch exchange Packet(s) analysis, 213 filter(s) default rule used with, 133 dynamic, 135, 136, 137 rules, configuration of, 133

static, 132, 134 UNIX, 163 flooding DoS attacks, 225 HTTP, 215 ping, 231 sniffing, protocols vulnerable to, 218 Packet sniffers, 91, 211–219 advanced sniffing tools, 219–221 switched Ethernet networks, 219–220 wireless networks, 220–221 detection of, 223 Ethereal, 212–214 how packet sniffers work, 211–212 legitimate uses, 214–216 network-based intrusion detection, 216 performance and network analysis, 216 troubleshooting, 214–215 verifying security configurations, 216 misuse, 216–219 credential sniffing, 217–218 e-mail sniffing, 218–219 network discovery, 217 reducing risk, 221–223 detecting packet sniffers, 223 encryption, 221–222 patches and updates, 222 securing of wiring closets, 222–223 Pagers, effectiveness of for alerting personnel, 246 PalmPilot, 700 Palmtops, 555 Pan/tilt drive, 918 Parallel-track prosecution, 868 PARASCAN, 582 Parasitic computing, 549 Paris Air Show, 82 Passive OS fingerprinting, 56, 57 Passive system, 686 Passport numbers, 44 Password(s) aging, 181 attack, 42, 181 conference call, 182 control, 194 cracking, 595, 596 default, 117 file entry, 36 guessing, 42 hard-coded, 22, 41, 42 log-on, 415 policies, 400 resets, 258, 394

Index schemes, weak, 689 sharing, 696 standards, 480 storage, 734 system administrator, 260 UNIX, 29 VNC server, 722, 729 vulnerability of to brute-force attack, 222 Patched OS, 151, 152 Pattern matching, 396 Pay by product, 391 PBX, see Private branch exchange PC, see Personal computer pcAnywhere, 719 PC Cyborg, 605 PDAs, see Portable digital assistants PDD-63, see Presidential Decision Directive 63 PDU, see Protocol data unit Pearl Harbor, electronic, 816 Peer code reviews, 480 Peer-to-peer networking, 110, 112 Peer-to-peer programs, 550 PE files, see Portable executable files Pen test, definition of, 336 Penetration testing definition of, 685 external, 336 Pennsylvania v. General Public Utilities Corp., 796 People Express Airlines v. Consol. Rail Corp., 796 Perceived threat, 82 Performance metrics, determining, 269 Perimeter access controls, 704 point, 706 establishment, 704 scanning, outsourced services for, 376 Perl, attack scripts coded in, 548 Permission(s) directory, 754 establishing correct, 751 improperly granted, 748 management utilities, 753 Microsoft Windows-based file, 752 read-only, 756 sensitive file, 756 settings, 751 PE router, see Provider edge router Personal antiterrorism checklist, 946–950

Personal computer (PC), 510 cards, 54 virus, first known, 583 Personal firewall, 90, 115, 116, 155, 555 Personal identification number (PIN), 49 Personal secure environment (PSE), 673 Personnel background screening, 495 security, 344, 409 PGP, see Pretty good privacy Physical access control measures, 910–911 access policies, 910 card access controls, 911 keys and cipher locks, 910–911 mantraps and turnstiles, 911 Physical controls, implementation of, 17 Physical security, computing facility, 901–914 computing centers, 902–903 environmental concerns, 903–905 acts of nature, 904–905 community, 904 other external risks, 905 facility, 905–908 facility perils and computer room locations, 907–908 layers of protection, 905–907 protective measures, 908–914 fire controls, 912–913 guard services, 908–909 intrusion monitoring systems, 909–910 physical access control measures, 910–911 utility and telecommunication backup requirements, 913–914 Physical security, threat after September 11, 927–955 business continuity plans, 945–950 controlling sensitive information through operational security, 935–945 antiterrorism procedures for employees, 945 terrorism incident handling procedures, 939–944 deterrence to prevention, 932–933 extensive testing of contingency plans, 951 government agencies, 930–932 lessons learned, 945, 953 organization as potential target, 930 reason America is target, 928–929 reason for concern, 929–930

Index recent test using scenario similar to terrorist attacks, 951–953 evacuation, 951–952 recovery, 952–953 reducing risk of terrorism, 933–935 upgraded plans and procedures after Y2K, 951 work ahead, 953–954 Pilot’s checklist, 843 PIN, see Personal identification number Ping of Death attack, 137, 205 Ping packets, 231 PKI, see Public key infrastructure PKI registration, 665–680 administrative and auto-registration, 675–679 authentication, 679 case study, 677–679 certificate request processing, 673–675 initial registration, 673–674 proof of possession, 675 CP, CPS, and registration process, 665–666 registration, identification, and authentication, 666–672 how person authenticates himself in process of requesting certificate, 669–671 how subject proves organizational entity, 668–669 individual authentication, 671–672 PKIX-CMP messages exchange, 675, 676 PKZIP encryption, 636 Plain old telephone service (POTS), 107 components of, 176 delivery of to subscriber, 175 Plaintext, 627, 631 Platform conversions, 391 PLO virus, 591 PMO, see Program management office Point-to-Point Protocol (PPP), 658 Policy(ies) communication of, 393 compliance, 810 creation function, 300 development, 297, 366, 392 approval, 301 awareness, 302 communication, 301 compliance, 302 creation function in, 300 enforcement, 303 exceptions, 302 maintenance, 303

monitoring, 303 retirement, 303 review, 301 evaluation committee effectiveness of, 309 establishment of, 307 functions applicability of policy, 306 knowledge of environment, 306 limits on authority, 305 separation of duties, 305 span of control, 305 Identifier, 666 life-cycle model, 306 Qualifier, 666 retired, 311 Polyalphabetic substitution cipher (PSC), 629 Polymorphic viruses, 545 POP3, see Post Office Protocol version 3 Portable digital assistants (PDAs), 474 Portable executable (PE) files, 591 Position description, sample, 455 Post-incident review, sample questions, 885 Post Office Protocol version 3 (POP3), 218 PostScript interpreter program, 599 POTS, see Plain old telephone service Power outages, 905 supplies, redundant, 120 PPP, see Point-to-Point Protocol PPs, see Protection Profiles Pranks, 565, 582, 815 Predictive IDS, 687 Presidential Decision Directive 63 (PDD-63), 829 Pressures to process, 338 Pretty good privacy (PGP), 853 PrettyPark, 544 Principle of Least Privilege, 267, 457 Privacy, 43–50 across air space, 197 coordinator, 50 data to be protected, 48–49 good work habits, 49 officer, 435 preserving privacy, 46–48 privacy and control, 43–44 public outcry regarding loss of, 43 recommendations, 49–50 rudiments of privacy, 44–46 derived data, 46 dynamic data, 45–46 static data, 44–45

Index societal rules, 43 violation of, 50 Privacy Act, 376 Private area branch exchange (PABX), 179 Private branch exchange (PBX), 193 conditional toll deny feature of, 184 connection, 177, 180 problems, 185 security threats, 201 system(s) availability of direct inward system access on, 185 control, 194 direct inward dial offered by, 178 threats, 194 Private network, 114 Private virtual circuit (PVC), 123 Procedure, definition of, 298 Process ID, 606 Procurement central, 251 process, 84 Program efficiency, 269 management office (PMO), 811, 812 Programmer, disgruntled, 254 Programming bugs, 565 Project initiation, 785 Promiscuous sniffing backdoors, 63, 64 Proof of possession, 675 Prosecution, parallel-track, 868 Protection Profiles (PPs), 281, 286 Protocol data unit (PDU), 97, 98 messages, 726 Provider edge (PE) router, 125 Province of mankind, outer space as, 836 Provisional Irish Republican Army, 823 Proxy firewalls, 57 PSAPs, see Public safety answering points PSC, see Polyalphabetic substitution cipher PSE, see Personal secure environment PSTN, see Public switched telephone network Public key cryptography, 637, 648, 655 Public key infrastructure (PKI), 618, 644, 646 853, see also PKI registration applications large-scale, 680 Certificate Practice Statement, 665 deployment of, 376 management functions, messages used in implementing, 674

specialist, 406 technical safeguards settings contained in, 328 Public network, 114, 704 Public safety answering points (PSAPs), 193 Public switched telephone network (PSTN), 179, 191 PVC, see Private virtual circuit

Q QA, see Quality assurance QoS, see Quality-of-service Quality assurance (QA), 350, 476 Quality-of-service (QoS), 119, 201 basic voice service expectations, 194 expectations, 387 guarantees, 127 mechanisms, deployment of, 119 policy-based, 203 voice communication, 191

R RA, see Registration Authority RACF, 395, 418, 462 Radio-frequency (RF) distribution, 110 RADIUS, see Remote Access Dial-In User Service RATs, see Remote access Trojans RC6, 635 Reactive system, 686 Read-only permissions, 756 Read-write community, typical, 100 RealSecure Desktop Protector, 159 Real-Time Control Protocol (RTCP), 202 Real-Time Protocol (RTP), 202, 204 Recertification timelines, 496 Record retention requirements, 418 Recovery business process, 763 controls, 13, 860 planning, 399, 761 technical, 763 time objective (RTO), 772 Redundancy, 914 Reference number (RN), 678 Registration Authority (RA), 665 Regulated industries, 521 Remote Access Dial-In User Service (RADIUS), 120 tolerable downtime for, 317 Trojans (RATs), 580 Remote administration tools, 580

Index Remote framebuffer (RFB), 720 RemotelyPossible, 719 Reporters Without Borders, 827 Reputation risk, 346 Request for Comment (RFC), 55, 93, 136, 137 Request for Interpretation (RI), 292 Request for proposal (RFP), 373, 482, 908 financial considerations for, 374 legal issues, 375 Requirement(s) categories, 806 decomposition, 521 matrix, 522, 523, 537 Research in Motion BlackBerry, 700 Residual risk mitigation plan, 338 statement, 501 Resource(s) lack of, 365 Reservation Protocol (RSVP), 119, 124, 202 Restricted algorithm, 628 Return on investment (ROI), 247, 689, 690 Reverse-engineering, worm function determined by, 59 RFB, see Remote framebuffer RFC, see Request for Comment RF distribution, see Radio-frequency distribution RFP, see Request for proposal RI, see Requests for Interpretation Rijndael, 635 Risk analysis, 328, 331 assessment, 5,322, 513 domains of, 344 internal, 327 methodologies, 329, 331 qualitative, 333 quantitative, 333 standards of, 332 -elimination model, 341 financial, 346 management best practices development, 859 cycle, 343 definition of, 421 key points, 856 reputation, 346 residual, 338, 501 terrorism, 933 Rivera v. Goldstein, 793 RN, see Reference number Robber baron nation, view of America as, 928

ROI, see Return on investment Root certificate authority, 641 Router(s) Cisco, 56 customer edge, 125 filter, 706 label-switching, 202 MPLS-enabled, 202 with multiple network interfaces, 101 network address translation, 115 provider edge, 125 TCP/IP, 111 technical safeguards settings contained in, 328 viruses, 549 worms, 549 RSA comparison of to DES, 639 SecureID, 120 step-by-step description of, 638 RSVP, see Resource Reservation Protocol RTCP, see Real-Time Control Protocol RTO, see Recovery time objective RTP, see Real-Time Protocol Rudin’s Law, 777

S Safeguards, definition of, 330 SafeWeb, 828 Salami scam, 582 Sales information, organization’s importance of keeping confidential, 385 Sandboxing, 560 SANS/FBI Top 20 Vulnerabilities, 316 SANS Institute, see System Administration, Networking and Security Institute SATAN, 834 Scanners, 608 SCCP, see Systems Security Certified Practitioner Schroedinger’s Cat, 841, 842 SCPs, see Service control points Script -kiddies, 819 viruses, 600 SCSI drive, 148 SDP, see Session Description Protocol Search warrants, questions surrounding, 864 Secure hash algorithm (SHA), 647 Secure Hash Standard (SHS), 647, 663 Secure Shell (SSH), 742

Index Secure Socket Layer (SSL), 48, 207, 475, 640, 742 Secure telephone unit, first generation (STU1), 195 Securities and Exchange Act of 1934, 347 Security administrator, 435 functions, 268 responsibility of, 394, 395 airport, 768 applications, basic areas of, 482 architecture, establishment of, 702 assurance classes, 287 awareness training, 477, 494, 495 baseline, 809 border, 768 building checklist, 942–944 procedures, 941 classification system, 424 configurations, verifying, 216 controls, verification of, 493 costs of, 422 enterprise IP telephony, 205 explicit, 424 food and water supply, 768 functional classes, 283, 284 functional requirements (SFRs), 283 functions, tactical reasons for outsourcing, 370 homeland, 764, 766 improvement assessment, 697 incident, see Computer security incident, managing response to infrastructure, see Intrusion detection systems Internet, 768 life-cycle model, 328 model, use of to derive requirements, 368 multilevel, technical safeguards settings contained in, 328 nuclear power plant, 768 operation center (SOC), 396 outsourcing customer satisfaction with, 380 domains, 392 personnel, 344, 409 plan statement, sample, 512 poster, famous World War II, 935 program, key ingredients, 901 protection, rating computers for levels of, 74 request for, 482

requirements education, 502 HIPAA, 800, 801, 803 implementation-independent set of, 281 major source for, 520 risks, types of, 345 safeguards, 496 services, 527–531 software installation, 395 Targets (STs), 281, 282, 286 technology, immature, 367 transport layer, 205 travel, 768 Security assessment, 313–324 business processes, 320–321 business strategy, 317–318 inherent risks, 316–317 organizational structure, 318–320 risk assessment, 322–324 standards, 315–316 technology environment, 321 understanding business, 316 Security breaches, reporting of, 841–854 communication, 851–853 classification, 852 confidentiality, 853 identification and authentication, 853 report reasoning, 845–851 audience, 847–850 content and timing, 851 philosophy, 845–847 Schroedinger’s cat, 841–842 security requirements, 843–845 security policy, 843 security technology, 843–845 Security management, 263–274, 809 certifications, 271–274 CISA, 273–274 CISSP, 271–272 GIAC, 272–273 SSCP, 272 executive management and IT security management relationship, 264 information security management team organization, 264–265 job rotation, 268 organization, 495 performance metrics, 268–270 roles and, 265–267 separation of duties and principle of least privilege, 267–268 team justification, 263–264

Index Security policy life cycle, 297–311 policy, 304–306 policy definitions, 298–299 policy functions, 299–304 approval, 301 awareness, 302–303 communication, 301–302 compliance, 302 creation, 300–301 enforcement, 303 exceptions, 302 maintenance, 303 monitoring, 303 retirement, 303–304 review, 301 policy life-cycle model, 306–311 Security service provider, working with managed, 365–381 industry perspective, 366–367 outsourcing from corporate perspective, 367–368 outsourcing defined, 365–366 phases of outsourcing arrangement, 368–380 identifying need, 369–376 managing arrangement, 378–379 selecting provider, 376–378 transitioning out, 379–380 Segregation of duties controls, 10 SEI, see Software Engineering Institute Self-incrimination, 833 Separation of duties, 267, 401, 436 September 11 (2001), 352, 618, see also Physical security, threat after September 11 attacks, prominence of CISO position since, 318 contribution of to changing face of continuity planning, 773 dangers demonstrated in, 901 lessons learned, 762, 900, 932, 945 pace of change for continuity planning profession following, 761 security operations employed since, 395 security world after, 403 seriousness of terrorist threat since, 547 Sequence controls, 16 Server anti-virus solutions, 717 backup systems for, 561 detection, 558 DNS, 63 e-mail, 134, 552 enterprise, 448

FTP access, 22, 23 creation of UNIX account on, 41 default action for, 40 preventing anonymous access to, 30 restricted command on, 35 timeouts for, 31 Macintosh, VNC support available for, 720 media, 199 Microsoft, VNC support available for, 720 multi-processor, 136, 144 publicly available, 65 reliability, telephony-grade, 200 security configuration management of, 448 security features considered when configuring, 33 UNIX, VNC support available for, 720 VNC finding, 740 Microsoft Windows, 733 multiple, 725 password, 729 UNIX, 732 voice, management techniques, 207 Web defacement, 891 Microsoft IIS, 60 vulnerability of, 225 wu-ftpd, 24 ServerInitialization message, 728 Service control points (SCPs), 192 level agreement (SLA), 231, 320, 373, 379 acceptability of to organization, 399 metrics, 378 operation requirements specified in, 442 quality of, 401 set identifier (SSID), 168 switch points (SSPs), 192 Services set identifier (SSID), 53 Session Description Protocol (SDP), 204 Initiation Protocol (SIP), 198 SFRs, see Security functional requirements SHA, see Secure hash algorithm Shadow Security Scanner, 335 Shared directories, 755 Shared media networks, 709 Shoot the messenger reaction, 779 Short message system (SMS), 550, 571 Shoulder surfing, 704 SHS, see Secure Hash Standard

Index Signaling System 7 (SS7), 192, 193 Signal transfer points (STPs), 192 Signature scanning, 557 Simple firewall, 115 Simple Network Management Protocol (SNMP), 90, 93–106 administrative relationships, 96 agent, 93 definition, 93–95 host filtering, 104 management information base, 95 management philosophy, 95 network manager, 94 operations, 96 protocol data unit, 98–100 requests, 94, 97–98 security issues, 101–105 ability to change SNMP configuration, 102–103 community strings, 102 denial-of-service attacks, 103 impact of CERT CA-2002–03, 103–105 multiple management stations, 103 transmission process, 98 traps, 100–101 Single sign-on (SSO), 811 SIP, see Session Initiation Protocol SirCam virus, 566 SLA, see Service level agreement Smart cards, 49, 222 Smart phones, 555 Smartsourcing, 377 SMC Network Wireless LAN Configuration, 170, 171 Sme.g.,pathogen virus, 542 SMP, see Symmetric multi-processing SMS, see Short message system SMTP traffic, scanning of incoming, 359 Sniffer, invention of, 214 Sniffer Technologies Wireless Sniffer, 168 Sniffing backdoor attacks, defending against, 64 credential, 217 e-mail, 218 SNMP, see Simple Network Management Protocol Snort, 335 SOC, see Security operation center Social engineering attack, 186 odd kind of, 578 types of, 258 use of by pen testers, 337

Software anti-virus, 349, 359, 410, 447 desktop, 558 vendors, 556 -based firewalls, 156 change detection, 610 companies, foreign, 74 customer relationship management, 389 encryption, 643 fat-client, 119 illegal duplication of copyrighted, 408 intrusion detection, 410 inventory, 861 malicious, 570, 818 open-source, 74 programs, weaknesses in, 552 security, 395 Trojan code in, 73 update integrity verification, 659 upgrades, 391 virus scanning, 229, 233 Software Engineering Institute (SEI), 278, 852 Solaris machine, execution of code stopped on, 61–62 Source routing, 231 Spam, 717 Spoofing, 704 Spying by driving around, 68 Spyware additional layer of protection against, 160 outbound connections initiated by, 159 SS7, see Signaling System 7 SSH, see Secure Shell SSID, see Service set identifier SSL, see Secure Socket Layer SSO, see Single sign-on SSPs, see Service switch points Standard, definition of, 298 State awareness, 135 Stateful inspection, 142 Static data, 44, 45 Static packet filter, 132, 134 Stealth, 576, 583 Steganography, 619–626 defeat of, 624–625 example, 622 hiding of data, 620 image files, 621 uses, 622–624 Stego-only attack, 624 Stego signatures, 625 Step-up certificates, 641 Stoned virus, 572, 585 Storage, cheap, 467

Index STPs, see Signal transfer points Strategic planning, 242 Stream ciphers, 632 Strong application proxies, 140 STs, see Security Targets STU-1, see Secure telephone unit, first generation Stupid Mac Tricks, 582 SubSeven backdoor for Windows, 217 Substitution box, 633 Suicidal terrorist, 932 Sun, OS hardening guidelines provided by, 361 Sunguard, 395 Sun Solaris, 212 Super certificates, 641 Switched Ethernet networks, 219 Switch jamming, 220 Sygate Personal Firewall Pro, 157 Symantec Desktop Firewall, 157 Raptor, 156 URL, 112 virus scanning software, 229, 233 Symmetric cryptography, 659, 643 algorithms, types of, 632 DES, 728 Symmetric multi-processing (SMP), 130 SYN scanning, 597 Syntax knowledge, 390 Systems(s) accreditation process, 509 acquisition, 411 administrator password, 260 analyst, 251 authentication, 714 development, 411 documentation, 14 failures, 94 file modification, 662 fire extinguishing, 912 hardening, 362 host-based, 686 infector, 573 intrusion monitoring, 909 life cycle, incorporation of security into, 495 Linux, VNC viewer on, 726 logs, recording information in, 34 monitoring, 15 motion-detection, 909 network-based, 686 owner, 256 passive, 686

programming, 456 reactive, 686 stack, nonexecutable, worms and, 61 System Administration, Networking and Security (SANS) Institute, 235, 244, 272, 556, 805 Systems Security Certified Practitioner (SSCP), 272

T Tape library, 456 Tapeworm program, 577 Target of Evaluation (TOE), 281 conformance evaluation, 286 types of, 282 Target server, FTP access to, 22 TCB, see Trusted computing base TCI, see Transactions, code sets, and identifiers TCO, see Total cost of ownership TCP header segment, IP header segment versus, 131 sessions, validation of, 138 SYN flooding, 205 Wrapper, 743 TCP/IP implementations, 16 protocol suite, replacement, 660 RFCs defining, 56 router, 111 TCSEC, see Trusted Computer System Evaluation Criteria TDMA, see Time division multiplexing access Tear Drop attack, 137 Technical recovery, evolution of to business process recovery, 763 Technical standards committee, 250, 251 Technical vulnerability assessment tools, automated, 335 Technology, learning of through trial and error, 893 Telecommunications fraud, 700 Telephone handset, 176 services, novel types of, 191 Templated process, 750 Tequila virus, 542 Terminate and Stay Resident (TSR), 590 Terrorism definition of, 929 event drills, 941

Index incident procedures, 939 reducing risk of, 933 Terrorist(s) categories of, 547 groups, 547 information gathered about, 73 suicidal, 932 targets, potential, 931 Test Director, 517, 518 Testing areas to be considered for, 504 importance of, 772 Test scenario(s) building of, 531, 533 sample, 534 Test Scripts, 534, 535 TFN, see Tribal flood network TGS, see Ticket granting service TGT, see Ticket granting ticket Thawte, 641 Theft credit card, 820 intellectual property, 820 Threat(s), 505 assessments, 500 bomb, 940 definition of, 330 earthquake, 904 enterprise resource, 349 growing, 823 hybrid, 541 information, 499 international, 817 Internet environment, 479 matrix, 351 organizations distinguishing between internal and external, 379 PBX, 194, 201 perceived, 82 realized, 505 terrorist, 547 trusting law enforcement to provide information on, 541 Through-dialing disabling of, 182 restriction of, 181 Ticket granting service (TGS), 645 Ticket granting ticket (TGT), 645, 646 Timbuktu, 719 Time bombs, 883 Time division multiplexing access (TDMA), 195 Time-to-live (TTL), 57 Timeout directives, 32

TLS, see Transport Layer Security TOEs, see Targets of Evaluation Token cards, 222 Token replacement, 394 Toll fraud, 175, 181, 183, 185 TooLeaky, 159 Top Secret, 395, 418 ToS, see Type-of-service Total cost of ownership (TCO), 250 Trademark infringement, 346 Trade secrets, 81, 263 Traffic normalizer, 694 Transactions, code sets, and identifiers (TCI), 800 Transmission controls, 15 media, 919 Transport Layer Security (TLS), 120, 205 Travel security, 786 Trial and error, learning technology through, 893 Tribal flood network (TFN), 607 Trigger events, tape archiving of, 924 Trinoo, 230, 607 TripWire, 844 Trojan horse(s), 70, 544, 565, 883 additional layer of protection against, 160 attacks, 205 configuring host-based firewall to block, 160 connection of on TCP port, 156 extortion scam, 605 interference with operation of, 164 programs, 579 Trusted Computer System Evaluation Criteria (TCSEC), 278 Trusted computing base (TCB), 497 Trusted Systems Services, Inc. (TSSI), 751 TSR, see Terminate and Stay Resident TSSI, see Trusted Systems Services, Inc., 751 TTL, see Time-to-live Tunneling, 576 TWM Window Manager, 723, 724 Two-factor authentication, adoption of, 687 Twofish, 635 Type-of-service (ToS), 202

U UDP, see User Datagram Protocol Uninterruptible power supply (UPS), 913 United States Fidelity & Guar. Co. v. Plovidba, 795

Index United States Liab. Ins. Co. v. Haidinger-Hayes, Inc., 792 Universal Wireless Communications (UWC), 195 UNIX, 544 account, creation of on FTP server, 41 attacks, goal of, 545 -based system, retrofitting of with Windows-based system, 533 environment, file permissions in, 32 host-based firewalls for, 163 log-in shell, 21, 37 packet filters, 163 password, 29 program, credential sniffing, 217 running VNC server under, 732 server, VNC support available for, 720 systems, password file sensitivity on, 28 virus protection policies for, 552 war-driving scripts, 52 UPS, see Uninterruptible power supply U.S. Department of Energy Computer Incident Advisory Capability (CIAC), 877 Use policies, downloading files in violation of, 870 User access, removal of, 713 account data, 712 Datagram Protocol (UDP), 138, 231 fragment flooding, 205 packet, 96 errors, incidence of, 257 IDs, creation of, 419 manager, 465 security rules, 493 U.S. Government contract, 452 U.S. National Security Agency (NSA), 195, 547, 630 UWC, see Universal Wireless Communications

V Vandalism, 820 VBA, see Visual Basic for Applications VBS, see Visual Basic Script Vendor(s) anti-virus, 541 COTS, 449 e-mail infrastructure, 121 notification, characteristic of, 848 outsourcing, 372 Verisign, 641

Video monitors, 922 streaming, 124 surveillance, see Closed-circuit television and video surveillance Videocassette recorders, time-lapse, 923 Videoconferencing, 124 Videotape machine, 916 Virtual computing, reality of, 719–743 access control, 726–732 how it works, 721–723 logging, 736–737 network communication, 723–725 running VNC server under UNIX, 732–733 VNC display names, 732–733 VNC as service, 733 Virtual Network Computing, 719–721 VNC and Microsoft Windows, 733–735 VNC and Web, 735 weaknesses in VNC authentication system, 737–742 cryptographic attacks, 740 finding VNC servers, 740–741 improving security through encapsulation, 741–742 man-in-the-middle attack, 738–740 random challenge, 738 Virtual LANs (VLANs), 206 Virtual leased lines (VLLs), 125 Virtual Network Computing (VNC), 682, 719 authentication challenge–response, 727 system, weaknesses in, 737 components, 720 server(s) finding, 740 Microsoft Windows, 733 multiple, 725 password, 722, 729 UNIX, 732 Virtual private network (VPN), 54, 119 carrier-class, 127 construction of within enterprise network, 715 encryption channel, 669 IPSec, 149 gateways, 668 incompatible, 251 IPSec, 222 management, 449 MPLS-, 123 new breed of, 124 outsourced services for, 376

Index protocols, 659 secure, 208 sending data over, 48 technologies, customer access network with, 716, 717 Web browser-based, 127 Virtual private networks, new perspectives on, 119–128 applications, 121 layer 2 MPLS-VPN, 124–126 layer 3 MPLS-VPN, 126–127 MPLS-based VPNs, 121–124 Web-based IP VPN, 120 Virus(es), see also Malware and computer viruses analysis of using signatures, 625 Apple II, 568 Brain, 542, 572, 576, 583 Chernobyl, 542 Dark Avenger, 542 definition of, 542 detection, common technique for, 557 e-mail, 573, 561, 592, 608 evolution of, 568 examples, 583 eXchange, 604 Explore.zip, 542 handling, 559 HyperCard, 599 Jerusalem, 590 latent, 542 Lehigh, 542, 589 logic bomb carried by, 571 LoveBug, 355 LoveLetter, 242, 577 macro, 544, 557, 574, 575, 592, 593 Magistr, 542 Melissa, 566, 592, 593 Michelangelo, 542, 572 MS-DOS, 576 payloads, 566 PLO, 591 polymorphic, 545 protection, gateway, 555 router, 549 scanning software, 229 script, 600 signature update, 662 SirCam, 566 Sme.g.,pathogen, 542 Stoned, 572, 585 structure, 575 successful, 571 Tequila, 542

types of, 572 Virus Creation Laboratory, 542 wireless, 549 Wm.concept, 542 WM/Concept, 599 writers, 554 Virus Creation Laboratory virus, 542 Visual Basic for Applications (VBA), 599 Visual Basic Script (VBS), 544 VLANs, see Virtual LANs VLLs, see Virtual leased lines VNC, see Virtual Network Computing Vocoder(s), 198, 199 ITU, 200 mixed excitation linear prediction, 198 VOI, see Voice-over-the-Internet Voice conferencing, 182 packet routing, 199 server management techniques, 207 telephone traffic, 124 Voice communications, secure, 191–210 circuit-based PSTN voice network, 191–195 glossary, 209–210 network convergence, 198–208 architecture, 198–200 enterprise IP telephony security, 205–207 numbering, 200–201 quality-of-service, 201–203 VOI security, 203–205 wireless convergence, 207–208 wireless voice communication networks, 195–198 Voicemail, 181 Voice-over-the-Internet (VOI), 199 Voice-over-IP (VoIP), 91, 199, 206 basis for, 188 designs, attack mitigation in, 205 technology, problems in, 189 trunking gateways, 199 Voice security, 175–190 direct inward system access, 185–186 other voice services, 187–188 cellular and wireless access, 188 fax, 187–188 modems, 187 plain old telephone service, 175–179 analog versus digital, 177–178 connecting things, 176–177 direct inward dial, 178–179 direct outward dial, 179 private area branch exchange, 179–181

Index public switched telephone network, 179 security issues, 183–185 calling cards, 185 inappropriate use of authorized access, 184 toll fraud, 183–184 social engineering, 186–187 voice conferencing, 182–183 voicemail, 181–182 voice-over-IP, 188–189 VoIP, see Voice-over-IP VPN, see Virtual private network Vulnerability(ies) analysis, definition of, 685 assessments, 500, 513 definition of, 330 identification of, 858 management, 447 ranking of, 338 scanner, definition of 685

W Walk-throughs, 480 WAN, see Wide area network Wang, outsourcing services oriented around, 384 WAP gateways, 668 War biking, 52 Ward v. Hobart Mfg. Co., 795 War dialing, 337, 700 War driving, 51, 52, 221 War hang gliding, 52 Warhol, 58, 548 War on Terrorism, 930 War walking, 52 Water hazards, 907 Watermarking, 619 WCDMA, see Wideband CDMA W32.DIDer, 544 Web (World Wide Web, WWW) -accessible systems, Internet DMZ created for, 359 application code, hole in, 481 attacks, proliferation of, 260 browser, Java-capable, 735, 736 content liability, 354 e-mail, 121, 552 hacking, 818 mail security, 121 page defacement, 662 file modification, 659 perversion, 481

server(s) defacement, 891 with enabled SSL, 668 Microsoft IIS, 60 vulnerability of, 225 sites, government, 954 VNC and, 735 WEP, see Wired Equivalent Privacy Whale Communications, 121 White House, 226 Wide area network (WAN), 201, 708 Wideband CDMA (WCDMA), 196 WildPackets Airopeek, 168 Windows, see also Microsoft peer-to-peer networking feature of, 110 Script Host (WSH), 601 user community, safe computing practices for, 551 worms focused on, 59 WinVNC Windows properties dialog, 734 registry values, 731 Wired Equivalent Privacy (WEP), 53, 170 encryption, 171, 172 keys broken, 54, 55 dynamic, 173 settings, 170 Wireless access points, antenna for discovering, 52 Wireless Internet, 108, 109, 111 Wireless LAN attacks, 51, 53 policies for use of, 54 standards, IEEE, 90 unsecure, 52 Wireless LAN security vulnerabilities, 167–174 IEEE 802.1X standard, 174 MAC address checking, 173 security, 167–173 authentication, 169 encryption, 169–173 network name, 168 Wireless networks, 220 Wireless transmission, 921 Wireless viruses, 549 Wm.concept, 542 WM/Concept, 599 WordBasic, 599 Workplace violence, 905 World Trade Center, 927, 931 World Wide Web (WWW), 827, see also Web

Index Worm(s), 51 advances, 58 Code Red, 58, 60, 260, 541, 543, 550, 577 defenses, 60, 359 definition of, 58, 543 detection, 560 disguised, 59 e-mail, 543 Hybris, 594 Internet/Morris/UNIX, 597 Linux, 544–545, 597 Love Bug, 825 morphing, 59 Morris, 595 multi-platform, 59 program, technical origin of term, 577 rapidly spreading, 58 router, 549 Warhol, 548 zero-day exploit, 60 WSH, see Windows Script Host WTC-based companies, 767 wu-ftpd, 22, 24 WWW, 827, see also Web

X Xprobe tool, 56

Y Yahoo! attacks on, 226 chat rooms, 827 DDoS attack launched against, 346 Yes men, 243 Y2K maintenance programmers and, 389 upgrades, 391, 951

Z Zero-day exploit worms, 60 Zombie(s), 226, 570 computers, lax security of, 791 defendant, 798 discovery of, 229 ZoneAlarm, 116, 157, 229, 577
