Information Security Management Handbook, Fourth Edition, Volume 4


Information Security Management Handbook, 4th Edition, Volume 4

OTHER AUERBACH PUBLICATIONS

The ABCs of IP Addressing Gilbert Held ISBN: 0-8493-1144-6

The ABCs of TCP/IP Gilbert Held ISBN: 0-8493-1463-1

Information Security Management Handbook, 4th Edition, Volume 4 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-1518-2

Building an Information Security Awareness Program Mark B. Desman ISBN: 0-8493-0116-5

Information Security Policies, Procedures, and Standards: Guidelines for Effective Information Security Management Thomas R. Peltier ISBN: 0-8493-1137-3

Building a Wireless Office Gilbert Held ISBN: 0-8493-1271-X

Information Security Risk Analysis Thomas R. Peltier ISBN: 0-8493-0880-1

The Complete Book of Middleware Judith Myerson ISBN: 0-8493-1272-8

A Practical Guide to Security Engineering and Information Assurance Debra Herrmann ISBN: 0-8493-1163-2

Computer Telephony Integration, 2nd Edition William A. Yarberry, Jr. ISBN: 0-8493-1438-0

Cyber Crime Investigator’s Field Guide Bruce Middleton ISBN: 0-8493-1192-6

Cyber Forensics: A Field Manual for Collecting, Examining, and Preserving Evidence of Computer Crimes Albert J. Marcella and Robert S. Greenfield, Editors ISBN: 0-8493-0955-7

Global Information Warfare: How Businesses, Governments, and Others Achieve Objectives and Attain Competitive Advantages Andy Jones, Gerald L. Kovacich, and Perry G. Luzwick ISBN: 0-8493-1114-4

Information Security Architecture Jan Killmeyer Tudor ISBN: 0-8493-9988-2

Information Security Management Handbook, 4th Edition, Volume 1 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-9829-0

The Privacy Papers: Managing Technology and Consumers, Employee, and Legislative Action Rebecca Herold ISBN: 0-8493-1248-5

Secure Internet Practices: Best Practices for Securing Systems in the Internet and e-Business Age Patrick McBride, Jody Patilla, Craig Robinson, Peter Thermos, and Edward P. Moser ISBN: 0-8493-1239-6

Securing and Controlling Cisco Routers Peter T. Davis ISBN: 0-8493-1290-6

Securing E-Business Applications and Communications Jonathan S. Held and John R. Bowers ISBN: 0-8493-0963-8

Securing Windows NT/2000: From Policies to Firewalls Michael A. Simonyi ISBN: 0-8493-1261-2

Six Sigma Software Development Christine B. Tayntor ISBN: 0-8493-1193-4

Information Security Management Handbook, 4th Edition, Volume 2 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-0800-3

A Technical Guide to IPSec Virtual Private Networks James S. Tiller ISBN: 0-8493-0876-3

Information Security Management Handbook, 4th Edition, Volume 3 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-1127-6

Telecommunications Cost Management Brian DiMarsico, Thomas Phelps IV, and William A. Yarberry, Jr. ISBN: 0-8493-1101-2

AUERBACH PUBLICATIONS www.auerbach-publications.com To Order Call: 1-800-272-7737 • Fax: 1-800-374-3401 E-mail: [email protected]

Information Security Management Handbook, 4th Edition, Volume 4

Harold F. Tipton Micki Krause EDITORS

AUERBACH PUBLICATIONS A CRC Press Company Boca Raton London New York Washington, D.C.

AU1518 FMFrame.backup Page iv Friday, November 15, 2002 2:07 PM

Chapter 21, “Security Assessment,” © 2003. INTEGRITY. All rights reserved.
Chapter 23, “How to Work with a Managed Security Service Provider,” © 2003. Laurie Hill McQuillan. All rights reserved.
Chapter 44, “Liability for Lax Computer Security in DDoS Attacks,” © 2003. Dorsey Morrow. All rights reserved.

Library of Congress Cataloging-in-Publication Data

Information security management handbook / Harold F. Tipton, Micki Krause, editors.—4th ed.
p. cm.
Revised edition of: Handbook of information security management 1999.
Includes bibliographical references and index.
ISBN 0-8493-1518-2 (alk. paper)
1. Computer security — Management — Handbooks, manuals, etc. 2. Data protection — Handbooks, manuals, etc. I. Tipton, Harold F. II. Krause, Micki. III. Title: Handbook of information security management 1999.
QA76.9.A25H36 1999a
658′.0558—dc21
99-42823
CIP

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the authors and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.

All rights reserved. Authorization to photocopy items for internal or personal use, or the personal or internal use of specific clients, may be granted by CRC Press LLC, provided that $1.50 per page photocopied is paid directly to Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA. The fee code for users of the Transactional Reporting Service is ISBN 0-8493-1518-2/02/$0.00+$1.50. The fee is subject to change without notice. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

Visit the Auerbach Publications Web site at www.auerbach-publications.com © 2003 by CRC Press LLC Auerbach is an imprint of CRC Press LLC No claim to original U.S. Government works International Standard Book Number 0-8493-1518-2 Library of Congress Card Number 99-42823 Printed in the United States of America 1 2 3 4 5 6 7 8 9 0 Printed on acid-free paper


Contributors

THOMAS AKIN, CISSP, Founding Director, Southeast Cybercrime Institute, Marietta, Georgia
ALLEN BRUSEWITZ, CISSP, CBCP, Consultant, Huntington Beach, California
CARL BURNEY, CISSP, IBM, Internet Security Analyst, Salt Lake City, Utah
KEN BUSZTA, CISSP, Consultant, Lakewood, Ohio
MICHAEL J. CORBY, President, QinetiQ Trusted Information Management, Inc., Worcester, Massachusetts
KEVIN J. DAVIDSON, CISSP, Senior Staff Systems Engineer, Lockheed Martin Mission Systems, Gaithersburg, Maryland
DAVID DECKTER, CISSP, Manager, Enterprise Risk Services, Deloitte & Touche LLP, Chicago, Illinois
MARK EDMEAD, CISSP, SSCP, TICSA, President, MTE Software, Inc., Escondido, California
JEFFREY H. FENTON, CBCP, CISSP, Senior Staff Computer System Security Analyst, Corporate Information Security Office, Lockheed Martin Corporation, Sunnyvale, California
ED GABRYS, CISSP, Information Security Manager, People’s Bank, Bridgeport, Connecticut
BRIAN GEFFERT, CISSP, CISA, Senior Manager, Security Services Practice, Deloitte & Touche LLP, San Francisco, California
ALEX GOLOD, CISSP, Infrastructure Specialist, EDS, Troy, Michigan
CHRIS HARE, CISSP, CISA, Information Security and Control Consultant, Nortel Networks, Dallas, Texas
GILBERT HELD, Director, 4-Degree Consulting, Macon, Georgia
KEVIN HENRY, CISA, CISSP, Information Systems Auditor, Oregon Judicial Department, Salem, Oregon
PAUL A. HENRY, CISSP, Vice President, CyberGuard Corporation, Fort Lauderdale, Florida
REBECCA HEROLD, CISSP, CISA, FLMI, Senior Security Consultant, QinetiQ Trusted Information Management, Van Meter, Iowa
DEBRA S. HERRMANN, Manager of Security Engineering, FAA Telecommunications Infrastructure, ITT Advanced Engineering Sciences, Washington, D.C.


RALPH HOEFELMEYER, CISSP, Senior Engineer, WorldCom, Colorado Springs, Colorado
PATRICK D. HOWARD, Senior Information Security Architect, QinetiQ Trusted Information Management, Worcester, Massachusetts
JAVED IKBAL, CISSP, Director, IT Security, Major Financial Services Company, Reading, Massachusetts
CARL B. JACKSON, CISSP, Vice President, Continuity Planning, QinetiQ Trusted Information Management, Houston, Texas
SUDHANSHU KAIRAB, CISSP, CISA, Information Security Consultant, East Brunswick, New Jersey
WALTER S. KOBUS, Jr., CISSP, Vice President, Security Consulting Services, Total Enterprise Solutions, Raleigh, North Carolina
MOLLIE E. KREHNKE, CISSP, Principal Information Security Analyst, Northrop Grumman, Raleigh, North Carolina
DAVID C. KREHNKE, CISSP, Principal Information Security Analyst, Northrop Grumman, Raleigh, North Carolina
DAVID LITZAU, Teacher, San Diego, California
JEFFREY LOWDER, CISSP, GSEC, Independent Information Security Consultant, Paoli, Pennsylvania
DAVID MACLEOD, Ph.D., CISSP, Chief Information Security Officer, The Regence Group, Portland, Oregon
LAURIE HILL MCQUILLAN, CISSP, Vice President, KeyCrest Enterprises, Manassas, Virginia
DORSEY MORROW, CISSP, JD, Operations Manager and General Counsel, International Information Systems Security Certification Consortium, Inc. [(ISC)2], Framingham, Massachusetts
WILLIAM HUGH MURRAY, CISSP, Executive Consultant, IS Security, Deloitte & Touche, New Canaan, Connecticut
DR. K. NARAYANASWAMY, Chief Technology Officer, Cs3, Incorporated, Los Angeles, California
KEITH PASLEY, CISSP, CNE, Senior Security Technologist, Ciphertrust, Atlanta, Georgia
THERESA E. PHILLIPS, CISSP, Senior Engineer, WorldCom, Colorado Springs, Colorado
STEVE A. RODGERS, CISSP, Co-founder, Security Professional Services, Leawood, Kansas
TY R. SAGALOW, Executive Vice President and Chief Operating Officer, eBusiness Risk Solutions, American International Group (AIG), New York, New York
CRAIG A. SCHILLER, CISSP, Information Security Consultant, Hawkeye Security, Wichita, Kansas
BRIAN R. SCHULTZ, CISSP, CISA, Chairman of the Board, INTEGRITY, Centreville, Virginia
PAUL SERRITELLA, Security Architect, American International Group (AIG), New York, New York


KEN SHAURETTE, CISSP, CISA, Information Systems Security Staff Advisor, American Family Institute, Madison, Wisconsin
CAROL A. SIEGEL, CISSP, Chief Security Officer, American International Group (AIG), New York, New York
VALENE SKERPAC, CISSP, President, iBiometrics, Inc., Millwood, New York
EDWARD SKOUDIS, Vice President, Security Strategy, Predictive Systems, New York, New York
ROBERT SLADE, CISSP, Security Consultant and Educator, Vancouver, British Columbia, Canada
ALAN B. STERNECKERT, CISA, CISSP, CFE, COCI, Owner and General Manager, Risk Management Associates, Salt Lake City, Utah
JAMES S. TILLER, CISSP, Global Portfolio and Practice Manager, International Network Services, Tampa, Florida
JAMES TRULOVE, Network Engineer, Austin, Texas
MICHAEL VANGELOS, Information Security Officer, Federal Reserve Bank of Cleveland, Cleveland, Ohio
JAYMES WILLIAMS, CISSP, Security Analyst, PG&E National Energy Group, Portland, Oregon
JAMES M. WOLFE, MSM, Senior Virus Researcher, Enterprise Virus Management Group, Lockheed Martin Corporation, Orlando, Florida


Contents

DOMAIN 1 ACCESS CONTROL SYSTEMS AND METHODOLOGY . . . 1

Section 1.1 Access Control Techniques
Chapter 1 It Is All about Control . . . 3
Chris Hare
Chapter 2 Controlling FTP: Providing Secured Data Transfers . . . 21
Chris Hare

Section 1.2 Access Control Administration
Chapter 3 The Case for Privacy . . . 43
Michael J. Corby

Section 1.3 Methods of Attack
Chapter 4 Breaking News: The Latest Hacker Attacks and Defenses . . . 51
Edward Skoudis
Chapter 5 Counter-Economic Espionage . . . 67
Craig A. Schiller

DOMAIN 2 TELECOMMUNICATIONS AND NETWORK SECURITY . . . 89

Section 2.1 Communications and Network Security
Chapter 6 What’s Not So Simple about SNMP? . . . 93
Chris Hare

Section 2.2 Internet, Intranet, and Extranet Security
Chapter 7 Security for Broadband Internet Access Users . . . 107
James Trulove
Chapter 8 New Perspectives on VPNs . . . 119
Keith Pasley
Chapter 9 An Examination of Firewall Architectures . . . 129
Paul A. Henry


Chapter 10 Deploying Host-Based Firewalls across the Enterprise: A Case Study . . . 155
Jeffrey Lowder
Chapter 11 Overcoming Wireless LAN Security Vulnerabilities . . . 167
Gilbert Held

Section 2.3 Secure Voice Communication
Chapter 12 Voice Security . . . 175
Chris Hare
Chapter 13 Secure Voice Communications (VoI) . . . 191
Valene Skerpac

Section 2.4 Network Attacks and Countermeasures
Chapter 14 Packet Sniffers: Use and Misuse . . . 211
Steve A. Rodgers
Chapter 15 ISPs and Denial-of-Service Attacks . . . 225
Dr. K. Narayanaswamy

DOMAIN 3 SECURITY MANAGEMENT PRACTICES . . . 237

Section 3.1 Security Management Concepts and Principles
Chapter 16 The Human Side of Information Security . . . 239
Kevin Henry
Chapter 17 Security Management . . . 263
Ken Buszta

Section 3.2 Policies, Standards, Procedures, and Guidelines
Chapter 18 The Common Criteria for IT Security Evaluation . . . 275
Debra S. Herrmann
Chapter 19 The Security Policy Life Cycle: Functions and Responsibilities . . . 297
Patrick Howard

Section 3.3 Risk Management
Chapter 20 Security Assessment . . . 313
Sudhanshu Kairab
Chapter 21 Evaluating the Security Posture of an Information Technology Environment: The Challenges of Balancing Risk, Cost, and Frequency of Evaluating Safeguards . . . 325
Brian R. Schultz


Chapter 22 Cyber-Risk Management: Technical and Insurance Controls for Enterprise-Level Security . . . 341
Carol A. Siegel, Ty R. Sagalow, and Paul Serritella

Section 3.4 Security Management Planning
Chapter 23 How to Work with a Managed Security Service Provider . . . 365
Laurie Hill McQuillan
Chapter 24 Considerations for Outsourcing Security . . . 383
Michael J. Corby

Section 3.5 Employment Policies and Practices
Chapter 25 Roles and Responsibilities of the Information Systems Security Officer . . . 405
Carl Burney
Chapter 26 Information Protection: Organization, Roles and Separation of Duties . . . 415
Rebecca Herold
Chapter 27 Organizing for Success: Human Resources Issues in Information Security . . . 441
Jeffrey H. Fenton and James M. Wolfe
Chapter 28 Ownership and Custody of Data . . . 461
William Hugh Murray

DOMAIN 4 APPLICATION PROGRAM SECURITY . . . 473

Section 4.1 Application Issues
Chapter 29 Application Security . . . 475
Walter S. Kobus, Jr.

Section 4.2 Systems Development Controls
Chapter 30 Certification and Accreditation Methodology . . . 485
Mollie Krehnke and David Krehnke
Chapter 31 A Framework for Certification Testing . . . 509
Kevin J. Davidson

Section 4.3 Malicious Code
Chapter 32 Malicious Code: The Threat, Detection, and Protection . . . 541
Ralph Hoefelmeyer and Theresa E. Phillips
Chapter 33 Malware and Computer Viruses . . . 565
Robert Slade


DOMAIN 5 CRYPTOGRAPHY . . . 617

Section 5.1 Crypto Concepts, Methodologies, and Practices
Chapter 34 Steganography: The Art of Hiding Messages . . . 619
Mark Edmead
Chapter 35 An Introduction to Cryptography . . . 627
Javed Ikbal
Chapter 36 Hash Algorithms: From Message Digests to Signatures . . . 653
Keith Pasley

Section 5.2 Public Key Infrastructure (PKI)
Chapter 37 PKI Registration . . . 665
Alex Golod

DOMAIN 6 COMPUTER, SYSTEM, AND SECURITY ARCHITECTURE . . . 681

Section 6.1 Principles of Computer and Network Organizations, Architectures, and Designs
Chapter 38 Security Infrastructure: Basics of Intrusion Detection Systems . . . 683
Ken Shaurette
Chapter 39 Firewalls, Ten Percent of the Solution: A Security Architecture Primer . . . 699
Chris Hare
Chapter 40 The Reality of Virtual Computing . . . 719
Chris Hare

DOMAIN 7 OPERATIONS SECURITY . . . 745

Section 7.1 Operations Controls
Chapter 41 Directory Security . . . 747
Ken Buszta

DOMAIN 8 BUSINESS CONTINUITY PLANNING . . . 759

Chapter 42 The Changing Face of Continuity Planning . . . 761
Carl Jackson
Chapter 43 Business Continuity Planning: A Collaborative Approach . . . 775
Kevin Henry


DOMAIN 9 LAW, INVESTIGATION, AND ETHICS . . . 789

Section 9.1 Information Law
Chapter 44 Liability for Lax Computer Security in DDoS Attacks . . . 791
Dorsey Morrow
Chapter 45 HIPAA 201: A Framework Approach to HIPAA Security Readiness . . . 799
David MacLeod, Brian Geffert, and David Deckter

Section 9.2 Major Categories of Computer Crime
Chapter 46 The International Dimensions of Cyber-Crime . . . 815
Ed Gabrys

Section 9.3 Incident Handling
Chapter 47 Reporting Security Breaches . . . 841
James S. Tiller
Chapter 48 Incident Response Management . . . 855
Alan B. Sterneckert
Chapter 49 Managing the Response to a Computer Security Incident . . . 873
Michael Vangelos
Chapter 50 Cyber-Crime: Response, Investigation, and Prosecution . . . 889
Thomas Akin

DOMAIN 10 PHYSICAL SECURITY . . . 899

Section 10.1 Elements of Physical Security
Chapter 51 Computing Facility Physical Security . . . 901
Allen Brusewitz
Chapter 52 Closed-Circuit Television and Video Surveillance . . . 915
David Litzau

Section 10.2 Environment and Life Safety
Chapter 53 Physical Security: The Threat after September 11 . . . 927
Jaymes Williams

INDEX . . . 957


Introduction

This past year has brought an increasing focus on the need for information security at all levels of public- and private-sector organizations. The continuous growth of technology, distributed denial-of-service attacks, a significant (13 percent) increase in virus and worm attacks over the prior year, and, of course, the anticipated aftermath of September 11 — terrorism over the Internet — all have worked to increase concerns about how well we are protecting our information processing assets.

This Volume 4, in combination with the previous volumes of the 4th Edition of the Information Security Management Handbook (ISMH), is designed to cover more of the topics in the Common Body of Knowledge as well as address items resulting from new technology. As such, it should be a valuable reference for those preparing to take the CISSP examination as well as for those who are working in the field. Those CISSP candidates who take the (ISC)2 CBK Review Seminar and use the volumes of the 4th Edition of the ISMH to study those areas that they have not covered in their work experience have achieved an exceptionally high pass rate for the examination. On the other hand, those who have already attained CISSP status comment frequently that the ISMH books are a very useful reference in the workplace. These comments are especially heartwarming because they underscore our success in obtaining the most proficient authors for the ISMH chapters.

The environment in which information processing is required to perform these days is very challenging from an information security viewpoint. Consequently, it is more and more imperative that organizations employ the most qualified information security personnel available. Although qualifications can be reflected in several different ways, one of the best is through the process of professional certification, and the CISSP certification is widely considered the worldwide leader in this field. There are currently over 9000 CISSPs internationally.

With this in mind, we have again formatted the Table of Contents for this volume to be consistent with the ten domains of the Common Body of Knowledge for the field of information security. This makes it easier for the


reader to select chapters for study in preparation for the CISSP examination and for professionals to find the chapters they need to refer to in order to solve specific problems in the workplace. None of the chapters in the 4th Edition Volumes 1 through 4 is repeated. All represent new material, and the several volumes supplement each other.

HAL TIPTON
MICKI KRAUSE
October 2002


AU1518Ch01Frame Page 1 Thursday, November 14, 2002 6:27 PM

Domain 1

Access Control Systems and Methodology


There is ample justification for beginning this volume of the Handbook with the fundamental concept of controlling access to critical resources. Absent access controls, there is little if any assurance that information will be used or disclosed in an authorized manner.

In this domain, one of our authors aptly points to Sawyer’s Internal Auditing, Fourth Edition, to offer a comprehensive definition of control and clearly conveys that control can take on many diverse forms while achieving similar results. The definition follows:

“Control is the employment of all the means in an enterprise to promote, direct, restrain, govern, and check upon its various activities for the purpose of seeing that the enterprise objectives are met. These means of control include, but are not limited to, form of organization, policies, systems, procedures, instructions, standards, committees, charts of account, forecasts, budgets, schedules, reports, checklists, records, devices, and internal auditing.”

Paradoxically, we often employ computer technology to counter and control the threats posed by evolving computer technologies. While this Handbook is being written, wireless networking is coming of age as prices continue to decrease and usability and interoperability continue to increase. As attractive as wireless networks are, however, wide deployment is still hampered by the acknowledged lack of security. As we read in this domain, computer attackers continue to gain unauthorized system access by exploiting insecure technologies. The good news offered herein is that there are numerous controls available to be implemented for wireless local area networks that will minimize or mitigate risk.

The terrorist attacks of September 11, 2001, still live clearly in our minds and hearts as ever-living proof that we live in a constant state of world war. Although the resultant losses from economic espionage are clearly not of the magnitude suffered by the loss of lives from the World Trade Center catastrophe, they are sufficient to be reckoned with. In this domain, we feature a chapter that details the history of economic espionage, many of the players, stories of organizations affected, and some of the ways in which we can counter the economic espionage threat.


Chapter 1

It Is All about Control
Chris Hare, CISSP, CISA

The security professional and the auditor come together around one topic: control. The two professionals may not agree with the methods used to establish control, but their concerns are related. The security professional is there to evaluate the situation, identify the risks and exposures, recommend solutions, and implement corrective actions to reduce the risk. The auditor also evaluates risk, but the primary role is to evaluate the controls implemented by the security professional. This role often puts the security professional and the auditor at odds, but this does not need to be the case.

This chapter discusses controls in the context of the Common Body of Knowledge of the Certified Information Systems Security Professional (CISSP), but it also introduces the language and definitions used by the audit profession. This approach will ease some of the concept misconceptions and terminology differences between the security and audit professions. Because both professions are concerned with control, albeit from different perspectives, the security and audit communities should have close interaction and cooperate extensively.

Before discussing controls, it is necessary to define some parameters. Audit does not mean security. Think of it this way: the security professional does not often think in control terms. Rather, the security professional is focused on what measures or controls should be put into operation to protect the organization from a variety of threats. The goal of the auditor is not to secure the organization but to evaluate the controls to ensure risk is managed to the satisfaction of management. Two perspectives of the same thing — control.

WHAT IS CONTROL?

According to Webster’s Dictionary, control is a method “to exercise restraining or directing influence over.” An organization uses controls to regulate or define the limits of behavior for its employees or its operations for processes and systems. For example, an organization may have a process for defining widgets and uses controls within the process to maintain quality or production standards. Many manufacturing facilities use controls
For example, an organization may have a process for defining widgets and uses controls within the process to maintain quality or production standards. Many manufacturing facilities use controls 0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC


to limit or regulate production of their finished goods. Professions such as medicine use controls to establish limits on acceptable conduct for their members. For example, the actions of a medical student or intern are monitored, reviewed, and evaluated — hence controlled — until the applicable authority licenses the medical student.

Regardless of the application, controls establish the boundaries and limits of operation. The security professional establishes controls to limit access to a facility or system or privileges granted to a user. Auditors evaluate the effectiveness of the controls. There are five principal objectives for controls:

1. Propriety of information
2. Compliance with established rules
3. Safeguarding of assets
4. Efficient use of resources
5. Accomplishment of established objectives and goals

Propriety of information is concerned with the appropriateness and accuracy of information. The security profession uses integrity or data integrity in this context, as the primary focus is to ensure the information is accurate and has not been inappropriately modified.

Compliance with established rules defines the limits or boundaries within which people or systems must work. For example, one method of compliance is to evaluate a process against a defined standard to verify correct implementation of that process.

Safeguarding the organization’s assets is of concern for management, the security professional, and the auditor alike. The term asset is used to describe any object, tangible or intangible, that has value to the organization.

The efficient use of resources is of critical concern in the current market. Organizations and management must concern themselves with the appropriate and controlled use of all resources, including but not limited to cash, people, and time.

Most importantly, however, organizations are assembled to achieve a series of goals and objectives. Without goals to establish the course and desired outcomes, there is little reason for an organization to exist.

To complete our definition of controls, Sawyer’s Internal Auditing, 4th Edition, provides an excellent definition:

Control is the employment of all the means and devices in an enterprise to promote, direct, restrain, govern, and check upon its various activities for the purpose of seeing that enterprise objectives are met. These means of control include, but are not limited to, form of organization,


policies, systems, procedures, instructions, standards, committees, charts of account, forecasts, budgets, schedules, reports, checklists, records, methods, devices, and internal auditing.

— Lawrence Sawyer, Internal Auditing, 4th Edition, The Institute of Internal Auditors

Careful examination of this definition demonstrates that security professionals use many of these same methods to establish control within the organization.

COMPONENTS USED TO ESTABLISH CONTROL

A series of components are used to establish controls, specifically:

• The control environment
• Risk assessment
• Control activities
• Information and communication
• Monitoring

The control environment is a term more often used in the audit profession, but it refers to all levels of the organization. It includes the integrity, ethical values, and competency of the people and management. The organizational structure, including decision making, philosophy, and authority assignments, is critical to the control environment. Decisions such as the type of organizational structure, where decision-making authority is located, and how responsibilities are assigned all contribute to the control environment. Indeed, these areas can also be used as the basis for directive or administrative controls as discussed later in the chapter.

Consider an organization where all decision-making authority is at the top of the organization. Decisions and progress are slower because all information must flow upward. The resulting pace at which the organization changes is slower, and customers may become frustrated due to the lack of employee empowerment. However, if management abdicates its responsibility and allows anyone to make any decision they wish, anarchy results, along with conflicting decisions made by various employees. Additionally, the external audit organization responsible for reviewing the financial statements may have less confidence due to the increased likelihood that poor decisions are being made.

Risk assessments are used in many situations to assess the potential problems that may arise from poor decisions. Project managers use risk assessments to determine the activities potentially impacting the schedule or budget associated with the project. Security professionals use risk assessments to define the threats and exposures and to establish appropriate controls to reduce the risk of their occurrence and impact. Auditors also use risk assessments to make similar decisions, but more commonly use risk assessment to determine the areas requiring analysis in their review.

Control activities revolve around authorizations and approvals for specific responsibilities and tasks, verification and review of those activities, and promoting job separation and segregation of duties within activities. The control activities are used by the security professional to assist in the design of security controls within a process or system. For example, SAP associates a transaction — an activity — with a specific role. The security professional assists in the review of the role to ensure no unauthorized activity can occur and to establish proper segregation of duties.

The information and communication conveyed within an organization provide people with the data they need to fulfill their job responsibilities. Changes to organizational policies or management direction must be effectively communicated to allow people to know about the changes and adjust their behavior accordingly. However, communications with customers, vendors, government, and stockholders are also of importance.

The security professional must approach communications with care. Most commonly, the issue is with the security of the communication itself. Was the communication authorized? Can the source be trusted, and has the information been modified inappropriately since its transmission to the intended recipients? Is the communication considered sensitive by the organization, and was the confidentiality of the communication maintained?

Monitoring of the internal control systems, including security, is of major importance.
For example, there is little value gained from the installation of intrusion detection systems if there is no one to monitor the systems and react to possible intrusions. Monitoring also provides a sense of learning or continuous improvement. There is a need to monitor performance, challenge assumptions, and reassess information needs and information systems in order to take corrective action or even take advantage of opportunities for enhanced operations. Without monitoring or action resulting from the monitoring, there is no evolution in an organization. Organizations are not closed static systems and, hence, must adapt their processes to changes, including controls. Monitoring is a key control process to aid the evolution of the organization.

CONTROL CHARACTERISTICS

Several characteristics commonly used in the audit profession are available to assess the effectiveness of implemented controls. Security professionals should consider these characteristics when selecting or designing the control structure. The characteristics are:

• Timeliness
• Economy
• Accountability
• Placement
• Flexibility
• Cause identification
• Appropriateness
• Completeness

Ideally, controls should prevent and detect potential deviations or undesirable behavior early enough to take appropriate action. The timeliness of the identification and response can reduce or even eliminate any serious cost impact to the organization. Consider anti-virus software: organizations deploying this control must also concern themselves with the delivery method and timeliness of updates from the anti-virus vendor. However, having updated virus definitions available is only part of the control because the new definitions must be installed in the systems as quickly as possible.

Security professionals regularly see solutions provided by vendors that are not economical due to the cost or lack of scalability in large environments. Consequently, the control should be economical and cost-effective for the benefit it brings. There is little economic benefit for a control costing $100,000 per year to manage a risk with an annual impact of $1000.

The control should be designed to hold people accountable for their actions. The user who regularly attempts to download restricted material and is blocked by the implemented controls must be held accountable for such attempts. Similarly, financial users who attempt to circumvent the controls in financial processes or systems must also be held accountable. In some situations, users may not be aware of the limits of their responsibilities and thus may require training. Other users knowingly attempt to circumvent the controls. Only an investigation into the situation can tell the difference.

The effectiveness of the control is often determined by its placement. Accepted placements of controls include:

• Before an expensive part of a process. For example, before entering the manufacturing phase of a project, the controls must be in place to prevent building the incorrect components.
• Before points of difficulty or no return. Some processes or systems have a point where starting over introduces new problems. Consequently, these systems must include controls to ensure all the information is accurate before proceeding to the next phase.
• Between discrete operations. As one operation is completed, a control must be in place to separate and validate the previous operation. For example, authentication and authorization are linked but discrete operations.
• Where measurement is most convenient. The control must provide the desired measurement in the most appropriate place. For example, to measure the amount and type of traffic running through a firewall, the measurement control would not be placed at the core of the network.
• Corrective action response time. The control must alert appropriate individuals and initiate corrective action either automatically or through human intervention within a defined time period.
• After the completion of an error-prone activity. Activities such as data entry are prone to errors due to keying the data incorrectly.
• Where accountability changes. Moving employee data from a human resources system to a finance system may involve different accountabilities. Consequently, controls should be established to provide both accountable parties confidence in the data export and import processes.

As circumstances or situations change, so too must the controls. Flexibility of controls is partially a function of the overall security architecture. The firewall with a set of hard-coded and inflexible rules is of little value as organizational needs change. Consequently, controls should ideally be modular in a systems environment and easily replaced when new methods or systems are developed.

The ability to respond and correct a problem when it occurs is made easier when the control can establish the cause of the problem. Knowing the cause of the problem makes it easier for the appropriate corrective action to be taken.

Controls must provide management with the appropriate responses and actions. If the control impedes the organization's operations or does not address management's concerns, it is not appropriate. As is always evident to the security professional, a delicate balance exists between the two: the objectives of business operations are often at odds with other management concerns such as security. For example, the security professional recommending system configuration changes may affect the operation of a critical business system. Without careful planning and analysis of the controls, the change may be implemented and a critical business function paralyzed.

Finally, the control must be complete. Implementing controls in only one part of the system or process is no better than ignoring controls altogether. This is often very important in information systems. We can control the access of users and limit their ability to perform specific activities within an application. However, if we allow the administrator or programmer a backdoor into the system, we have defeated the controls already established.
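The economy characteristic reduces to simple arithmetic: compare the annual cost of the control with the annualized loss it mitigates. A minimal sketch follows; the figures are the chapter's own example, and the function name is hypothetical:

```python
def control_is_economical(annual_control_cost: float,
                          annualized_loss_expectancy: float) -> bool:
    """A control passes the economy test only if it costs less per year
    than the loss it is expected to prevent."""
    return annual_control_cost < annualized_loss_expectancy

# The chapter's example: a $100,000-per-year control managing a risk
# with a $1,000 annual impact fails the test.
uneconomical = control_is_economical(100_000, 1_000)
```

In practice the annualized loss expectancy itself comes from the risk assessment discussed earlier, so the two activities feed one another.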


There are many factors affecting the design, selection, and implementation of controls. This theme runs throughout this chapter and is one the security professional and auditor must each handle on a daily basis.

TYPES OF CONTROLS

Many types of controls are found within an organization to help it achieve its objectives. Some are specific to particular areas within the organization but are nonetheless worthy of mention. The security professional should be aware of the various controls because he will often be called upon to assist in their design or implementation.

Internal

Internal controls are those used primarily to manage and coordinate the methods used to safeguard an organization's assets. This process includes verifying the accuracy and reliability of accounting data, promoting operational efficiency, and adhering to managerial policies. We can expand upon this statement by saying internal controls provide the ability to:

• Promote an effective and efficient operation of the organization, including quality products and services
• Reduce the possibility of loss or destruction of assets through waste, abuse, mismanagement, or fraud
• Adhere to laws and external regulations
• Develop and maintain accurate financial and managerial data and report the same information to the appropriate parties on a timely basis

The term internal control is primarily used within the audit profession and is meant to extend beyond the limits of the organization's accounting and financial departments.

Directive/Administrative

Directive and administrative controls are often used interchangeably to identify the collection of organizational plans, policies, and records. These are commonly used to establish the limits of behavior for employees and processes. Consider the organizational conflict of interest policy. Such a policy establishes the limits of what the organization's employees can do without violating their responsibilities to the organization.
For example, if the organization states employees cannot operate a business on their own time and an employee does so, the organization may impose the appropriate repercussions for violating the administrative control. Using this example, we can more clearly see why these mechanisms are called administrative or directive controls — they are not easily enforced in automated systems. Consequently, the employee or user must be made aware of limits and stay within the boundaries imposed by the control.

One directive control is legislation. Organizations and employees are bound to specific conduct based upon the general legislation of the country where they work, in addition to any specific legislation regarding the organization's industry or reporting requirements. Every organization must adhere to revenue, tax collection, and reporting legislation. Additionally, a publicly traded company must adhere to legislation defining reporting requirements and the responsibilities and liabilities of senior management and the board of directors. Organizations that operate in the healthcare sector must adhere to legislation specific to the protection of medical information, confidentiality, patient care, and drug handling. Adherence to this legislation is a requirement for the ongoing existence of the organization and avoidance of criminal or civil liabilities.

The organizational structure is an important element in establishing decision-making and functional responsibilities. The division of functional responsibilities provides the framework for segregation of duties controls. Through segregation of duties, no single person or department is responsible for an entire process. This control is often implemented within the systems used by organizations.

Aside from the division of functional responsibilities, organizations with a centralized decision-making authority have all decisions made by a centralized group or person. This places a high degree of control over the organization's decisions, albeit potentially reducing the organization's effectiveness and responsiveness to change and customer requirements. Decentralized organizations place decision making and authority at various levels in the company with a decreasing range of approval.
For example, the president of the company can approve a $1 million expenditure, but a first-level manager cannot. Limiting the range and authority of decision making and approvals gives the company control while allowing the decisions to be made at the correct level. However, there are also many examples in the news of how managers abuse or overstep their authority levels. The intent in this chapter is not to present one model as better than the other but rather to illustrate the potential repercussions of choosing either. The organization must decide which model is appropriate at which time.

The organization also establishes internal policies to control the behavior of its employees. These policies are typically implemented by procedures, standards, and guidelines. Policies describe senior management's decisions. They typically limit employee behavior by adding sanctions for noncompliance, often affecting an employee's position within the organization. Policies may also include codes of conduct and ethics in addition to the finance, audit, HR, and systems policies normally seen in an organization. The collective body of documentation described here instructs employees on what the organization considers acceptable behavior, where and how decisions are made, how specific tasks are completed, and what standards are used in measuring organizational or personal performance.

Accounting

Accounting controls are an area of great concern for the accounting and audit departments of an organization. These controls are concerned with safeguarding the organization's financial assets and accounting records. Specifically, these controls are designed to ensure that:

• Only authorized transactions are performed, recorded correctly, and executed according to management's directions.
• Transactions are recorded to allow for preparation of financial statements using generally accepted accounting principles.
• Access to assets, including systems, processes, and information, is obtained and permitted according to management's direction.
• Assets are periodically verified against transactions to verify accuracy and resolve inconsistencies.

While these are obviously accounting functions, they establish many controls implemented within automated systems. For example, an organization that allows any employee to make entries into the general ledger or accounting system will quickly find itself financially insolvent and questioning its operational decisions. Financial decision making is based upon the data collected and reported from the organization's financial systems. Management wants to know and demonstrate that only authorized transactions have been entered into the system. Failing to demonstrate this or establish the correct controls within the accounting functions impacts the financial resources of the organization.
Additionally, if internal or external auditors cannot validate the authenticity of the transactions, they will not only indicate this in their reports but may refuse to sign the organization's financial reports. For publicly traded companies, failing to demonstrate appropriate controls can be disastrous. The recent events regarding mishandling of information and audit documentation in the Enron case (United States, 2001–2002) demonstrate poor compliance with legislation, accepted standards, and accounting and auditing principles.
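The requirement that only authorized transactions be performed, and the approval-limit example of the president versus the first-level manager, are often enforced in code as a gatekeeping check before a transaction is posted. A minimal sketch; the role names and limits are hypothetical, not from the chapter:

```python
# Hypothetical approval limits; in practice these come from management policy.
APPROVAL_LIMITS = {
    "first_level_manager": 10_000,
    "director": 100_000,
    "president": 1_000_000,
}

def transaction_authorized(role: str, amount: float) -> bool:
    """Permit a transaction only when the approver's limit covers the amount.
    Unknown roles are denied by default (the check fails closed)."""
    return 0 < amount <= APPROVAL_LIMITS.get(role, 0)
```

A ledger routine would call this check before recording the entry and write any refusal to the audit trail, so that both authorization and accountability are served.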


Preventive

As presented thus far, controls may exist for the entire organization or for subsets of specific groups or departments. However, some controls are implemented to prevent undesirable behavior before it occurs. Other controls are designed to detect the behaviors when they occur, to correct them, and to improve the process so that a similar behavior will not recur. This suite of controls is analogous to the prevent–detect–correct cycle used within the information security community.

Preventive controls establish mechanisms to prevent the undesirable activity from occurring. Preventive controls are considered the most cost-effective approach of the preventive–detective–corrective cycle. When a preventive control is embedded into a system, the control prevents errors and minimizes the use of detective and corrective techniques. Preventive controls include trustworthy, trained people, segregation of duties, proper authorization, adequate documents, proper record keeping, and physical controls. For example, an application developer who includes an edit check in the zip or postal code field of an online system has implemented a preventive control. The edit check validates the data entered as conforming to the zip or postal code standards for the applicable country. If the data entered does not conform to the expected standards, the check generates an error for the user to correct.

Detective

Detective controls find errors when the preventive system does not catch them. Consequently, detective controls are more expensive to design and implement because they not only evaluate the effectiveness of the preventive control but must also be used to identify potentially erroneous data that cannot be effectively controlled through prevention. Detective controls include reviews and comparisons, audits, bank and other account reconciliations, inventory counts, passwords, biometrics, input edit checks, checksums, and message digests.
A situation in which data is transferred from one system to another is a good example of detective controls. While the target system may have very strong preventive controls when data is entered directly, it must accept data from other systems. When the data is transferred, it must be processed by the receiving system to detect errors. The detection is necessary to ensure that valid, accurate data is received and to identify potential control failures in the source system.

Corrective

The corrective control is the most expensive of the three to implement and establishes what must be done when undesirable events occur. No matter how much effort or resources are placed into the detective controls, they provide little value to the organization if the problem is not corrected and is allowed to recur. Once the event occurs and is detected, appropriate management and other resources must respond to review the situation, determine why the event occurred and what could have been done to prevent it, and implement the appropriate controls. The corrective controls terminate the loop and feed the new requirements back to the beginning of the cycle for implementation.

From a systems security perspective, we can illustrate these three controls:

• An organization is concerned with connecting the organization to the Internet. Consequently, it implements firewalls to limit (prevent) unauthorized connections to its network. The firewall rules are designed according to the requirements established by senior management in consultation with technical and security teams.
• Recognizing the need to ensure the firewall is working as expected and to capture events not prevented by the firewall, the security teams establish an intrusion detection system (IDS) and a log analysis system for the firewall logs. The IDS is configured to detect network behaviors and anomalies the firewall is expected to prevent. Additionally, the log analysis system accepts the firewall logs and performs additional analysis for undesirable behavior. These are the detective controls.
• Finally, the security team advises management that the ability to review and respond to issues found by the detective controls requires a computer incident response team (CIRT). The role of the CIRT is to accept the anomalies from the detective systems, review them, and determine what action is required to correct the problem. The CIRT also recommends changes to the existing controls or the addition of new ones to close the loop and prevent the same behavior from recurring.
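The preventive edit check described earlier, validating a zip or postal code at the point of entry, might be sketched as follows. The patterns are assumptions covering U.S. and Canadian formats only; a production system would carry one pattern per supported country:

```python
import re

# Assumed formats: U.S. ZIP (12345 or 12345-6789), Canadian postal (A1A 1A1).
POSTAL_PATTERNS = {
    "US": re.compile(r"\d{5}(-\d{4})?"),
    "CA": re.compile(r"[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d"),
}

def valid_postal_code(code: str, country: str) -> bool:
    """Preventive control: reject malformed entries before they are stored."""
    pattern = POSTAL_PATTERNS.get(country)
    return bool(pattern and pattern.fullmatch(code.strip()))
```

On failure, the application returns the error to the user for correction, keeping bad data out of the system rather than detecting it later.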
Deterrent

The deterrent control is used to discourage violations; by itself, it cannot prevent them. Examples of deterrent controls are sanctions built into organizational policies or punishments imposed by legislation.

Recovery

Recovery controls include all practices, procedures, and methods to restore the operations of the business in the event of a disaster, attack, or system failure. These include business continuity planning, disaster recovery plans, and backups.


All of these mechanisms enable the enterprise to recover information, systems, and business processes, thereby restoring normal operations.

Compensating

If the control objectives are not achieved, or are only partially achieved, an increased risk of irregularities in the business operation exists. Additionally, in some situations, a desired control may be missing or cannot be implemented. Consequently, management must evaluate the cost–benefit of implementing additional controls, called compensating controls, to reduce the risk. Compensating controls may include other technologies, procedures, or manual activities to further reduce risk.

For example, it is accepted practice to prevent application developers from accessing a production environment, thereby limiting the risk associated with insertion of improperly tested or unauthorized program code changes. However, in many enterprises, the application developer may be part of the application support team. In this situation, a compensating control could be used to allow the developer restricted (monitored and/or limited) access to the production system, only when access is required.

CONTROL STANDARDS

With this understanding of controls, we must examine the control standards and objectives of security professionals, application developers, and system managers. Control standards provide developers and administrators with the knowledge to make appropriate decisions regarding key elements within the security and control framework. The standards are closely related to the elements discussed thus far. Standards are used to implement the control objectives, namely:

• Data validation
• Data completeness
• Error handling
• Data management
• Data distribution
• System documentation

Application developers who understand these objectives can build applications capable of meeting or exceeding the security requirements of many organizations. Additionally, the applications will be more likely to satisfy the requirements established by the audit profession.

Data accuracy standards ensure the correctness of the information as entered, processed, and reported. Security professionals consider this an element of data integrity. Associated with data accuracy is data completeness. Similar to ensuring the accuracy of the data, the security professional must also be concerned with ensuring that all information is recorded. Data completeness includes ensuring that only authorized transactions are recorded and none are omitted.

Timeliness relates to processing and recording the transactions in a timely fashion. This includes service levels for addressing and resolving error conditions. Critical errors may require that processing halt until the error is identified and corrected.

Audit trails and logs are useful in determining what took place after the fact. There is a fundamental difference between audit trails and logs. The audit trail is used to record the status and processing of individual transactions. Recording the state of the transaction throughout the processing cycle allows for the identification of errors and corrective actions. Log files are primarily used to record access to information by individuals and what actions they performed with the information.

Aligned with audit trails and logs is system monitoring. System administrators implement controls to warn of excessive processor utilization, low disk space, and other conditions. Developers should insert controls in their applications to advise of potential or real error conditions. Management is interested in information such as the error condition, when it was recorded, the resolution, and the elapsed time to determine and implement the correction.

Through techniques including edit controls, control totals, log files, checksums, and automated comparisons, developers can address traditional security concerns.

CONTROL IMPLEMENTATION

The practical implementations of many of the control elements discussed in this chapter are visible in today's computing environments. Both operating system and application-level implementations are found, often working together to protect access and integrity of the enterprise information.
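The distinction between an audit trail (per-transaction states) and a log (who accessed what) can be made concrete before turning to specific implementations. A minimal sketch; the record layouts and names are hypothetical:

```python
from datetime import datetime, timezone

audit_trail = []  # status of individual transactions through the processing cycle
access_log = []   # which individual accessed which information, and how

def record_state(txn_id: str, state: str) -> None:
    """Audit trail entry: one transaction's progress through processing."""
    audit_trail.append((datetime.now(timezone.utc), txn_id, state))

def record_access(user: str, resource: str, action: str) -> None:
    """Log entry: an individual's access to information."""
    access_log.append((datetime.now(timezone.utc), user, resource, action))

record_state("TXN-1001", "received")
record_state("TXN-1001", "validated")
record_access("jdoe", "general_ledger", "read")
```

Replaying the audit trail answers "where did this transaction fail?", while the log answers "who touched this data?", which is why the two are kept distinct.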
The following examples illustrate and explain various control techniques available to the security professional and application developer.

Transmission Controls

The movement of data from the origin to the final processing point is of importance to security professionals, auditors, management, and the actual information user. Implementation of transmission controls can be established through the communications protocol itself, hardware, or within an application.
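Within an application, a transmission control often amounts to sending a derived value alongside the data and recomputing it on receipt. A minimal sketch using a SHA-256 digest; the algorithm choice and function names are assumptions, and a keyed MAC would be needed where deliberate tampering, rather than transmission error, is the concern:

```python
import hashlib

def package(record: bytes) -> tuple[bytes, str]:
    """Sender side: compute a digest over the record and transmit both."""
    return record, hashlib.sha256(record).hexdigest()

def verify(record: bytes, digest: str) -> bool:
    """Receiver side: recompute with the same algorithm; reject on mismatch."""
    return hashlib.sha256(record).hexdigest() == digest
```

Both systems must agree on the algorithm in advance, which is the same requirement the hash controls below place on sender and receiver.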


For example, TCP/IP implementations handle transmission control by retransmitting information received in error. The ability of TCP/IP to perform this service is based upon error controls built into the protocol or service. When a TCP packet is received and the checksum calculated for the packet is incorrect, TCP requests retransmission of the packet. However, UDP packets must have their error controls implemented at the application layer, such as with NFS.

Sequence

Sequence controls are used to evaluate the accuracy and completeness of the transmission. These controls rely upon the source system generating a sequence number, which is tested by the receiving system. If the data is received out of sequence or a transmission is missing, the receiving system can request retransmission of the missing data or refuse to accept or process any of it. Regardless of the receiving system's response, the sequence controls ensure data is received and processed in order.

Hash

Hash controls are stored in the record before it is transmitted. These controls identify errors or omissions in the data. Both the transmitting and receiving systems must use the same algorithm to compute and verify the computed hash. The source system generates a hash value and transmits both the data and the hash value. The receiving system accepts both values, computes the hash, and verifies it against the value sent by the source system. If the values do not match, the data is rejected. The strength of the hash control can be improved through strong algorithms that are difficult to forge and by using different algorithms for various data types.

Batch Totals

Batch totals are the precursors to hashes and are still used in many financial systems. Batch controls are sums of information in the transmitted data. For example, in a financial system, batch totals are used to record the number of records and the total amounts in the transmitted transactions.
If the totals are incorrect on the receiving system, the data is not processed.

Logging

A transaction is often logged on both the sending and receiving systems to ensure continuity. The logs are used to record information about the transmission or received data, including date, time, type, origin, and other information. The log records provide a history of the transactions, useful for resolving problems or verifying that transmissions were received. If both ends of the transaction keep log records, their system clocks must be synchronized with an external time source to maintain traceability and consistency in the log records.

Edit

Edit controls provide data accuracy and consistency for the application. With edit activities such as inserting or modifying a record, the application performs a series of checks to validate the consistency of the information provided. For example, if the field is for a zip code, the data entered by the user can be verified to conform to the data standards for a zip code. Likewise, the same can be done for telephone numbers, etc. Edit controls must be defined and inserted into the application code as it is developed. This is the most cost-efficient implementation of the control; however, it is possible to add the appropriate code later. The lack of edit controls affects the integrity and quality of the data, with possible repercussions later.

PHYSICAL

The implementation of physical controls in the enterprise reduces the risk of theft and destruction of assets. The application of physical controls can decrease the risk of an attacker bypassing the logical controls built into the systems. Physical controls include alarms, window and door construction, and environmental protection systems. The proper application of fire, water, electrical, temperature, and air controls reduces the risk of asset loss or damage.

DATA ACCESS

Data access controls determine who can access data, when, and under what circumstances. A common form of data access control implemented in computer systems is file permissions. There are two primary control methods: discretionary access control and mandatory access control.
Discretionary access control, or DAC, is typically implemented through system services such as file permissions. In the DAC implementation, the user chooses who can access a file or program based upon the file permissions established by the owner. The key element here is that the ability to access the data is decided by the owner and is, in turn, enforced by the system.
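To make the DAC idea concrete, the owner-decides, system-enforces split can be sketched as follows. This is an illustrative sketch only, not any operating system's actual API; the function name and simplified bit handling (group permissions omitted) are assumptions for the example.

```python
# A minimal sketch of discretionary access control (illustrative only):
# the file owner chooses the permission bits, and the system enforces
# them at access time. Group permissions are omitted for brevity.
def dac_allows(user, owner, mode, op):
    # UNIX-style octal mode: owner bits in the high triad,
    # "other" bits in the low triad (r=4, w=2, x=1).
    bits = (mode >> 6) & 0o7 if user == owner else mode & 0o7
    need = {"read": 4, "write": 2, "execute": 1}[op]
    return bool(bits & need)

print(dac_allows("bob", "bob", 0o640, "write"))    # True: the owner may write
print(dac_allows("alice", "bob", 0o640, "write"))  # False: others may not
```

The point of the sketch is the division of labor: the owner sets `mode`, while the enforcement check itself is performed by the system on every access.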


Mandatory access control, also known as MAC, removes the ability of the data owner alone to decide who can access the data. In the MAC model, both the data and the user are assigned a classification and clearance. If the clearance assigned to the user meets or exceeds the classification of the data and the owner permits the access, the system grants access to the data. With MAC, the owner and the system determine access based upon owner authorization, clearance, and classification. Both DAC and MAC models are available in many operating system and application implementations.

WHY CONTROLS DO NOT WORK

While everything presented in this chapter makes good sense, implementing controls can be problematic. Overcontrolling an environment or implementing confusing and redundant controls results in excessive human and monetary expense. Unclear controls might bring confusion to the work environment and leave people wondering what they are supposed to do, delaying and impacting the ability of the organization to achieve its goals. Similarly, controls might decrease effectiveness or entail an implementation that is costlier than the risk (potential loss) they are designed to mitigate. In some situations, the control may become obsolete and effectively useless. This is often evident in organizations whose policies have not been updated to reflect changes in legislation, economic conditions, and systems.

Remember: people will resist attempts to control their behaviors. This is human nature and very common in situations in which the affected individuals were not consulted or involved in the development of the control. Resistance is highly evident in organizations in which the controls are so rigid or overemphasized as to cause mental or organizational rigidity. The rigidity causes a loss of flexibility to accommodate certain situations and can lead to strict adherence to procedures when common sense and rationality should be employed.
Personnel can and will accept controls. Most people are more willing to accept them if they understand what the control is intended to do and why. This means the control must be a means to an end and not the end itself. Alternatively, the control may simply not achieve the desired goal. There are four primary reactions to controls the security professional should consider when evaluating and selecting the control infrastructure:

1. The control is a game. Employees consider the control as a challenge, and they spend their efforts in finding unique methods to circumvent the control.
2. Sabotage. Employees attempt to damage, defeat, or ignore the control system and demonstrate, as a result, that the control is worthless.


3. Inaccurate information. Information may be deliberately managed to demonstrate the control as ineffective or to promote a department as more efficient than it really is.
4. Control illusion. While the control system is in force and working, employees ignore or misinterpret results. The system is credited when the results are positive and blamed when results are less favorable.

These four reactions are fairly complex. Far more simplistic reactions leading to the failure of control systems have been identified:

• Apathy. Employees have no interest in the success of the system, leading to mistakes and carelessness.
• Fatigue. Highly complex operations result in fatigue of systems and people. Simplification may be required to address the problem.
• Executive override. The executives in the organization provide a "get out of jail free" card for ignoring the control system. Unfortunately, the executives involved may give permission to employees to ignore all the established control systems.
• Complexity. The system is so complex that people cannot cope with it.
• Communication. The control operation has not been well communicated to the affected employees, resulting in confusion and differing interpretations.
• Efficiency. People often see the control as impeding their abilities to achieve goals.

Despite the reasons why controls fail, many organizations operate in very controlled environments due to business competitiveness, handling of national interest or secure information, privacy, legislation, and other reasons. People can accept controls and assist in their design, development, and implementation. Involving the correct people at the correct time results in a better control system.

SUMMARY

This chapter has examined the language of controls, including definitions and composition. It has looked at the different types of controls, some examples, and why controls fail.
The objective for the auditor and the security professional alike is to understand the risk a control is designed to address and, depending on their role, to implement or evaluate it. Good controls do depend on good people to design, implement, and use them. However, the difference between a good and a bad control can be as simple as the cost to implement it or its negative impact on business operations. For a control to be effective, it must achieve management's objectives, be relevant to the situation, be cost-effective to implement, and be easy for the affected employees to use.


Acknowledgments

Many thanks to my colleague and good friend, Mignona Cote. She continues to share her vast audit experience daily, having a positive effect on information systems security and audit. Her mentorship and leadership have contributed greatly to my continued success.

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.


AU1518Ch02Frame Page 21 Thursday, November 14, 2002 6:26 PM

Chapter 2

Controlling FTP: Providing Secured Data Transfers

Chris Hare, CISSP, CISA

Several scenarios exist that must be considered when looking for a solution:

• The user with a log-in account who requires FTP access to upload or download reports generated by an application. The user does not have access to a shell; rather, his default connection to the box will connect him directly to an application. He requires access to only his home directory to retrieve and delete files.
• The user who uses an application as his shell but does not require FTP access to the system.
• An application that automatically transfers data to a remote system for processing by a second application.

It is necessary to find an elegant solution to each of these problems before that solution can be considered viable by an organization.

Scenario A

A user named Bob accesses a UNIX system through an application that is a replacement for his normal UNIX log-in shell. Bob has no need for, and does not have, direct UNIX command-line access. While using the application, Bob creates reports or other output that he must upload or download for analysis or processing. The application saves this data in either Bob's home directory or a common directory for all application users. Bob may or may not require the ability to put files onto the application server. The requirements break down as follows:

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



• Bob requires FTP access to the target server.
• Bob requires access to a restricted number of directories, possibly one or two.
• Bob may or may not require the ability to upload files to the server.

Scenario B

Other application users in the environment illustrated in Scenario A require no FTP access whatsoever. Therefore, it is necessary to prevent them from connecting to the application server using FTP.

Scenario C

The same application used by the users in Scenarios A and B regularly dumps data to move to another system. The use of hard-coded passwords in scripts is not advisable because the scripts must be readable for them to be executed properly. This may expose the passwords to unauthorized users and allow them to access the target system. Additionally, the use of hard-coded passwords makes it difficult to change the password on a regular basis because all scripts using this password must be changed. A further requirement is to protect the data once stored on the remote system to limit the possibility of unauthorized access, retrieval, and modification of the data.

While there are a large number of options and directives for the /etc/ftpaccess file, the focus here is on those that provide secured access to meet the requirements in the scenarios described.

CONTROLLING FTP ACCESS

Advanced FTP servers such as wu-ftpd provide extensive controls for controlling FTP access to the target system. This access does not extend to the IP layer, as the typical FTP client does not offer encryption of the data stream. Rather, FTP relies on the underlying transport (TCP/IP) to recover from malformed or lost packets in the data stream. This means one still has no control over the network component of the data transfer. This may allow for the exposure of the data if the network is compromised. However, that is outside the scope of the immediate discussion.
wu-ftpd uses two control files: /etc/ftpusers and /etc/ftpaccess. The /etc/ftpusers file is used to list the users who do not have FTP access rights on the remote system. For example, if the /etc/ftpusers file is empty, then all users, including root, have FTP rights on the system. This is typically not the desired operation, because access to system accounts such as root is to be controlled. Typically, the /etc/ftpusers file contains the following entries:


Exhibit 2-1. Denying FTP access.

C:\WINDOWS>ftp 192.168.0.2
Connected to 192.168.0.2.
220 poweredge.home.com FTP server (Version wu-2.6.1(1) Wed Aug 9 05:54:50 EDT 2000) ready.
User (192.168.0.2:(none)): root
331 Password required for root.
Password:
530 Login incorrect.
Login failed.
ftp>

• root
• bin
• daemon
• adm
• lp
• sync
• shutdown
• halt
• mail
• news
• uucp
• operator
• games
• nobody
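The way a deny list like this is consumed can be sketched as a simple membership test. This is illustrative Python, not wu-ftpd's actual C source: a listed account is refused before authentication completes.

```python
# Illustrative sketch of applying an /etc/ftpusers-style deny list:
# one account name per line; a listed account has no FTP rights.
def ftp_access_allowed(username, ftpusers_lines):
    denied = {line.strip() for line in ftpusers_lines if line.strip()}
    return username not in denied

ftpusers = ["root", "bin", "daemon", "adm", "lp", "nobody"]
print(ftp_access_allowed("root", ftpusers))  # False: root is denied
print(ftp_access_allowed("bob", ftpusers))   # True: ordinary user allowed
```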

When a user in this list, root for example, attempts to access the remote system using FTP, they are denied access because their account is listed in the /etc/ftpusers file. This is illustrated in Exhibit 2-1. By adding additional users to this list, one can control who has FTP access to this server. This does, however, create an additional step in the creation of a user account, but it is a related process and could be added as a step in the script used to create a user. Should a user with FTP privileges no longer require this access, the user's name can be added to the /etc/ftpusers list at any time. Similarly, if a denied user requires this access in the future, that user can be removed from the list and FTP access restored.

Recall the requirements of Scenario B: the user has a log-in on the system to access his application but does not have FTP privileges. This scenario has been addressed through the use of /etc/ftpusers. The user


Exhibit 2-2. Sample /etc/ftpaccess file.

class all real,guest,anonymous *

email [email protected]
loginfails 5
readme README* login
readme README* cwd=*

message /var/ftp/welcome.msg login
message .message cwd=*

compress yes all
tar yes all
chmod no guest,anonymous
delete no guest,anonymous
overwrite no guest,anonymous
rename no guest,anonymous

log transfers anonymous,real inbound,outbound
shutdown /etc/shutmsg
passwd-check rfc822 warn

can still have UNIX shell access or access to a UNIX-based application through the normal UNIX log-in process. However, using /etc/ftpusers prevents access to the FTP server and eliminates the problem of unauthorized data movement to or from the FTP server. Most current FTP server implementations offer the /etc/ftpusers feature.

EXTENDING CONTROL

Scenarios A and C require additional configuration because reliance on the extended features of the wu-ftpd server is required. These control extensions are provided in the file /etc/ftpaccess. A sample /etc/ftpaccess file is shown in Exhibit 2-2. This is the default /etc/ftpaccess file distributed with wu-ftpd. Before one can proceed to the problem at hand, one must examine the statements in the /etc/ftpaccess file. Additional explanations for other statements not found in this example, but required for the completion of our scenarios, are also presented later in the chapter.

The class statement in /etc/ftpaccess defines a class of users, in the sample file a user class named all, with members of the class being real, guest, and anonymous. The syntax for the class definition is:

class <class> <typelist> <addrglob> [<addrglob> ...]


Typelist is one of real, guest, and anonymous. The real keyword matches users to their real user accounts. Anonymous matches users who are using anonymous FTP access, while guest matches guest account access. Each of these classes can be further defined using other options in this file. Finally, the class statement can also identify the list of allowable addresses, hosts, or domains that connections will be accepted from. There can be multiple class statements in the file; the first one matching the connection will be used.

Defining the hosts requires additional explanation. The host definition is a domain name, a numeric address, or the name of a file, beginning with a slash ('/'), that specifies additional address definitions. Additionally, the address specification may also contain an IP address:netmask or IP address/CIDR definition. (CIDR, or Classless Internet Domain Routing, uses a value after the IP address to indicate the number of bits used for the network. A Class C address would be written as 192.168.0/24, indicating 24 bits are used for the network.)

It is also possible to exclude users from a particular class using a '!' to negate the test. Care should be taken in using this feature. The results of each of the class statements are OR'd together with the others, so it is possible to exclude an allowed user in this manner. However, there are other mechanisms available to deny connections from specific hosts or domains. The primary purpose of the class statement is to assign connections from specific domains or types of users to a class. With this in mind, one can interpret the class statement in Exhibit 2-2, shown here as:

class all real,guest,anonymous *

This statement defines a class named all, which includes user types real, anonymous, and guest. Connections from any host are applicable to this class.

The email clause specifies the e-mail address of the FTP archive maintainer. It is printed at various times by the FTP server.

The message clause defines a file to be displayed when the user logs in or when the user changes to a directory. The statement

message /var/ftp/welcome.msg login

causes wu-ftpd to display the contents of the file /var/ftp/welcome.msg when a user logs in to the FTP server. It is important for this file to be somewhere accessible to the FTP server so that anonymous users will also be greeted by the message. NOTE: Some FTP clients have problems with multiline responses, which is how the file is displayed.
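Returning briefly to the class statement's address globs: the numeric forms discussed above (a bare address, address/CIDR, or '*') can be checked with logic along these lines. This is an illustrative sketch, not the server's implementation; domain-name globs, which the server resolves separately, are omitted.

```python
import ipaddress

# Illustrative sketch of matching a client address against one class
# addrglob, covering only the numeric and wildcard forms.
def host_matches(addr, addrglob):
    if addrglob == "*":
        return True  # any host
    net = ipaddress.ip_network(addrglob, strict=False)
    return ipaddress.ip_address(addr) in net

print(host_matches("192.168.0.77", "192.168.0.0/24"))  # True: same /24
print(host_matches("10.1.2.3", "192.168.0.0/24"))      # False
print(host_matches("10.1.2.3", "*"))                   # True
```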


When accessing the test FTP server constructed for this chapter, the message file contains:

***** WARNING *****
This is a private FTP server. If you do not
have an account, you are not welcome here.
*******************
It is currently %T local time in Ottawa, Canada.
You are %U@%R accessing %L.
for help, contact %E.

The % strings are converted to the actual text when the message is displayed by the server. The result is:

331 Password required for chare.
Password:
230-***** WARNING *****
230-This is a private FTP server. If you do not have an account,
230-you are not welcome here.
230-*******************
230-It is currently Sun Jan 28 18:28:01 2001 local time in Ottawa, Canada.
230-You are [email protected] accessing poweredge.home.com.
230-for help, contact [email protected]
230-
230-
230 User chare logged in.
ftp>
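The substitution the server performs can be sketched as simple tag replacement. The helper below is hypothetical (the real server gathers these values from the live session), and the sample hostnames are made up for illustration.

```python
# Illustrative sketch of %-tag expansion; in wu-ftpd the values come
# from the session (remote host, local host, username, and so on).
def expand_message(template, values):
    # Replace longer tags first so three-character tags such as %xT
    # are not clobbered by a shorter prefix.
    for tag in sorted(values, key=len, reverse=True):
        template = template.replace(tag, values[tag])
    return template

msg = "You are %U@%R accessing %L."
print(expand_message(msg, {"%U": "chare",
                           "%R": "client.home.com",
                           "%L": "poweredge.home.com"}))
# You are chare@client.home.com accessing poweredge.home.com.
```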

The % tags available for inclusion in the message file are listed in Exhibit 2-3. It is allowable to define a class and attach a specific message to that class of users. For example:

class real real *
class anon anonymous *
message /var/ftp/welcome.msg login real

Now, the message is only displayed when a real user logs in. It is not displayed for either anonymous or guest users. Through this definition, one can provide additional information using other tags listed in Exhibit 2-3. The ability to display class-specific message files can be extended on a


Exhibit 2-3. %char definitions.

Tag   Description
%T    Local time (form Thu Nov 15 17:12:42 1990)
%F    Free space in partition of CWD (kbytes)
%C    Current working directory
%E    The maintainer's e-mail address as defined in ftpaccess
%R    Remote host name
%L    Local host name
%u    Username as determined via RFC931 authentication
%U    Username given at log-in time
%M    Maximum allowed number of users in this class
%N    Current number of users in this class
%B    Absolute limit on disk blocks allocated
%b    Preferred limit on disk blocks
%Q    Current block count
%I    Maximum number of allocated inodes (+1)
%i    Preferred inode limit
%q    Current number of allocated inodes
%H    Time limit for excessive disk use
%h    Time limit for excessive files
%xu   Uploaded bytes
%xd   Downloaded bytes
%xR   Upload/download ratio (1:n)
%xc   Credit bytes
%xT   Time limit (minutes)
%xE   Elapsed time since log-in (minutes)
%xL   Time left
%xU   Upload limit
%xD   Download limit

user-by-user basis by creating a class for each user. This is important because individual limits can be defined for each user. The message command can also be used to display information when a user enters a directory. For example, using the statement

message /var/ftp/etc/.message CWD=*

causes the FTP server to display the specified file when the user enters the directory. This is illustrated in Exhibit 2-4 for the anonymous user. The message itself is displayed only once to prevent annoying the user.

The noretrieve directive establishes specific files no user is permitted to retrieve through the FTP server. If the path specification for the file


Exhibit 2-4. Directory-specific messages.

User (192.168.0.2:(none)): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> cd etc
250-***** WARNING *****
250-There is no data of any interest in the /etc directory.
250-
250 CWD command successful.
ftp>

begins with a '/', then only those files are marked as nonretrievable. If the file specification does not include the leading '/', then any file with that name cannot be retrieved. For example, there is a great deal of sensitivity with the password file on most UNIX systems, particularly if that system does not make use of a shadow file. Aside from the password file, there is a long list of other files that should not be retrievable from the system, even if their use is discouraged. The files that should be marked for nonretrieval are files containing the names:

• passwd
• shadow
• .profile
• .netrc
• .rhosts
• .cshrc
• profile
• core
• .htaccess
• /etc
• /bin
• /sbin

This is not a complete list, as the applications running on the system will likely contain other files that should be specifically identified. The noretrieve directive follows the syntax:

noretrieve [absolute|relative] [class=<classname>] ... [-] <filename> ...

For example,

noretrieve passwd

prevents any user from downloading any file on the system named passwd.
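The absolute-path versus bare-name matching just described can be sketched as follows. This is assumed behavior reconstructed from the text, for illustration only, not wu-ftpd source.

```python
import os

# Illustrative sketch of noretrieve matching: a rule starting with
# '/' blocks that exact path; any other rule blocks every file whose
# base name matches, wherever it lives.
def noretrieve_blocks(requested_path, rules):
    for rule in rules:
        if rule.startswith("/"):
            if requested_path == rule:
                return True
        elif os.path.basename(requested_path) == rule:
            return True
    return False

rules = ["/etc/shadow", "passwd", ".netrc"]
print(noretrieve_blocks("/home/bob/passwd", rules))      # True: name match
print(noretrieve_blocks("/etc/shadow", rules))           # True: exact path
print(noretrieve_blocks("/home/bob/report.txt", rules))  # False
```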


When specifying files, it is also possible to name a directory. In this situation, all files in that directory are marked as nonretrievable. The optional absolute or relative keywords identify if the file or directory is an absolute or relative path from the current environment. The default operation is to consider any file starting with a '/' as an absolute path. Using the optional class keyword on the noretrieve directive allows this restriction to apply to only certain users. If the class keyword is not used, the restriction is placed against all users on the FTP server.

Denying Connections

Connections can be denied based on the IP address or domain of the remote system. Connections can also be denied based on how the user enters his password at log-in. NOTE: This password check applies only to anonymous FTP users. It has no effect on real users because they authenticate with their standard UNIX password. The passwd-check directive informs the FTP server to conduct checks against the password entered. The syntax for the passwd-check directive is:

passwd-check <none|trivial|rfc822> (<enforce|warn>)

It is not recommended to use passwd-check with the none argument because this disables analysis of the entered password and allows meaningless information to be entered. The trivial argument checks only whether there is an '@' in the password. Using the rfc822 argument is the recommended action and ensures the password is compliant with the RFC 822 e-mail address standard. If the password is not compliant with the trivial or rfc822 options, the FTP server can take two actions. The warn argument instructs the server to warn users that their password is not compliant but still allow access. If the enforce argument is used, the user is warned and the connection terminated if a noncompliant password is entered.

Use of the deny clause is an effective method of preventing access from specific systems or domains. When a user attempts to connect from the specified system or domain, the message contained in the specified file is displayed. The syntax for the deny clause is:

deny <addrglob> <message_file>

The file location must begin with a slash ('/'). The same rules described in the class section apply to the addrglob definition for the deny command. In addition, the use of the keyword !nameservd is allowed to deny connections from sites without a working nameserver.
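Before looking at a deny example, the anonymous-password checks described above can be sketched. The logic is assumed from the text for illustration: trivial only requires an '@', while rfc822 requires a plausible user@host.domain address; the regular expression is a crude stand-in, not the server's actual test.

```python
import re

# Illustrative sketch of passwd-check levels for anonymous log-ins.
def check_anon_password(pw, level):
    if level == "none":
        return True          # no analysis at all (not recommended)
    if level == "trivial":
        return "@" in pw     # only requires an '@' somewhere
    # crude rfc822-style pattern, for illustration only
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", pw) is not None

print(check_anon_password("mozilla@", "trivial"))     # True
print(check_anon_password("mozilla@", "rfc822"))      # False
print(check_anon_password("bob@home.com", "rfc822"))  # True
```

Whether a failed check merely warns or terminates the connection is then governed by the warn or enforce argument.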


Consider adding a deny clause to this file; for example, adding

deny !nameservd /var/ftp/.deny

to /etc/ftpaccess. When testing the deny clause, the denied connection receives the message contained in the file. Using the !nameservd definition means that any host not found in a reverse DNS query to get a host name from an IP address is denied access.

Connected to 192.168.0.2.
220 poweredge.home.com FTP server (Version wu-2.6.1(1) Wed Aug 9 05:54:50 EDT 2000) ready.
User (192.168.0.2:(none)): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
530-**** ACCESS DENIED ****
530-
530-Access to this FTP server from your domain
530-has been denied by the administrator.
530-
530 Login incorrect.
Login failed.
ftp>

The denial of the connection is based on where the connection is coming from, not the user who authenticated to the server.

Connection Management

With specific connections denied, this discussion must focus on how to control the connection when it is permitted. A number of options for the server allow this and establish restrictions from throughput to access to specific files or directories.

Preventing anonymous access to the FTP server is best accomplished by removing the ftp user from the /etc/passwd file. This instructs the FTP server to deny all anonymous connection requests.

The guestgroup and guestuser commands work in a similar fashion. In both cases, the session is set up exactly as with anonymous FTP. In other words, a chroot() is done and the user is no longer permitted to issue the USER and PASS commands. If using guestgroup, the groupname must be defined in the /etc/group file; in the case of guestuser, a valid entry must exist in /etc/passwd.

guestgroup <groupname> [<groupname> ...]
guestuser <username> [<username> ...]
realgroup <groupname> [<groupname> ...]
realuser <username> [<username> ...]


In both cases, the user's home directory must be correctly set up. This is accomplished by splitting the home directory entry into two components separated by the characters '/./'. The first component is the base directory for the FTP server and the second component is the directory the user is to be placed in. The user can enter the base FTP directory but cannot see any files above this in the file system because the FTP server establishes a restricted environment. Consider the /etc/passwd entry:

systemx::503:503:FTP Only Access from systemx:/var/ftp/./systemx:/etc/ftponly

When systemx successfully logs in, the FTP server will chroot("/var/ftp") and then chdir("/systemx"). The guest user will only be able to access the directory structure under /var/ftp (which will look and act as / to systemx), just as an anonymous FTP user would.

Either an actual name or numeric ID specifies the group name. To use a numeric group ID, place a '%' before the number. Ranges may be given, and the use of an asterisk means all groups. guestuser works like guestgroup except that it uses the username (or numeric ID). realuser and realgroup have the same syntax but reverse the effect of guestuser and guestgroup. They allow real user access when the remote user would otherwise be determined a guest. For example:

guestuser *
realuser chare

causes all nonanonymous users to be treated as guest, with the sole exception of user chare, who is permitted real user access. Bear in mind, however, that the use of /etc/ftpusers overrides this directive. If the user is listed in /etc/ftpusers, he is denied access to the FTP server.

It is also advisable to set timeouts for the FTP server to control the connection and terminate it appropriately. The timeout directives are listed in Exhibit 2-5. The accept timeout establishes how long the FTP server will wait for an incoming connection. The default is 120 seconds. The connect value establishes how long the FTP server will wait to establish an outgoing connection. The FTP server generally makes several attempts and will give up after the defined period if a successful connection cannot be established. The data timeout determines how long the FTP server will wait for some activity on the data connection. This should be kept relatively long because the remote client may have a low-speed link and there may be a lot of data queued for transmission. The idle timer establishes how long the


Exhibit 2-5. Timeout directives.

Timeout Value     Default  Recommended
Timeout accept    120      120
Timeout connect   120      120
Timeout data      1200     1200
Timeout idle      900      900
Timeout maxidle   7200     1200
Timeout RFC931    10       10

server will wait for the next command from the client. This can be overridden with the -a option to the server. Using the access clause overrides both the command line parameter, if used, and the default. The user can also use the SITE IDLE command to establish a higher value for the idle timeout. The maxidle value establishes the maximum value that can be established by the FTP client. The default is 7200 seconds. Like the idle timeout, the default can be overridden using the -A command line option to the FTP server. Defining this parameter overrides the default and the command line. The last timeout value allows the maximum time for the RFC931 ident/AUTH conversation to occur. The information recorded from the RFC931 conversation is recorded in the system logs and used for any authentication requests.

Controlling File Permissions

File permissions in the UNIX environment are generally the only method available to control who has access to a specific file and what they are permitted to do with that file. It may be a requirement of a specific implementation to restrict the file permissions on the system to match the requirements for a specific class of users. The defumask directive allows the administrator to define the umask, or default permissions, on a per-class or systemwide basis. Using the defumask command as

defumask 077

causes the server to remove all permissions except for the owner of the file. If running a general access FTP server, the use of a 077 umask may be extreme. However, the umask should be at least 022 to prevent modification of the files by other than the owner. By specifying a class of user following the umask, as in

defumask 077 real

all permissions are removed. Using these parameters prevents world-writable files from being transferred to your FTP server. If required, it is possible


to set additional controls to allow or disallow the use of other commands on the FTP server to change file permissions or affect the files. By default, users are allowed to change file permissions and delete, rename, and overwrite files. They are also allowed to change the umask applied to files they upload. These commands allow or restrict users from performing these activities:

chmod
delete
overwrite
rename
umask

To restrict all users from using these commands, apply the directives as:

chmod no all
delete no all
overwrite no all
rename no all
umask no all
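As an aside, the effect of the defumask values shown earlier (077 and 022) on newly uploaded files can be verified arithmetically: the new file's mode is the base creation mode with the umask bits cleared. The base mode 0666 used below is a common convention for uploads, assumed here for illustration.

```python
# Permission arithmetic behind defumask: new-file mode is the base
# creation mode with the umask bits cleared.
def apply_umask(base_mode, umask):
    return base_mode & ~umask

print(oct(apply_umask(0o666, 0o077)))  # 0o600: owner read/write only
print(oct(apply_umask(0o666, 0o022)))  # 0o644: group/other read-only
```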

Setting these directives means no one can execute commands on the FTP server that require these privileges. This means the FTP server and the files therein are under the full control of the administrator.

ADDITIONAL SECURITY FEATURES

There is a wealth of additional security features that should be considered when configuring the server. These control how much information users are shown about the server when they log in, print banner messages, and provide other capabilities.

The greeting directive informs the FTP server to change the level of information printed when the user logs in. The default is full, which prints all information about the server. A full message is:

220 poweredge.home.com FTP server (Version wu-2.6.1(1) Wed Aug 9 05:54:50 EDT 2000) ready.

A brief message on connection prints the server name as:

220 poweredge.home.com FTP server ready.

Finally, the terse message, which is the preferred choice, prints only:

220 FTP server ready.

The full greeting is the default unless the greeting directive is defined; it provides the most information about the FTP server. The terse greeting is the preferred choice because it gives an attacker no information about the server that could be used to identify potential attacks against it.


The greeting is controlled with the directive:

greeting <full|brief|terse>

An additional safeguard is the banner directive, using the format:

banner <path>

This causes the text contained in the named file to be presented when users connect to the server, prior to entering their username and password. The path of the file is relative to the real root directory, not the anonymous FTP directory. If one has a corporate log-in banner that is displayed when connecting to a system using Telnet, it could also be used here to indicate that the FTP server is for authorized users only.

NOTE: Use of this command can completely prevent noncompliant FTP clients from establishing a connection, because not all clients can correctly handle multiline responses, which is how the banner is displayed.

Connected to 192.168.0.2.
220-*************************************************************
220-*                                                           *
220-*                  * * W A R N I N G * *                    *
220-*                                                           *
220-* ACCESS TO THIS FTP SERVER IS FOR AUTHORIZED USERS ONLY.   *
220-* ALL ACCESS IS LOGGED AND MONITORED. IF YOU ARE NOT AN     *
220-* AUTHORIZED USER, OR DO NOT AGREE TO OUR MONITORING POLICY,*
220-* DISCONNECT NOW.                                           *
220-*                                                           *
220-* NO ABUSE OR UNAUTHORIZED ACCESS IS TOLERATED.             *
220-*                                                           *
220-*************************************************************
220-
220 FTP server ready.
User (192.168.0.2:(none)):
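The multiline banner above uses the FTP continuation convention from RFC 959: intermediate lines carry the reply code followed by a hyphen (220-), and the final line carries the code followed by a space. A client-side check for that convention (a sketch, assuming the raw reply text is already in hand) is simple:

```python
def is_multiline_reply(reply_text):
    # RFC 959 multiline replies open with "<code>-"; single-line
    # replies (and the final line of a multiline one) use "<code> ".
    first_line = reply_text.splitlines()[0]
    return len(first_line) >= 4 and first_line[:3].isdigit() and first_line[3] == "-"
```

Clients that fail against a banner are typically tripping over exactly this distinction.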

At this point, one has controlled how the remote user gains access to the FTP server, and restricted the commands they can execute and the permissions assigned to their files. Additionally, certain steps have been taken to ensure they are aware that access to this FTP server is for authorized use only. However, one must also take steps to record the connections and transfers made by users to fully establish what is being done on the FTP server.

LOGGING CAPABILITIES

Recording information in the system logs is a requirement for proper monitoring of transfers and activities conducted on the FTP server. There are a number of directives that affect logging, and each is presented in this section. Normally, only connections to the FTP server are logged. However, using the log commands directive, each command executed by the


user can be captured. This may create a high level of output on a busy FTP server and may not be required. However, it may be advisable to capture traffic for anonymous and guest users specifically. The directive syntax is:

log commands <typelist>

As with other directives, typelist is a combination of real, anonymous, and guest. If the real keyword is used, logging is done for users accessing FTP through their real accounts. Anonymous logs all commands performed by anonymous users, while guest matches users identified using the guestgroup or guestuser directives. Consider the line:

log commands guest,anonymous

which results in all commands performed by anonymous and guest users being logged. This can be useful for later analysis, to see whether automated jobs are being properly performed and what files are uploaded or downloaded. Like the log commands directive, log transfers performs a similar function, except that it records all file transfers for a given class of users. The directive is stated as:

log transfers <typelist> <directions>

The directions argument is inbound or outbound; both arguments can be given to log transfers in both directions. For clarity, inbound transfers are files transferred to the server (uploads), and outbound transfers are files transferred from the server (downloads). The typelist argument again consists of real, anonymous, and guest. It is essential to log not only the authorized functions, but also the commands and requests made by the user that are denied due to security requirements. For example, if there are restrictions placed on retrieving the password file, it is desirable to record attempts as security events. This is accomplished for real, anonymous, and guest users using the log security directive, as in:

log security <typelist>

If rename is a restricted command on the FTP server, the log security directive results in the following entries:

Feb 11 20:44:02 poweredge ftpd[23516]: RNFR dayo.wav
Feb 11 20:44:02 poweredge ftpd[23516]: RNTO day-o.wav
Feb 11 20:44:02 poweredge ftpd[23516]: systemx of localhost.home.com
[127.0.0.1] tried to rename /var/ftp/systemx/dayo.wav to
/var/ftp/systemx/day-o.wav

This identifies the user who tried to rename the file, the host that the user connected from, and the original and desired filenames. With this information, the


system administrator or systems security personnel can investigate the situation. Downloading information from the FTP server is controlled with the noretrieve clause in the /etc/ftpaccess file. It is also possible to limit uploads to specific directories. This may not be required, depending on the system configuration. A separate entry for each directory to which one wishes to allow uploads is highly recommended. The syntax is:

upload [absolute|relative] [class=<classname>]... [-] <root-dir> <dirglob> <yes|no> <owner> <group> <mode> ["dirs"|"nodirs"] [<d_mode>]

This looks overly complicated, but it is in fact relatively simple: it names a directory and states whether it permits or denies uploads. Consider the following entry:

upload /var/ftp /incoming yes ftpadmin ftpadmin 0440 nodirs

This means that, for a user whose home directory is /var/ftp, uploads are allowed to the incoming directory. Uploaded files have their owner and group changed to ftpadmin and their permissions set to read-only (0440). Finally, the creation of directories is not allowed. In this manner, users can be restricted in the directories to which they can upload files. Directory creation is allowed by default, so one must disable it if required. For example, suppose one has a user on the system with the following password file entry:

chare:x:500:500:Chris Hare:/home/chare:/bin/bash

To prevent the person with this userid from being able to upload files to his home directory, simply add the line:

upload /home/chare no

to the /etc/ftpaccess file. This prevents the user chare from being able to upload files to his home directory. However, bear in mind that this has little effect if this is a real user, because real users can upload files to any directory they have write permission to. The upload clause is best used with anonymous and guest users. Note: the wu-ftpd server denies anonymous uploads by default. To see the full effect of the upload clause, one must combine its use with a guest account, as illustrated with the systemx account shown here:

systemx:x:503:503:FTP access from System X:/home/systemx/./:/bin/false


Note the home directory path in this password file entry. This entry cannot be made when the user account is created; the password file must be edited afterward. The '/./' is used by wu-ftpd to establish the chroot environment. In this case, the user is placed into his home directory, /home/systemx, which is then used as the base for his chroot file system. At this point, the guest user can see nothing on the system other than what is in his home directory. Using the upload clause of

upload /home/chare yes

means the user can upload files to his home directory. When coupled with the noretrieve clause discussed earlier, it is possible to put a high degree of control around the user.

THE COMPLETE /etc/ftpaccess FILE

The discussion thus far has focused on a number of the control directives available in the wu-ftpd FTP server. These directives need not appear in any particular order. However, to further demonstrate the directives and the relationships between them, a complete /etc/ftpaccess file is illustrated in Exhibit 2-6.

REVISITING THE SCENARIOS

Recall the scenarios from the beginning of this chapter. This section reviews each scenario and defines an example configuration to achieve it.

Scenario A

A user named Bob accesses a UNIX system through an application that is a replacement for his normal UNIX log-in shell. Bob has no need for, and does not have, direct UNIX command-line access. While using the application, Bob creates reports or other output that he must retrieve for analysis. The application saves this data in either Bob's home directory or a common directory for all application users. Bob may or may not require the ability to put files onto the application server. The requirements break down as follows:

• Bob requires FTP access to the target server.
• Bob requires access to a restricted number of directories, possibly one or two.
• Bob may or may not require the ability to upload files to the server.

Bob requires the ability to log into the FTP server and access several directories to retrieve files. The easiest way to do this is to deny retrieval for the entire system by adding a line to /etc/ftpaccess:

noretrieve /


Exhibit 2-6. The /etc/ftpaccess file.

#
# Define the user classes
#
class all real,guest *
class anonymous anonymous *
class real real *
#
# Deny connections from systems with no reverse DNS
#
deny !nameservd /var/ftp/.deny
#
# What is the email address of the server administrator.
# Make sure someone reads this from time to time.
#
email [email protected]
#
# How many login attempts can be made before logging an
# error message and terminating the connection?
#
loginfails 5
greeting terse
readme README* login
readme README* cwd=*
#
# Display the following message at login
#
message /var/ftp/welcome.msg login
banner /var/ftp/warning.msg
#
# display the following message when entering the directory
#
message .message cwd=*
#
# ACCESS CONTROLS
#
# What is the default umask to apply if no other matching
# directive exists
#
defumask 022
chmod no guest,anonymous
delete no guest,anonymous
overwrite no guest,anonymous
rename no guest,anonymous
#
# remove all permissions except for the owner if the user
# is a member of the real class
#
defumask 077 real
guestuser systemx
realuser chare
#
# establish timeouts
#
timeout accept 120
timeout connect 120
timeout data 1200
timeout idle 900
timeout maxidle 1200
#
# establish non-retrieval
#
# noretrieve passwd
# noretrieve shadow
# noretrieve .profile
# noretrieve .netrc
# noretrieve .rhosts
# noretrieve .cshrc
# noretrieve profile
# noretrieve core
# noretrieve .htaccess
# noretrieve /etc
# noretrieve /bin
# noretrieve /sbin
noretrieve /
allow-retrieve /tmp
upload /home/systemx / no
#
# Logging
#
log commands anonymous,guest,real
log transfers anonymous,guest,real inbound,outbound
log security anonymous,real,guest
compress yes all
tar yes all
shutdown /etc/shutmsg
passwd-check rfc822 warn

This marks every file and directory as nonretrievable. To allow Bob to get the files he needs, one must explicitly mark those files or directories as retrievable. This is done using the allow-retrieve directive, which has exactly the same syntax as the noretrieve directive, except that the named file or directory becomes retrievable. Assume that Bob needs to retrieve files from the /tmp directory. Allow this using the directive:

allow-retrieve /tmp

When Bob connects to the FTP server and authenticates himself, he cannot get files from his home directory.



ftp> pwd
257 "/home/bob" is current directory.
ftp> get .xauth xauth
200 PORT command successful.
550 /home/chare/.xauth is marked unretrievable

However, Bob can retrieve files from the /tmp directory:

ftp> cd /tmp
250 CWD command successful.
ftp> pwd
257 "/tmp" is current directory.
ftp> get .X0-lock X0lock
200 PORT command successful.
150 Opening ASCII mode data connection for .X0-lock (11 bytes).
226 Transfer complete.
ftp: 12 bytes received in 0.00Seconds 12000.00Kbytes/sec.
ftp>

If Bob must be able to retrieve files from his home directory, an additional allow-retrieve directive is required:

class real real *
allow-retrieve /home/bob class=real

When Bob tries to retrieve a file from anywhere other than /tmp or his home directory, access is denied. Additionally, it may be necessary to limit Bob's ability to upload files. If a user requires the ability to upload files, no additional configuration is required, as the default action for the FTP server is to allow uploads for real users. If one wants to prohibit uploads to Bob's home directory, use the upload directive:

upload /home/bob / no

This directive blocks uploads to Bob's home directory while leaving the server's defaults in place elsewhere. The objective of Scenario A has been achieved.

Scenario B

Other application users in the environment illustrated in Scenario A require no FTP access whatsoever. Therefore, it is necessary to prevent them from connecting to the application server using FTP.


This is done by adding those users to the /etc/ftpusers file. Recall that this file lists a single user per line, each of whom is denied FTP access. Additionally, it may be advisable to deny anonymous FTP access.

Scenario C

The same application used by the users in Scenarios A and B regularly dumps data to move to another system. The use of hard-coded passwords in scripts is not advisable because the scripts must be readable for them to be executed properly. This may expose the passwords to unauthorized users and allow them to access the target system. Additionally, the use of hard-coded passwords makes it difficult to change the password on a regular basis, because every script using the password must be changed. A further requirement is to protect the data once stored on the remote system, to limit the possibility of unauthorized access, retrieval, and modification of the data. Accomplishing this requires the creation of a guest user account on the system. This account will not support a log-in and will be restricted in its FTP abilities. For example, create a UNIX account on the FTP server named after the source host, such as systemx. The password is established as a complex string; with the other compensating controls, the protection on the password itself does not need to be as stringent. Recall from the earlier discussion that the account resembles:

systemx:x:503:503:FTP access from System X:/home/systemx/./:/bin/false

Also recall that the home directory entry establishes both the real user home directory and the FTP chroot directory. Using the upload command

upload /home/systemx / no

means that the systemx user cannot upload files to the home directory. However, this is not the desired function in this case. In this scenario, one wants to allow the remote system to transfer files to the FTP server. However, one does not want to allow for downloads from the FTP server. To do this, the command noretrieve / upload /home/systemx / yes

prevents downloads and allows uploads to the FTP server. One can further restrict access by controlling the ability to rename, overwrite, change permissions on, and delete a file, using the appropriate directives in the /etc/ftpaccess file:


chmod no guest,anonymous
delete no guest,anonymous
overwrite no guest,anonymous
rename no guest,anonymous
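The automated job on System X then needs nothing more than an FTP client that logs in and stores the dump. A minimal sketch using Python's standard ftplib (hostname, password, and file path are placeholders, not values from the chapter) might be:

```python
import os
from ftplib import FTP  # standard library FTP client

def push_dump(ftp, local_path):
    """Upload one dump file over an already-connected FTP session.

    Under the guest restrictions above, STOR is essentially the only
    operation the systemx account can perform, so any response other
    than a 226 completion means the job should retry or raise an alert.
    """
    name = os.path.basename(local_path)
    with open(local_path, "rb") as fh:
        return ftp.storbinary("STOR " + name, fh)

def run_job():
    # Placeholders for the real System X job. The password is
    # hard-coded, which the compensating controls in this scenario
    # are intended to make an acceptable risk.
    ftp = FTP("ftp.example.com")
    ftp.login("systemx", "a-long-complex-string")
    try:
        push_dump(ftp, "/var/dumps/nightly.dat")
    finally:
        ftp.quit()
```

Because the account can neither list, retrieve, rename, nor overwrite files, a script of this shape can do nothing beyond depositing its dump even if the password leaks.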

Because the user account has no interactive privileges on the system and has restricted privileges on the FTP server, there is little risk involved in using a hard-coded password. While using a hard-coded password is not considered advisable, there are sufficient controls in place to compensate for it. Consider the following controls protecting the access:

• The user cannot retrieve files from the system.
• The user can upload files.
• The user cannot see what files are on the system, and thus cannot determine filenames in order to block the system from putting the correct data on the server.
• The user cannot change file permissions.
• The user cannot delete files.
• The user cannot overwrite existing files.
• The user cannot rename files.
• The user cannot establish an interactive session.
• FTP access is logged.

With these compensating controls to address the final possibility (access to the system and its data through a password attack or by guessing the password), it will be sufficiently difficult to compromise the integrity of the data. The requirements defined in the scenario have been fulfilled.

SUMMARY

This discussion has shown how one can control access to an FTP server and allow controlled access for downloads or uploads, permitting the safe exchange of information for interactive and automated FTP sessions. The extended functionality offered by the wu-ftpd FTP server provides extensive access, preventative, and detective controls to limit who can access the FTP server, to restrict what they can do when they connect, and to record their actions.

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.

AU1518Ch03Frame Page 43 Thursday, November 14, 2002 6:26 PM

Chapter 3

The Case for Privacy

Michael J. Corby

Any revelation of a secret happens by the mistake of [someone] who shared it in confidence. — La Bruyere, 1645–1694

It is probably safe to say that since the beginning of communication, back in prehistoric times, there have been things that were meant to be kept private. From the location of the best fishing to the secret passage into the cave next door, certain facts were reserved for a few knowledgeable friends. Some facts may have been so private that only one person in the world knew them. We have made "societal rules" around a variety of things that we want to keep private or share only among a few, and the concept of privacy expectations comes with our unwritten social code. And wherever there has been a code of privacy, there has been concern over its violation. Have computers brought this on? Certainly not! Maintaining privacy has always been important, and even more important have been the methods used to keep private data secret. Today, in our wired society, we still face the same primary threat to privacy that has existed for centuries: the mistakes and carelessness of the individuals entrusted to preserve privacy — sometimes even the "owner" of the data. In the past few years, and heightened within the past few months, we have become more attuned to the cry — no, the public outcry — regarding the "loss of privacy" forced upon us by the information age. Resolving this thorny problem requires that we re-examine the way we design and operate our networked systems and, most importantly, that we rethink the way we allocate control to the rightful owners of the information we communicate and store. Finally, we need to be careful about how we view the data that we provide and for which we are custodians.

PRIVACY AND CONTROL

The fact that data is being sent, printed, recorded, and shared is not the real concern of privacy. The real concern is that some data has been

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



implied, by social judgment, to be private, for sharing only by and with the approval of its owner. If a bank balance is U.S.$1240, that is an interesting fact. If it happens to be my account, that is private information. I have, by virtue of my agreement with the bank, given them the right to keep track of my balance and to provide it to me for the purpose of keeping me informed and maintaining a control point with which I can judge their accuracy. I did not give them permission to share that balance with other people indiscriminately, nor did I give them permission to use that balance even subtly to communicate my standing in relation to others (i.e., publish a list of account holders sorted by balance). The focal points of the issue of privacy are twofold:

• How is the data classified as private?
• What can be done to preserve the owner's (my) expectations of privacy?

Neither of these is significantly more challenging than, for example, sending digital pictures and sound over a telephone line. Why, then, has this subject caused such a stir in the technology community? This chapter sheds some light on that question and then presents an organized approach to resolving the procedural challenges of maintaining data privacy.

RUDIMENTS OF PRIVACY

One place to start examining this issue is with a key subset of the first point on classifying data as private: What, exactly, is the data we are talking about? Start with the obvious: private data includes those facts that I can recognize as belonging to me, and which reveal more about myself or my behavior than I would care to reveal. This covers three types of data loosely included in the privacy concerns of information technology (IT). The three types of data shown in Exhibit 3-1 are static, dynamic, and derived data.

Static Data

Static data is pretty easy to describe. It kind of sits there in front of us. It does not move. It does not change (very often).
Information that describes who we are, significant property identifiers, and other tangible elements are generally static. This information can of course take any form. It can be entered into a computer by a keyboard; it can be handwritten on a piece of paper or on a form; it can be photographed or created as a result of using a biological interface such as a fingerprint pad, retina scanner, voice or facial image recorder, or pretty much any way that information can be retained. It does not need to describe an animate object. It can also identify something we have. Account numbers, birth certificates, passport numbers, and employee numbers are all concepts that can be recorded and would generally be considered static data.


Exhibit 3-1. Types of private data.

1. Static data:
   a. Who we are:
      i. Bio-identity (fingerprints, race, gender, height, weight)
      ii. Financial identity (bank accounts, credit card numbers)
      iii. Legal identity (Social Security number, driver's license, birth certificate, passport)
      iv. Social identity (church, auto clubs, ethnicity)
   b. What we have:
      i. Property (buildings, automobiles, boats, etc.)
      ii. Non-real property (insurance policies, employee agreements)
2. Dynamic data:
   a. Transactions (financial, travel, activities)
   b. How we live (restaurants, sporting events)
   c. Where we are (toll cards, cell phone records)
3. Derived data:
   a. Financial behavior (market analysis):
      i. Trends and changes (month-to-month variance against baseline)
      ii. Perceived response to new offerings (match with experience)
   b. Social behavior (profiling):
      i. Behavior statistics (drug use, violations of law, family traits)

In most instances, we get to control the initial creation of static data. Because we are the ones identifying ourselves by name, account number, address, or driver's license number, or by speaking into a voice recorder or having our retina or face scanned or photographed, we usually know when a new record of our static data is being made. As we will see later, we need to be concerned about the privacy of this data under three conditions: when we participate in its creation, when it is copied from its original form to a duplicate form, and when it is covertly created (created without our knowledge), such as in secretly recorded conversations or by hidden cameras.

Dynamic Data

Dynamic data is also easy to identify and describe, but somewhat more difficult to control. Records of transactions we initiate constitute the bulk of dynamic data, and it is created much more frequently than static data. Every charge card transaction, telephone call, and bank transaction adds to the collection of dynamic data. Even when we drive on toll roads or watch television programs, information can be recorded without our doing anything special. These types of transactions are more difficult for us to control. We may know that a computerized recording of the event is being made, but we often do not know what that record contains, nor whether it contains more information than we suspect. Take, for example, purchasing a pair of shoes. You walk into a shoe store, try on various styles and sizes, make your selection, pay for the shoes, and walk out with your


purchase in hand. You may have the copy of your charge card transaction, and you know that somewhere in the store's data files, one pair of shoes has been removed from inventory and the price you just paid has been added to the cash balance. But what else might have been recorded? Did the sales clerk, for example, record your approximate age or ethnic or racial profile, or make a judgment as to your income level? Did you have children with you? Were you wearing a wedding band? What other general observations were made about you when the shoes were purchased? These items are of great importance in helping the shoe store replenish its supply of shoes, determine whether it has attracted the type of customer it intended to attract, and analyze whether it is, in general, serving a growing or shrinking segment of the population. Without even knowing it, some information that you may consider private may have been used without your knowledge, simply by the act of buying a new pair of shoes.

Derived Data

Finally, derived data is created by analyzing groups of dynamic transactions over time to build a profile of your behavior. Your standard way of living out your day, week, and month may be known by others even better than you know it yourself. For example, you may, without even planning it, have dinner at a restaurant 22 Thursdays during the year. On the other six days of the week, you may dine out only eight times in total. If you and others in your area fall into a given pattern, the restaurant community may begin to offer "specials" on Tuesday, or raise prices slightly on Thursdays to accommodate the increased demand. In this case, your behavior is being recorded and used by your transaction partners in ways you do not know of or approve. If you use an electronic toll recorder, as has become popular in many U.S.
states, do you know whether they are also computing the time it took you to enter and exit the highway, and consequently your average speed? Most often, this derived data is being collected without even a hint to us, and certainly without our expressed permission.

PRESERVING PRIVACY

One place to start examining this issue is with a key subset of the first point on classifying data as private: What, exactly, is the data we are talking about? Start with the obvious: private data includes those items that we believe belong to us exclusively, and whose disclosure is not necessary for us to receive the product or service we wish to receive. To examine privacy in the context of computer technology today, we need to examine the following four questions:

1. Who owns the private data?
2. Who is responsible for security and accuracy?


3. Who decides how it can be used?
4. Does the owner need to be told when it is used or compromised?

You already have zero privacy. Get over it.
— Scott McNealy, Chairman, Sun Microsystems, 1999

Start with the first question, about ownership. Cyber-consumers love to get offers tailored to them. Over 63 percent of the buying public in the United States bought from direct mail in 1998. Companies invest heavily in personalizing their marketing approach because it works. So what makes it so successful? By allowing the seller to know some fairly personal data about your preferences, a trust relationship is implied. (Remember that word "trust"; it will surface later.) The "real deal" is this: vendors do not know about your interests because they are your friends who want to make you happy. They want to take your trust and put together something private that will result in their product winding up in your home or office. Plain and simple: economics. And what does this cost them? If they have their way, practically nothing. You have given up your own private information, which they have used to exploit your buying habits or personal preferences. Once you give up ownership, you have let the cat out of the bag; they now have the opportunity to do whatever they want with it. "Are there any controls?" That brings us to the second question. The most basic control is to ask you clearly whether you want to give up something you own. The design method of having you "opt in" to data collection gives you the opportunity to look further into the collector's privacy protection methods, its stated or implied process for sharing (or not sharing) your information with other organizations, and how your private information is to be removed. With this simple verification of your agreement in place, 85 percent of surveyed consumers would approve of having their profile used for marketing. And once they ask, they have accepted responsibility for protecting your privacy. You must do some work to verify that they can keep their promise, but at least you know they have accepted some responsibility (their privacy policy should tell you how much). Their very mission will help ensure accuracy.
No product vendor wants to build its sales campaign on inaccurate data — at least not a second time. Who decides use? If done right, both you and the marketer decide, based on the policy. If you are not sure whether they are going to misuse their data, you can test them. Use a nickname, or some identifying initial, to track where your profile is being used. I once tested an online information service by using my full middle name instead of an initial. Lo and behold, I discovered that my "new" name ended up on over 30 different mailing lists, and it took me several months to be removed from most of them. Some are still using my name, despite my repeated attempts to stop the vendors from


doing so. Your method for deciding whom to trust (there is that word again) depends on your preferences and the genre of services and products you are interested in buying. Vendors also tend to reflect the preferences of their customers: those who sell cheap, ultra-low-cost commodities have a different approach than those who sell big-ticket luxuries to a well-educated executive clientele. Be aware and recognize the risks. Special privacy concerns have been raised in three areas: data on children, medical information, and financial information (including credit/debit cards). Be especially aware if these categories of data are collected, and hold the collector to a more stringent set of protection standards. You, the public, are the judge. If your data is compromised, it is doubtful that the collector will even know. This situation is unfortunate; even when a compromise is known, disclosing it could cost the collector its business, and the question of ethics comes into play. I actually know of a company that had its customer credit card files "stolen" by hackers. Rather than notify the affected customers and potentially cause a mass exodus to other vendors, the company decided to keep quiet. That company may only be buying some time. It is a far greater mistake to know that customers are at risk and not tell them to check their records carefully than it is to have missed a technical component that allowed the system to be compromised. The bottom line is that you are expected to report errors, inconsistencies, and suspected privacy violations to the collector; if you do, you have a right to expect immediate correction.

WHERE IS THE DATA TO BE PROTECTED?

Much ado has been made about the encryption of data while connected to the Internet. This is a concern; but to be really responsive to privacy directives, more than transmitting encrypted data is required. For a real privacy policy to be developed, the data must be protected when it is:

Captured Transmitted Stored Processed Archived

That means more than using SSL or sending data over a VPN. It also goes beyond strong authentication using biometrics or public/private keys. It means developing a privacy architecture that protects data when it is sent, even internally; while stored in databases, with access isolated from those who can see other data in the same database; and while it is being stored in program work areas. All these issues can be solved with technology and should be discussed with the appropriate network, systems development, or data center managers. Despite all best efforts to make technology 48

AU1518Ch03Frame Page 49 Thursday, November 14, 2002 6:26 PM

The Case for Privacy respond to the issues of privacy, the most effective use of resources and effort is in developing work habits that facilitate data privacy protection. GOOD WORK HABITS Privacy does not just happen. Everyone has certain responsibilities when it comes to protecting the privacy of one’s own data or the data that belongs to others. In some cases, the technology exists to make that responsibility easier to carry out. Vendor innovations continue to make this technology more responsive, for both data “handlers” and data “owners.” For the owners, smart cards carry a record of personal activity that never leaves the wallet-sized token itself. For example, smart cards can be used to record selection of services (video, phone, etc.) without divulging preferences. They can maintain complex medical information (e.g., health, drug interactions) and can store technical information in the form of x-rays, nuclear exposure time (for those working in the nuclear industry), and tanning time (for those who do not). For the handlers, smart cards can record electronic courier activities when data is moved from one place to another. They can enforce protection of secret data and provide proper authentication, either using a biometric such as a fingerprint or a traditional personal identification number (PIN). There are even cards that can scan a person’s facial image and compare it to a digitized photo stored on the card. They are valuable in providing a digital signature that does not reside on one’s office PC, subject to theft or compromise by office procedures that are less than effective. In addition to technology, privacy can be afforded through diligent use of traditional data protection methods. Policies can develop into habits that force employees to understand the sensitivity of what they have access to on their desktops and personal storage areas. 
Common behavior such as protecting one’s territory before leaving that area and when returning to one’s area is as important as protecting privacy while in one’s area. Stories about privacy, the compromise of personal data, and the legislation (both U.S. and international) being enacted or drafted are appearing daily. Some are redundant and some are downright scary. One’s mission is to avoid becoming one of those stories. RECOMMENDATIONS For all 21st-century organizations (and all people who work in those organizations), a privacy policy is a must and adherence to it is expected. Here are several closing tips: 49

1. If your organization has a privacy coordinator (or chief privacy officer), contact that person, or a compliance person, whenever you have questions. Keep their numbers handy.
2. Be aware of the world around you. Monitor national and international developments, as well as all local laws.
3. Be proactive; anticipate privacy issues before they become a crisis.
4. Much money can be made or lost by staying ahead of the demand for privacy, or by being victimized by those who capitalize on your shortcomings.
5. Preserve your reputation and that of your organization. As with all bad news, violations of privacy will spread like wildfire. Everyone is best served by collective attention to maintaining an atmosphere of respect for the data being handled.
6. Communicate privacy throughout all areas of your organization.
7. Embed privacy in existing processes, even older legacy applications.
8. Provide notification, and allow your customers/clients/constituents to opt out or opt in.
9. Conduct audits and consumer inquiries.
10. Create a positive personalization image of what you are doing (how does this really benefit the data owner?).
11. Use your excellent privacy policies and behavior as a competitive edge.

ABOUT THE AUTHOR

Michael Corby is president of QinetiQ Trusted Information Management, Inc. He was most recently vice president of the Netigy Global Security Practice, CIO for Bain & Company and the Riley Stoker division of Ashland Oil, and founder of M Corby & Associates, Inc., a regional consulting firm in continuous operation since 1989. He has more than 30 years of experience in the information security field and has been a senior executive in several leading IT and security consulting organizations. He was a founding officer of (ISC)2 Inc., developer of the CISSP program, and was named the first recipient of the CSI Lifetime Achievement Award. A frequent speaker and prolific author, Corby graduated from WPI in 1972 with a degree in electrical engineering.


Chapter 4

Breaking News: The Latest Hacker Attacks and Defenses

Edward Skoudis

Computer attackers continue to hone their techniques, getting ever better at undermining our systems and networks. As the computer technologies we use advance, these attackers find new and nastier ways to achieve their goals: unauthorized system access, theft of sensitive data, and alteration of information. This chapter explores some of the recent trends in computer attacks and presents tips for securing your systems. To create effective defenses, we need to understand the latest tools and techniques our adversaries are throwing at our networks. With that in mind, we will analyze four areas of computer attack that have received significant attention in the past 12 months: wireless LAN attacks, active and passive operating system fingerprinting, worms, and sniffing backdoors.

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC

WIRELESS LAN ATTACKS (WAR DRIVING)

In the past year, a very large number of companies have deployed wireless LANs, using technology based on the IEEE 802.11b protocol, informally known as Wi-Fi. Wireless LANs offer tremendous benefits from a usability and productivity perspective: a user can access the network from a conference room, while sitting in an associate’s cubicle, or while wandering the halls. Unfortunately, wireless LANs are often one of the least secure methods of accessing an organization’s network. The technology is becoming very inexpensive, with a decent access point costing less than U.S.$200 and wireless cards for a laptop or PC costing below U.S.$100.

In addition to affordability, setting up an access point is remarkably simple (if security is ignored, that is). Most access points can be plugged into the corporate network and configured in a minute by a completely inexperienced user. Because of their low cost and ease of (insecure) use, wireless LANs are being rapidly deployed in most networks today, whether or not upper management or even IT personnel realize or admit it. These wireless LANs are usually completely unsecured because the inexperienced employees setting them up have no idea of, or interest in, activating the security features of their equipment.

In our consulting services, we often meet with CIOs or information security officers to discuss issues associated with information security. Given the widespread use of wireless LANs, we usually ask these upper-level managers what their organization is doing to secure its wireless infrastructure. We are often given the answer, “We don’t have to worry about it because we haven’t yet deployed a wireless infrastructure.” After hearing that stock answer, we conduct a simple wireless LAN assessment (with the CIO’s permission, of course). We walk down a hall with a wireless card, a laptop, and wireless LAN detection software. Almost always, we find renegade, completely unsecured wireless networks in use that were set up by employees outside of formal IT roles. The situation is similar to what we saw with Internet technology a decade ago. Back then, we would ask corporate officers what their organizations were doing to secure their Internet gateways. They would say that they did not have one, but we would quickly discover that the organization was laced with homegrown Internet connectivity deployed without regard to security.

Network Stumbling, War Driving, and War Walking

Attackers have taken to the streets in their search for convenient ways to gain access to organizations’ wireless networks. By getting within a few hundred yards of a wireless access point, an attacker can detect its presence and, if the access point has not been properly secured, possibly gain access to the target network. The process of searching for wireless access points is known in some circles as network stumbling. Alternatively, using an automobile to drive around town looking for wireless access points is known as war driving.
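The kind of result described in the consulting anecdote above (renegade, unsecured access points turning up on a simple walk-through) can be triaged mechanically once scan results are collected. A minimal sketch in Python; the (ssid, encryption_enabled) tuple format here is invented for illustration and is not the output of any real tool:

```python
# Toy triage of war-driving results: flag access points that advertise
# no link-layer encryption. The input format is an assumption made for
# this sketch, not the format produced by NetStumbler or any other tool.
def unsecured_aps(scan_results):
    """Return the SSIDs of access points reporting no encryption."""
    return [ssid for ssid, encrypted in scan_results if not encrypted]

if __name__ == "__main__":
    results = [
        ("conference-room-ap", False),  # renegade AP, wide open
        ("it-approved-ap", True),
        ("ACME-Corp-HQ", False),        # SSID names the company: a juicy target
    ]
    print(unsecured_aps(results))  # ['conference-room-ap', 'ACME-Corp-HQ']
```

In a real assessment, the interesting follow-up is matching each flagged SSID against the inventory of registered access points; anything unmatched is a candidate renegade.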
As you might guess, the phrases war walking and even war biking have been coined to describe the search for wireless access points using other modes of transportation. I suppose it is only a matter of time before someone attempts war hang gliding.

When network stumbling, attackers set up a rig consisting of a laptop PC, a wireless card, and an antenna for discovering wireless access points. Additionally, a global positioning system (GPS) unit can help record the geographic location of discovered access points for later attack. Numerous software tools are available for this task as well. One of the most popular is NetStumbler (available at www.netstumbler.com), an easy-to-use GUI-based tool written by Marius Milner. NetStumbler runs on Windows systems, including Win95, 98, and 2000, and a PocketPC version called MiniStumbler has been released. For UNIX, several war-driving scripts have been released, with Wi-scan (available at www.dis.org/wl/) among the most popular.

This wireless LAN discovery process works because most access points respond to a broadcast request from a wireless card, indicating their presence and their service set identifier (SSID). The SSID acts like a name for the wireless access point so that users can differentiate between wireless LANs in close proximity. However, the SSID provides no real security. Some users think that a difficult-to-guess SSID will buy them extra security. They are wrong. Even if the access point is configured not to respond to a broadcast request for an SSID, the SSIDs are sent in cleartext and can be intercepted. On a recent war-driving trip in a taxi in Manhattan, an attacker discovered 455 access points in one hour. Some of these access points had their SSIDs set to the name of the company using them, attracting the attention of attackers looking for juicy targets.

After discovering target networks, many attackers will attempt to get an IP address on the network using the Dynamic Host Configuration Protocol (DHCP). Most wireless LANs freely give out addresses to anyone asking for them. After getting an address via DHCP, the attacker will attempt to access the LAN itself. Some LANs use the Wired Equivalent Privacy (WEP) protocol to provide cryptographic authentication and confidentiality. While WEP greatly improves the security of a wireless LAN, it has significant vulnerabilities that could allow an attacker to determine an access point’s keys. An attacker can crack WEP keys by gathering a significant amount of traffic (usually over 500 MB) using a tool such as AirSnort (available at airsnort.shmoo.com/).

Defending against Wireless LAN Attacks

So, how do you defend against wireless LAN attacks in your environment?
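For context on the 500-MB figure above: on a heavily used network, that much traffic can be gathered surprisingly quickly. A back-of-the-envelope calculation; the 5 Mbit/s effective-throughput figure is an illustrative assumption, not from the text (raw 802.11b signaling is 11 Mbit/s, but real-world throughput is lower):

```python
# Rough time to passively capture ~500 MB of WEP-encrypted traffic.
MB_NEEDED = 500                 # traffic volume cited in the text
EFFECTIVE_MBIT_PER_SEC = 5      # assumed real-world 802.11b throughput

megabits_needed = MB_NEEDED * 8                     # 4000 Mbit
seconds = megabits_needed / EFFECTIVE_MBIT_PER_SEC  # 800 s
minutes = seconds / 60

print(f"roughly {minutes:.0f} minutes on a saturated link")
```

On a lightly loaded network, gathering the same volume could take days instead of minutes, which is one argument for the periodic WEP rekeying the chapter recommends: a key retired before enough traffic accumulates is much harder to recover.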
There are several levels of security that you could implement for your wireless LAN, ranging from totally unsecured to a strong level of protection. Techniques for securing your wireless LAN include:

• Set the SSID to an obscure value. As described above, SSIDs are not a security feature and should not be treated as such. Setting the SSID to an obscure value adds very little from a security perspective. However, some access points can be configured to prohibit responses to SSID broadcast requests. If your access point offers that capability, you should activate it.
• Use MAC address filtering. Each wireless card has a unique hardware-level address called the media access control (MAC) address. A wireless access point can be configured so that it will allow traffic only from specific MAC addresses. While this MAC filtering does improve security a bit, it is important to note that an attacker can spoof wireless card MAC addresses.
• Use WEP, with periodic rekeying. While WEP keys can be broken using AirSnort, the technology significantly improves the security of a wireless LAN. Some vendors even support periodic generation of new WEP keys after a given timeout. If an attacker does crack a WEP key, it is likely that the attacker has broken an old key while a newer key is already in use on the network. If your access points support dynamic rotation of WEP keys, such as Cisco’s Aironet security solution, activate this feature.
• Use a virtual private network (VPN). Because SSID, MAC, and even WEP solutions have the various vulnerabilities highlighted above, the best method for securing wireless LANs is to use a VPN. VPNs provide end-to-end security without regard to the unsecured wireless network used for transporting the communication. The VPN client encrypts all data sent from the PC before it goes into the air. The wireless access point simply collects encrypted streams of bits and forwards them to a VPN gateway before they can get access to the internal network. In this way, the VPN ensures that all data is strongly encrypted and authenticated before entering the internal network.

Of course, before implementing these technical solutions, you should establish specific policies for the use of wireless LANs in your environment. The particular wireless LAN security policies followed by an organization depend heavily on that organization’s need for security. The following list, which I wrote with John Burgess of Predictive Systems, contains recommended security policies that could apply in many organizations. This list can be used as a starting point and pared down or built up to meet specific needs.

• All wireless access points/base stations connected to the corporate network must be registered and approved by the organization’s computer security team.
These access points/base stations are subject to periodic penetration tests and audits. Unregistered access points/base stations on the corporate network are strictly forbidden.
• All wireless network interface cards (i.e., PC cards) used in corporate laptop or desktop computers must be registered with the corporate security team.
• All wireless LAN access must use corporate-approved vendor products and security configurations.
• All computers with wireless LAN devices must utilize a corporate-approved virtual private network (VPN) for communication across the wireless link. The VPN will authenticate users and encrypt all network traffic.
• Wireless access points/base stations must be deployed so that all wireless traffic is directed through a VPN device before entering the corporate network. The VPN device should be configured to drop all unauthenticated and unencrypted traffic.

While the policies listed above fit the majority of organizations, the policies listed below may or may not fit, depending on the technical level of employees and on how detailed an organization’s security policy and guidelines are:

• The wireless SSID provides no security and should not be used as a password. Furthermore, wireless card MAC addresses can be easily gathered and spoofed by an attacker. Therefore, security schemes should not be based solely on filtering wireless MAC addresses, because such filtering does not provide adequate protection for most uses.
• WEP keys can be broken. WEP may be used to identify users, but only together with a VPN solution.
• The transmit power for access points/base stations near a building’s perimeter (such as near exterior walls or top floors) should be turned down. Alternatively, wireless systems in these areas could use directional antennas to control signal bleed out of the building.

With these types of policies in place and a suitable VPN solution securing all traffic, the security of an organization’s wireless infrastructure can be vastly increased.

ACTIVE AND PASSIVE OPERATING SYSTEM FINGERPRINTING

Once access is gained to a network (through network stumbling, a renegade unsecured modem, or a weakness in an application or firewall), attackers usually attempt to learn about the target environment so they can hone their attacks. In particular, attackers often focus on discovering the operating system (OS) type of their targets. Armed with the OS type, attackers can search for specific vulnerabilities of those operating systems to maximize the effectiveness of their attacks.
To determine OS types across a network, attackers use two techniques: (1) the familiar, time-tested approach called active OS fingerprinting, and (2) a technique with new-found popularity, passive OS fingerprinting. We will explore each technique in more detail.

Active OS Fingerprinting

The Internet Engineering Task Force (IETF) defines how TCP/IP and related protocols should work. In an ever-growing list of Requests for Comments (RFCs), this group specifies how systems should respond when specific types of packets are sent to them. For example, if someone sends a TCP SYN packet to a listening port, the IETF says that a SYN-ACK packet should be sent in response. While the IETF has done an amazing job of defining how the protocols we use every day should work, it has not thoroughly defined every case of how the protocols should fail. In other words, the RFCs defining TCP/IP do not handle all of the meaningless or perverse packets that can be sent. For example, what should a system do if it receives a TCP packet with the code bits SYN-FIN-URG-PUSH all set? I presume such a packet means to SYNchronize a new connection, FINish the connection, do this URGently, and PUSH it quickly through the TCP stack. That is nonsense, and a standard response to such a packet has never been devised. Because there is no standard response to this and other malformed packets, different vendors have built their OSs to respond differently to such bizarre cases. For example, a Cisco router will likely send a different response than a Windows NT server for some of these unexpected packets. By sending a variety of malformed packets to a target system and carefully analyzing the responses, an attacker can determine which OS it is running.

An active OS fingerprinting capability has been built into the Nmap port scanner (available at www.insecure.org/nmap). If the OS detection capability is activated, Nmap sends a barrage of unusual packets to the target to see how it responds. Based on this response, Nmap checks a user-customizable database of known signatures to determine the target OS type. Currently, this database houses over 500 known system types.

A more recent addition to the active OS fingerprinting realm is the Xprobe tool by Fyodor Yarochkin and Ofir Arkin. Rather than manipulating the TCP code bits as Nmap does, Xprobe focuses exclusively on the Internet Control Message Protocol (ICMP). ICMP is used to send information associated with an IP-based network, such as ping requests and responses, port-unreachable messages, and instructions to quench the rate of packets sent.
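The contradictory flag combination described above can be written down concretely. In the TCP header, each code bit occupies one bit of the flags field, and nothing stops an attacker from setting combinations the RFCs never anticipated. A minimal sketch (pure Python; it only computes the field value and sends nothing on the wire):

```python
# TCP code bits and their positions in the header's flags field (RFC 793).
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def tcp_flags(*bits):
    """OR individual code bits together into a single flags value."""
    value = 0
    for bit in bits:
        value |= bit
    return value

# A normal connection request sets SYN alone...
assert tcp_flags(SYN) == 0x02
# ...while the nonsensical fingerprinting probe sets SYN, FIN, URG, and PUSH at once.
print(hex(tcp_flags(SYN, FIN, URG, PSH)))  # 0x2b
```

Because no RFC defines the required response to a packet carrying flags 0x2b, each vendor’s stack answers in its own way, and that divergence is precisely what a signature database like Nmap’s catalogs.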
Xprobe sends between one and four specially crafted ICMP messages to the target system. Based on a very carefully constructed logic tree on the sending side, Xprobe can determine the OS type. Xprobe is stealthier than Nmap’s active OS fingerprinting capability because it sends far fewer packets.

Passive OS Fingerprinting

While active OS fingerprinting involves sending packets to a target and analyzing the response, passive OS fingerprinting sends no traffic at all while determining a target’s OS type. Instead, passive OS fingerprinting tools include a sniffer to gather data from a network. By analyzing the particular packet settings captured from the network and consulting a local database, the tool can determine what OS type sent that traffic. This technique is far stealthier than active OS fingerprinting because the attacker sends no data to the target machine. However, the attacker must be in a position to analyze traffic sent from the target system, such as on the same LAN or on a network where the target frequently sends packets.

One of the best passive OS fingerprinting tools is p0f (available at www.stearns.org/p0f/), originally written by Michal Zalewski and now maintained by William Stearns. P0f determines the OS type by analyzing several fields sent in TCP and IP traffic, including the rounded-up initial time-to-live (TTL), window size, maximum segment size, don’t-fragment flag, window-scaling option, and initial packet size. Because different OSs set these initial values to varying levels, p0f can differentiate among 149 different system types.

Defending against Operating System Fingerprinting

To minimize the impact an attacker can have using knowledge of your OS types, you should have a defined program for notification, testing, and implementation of system patches. If you keep your systems patched with the latest security fixes, an attacker will be far less likely to compromise your machines even when they know which OS you are running. One or more people in your organization should be assigned the task of monitoring vendor bulletins and security lists to determine when new patches are released. Furthermore, once patches are identified, they should be thoroughly but quickly tested in a quality assurance environment. After the full functionality of the tested system is verified, the patches should be rolled into production.

While a solid patching process is a must for defending your systems, you may also want to look at some of the work in progress to defeat active OS fingerprinting. Gaël Roualland and Jean-Marc Saffroy wrote the IP Personality patch for Linux systems, available at ippersonality.sourceforge.net/.
This tool allows a system administrator to configure a Linux system running kernel version 2.4 so that it will give any response of the administrator’s choosing to Nmap OS detection. Using this patch, you could make your Linux machine look like a Solaris system, a Macintosh, or even an old Windows machine during an Nmap scan. Although you may not want to put such a patch onto your production systems because of potential interference with critical processes, the technique is certainly worth investigating.

To foil passive OS fingerprinting, you may want to consider the use of a proxy-style firewall. Proxy firewalls do not route packets, so all information about the OS type transmitted in the packet headers is destroyed by the proxy. Proxy firewalls accept a connection from a client and then start a new connection to the server on behalf of that client. All packets on the outside of the firewall will have the OS fingerprints of the firewall itself. Therefore, the OS type of all systems inside the firewall will be masked. Note that this technique does not work for most packet filter firewalls, because packet filters route packets and, therefore, transmit the fingerprint information stored in the packet headers.

RECENT WORM ADVANCES

A computer worm is a self-replicating attack tool that propagates across a network, spreading from vulnerable system to vulnerable system. Because worms use one set of victim machines to scan for and exploit new victims, they spread on an exponential basis. In recent times, we have seen a veritable zoo of computer worms with names like Ramen, L10n, Cheese, Code Red, and Nimda. New worms are being released at a dizzying rate, with a new generation of worm hitting the Internet every two to six months. Worm developers are learning lessons from the successes of each generation of worms and expanding upon them in subsequent attacks. With this evolutionary loop, we are rapidly approaching an era of super worms. Based on recent advances in worm functions and predictions for the future, we will analyze the characteristics of the super worms we are likely to see in the next six months.

Rapidly Spreading Worms

Many of the worms released in the past decade have spread fairly quickly throughout the Internet. In July 2001, Code Red was estimated to have spread to 250,000 systems in about six hours. Fortunately, recent worms have had rather inefficient targeting mechanisms, a weakness that actually impeded their speed. By randomly generating addresses without taking into account the actual distribution of systems in the Internet address space, these worms often wasted time looking for nonexistent systems or scanning machines that were already conquered.

After Code Red, several articles appeared on the Internet describing more efficient techniques for rapid worm distribution. These articles, by Nicholas C. Weaver and by the team of Stuart Staniford, Gary Grim, and Roelof Jonkman, described the hypothetical Warhol and Flash worms, which theoretically could take over all vulnerable systems on the Internet in 15 minutes or even less. Warhol and Flash, which are only mathematical models and not actual worms (yet), are based on the idea of fast-forwarding through an exponential spread. Looking at a graph of infected victims over time for a conventional worm, a hockey-stick pattern appears: things start out slowly as the initial victims succumb, and only after a critical mass of victims is infected does the worm spread rapidly. Warhol and Flash jump past this initial slow phase by prescanning the Internet for vulnerable systems. Through automated scanning techniques from static machines, an attacker can find 100,000 or more vulnerable systems before ever releasing the worm. The attacker then loads these known vulnerable addresses into the worm. As the worm spreads, the addresses of these prescanned vulnerable systems are split up among the segments of the worm propagating across the network. By using this initial set of vulnerable systems, an attacker could infect 99 percent of vulnerable systems on the Internet in less than an hour. Such a worm could conquer the Internet before most people had even heard of the problem.

Multi-Platform Worms

The vast majority of worms we have seen to date focused on a single platform, often Windows or Linux. For example, Nimda simply ripped apart as many Microsoft products as it could, exploiting Internet Explorer, the IIS Web server, Outlook, and Windows file sharing. While it certainly was challenging, Nimda’s Windows-centric approach actually limited its spread; the security community implemented defenses by focusing on repairing Windows systems. While single-platform worms can cause trouble, be on the lookout for worms that are far less discriminating from a platform perspective. New worms will contain exploits for Windows, Solaris, Linux, BSD, HP-UX, AIX, and other operating systems, all built into a single worm. Such worms are even more difficult to eradicate because security personnel and system administrators will have to apply patches in a coordinated fashion to many types of machines. The defense job will be more complex and require more time, allowing the worm to cause more damage.

Morphing and Disguised Worms

Recent worms have been relatively easy to detect, and once spotted, the computer security community has been able to quickly determine their functionality. Once a worm has been isolated in the lab, some brilliant folks have been able to rapidly reverse-engineer its operation to determine how best to defend against it. In the very near future, we will face new worms that are far stealthier and more difficult to analyze.
We will see polymorphic worms, which change their patterns every time they run and spread to a new system. Detection becomes more difficult because the worm essentially recodes itself each time it runs. Additionally, these new worms will encrypt or otherwise obscure much of their own payloads, hiding their functionality until a later time. Reverse-engineering a worm to determine its true functions and purpose will become more difficult because investigators will have to extract the crypto keys or overcome the obfuscation mechanisms before they can really figure out what the worm can do. This time lag in analysis will allow the worm to conquer more systems before adequate defenses are devised.
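The "recodes itself each time it runs" idea can be sketched with the simplest possible obfuscation: XOR-encoding the payload under a key chosen fresh for each copy. This toy is far weaker than a real polymorphic engine and is for illustration only:

```python
import os

def xor_encode(payload: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte key; applying it twice restores the input."""
    return bytes(b ^ key for b in payload)

def make_variant(payload: bytes):
    """Each new 'generation' picks a fresh key, so the bytes stored on disk
    differ from copy to copy even though the behavior is identical."""
    key = os.urandom(1)[0] | 0x01   # keep the key nonzero (zero would be a no-op)
    return xor_encode(payload, key), key

body = b"pretend-worm-body"
variant, key = make_variant(body)
# A naive byte-pattern signature built from `body` will not match `variant`,
# yet the original is trivially recovered at run time:
assert xor_encode(variant, key) == body
```

Real polymorphic code goes much further (mutating the decoder itself, reordering instructions, inserting junk), which is why the text predicts that investigators will have to defeat the obfuscation before they can even begin analysis.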


ACCESS CONTROL SYSTEMS AND METHODOLOGY Zero-Day Exploit Worms The vast majority of worms encountered so far are based on old, off-theshelf exploits to attack systems. Because they have used old attacks, a patch has been readily available for administrators to fix their machines quickly after infection or to prevent infection in the first place. Using our familiar example, Code Red exploited systems using a flaw in Microsoft’s IIS Web server that had been known for over a month and for which a patch had already been published. In the near future, we are likely going to see a worm that uses brand-new exploits for which no patch exists. Because they are brand new, such attacks are sometimes referred to as Zero-Day Exploits. New vulnerabilities are discovered practically every day. Oftentimes, these problems are communicated to a vendor, who releases a patch. Unfortunately, these vulnerabilities are all too easy to discover; and it is only a matter of time before a worm writer discovers a major hole and first devises a worm that exploits it. Only after the worm has propagated across the Internet will the computer security community be capable of analyzing how it spreads so that a patch can be developed. More Damaging Attacks So far, worms have caused damage by consuming resources and creating nuisances. The worms we have seen to date have not really had a malicious payload. Once they take over hundreds of thousands of systems, they simply continue to spread without actually doing something nasty. Do not get me wrong; fighting Code Red and Nimda consumed much time and many resources. However, these attacks did not really do anything beyond simply consuming resources. Soon, we may see worms that carry out some plan once they have spread. Such a malicious worm may be released in conjunction with a terrorist attack or other plot. Consider a worm that rapidly spreads using a zero-day exploit and then deletes the hard drives of ten million victim machines. 
Or, perhaps worse, a worm could spread and then transfer the financial records of millions of victims to a country’s adversaries. Such scenarios are not very far-fetched, and even nastier ones could be easily devised.

Worm Defenses

All of the pieces are available for a moderately skilled attacker to create a truly devastating worm. We may soon see rapidly spreading, multi-platform, morphing worms using zero-day exploits to conduct very damaging attacks. So, what can you do to get ready? You need to establish both reactive and proactive defenses.
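To see why such worms leave defenders almost no time to react, consider a toy susceptible-infected propagation model. This is an illustration only: the population size, contact rate, and step count below are invented, and real spread depends on the worm’s scanning strategy and the network topology.

```python
# Toy discrete-time model of worm spread in a population of vulnerable hosts.
# Each infected host finds new victims at a fixed rate; infections slow down
# only as uninfected hosts run out. All parameters are illustrative, not
# measured from any real worm.

def simulate_worm(vulnerable=300_000, initially_infected=1,
                  contacts_per_step=1.8, steps=30):
    """Return a list of infected-host counts, one entry per time step."""
    infected = float(initially_infected)
    history = [int(infected)]
    for _ in range(steps):
        # Only contacts that land on a still-vulnerable host infect it.
        fraction_still_vulnerable = (vulnerable - infected) / vulnerable
        new_infections = infected * contacts_per_step * fraction_still_vulnerable
        infected = min(vulnerable, infected + new_infections)
        history.append(int(infected))
    return history

if __name__ == "__main__":
    counts = simulate_worm()
    for step, count in enumerate(counts):
        if step % 5 == 0:
            print(f"step {step:2d}: {count:,} hosts infected")
```

Even with heavy damping as the pool of vulnerable hosts shrinks, the infected count explodes from a single host to the entire population within a couple dozen steps, which is why the reactive measures described next must be prepared well in advance.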


Breaking News: The Latest Hacker Attacks and Defenses

Incident Response Preparation. From a reactive perspective, your organization must establish a capability for determining when new vulnerabilities are discovered, as well as rapidly testing patches and moving them into production. As described above, your security team should subscribe to various security mailing lists, such as Bugtraq (available at www.securityfocus.com), to help alert you to such vulnerabilities and the release of patches. Furthermore, you must create an incident response team with the skills and resources necessary to discover and contain a worm attack.

Vigorously Patch and Harden Your Systems. From the proactive side, your

organization must carefully harden your systems to prevent attacks. For each platform type, your organization should have documentation describing to system administrators how to build the machine to prevent attacks. Furthermore, you should periodically test your systems to ensure they are secure.

Block Unnecessary Outbound Connections. Once a worm takes over a system, it attempts to spread by making outgoing connections to scan for other potential victims. You should help stop worms in their tracks by severely limiting all outgoing connections on your publicly available systems (such as your Web, DNS, e-mail, and FTP servers). You should use a border router or external firewall to block all outgoing connections from such servers, unless there is a specific business need for outgoing connections. If you do need some outgoing connections, allow them only to those IP addresses that are absolutely critical. For example, your Web server needs to send responses to users requesting Web pages, of course. But does your Web server ever need to initiate connections to the Internet? Likely, the answer is no. So, do yourself and the rest of the Internet a favor by blocking such outgoing connections from your Internet servers.

Nonexecutable System Stack Can Help Stop Some Worms. In addition to overall system hardening, one particular step can help stop many worms. A large number of worms utilize buffer overflow exploits to compromise their victims. By sending more data than the program developer allocated space for, a buffer overflow attack allows an attacker to get code entered as user input to run on the target system. Most operating systems can be inoculated against simple stack-based buffer overflow exploits by being configured with nonexecutable system stacks. Keep in mind that nonexecutable stacks can break some programs (so test these fixes before implementing them), and they do not provide a bulletproof shield against all buffer overflow attacks.
Still, preventing the execution of code from the stack will stop a huge number of both known and as-yet-undiscovered vulnerabilities in their tracks. Up to 90 percent of buffer overflows can be prevented using this technique. To create a nonexecutable stack on a Linux system, you can use the free kernel patch at www.openwall.com/linux. On a Solaris


machine, you can configure the system to stop execution of code from the stack by adding the following lines to the /etc/system file:

set noexec_user_stack = 1
set noexec_user_stack_log = 1

On a Windows NT/2000 machine, you can achieve the same goal by deploying the commercial program SecureStack, available at www.securewave.com.

SNIFFING BACKDOORS

Once attackers compromise a system, they usually install a backdoor tool to allow them to access the machine repeatedly. A backdoor is a program that lets attackers access the machine on their own terms. Normal users are required to type in a password or use a cryptographic token; attackers use a backdoor to bypass these normal security controls. Traditionally, backdoors have listened on a TCP or UDP port, silently waiting in the background for a connection from the attacker. The attacker uses a client tool to connect to these backdoor servers on the proper TCP or UDP port to issue commands.

These traditional backdoors can be discovered by looking at the listening ports on a system. From the command prompt of a UNIX or Windows NT/2000/XP machine, a user can type “netstat -na” to see which TCP and UDP ports on the local machine have programs listening on them. Of course, normal usage of a machine will cause some TCP and UDP ports to be listening, such as TCP port 80 for Web servers, TCP port 25 for mail servers, and UDP port 53 for DNS servers. Beyond these expected ports based on specific server types, a suspicious port turned up by the netstat command could indicate a backdoor listener. Alternatively, a system or security administrator could remotely scan the ports of the system, using a port-scanning tool such as Nmap (available at www.insecure.org/nmap). If Nmap’s output indicates an unexpected listening port, an attacker may have installed a backdoor.

Because attackers know that we are looking for their illicit backdoors listening on ports, a major trend in the attacker community is to avoid listening ports altogether for backdoors.
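Before turning to those port-less backdoors, note that the listening-port review described above is easy to automate. The sketch below is a hypothetical helper, not a tool from this chapter: it parses Linux-style netstat -na output (formats vary by platform, so adjust the pattern) and flags listeners missing from a site-specific allowlist.

```python
import re

# Hypothetical allowlist for a server expected to run Web, mail, and DNS
# services; anything else found listening deserves immediate investigation.
EXPECTED_LISTENERS = {("tcp", 80), ("tcp", 25), ("udp", 53)}

# Matches Linux-style netstat lines: proto, queue counts, local addr:port.
LISTEN_LINE = re.compile(r"^(?P<proto>tcp|udp)\s+\S+\s+\S+\s+\S+?[.:](?P<port>\d+)\s")

def unexpected_listeners(netstat_output):
    """Return the set of (proto, port) listeners not on the allowlist."""
    suspicious = set()
    for line in netstat_output.splitlines():
        match = LISTEN_LINE.match(line.strip())
        if not match:
            continue
        proto = match.group("proto")
        # TCP sockets are only interesting in LISTEN state; UDP has no state.
        if proto == "tcp" and "LISTEN" not in line:
            continue
        entry = (proto, int(match.group("port")))
        if entry not in EXPECTED_LISTENERS:
            suspicious.add(entry)
    return suspicious

# Fabricated sample output with a classic backdoor port listening.
SAMPLE = """\
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:31337           0.0.0.0:*               LISTEN
tcp        0      0 10.0.0.5:80             10.0.0.9:4242           ESTABLISHED
udp        0      0 0.0.0.0:53              0.0.0.0:*
"""

if __name__ == "__main__":
    print(sorted(unexpected_listeners(SAMPLE)))  # flags ('tcp', 31337)
```

A check like this only catches port-bound backdoors; as the following paragraphs explain, sniffing backdoors avoid listening ports entirely, so such an audit must be paired with process and integrity checks.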
You may ask, “How can they communicate with their backdoors if they aren’t listening on a port?” To accomplish this, attackers are integrating sniffing technology into their backdoors to create sniffing backdoors. Rather than configuring a process to listen on a port, a sniffing backdoor uses a sniffer to grab traffic from the network. The sniffer then analyzes the traffic to determine which packets are supposed to go to the backdoor. Instead of listening on a port, the sniffer employs pattern matching on the network traffic to determine what to scoop up and pass to the backdoor. The backdoor then executes the commands and sends responses to the attacker. An excellent example of a sniffing backdoor is the


Cd00r program written by FX. Cd00r is available at http://www.phenoelit.de/stuff/cd00r.c.

There are two general ways of running a sniffing backdoor, based on the mode used by the sniffer program to gather traffic: the so-called nonpromiscuous and promiscuous modes. A sniffer that puts an Ethernet interface in promiscuous mode gathers all data from the LAN without regard to the actual destination address of the traffic. If the traffic passes by the interface, the Ethernet card in promiscuous mode will suck in the traffic and pass it to the backdoor. Alternatively, a nonpromiscuous sniffer gathers traffic destined only for the machine on which the sniffer runs. Because these differences in sniffer types have significant implications for how attackers can use sniffing backdoors, we will explore nonpromiscuous and promiscuous backdoors separately below.

Nonpromiscuous Sniffing Backdoors

As their name implies, nonpromiscuous sniffing backdoors do not put the Ethernet interface into promiscuous mode. The sniffer sees only traffic going to and from the single machine where the sniffing backdoor is installed. When attackers use a nonpromiscuous sniffing backdoor, they do not have to worry about a system administrator detecting the interface in promiscuous mode. In operation, the nonpromiscuous backdoor scours the traffic going to the victim machine looking for specific ports or other fields (such as a cryptographically derived value) included in the traffic. When the special traffic is detected, the backdoor wakes up and interacts with the attacker.

Promiscuous Sniffing Backdoors

By putting the Ethernet interface into promiscuous mode to gather all traffic from the LAN, promiscuous sniffing backdoors can make an investigation even more difficult. To understand why, consider the scenario shown in Exhibit 4-1. This network uses a tri-homed firewall to separate the DMZ and internal network from the Internet.
Suppose an attacker takes over the Domain Name System (DNS) server on the DMZ and installs a promiscuous sniffing backdoor. Because this backdoor uses a sniffer in promiscuous mode, it can gather all traffic from the LAN. The attacker configures the sniffing backdoor to listen in on all traffic with a destination address of the Web server (not the DNS server) to retrieve commands from the attacker to execute. In our scenario, the attacker does not install a backdoor or any other software on the Web server. Only the DNS server is compromised.

Now the attacker formulates packets with commands for the backdoor. These packets are all sent with a destination address of the Web server (not the DNS server). The Web server does not know what to do with these commands, so it will either discard them or send a RESET or related

Exhibit 4-1. A promiscuous sniffing backdoor. (Diagram: a black-hat attacker on the Internet connects through a firewall to a DMZ containing DNS and World Wide Web servers; a sniffer on the DNS server listens for traffic destined for the WWW server.)

message to the attacker. However, the DNS server with the sniffing backdoor will see the commands on the LAN. The sniffer will gather these commands and forward them to the backdoor, where they will be executed. To further obfuscate the situation, the attacker can send all responses from the backdoor using the spoofed source address of the Web server.

Given this scenario, consider the dilemma faced by the investigator. The system administrator or an intrusion detection system complains that there is suspicious traffic going to and from the Web server. The investigator conducts a detailed and thorough analysis of the Web server. After a painstaking process to verify the integrity of the applications, operating system programs, and kernel on the Web server machine, the investigator determines that this system is intact. Yet backdoor commands continue to be sent to this machine. The investigator would only discover what is really going on by analyzing other systems connected to the LAN, such as the DNS server. The investigative process is significantly slowed down by the promiscuous sniffing backdoor.

Defending against Sniffing Backdoor Attacks

It is important to note that the use of a switch on the DMZ network between the Web server and DNS server does not eliminate this dilemma. As described in Chapter 3, Volume 3 of Information Security Management Handbook, attackers can use active sniffers to conduct ARP cache poisoning attacks and successfully sniff a switched environment. An active sniffer such as Dsniff (available at http://www.monkey.org/~dugsong/dsniff/) married to a sniffing backdoor can implement this type of attack in a switched environment.

So if a switch does not eliminate this problem, how can you defend against this kind of attack? First, as with most backdoors, system and security administrators must know what is supposed to be running on their systems, especially processes running with root or system-level privileges. Keeping up


with this information is not a trivial task, but it is especially important for all publicly available servers, such as systems on a DMZ. If a security or system administrator notices a new process running with escalated privileges, the process should be investigated immediately. Tools such as lsof for UNIX (available at ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/) or Inzider for Windows NT/2000 (available at http://ntsecurity.nu/toolbox/inzider/) can help to indicate the files and ports used by any process. Keep in mind that most attackers will not name their backdoors “cd00r” or “backdoor,” but instead will use less obvious names to camouflage their activities. In my experience, attackers like to name their backdoors “SCSI” or “UPS” to prevent a curious system administrator from questioning or shutting off the attackers’ processes.

Also, while switches do not eliminate attacks with sniffers, a switched environment can help to limit an attacker’s options, especially if it is carefully configured. For your DMZs and other critical networks, you should use a switch and hard-code all ARP entries in each host on the LAN. Each system on your LAN has an ARP cache holding information about the IP and MAC addresses of other machines on the LAN. By hard-coding all ARP entries on your sensitive LANs so that they are static, you minimize the possibility of ARP cache poisoning. Additionally, implement port-level security on your switch so that only specific Ethernet MAC addresses can communicate with the switch.

CONCLUSIONS

The computer underground and information security research fields remain highly active in refining existing methods and defining completely new ways to attack and compromise computer systems. Advances in our networking infrastructures, especially wireless LANs, are not only giving attackers new avenues into our systems, but they are also often riddled with security vulnerabilities.
With this dynamic environment, defending against attacks is certainly a challenge. However, these constantly evolving attacks can be frustrating and exciting at the same time, while certainly providing job security to solid information security practitioners. While we need to work diligently in securing our systems, our reward is a significant intellectual challenge and decent employment in a challenging economy.

ABOUT THE AUTHOR

Edward Skoudis is the vice president of security strategy for Predictive Systems’ Global Integrity consulting practice. His expertise includes hacker attacks and defenses, the information security industry, and computer privacy issues. Skoudis is a frequent speaker on issues associated with hacker tools and defenses. He has published the book Counter Hack (Prentice Hall) and the interactive CD-ROM Hack–Counter Hack.


AU1518Ch05Frame Page 67 Thursday, November 14, 2002 6:24 PM

Chapter 5

Counter-Economic Espionage

Craig A. Schiller, CISSP

Today’s economic competition is global. The conquest of markets and technologies has replaced former territorial and colonial conquests. We are living in a state of world economic war, and this is not just a military metaphor — the companies are training the armies, and the unemployed are the casualties. — Bernard Esambert, President of the French Pasteur Institute at a Paris Conference on Economic Espionage

The Attorney General of the United States defined economic espionage as “the unlawful or clandestine targeting or acquisition of sensitive financial, trade, or economic policy information; proprietary economic information; or critical technologies.” Note that this definition excludes the collection of open and legally available information that makes up the majority of economic collection. This means that aggressive intelligence collection that is entirely open and legal may harm U.S. companies but is not considered espionage, economic or otherwise. The FBI has extended this definition to include the unlawful or clandestine targeting or influencing of sensitive economic policy decisions.

Intelligence consists of two broad categories — open source and espionage. Open-source intelligence collection is the name given to legal intelligence activities. Espionage is divided into the categories of economic and military/political/governmental; the distinction is the targets involved. A common term, industrial espionage, was used (and is still used to some degree) to indicate espionage between two competitors. As global competitors began to conduct these activities with possible assistance from their governments, the competitor-versus-competitor nature of industrial espionage became less of a discriminator. As the activities expanded to include sabotage and interference with commerce and proposal competitions, the term economic espionage was coined for the broader scope.

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



While the examples and cases discussed in this chapter focus mainly on the United States, the issues are universal. The recommendations and types of information gathered can and should be translated for any country.

BRIEF HISTORY

The prosperity and success of this country is due in no small measure to economic espionage committed by Francis Cabot Lowell during the Industrial Revolution. Britain replaced costly, skilled hand labor with water-driven looms that were simple and reliable. The looms were so simple that they could be operated by a few unskilled women and children. The British government passed strict patent laws and prohibited the export of technology related to the making of cotton. A law was passed making it illegal to hire skilled textile workers for work abroad. Those workers who went abroad had their property confiscated. It was against the law to make and export drawings of the mills.

So Lowell memorized and stole the plans to a Cartwright loom, a water-driven weaving machine. It is believed that Lowell perfected the art of spying by driving around. Working from Edinburgh, he and his wife traveled daily throughout the countryside, including Lancashire and Derbyshire, the hearts of the Industrial Revolution. Returning home, he built a scale model of the loom. His company built its first loom in Waltham. Soon, his factories were capable of producing up to 30 miles of cloth a day.1 This marked America’s entry into the Industrial Revolution.

By the early 20th century, we had become “civilized” to the point that Henry L. Stimson, our Secretary of State, said for the record that “Gentlemen do not read other gentlemen’s mail” while refusing to endorse a code-breaking operation. For a short time, the U.S. Government was the only government that believed this fantasy.
At the beginning of World War II, the United States found itself almost completely blind to activities inside Germany and totally dependent on other countries’ intelligence services for information. In 1941, the United States recognized that espionage was necessary to reduce its losses and efficiently engage Germany. To meet this need, first the COI and then the OSS were created under the leadership of General “Wild Bill” Donovan. It would take tremendous forces to broaden this awakening to include economic espionage.

WATERSHED: END OF COLD WAR, BEGINNING OF INFORMATION AGE

In the 1990s, two events occurred that radically changed information security for many companies. The end of the Cold War — marked by the collapse of the former Soviet Union — created a pool of highly trained intelligence officers without targets. In Russia, some continued to work for


the government, some began to work in the newly created private sector, and some provided their services for the criminal element. Some did all three. The world’s intelligence agencies began to focus their attentions on economic targets and information war, just in time for watershed event number two — the beginning of the information age.

John Lienhard, M.D. Anderson Professor of Mechanical Engineering and History at the University of Houston, is the voice and driving force behind the “Engines of Our Ingenuity,” a syndicated program for public radio. He has said that the change of our world into an information society is not like the Industrial Revolution. No; this change is more like the change from a hunter-gatherer society to an agrarian society. A change of this magnitude happened only once or twice in all of history. Those who were powerful in the previous society may have no power in the new society. In the hunter-gatherer society, the strongest man and best hunter rules. But where is he in an agrarian society? There, the best hunter holds little or no power. During the transition to an information society, those with power in the old ways will not give it up easily.

Now couple the turmoil caused by this shift with the timing of the “end” of the Cold War. The currency of the new age is information. The power struggle in the new age is the struggle to gather, use, and control information. It is at the beginning of this struggle that the Cold War ended, making available a host of highly trained information gatherers to countries and companies trying to cope with the new economy. Official U.S. acknowledgment of the threat of economic espionage came in 1996 with the passage of the Economic Espionage Act.

For the information security professional, the world has fundamentally changed. Until 1990, a common practice had been to make the cost of an attack prohibitively expensive.
How do you make an attack prohibitively expensive when your adversaries have the resources of governments behind them? Most information security professionals have not been trained and are not equipped to handle professional intelligence agents with deep pockets. Today, most business managers are incapable of fathoming that such a threat exists.

ROLE OF INFORMATION TECHNOLOGY IN ECONOMIC ESPIONAGE

In the 1930s, the German secret police divided the world of espionage into five roles.2 Exhibit 5-1 illustrates some of the ways that information technology today performs these five divisions of espionage functionality. In addition to these roles, information technology may be exploited as a target, used as a tool, used for storage (for good or bad), used as protection for critical assets, used as a weapon, used as a transport mechanism, or used as an agent to carry out tasks when activated.


Exhibit 5-1. Five divisions of espionage functionality.

Collectors. WWII role: located and gathered desired information. IT equivalent: people or IT (hardware or software) agents; designer viruses that transmit data to the Internet.

Transmitters. WWII role: forwarded the data to Germany, by coded mail or shortwave radio. IT equivalent: e-mail; browsers with convenient 128-bit encryption; FTP; applications with built-in collection and transmission capabilities (e.g., comet cursors, Real Player, Media Player, or other spyware); covert channel applications.

Couriers. WWII role: worked on steamship lines and transatlantic clippers, and carried special messages to and from Germany. IT equivalent: visiting country delegations, partners/suppliers, temporary workers, and employees that rotate in and out of companies with CD-R/CD-RW, Zip disks, tapes, drawings, digital camera images, etc.

Drops. WWII role: innocent-seeming addresses of businesses or private individuals, usually in South American or neutral European ports; reports were sent to these addresses for forwarding to Germany. IT equivalent: e-mail relays, e-mail anonymizers, Web anonymizers, specially designed software that spreads information to multiple sites (the reverse of distributed DoS) to avoid detection.

Specialists. WWII role: expert saboteurs. IT equivalent: viruses, worms, DDoS, Trojan horses, chain e-mail, hoaxes, using e-mail to spread dissension, public posting of sensitive information about salaries, logic bombs, insiders sabotaging products, benchmarks, etc.

• Target. Information and information technology can be the target of interest. The goal of the exploitation may be to discover new information assets (breach of confidentiality), deprive one of exclusive ownership, acquire a form of the asset that would permit or facilitate reverse-engineering, corrupt the integrity of the asset — either to diminish the reputation of the asset or to make the asset become an agent — or to deny the availability of the asset to those who rely on it (denial of service).
• Tool. Information technology can be the tool to monitor and detect traces of espionage or to recover information assets. These tools include intrusion detection systems, log analysis programs, content monitoring programs, etc. For the bad guys, these tools would include probes, enumeration programs, viruses that search for PGP keys, etc.
• Storage. Information technology can store stolen or illegal information. IT can store sleeper agents for later activation.


• Protection. Information technology may have the responsibility to protect the information assets. The protection may be in the form of applications such as firewalls, intrusion detection systems, encryption tools, etc., or elements of the operating system such as file permissions, network configurations, etc.
• Transport. Information technology can be the means by which stolen or critical information is moved, whether burned to CDs, e-mailed, FTP’d, hidden in a legitimate HTTP stream, or encoded in images or music files.
• Agent. Information technology can be used as an agent of the adversary, planted to extract significant sensitive information, to launch an attack when given the appropriate signal, or to receive or initiate a covert channel through a firewall.

IMPLICATIONS FOR INFORMATION SECURITY

Implication 1

A major tenet of our profession has been that, because we cannot always afford to prevent information system-related losses, we should make it prohibitively expensive to compromise those systems. How does one do that when the adversary has the resources of a government behind him? Frankly, this tenet only worked on adversaries who were limited by time, money, or patience. Hackers with unlimited time on their hands — and a bevy of unpaid researchers who consider a difficult system to be a trophy waiting to be collected — turn this tenet into Swiss cheese.

This reality has placed emphasis on the onion model of information security. In the onion model, you assume that all other layers will fail. You build prevention measures, but you also include detection measures that will tell you that those measures have failed. You plan for the recovery of critical information, assuming that your prevention and detection measures will miss some events.

Implication 2

Information security professionals must now be able to determine if their industry or their company is a target for economic espionage.
If their company/industry is a target, then the information security professionals should adjust their perceptions of their potential adversaries and their limits. One of the best-known quotes from The Art of War by Sun Tzu says, “Know your enemy.” Become familiar with the list of countries actively engaging in economic espionage against your country or within your industry. Determine if any of your vendors, contractors, partners, suppliers, or customers come from these countries. In today’s global economy, it may not be easy to determine the country of origin. Many companies move their global headquarters to the United States and keep only their main R&D offices in the country of origin. Research the company and its


founders. Learn where and how they gained their expertise. Research any publicized accounts regarding economic espionage/intellectual property theft attributed to the company, the country, or other companies from the country. Pay particular attention to the methods used and the nature of the known targets. Contact the FBI or its equivalent and see if they can provide additional information. Do not forget to check your own organization’s history with each company. With this information, you can work with your business leaders to determine what may be a target within your company and what measures (if any) may be prudent.

He who protects everything, protects nothing. — Napoleon

Applying the wisdom of Napoleon implies that, within the semipermeable external boundary, we should determine which information assets truly need protection, to what degree, and from what threats. Sun Tzu speaks to this need as well; it is not enough to only know your enemy.

Therefore I say, “Know the enemy and know yourself; in a hundred battles you will never be in peril.” When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and yourself, you are certain in every battle to be in peril. — Sun Tzu, The Art of War (III.31–33)

A company can “know itself” using a variation on the business continuity concept of a business impact assessment (BIA). The information security professional can use the information valuation data collected during the BIA and extend it to produce information protection guides for sensitive and critical information assets. The information protection guides tell users which information should be protected, from what threats, and what to do if an asset is found unprotected. They should tell the technical staff about threats to each information asset and about any required and recommended safeguards.

A side benefit gained from gathering the information valuation data is that, in order to gather the value information, the business leaders must internalize questions of how the data is valuable and the degrees of loss that would occur in various scenarios. This is the most effective security awareness that money can buy.

After the information protection guides have been prepared, you should meet with senior management again to discuss the overall posture the company wants to take regarding information security and counter-economic


Counter-Economic Espionage espionage. Note that it is significant that you wait until after the information valuation exercise is complete before addressing the security posture. If management has not accepted the need for security, the question about desired posture will yield damaging results. Here are some potential postures that you can describe to management: • Prevent all. In this posture, only a few protocols are permitted to cross your external boundary. • City wall. A layered approach, prevention, detection, mitigation, and recovery strategies are all, in effect, similar to the walled city in the Middle Ages. Traffic is examined, but more is permitted in and out. Because more is permitted, detection, mitigation, and recovery strategies are needed internally because the risk of something bad getting through is greater. • Aggressive. A layered approach, but embracing new technology, is given a higher priority than protecting the company. New technology is selected, and then security is asked how they will deal with it. • Edge racer. Only general protections are provided. The company banks on running faster than the competition. “We’ll be on the next technology before they catch up with our current release.” This is a common position before any awareness has been effective. Implication 3 Another aspect of knowing your enemy is required. As security professionals we are not taught about spycraft. It is not necessary that we become trained as spies. However, the FBI, in its annual report to Congress on economic espionage, gives a summary about techniques observed in cases involving economic espionage. Much can be learned about modern techniques in three books written about the Mossad — Gideon’s Spies by Gordon Thomas, and By Way of Deception, and The Other Side of Deception, both by Victor Ostrovsky and Claire Hoy. 
These books describe the Mossad as an early adopter of technology as an espionage tool, including its use of Trojan code in commercially sold software. They describe software known as Promis that was sold to intelligence agencies to assist in tracking terrorists; the authors allege that the software contained a Trojan that permitted the Mossad to gather information about the terrorists tracked by its customers. By Way of Deception describes the training process as seen by Ostrovsky.

Implication 4

Think Globally, Act Locally. The Chinese government recently announced that the United States had placed numerous bugging devices on a plane intended for President Jiang Zemin. During the customization by a U.S. company of the
interior of the plane for use as the Chinese equivalent of Air Force One, bugs were allegedly placed in the upholstery of the president’s chair, in his bedroom, and even in the toilet.

When the United States built a new embassy in Moscow, the then-extant Soviet Union insisted it be built using Russian workers. The United States halted construction in 1985 when it discovered the building was too heavily bugged for diplomatic purposes. The building remained unoccupied for a decade following the discovery.

The 1998 Annual Report to Congress on Foreign Economic Collection and Industrial Espionage concluded with the following statement:

    ...foreign software manufacturers solicited products to cleared U.S. companies that had been embedded with spawned processes and multithreaded tasks.

This means that foreign software companies sold products containing Trojans and backdoors to targeted U.S. companies. In response to fears about the Echelon project, in 2001 the European Union recommended that member nations use open-source software to ensure that Echelon software agents are not present.

Security teams would benefit from using open-source security tools if they could be staffed sufficiently to maintain and continually improve them. Failing that, security organizations in targeted industries should consider the origins of the security products they use. If your company knows it is a target for economic espionage, it is wise to avoid security products from countries actively engaged in economic espionage against your country. If that is not possible, the security team should include tools in the architecture (from other countries) that can detect extraneous traffic or anomalous behavior in the other security tools. In this strategy, you should follow the effort all the way through implementation.

In one company, the corporate standard firewall was a product of one of the countries most active in economic espionage. Management was unwilling to depart from the standard. Security proposed an intrusion detection system (IDS) to guard against the possibility of the firewall being used to permit undetected, unfiltered, and unreported access. The IDS was approved; but when procurement received the order, they discovered that the firewall vendor sold a special, optimized version of the same product and, without informing the security team, ordered the IDS from the very vendor the team was trying to guard against.

Implication 5

The system of rating computers for levels of security protection is incapable of providing useful information regarding products that might have
malicious code that is included intentionally. In fact, companies that intend to produce code with such Trojans can use the rating systems to gain credibility without merit. It appears that the first real discovery by one of the ratings systems caused the demise of that system and a cover-up of the findings. I refer to the MISSI rating system’s discovery of a potential backdoor in Checkpoint Firewall-1 in 1997. After this discovery, the unclassified X31 report3 for this product and all previous reports were pulled from availability. The Internet site that provided them was shut down, and requestors were told that the report had been classified. The federal government had begun pulling Checkpoint Firewall-1 from military installations and replacing it with other companies’ products.

While publicly denying that these actions were happening, Checkpoint began a correspondence with the NSA, owner of the MISSI process, to answer the findings of that study. The NSA provided a list of findings and preferred corrective actions to resolve the issue. In Checkpoint’s response4 to the NSA, the company denied that the code in question, which involved SNMP and referenced files containing IP addresses in Israel, was a backdoor. According to the NSA, two files with IP addresses in Israel “could provide access to the firewall via SNMPv2 mechanisms.” Checkpoint’s reply indicated that the code was dead code from Carnegie Mellon University and that the files were QA testing data left in the final released configuration files.

The X31 report, which I obtained through an FOIA request, contains no mention of the incident and no indication that any censorship had occurred. This fact is particularly disturbing because a report of this nature should publish all issues and their resolutions to ensure that there is no complicity between the testers and the test subjects.
However, the correspondence also reveals two other vulnerabilities that I regard as backdoors, although the report classes them as software errors to be corrected. Checkpoint’s response to some of these “errors” was to defend aspects of them as desirable. One specific passage claims that most of Checkpoint’s customers prefer maximum connectivity to maximum security, a curious claim that I have not seen in the company’s marketing material. This referred to the inability to change the implicit rules in light of the vulnerability in stateful inspection’s handling of DNS over UDP, which existed in Version 3 and earlier.

Checkpoint agreed to most of the changes requested by the NSA. The notable exception would have required Checkpoint to sign the software and data electronically with digital signatures to prevent someone from altering the product in a way that would go undetected. These changes would have let licensees of the software know that, at least initially, the software they were running was indeed the software and data tested during the security review.
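The verification step the NSA asked for can be illustrated in miniature. The sketch below shows only the integrity half of the idea, comparing an installed artifact against a digest published for the reviewed release; the function names and the use of a bare SHA-256 digest (rather than a true digital signature over that digest) are simplifying assumptions, not part of any vendor’s actual process:

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    """Stream a file from disk and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_digest(path, published_hex):
    """True if the installed artifact still matches the digest published
    for the reviewed release. Note: a digest alone detects tampering only
    if the published value arrives over a trusted channel; a full scheme
    would have the vendor sign the digest with a private key."""
    return sha256_of(path) == published_hex.lower()
```

A licensee could run such a check at install time and again periodically, so that any later substitution of binaries or configuration data would at least be visible.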


It is interesting to note that, nine months before the letter responding to the NSA claims, Checkpoint had released an internal memo claiming nothing had ever happened.5

Both the ITSEC and Common Criteria security rating systems are fatally flawed when it comes to protection against software with intentional malicious code. Security companies can submit software for rating and claim the rating even when the entire system has not been submitted. For example, a company can submit only the assurance processes and documentation for a targeted rating. When it achieves the rating on just that portion, it can advertise the rating even though the full software functionality has not been tested. Marketing gains the benefit of claiming the rating without the expense of full testing. Even if the rating carries an asterisk, the damage is done, because many who authorize the purchase of these products look only for the rating. When security reports back to management that the rating covered only a portion of the software functionality, the report is portrayed as sour grapes by those who negotiated the “great deal” they were going to get.

The fact is that there is no commercial push to require critical software, such as operating systems and security software, to undergo exhaustive code reviews and covert channel analysis, and to award a rating only when it is fully earned. To make matters worse, if it appears that a company is going to receive a poor rating from a test facility, the vendor can stop the process and start over at a different facility, perhaps in another country, with no penalty and no carry-over.

WHAT ARE THE TARGETS?

The U.S. Government publishes a list of military critical technologies (MCTs). A summary of the list is published annually by the FBI (see Exhibit 5-2). There is no equivalent list for nonmilitary critical technologies.
However, the government has added “targeting the national information infrastructure” to the National Security Threat List (NSTL). Targeting the national information infrastructure speaks primarily to the infrastructure as an object of potential disruption, whereas the MCT list contains technologies that foreign governments may want to acquire illegally.

The NSTL consists of two tables. One is a list of issues (see Exhibit 5-3); the other is a classified list of countries engaged in collection activities against the United States. This is not the same list captured in Exhibit 5-4, which contains the names of countries engaged in economic espionage and, as such, includes countries that are otherwise friendly trading partners. You will note that the entire subject of economic espionage is listed as one of the threat list issues.


Exhibit 5-2. Military Critical Technologies (MCTs).

• Information systems
• Sensors and lasers
• Electronics
• Aeronautics systems technology
• Armaments and energetic materials
• Marine systems
• Guidance, navigation and vehicle signature control
• Space systems
• Materials
• Manufacturing and fabrication
• Information warfare
• Nuclear systems technology
• Power systems
• Chemical/biological systems
• Weapons effects and counter-measures
• Ground systems
• Directed and kinetic energy systems

Exhibit 5-3. National security threat list issues.

• Terrorism
• Espionage
• Proliferation
• Economic espionage
• Targeting the national information infrastructure
• Targeting the U.S. Government
• Perception management
• Foreign intelligence activities

Exhibit 5-4. Most active collectors of economic intelligence.

• China
• Japan
• Israel
• France
• Korea
• Taiwan
• India

According to the FBI, the collection of information by foreign agencies continues to focus on U.S. trade secrets and science and technology products, particularly dual-use technologies and technologies that provide high profitability.


Examining the cases that have been made public, you can find theft of intellectual property, theft of proposal information (bid amounts, key concepts), and companies being required to participate in joint ventures to gain access to new country markets, after which the IP is stolen or the contract is awarded to a domestic company with an identical proposal. A recent case involving HP found a planted employee sabotaging key benchmarking tests to HP’s detriment. The message of the HP case is that economic espionage also includes efforts beyond the collection of information, such as sabotage of the production line to cause the company to miss key delivery dates, deliver faulty parts, fail key tests, and so on.

You should consider yourself a target if your company works in any of the technology areas on the MCT list, is part of the national information infrastructure, or operates in a highly competitive international business.

WHO ARE THE PLAYERS?

Countries

This section is written from the published perspective of the U.S. Government. Readers from other countries should attempt to locate a similar list from their own government’s perspective. It is likely that two lists will exist: a “real” list and a “diplomatically correct” edition.

For the first time since its original publication in 1998, the Annual Report to Congress on Foreign Economic Collection and Industrial Espionage 2000 lists the most active collectors of economic intelligence. The delay in publishing this list is due to the nature of economic espionage: to have economic espionage, you must have trade, and our biggest trading partners are our best friends in the world. A list of those engaged in economic espionage will therefore include countries that are otherwise friends and allies; thus the poignancy of Bernard Esambert’s quote used to open this chapter.

Companies

Stories of companies affected by economic espionage are hard to come by.
Public companies fear the effect on stock prices. Invoking the economic espionage law has proven very expensive, a high risk for a favorable outcome, and even the favorable outcomes have been inadequate considering the time, money, and commitment of company resources away from the primary business. The most visible companies are those that have been prosecuted under the Economic Espionage Act, but there have been only 20 of those, including:

• Four Pillars Company, Taiwan, stole intellectual property and trade secrets from Avery Dennison.
• Laser Devices, Inc., attempted to illegally ship laser gun sights to Taiwan without Department of Commerce authorization.
• Gilbert & Jones, Inc., New Britain, exported potassium cyanide to Taiwan without the required licenses.
• Yuen Foong Paper Manufacturing Company, Taiwan, attempted to steal the formula for Taxol, a cancer drug patented and licensed by the Bristol-Myers Squibb (BMS) Company.
• Steven Louis Davis attempted to disclose trade secrets of the Gillette Company to competitors Warner-Lambert Co., Bic, and American Safety Razor Co. The disclosures were made by fax and e-mail. Davis worked for Wright Industries, a subcontractor of the Gillette Company.
• Duplo Manufacturing Corporation, Japan, used a disgruntled former employee of Standard Duplicating Machines Corporation to gain unauthorized access to a voicemail system. The data was used to compete against Standard, which learned of the issue through an unsolicited phone call from a customer.
• Harold Worden attempted to sell Kodak trade secrets and proprietary information to Kodak rivals, including corporations in the People’s Republic of China. He had formerly worked for Kodak, established his own consulting firm upon retirement, and subsequently hired many former Kodak employees. He was convicted on one felony count of violating the Interstate Transportation of Stolen Property law.
• In 1977, Mitsubishi Electric bought one of Fusion Systems Corporation’s microwave lamps, took it apart, then filed 257 patent actions on its components. Fusion Systems had submitted the lamp for a patent in Japan two years earlier. After 25 years of wrangling with Mitsubishi, the Japanese patent system, Congress, and the press, Fusion’s board fired the company’s president (who had spearheaded the fight) and settled the patent dispute with Mitsubishi a year later.
• The French are known to have targeted IBM, Corning Glass, Boeing, Bell Helicopter, Northrup, and Texas Instruments (TI).
In 1991, a guard in Houston noticed two well-dressed men taking garbage bags from the home of an executive of a large defense contractor. The guard ran the license number of the van and found that it belonged to the French Consul General in Houston, Bernard Guillet. Two years earlier, the FBI had helped TI remove a French sleeper agent. According to Cyber Wars6 by Jean Guisnel, the French intelligence agency (the DGSE) had begun planting young French engineers in the French subsidiaries of well-known American firms. Over the years they became integral members of the companies they had entered, some achieving positions of power in the corporate hierarchy. Guisnel claims that the primary beneficiary of these efforts was the giant French electronics firm, Bull.


WHAT HAS BEEN DONE? REAL-WORLD EXAMPLES

Partnering with a Company and Then Hacking the Systems Internally

In one case, very senior management took a bold step. In the spirit of the global community, they committed the company to use international partners for major aspects of a new product. Unfortunately, in selecting the partners, they chose companies from three countries listed as actively conducting economic espionage against their own country. In the course of developing the new products, the employees of one partner were caught hacking sensitive systems. Security measures were increased, but the employees hacked through them as well. The offending partner company was confronted; its senior management claimed that the employees had acted alone and that their actions were not sanctioned. Procurement, now satisfied that its fragile quilt of partners was intact, awarded the accused partner a lucrative new product partnership. Additionally, procurement erased all database entries regarding the issues and chastised internal employees who continued to voice suspicions.

No formal investigation was launched, and security had no record of the incident because there was no information security function at the time. When the information security function was established, it stumbled upon rumors that these events had occurred. In investigating, it found an internal employee who had witnessed the stolen information in use at the suspect partner’s home site. It also determined that the offending partner had a history of economic espionage, perhaps the most widely known in the world. Despite the corroboration of the partner’s complicity, line management and procurement did nothing. Procurement knew that the repercussions within senior management and line management would be severe because they had pressured the damaged business unit to accept the suspected partner’s earlier explanation.
Additionally, pursuing the matter would have underscored the poor choice of partners made under procurement’s care and the fatal flaw in very senior management’s partnering concept. It was impossible to extricate the company from the relationship without causing the company to collapse. IT line management would not embrace the issue because they had dealt with it before and had been stung, although they were right all along.

Using Language to Hide in Plain Sight

Israeli Air Force officers assigned to the Recon/Optical Company passed technical information on beyond-state-of-the-art optics to a competing Israeli company, El Op Electro-Optics Industries Ltd. The information was written in Hebrew and faxed. The officers tried to carry 14 boxes out of the plant when the contract was terminated. The officers were punished upon their return to Israel: for getting caught.7


In today’s multinational partnerships, language can be a significant issue for information security and for technical support. Imagine the difficulty of monitoring and supporting computers for five partners, each working in a different language.

The 2000 Annual Report to Congress8 reveals that the techniques used to steal trade secrets and intellectual property are limitless. The insider threat, briefcase and laptop computer thefts, and searches of hotel rooms have all figured in recent cases. Information collectors use a wide range of redundant and complementary approaches to gather their target data. At border crossings, foreign officials have made excessive attempts at elicitation. Many U.S. citizens unwittingly serve as third-party brokers to arrange visits or circumvent official visitation procedures. Some foreign collectors have invited U.S. experts to present papers overseas to gain access to their expertise in export-controlled technologies. There have been recent solicitations to security professionals asking for research proposals as part of a competition for grants to study security topics; the solicitation came from one of the countries most active in economic espionage. Traditional clandestine espionage methods (such as agent recruitment, U.S. volunteers, and cooptees) are still employed. Other techniques include:

• Breaking away from tour groups
• Attempting access after normal working hours
• Swapping out personnel at the last minute
• Customs holding laptops for an extended period of time
• Requests for technical information
• Elicitation attempts at social gatherings, conferences, trade shows, and symposia
• Dumpster diving (searching a company’s trash for corporate proprietary data)
• Using unencrypted Internet messages
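One item in the list above, attempting access after normal working hours, lends itself to simple automated review of authentication logs. The sketch below is illustrative only; the log format, the business-hours window, and the function name are assumptions, not features of any real product:

```python
from datetime import datetime

# Assumed local business hours for the site being monitored.
BUSINESS_START, BUSINESS_END = 7, 19  # 07:00 to 19:00

def after_hours_events(events):
    """events: iterable of (iso_timestamp, user) pairs from an access log.
    Returns the pairs whose timestamp falls outside business hours,
    for a human reviewer to follow up on."""
    flagged = []
    for stamp, user in events:
        hour = datetime.fromisoformat(stamp).hour
        if hour < BUSINESS_START or hour >= BUSINESS_END:
            flagged.append((stamp, user))
    return flagged
```

A report like this does not prove espionage, of course; it merely surfaces the pattern so that visitor and partner accounts showing repeated late-night access can be examined.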

To these I would add holding out the prospect of lucrative sales or contracts while requiring the surrender or sharing of intellectual property as a condition of partnering or participation.

WHAT CAN WE, AS INFORMATION SECURITY PROFESSIONALS, DO?

We must add new skills and improve our proficiency in others to meet the challenge of government-funded and government-supported espionage. Our investigative and forensic skills need improvement beyond the level required for nonespionage cases. We need to be aware of the techniques that have been, and may be, used against us. We need the ability to elicit information without raising suspicion, to recognize when elicitation is attempted against us, and to teach our sales, marketing, contracting, and executive personnel to recognize such attempts. We need sources that tell us
where elicitation is likely to occur. For example, at this time, the Paris Air Show is considered the number-one economic espionage event in the world. We need to be able to raise our companies’ awareness of the perceived threat, supported by real examples from industry.

Ensure that you brief the procurement department. Establish preferences for products from countries not active in economic espionage. When you must use a product from a country active in economic espionage, attempt to negotiate an indemnification against loss. Have procurement require that partners and suppliers provide proof of background investigations, particularly if individuals will be on site.

Management and procurement should be advised that partners intent on economic espionage are likely to complain to management that the controls are too restrictive, that they cannot do their jobs, or that their contract requires extraordinary access. You should counter these objections before they occur by fully informing management and procurement about awareness, concerns, and the measures to be taken. The measures should be applied to all suppliers and partners. Ensure that such complaints and issues will be handed over to you for an official response, and treat each one individually, asking for specifics rather than generalities.

If procurement has negotiated a contract that commits the company to extraordinary access, your challenge is greater. Procurement may insist that you honor their contract. At this point you will discover where security stands in the company’s pecking order. One stance you can take is: “Your negotiated contract does not and cannot relieve me of my obligation to protect the information assets of this corporation.” It may mean that the company has to pay penalties or go back to the negotiating table.
You should not have to sacrifice the security of the company’s information assets to save procurement some embarrassment.

We need to develop sources to follow developments in economic espionage in industries and businesses similar to ours. Because we are unlikely to have access to definitive sources for this kind of information, we need methods to vet the information we find in open sources. The FBI provides advance warning to security professionals through the ANSIR (Awareness of National Security Issues and Responses) system. Interested security professionals at U.S. corporations should provide their e-mail addresses, positions, company names and addresses, and telephone and fax numbers to [email protected]; a representative of the nearest field division office will contact you. The FBI has also created InfraGard (http://www.infragard.net/fieldoffice.htm) chapters for law enforcement and corporate security professionals to share experiences and advice.9


InfraGard is dedicated to increasing the security of the critical infrastructures of the United States. All InfraGard participants are committed to the proposition that a robust exchange of information about threats to, and actual attacks on, these infrastructures is an essential element of successful infrastructure protection efforts. The goal of InfraGard is to enable an information flow in which the owners and operators of infrastructures can better protect themselves and the U.S. Government can better discharge its law enforcement and national security responsibilities.

BARRIERS ENCOUNTERED IN ATTEMPTS TO ADDRESS ECONOMIC ESPIONAGE

A country is made up of many opposing and cooperating forces. With respect to economic espionage, two forces are significant for information security: one champions the country’s businesses; the other champions the country’s relationships with other countries. Your efforts to protect your company may be hindered by the opposition between those two forces. This was evident in the first few FBI reports to Congress on economic espionage, in which the FBI was prohibited from listing even the countries most active in conducting economic espionage. There is no place in the U.S. Government that you can call to determine whether a prospective partner has a history of economic espionage, or whether a software developer has been caught placing backdoors or Trojans.

You may find that, in many cases, the FBI interprets the phrase information sharing to mean that you share information with them. In one instance, a corporate investigator gave the FBI an internal e-mail written in Chinese and asked them to translate it. This was done to keep the number of individuals involved in the case to a minimum: unless you know a translator and his background well, you run the risk of asking someone with ties to the Chinese to perform the translation.
Once the translation was performed, the FBI classified the document as secret and would not give the investigator the translated version until the investigator pointed out that he would otherwise have to translate the document through an outside source. The FBI relented.

Part of the problem facing the FBI is that there is no equivalent to a DoD or DoE security clearance for corporate information security personnel, and there are significant issues that complicate any attempt to create one. A typical security clearance background check looks at criminal records. Background investigations may go a step further and check references and interview old neighbors, schoolmates, and colleagues. The most rigorous clearance checks include viewing bank records, credit records, and other signs of fiscal responsibility, and may include a psychological evaluation. They are not permitted to consider national origin or religion unless
the United States is at war with a particular country. In such cases, the DoD has granted the clearance but placed the individuals in positions that would not create a conflict of interest. In practice, this becomes impossible. Do you share information about all countries and religious groups engaging in economic espionage, except those to which the security officer may have ties? Companies today cannot ask such questions of their employees. Unfortunately, unless a system of clearances is devised, the FBI will always be reluctant to share information, and rightfully so.

Another aspect of the problem facing the FBI is the multinational nature of today’s corporations. What exactly is a U.S. corporation? Many companies were conceived in foreign countries but established their corporate headquarters in the United States, ostensibly to improve their competitiveness in the huge U.S. marketplace. What of U.S. corporations that are wholly owned by foreign corporations? Should they be entitled to assistance, to limited assistance, or to no assistance? If limited assistance, how are the limits determined?

Within your corporation there are also opposing and cooperating forces. One of the most obvious is the conflict between marketing/sales and information security. In many companies, sales and marketing personnel are the most highly paid and influential people in the company. They are, in most cases, paid largely by commission: if they do not make the sale, they do not get paid. They are sometimes tempted to give a potential customer anything they want (in-depth tours of the plant, details of the manufacturing process, and so on) in order to make the sale. Unless you have a well-established and accepted information protection guide that clearly states what can and cannot be shared with these potential customers, you will have little support when you try to protect the company.
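In practice, such an information protection guide can be kept as simple structured records that sales, procurement, and technical staff can all consult. The sketch below is hypothetical; the field names, defaults, and the single-question disclosure check are illustrative choices, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionGuideEntry:
    """One asset's entry in a hypothetical information protection guide,
    populated from the valuation data gathered during the BIA."""
    asset: str                      # e.g., "benchmark test plan"
    classification: str             # e.g., "trade secret"
    threats: list = field(default_factory=list)
    required_safeguards: list = field(default_factory=list)
    shareable_with_prospects: bool = False  # may sales discuss it on tours?
    incident_contact: str = "security office"

def may_disclose(entry):
    """The one question sales and marketing must be able to answer
    before a plant tour or customer briefing."""
    return entry.shareable_with_prospects
```

Even a record this small gives the security function something concrete to point to when a disclosure dispute arises, instead of arguing each case from scratch.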
The marketing department may have enough influence to cause your procurement personnel to abandon reason and logic in the selection of critical systems and services. A Canadian company went through a lengthy procurement process for a massive wide area network contract. An RFP was released and companies responded. A selection committee met and eliminated the companies that did not meet the RFP requirements; only those that met them were carried into the final phase of the selection process. At this point, marketing intervened and required procurement to re-add to the final selection two companies that had not met the requirements of the RFP but that purchased high product volumes from this plant. Miracle of miracles, one of the two unqualified companies won the contract.

It is one thing for the marketing department to request that existing customers be given some preference among the qualified finalists. It is
quite another to require that unqualified respondents be given any consideration.

A product was developed in a country that conducts economic espionage operations against U.S. companies in your industry sector. This product was widely used throughout your company, leaving you potentially vulnerable to exploitation or exposed to a major liability. When the issue was raised, management asked whether this particular product contained a Trojan or other evidence of malicious code. The security officer responded, “No; but due to the nature of this product, if it did contain a Trojan or other malicious code, it could be devastating to our company. Because many companies in countries that do not conduct economic espionage in our industry sector make this kind of product, we should choose one of those to replace this one and thus avoid the risk.”

Management’s response was surprising: “Thank you very much, but we are going to stay with this product and spread it throughout the corporation; but do let us know if you find evidence of current backdoors and the like.”

One day the security team learned that, just as feared, there had indeed been a backdoor; in fact, several. The news was reported to management, whose response was unbelievable: “Well, have they fixed it?” The vendor claimed to have fixed it, but that was not the point. The point was that the vendor had placed the code in the software to begin with, and there was no way to tell whether the backdoor had been replaced with another. Management responded, “If they have fixed the problem, we are going to stay with the product, and that is the end of it. Do not bring this subject up again.”

In security, you must raise every security concern that arises with a product, even after management has made up its mind. To fail to do so would set the company up for charges of negligence should a loss related to that product occur.
“Doesn’t matter, do not raise this subject again.”

So why would management make a decision like this? One possible answer has to do with pressure from marketing and potential sales to that country. Another has to do with embarrassment. Some vice president or director somewhere made a decision to use the product to begin with. He may even have had to fall on a sword or two to get the product he wanted. Perhaps a more powerful director had already chosen this product for his site and forced the product’s selection as the corporate standard so that his staff would not be impacted. One rumor has it that the product was selected as a corporate standard because the individual choosing the standard was being paid a kickback by a relative working for a third-party vendor of the product.

If your IT department raises the issue, it runs the risk of embarrassing one or more of these senior managers and incurring their wrath. Your director may feel intimidated enough that he will not even raise the issue.


ACCESS CONTROL SYSTEMS AND METHODOLOGY

Even closer to home is the fact that the issue was raised to your management in time to prevent the spread of the questionable product throughout the corporation. Now, if the flag is raised, someone may question why it was not raised earlier. That blame would fall squarely on your director’s shoulders.

Does it matter that both the vice president and the director have fiduciary responsibility for losses related to these decisions should they occur? Does it matter that their decisions would not pass the prudent man test and thus place them one step closer to being found negligent? No, it does not. The director is accepting the risk — not the risk to the corporation, but the risk that damage might occur during his watch. The vice president probably does not know about the issue or the risks involved but could still be implicated via the concept of respondeat superior. The director may think he is protecting the vice president by keeping him out of the loop — the concept of plausible deniability — but the courts have already tackled that one. Senior management is responsible for the actions of those below them, regardless of whether they know about those actions.

Neither of these cases exists if the information security officer reports to the CEO. There is only a small opportunity for them to exist if the information security officer reports to the CIO. As the position sinks in the management structure, the opportunity for this type of situation increases.

The first time you raise the specter of economic espionage, you may encounter resistance from employees and management. “Our company isn’t like that. We don’t do anything important. No one I know has ever heard of anything like that happening here.
People in this community trust one another.”

Some of those who have been given evidence that such a threat does exist have preferred to ignore it, for to acknowledge it would require them to divert resources (people, equipment, or money) from their own initiatives and goals. They would prefer to “bet the company” that it will not occur while they are there. After they are gone, it no longer matters to them.

When you raise these issues as the information security officer, you are threatening the careers of many people: from the people who went along with it because they felt powerless to do anything, to the senior management who proposed it, to the people in between who protected the concept and decisions of upper management in good faith to the company. Without a communication path to the CEO and the other officers representing the stockholders, you do not have a chance of fulfilling your fiduciary duty to them.


The spy of the future is less likely to resemble James Bond, whose chief assets were his fists, than the Line X engineer who lives quietly down the street and never does anything more violent than turn a page of a manual or flick on his computer.

— Alvin Toffler, Power Shift: Knowledge, Wealth and Violence at the Edge of the 21st Century

References

1. John J. Fialka, War by Other Means, W.W. Norton & Company, 1997.
2. Michael Sayers and Albert E. Kahn, Sabotage! The Secret War Against America, Harper & Brothers, 1942, p. 25.
3. NSA X3 Technical Report X3-TR001-97, Checkpoint Firewall-1 Version 3.0a, Analysis and Penetration Test Report.
4. Letter of reply from David Steinberg, Director, Federal, Checkpoint Software, Inc., to Louis F. Giles, Deputy Chief, Commercial Solutions & Enabling Technology, 9800 Savage Road, Suite 6740, Ft. Meade, MD, dated September 10, 1998.
5. E-mail from Craig Johnson dated June 3, 1998, containing memo dated January 19, 1998, to all U.S. Sales of Checkpoint.
6. Jean Guisnel, Cyber Wars, Perseus Books, 1997.
7. John J. Fialka, War by Other Means, W.W. Norton & Company, 1997, pp. 181–184.
8. Annual Report to Congress on Foreign Economic Collection and Industrial Espionage — 2000, prepared by the National Counterintelligence Center.
9. InfraGard National By-Laws, undated, available online at http://www.infragard.net/applic_requirements/natl_bylaws.htm.

ABOUT THE AUTHOR Craig Schiller, CISSP, an information security consultant for Hawkeye Security, is the principal author of the first published edition of Generally Accepted System Security Principles.




Domain 2 Telecommunications and Network Security


This domain is certainly the most technical as well as the most volatile of the ten. It is also the one that attracts the most questions on the CISSP examination. As before, we devote a major amount of effort to assembling pertinent chapters that can enable readers to keep up with the security issues involved in this rapidly evolving area.

Section 2.1 deals with communications and network security. Chapter 6 addresses SNMP security. The Simple Network Management Protocol (SNMP) provides for monitoring network and computing devices everywhere. The chapter defines SNMP and discusses its operation, then goes on to explain the inherent security issues, most resulting from system and network administrators’ failures to change default values, which could lead to denial-of-service attacks or other availability problems.

Section 2.2 focuses on Internet, intranet, and extranet security. Chapter 7 speaks to the security issues resulting from the advent of high-speed, broadband Internet access. Broadband access methods are thoroughly discussed and the related security risks described. Achieving broadband security in view of its rapidly increasing popularity is explained as difficult but not impossible.

Chapter 8 provides new perspectives on the use of VPNs. With the growth of broadband, more companies are using VPNs for remote access and telecommuting, and VPNs are already widely used to protect data transiting insecure networks. Several new mechanisms are identified that add to the feasibility of increased use of VPN technology.

Following that, Chapter 9 examines firewall architectures, complete with a review of the fundamentals of firewalls, the basic types, and their pros and cons. This chapter explains in detail the various kinds of firewalls available today and comes to some excellent conclusions.
Chapter 10 presents a case study of the use of personal firewalls as host-based firewalls to provide layered protection against the wide spectrum of attacks mounted against hosts from networks. The conclusions from the case study reveal some surprising advantages to the use of personal firewall technology in a host environment.

Chapter 11 deals with wireless security vulnerabilities, probably the most frequently discussed issue we face these days. The author describes the three IEEE wireless LAN standards and their common security issues. The security mechanisms available (network name, authentication, and encryption) all have security problems. This chapter is a must for those using or intending to use wireless LANs.

Section 2.3 covers secure voice communication, an area to which we have not paid much attention previously but one that is nevertheless quite important in the field of information security. Chapter 12 points out that, although we spend most of our security resources on the protection of electronic information, we are losing millions of dollars annually to voice


and telecommunications fraud. The terminology related to voice communication is clarified, and the security issues are discussed in detail. It is also pointed out that the next set of security challenges is Voice-over-IP.

Chapter 13 talks about secure voice communications. Events are driving a progressive move toward convergence of voice over some combination of ATM, IP, and MPLS. New security mechanisms will be necessary, including encryption and security services. This chapter reviews architectures, protocols, features, quality of service, and security issues related to both landline and wireless voice communication and then examines the convergence aspects.

The final section in this rather large domain is probably the most interesting to information security professionals because it addresses network attacks and countermeasures: our bread-and-butter concerns. There are two chapters in this section. The first deals with packet sniffers. The use and misuse of packet sniffers has been a big security concern for many years. Here we have a complete description of what they are, how they work, an example, and their legitimate uses. We then go on to describe their misuse and why they are such a security concern, and ways to reduce this serious risk are described. You must be aware of the inherent dangers associated with sniffers and the methods to mitigate their threat.

The second chapter discusses the various types of denial-of-service attacks and their importance to us in the security world; much of that importance lies in their relationship to the growth and success of ISPs. We focus on what ISPs can do about these attacks, particularly with respect to the newest rage: distributed denial-of-service attacks.

The chapters in the telecommunications and network security domain contain extremely important information for any organization involved in this technology.




Chapter 6

What’s Not So Simple about SNMP?

Chris Hare, CISSP, CISA

The Simple Network Management Protocol, or SNMP, is a defined Internet standard of the Internet Engineering Task Force, documented in Request for Comment (RFC) 1157. This chapter discusses what SNMP is, how it is used, and the challenges facing network management and security professionals regarding its use.

While several SNMP applications are mentioned in this chapter, no support or recommendation of these applications is made or implied. As with any application, the enterprise must select its SNMP application based upon its individual requirements.

SNMP DEFINED

SNMP is used to monitor network and computer devices around the globe. Simply stated, network managers use SNMP to communicate management information, both status and configuration, between the network management station and the SNMP agents in the network devices. The protocol is aptly named because, despite the intricacies of a network, SNMP itself is very simple.

Before examining the architecture, a review of the terminology used is required:

• Network element: any device connected to the network, including hosts, gateways, servers, terminal servers, firewalls, routers, switches, and active hubs
• Network management station (or management station): a computing platform with SNMP management software to monitor and control the network elements; examples of common management stations are HP OpenView and CA Unicenter
• SNMP agent: a software management agent responsible for performing the network management functions received from the management station

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



[Exhibit 6-1. The SNMP network manager: the management station sends requests to, and receives traps from, network devices with SNMP agents.]

• SNMP request: a message sent from the management station to the SNMP agent on the network device
• SNMP trap receiver: the software on the management station that receives event notification messages from the SNMP agent on the network device
• Management information base (MIB): the standard method of identifying the elements in the SNMP database

A network configured to use SNMP for the management of network devices consists of at least one SNMP agent and one management station. The management station is used to configure the network elements and to receive SNMP traps from those elements. Through SNMP, the network manager can monitor the status of the various network elements, make appropriate configuration changes, and respond to alerts received from the network elements (see Exhibit 6-1). As networks increase in size and complexity, a centralized method of monitoring and management is essential. Multiple management stations may exist and may be used to compartmentalize the network structure or to regionalize operations of the network.

SNMP can retrieve the configuration information for a given network element in addition to device errors or alerts. Error conditions will vary from one SNMP agent to another but include network interface failures, system failures, disk space warnings, and the like. When the device issues an alert to the management station, network management personnel can investigate and resolve the problem.

Access to systems is controlled through knowledge of a community string, which can be compared to a password. Community strings are discussed in more detail later in the chapter but by themselves should not be considered a form of authentication.


From time to time it is necessary for the management station to send configuration requests to the device. If the correct community string is provided, the device configuration is changed accordingly. Even this simple explanation shows the value gained from SNMP: an organization can monitor the status of all its equipment and perform remote troubleshooting and configuration management.

THE MANAGEMENT INFORMATION BASE (MIB)

The MIB defines the scope of information available for retrieval or configuration on the network element. There is a standard MIB that all devices should support. The manufacturer of the device can also define custom extensions to support additional configuration parameters. The definition of MIB extensions must follow a defined convention for the management stations to understand and interpret the MIB correctly.

The MIB is expressed using the ASN.1 language; while this is important to be aware of, it is not a major concern unless you are specifically designing new elements for the MIB. All MIB objects are defined explicitly in the Internet-standard MIB or through a defined naming convention. The defined naming convention constrains how product vendors create individual instances of an MIB element for a particular network device. This is important, given the wide number of SNMP-capable devices and the relatively small range of monitoring station equipment.

An understanding of the MIB beyond this point is necessary only for network designers, who must concern themselves with the actual MIB structure and representations. Suffice it to say for this discussion that the MIB components are represented using English identifiers.

SNMP OPERATIONS

All SNMP agents must support both inspection and alteration of the MIB variables. These operations are referred to as SNMP get (retrieval and inspection) and SNMP set (alteration).
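The English identifiers are a convenience for humans; on the wire, every MIB object is named by a numeric object identifier. A small illustrative sketch (the OID for sysDescr.0 comes from the Internet-standard MIB; the helper function is our own):

```python
# The Internet-standard MIB hangs off iso(1).org(3).dod(6).internet(1).
# sysDescr, the device description string, is object 1 under system(1)
# in mgmt(2).mib-2(1); the trailing 0 names the single scalar instance.
SYS_DESCR_0 = (1, 3, 6, 1, 2, 1, 1, 1, 0)

def dotted(oid):
    """Render an OID tuple in the familiar dotted notation."""
    return ".".join(str(arc) for arc in oid)

print(dotted(SYS_DESCR_0))   # 1.3.6.1.2.1.1.1.0
```

A get of `system.sysDescr.0` and a numeric get of `1.3.6.1.2.1.1.1.0` name the same object; the management station performs the translation.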
The developers of SNMP established only these two operations to minimize the number of essential management functions to support and to avoid the introduction of other imperative management commands. Most network protocols have evolved to support a vast array of potential commands, which must be available in both the client and the server. The File Transfer Protocol (FTP) is a good example of a simple command set that has evolved to include more than 74 commands.

The SNMP management philosophy uses the management station to poll the network elements for appropriate information. SNMP uses traps to send messages from the agent running on the monitored system to the monitoring station, and these are then used to control the polling. Limiting the


number of messages between the agent and the monitoring station achieves the goal of simplicity and minimizes the amount of traffic associated with the network management functions. As mentioned, limiting the number of commands makes implementing the protocol easier: it is not necessary to develop a separate operating system interface for each imperative command; an action such as a system reboot can instead be invoked by setting a variable that forces a reboot after a defined time period has elapsed.

The interaction between the SNMP agent and management station occurs through the exchange of protocol messages. Each message is designed to fit within a single User Datagram Protocol (UDP) packet, thereby minimizing the impact of the management structure on the network.

ADMINISTRATIVE RELATIONSHIPS

The management of network elements requires an SNMP agent on the element itself and a management station. The grouping of SNMP agents with a management station is called a community; the community string is the identifier used to distinguish among communities in the same network. The SNMP RFC specifies an authentic message as one in which the correct community string is provided to the network device by the management station. The authentication scheme consists of the community string and a set of rules used to determine whether the message is in fact authentic. Finally, the SNMP authentication service describes a function identifying an authentic SNMP message according to the established authentication schemes.

Administrative relationships, called communities, pair a monitored device with the management station. Through this scheme, administrative relationships can be separated among devices. The agent and management station defined within a community establish the SNMP access policy. Management stations can communicate directly with the agent or, depending on the network design, through an SNMP proxy agent.
The proxy agent relays communications between the monitored device and the management station. The use of proxy agents allows communication with all network elements, including modems, multiplexors, and other devices that support different management frameworks. An additional benefit of the proxy agent design is that it shields network elements from complex access policies.

The community string establishes which access policy community is used, and it can be compared to a password. The community string establishes the password to access the agent in either read-only mode, commonly referred to as the public community, or read-write mode, known as the private community.
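The community-to-access-mode pairing described above can be sketched as a small lookup table; the community names and the `authorize` helper are illustrative, not part of any SNMP library:

```python
# Illustrative access policy: each community string grants a mode.
# "public"/"private" are the well-known defaults discussed later in
# the chapter; a real deployment should use harder-to-guess strings.
COMMUNITIES = {
    "public": "read-only",
    "private": "read-write",
}

def authorize(community, operation):
    """Return True if the community string permits the operation."""
    mode = COMMUNITIES.get(community)
    if mode is None:
        return False            # unknown community: authentication failure
    if operation == "set":
        return mode == "read-write"
    return operation == "get"   # get is allowed in either mode

print(authorize("public", "get"))    # True
print(authorize("public", "set"))    # False
print(authorize("wrong", "get"))     # False
```

This is the whole of SNMPv1's "authentication": knowledge of a string, which is why community strings alone should not be treated as real authentication.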


SNMP REQUESTS

There are two access modes within SNMP: read-only and read-write. The command used, the variable, and the community string determine the access mode. Corresponding with the access modes are two community strings, one for each mode. Access to a variable and the associated action is controlled as follows:

• If the variable is defined with an access type of none, the variable is not available under any circumstances.
• If the variable is defined with an access type of read-write or read-only, the variable is accessible for the appropriate get, set, or trap commands.
• If the variable does not have an access type defined, it is available for get and trap operations.

However, these rules only establish what actions can be performed on the MIB variable. The actual communication between the SNMP agent and the monitoring station follows a defined protocol for message exchange. Each message includes the:

• SNMP version identifier
• Community string
• Protocol data unit (PDU)

The SNMP version identifier establishes the version of SNMP in use: Version 1, 2, or 3. As mentioned previously, the community string determines which community is accessed, either public or private. The PDU contains the actual SNMP trap or request. With the exception of traps, which are reported on UDP port 162, all SNMP requests are received on UDP port 161. RFC 1157 specifies that protocol implementations need not accept messages of more than 484 bytes in length, although in practice a longer message length is typically supported.

There are five PDUs supported within SNMP:

1. GetRequest-PDU
2. GetNextRequest-PDU
3. GetResponse-PDU
4. SetRequest-PDU
5. Trap-PDU

When transmitting a valid SNMP request, the PDU is constructed from the desired operation and the MIB variable in ASN.1 notation; the source and destination IP addresses and UDP ports are included, along with the community string. Once processed, the resulting request is sent to the receiving system.
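As a concrete illustration of the message layout (version, community string, PDU), the following sketch hand-encodes an SNMPv1 GetRequest for sysDescr.0 in BER. It is a simplified encoder for small values only, not a full ASN.1 implementation, and is our own construction rather than any library's API:

```python
# Minimal BER (tag-length-value) helpers; short-form lengths only.
def ber(tag, payload):
    assert len(payload) < 128          # short-form length is sufficient here
    return bytes([tag, len(payload)]) + payload

def ber_int(value):
    return ber(0x02, bytes([value]))   # small non-negative integers only

def ber_oid(*arcs):
    first = bytes([arcs[0] * 40 + arcs[1]])  # first two arcs pack into one byte
    return ber(0x06, first + bytes(arcs[2:]))  # remaining arcs are all < 128

def get_request(community, oid, request_id=1):
    varbind = ber(0x30, ber_oid(*oid) + ber(0x05, b""))  # value = ASN.1 NULL
    pdu = ber(0xA0,                    # context tag 0 = GetRequest-PDU
              ber_int(request_id) +
              ber_int(0) +             # error-status = noError
              ber_int(0) +             # error-index
              ber(0x30, varbind))      # variable-bindings list
    return ber(0x30,                   # outer SEQUENCE: the whole message
               ber_int(0) +            # version: SNMPv1 is encoded as 0
               ber(0x04, community.encode()) +  # community as OCTET STRING
               pdu)

msg = get_request("public", (1, 3, 6, 1, 2, 1, 1, 1, 0))  # sysDescr.0
print(len(msg))   # 40
```

The 40-byte result fits comfortably within the 484-byte minimum message size and would be sent as the payload of a single UDP datagram to port 161; a production implementation would of course use a real SNMP library rather than hand-rolled BER.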


[Exhibit 6-2. The SNMP transmission process: the management station sends an SNMP request with version, variable, and community string; the agent returns the requested information or a trap with an error.]

As shown in Exhibit 6-2, the receiving system accepts the request and assembles an ASN.1 object. The message is discarded if the decoding fails; if implemented correctly, this discard function causes the receiving system to ignore malformed SNMP requests. Similarly, the SNMP version is checked, and if there is a mismatch, the packet is also dropped. The request is then authenticated using the community string. If the authentication fails, a trap may be generated indicating an authentication failure, and the packet is dropped. If the message is accepted, the object is parsed again to assemble the actual request. If the parse fails, the message is dropped. If the parse is successful, the appropriate SNMP profile is selected using the named community, and the message is processed. Any resulting data is returned to the source address of the request.

THE PROTOCOL DATA UNIT

As mentioned, there are five protocol data units supported. Each is used to implement a specific request between the SNMP agent and management station. Each will be briefly examined to review its purpose and functionality.

The GetRequest PDU requests information to be retrieved from the remote device. The management station uses the GetRequest PDU to make queries of the various network elements. If the MIB variable specified is matched exactly in the network element MIB, the value is returned using the GetResponse PDU. We can see the direct results of the GetRequest and GetResponse messages using the snmpwalk command commonly found on Linux systems:

    $ for host in 1 2 3 4 5
    > do
    > snmpwalk 192.168.0.$host public system.sysDescr.0
    > done
    system.sysDescr.0 = Instant Internet version 7.11.2
    Timeout: No Response from 192.168.0.2
    system.sysDescr.0 = Linux linux 2.4.9-31 #1 Tue Feb 26 07:11:02 EST 2002 i686
    Timeout: No Response from 192.168.0.4
    Timeout: No Response from 192.168.0.5
    $

Despite the existence of a device at all five IP addresses in the above range, only two are configured to provide a response; alternatively, the SNMP community string provided may have been incorrect. Note that, on systems where snmpwalk is not installed, the command is available in the ucd-snmp (now net-snmp) source code available from many network repositories.

The GetResponse PDU is the protocol type containing the response to the request issued by the management station. Each GetRequest PDU results in a response using GetResponse, regardless of the validity of the request.

The GetNextRequest PDU is identical in form to the GetRequest PDU, except that it is used to get additional information following a previous request. In particular, table traversals through the MIB are typically done using the GetNextRequest PDU. For example, using the snmpwalk command, we can traverse the entire table using the command:

    # snmpwalk localhost public
    system.sysDescr.0 = Linux linux 2.4.9-31 #1 Tue Feb 26 07:11:02 EST 2002 i686
    system.sysObjectID.0 = OID: enterprises.ucdavis.ucdSnmpAgent.linux
    system.sysUpTime.0 = Timeticks: (4092830521) 473 days, 16:58:25.21
    system.sysContact.0 = [email protected]
    system.sysName.0 = linux
    system.sysLocation.0 = Unknown
    system.sysORLastChange.0 = Timeticks: (4) 0:00:00.04
    …

In our example, no specific MIB variable is requested, which causes all MIB variables and their associated values to be printed. This generates a large amount of output from snmpwalk; each variable is retrieved until there is no additional information to be received.

Aside from the requests to retrieve information, the management station can also set selected variables to new values. This is done using the SetRequest PDU. When receiving the SetRequest PDU, the receiving station has several valid responses:


• If the named variable cannot be changed, the receiving station returns a GetResponse PDU with an error code.
• If the value does not match the named variable type, the receiving station returns a GetResponse PDU with a bad value indication.
• If the request exceeds a local size limitation, the receiving station responds with a GetResponse PDU with an indication of too big.
• If the named variable cannot be altered and is not covered by the preceding rules, a general error message is returned by the receiving station using the GetResponse PDU.

If there are no errors in the request, the receiving station updates the value for the named variable. The typical read-write community is called private, and the correct community string must be provided for this access. If the value is changed, the receiving station returns a GetResponse PDU with a “No error” indication. As discussed later in this chapter, if the SNMP read-write community string is the default or set to another well-known value, any user can change MIB parameters and thereby affect the operation of the system.

SNMP TRAPS

SNMP traps are used to send an event back to the monitoring station. The trap is transmitted at the request of the agent and sent to the device specified in the SNMP configuration files. While the use of traps is universal across SNMP implementations, the means by which the SNMP agent determines where to send the trap differs among SNMP agent implementations. There are several traps available to send to the monitoring station:

• coldStart
• warmStart
• linkDown
• linkUp
• authenticationFailure
• egpNeighborLoss
• enterpriseSpecific
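On the wire, both the GetResponse error indications and these generic trap types are carried as small integers; the assignments below are from RFC 1157:

```python
# SNMPv1 error-status values carried in a GetResponse PDU (RFC 1157).
ERROR_STATUS = {
    0: "noError",
    1: "tooBig",       # request exceeds a local size limitation
    2: "noSuchName",   # named variable does not exist or cannot be changed
    3: "badValue",     # value does not match the variable type
    4: "readOnly",
    5: "genErr",       # general error not covered by the other codes
}

# generic-trap values carried in a Trap-PDU (RFC 1157).
GENERIC_TRAP = {
    0: "coldStart",
    1: "warmStart",
    2: "linkDown",
    3: "linkUp",
    4: "authenticationFailure",
    5: "egpNeighborLoss",
    6: "enterpriseSpecific",
}

print(ERROR_STATUS[1], GENERIC_TRAP[4])   # tooBig authenticationFailure
```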

Traps are sent using a PDU, similar to the other message types previously discussed.

The coldStart trap is sent when the system is initialized from a powered-off state and the agent is reinitializing. This trap indicates to the monitoring station that the SNMP implementation may have been or may be altered.

The warmStart trap is sent when the system restarts, causing the agent to reinitialize. In a warmStart trap event, neither the SNMP agent’s implementation nor its configuration is altered.


[Exhibit 6-3. Router with multiple network interfaces.]

Most network management personnel are familiar with the linkDown and linkUp traps. The linkDown trap is generated when the SNMP agent recognizes the failure of one or more of the network links in its configuration. Similarly, when a communication link is restored, the linkUp trap is sent to the monitoring station. In both cases, the trap indicates the network link where the failure or restoration has occurred. Exhibit 6-3 shows a device, in this case a router, with multiple network interfaces, as seen in a network management station. The failure of the red interface (shown here in black) caused the router to send a linkDown trap to the management station, resulting in the change in color for the object. The green objects (shown in white) represent currently operational interfaces.

The authenticationFailure trap is generated when the SNMP agent receives a message with the incorrect community string, meaning the attempt to access the SNMP community has failed.

When the SNMP agent communicates in an Exterior Gateway Protocol (EGP) relationship and the peer is no longer reachable, an egpNeighborLoss trap is sent to the management station. This trap means routing information available from the EGP peer is no longer available, which may affect other network connectivity.

Finally, the enterpriseSpecific trap is generated when the SNMP agent recognizes that an enterprise-specific event has occurred. This is implementation dependent, and the specific trap information is included in the message sent back to the monitoring station.

SNMP SECURITY ISSUES

The preceding brief introduction to SNMP should raise a few issues for the security professional. As mentioned, the default SNMP community


strings are public for read-only access and private for read-write. Most system and network administrators do not change these values. Consequently, any user, authorized or not, can obtain information through SNMP about the device and potentially change or reset values. For example, if the read-write community string is the default, any user can change the device’s IP address and take it off the network. This can have significant consequences, most notably for the availability of the device. It is not typically possible to access enterprise information or system passwords, or to gain command-line or terminal access, using SNMP; rather, unauthorized changes could result in the monitoring station identifying the device as unavailable, forcing corrective action to restore service. The common SNMP security issues are:

• Well-known default community strings
• The ability to change the configuration information on the system where the SNMP agent is running
• Multiple management stations managing the same device
• Denial-of-service attacks

Many security and network professionals are undoubtedly familiar with the Computer Emergency Response Team (CERT) Advisory CA-2002-03, published in February 2002. While this advisory is of particular interest to the network and security communities today, it should not overshadow the other issues mentioned above, because many of the problems in CA-2002-03 are possible due to those other security issues.

Well-Known Community Strings

As mentioned previously, there are two SNMP access policies, read-only and read-write, using the default community strings of public and private, respectively. Many organizations do not change the default community strings. Failing to change the default values means it is possible for an unauthorized person to change the configuration parameters associated with the device. Consequently, SNMP community strings should be treated as passwords.
The better the quality of the password, the less likely an unauthorized person could guess the community string and change the configuration.

Ability to Change SNMP Configuration

On many systems, users who have administrative privileges can change the configuration of their system, even if they have no authority to do so. This ability to change the local SNMP agent configuration can affect the operation of the system, cause network management problems, or affect the operation of the device.
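To make the community-string risk concrete, the following sketch hand-encodes an SNMPv1 GetRequest for sysDescr.0 and tries it against an agent. This is my own illustrative code, not from the chapter; it uses short-form BER lengths only (so it assumes a short community string), and the `probe` helper name and addresses are hypothetical.

```python
import socket

def snmpv1_get_sysdescr(community: str) -> bytes:
    """Build a minimal SNMPv1 GetRequest for sysDescr.0 (1.3.6.1.2.1.1.1.0).
    Short-form BER lengths only, so community strings must stay short."""
    oid = bytes([0x06, 0x08, 0x2B, 0x06, 0x01, 0x02, 0x01, 0x01, 0x01, 0x00])
    varbind = bytes([0x30, len(oid) + 2]) + oid + bytes([0x05, 0x00])  # value = NULL
    vblist = bytes([0x30, len(varbind)]) + varbind
    pdu_body = (bytes([0x02, 0x01, 0x01])      # request-id = 1
                + bytes([0x02, 0x01, 0x00])    # error-status = 0
                + bytes([0x02, 0x01, 0x00])    # error-index = 0
                + vblist)
    pdu = bytes([0xA0, len(pdu_body)]) + pdu_body          # GetRequest PDU
    body = (bytes([0x02, 0x01, 0x00])                      # version = 0 (SNMPv1)
            + bytes([0x04, len(community)]) + community.encode()
            + pdu)
    return bytes([0x30, len(body)]) + body                 # outer SEQUENCE

def probe(host: str, community: str = "public", timeout: float = 2.0) -> bool:
    """Return True if the agent at `host` answers the given community string."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(snmpv1_get_sysdescr(community), (host, 161))
        try:
            s.recvfrom(2048)
            return True
        except socket.timeout:
            return False
```

The point of the sketch is how little it takes: anyone on the network can loop `probe(host, c)` over the two default strings, which is exactly why they should be changed and treated as passwords.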


What's Not So Simple about SNMP?

Consequently, SNMP configuration files should be controlled and, if possible, centrally managed to identify and correct configuration changes. This can be done in a variety of ways, including tools such as Tripwire.

Multiple Management Stations

While this is not a security problem per se, multiple management stations polling the same device can cause problems ranging from poor performance, to differing SNMP configuration information, to the apparent loss of service. If your network is large enough to require multiple management stations, separate communities should be established to prevent these events from taking place. Remember, there is no constraint on the number of SNMP communities that can be used in the network; it is only the network engineer who imposes the limits.

Denial-of-Service Attacks

Denial of service is defined as the loss of service availability through either authorized or unauthorized configuration changes. It is important to be clear about authorized and unauthorized changes: the system or application administrator who makes a configuration change as part of his job and causes a loss of service has the same impact as the attacker who executes a program to cause the loss of service remotely. A key problem with SNMP is the ability to change the configuration of the system, causing the service outage, or to change the SNMP configuration and imitate a denial of service as reported by the monitoring station. In either situation, someone has to review and possibly correct the configuration problem, regardless of the cause. This has a cost to the company, even if an authorized person made the change.

The Impact of CERT CA-2002-03

Most equipment manufacturers, enterprises, and individuals felt the impact of the CERT advisory issued by the Carnegie Mellon Software Engineering Institute (CM-SEI) Computer Emergency Response Team Coordination Center (CERT-CC).
The advisory was issued after the Oulu University Secure Programming Group conducted a very thorough analysis of the message-handling capabilities of SNMP Version 1. While the advisory is specifically for SNMP Version 1, most SNMP implementations use the same program code for decoding the PDU, potentially affecting all SNMP versions. The primary issues noted in the advisory as they affect SNMP involve the potential for unauthorized privileged access, denial-of-service attacks, or other unstable behavior. Specifically, the work performed by Oulu University found problems with decoding trap messages received by the SNMP


management station or requests received by the SNMP agent on the network device. It was also identified that some of the vulnerabilities found in the SNMP implementation did not require the correct community string. Consequently, vendors have been issuing patches for their SNMP implementations; but more importantly, enterprises have been testing for vulnerabilities within their networks. The vulnerabilities in code that has been in use for decades will cost developers millions of dollars in new development activities to remove the vulnerabilities, verify the fixes, and release patches. The users of those products will also spend millions of dollars on patching and implementing other controls to limit the potential exposures. Many of the recommendations provided by CERT for addressing the problem are solutions for the common security problems when using SNMP. The recommendations provided by CERT can be considered common sense, because SNMP should be treated as a network service:

• Disable SNMP. If the device in question is not monitored using SNMP, it is likely safe to disable the service. Remember, if you are monitoring the device and disable SNMP in error, your management station will report the device as down.
• Implement perimeter network filtering. Most enterprises should filter inbound SNMP requests from external networks to prevent unauthorized individuals or organizations from retrieving SNMP information about your network devices. Sufficient information exists in the SNMP data to provide a good view of how to attack your enterprise. Secondly, outbound filtering should be applied to prevent SNMP requests from leaving your network and being directed to another enterprise. The obvious exceptions here are if you are monitoring another network outside yours, or if an external organization is providing SNMP-based monitoring systems for your network.
• Implement authorized SNMP host filtering.
Not every user who wants to should be able to issue SNMP queries to the network devices. Consequently, filters can be installed in the network devices such as routers and switches to limit the source and destination addresses for SNMP requests. Additionally, the SNMP configuration of the agent should include the appropriate details to limit the authorized SNMP management and trap stations.
• Change default community strings. A major problem in most enterprises, the default community strings of public and private should be changed to a complex string; and knowledge of that string should be limited to as few people as possible.
• Create a separate management network. This can be a long, involved, and expensive process that many enterprises do not undertake. A separate


management network keeps connectivity to the network devices even when there is a failure on the network portion. However, it requires a completely separate infrastructure, making it expensive to implement and difficult to retrofit. If you are building a new network, or have an existing network with critical operational requirements, a separate management network is highly advisable.

The recommendations identified here should be implemented by most enterprises, even if all their network devices have the latest patches implemented. Implementing these techniques for other network protocols and services in addition to SNMP can greatly reduce the risk of unauthorized network access and data loss.

SUMMARY

The goal of SNMP is to provide a simple yet powerful mechanism to change the configuration and monitor the state and availability of systems and network devices. However, the nature of SNMP, as with other network protocols, also exposes it to attack and improper use by network managers, system administrators, and security personnel. Understanding the basics of SNMP and the major security issues affecting its use, as discussed here, helps the security manager communicate concerns about network design and implementation with the network manager or network engineer.

Acknowledgments

The author thanks Cathy Buchanan of Nortel Networks' Internet Engineering team for her editorial and technical clarifications. And thanks to Mignona Cote, my friend and colleague, for her continued support and ideas. Her assistance continues to expand my vision and provides challenges on a daily basis.

References

Internet Engineering Task Force (IETF) Request for Comments (RFC) documents:

RFC-1089 SNMP over Ethernet
RFC-1157 A Simple Network Management Protocol (SNMP)
RFC-1187 Bulk Table Retrieval with the SNMP
RFC-1215 A Convention for Defining Traps for Use with the SNMP
RFC-1227 SNMP MUX Protocol and MIB
RFC-1228 SNMP-DPI: Simple Network Management Protocol Distributed Program Interface
RFC-1270 SNMP Communications Services
RFC-1303 A Convention for Describing SNMP-Based Agents



RFC-1351 SNMP Administrative Model
RFC-1352 SNMP Security Protocols
RFC-1353 Definitions of Managed Objects for Administration of SNMP Parties
RFC-1381 SNMP MIB Extension for X.25 LAPB
RFC-1382 SNMP MIB Extension for the X.25 Packet Layer
RFC-1418 SNMP over OSI
RFC-1419 SNMP over AppleTalk
RFC-1420 SNMP over IPX
RFC-1461 SNMP MIB Extension for Multiprotocol Interconnect over X.25
RFC-1503 Algorithms for Automating Administration in SNMPv2 Managers
RFC-1901 Introduction to Community-Based SNMPv2
RFC-1909 An Administrative Infrastructure for SNMPv2
RFC-1910 User-Based Security Model for SNMPv2
RFC-2011 SNMPv2 Management Information Base for the Internet Protocol
RFC-2012 SNMPv2 Management Information Base for the Transmission Control Protocol
RFC-2013 SNMPv2 Management Information Base for the User Datagram Protocol
RFC-2089 V2ToV1 Mapping SNMPv2 onto SNMPv1 within a Bi-Lingual SNMP Agent
RFC-2273 SNMPv3 Applications
RFC-2571 An Architecture for Describing SNMP Management Frameworks
RFC-2573 SNMP Applications
RFC-2742 Definitions of Managed Objects for Extensible SNMP Agents
RFC-2962 An SNMP Application Level Gateway for Payload Address Translation
CERT Advisory CA-2002-03

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.



Chapter 7

Security for Broadband Internet Access Users

James Trulove

High-speed access is becoming increasingly popular for connecting to the Internet and to corporate networks. The term "high-speed" is generally taken to mean transfer speeds above the 56 kbps of analog modems, or the 64 to 128 kbps speeds of ISDN. There are a number of technologies that provide transfer rates from 256 kbps to 1.544 Mbps and beyond. Some offer asymmetrical uplink and downlink speeds that may go as high as 6 Mbps. These high-speed access methods include DSL, cable modems, and wireless point-to-multipoint access. DSL services include all of the so-called "digital subscriber line" access methods that utilize conventional copper telephone cabling for the physical link from customer premises to central office (CO). The most popular of these methods is ADSL, or asymmetrical digital subscriber line, where an existing POTS (plain old telephone service) dial-up line does double duty by having a higher-frequency digital signal multiplexed over the same pair. Filters at the user premises and at the central office tap off the digital signal and send it to the user's PC and the CO router, respectively. The actual transport of the ADSL data is via ATM, a factor invisible to the user, who is generally using TCP/IP over Ethernet. A key security feature of DSL service is that the transport medium (one or two pairs) is exclusive to a single user. In a typical neighborhood of homes or businesses, individual pairs from each premises are, in turn, consolidated into larger cables of many pairs that run eventually to the service provider's CO. As with a conventional telephone line, each user is isolated from other users in the neighborhood. This is inherently more secure than competing high-speed technologies. The logical structure of an ADSL distribution within a neighborhood is shown in Exhibit 7-1A.

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



Cable modems (CMs) allow a form of high-speed shared access over media used for cable television (CATV) delivery. Standard CATV video channels are delivered over a frequency range from 54 MHz to several hundred megahertz. Cable modems simply use a relatively narrow band of those frequencies that is unused for TV signal delivery. CATV signals are normally delivered through a series of in-line amplifiers and signal splitters to a typical neighborhood cable segment. Along each of these final segments, additional signal splitters (or taps) distribute the CATV signals to users. Adding two-way data distribution to the segment is relatively easy because splitters are inherently two-way devices and no amplifiers are within the segment. However, the uplink signal from users in each segment must be retrieved at the head of the segment and either repeated into the next up-line segment or converted and transported separately. As shown in Exhibit 7-1B, each neighborhood segment is along a tapped coaxial cable (in most cases) that terminates in a common-equipment cabinet (similar in design to the subscriber-line interface cabinets used in telephone line multiplexing). This cabinet contains the equipment to filter off the data signal from the neighborhood coax segment and transport it back to the cable head end. Alternative data routing may be provided between the common-equipment cabinets and the NOC (network operations center), often over fiber-optic cables. In fact, these neighborhood distribution cabinets are often used as a transition point for all CATV signals between fiber-optic transmission links and the installed coaxial cable to the users. Several neighborhood segments may terminate in each cabinet. When a neighborhood has been rewired for fiber distribution and cable modem services, the most obvious outward sign is the appearance of a four-foot-high green or gray metal enclosure.
These big green (or gray) boxes are metered and draw electrical power from a local power pole, and they often have an annoying little light to warn away would-be villains. Many areas do not have ready availability of cable modem circuits or DSL. Both technologies require the user to be relatively near the corresponding distribution point, and both need a certain amount of infrastructure expansion by the service provider. A wireless Internet option exists for high-speed access from users who are in areas that are otherwise unserved. The term "wireless Internet" refers to a variety of noncellular radio services that interconnect users to a central access point, generally with a very high antenna location on a tall building, a broadcast tower, or even a mountaintop. Speeds can be quite comparable to the lower ranges of DSL and CM (i.e., 128 to 512 kbps). Subscriber fees are somewhat higher, but still a great value to someone who would otherwise have to deal with low-speed analog dial access. Wireless Internet is often described as point-to-multipoint operation. This refers to the coverage of several remote sites from a central site, as opposed



[Exhibit 7-1. Broadband and wireless Internet access methods: (A) DSL distribution, with individual subscriber pairs running to the central office; (B) a cable modem segment, with a neighborhood interface cabinet and data link to the cable head end; (C) wireless Internet distribution.]

to point-to-point links that are intended to serve a pair of sites exclusively. As shown in Exhibit 7-1C, remote user sites at homes or businesses are connected by a radio link to a central site. In general, the central site has an omnidirectional antenna (one that covers equally in all radial directions), while remote sites have directional antennas that point at the central antenna. Wireless Internet users share the frequency spectrum among all the users of a particular service frequency. This means that these remote users must share the available bandwidth as well. As a result, as with the cable modem situation, the actual data throughput depends on how many users are online and active. In addition, all the transmissions are essentially broadcast into the air and can be monitored or intercepted with the proper equipment. Some wireless links include a measure of encryption, but the key may still be known to all subscribers to the service.
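Why a service-wide key offers so little protection can be shown with a deliberately toy sketch. This is my own illustration, not any real wireless cipher: XOR with a repeating key stands in for the link encryption, and the key value is hypothetical.

```python
def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy illustration only: XOR with a repeating shared key (not a real cipher)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical service-wide key handed to every subscriber at sign-up.
SHARED_KEY = b"subscriber-key-01"

ciphertext = xor_stream(b"user A's web traffic", SHARED_KEY)
# Any other subscriber holding the same key recovers user A's traffic:
recovered = xor_stream(ciphertext, SHARED_KEY)
```

Whatever the real cipher, the weakness is the same: encryption with a key known to every subscriber protects the link only from outsiders, not from the other subscribers sharing the spectrum.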


There are several types of wireless systems permitted in the United States, as in the European Union, Asia, and the rest of the world. Some of these systems permit a single provider to control the rights to a particular frequency allocation. These exclusively licensed systems protect users from unwanted interference from other users and protect the large investment required of the service provider. Other systems utilize a frequency spectrum that is shared and available to all. For example, the 802.11 systems at 2.4 GHz and 5.2 GHz are shared-frequency, nonlicensed systems that can be adapted to point-to-multipoint distribution. Wireless, or radio-frequency (RF), distribution is subject to all of the same distance limitations, antenna designs, antenna siting, and interference considerations of any RF link. However, in good circumstances, wireless Internet provides a very satisfactory level of performance, one that is comparable to its wired competitors.

BROADBAND SECURITY RISKS

Traditional remote access methods, by their very nature, provide a fair measure of link security. Dial-up analog and dial-on-demand ISDN links have relatively good protection along the path between the user's computer and the access service provider (SP). Likewise, dedicated links to an Internet service provider (ISP) are inherently safe as well, barring any intentional (and unauthorized/illegal) tapping. However, this is not necessarily the case with broadband access methods. Of the common broadband access methods, cable modems and wireless Internet have inherent security risks because they use shared media for transport. On the other hand, DSL does indeed utilize an exclusive path to the CO but has some more subtle security issues that are shared with the other two methods. The access-security issue with cable modems is probably the most significant.
Most PC users run a version of the Microsoft Windows® operating system, popularly referred to simply as Windows. All versions of Windows since Windows 95® have included a feature called peer-to-peer networking. This feature is in addition to the TCP/IP protocol stack that supports Internet-oriented traffic. Microsoft Windows NT® and Windows 2000® clients also support peer-to-peer networking. These operating systems share disk, printer, and other resources in a network neighborhood utilizing the NetBIOS protocol. NetBIOS is inherently nonroutable, although it can be encapsulated within TCP/IP and IPX protocols. A particular network neighborhood is identified by a Workgroup name and, theoretically, devices with different Workgroup names cannot converse. A standard cable modem is essentially a two-way repeater connected between a user's PC (or local network) and the cable segment. As such, it


repeats everything along your segment to your local PC network and everything on your network back out to the cable segment. Thus, all the "private" conversations one might have with one's network-connected printer or other local PCs are available to everyone on the segment. In addition, every TCP/IP packet that goes between one's PC and the Internet is also available for eavesdropping along the cable segment. This is a very serious security risk, at least among those connected to a particular segment. It makes an entire group of cable modem users vulnerable to monitoring, or even intrusion. Specific actions to mitigate this risk are discussed later. Wireless Internet acts essentially as a shared Ethernet segment, where the segment exists purely in space rather than within a copper medium. It is "ethereal," so to speak. What this means in practice is that every transmission to one user also goes to every authorized (and unauthorized) station within reception range of the central tower. Likewise, a user's transmissions back to the central station are available to anyone who is capable of receiving that user's signal. Fortunately, the user's remote antenna is fairly directional and is not at the great height of the central tower. But someone who is along the path between the two can still pick up the user's signal. Many wireless Internet systems also operate as a bridge, rather than a TCP/IP router, and can pass the NetBIOS protocol used for file and printer sharing. Thus, they may be susceptible to the same type of eavesdropping and intrusion problems as the cable modem, unless they are protected by link encryption. In addition to the shared-media security issue, broadband security problems are more serious because of the vast communication bandwidth that is available. More than anything else, this makes the broadband user valuable as a potential target.
An enormous amount of data can be transferred in a relatively short period of time. If the broadband user operates mail systems or servers, these may be more attractive to someone wanting to use such resources surreptitiously. Another aspect of broadband service is that it is "always on," rather than being connected on demand as with dial-up service. This also makes the user a more accessible target. How can a user minimize exposure to these and other broadband security weaknesses?

INCREASING BROADBAND SECURITY

The first security issue to deal with is visibility. Users should immediately take steps to minimize exposure on a shared network. Disabling or hiding processes that advertise services or automatically respond to inquiries effectively shields the user's computer from intruding eyes.


Shielding the computer will be of benefit whether the user is using an inherently shared broadband access, such as with cable modems or wireless, or has DSL or dial-up service. Also, remember that the user might be on a shared Ethernet at work or on the road. Hotel systems that offer high-speed access through an Ethernet connection are generally shared networks and thus are subject to all of the potential problems of any shared broadband access. Shared networks clearly present a greater danger for unauthorized access because the Windows Networking protocols can be used to detect and access other computers on the shared medium. However, that does not mean that users are unconditionally safe in using other access methods such as DSL or dial-up. The hidden danger in DSL or dial-up is the fact that the popular peer-to-peer networking protocol, NetBIOS, can be transported over TCP/IP. In fact, a common attack is a probe to the IP port that supports this. There are some specific steps users can take to disable peer networking if they are a single-PC user. Even if there is more than one PC in the local network behind a broadband modem, users can take action to protect their resources.

Check Vulnerability

Before taking any local-PC security steps, users might want to check on their vulnerabilities to attacks over the Web. This is easy to do and serves as both a motivation to take action and a check on security steps. Two sites are recommended: www.grc.com and www.symantec.com/securitycheck. GRC.com is the site created by Steve Gibson for his company, Gibson Research Corp. Users should look for the "Shields Up" icon to begin the testing. GRC is free to use and does a thorough job of scanning for open ports and hidden servers. The Symantec URL listed should take the user directly to the testing page. Symantec can also test vulnerabilities in Microsoft Internet Explorer as a result of ActiveX controls.
Potentially harmful ActiveX controls can be inadvertently downloaded in the process of viewing a Web page. The controls generally have full access to the computer's file system and can thus contain viruses or even hidden servers. As is probably known, the Netscape browser does not have these vulnerabilities, although both types of browsers are somewhat vulnerable to Java and JavaScript attacks. According to information on this site, the online free version at Symantec does not have all the test features of the retail version, so users must purchase the tool to get a full test. These sites will probably convince users to take action. It is truly amazing how a little demonstration can get users serious about security.
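The kind of port scan these services perform is conceptually simple. The sketch below is my own illustration of the idea, not the scanners' actual code: it attempts a TCP connection to each candidate port and reports the ones that answer (e.g., the NetBIOS/SMB ports 135, 139, and 445 probed in the attacks described above).

```python
import socket

def probe_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check the classic Windows Networking ports on one's own machine.
# probe_ports("127.0.0.1", [135, 139, 445])
```

Any port this reports open on a machine's public address is a port an attacker on the shared segment can reach just as easily.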


Remember that this eye-opening experience will not decrease security in any way … it will just decrease a user's false sense of security!

Start by Plugging Holes in Windows

To protect a PC against potential attack that might compromise personal data or even harm the PC, users will need to change the Windows Networking default configurations. Start by disabling file and printer sharing, or by password-protecting these features if they must be used. If specific directories must be shared to other users on the local network, share just that particular directory rather than the entire drive. Protect each resource with a unique password. Longer passwords, and passwords that use a combination of upper- and lowercase letters, numbers, and punctuation, are more secure. Windows Networking is transported over the NetBIOS protocol, which is inherently unroutable. The advantage of this feature is that any NetBIOS traffic, such as that for printer or file sharing, is blocked at any WAN router. Unfortunately, Windows has the flexibility of encapsulating NetBIOS within TCP/IP packets, which are quite routable. When using IP Networking, users may be inadvertently enabling this behavior. As a matter of fact, it is a little difficult to block. However, there are some steps users can take to keep their NetBIOS traffic from being routed out over the Internet. The first step is to block NetBIOS over TCP/IP. To do this in Windows, simply go to the Property dialog for TCP/IP and disable "NetBIOS over TCP/IP." Likewise, disable "Set this protocol to be the default." Now go to Bindings and uncheck all of the Windows-oriented applications, such as Microsoft Networking or Microsoft Family Networking. The next step is to give local networking features an alternate path. Do this by adding the IPX/SPX-compatible protocol from the list in the Network dialog. After adding the IPX/SPX protocol, configure its properties to take up the slack created with TCP/IP.
Set it to be the default protocol; check the "enable NetBIOS over IPX/SPX" option; and check the Windows-oriented bindings that were unchecked for TCP/IP. In exiting the dialog by clicking OK, notice that a new protocol has been added, called "NetBIOS support for IPX/SPX compatible Protocol." This added feature allows NetBIOS to be encapsulated over IPX, isolating the protocol from its native mode and from unwanted encapsulation over TCP/IP. This action provides some additional isolation of the local network's NetBIOS communication because IPX is generally not routed over the user's access device. Be sure that IPX routing, if available, is disabled on the router. This will not usually be a problem with cable modems (which do not route) or with DSL connections because both are primarily used in IP-only networks. At the first IP router link, the IPX will be blocked. If the simple NAT firewall described in the next section is used, IPX will likewise be

AU1518Ch07Frame Page 114 Thursday, November 14, 2002 6:23 PM


[Exhibit 7-2. Addition of a NAT firewall for broadband Internet access: (A) typical broadband access, in which the cable or DSL modem passes a public address from the service provider directly to the user's PC; (B) broadband access with a simple IP NAT router inserted between the PC (private address 192.168.1.100) and the modem's public address.]
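The private address shown in Exhibit 7-2B (192.168.1.100) comes from the reserved RFC 1918 ranges, which is what makes it invisible from the Internet side. A small sketch (the helper name is my own) shows how such addresses can be recognized programmatically:

```python
import ipaddress

# The three address blocks reserved for private use by RFC 1918.
PRIVATE_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                ipaddress.ip_network("172.16.0.0/12"),
                ipaddress.ip_network("192.168.0.0/16")]

def is_rfc1918(ip: str) -> bool:
    """True if `ip` falls in a private (non-Internet-routable) range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PRIVATE_NETS)
```

For example, `is_rfc1918("192.168.1.100")` is true, while a public address such as 8.8.8.8 is not; a NAT router translates between exactly these two worlds.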

blocked. However, if ISDN is used for access, or some type of T1 router, check that IPX routing is off.

Now Add a NAT Firewall

Most people do not have the need for a full-fledged firewall. However, a simple routing device that provides network address translation (NAT) can shield internal IP addresses from the outside world while still providing complete access to Internet services. Exhibit 7-2A shows the normal connection provided by a cable or DSL modem. The user PC is assigned a public IP address from the service provider's pool. This address is totally visible to the Internet and available for direct access and, therefore, for direct attacks on all IP ports. A great deal of security can be provided by masking internal addresses inside a NAT router. This device is truly a router because it connects two IP subnets: the internal "private" network and the external "public" network. A private network is one with a known private network subnet address, such as 192.168.x.x or 10.x.x.x. These private addresses are nonroutable because Internet Protocol convention allows them to be duplicated at will by anyone who wants to use them. In the example shown in Exhibit 7-2B, the NAT router is inserted between the user's PC (or internal network of PCs) and the existing cable or DSL modem. The NAT router can act as a DHCP (Dynamic Host Configuration Protocol) server to the internal private network, and it can act as a DHCP client to the service provider's DHCP server. In this manner, dynamic IP address assignment can be


accomplished in the same manner as before, but the internal addresses are hidden from external view. A NAT router is often called a simple firewall because it does the address-translation function of a full-featured firewall. Thus, the NAT router provides a first level of defense. A common attack uses the source IP address of a user's PC and steps through the known and upper IP ports to probe for a response. Certain of these ports can be used to make an unauthorized access to the user's PC. Although the NAT router hides the PC user's IP address, it too has a valid public IP address that may now be the target of attacks. NAT routers will often respond to port 23 Telnet or port 80 HTTP requests because these ports are used for the router's configuration. The user must change the default passwords on the router, as a minimum, and, if allowable, disable any access to these ports from the Internet side. Several companies offer simple NAT firewalls for this purpose. In addition, some products are available that combine the NAT function with the cable or DSL modem. For example, Linksys provides a choice of NAT routers with a single local Ethernet port or with four switched Ethernet ports. List prices for these devices are less than $200, with much lower street prices.

Install a Personal Firewall

The final step in securing a user's personal environment is to install a personal firewall. The current software environment includes countless user programs and processes that access the Internet. Many of the programs that connect to the Internet are obvious: the e-mail and Web browsers that everyone uses. However, one may be surprised to know that a vast array of other software also makes transmissions over the Internet connection whenever it is active. And if using a cable modem or DSL modem (or router), one's connection is always active if one's PC is on.
For example, Windows 98 has an update feature that regularly connects to Microsoft to check for updates. A virus checker, personal firewall, and even personal finance programs can also regularly check for updates or, in some cases, for advertising material. The Windows update is particularly persistent and can check every five or ten minutes if it is enabled. Advertisements can annoyingly pop up a browser mini-window, even when the browser is not active. However, the most serious problems arise from unauthorized access or responses from hidden servers. Chances are that a user has one or more Web server processes running right now. Even the music download services (e.g., MP3) plant servers on PCs. Surprisingly, these are often either hidden or ignored, although they represent a significant security risk.

AU1518Ch07Frame Page 116 Thursday, November 14, 2002 6:23 PM

TELECOMMUNICATIONS AND NETWORK SECURITY

These servers can provide a backdoor into a PC that can be opened without the user’s knowledge. In addition, certain viruses operate by planting a stealth server that can be later accessed by an intruder. A personal firewall will provide a user essential control over all of the Internet accesses that occur to or from his PC. Several products are on the market to provide this function. Two of these are Zone Alarm from Zone Labs (www.zonelabs.com) and Black Ice Defender from Network Ice (www.networkice.com). Other products are available from Symantec and Network Associates. The use of a personal firewall will alert the user to all traffic to or from his broadband modem and allow the user to choose whether he wants that access to occur. After an initial setup period, Internet access will appear perfectly normal, except that unwanted traffic, probes, and accesses will be blocked. Some of the products alert the user to unwanted attempts to connect to his PC. Zone Alarm, for example, will pop up a small window to advise the user of the attempt, the port and protocol, and the IP address of the attacker. The user can also observe and approve the ability of his applications to access the Internet. After becoming familiar with the behavior of these programs, the user can direct the firewall to always block or allow access. In addition, the user can explicitly block server behavior from particular programs. A log is kept of actions so that the user can review the firewall activities later, whether or not he disables the pop-up alert window. Thus far, this chapter has concentrated on security for broadband access users. However, after seeing what the personal firewall detects and blocks, users will certainly want to put it on all their computers. Even dial-up connections are at great risk from direct port scanning and NetBIOS/IP attacks.
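The block-or-allow decisions described above amount to first-match rule evaluation. The sketch below is a hypothetical illustration of that logic, not the behavior of any named product; the rule fields, program names, and the default actions are all invented for the example.

```python
# Hypothetical first-match rule table, as a personal firewall might keep
# after the user has answered a few pop-up prompts. A field of None acts
# as a wildcard; the first matching rule wins.

RULES = [
    {"program": "browser.exe",  "direction": "out", "port": 80,   "action": "allow"},
    {"program": "browser.exe",  "direction": "out", "port": 443,  "action": "allow"},
    {"program": "mp3share.exe", "direction": "in",  "port": None, "action": "block"},
    {"program": None,           "direction": "in",  "port": None, "action": "block"},
]

def decide(program, direction, port):
    for rule in RULES:
        if rule["program"] not in (None, program):
            continue
        if rule["direction"] != direction:
            continue
        if rule["port"] not in (None, port):
            continue
        return rule["action"]
    return "ask_user"   # no rule yet: raise an alert and let the user choose

print(decide("browser.exe", "out", 443))    # allow
print(decide("mp3share.exe", "in", 8080))   # block: hidden server denied
print(decide("browser.exe", "out", 21))     # ask_user
```

The third rule shows how server behavior from a particular program can be blocked outright, while the final catch-all denies unsolicited inbound connections by default.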
After installation of a personal firewall, it is not unusual to notice probes beginning within the first 30 seconds after connecting. And if one monitors these alerts, one will continue to see such probes blocked over the course of a session. Do not be alarmed. These probes were happening before the firewall was installed, just without the user’s knowledge. The personal firewall is now blocking all these attempts before they can do any harm. Broadband users with a consistent public IP address will actually see a dramatic decrease over time in these probes. The intruders do not waste time going where they are unwelcome.

SUMMARY

Broadband access adds significant security risks to a network or a personal computer. The cable modem or DSL connection is normally always active and the bandwidth is very high compared to slower dial-up or ISDN methods. Consequently, these connections make easy targets for intrusion and disruption. Wireless Internet users have similar vulnerabilities, in addition to possible eavesdropping through the airwaves. Cable modem


users suffer additional exposure to nonroutable workgroup protocols, such as Windows-native NetBIOS. Steps should be taken in three areas to help secure PC resources from unwanted intrusions.

1. Eliminate or protect Windows workgroup functions such as file and printer sharing. Change the default passwords and enable IPX encapsulation if these functions are absolutely necessary.
2. Add a simple NAT firewall/router between the access device and PCs. This will screen internal addresses from outside view and eliminate most direct port scans.
3. Install and configure a personal firewall on each connected PC. This will provide control over which applications and programs have access to Internet resources.

ABOUT THE AUTHOR

James Trulove has more than 25 years of experience in data networking with companies such as Lucent, Ascend, AT&T, Motorola, and Intel. He has a background in designing, configuring, and implementing multimedia communications systems for local and wide area networks, using a variety of technologies. He writes on networking topics and is the author of LAN Wiring, An Illustrated Guide to Network Cabling and A Guide to Fractional T1, and the editor of Broadband Networking, as well as the author of numerous articles on networking.


Chapter 8

New Perspectives on VPNs

Keith Pasley, CISSP

Wide acceptance of security standards in IP and deployment of quality-of-service (QoS) mechanisms like Differentiated Services (DiffServ) and Resource Reservation Protocol (RSVP) within multi-protocol label switching (MPLS) is increasing the feasibility of virtual private networks (VPNs). VPNs are now considered mainstream; most service providers include some type of VPN service in their offerings, and IT professionals have grown familiar with the technology. Also, with the growth of broadband, more companies are using VPNs for remote access and telecommuting. Specifically, the small office/home-office market has the largest growth projections according to industry analysts. However, where once lay the promise of IPSec-based VPNs, it is now accepted that IPSec does not solve all remote access VPN problems. As user experience with VPNs has grown, so have user expectations. Important user experience issues such as latency, delay, legacy application support, and service availability are now effectively dealt with through the use of standard protocols such as MPLS and improved network design. VPN management tools that allow improved control and views of VPN components and users are now being deployed, resulting in increased scalability and lower ongoing operational costs of VPNs. At one time it was accepted that deploying a VPN meant installing “fat”-client software on user desktops, manual configuration of encrypted tunnels, arcane configuration entry into server-side text-based configuration files, intrusive network firewall reconfigurations, minimal access control capability, and a state of mutual mystification due to vendor hype and user confusion over exactly what the VPN could provide in the way of scalability and manageability. New approaches to delivering on the objective of secure yet remote access are evolving, as shown by the adoption of alternatives to that pure layer 3 tunneling VPN protocol, IPSec.
User feedback to vendor technology, the high cost of deploying and managing large-scale VPNs, and opportunity


cost analysis are helping to evolve these new approaches to encrypting, authenticating, and authorizing remote access into enterprise applications.

WEB-BASED IP VPN

A granular focus on Web-enabling business applications by user organizations has led to a rethinking of the problem and solution by VPN vendors. The ubiquitous Web browser is now frequently the “client” of choice for many network security products. The Web-browser-as-client approach solves a lot of the old problems but also introduces new ones. For example, what happens to any residual data left over from a Web VPN session? How is strong authentication performed? How can the remote computer be protected from subversion as an entry point to the internal network while the VPN tunnel is active? Until these questions are answered, Web browser-based VPNs will be limited from completely obsolescing client/server VPNs. Most Web-based VPN solutions claim to deliver applications, files, and data to authorized users through any standard Web browser. How that is accomplished differs by vendor. A trend toward turnkey appliances is influencing the development of single-purpose, highly optimized and scalable solutions based on both proprietary and open-source software preinstalled on hardware. A three-tiered architecture is used by most of these vendors. This architecture consists of a Web browser, Web server/middleware, and back-end application. The Web browser serves as the user interface to the target application. The Web server/middleware is the core component that translates the LAN application protocol and application requests into a Web browser-presentable format. Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are the common tunneling protocols used. Authentication options include user name and password across TLS/SSL, two-factor tokens such as RSA SecurID, and (rarely) Web browser-based digital certificates.
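One concrete job of the Web server/middleware tier is rewriting links to internal applications so that all traffic flows back through the portal. The sketch below is a hypothetical illustration of that idea; the portal host name, the `/app` prefix, and the internal host are invented, and real products handle many more cases (HTTPS back ends, cookies, embedded scripts).

```python
# Toy URL-rewriting pass such as a Web VPN middleware tier might apply to
# pages it proxies: internal links are re-rooted under the portal's own
# namespace so the browser never addresses internal hosts directly.

PORTAL = "https://portal.example.com/app"   # invented portal address

def rewrite(internal_url):
    # http://intranet-host/path -> https://portal.example.com/app/intranet-host/path
    prefix = "http://"
    if not internal_url.startswith(prefix):
        raise ValueError("sketch handles plain http:// links only")
    host_and_path = internal_url[len(prefix):]
    return f"{PORTAL}/{host_and_path}"

print(rewrite("http://mail01/inbox?msg=7"))
# https://portal.example.com/app/mail01/inbox?msg=7
```

Because every rewritten link targets the portal, the TLS/SSL session to the portal becomes the single encrypted path into the internal network.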
Due to the high business value assigned to e-mail access, resilient hardware design and performance tuning of software to specific hardware is part of the appeal of the appliance approach. Redundant I/O, RAID 1 disk subsystems, redundant power supplies, hot-swappable cooling fans and disk drives, failover/clustering modes, dual processors, and flash memory-based operating systems are features that help ensure access availability. Access control is implemented using common industry-standard authentication protocols such as Remote Authentication Dial-In User Service (RADIUS, RFC 2138) and Lightweight Directory Access Protocol (LDAP, RFCs 2251–2256).


APPLICATIONS

E-mail access is the number-one back-end application for this class of VPN. E-mail has become the lifeblood of enterprise operations. Imagine how a business could survive for very long if its e-mail infrastructure were not available. However, most Web-based e-mail systems allow cleartext transmissions of authentication and mail messages by default. A popular Web mail solution is to install a server-side digital certificate and enable TLS/SSL between the user browsers and the Web mail server. The Web mail server would proxy mail messages to the internal mail server. Variations to this include using a mail security appliance (Mail-VPN) that runs a hardened operating system and Web mail reverse proxy. Another alternative is to install the Web mail server on a firewall DMZ. The firewall would handle Web mail authentication and message proxying to and from the Web server on the DMZ. A firewall rule would be configured to only allow the DMZ Web server to connect to the internal mail server using an encrypted tunnel from the DMZ. E-mail gateways such as the McAfee series of e-mail security appliances focus on anti-virus and content inspection with no emphasis on securing the appliance itself from attack. Depending on how the network firewall is configured, this type of solution may be acceptable in certain environments. On the other end of the spectrum, e-mail infrastructure vendors such as Mirapoint focus on e-mail components such as message store and LDAP directory server; but they offer very little integrated security of the appliance platform or the internal e-mail server. In the middle is the in-house solution, cobbled together using open-source components and cheap hardware with emphasis on low costs over resiliency, security, and manageability. Another class of Web mail security is offered by remote access VPN generalists such as Netilla, Neoteris, and Whale Communications.
These vendors rationalize that the issue with IPSec VPNs is not that you cannot build an IPSec VPN tunnel between two IPSec gateways; rather, the issue is in trying to convince the peer IT security group to allow an encrypted tunnel through their firewall. Therefore, these vendors have designed their product architectures to use common Web protocols such as TLS/SSL and PPTP to tunnel to perimeter firewalls, DMZ, or directly to applications on internal networks.

VPN AS A SERVICE: MPLS-BASED VPNS

Multi-Protocol Label Switching (MPLS) defines a data-link layer service (see Exhibit 8-1) based on an Internet Engineering Task Force specification (RFC 3031). The MPLS specification does not define encryption or authentication. However, IPSec is a commonly used security protocol to encrypt IP data carried across an MPLS-based network. Similarly, various existing mechanisms can be used for authenticating users of MPLS-based networks.


Exhibit 8-1. MPLS topologies.
• Intranet/closed group: simplest topology; each site has routing knowledge of all other VPN sites; BGP updates are propagated between provider edge routers.
• Extranet/overlapping: access control to prevent unwanted access; strong authentication; centralized firewall and Internet access; use of network address translation.
• Inter-provider: BGP4 update exchange; sub-interface for VPNs; sub-interface for routing updates.
• Dial-up: establish an L2TP tunnel to a virtual network gateway; authenticate using RADIUS; virtual routing and forwarding info downloaded as part of authentication/authorization.
• Hub-and-spoke Internet access: use a sub-interface for Internet; use a different sub-interface for VPN.

The MPLS specification defines a network architecture and routing protocol that efficiently forwards and allows prioritization of packets containing higher layer protocol data. Its essence is in the use of so-called labels. An MPLS label is a short identifier used to identify a group of packets that is forwarded in the same manner, such as along the same path, or given the same treatment. The MPLS label is inserted into existing protocol headers or can be shimmed between protocol headers, depending on the type of device used to forward packets and the overall network implementation. For example, labels can be shimmed between the data-link and network layer headers or they can be encoded in layer 2 headers. The label is then used to route the so-called labeled packets between MPLS nodes. A network node that participates in MPLS network architectures is called a label switch router (LSR). The particular treatment of a labeled packet by an LSR is defined through the use of protocols that assign and distribute labels. Existing protocols have been extended to allow them to distribute MPLS LSP information, such as label distribution using BGP (MPLS-BGP). Also, new protocols have been defined explicitly to distribute LSP information between MPLS peer nodes. For example, one such newly defined protocol is the Label Distribution Protocol (LDP, RFC 3036). The route that a labeled packet traverses is termed a label switched path (LSP). In general, the MPLS architecture supports LSPs with different label stack encodings used on different hops. Label stacking defines the hierarchy of labels that determines packet treatment as a packet traverses an MPLS internetwork.
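The push/pop mechanics of a label stack can be illustrated with a toy model. The label values below are invented; real labels are 20-bit fields inside a 32-bit shim header, and forwarding decisions are made per hop from label tables.

```python
# Toy model of MPLS label stacking: an inner label identifies the
# point-to-point virtual circuit, an outer label identifies the tunnel
# LSP across the provider core. LSRs act only on the top of the stack.

def push(packet, label):
    packet["labels"].append(label)     # new top of stack
    return packet

def pop(packet):
    return packet["labels"].pop()      # egress strips the top label

packet = {"payload": "IP datagram", "labels": []}
push(packet, 101)   # inner label: the virtual circuit (invented value)
push(packet, 900)   # outer label: the tunnel across the core (invented value)

top = pop(packet)   # at the end of the tunnel the outer label is removed
print(top, packet["labels"])   # 900 [101]
```

After the outer label is popped, the inner label alone steers the packet to the correct virtual circuit, which is exactly the two-label scheme the Martini approach uses for layer 2 services.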


Exhibit 8-2. Sample MPLS equipment criteria.
• Hot standby load sharing of MPLS tunnels
• Authentication via RADIUS, TACACS+, AAA
• Secure Shell access (SSH)
• Secure Copy (SCP)
• Multi-level access modes (EXEC, standard, etc.)
• ACL support to protect against DoS attacks
• Traffic engineering support via RSVP-TE, OSPF-TE, ISIS-TE
• Scalability via offering a range of links: 10/100 Mbps Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, to OC-3c ATM, OC-3c SONET, OC-12c SONET, and OC-48c SONET
• Redundant, hot-swappable interface modules
• Rapid fault detection and failover
• Network layer route redundancy protocols for resiliency: Virtual Router Redundancy Protocol (VRRP, RFC 2338) for layer 3 MPLS-VPN; Virtual Switch Redundancy Protocol (VSRP) and RSTP for layer 2 MPLS-VPN
• Multiple queuing methods (e.g., weighted fair queuing, strict priority, etc.)
• Rate limiting
• Single port can support tens of thousands of tunnels

Label stacking occurs when more than one label is used, within a packet, to forward traffic across an MPLS architecture that employs various MPLS node types. For example, a group of network providers can agree to allow MPLS labeled packets to travel between their individual networks and still provide consistent treatment of the packets (i.e., maintain prioritization and LSP). This level of interoperability allows network service providers the ability to deliver true end-to-end service-level guarantees across different network providers and network domains. By using labels, a service provider and organizations can create closed paths that are isolated from other traffic within the service provider’s network, providing the same level of security as other private virtual circuit (PVC)-style services such as Frame Relay or ATM. Because MPLS-VPNs require modifications to a service provider’s or organization’s network, they are considered network-based VPNs (see Exhibit 8-2). Although there are topology options for deploying MPLS-VPNs down to end users, generally speaking, MPLS-VPNs do not require inclusion of client devices and tunnels usually terminate at the service provider edge router. From a design perspective, most organizations and service providers want to set up bandwidth commitments through RSVP and use that bandwidth to run VPN tunnels, with MPLS operating within the tunnel. This design allows MPLS-based VPNs to provide guaranteed bandwidth and application quality-of-service features within that guaranteed bandwidth tunnel. In real terms, it is now possible to run not only VPNs but also enterprise resource planning applications, legacy production systems, and company


e-mail, video, and voice telephone traffic over a single MPLS-based network infrastructure. Through the use of prioritization schemes within MPLS, such as Resource Reservation Protocol (RSVP), bandwidth can be reserved for specific data flows and applications. For example, highest prioritization can be given to performance-sensitive traffic that has to be delivered with minimal latency and packet loss and requires confirmation of receipt. Examples include voice and live video streaming, videoconferencing, and financial transactions. A second priority level could then be defined to allow traffic that is mission critical yet only requires an enhanced level of performance. Examples include FTP (e.g., CAD files, video clips) and ERP applications. A third, lower priority can be assigned to traffic that does not require specific prioritization, such as e-mail and general Web browsing. A heightened focus on core competencies by companies, now more concerned with improving customer service and reducing cost, has led to an increase in outsourcing of VPN deployment and management. Service providers have responded by offering VPNs as a service, using MPLS as a competitive differentiator. Service providers and large enterprises are typically deploying two VPN alternatives to traditional WAN offerings such as Frame Relay, ATM, or leased line: IPSec-encrypted tunnel VPNs, and MPLS-VPNs. Additional flexibility is an added benefit because MPLS-based VPNs come in two flavors: layer 2 and layer 3. This new breed of VPN based on Multi-Protocol Label Switching (RFC 3031) is emerging as the most marketed alternative to traditional pure IP-based VPNs. Both support multicast routing via Internet Group Management Protocol (IGMP, RFC 2236), which forwards only a single copy of a transmission to only the requesting port.
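The three service tiers just described map naturally onto queue priorities. The sketch below is illustrative only; the class names and numbers are invented for the example and are not DiffServ code points or any standard mapping.

```python
# Illustrative three-tier scheduling of queued packets by application class.
# Lower number = transmitted first; tier assignments are invented examples.

PRIORITY = {
    "voice": 0, "video": 0, "transactions": 0,  # tier 1: latency-sensitive
    "ftp": 1, "erp": 1,                         # tier 2: mission critical
    "email": 2, "web": 2,                       # tier 3: best effort
}

def schedule(queue):
    """Return queued packets ordered by tier (stable within a tier)."""
    return sorted(queue, key=lambda pkt: PRIORITY[pkt["app"]])

queue = [{"app": "email"}, {"app": "voice"}, {"app": "ftp"}]
print([pkt["app"] for pkt in schedule(queue)])   # ['voice', 'ftp', 'email']
```

In a real MPLS network the classification happens at the provider edge and the treatment is carried in the label, but the ordering effect on congested links is the same as this sort.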
The appeal of MPLS-based VPNs includes their inherent any-to-any reachability across a common data link. Availability of network access is also a concern of secure VPN design. This objective is achieved through the use of route redundancy along with routing protocols that enhance network availability, such as BGP. MPLS-VPNs give users greater control, allowing them to customize the service to accommodate their specific traffic patterns and business requirements. As a result, they can lower their costs by consolidating all of their data communications onto a single WAN platform and prioritizing traffic for specific users and applications. The resulting simplicity of architecture, efficiencies gained by consolidation of network components, and ability to prioritize traffic make MPLS-VPNs a very attractive and scalable option.

LAYER 2 MPLS-VPN

Layer 2 MPLS-VPNs, based on the Internet Engineering Task Force’s (IETF) Martini draft or Kompella draft, simply emulate layer 2 services such as Frame Relay, ATM, or Ethernet. With the Martini approach, a customer’s layer 2 traffic is encapsulated when it reaches the edge of the service provider network, mapped onto a label-switched path, and carried


across a network. The Martini draft describes point-to-point VPN services across virtual leased lines (VLLs), transparently connecting multiple subscriber sites together, independent of the protocols used. This technique takes advantage of MPLS label stacking, whereby more than one label is used to forward traffic across an MPLS architecture. Specifically, two labels are used to support layer 2 MPLS-VPNs. One label represents a point-to-point virtual circuit, while the second label represents the tunnel across the network. The current Martini drafts define encapsulations for Ethernet, ATM, Frame Relay, Point-to-Point Protocol, and High-level Data Link Control protocols. The Kompella draft describes another method for simplifying MPLS-VPN setup and management by combining the auto-discovery capability of BGP (to locate VPN sites) with the signaling protocols that use the MPLS labels. The Kompella draft describes how to provide multi-point-to-multi-point VPN services across VLLs, transparently connecting multiple subscriber sites independent of the protocols used. This approach simplifies provisioning of new VPNs. Because the packets contain their own forwarding information (e.g., attributes contained in the packet’s label), the amount of forwarding state information maintained by core routers is independent of the number of layer 2 MPLS-VPNs provisioned over the network. Scalability is thereby enhanced because adding a site to an existing VPN in most cases requires reconfiguring only the service provider edge router connected to the new site. Layer 2 MPLS-VPNs are transparent, from a user perspective, much in the same way the underlying ATM infrastructure is invisible to Frame Relay users. The customer is still buying Frame Relay or ATM, regardless of how the provider configures the service. Because layer 2 MPLS-VPNs are virtual circuit-based, they are as secure as other virtual circuit- or connection-oriented technologies such as ATM.
Because layer 2 traffic is carried transparently across an MPLS backbone, information in the original traffic, such as class-of-service markings and VLAN IDs, remains unchanged. Companies that need to transport non-IP traffic (such as legacy IPX or other protocols) may find layer 2 MPLS-VPNs the best solution. Layer 2 MPLS-VPNs also may appeal to corporations that have private addressing schemes or prefer not to share their addressing information with service providers. In a layer 2 MPLS-VPN, the service provider is responsible only for layer 2 connectivity; the customer is responsible for layer 3 connectivity, which includes routing. Privacy of layer 3 routing is implicitly ensured. Once the service provider edge router (PE) provides layer 2 connectivity to its connected customer edge (CE) router in an MPLS-VPN environment, the service provider’s job is done. In the case of troubleshooting, the service provider need only prove that connectivity exists between the PE and CE. From a customer perspective, traditional, pure layer 2 VPNs function in the same way. Therefore, there are few migration issues to deal with on the customer side. Configuring a layer 2 MPLS-VPN is similar in process to configuring a


traditional layer 2 VPN. The “last mile” connectivity (Frame Relay, HDLC, or PPP) must be provisioned. In a layer 2 MPLS-VPN environment, the customer can run any layer 3 protocol they would like, because the service provider is delivering only layer 2 connectivity. Most metropolitan area networks using MPLS-VPNs provision these services in layer 2 of the network and offer them over a high-bandwidth pipe. An MPLS-VPN using the layer 3 BGP approach is quite a complex implementation and management task for the average service provider; the layer 2 approach is much simpler and easier to provision.

LAYER 3

Layer 3 MPLS-VPNs are also known as IP-enabled or Private-IP VPNs. The difference between layer 2 and layer 3 MPLS-VPNs is that, in layer 3 MPLS-VPNs, the labels are assigned to layer 3 IP traffic flows, whereas layer 2 MPLS-VPNs encode or shim labels between layer 2 and 3 protocol headers. A traffic flow is a portion of traffic, delimited by a start and stop time, that is generated by a particular source or destination networking device. The traffic flow concept is roughly equivalent to the attributes that make up a call or connection. Data associated with traffic flows are aggregate quantities reflecting events that take place in the duration between the start and stop times of the flow. These labels represent unique identifiers and allow for the creation of label switched paths (LSP) within a layer 3 MPLS-VPN. Layer 3 VPNs offer a good solution when the customer traffic is wholly IP, customer routing is reasonably simple, and the customer sites are connected to the SP with a variety of layer 2 technologies. In a layer 3 MPLS-VPN environment, internetworking depends on both the service provider and customer using the same routing and layer 3 protocols.
Because pure IPSec VPNs require each end of the tunnel to have a unique address, special care must be taken when implementing IPSec VPNs in environments using private IP addressing based on network address translation. Although several vendors provide solutions to this problem, this adds more management complexity in pure IPSec VPNs. One limitation of layer 2 MPLS-VPNs is the requirement that all connected VPN sites, using the same provider, use the same data-link connectivity. On the other hand, the various sites of a layer 3 MPLS-VPN can connect to the service provider with any supported data-link connectivity. For example, some sites may connect with Frame Relay circuits and others with Ethernet. Because the service provider in a layer 3 MPLS-VPN can also handle IP routing for the customer, the customer edge router need only participate with the provider edge router. This is in contrast to layer 2 MPLS-VPNs, wherein the customer edge router must deal with an unknown


number of router peers. The traditional layer 2 problem of n*(n – 1)/2 links inherent to mesh topologies carries through to layer 2 MPLS-VPNs as well. Prioritization via class of service is available in layer 3 MPLS-VPNs because the provider edge router has visibility into the actual IP data layer. As such, customers can assign priorities to traffic flows, and service providers can then provide a guaranteed service level for those IP traffic flows. Despite the complexities, service providers can take advantage of layer 3 IP MPLS-VPNs to offer secure differentiated services. For example, due to the use of prioritization protocols such as DiffServ and RSVP, service providers are no longer hindered by business models based on flat-rate pricing or time and distance. MPLS allows them to meet the challenges of improving customer service interaction, offer new differentiated premium services, and establish new revenue streams.

SUMMARY

VPN technology has come a long way since its early beginnings. IPSec is no longer the only standardized option for creating and managing enterprise and service provider VPNs. The Web-based application interface is being leveraged to provide simple, easily deployable, and easily manageable remote access and extranet VPNs. The strategy for use is as a complementary — not replacement — remote access VPN for strategic applications that benefit from Web browser user interfaces. So-called clientless or Web browser-based VPNs are targeted to users who frequently log onto their corporate servers several times a day for e-mails, calendar updates, shared folders, and other collaborative information sharing. Most of these new Web browser-based VPNs use hardware platforms with a three-tiered architecture consisting of a Web browser user interface, reverse proxy function, and reference monitor-like middleware that transforms back-end application protocols into browser-readable format for presentation to end users.
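Returning briefly to the mesh-scaling point above: the n*(n – 1)/2 formula makes the burden on a layer 2 customer edge router concrete, since every new site must peer with every existing site.

```python
# Full-mesh link count: each of n sites peers with the other n - 1,
# and each link is shared by two sites.

def mesh_links(n):
    return n * (n - 1) // 2

for sites in (5, 20, 100):
    print(sites, "sites ->", mesh_links(sites), "peerings")
# 5 sites -> 10, 20 sites -> 190, 100 sites -> 4950
```

The jump from 10 peerings at 5 sites to 4950 at 100 sites is why a provider-managed layer 3 service, where the CE router peers only with the PE router, scales so much better.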
Benefits of this new approach include ease of training remote users and elimination of compatibility issues when installing software on remote systems. Drawbacks include lack of support for legacy applications and limited throughput and scalability for large-scale and carrier-class VPNs. The promise of any-to-any carrier-class and large-enterprise VPNs is being realized as MPLS-VPN standards develop and technology matures. Inter-service-provider capability allows for the enforcement of true end-to-end quality-of-service (QoS) guarantees across different provider networks. Multi-Protocol Label Switching can be accomplished at two levels: layer 2 for maximum flexibility and low-impact migrations from legacy layer 2 connectivity, and layer 3 for granular service offerings and management of IP VPNs. MPLS allows a service provider to deliver many services using only one network infrastructure. Benefits for service providers include reduced operational costs, greater scalability, faster provisioning of services,


and competitive advantage in a commodity-perceived market. Large enterprises benefit from more efficient use of available bandwidth, increased security, and extensible use of existing well-known networking protocols. Users benefit from the increased interoperability among multiple service providers and consistent end-to-end service guarantees as MPLS products improve. In MPLS-based VPNs, confidentiality, or data privacy, is enhanced by the use of labels that provide virtual tunnel separation. Note that encryption is not accounted for in the MPLS specifications. Availability is provided through various routing techniques allowed by the specifications. MPLS only provides for layer 2 data-link integrity. Higher-layer controls should be applied accordingly.

Further Reading

http://www.mplsforum.org/
www.mplsworld.com
http://www.juniper.net/techcenter/techpapers/200012.html
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t5/vpn.htm
http://www.nortelnetworks.com/corporate/technology/mpls/doclib.html
http://advanced.comms.agilent.com/insight/2001-08/
http://www.ericsson.com/datacom/emedia/qoswhite_paper_317.pdf
http://www.riverstonenet.com/technology/whitepapers.shtml
http://www.equipecom.com/whitepapers.html
http://www.convergedigest.com/Bandwidth/mpls.htm

ABOUT THE AUTHOR

Keith Pasley, CISSP, CNE, is a senior security technologist with CipherTrust in Atlanta, Georgia.


Chapter 9

An Examination of Firewall Architectures

Paul A. Henry, CISSP

Today, the number-one and number-two (in sales) firewalls use a technique known as stateful packet filtering, or SPF. SPF has the dual advantages of being fast and flexible, and this is why it has become so popular. Notice that I didn’t even mention security, as this is not the number-one reason people choose these firewalls. Instead, SPF is popular because it is easy to install and doesn’t get in the way of business as usual. It is as if you hired a guard for the entry to your building who stood there waving people through as fast as possible.
— Rik Farrow, world-renowned independent security consultant, in the Foreword to Tangled Web: Tales of Digital Crime from the Shadows of Cyberspace, July 2000

Firewall customers once had a vote, and voted in favor of transparency, performance, and convenience instead of security; nobody should be surprised by the results.
— From an e-mail conversation with Marcus J. Ranum, “Grandfather of Firewalls,” Firewall Wizards mailing list, October 2000

FIREWALL FUNDAMENTALS: A REVIEW

The current state of insecurity in which we find ourselves calls for a careful review of the basics of firewall architectures. The level of protection that any firewall is able to provide in securing a private network when connected to the public Internet is directly related to the architectures chosen for the firewall by the respective vendor. Generally speaking, most commercially available firewalls utilize one or more of the following firewall architectures:

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



• Static packet filter
• Dynamic (stateful) packet filter
• Circuit-level gateway
• Application-level gateway (proxy)
• Stateful inspection
• Cutoff proxy
• Air gap

NETWORK SECURITY: A MATTER OF BALANCE

Network security is simply the proper balance of trust and performance. All firewalls rely on the inspection of information generated by protocols that function at various layers of the OSI (Open Systems Interconnection) model. Knowing the OSI layer at which a firewall operates is one of the keys to understanding the different types of firewall architectures.

• Generally speaking, the higher up the OSI layer the architecture goes to examine the information within the packet, the more processor cycles the architecture consumes.
• The higher up in the OSI layer at which an architecture examines packets, the greater the level of protection the architecture provides, because more information is available upon which to base decisions.

Historically, there has always been a recognized trade-off in firewalls between the level of trust afforded and speed (throughput). Faster processors and the performance advantages of symmetric multi-processing (SMP) have narrowed the performance gap between the traditional fast packet filters and high-overhead proxy firewalls. One of the most important factors in any successful firewall deployment is who makes the trust/performance decisions: (1) the firewall vendor, by limiting the administrator’s choices of architectures, or (2) the administrator, in a robust firewall product that provides for multiple firewall architectures.

In examining the firewall architectures in Exhibit 9-1 and looking within the IP packet, the most important fields are (see Exhibits 9-2 and 9-3):

• IP Header
• TCP Header
• Application-Level Header
• Data/payload

STATIC PACKET FILTER

The packet-filtering firewall is one of the oldest firewall architectures. A static packet filter operates at the network layer, or OSI layer 3 (see Exhibit 9-4).


Exhibit 9-1. Firewall architectures. [Figure: maps each architecture onto the OSI and TCP/IP models: the application proxy at the application and presentation layers, the circuit gateway at the session layer, and the packet filter (SPF) at the transport and network layers; application protocols such as FTP, Telnet, and SMTP run over TCP/UDP and IP, above link technologies such as Ethernet, FDDI, and X.25.]

Exhibit 9-2. IP packet structure. [Figure: an IP packet laid out as IP Header (source and destination IP address), TCP Header (source and destination port), Application-Level Header (application state and data flow), and Data (payload).]

Exhibit 9-3. IP header segment versus TCP header segment. [Figure: the IP header words are Version/IHL/Type of Service/Total Length; Identification/Flags/Fragmentation Offset; Time to Live/Protocol/Header Checksum; Source Address; Destination Address; and Options/Padding, with data beginning thereafter. The TCP header words are Source Port/Destination Port; Sequence Number; Acknowledgment Number; Offset/Reserved/Flags/Window; Checksum/Urgent Pointer; and Options/Padding.]
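The header layouts summarized in Exhibits 9-2 and 9-3 can be made concrete by unpacking raw bytes with Python's `struct` module. This is an illustrative sketch, not part of the original text; the field layouts follow the standard IPv4 and TCP header formats, and the sample packet bytes are invented for the demonstration.

```python
import struct

def parse_ipv4_header(raw: bytes):
    """Unpack the fixed 20-byte IPv4 header (the Exhibit 9-3 word layout)."""
    version_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "ihl": version_ihl & 0x0F,       # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,               # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

def parse_tcp_header(raw: bytes):
    """Unpack the fixed 20-byte TCP header (the Exhibit 9-3 word layout)."""
    src_port, dst_port, seq, ack, offset_flags, window, checksum, urgent = \
        struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "flags": offset_flags & 0x01FF,  # SYN = 0x02, ACK = 0x10, etc.
        "window": window,
    }

# A hand-built sample: IP header for 192.168.1.10 -> 10.0.0.1, TCP 1025 -> 80 (SYN)
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                 bytes([192, 168, 1, 10]), bytes([10, 0, 0, 1]))
tcp = struct.pack("!HHIIHHHH", 1025, 80, 1000, 0, 0x5002, 8192, 0, 0)
print(parse_ipv4_header(ip))
print(parse_tcp_header(tcp))
```

These few fields — addresses, ports, and flags — are exactly the information the architectures below have available to them at the network and transport layers.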


Exhibit 9-4. Static packet filter operating at the network layer. [Figure: the OSI stack between the external and internal network interfaces, with the filter operating at the network layer.]

Exhibit 9-5. Static packet filter IP packet structure. [Figure: the IP packet layout of Exhibit 9-2, with the packet filter examining only the IP Header and TCP Header; the application-level header and payload pass uninspected.]

The decision to accept or deny a packet is based upon an examination of specific fields within the packet’s IP and protocol headers (see Exhibit 9-5):

• Source address
• Destination address
• Application or protocol


• Source port number
• Destination port number

Before forwarding a packet, the firewall compares the IP Header and TCP Header against a user-defined table — the rule base — containing the rules that dictate whether the firewall should deny or permit packets to pass. The rules are scanned in sequential order until the packet filter finds a specific rule that matches the criteria specified in the packet-filtering rule. If the packet filter does not find a rule that matches the packet, then it imposes a default rule. The default rule explicitly defined in the firewall’s table typically instructs the firewall to drop a packet that meets none of the other rules.

There are two schools of thought on the default rule used with the packet filter: (1) ease of use and (2) security first. Ease-of-use proponents prefer a default “allow all” rule that permits all traffic unless it is explicitly denied by a prior rule. Security-first proponents prefer a default “deny all” rule that denies all traffic unless explicitly allowed by a prior rule.

Within the static packet-filter rules database, the administrator can define rules that determine which packets are accepted and which packets are denied. The IP Header information allows the administrator to write rules that can deny or permit packets to and from a specific IP address or range of IP addresses. The TCP Header information allows the administrator to write service-specific rules, that is, allow or deny packets to or from ports related to specific services. The administrator can write rules that allow certain services, such as HTTP, from any IP address to view the Web pages on the protected Web server. The administrator can also write rules that block a certain IP address, or entire ranges of addresses, from using the HTTP service and viewing the Web pages on the protected server.

In the same respect, the administrator can write rules that allow certain services, such as SMTP, from a trusted IP address or range of IP addresses to access files on the protected mail server. The administrator could also write rules that block access for certain IP addresses, or entire ranges of addresses, to the protected FTP server.

The configuration of packet-filter rules can be difficult because the rules are examined in sequential order; great care must be taken in the order in which packet-filtering rules are entered into the rule base. Even if the administrator manages to create effective rules in the proper order of precedence, a packet filter has one inherent limitation: a packet filter only examines data in the IP Header and TCP Header; it cannot know the difference between a real and a forged address. If an address is present and meets the packet-filter rules along with the other rule criteria, the packet will be allowed to pass.
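The first-match, sequential-scan behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation; the rule entries, addresses, and field names are hypothetical, and the final entry implements the "security first" default deny.

```python
# Hypothetical rule base: scanned top to bottom, first match decides.
RULES = [
    # (src, dst, protocol, dst_port, action) — "*" matches anything
    ("*",            "203.0.113.10", "tcp", 80,  "permit"),  # HTTP to web server
    ("198.51.100.0", "203.0.113.25", "tcp", 25,  "permit"),  # SMTP from trusted host
    ("*",            "*",            "*",   "*", "deny"),    # default deny all
]

def field_matches(rule_value, packet_value):
    return rule_value == "*" or rule_value == packet_value

def filter_packet(packet):
    """Scan rules in sequential order; the first matching rule decides."""
    for src, dst, proto, port, action in RULES:
        if (field_matches(src, packet["src"]) and
                field_matches(dst, packet["dst"]) and
                field_matches(proto, packet["proto"]) and
                field_matches(port, packet["dst_port"])):
            return action
    return "deny"  # unreachable with a catch-all last rule, but a safe fallback

print(filter_packet({"src": "192.0.2.7", "dst": "203.0.113.10",
                     "proto": "tcp", "dst_port": 80}))   # permit
print(filter_packet({"src": "192.0.2.7", "dst": "203.0.113.25",
                     "proto": "tcp", "dst_port": 25}))   # deny (not the trusted host)
```

Note how reordering the entries would change the outcome, which is exactly why order of precedence is called out as a difficulty. Note also that the filter matches whatever source address the packet claims; it has no way to verify it.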


Suppose the administrator took the precaution of creating a rule that instructed the packet filter to drop any incoming packets with unknown source addresses. This packet-filtering rule would make it more difficult, but not impossible, for a hacker to access at least some servers that trust specific IP addresses. The hacker could simply substitute the actual source address on a malicious packet with the source address of a known trusted client. This common form of attack is called IP address spoofing, and it is very effective against a packet filter. The CERT Coordination Center has received numerous reports of IP spoofing attacks, many of which resulted in successful network intrusions. Although the performance of a packet filter can be attractive, this architecture alone is generally not secure enough to keep out hackers determined to gain access to the protected network.

Equally important is what the static packet filter does not examine. Remember that in the static packet filter, only specific protocol headers are examined: (1) source and destination IP address and (2) source and destination port numbers (services). Hence, a hacker can hide malicious commands or data in unexamined headers. Further, because the static packet filter does not inspect the packet payload, the hacker has the opportunity to hide malicious commands or data within the packet’s payload. This attack methodology is often referred to as a covert channel attack and is becoming more popular.

Finally, the static packet filter is not state aware. Simply put, the administrator must configure rules for both sides of the conversation to a protected server. To allow access to a protected Web server, the administrator must create a rule that allows both the inbound request from the remote client as well as the outbound response from the protected Web server.
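One common partial mitigation for the spoofing problem described above is an ingress check: packets arriving on the external interface must not claim an internal source address. The sketch below is illustrative only, with hypothetical network ranges and interface names; note that, as the text stresses, such a check cannot detect a forged address belonging to an external trusted host.

```python
import ipaddress

# Hypothetical protected (internal) address ranges
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def drop_spoofed(src_ip: str, arriving_interface: str) -> bool:
    """Return True if the packet should be dropped as obviously spoofed:
    it arrived from outside yet claims an internal source address."""
    if arriving_interface != "external":
        return False
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in INTERNAL_NETS)

print(drop_spoofed("192.168.1.5", "external"))   # True: internal address from outside
print(drop_spoofed("198.51.100.9", "external"))  # False: plausible external source
```

A packet from outside that spoofs the address of an external trusted client passes this check unhindered, which is precisely the limitation the text describes.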
Of further consideration is that many services, such as FTP and e-mail servers, in operation today require the use of dynamically allocated ports for responses, so an administrator of a static packet-filtering-based firewall has little choice but to open up an entire range of ports with static packet-filtering rules.

Static packet filter considerations include:

• Pros:
— Low impact on network performance
— Low cost; now included with many operating systems
• Cons:
— Operates only at the network layer and therefore only examines IP and TCP Headers
— Unaware of packet payload; offers low level of security
— Lacks state awareness; may require numerous ports be left open to facilitate services that use dynamically allocated ports
— Susceptible to IP spoofing
— Difficult to create rules (order of precedence)
— Only provides for a low level of protection


Exhibit 9-6. Advanced dynamic packet filter operating at the transport layer. [Figure: the OSI stack between the external and internal network interfaces, with the filter reaching up to the transport layer.]

DYNAMIC (STATEFUL) PACKET FILTER

The dynamic (stateful) packet filter is the next step in the evolution of the static packet filter. As such, it shares many of the inherent limitations of the static packet filter, with one important difference: state awareness. The typical dynamic packet filter, like the static packet filter, operates at the network layer, or OSI layer 3. An advanced dynamic packet filter may operate up into the transport layer — OSI layer 4 (see Exhibit 9-6) — to collect additional state information. Most often, the decision to accept or deny a packet is based on examination of the packet’s IP and protocol headers:

• Source address
• Destination address
• Application or protocol


• Source port number
• Destination port number

In simplest terms, the typical dynamic packet filter is aware of the difference between a new and an established connection. Once a connection is established, it is entered into a table that typically resides in RAM. Subsequent packets are compared to this table in RAM, most often by software running at the operating system (OS) kernel level. When a packet is found to be part of an existing connection, it is allowed to pass without any further inspection. By avoiding having to parse the packet-filter rule base for each and every packet that enters the firewall, and by performing this already-established connection table test at the kernel level in RAM, the dynamic packet filter enables a measurable performance increase over a static packet filter.

There are two primary differences in dynamic packet filters found among firewall vendors:

1. Support of SMP
2. Connection establishment

In writing the firewall application to fully support SMP, the firewall vendor is afforded up to a 30 percent increase in dynamic packet filter performance for each additional processor in operation. Unfortunately, many implementations of dynamic packet filters in current firewall offerings operate as a single-threaded process, which simply cannot take advantage of the benefits of SMP. Most often, to overcome the performance limitation of their single-threaded process, these vendors require powerful and expensive RISC processor-based servers to attain acceptable levels of performance. As available processor power has increased and multi-processor servers have become widely utilized, this single-threaded limitation has become much more visible. For example, vendor A running on an expensive RISC-based server offers only 150 Mbps dynamic packet filter throughput, while vendor B running on an inexpensive off-the-shelf Intel multi-processor server can attain dynamic packet filtering throughputs of above 600 Mbps.

Almost every vendor has its own proprietary methodology for building the connection table; but beyond the issues discussed above, the basic operation of the dynamic packet filter is for the most part essentially the same. In an effort to overcome the performance limitations imposed by their single-threaded, process-based dynamic packet filters, some vendors have taken dangerous shortcuts when establishing connections at the firewall. RFC guidelines recommend following the three-way handshake to establish a connection at the firewall. One popular vendor will open a new connection upon receipt of a single SYN packet, totally ignoring RFC recommendations.
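The connection-table fast path described above, together with the contrast between single-SYN and handshake-complete connection establishment, can be sketched as follows. This is a deliberately simplified, hypothetical model (it tracks only the client side of the handshake and omits sequence numbers); real products maintain far richer per-connection state.

```python
# Sketch of a dynamic packet filter's connection table. A connection is
# entered only after the handshake completes, never on a lone SYN packet
# (the dangerous vendor shortcut the text warns against).
SYN, ACK = 0x02, 0x10

established = set()   # keyed by (src, sport, dst, dport)
pending = {}          # half-open connections awaiting completion

def inspect(src, sport, dst, dport, flags):
    key = (src, sport, dst, dport)
    if key in established:
        return "pass"                 # fast path: no rule-base scan needed
    if flags == SYN:
        pending[key] = "syn_seen"
        return "check_rules"          # new connection: consult the rule base
    if flags == ACK and pending.get(key) == "syn_seen":
        del pending[key]
        established.add(key)          # handshake completed: promote to table
        return "pass"
    return "drop"                     # unsolicited mid-stream packet

print(inspect("192.0.2.7", 1025, "203.0.113.10", 80, SYN))   # check_rules
print(inspect("192.0.2.7", 1025, "203.0.113.10", 80, ACK))   # pass
print(inspect("192.0.2.7", 1025, "203.0.113.10", 80, 0x18))  # pass (established)
```

If the `established.add` line were moved into the SYN branch, a single spoofed SYN would open the connection, which is exactly the single-packet exposure discussed next.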


In effect, this exposes the servers behind the firewall to single-packet attacks from spoofed IP addresses. Hackers gain great advantage from anonymity; a hacker can be much more aggressive in mounting attacks if he can remain hidden.

Similar to the example in the examination of the static packet filter, suppose the administrator took the precaution of creating a rule that instructed the packet filter to drop any incoming packets with unknown source addresses. This packet-filtering rule would make it more difficult, but, again, not impossible, for a hacker to access at least some servers that trust specific IP addresses. The hacker could simply substitute the actual source address on a malicious packet with the source address of a known trusted client. In this attack methodology, the hacker assumes the IP address of the trusted host and must communicate through the three-way handshake to establish the connection before mounting an assault. This provides additional traffic that can be used to trace back to the hacker.

When the firewall vendor fails to follow RFC recommendations in the establishment of the connection and opens a connection without the three-way handshake, the hacker can simply spoof the trusted host address and fire any of the many well-known single-packet attacks at the firewall, or at servers protected by the firewall, while maintaining complete anonymity. One presumes that administrators are unaware that their popular firewall products operate in this manner; otherwise, it would be surprising that so many have found this practice acceptable following the many well-known single-packet attacks, such as LAND, Ping of Death, and Tear Drop, that have plagued administrators in the past.

Dynamic packet filter considerations include:

• Pros:
— Lowest impact of all examined architectures on network performance when designed to be fully SMP-compliant
— Low cost; now included with some operating systems
— State awareness provides measurable performance benefit
• Cons:
— Operates only at the network layer and therefore only examines IP and TCP Headers
— Unaware of packet payload; offers low level of security
— Susceptible to IP spoofing
— Difficult to create rules (order of precedence)
— Can introduce additional risk if connections can be established without following the RFC-recommended three-way handshake
— Only provides for a low level of protection


Exhibit 9-7. Circuit-level gateway operating at the session layer. [Figure: the OSI stack between the external and internal network interfaces, with the gateway operating at the session layer.]

CIRCUIT-LEVEL GATEWAY

The circuit-level gateway operates at the session layer — OSI layer 5 (see Exhibit 9-7). In many respects, a circuit-level gateway is simply an extension of a packet filter in that it typically performs basic packet filter operations and then adds verification of proper handshaking and the legitimacy of the sequence numbers used to establish the connection. The circuit-level gateway examines and validates TCP and User Datagram Protocol (UDP) sessions before opening a connection, or circuit, through the firewall. Hence, the circuit-level gateway has more data to act upon than a standard static or dynamic packet filter. Most often, the decision to accept or deny a packet is based upon examining the packet’s IP and TCP Headers (see Exhibit 9-8):


• Source address
• Destination address
• Application or protocol
• Source port number
• Destination port number
• Handshaking and sequence numbers

Exhibit 9-8. Circuit-level gateway IP packet structure. [Figure: the IP packet layout of Exhibit 9-2, with the circuit-level gateway examining the IP Header and TCP Header, including handshake flags and sequence numbers.]
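The "handshaking and sequence numbers" check in the list above can be sketched as a small validation routine. This is a simplified illustration, not any product's logic; the flag values and the acknowledgment rule (each side's initial sequence number must be acknowledged plus one) follow standard TCP three-way handshake behavior.

```python
# Simplified sketch: a circuit-level gateway verifying that the flags and
# sequence numbers of a TCP three-way handshake are logical before it
# opens a circuit. Real gateways track far more per-connection state.
SYN, ACK = 0x02, 0x10

def handshake_is_logical(client_syn, server_synack, client_ack):
    """Each argument is a dict with 'flags', 'seq', 'ack' for one packet."""
    if client_syn["flags"] != SYN:
        return False
    if server_synack["flags"] != SYN | ACK:
        return False
    if server_synack["ack"] != client_syn["seq"] + 1:
        return False                  # server must acknowledge client ISN + 1
    if client_ack["flags"] != ACK or client_ack["ack"] != server_synack["seq"] + 1:
        return False                  # final ACK must acknowledge server ISN + 1
    return True

syn    = {"flags": SYN,       "seq": 1000, "ack": 0}
synack = {"flags": SYN | ACK, "seq": 5000, "ack": 1001}
ack    = {"flags": ACK,       "seq": 1001, "ack": 5001}
print(handshake_is_logical(syn, synack, ack))  # True
```

A packet sequence that fails any of these checks would never result in a circuit being opened, which is the extra assurance the circuit-level gateway adds over a plain packet filter.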

Similar to a packet filter, before forwarding the packet, a circuit-level gateway compares the IP Header and TCP Header against a user-defined table containing the rules that dictate whether the firewall should deny or permit packets to pass. The circuit-level gateway then determines that a requested session is legitimate only if the SYN flags, ACK flags, and sequence numbers involved in the TCP handshaking between the trusted client and the untrusted host are logical. If the session is legitimate, the packet-filter rules are scanned until one is found that agrees with the information in the packet’s full association. If the packet filter does not find a rule that applies to the packet, then it imposes a default rule. The default rule explicitly defined in the firewall’s table typically instructs the firewall to drop a packet that meets none of the other rules.

The circuit-level gateway is literally a step up from a packet filter in the level of security it provides. Further, like a packet filter operating at a low level in the OSI model, it has little impact on network performance. However, once a circuit-level gateway establishes a connection, any application can run across that connection, because a circuit-level gateway filters packets only at the session and network layers of the OSI model. In other words, a circuit-level gateway cannot examine the data content of the packets it relays between a trusted network and an untrusted network. The potential exists to slip harmful packets through a circuit-level gateway to a server behind the firewall.

Circuit-level gateway considerations include:

• Pros:
— Low to moderate impact on network performance
— Breaks direct connection to server behind firewall
— Higher level of security than a static or dynamic (stateful) packet filter


• Cons:
— Shares many of the same negative issues associated with packet filters
— Allows any data to simply pass through the connection
— Only provides for a low to moderate level of security

APPLICATION-LEVEL GATEWAY

Like a circuit-level gateway, an application-level gateway intercepts incoming and outgoing packets, runs proxies that copy and forward information across the gateway, and functions as a proxy server, preventing any direct connection between a trusted server or client and an untrusted host. The proxies that an application-level gateway runs often differ in two important ways from the circuit-level gateway:

1. The proxies are application specific.
2. The proxies examine the entire packet and can filter packets at the application layer of the OSI model (see Exhibit 9-9).

Unlike the circuit-level gateway, the application-level gateway accepts only packets generated by services its proxies are designed to copy, forward, and filter. For example, only an HTTP proxy can copy, forward, and filter HTTP traffic. If a network relies only on an application-level gateway, incoming and outgoing packets cannot access services for which there is no proxy. For example, if an application-level gateway ran FTP and HTTP proxies, only packets generated by these services could pass through the firewall; all other services would be blocked.

The application-level gateway runs proxies that examine and filter individual packets rather than simply copying them and recklessly forwarding them across the gateway. Application-specific proxies check each packet that passes through the gateway, verifying the contents of the packet up through the application layer (layer 7) of the OSI model. These proxies can filter on particular information or specific individual commands in the application protocols the proxies are designed to copy, forward, and filter.
As an example, an FTP application-level gateway can filter on dozens of commands to allow a high degree of granularity on the permissions of specific users of the protected FTP service.

Current-technology application-level gateways are often referred to as strong application proxies. A strong application proxy extends the level of security afforded by the application-level gateway. Instead of copying the entire datagram on behalf of the user, a strong application proxy actually creates a brand-new, empty datagram inside the firewall. Only those commands and data found acceptable to the strong application proxy are copied from the original datagram outside the firewall to the new datagram inside the firewall. Then, and only then, is this new datagram forwarded to the protected server behind the firewall. By employing this methodology, the strong application proxy can mitigate the risk of an entire class of covert channel attacks.

Exhibit 9-9. Proxies filtering packets at the application layer. [Figure: the full OSI stack between the external and internal network interfaces, with the proxies operating at the application layer.]

An application-level gateway filters information at a higher OSI layer than the common static or dynamic packet filter, and most automatically create any necessary packet-filtering rules, usually making them easier to configure than traditional packet filters.

By facilitating the inspection of the complete packet, the application-level gateway is one of the most secure firewall architectures available. However, historically some vendors (usually those that market stateful inspection firewalls) and users made claims that the security an application-level gateway offers had an inherent drawback — a lack of transparency.
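The strong application proxy's copy-only-what-is-acceptable behavior described above can be sketched as follows. This is an illustrative toy, not a real proxy: the allowed FTP command set is hypothetical, and a production proxy would also validate arguments, lengths, and protocol state.

```python
# Sketch of a strong application proxy: build a brand-new datagram and
# copy across only the commands found acceptable; everything else,
# including potential covert-channel content, is dropped.
ALLOWED_FTP_COMMANDS = {"USER", "PASS", "CWD", "LIST", "RETR", "QUIT"}

def strong_proxy(original_datagram: bytes) -> bytes:
    new_datagram = []                  # the brand-new, empty datagram
    for line in original_datagram.decode("ascii", "replace").splitlines():
        command = line.split(" ", 1)[0].upper()
        if command in ALLOWED_FTP_COMMANDS:
            new_datagram.append(line)  # copy only acceptable commands
        # anything else (e.g. SITE EXEC) is silently discarded
    return ("\r\n".join(new_datagram) + "\r\n").encode("ascii")

inbound = b"USER alice\r\nSITE EXEC /bin/sh\r\nRETR report.txt\r\n"
print(strong_proxy(inbound))  # USER and RETR survive; SITE EXEC does not
```

Because nothing is forwarded unless it was explicitly copied into the new datagram, unexamined bytes from the outside never reach the protected server.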


In moving software from older 16-bit code to current technology’s 32-bit environment, and with the advent of SMP, many of today’s application-level gateways are just as transparent as they are secure. Users on the public or trusted network in most cases do not notice that they are accessing Internet services through a firewall.

Application-level gateway considerations include:

• Pros:
— An application gateway with SMP affords a moderate impact on network performance.
— Breaks direct connection to server behind firewall, eliminating the risk of an entire class of covert channel attacks.
— A strong application proxy that inspects protocol header lengths can eliminate an entire class of buffer overrun attacks.
— Highest level of security.
• Cons:
— Poor implementation can have a high impact on network performance.
— Must be written securely; historically, some vendors have introduced buffer overruns within the application gateway.
— Vendors must keep up with new protocols. A common complaint of application-level gateway users is lack of timely vendor support for new protocols.
— A poor implementation that relies on the underlying OS Inetd daemon will suffer from a severe limitation to the number of allowed connections in today’s demanding high simultaneous session environment.

STATEFUL INSPECTION

Stateful inspection combines many aspects of dynamic packet filtering and of the circuit-level and application-level gateways. While stateful inspection has the inherent ability to examine all seven layers of the OSI model (see Exhibit 9-10), in the majority of applications observed by the author, stateful inspection was operated only at the network layer of the OSI model and used only as a dynamic packet filter for filtering all incoming and outgoing packets based on source and destination IP addresses and port numbers.

While the vendor claims this is the fault of the administrator’s configuration, many administrators claim that the operating overhead associated with the stateful inspection process prohibits its full utilization.

While stateful inspection has the inherent ability to inspect all seven layers of the OSI model, most installations only operate it as a dynamic packet filter at the network layer of the model.


Exhibit 9-10. Stateful inspection examining all seven layers of the OSI model. [Figure: the full OSI stack between the external and internal network interfaces, with all seven layers available to the inspection engine.]

As indicated, stateful inspection can also function as a circuit-level gateway, determining whether the packets in a session are appropriate. For example, stateful inspection can verify that inbound SYN and ACK flags and sequence numbers are logical. However, in most implementations the stateful inspection-based firewall operates only as a dynamic packet filter and, dangerously, allows new connections to be established with a single SYN packet.

A unique limitation of one popular stateful inspection implementation is that it does not provide the ability to inspect sequence numbers on outbound packets from users behind the firewall. This leads to a flaw whereby internal users can easily spoof the IP addresses of other internal users to open holes through the associated firewall for inbound connections.

Finally, stateful inspection can mimic an application-level gateway. Stateful inspection can evaluate the contents of each packet up through the application layer and ensure that these contents match the rules in the administrator’s network security policy.


Better Performance, But What about Security?

Like an application-level gateway, stateful inspection can be configured to drop packets that contain specific commands within the Application Header. For example, the administrator could configure a stateful inspection firewall to drop HTTP packets containing a PUT command. However, historically the performance impact of application-level filtering by the single-threaded process of stateful inspection has caused many administrators to abandon its use and simply opt for dynamic packet filtering to allow the firewall to keep up with network load requirements. In fact, the default configuration of a popular stateful inspection firewall utilizes dynamic packet filtering, and not stateful inspection, for the most popular protocol on today’s Internet — HTTP traffic.

Do Current Stateful Inspection Implementations Expose the User to Additional Risks?

Unlike an application-level gateway, stateful inspection does not break the client/server model to analyze application-layer data. An application-level gateway creates two connections: one between the trusted client and the gateway, and another between the gateway and the untrusted host. The gateway then copies information between these two connections. This is the core of the well-known proxy versus stateful inspection debate. Some administrators insist that this configuration ensures the highest degree of security; other administrators argue that this configuration slows performance unnecessarily. In an effort to provide a secure connection, a stateful inspection-based firewall has the ability to intercept and examine each packet up through the application layer of the OSI model. Unfortunately, because of the associated performance impact of the single-threaded stateful inspection process, this is not the configuration typically deployed.
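The HTTP PUT filtering mentioned above can be sketched in a few lines. This is a deliberately minimal illustration of application-layer command filtering, not any vendor's inspection engine: only the HTTP request line is examined, and the blocked-method set is hypothetical.

```python
# Sketch: drop HTTP requests whose method appears in a blocked set,
# as in the "drop packets containing a PUT command" example.
BLOCKED_METHODS = {"PUT"}

def http_filter(payload: bytes) -> str:
    request_line = payload.split(b"\r\n", 1)[0].decode("ascii", "replace")
    method = request_line.split(" ", 1)[0].upper()
    return "drop" if method in BLOCKED_METHODS else "pass"

print(http_filter(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))  # pass
print(http_filter(b"PUT /upload.cgi HTTP/1.1\r\nHost: example.com\r\n"))  # drop
```

Even this trivial check requires parsing payload bytes for every packet, which hints at why application-layer filtering carries the performance cost the text describes.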
Looking beyond marketing hype and engineering theory, stateful inspection relies on algorithms within an inspection engine to recognize and process application-layer data. These algorithms compare packets against known bit patterns of authorized packets. Vendors have claimed that, theoretically, they are able to filter packets more efficiently than application-specific proxies. However, most stateful inspection engines represent a single-threaded process. With current-technology, SMP-based application-level gateways operating on multi-processor servers, the gap has dramatically narrowed. As an example, one vendor’s SMP-capable multi-architecture firewall that does not use stateful inspection outperforms a popular stateful inspection-based firewall up to 4:1 on throughput and up to 12:1 on simultaneous sessions. Further, due to limitations in the inspection language used in stateful inspection engines, application gateways are now commonly used to fill in the gaps.

Stateful inspection considerations include:


• Pros:
— Offers the ability to inspect all seven layers of the OSI model and is user configurable to customize specific filter constructs.
— Does not break the client/server model.
— Provides an integral dynamic (stateful) packet filter.
— Fast when operated as a dynamic packet filter; however, many SMP-compliant dynamic packet filters are actually faster.
• Cons:
— The single-threaded process of the stateful inspection engine has a dramatic impact on performance, so many users operate the stateful inspection-based firewall as nothing more than a dynamic packet filter.
— Many believe the failure to break the client/server model creates an unacceptable security risk, because the hacker has a direct connection to the protected server.
— A poor implementation that relies on the underlying OS Inetd daemon will suffer from a severe limitation to the number of allowed connections in today’s demanding high simultaneous session environment.
— Low level of security. No stateful inspection-based firewall has achieved higher than a Common Criteria EAL 2. Per the Common Criteria EAL 2 certification documents, EAL 2 products are not intended for use in protecting private networks when connecting to the public Internet.

CUTOFF PROXY

The cutoff proxy is a hybrid combination of a dynamic (stateful) packet filter and a circuit-level proxy. In the most common implementations, the cutoff proxy first acts as a circuit-level proxy in verifying the RFC-recommended three-way handshake and then switches over to a dynamic packet-filtering mode of operation. Hence, it initially works at the session layer — OSI layer 5 — and then switches to a dynamic packet filter working at the network layer — OSI layer 3 — after the connection is completed (see Exhibit 9-11).

The cutoff proxy verifies the RFC-recommended three-way handshake and then switches to a dynamic packet filter mode of operation.
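The two-phase behavior just described — session-layer verification first, then a switch to packet filtering — can be sketched as a small state machine. This is a simplified, hypothetical model (it tracks only the client-side SYN and final ACK, with no sequence-number checks); it is meant only to show the mode switch, not a product's logic.

```python
# Sketch of the cutoff proxy's two phases: act as a circuit-level proxy
# until the handshake is verified, then "cut off" and handle the rest of
# the connection as a plain dynamic packet filter.
SYN, ACK = 0x02, 0x10

class CutoffProxy:
    def __init__(self):
        self.phase = {}   # per-connection: "new" -> "half_open" -> "filtering"

    def handle(self, conn, flags):
        state = self.phase.get(conn, "new")
        if state == "filtering":
            return "fast_path"             # dynamic packet filter mode
        if state == "new" and flags == SYN:
            self.phase[conn] = "half_open"
            return "proxy_handshake"       # session-layer verification
        if state == "half_open" and flags == ACK:
            self.phase[conn] = "filtering" # handshake done: switch modes
            return "fast_path"
        return "drop"

p = CutoffProxy()
conn = ("192.0.2.7", 1025, "203.0.113.10", 80)
print(p.handle(conn, SYN))   # proxy_handshake
print(p.handle(conn, ACK))   # fast_path
print(p.handle(conn, 0x18))  # fast_path (now just packet filtering)
```

Note that after the switch, packets flow directly between client and server with only packet-filter checks, which is the heart of the criticism discussed next.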
Some vendors have expanded the capability of the basic cutoff proxy to reach all the way up into the application layer to handle limited authentication requirements (FTP type) before switching back to a basic dynamic packet-filtering mode of operation. We have described what the cutoff proxy does; now, more importantly, we need to discuss what it does not do. The cutoff proxy is not a traditional circuit-level proxy that breaks the client/server model for the duration of the connection. There is a direct connection established between the remote


TELECOMMUNICATIONS AND NETWORK SECURITY

[Diagram: the OSI stacks of the external and internal network interfaces; at the beginning of transmission the cutoff proxy inspects at the session layer, and by the end of transmission it filters at the network layer.]

Exhibit 9-11. Cutoff proxy filtering packets.

client and the protected server behind the firewall. This is not to say that a cutoff proxy does not provide a useful balance between security and performance. At issue with respect to the cutoff proxy are vendors who exaggerate by claiming that their cutoff proxy offers a level of security equivalent to a traditional circuit-level gateway with the added benefit of the performance of a dynamic packet filter. To be clear, this author believes that all firewall architectures have their place in Internet security. If your security policy requires authentication of basic services and examination of the three-way handshake and does not require breaking of the client/server model, the cutoff proxy is a good fit. However, administrators must fully understand that a cutoff proxy clearly is not equivalent to a circuit-level proxy, because the client/server model is not broken for the duration of the connection.


An Examination of Firewall Architectures

[Diagram: an external host on the Internet and an internal host, linked through a content inspection control station, a SCSI-based memory bank, and a secure data switch.]

Exhibit 9-12. Air gap architecture.

Cutoff proxy considerations include:

• Pros:
— There is less impact on network performance than in a traditional circuit gateway.
— The IP spoofing issue is minimized because the three-way connection is verified.
• Cons:
— Simply put, it is not a circuit gateway.
— It still has many of the remaining issues of a dynamic packet filter.
— It is unaware of packet payload and thus offers a low level of security.
— It is difficult to create rules (order of precedence).
— It can offer a false sense of security because vendors incorrectly claim it is equivalent to a traditional circuit gateway.

AIR GAP

The latest entry into the array of available firewall architectures is the air gap. At the time of this writing, the merits of air gap technology remain hotly debated in the security-related Usenet newsgroups. With air gap technology, the external client connection causes the connection data to be written to a SCSI e-disk (see Exhibit 9-12). The internal connection then reads this data from the SCSI e-disk. By breaking the direct connection between the client and the server and independently writing to and reading from the SCSI e-disk, the respective vendors believe they have provided a higher level of security and a resultant “air gap.”

Air gap vendors claim that, while the operation of air gap technology resembles that of the application-level gateway (see Exhibit 9-13), an important difference is the separation of the content inspection from the



[Diagram: the full OSI stack, from the application layer down to the physical layer, between the external and internal network interfaces.]

Exhibit 9-13. Air gap operating at the application layer.

“front end” by the isolation provided by the air gap. This may very well be true for those firewall vendors that implement their firewall on top of a standard commercial operating system. But with a current-technology firewall operating on a kernel-hardened operating system, there is little distinction. Simply put, those vendors that chose to implement kernel-level hardening of the underlying operating system utilizing multilevel security (MLS) or containerization methodologies provide no less security than current air gap technologies. The author finds it difficult to distinguish air gap technology from application-level gateway technology. The primary difference appears to be that air gap technology shares a common SCSI e-disk, while application-level technology shares common RAM. One must also consider the performance limitations of establishing the air gap in an external process (SCSI drive)


and the high performance of establishing the same level of separation in a secure kernel-hardened operating system running in kernel memory space. Any measurable benefit of air gap technology has yet to be verified by any recognized third-party testing authority. Further, the current performance of most air gap-like products falls well behind that obtainable with traditional application-level gateway-based products. Without a verifiable benefit to the level of security provided, the performance costs are prohibitive for many system administrators. Air gap considerations include:

• Pros:
— It breaks the direct connection to the server behind the firewall, eliminating the risk of an entire class of covert channel attacks.
— A strong application proxy that inspects protocol header lengths can eliminate an entire class of buffer overrun attacks.
— As with an application-level gateway, an air gap can potentially offer a high level of security.
• Cons:
— It can have a high negative impact on network performance.
— Vendors must keep up with new protocols. A common complaint of application-level gateway users is the lack of timely response from a vendor to provide support for a new protocol.
— It is currently not verified by any recognized third-party testing authority.

OTHER CONSIDERATIONS

ASIC-Based Firewalls

Looking at typical ASIC-based offerings, the author finds that virtually all are VPN/firewall hybrids. These hybrids provide fast VPN capabilities but most often are complemented only with a limited single-architecture stateful firewall capability. Because today’s security standards are in flux, ASIC designs must be left programmable, or “soft,” which means the full speed of the ASIC simply cannot be unleashed. ASIC technology most certainly brings a new level of performance to VPN operations. IPSec VPN encryption and decryption run inarguably better in hardware than in software.
However, in most accompanying firewall implementations, a simple string comparison (packet to rule base) is the only functionality provided within the ASIC. Hence, the term “ASIC-based firewall” is misleading at best. The majority of firewall operations in ASIC-based firewalls are performed in software running on microprocessors. These


firewall functions often include NAT, routing, cutoff proxy, authentication, alerting, and logging. When you commit to an ASIC, you eliminate the flexibility necessary to deal with future Internet security issues. Network security clearly remains in flux. While an ASIC can be built to be good enough for a particular purpose or situation, is good enough today really good enough for tomorrow’s threats?

Hardware-Based Firewalls

The term hardware-based firewall is another point of confusion in today’s firewall market. For clarification, most hardware-based firewalls are products that have simply eliminated the spinning media (hard disk drive) associated with typical server- or appliance-based firewalls. Most hardware firewalls are either provided with some form of solid-state disk, or they simply boot from ROM, load the OS and application from firmware to RAM, and then operate in a manner similar to a conventional firewall. The elimination of the spinning media is both a strength and a weakness of a hardware-based firewall. Strength is derived from limited improvements in MTBF and environmental performance. Weakness lies in severe limitations to the local alerting and logging capability, which most often requires a separate logging server to achieve any usable historical data retention.

OTHER CONSIDERATIONS: A BRIEF DISCUSSION OF OS HARDENING

One of the most misunderstood terms in network security with respect to firewalls today is OS hardening, or hardened OS. Many vendors claim their network security products are provided with a hardened OS. What you will find in virtually all cases is that the vendor simply turned off or removed unnecessary services and patched the operating system for known vulnerabilities. Clearly, this is not a hardened OS but rather a patched OS.

What Is a Hardened OS?
A hardened OS (see Exhibit 9-14) is one in which the vendor has modified the kernel source code to provide a mechanism that establishes a security perimeter among the non-secure application software, the secure application software, and the network stack. This eliminates the risk that the exploitation of a service running on the hardened OS could provide root-level privilege to the hacker. In a hardened OS, the security perimeter is established using one of two popular methodologies:


[Diagram: security and network attacks stopped at a security perimeter separating non-secure application software from the firewall’s secure application software, the evaluated secure OS, the evaluated secure network stack, and the computer hardware.]

Exhibit 9-14. Hardened OS.

1. Multi-Level Security (MLS): establishes a perimeter through the use of labels assigned to each packet and applies rules for the acceptance of said packets at various levels of the OS and services.
2. Compartmentalization: provides a sandbox approach whereby an application effectively runs in a dedicated kernel space with no path to another object within the kernel.

Other security-related enhancements typically common in kernel-level hardening methodologies include:

• Separation of event logging from root
• Mandatory access controls
• File system security enhancements
• Log EVERYTHING from all running processes
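The MLS approach in item 1 can be illustrated with a toy label check. The levels, names, and single rule below are invented for the sketch and are far simpler than a real labeled kernel:

```python
# Toy illustration of MLS-style labeling: each packet carries a sensitivity
# label, and a service accepts only packets at or below its own clearance
# ("no read up"). Level names and values are invented for illustration.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def accepts(service_clearance, packet_label):
    # A service cleared to "internal" may read "public" and "internal"
    # packets but must refuse "secret" ones.
    return LEVELS[packet_label] <= LEVELS[service_clearance]
```

A real MLS kernel applies comparable checks at every level of the OS and its services, not at a single function boundary.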

What Is a Patched OS?

A patched OS is typically a commercial OS from which the administrator turns off or removes all unnecessary services and installs the latest security patches from the OS vendor. A patched OS has had no modifications made to the kernel source code to enhance security.

Is a Patched OS as Secure as a Hardened OS?

Simply put, no. A patched OS is only secure until the next vulnerability in the underlying OS or allowed services is discovered. An administrator may argue that, when he has completed installing his patches and turning off services, his OS is, in fact, secure. The bottom-line question is: with


more than 100 new vulnerabilities being posted to BugTraq each month, how long will it remain secure?

How Do You Determine if a Product Is Provided with a Hardened OS?

If the product was supplied with a commercial OS, you can rest assured that it is not a hardened OS. The principal element here is that, to harden an OS, you must own the source code to the OS so you can make the necessary kernel modifications. If you really want to be sure, ask the vendor to provide third-party validation that the OS is, in fact, hardened at the kernel level (e.g., http://www.radium.ncsc.mil/tpep/epl/historical.html).

Why Is OS Hardening Such an Important Issue?

Too many in the security industry have been lulled into a false sense of security. Decisions on security products are based primarily on popularity and price, with little regard for the actual security the product can provide.

Where Can You Find Additional Information about OS Vulnerabilities?

• www.securiteam.com
• www.xforce.iss.net
• www.rootshell.com
• www.packetstorm.securify.com
• www.insecure.org/sploits.html

Where Can You Find Additional Information about Patching an OS?

More than 40 experts in the SANS community have worked together over a full year to create the following elegant and effective scripts:

• For Solaris, http://yassp.parc.xerox.com/
• For Red Hat Linux, http://www.sans.org/newlook/projects/bastille_linux.htm

Lance Spitzner (http://www.enteract.com/~lspitz/pubs.html) has written a number of excellent technical documents, including:

• Armoring Linux
• Armoring Solaris
• Armoring NT

Stanford University (http://www.stanford.edu/group/itss-ccs/security/Bestuse/Systems/) has also released a number of informative technical documents:

• Red Hat Linux
• Solaris
• SunOS


• AIX 4.x
• HPUX
• NT

CONCLUSION

Despite claims by various vendors, no single firewall architecture is the “holy grail” of network security. It has been said many times and in many ways by network security experts: if you believe any one technology is going to solve the Internet security problem, you do not understand the technology and you do not understand the problem. Unfortunately for the Internet community at large, many administrators today design the security policy for their organizations around the limited capabilities of a specific vendor’s product. The author firmly believes all firewall architectures have their respective place or role in network security. Selection of any specific firewall architecture should be a function of the organization’s security policy and should not be based solely on the limitations of the vendor’s proposed solution. The proper application of multiple firewall architectures to support the organization’s security policy, providing an acceptable balance of trust and performance, is the only viable methodology for securing a private network connected to the public Internet.

One of the most misunderstood terms in network security with respect to firewalls today is OS hardening, or hardened OS. Simply put, turning off or removing a few unnecessary services and patching for known product vulnerabilities does not build a hardened OS. Hardening an OS begins with modifying the OS software at the kernel level to build a security perimeter. This security perimeter isolates services and applications so that they cannot provide root access in the event of an application- or OS-provided service compromise. Effectively, only a properly implemented hardened OS with a barrier at the kernel level will provide an impenetrable firewall platform.

References

This text is based on numerous books, white papers, presentations, vendor literature, and various Usenet newsgroup discussions I have read or participated in throughout my career. Any failure to cite any individual for anything that in any way resembles a previous work is unintentional.

ABOUT THE AUTHOR

Paul Henry, CISSP, an information security expert who has worked in the security field for more than 20 years, has provided analysis and research support on numerous complex network security projects in Asia, the Middle East, and North America, including several multimillion-dollar


network security projects such as Saudi Arabia’s National Banking System and the DoD Satellite Data Project USA. Henry has given keynote speeches at security seminars and conferences worldwide on topics including DDoS attack risk mitigation, firewall architectures, intrusion methodology, enterprise security, and security policy development. An accomplished author, Henry has also published numerous articles and white papers on firewall architectures, covert channel attacks, distributed denial-of-service (DDoS) attacks, and buffer overruns. Henry has also been interviewed by ZDNet, the San Francisco Chronicle, the Miami Herald, NBC Nightly News, CNBC Asia, and many other media outlets.


AU1518Ch10Frame Page 155 Thursday, November 14, 2002 6:22 PM

Chapter 10

Deploying Host-Based Firewalls across the Enterprise: A Case Study

Jeffery Lowder, CISSP

Because hosts are exposed to a variety of threats, there is a growing need for organizations to deploy host-based firewalls across the enterprise. This chapter outlines the ideal features of a host-based firewall — features that are typically not needed or present in a purely personal firewall software implementation on a privately owned PC. In addition, the author describes his own experiences with, and lessons learned from, deploying agent-based, host-based firewalls across an enterprise. The author concludes that host-based firewalls provide a valuable additional layer of security.

A SEMANTIC INTRODUCTION

Personal firewalls are often associated with (and were originally designed for) home PCs connected to “always-on” broadband Internet connections. Indeed, the term personal firewall is itself a vestige of the product’s history: originally distinguished from enterprise firewalls, personal firewalls were initially viewed as a way to protect home PCs.1 Over time, it was recognized that personal firewalls had other uses. The security community began to talk about using personal firewalls to protect notebooks that connect to the enterprise LAN via the Internet, and eventually to protect notebooks that physically reside on the enterprise LAN.

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



Consistent with that trend — and consistent with the principle of defense-in-depth — it can be argued that the time has come for the potential usage of personal firewalls to be broadened once again. Personal firewalls should really be viewed as host-based firewalls. As soon as one makes the distinction between host-based and network-based firewalls, the additional use of a host-based firewall becomes obvious. Just as organizations deploy host-based intrusion detection systems (IDS) to provide an additional detection capability for critical servers, organizations should consider deploying host-based firewalls to provide an additional layer of access control for critical servers (e.g., Exchange servers, domain controllers, print servers, etc.). Indeed, given that many host-based firewalls have an IDS capability built in, it is conceivable that, at least for some small organizations, host-based firewalls could even replace specialized host-based IDS software.

The idea of placing one firewall behind another is not new. For years, security professionals have talked about using so-called internal firewalls to protect especially sensitive back-office systems.2 However, internal firewalls, like network-based firewalls in general, are still dedicated devices. (This applies both to firewall appliances such as Cisco’s PIX and to software-based firewalls such as Symantec’s Raptor.) In contrast, host-based firewalls require no extra equipment. A host-based firewall is a firewall software package that runs on a preexisting server or client machine. Given that a host-based firewall runs on a server or client machine (and is responsible for protecting only that machine), host-based firewalls offer greater functionality than network-based firewalls, even including internal firewalls that are dedicated to protecting a single machine.
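The greater functionality noted above comes from keying filtering decisions on the owning program, not just the port. A minimal sketch follows; the rule format, program names, and default-deny policy are all assumptions for illustration:

```python
# Sketch of per-program outbound filtering: the verdict depends on which
# program owns the connection, something a purely network-based firewall
# cannot see. Rules and names are invented for this illustration.
RULES = {
    ("iexplore.exe", 80): "allow",   # browser may use HTTP
    ("outlook.exe", 25): "allow",    # mail client may use SMTP
}

def outbound_verdict(program, remote_port):
    # Default-deny: any (program, port) pair not explicitly listed is blocked.
    return RULES.get((program.lower(), remote_port), "deny")
```

A network firewall that sees only “TCP port 80” must allow the traffic; the host-based rule can still deny NOTEPAD.EXE on that same port.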
Whereas both network- and host-based firewalls have the ability to filter inbound and outbound network connections, only host-based firewalls possess the additional capabilities of blocking network connections linked to specific programs and preventing the execution of mail attachments. To put this into proper perspective, consider the network worm and Trojan horse program QAZ, widely suspected to be the exploit used in the November 2000 attack on Microsoft’s internal network. QAZ works by hijacking the NOTEPAD.EXE program. From the end user’s perspective, Notepad still appears to run normally; but each time Notepad is launched, QAZ sends an e-mail message (containing the IP address of the infected machine) to some address in China.3 Meanwhile, in the background, the Trojan patiently waits for a connection on TCP port 7597, through which an intruder can upload and execute any application.4 Suppose QAZ were modified to run over TCP port 80 instead.5 While all firewalls can block outbound connections on TCP port 80, implementing such a configuration would interfere with legitimate traffic. Only a host-based firewall can block an outbound connection on TCP port 80 associated with NOTEPAD.EXE and notify the user of the event. As Steve Riley notes, “Personal firewalls


that monitor outbound connections will raise an alert; seeing a dialog with the notice ‘Notepad is attempting to connect to the Internet’ should arouse anyone’s suspicions.”6

STAND-ALONE VERSUS AGENT-BASED FIREWALLS

Host-based firewalls can be divided into two categories: stand-alone and agent-based.7 Stand-alone firewalls are independent of other network devices in the sense that their configuration is managed (and their logs are stored) on the machine itself. Examples of stand-alone firewalls include ZoneAlarm, Sygate Personal Firewall Pro, Network Associates’ PGP Desktop Security, McAfee Personal Firewall,8 Norton Internet Security 2000, and Symantec Desktop Firewall. In contrast, agent-based firewalls are not locally configured or monitored. Agent-based firewalls are configured from (and their logs are copied to) a centralized enterprise server. Examples of agent-based firewalls include ISS RealSecure Desktop Protector (formerly Network ICE’s BlackICE Defender) and InfoExpress’s CyberArmor Personal Firewall.

We chose to implement agent-based firewall software on our hosts. While stand-alone firewalls are often deployed as an enterprise solution, we wanted the agent-based ability to centrally administer and enforce a consistent access control list (ACL) across the enterprise. And as best practice dictates that the logs of network-based firewalls be reviewed on a regular basis, we wanted the ability to aggregate logs from host-based firewalls across the enterprise into a single source for regular review and analysis.

OUR PRODUCT SELECTION CRITERIA

Once we adopted an agent-based firewall model, our next step was to select a product. Again, as of the time this chapter was written, our choices were RealSecure Desktop Protector or CyberArmor. We used the following criteria to select a product:9

• Effectiveness in blocking attacks.
The host-based firewall should effectively deny malicious inbound traffic. It should also at least be capable of effectively filtering outbound connections. As Steve Gibson argues, “Not only must our Internet connections be fortified to prevent external intrusion, they also [must] provide secure management of internal extrusion.”10 By internal extrusion, Gibson is referring to outbound connections initiated by Trojan horses, viruses, and spyware. To effectively filter outbound connections, the host-based firewall must use cryptographic sums. The host-based firewall must first generate a cryptographic sum for each authorized application and then regenerate and compare that sum to the one stored in the database before any program (no matter what the filename) is allowed access. If the application










does not maintain a database of cryptographic sums for all authorized applications (and instead only checks filenames or file paths), the host-based firewall may give an organization a false sense of security.
• Centralized configuration. Not only did we need the ability to centrally define the configuration of the host-based firewall, we also required the ability to enforce that configuration. In other words, we wanted the option to prevent end users from making security decisions about which applications or traffic to allow.
• Transparency to end users. Because the end users would not be making any configuration decisions, we wanted the product to be as transparent to them as possible. For example, we did not want users to have to “tell” the firewall how their laptops were connected (e.g., corporate LAN, home Internet connection, VPN, extranet, etc.) in order to get the right policy applied. In the absence of an attack, we wanted the firewall to run silently in the background without noticeably degrading performance. (Of course, in the event of an attack, we would want the user to receive an alert.)
• Multiple platform support. If we were only interested in personal firewalls, this would not have been a concern. (While Linux notebooks arguably might need personal firewall protection, we do not have such machines in our environment.) However, because we are interested in implementing host-based firewalls on our servers as well as our client PCs, support for multiple operating systems is a requirement.
• Application support. The firewall must be compatible with all authorized applications and the protocols used by those applications.
• VPN support. The host-based firewall must support our VPN implementation and client software. In addition, it must be able to detect and transparently adapt to VPN connections.
• Firewall architecture. There are many options for host-based firewalls, including packet filtering, application-level proxying, and stateful inspection.
• IDS technology.
Likewise, there are several different approaches to IDS technology, each with its own strengths and weaknesses. The number of attacks detectable by a host-based firewall will clearly be relevant here.
• Ease of use and installation. As an enterprisewide solution, the product should support remote deployment and installation. In addition, the central administrative server should be (relatively) easy to use and configure.
• Technical support. Quality and availability are our prime concerns.
• Scalability. Although we are a small company, we do expect to grow. We need a robust product that can support a large number of agents.
• Disk space. We were concerned about the amount of disk space required on end-user machines as well as the centralized policy and logging server. For example, does the firewall count the number of times


an attack occurs rather than log a single event for every occurrence of an attack?
• Multiple policy groups. Because we have diverse groups of end users, each with unique needs, we wanted the flexibility to enforce different policies on different groups. For example, we might want to allow SQLNet traffic from our development desktops while denying such traffic for the rest of our employees.
• Reporting. As with similar enterprise solutions, an ideal reporting feature would include built-in reports for top intruders, targets, and attack methods over a given period of time (e.g., monthly, weekly, etc.).
• Cost. As a relatively small organization, we were especially concerned about the cost of selecting a high-end enterprise solution.

OUR TESTING METHODOLOGY

We eventually plan to install and evaluate both CyberArmor and RealSecure Desktop Protector by conducting a pilot study on each product with a small, representative sample of users. (At the time this chapter was written, we were nearly finished with our evaluation of CyberArmor and about to begin our pilot study of ISS RealSecure.) While the method for evaluating both products according to most of our criteria is obvious, our method for testing one criterion deserves a detailed explanation: effectiveness in blocking attacks. We tested the effectiveness of each product in blocking unauthorized connections in several ways:

• Remote Quick Scan from HackYourself.com.11 From a dial-up connection, we used HackYourself.com’s Quick Scan to execute a simple and remote TCP and UDP port scan against a single IP address.
• Nmap scan. We used nmap to conduct two different scans. First, we performed an ACK scan to determine whether the firewall was performing stateful inspection or simple packet filtering.
Second, we used nmap’s operating system fingerprinting feature to determine whether the host-based firewall effectively blocked attempts to fingerprint target machines.
• Gibson Research Corporation’s LeakTest. LeakTest determines a firewall product’s ability to effectively filter outbound connections initiated by Trojans, viruses, and spyware.12 This tool can test a firewall’s ability to block LeakTest when it masquerades as a trusted program (OUTLOOK.EXE).
• Steve Gibson’s TooLeaky. TooLeaky determines whether the firewall blocks unauthorized programs from controlling trusted programs. The TooLeaky executable tests whether this ability exists by spawning Internet Explorer to send a short, innocuous string to Steve Gibson’s Web site, and then receiving a reply.13
• Firehole. Firehole relies on a modified dynamic link library (DLL) that is used by a trusted application (Internet Explorer). The test is whether


the firewall allows the trusted application, under the influence of the malicious DLL, to send a small text message to a remote machine. The message contains the currently logged-on user’s name, the name of the computer, a message claiming victory over the firewall, and the time the message was sent.14

CONFIGURATION

One of our reasons for deploying host-based firewalls was to provide an additional layer of protection against Trojan horses, spyware, and other programs that initiate outbound network connections. While host-based firewalls are not designed to interfere with Trojan horses that do not send or receive network connections, they can be quite effective in blocking network traffic to or from an unauthorized application when configured properly. Indeed, in one sense, host-based firewalls have an advantage over anti-virus software. Whereas anti-virus software can only detect Trojan horses that match a known signature, host-based firewalls can detect Trojan horses based on their network behavior. Host-based firewalls can detect, block, and even terminate any unauthorized application that attempts to initiate an outbound connection, even if that connection is on a well-known port like TCP 80 or the application causing it appears legitimate (NOTEPAD.EXE).

However, there are two well-known caveats to configuring a host-based firewall to block Trojan horses. First, the firewall must block all connections initiated by new applications by default. Second, the firewall must not be circumvented by end users who, for whatever reason, click “yes” whenever asked by the firewall if it should allow a new application to initiate outbound traffic. Taken together, these two caveats can cause the cost of ownership of host-based firewalls to quickly escalate.
Indeed, other companies that have already implemented both caveats report large numbers of help desk calls from users wanting to get a specific application authorized.15 Given that we do not have a standard desktop image and that we have a very small help desk staff, we decided to divide our pilot users into two different policy groups: pilot-tech-technical and pilot-normal-regular (see Exhibit 10-1). The first configuration enabled users to decide whether to allow an application to initiate an outbound connection. This configuration was implemented only on the desktops of our IT staff. The user must choose whether to allow or deny the network connection requested by the application. Once the user makes that choice, the host-based firewall generates a checksum and creates a rule reflecting the user’s decision. (See Exhibit 10-2 for a sample rule set in CyberArmor.)

AU1518Ch10Frame Page 161 Thursday, November 14, 2002 6:22 PM

Deploying Host-Based Firewalls across the Enterprise: A Case Study

Exhibit 10-1. CyberArmor policy groups.

Exhibit 10-2. Sample user-defined rules in CyberArmor.

The second configuration denied all applications by default and only allowed applications that had been specifically authorized. We applied this configuration on all laptops outside our IT organization, because we did not want to allow nontechnical users to make decisions about the configuration of their host-based firewall.

LESSONS LEARNED

Although at the time this chapter was finished we had not yet completed our pilot studies on both host-based firewall products, we had already


learned several lessons about deploying agent-based, host-based firewalls across the enterprise. These lessons may be summarized as follows.

1. Our pilot study identified one laptop with a nonstandard and, indeed, unauthorized network configuration. For small organizations that do not enforce a standard desktop image, this should not be a surprise.
2. The ability to enforce different policies on different machines is paramount. This was evident from our experience using the host-based firewall to restrict outbound network connections. By having the ability to divide our users into two groups, those we would allow to make configuration decisions and those we would not, we were able to get both flexibility and security.
3. As is the case with network-based intrusion detection systems, our experience validated the need for well-crafted rule sets. Our configuration includes a rule that blocks inbound NetBIOS traffic. Given the amount of NetBIOS traffic present on both our internal network and external networks, this generated a significant number of alerts. This, in turn, underscored the need for finely tuned alerting rules.
4. As the author has found when implementing network-based firewalls, the process of constructing and then fine-tuning a host-based firewall rule set is time consuming. This is especially true if one decides to implement restrictions on outbound traffic (and not allow users, or a portion of users, to make configuration decisions of their own), because one then has to identify and locate the exact file path of each authorized application that has to initiate an outbound connection. While this is by no means an insurmountable problem, there was a definite investment of time in achieving that configuration.
5. We did not observe any significant performance degradation on end user machines caused by the firewall software.
At the time this chapter was written, however, we had not yet tested deploying host-based firewall software on critical servers.
6. Our sixth observation is product specific. We discovered that the built-in reporting tool provided by CyberArmor is primitive. There is no built-in support for graphical reports, and it is difficult to find information using the text reporting. For example, using the built-in text-reporting feature, one can obtain an "alarms" report. That report, presented in spreadsheet format, merely lists alarm messages and the number of occurrences. Source IP addresses, date, and time information are not included in the report. Moreover, the alarm messages are somewhat cryptic. (See Exhibit 10-3 for a sample CyberArmor alarm report.) While CyberArmor is compatible with Crystal Reports, using Crystal Reports to produce useful reports requires extra software and time.



Exhibit 10-3. Sample CyberArmor alarm report.

HOST-BASED FIREWALLS FOR UNIX?

Host-based firewalls are often associated with Windows platforms, given the history and evolution of personal firewall software. However, there is no reason in theory why host-based firewalls cannot (or should not) be implemented on UNIX systems as well. To be sure, some UNIX packet filters already exist, including ipchains, iptables, and ipfw.16 Given that UNIX platforms have not been widely integrated into commercial host-based firewall products, these utilities may be very useful in an enterprisewide host-based firewall deployment. However, such tools generally have two limitations worth noting. First, unlike personal firewalls, those utilities are packet filters. As such, they do not have the capability to evaluate an outbound network connection according to the application that generated the connection. Second, the utilities are not agent based. Thus, as an enterprise solution, those tools might not be easily scalable. The lack of an agent-based architecture in such tools might also make it difficult to provide centralized reporting on events detected on UNIX systems.

CONCLUSIONS

While host-based firewalls are traditionally thought of as a way to protect corporate laptops and privately owned PCs, they can also provide a valuable layer of additional protection for servers. Similarly, while host-based firewalls are typically associated with Windows platforms,


they can also be used to protect UNIX systems. Moreover, host-based firewalls can be an effective tool for interfering with the operation of Trojan horses and similar applications. Finally, using an agent-based architecture can provide centralized management and reporting capability over all host-based firewalls in the enterprise.

Acknowledgments

The author wishes to acknowledge Frank Aiello and Derek Conran for helpful suggestions. The author is also grateful to Lance Lahr, who proofread an earlier version of this chapter.

References

1. Michael Cheek, Personal firewalls block the inside threat. Gov. Comp. News 19:3 (3 April 2000). Spotted electronically at , February 6, 2002.
2. William R. Cheswick and Steven M. Bellovin, Firewalls and Internet Security: Repelling the Wily Hacker (New York: Addison-Wesley, 1994), pp. 53–54.
3. F-Secure Computer Virus Information Pages: QAZ (, January 2001), spotted February 6, 2002.
4. TROJ_QAZ.A — Technical Details (, October 28, 2000), spotted February 6, 2002.
5. Steve Riley, Is Your Generic Port 80 Rule Safe Anymore? (, February 5, 2001), spotted February 6, 2002.
6. Steve Riley, Is Your Generic Port 80 Rule Safe Anymore? (, February 5, 2001), spotted February 6, 2002.
7. Michael Cheek, Personal firewalls block the inside threat. Gov. Comp. News 19:3 (3 April 2000). Spotted electronically at , February 6, 2002.
8. Although McAfee is (at the time this chapter was written) currently in Beta testing with its own agent-based product, Personal Firewall 7.5, that product is not scheduled to ship until late March 2002. See Douglas Hurd, The Evolving Threat (, February 8, 2002), spotted February 8, 2002.
9. Cf. my discussion of network-based firewall criteria in Firewall Management and Internet Attacks in Information Security Management Handbook (4th ed., New York: Auerbach, 2000), pp. 118–119.
10. Steve Gibson, LeakTest — Firewall Leakage Tester (, January 24, 2002), spotted February 7, 2002.
11. Hack Yourself Remote Computer Network Security Scan (, 2000), spotted February 7, 2002.
12. Leak Test — How to Use Version 1.x (, November 3, 2001), spotted February 7, 2002.
13. Steve Gibson, Why Your Firewall Sucks :-) (, November 5, 2001), spotted February 8, 2002.
14. By default, this message is sent over TCP port 80 but this can be customized. See Robin Keir, Firehole: How to Bypass Your Personal Firewall Outbound Detection (, November 6, 2001), spotted February 8, 2002.
15. See, for example, Barrie Brook and Anthony Flaviani, Case Study of the Implementation of Symantec's Desktop Firewall Solution within a Large Enterprise (, February 8, 2002), spotted February 8, 2002.


16. See Rusty Russell, Linux IPCHAINS-HOWTO (, July 4, 2000), spotted March 29, 2002; Oskar Andreasson, Iptables Tutorial 1.1.9 (, 2001), spotted March 29, 2002; and Gary Palmer and Alex Nash, Firewalls (, 2001), spotted March 29, 2002. I am grateful to an anonymous reviewer for suggesting I discuss these utilities in this chapter.

ABOUT THE AUTHOR Jeffery Lowder, CISSP, GSEC, is currently working as an independent information security consultant. His interests include firewalls, intrusion detection systems, UNIX security, and incident response. Previously, he has served as the director, security and privacy, for Elemica, Inc.; senior security consultant for PricewaterhouseCoopers, Inc.; and director, network security, at the U.S. Air Force Academy.


Chapter 11

Overcoming Wireless LAN Security Vulnerabilities Gilbert Held

The IEEE 802.11b specification represents one of three wireless LAN standards developed by the Institute of Electrical and Electronics Engineers. The original standard, the 802.11 specification, defined wireless LANs using infrared, Frequency Hopping Spread Spectrum (FHSS), and Direct Sequence Spread Spectrum (DSSS) communications at data rates of 1 and 2 Mbps. The relatively low operating rate associated with the original IEEE 802.11 standard precluded its widespread adoption. The IEEE 802.11b standard is actually an annex to the 802.11 standard. This annex specifies the use of DSSS communications to provide operating rates of 1, 2, 5.5, and 11 Mbps. A third IEEE wireless LAN standard, IEEE 802.11a, represents another annex to the original standard. Although 802.11- and 802.11b-compatible equipment operates in the 2.4-GHz unlicensed frequency band, the need for additional bandwidth to support higher data rates led the 802.11a standard to use the 5-GHz frequency band. Although 802.11a equipment can transfer data at rates up to 54 Mbps, because higher frequencies attenuate more rapidly than lower frequencies, approximately four times as many access points are required to service a given geographic area as with 802.11b equipment. Due to this, as well as the fact that 802.11b equipment reached the market prior to 802.11a devices, the vast majority of wireless LANs are based on the use of 802.11b-compatible equipment.

SECURITY

Under all three IEEE 802.11 specifications, security is handled in a similar manner.

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC

The three mechanisms that affect wireless LAN security under


the troika of 802.11 specifications are the network name, authentication, and encryption.

Network Name

To understand the role of the network name requires a small diversion to discuss a few wireless LAN network terms. Each device in a wireless LAN is referred to as a station, including both clients and access points. Client stations can communicate directly with one another, which is referred to as ad hoc networking. Client stations can also communicate with other clients, both wireless and wired, through the services of an access point. The latter type of networking is referred to as infrastructure networking. In an infrastructure networking environment, the group of wireless stations, including the access point, forms what is referred to as a basic service set (BSS). The basic service set is identified by a name. That name, which is formally referred to as the service set identifier (SSID), is also referred to as the network name. One can view the network name as a password. Each access point normally is manufactured with a preset network name that can be changed. To be able to access an access point, a client station must be configured with the same network name as that configured on the access point. Unfortunately, there are three key reasons why the network name is almost valueless as a password. First, most vendors use a well-known default setting that can be easily learned by surfing to the vendor's Web site and accessing the online manual for the access point. For example, Netgear uses the network name "Wireless." Second, access points periodically transmit beacon frames that announce their presence and operational characteristics, including their network name. Thus, a wireless protocol analyzer, such as WildPackets' AiroPeek or Sniffer Technologies' Wireless Sniffer, could be used to record beacon frames as a mechanism to learn the network name.
A third problem associated with the use of the network name as a password for access to an access point is the fact that there are two client settings that can be used to override most access point network name settings. Configuring a client station with a network name of "ANY," or leaving the setting blank, can normally override the network name setting of an access point. Exhibit 11-1 illustrates an example of the use of the SMC Networks' EZ Connect Wireless LAN Configuration Utility program to set the SSID to a value of "ANY." Once this action was accomplished, this author was able to access a Netgear wireless router/access point whose SSID was by default set to a value of "Wireless." Thus, the use of the SSID or network name as a password to control access to a wireless LAN must be considered a facility that is easily compromised and that offers very limited protective value.
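The second weakness above, the network name broadcast in beacon frames, is easy to see at the byte level. The sketch below parses the SSID out of a (synthetic) 802.11 beacon frame body: after the fixed fields (8-byte timestamp, 2-byte beacon interval, 2-byte capability information), the body is a sequence of tagged information elements, and element ID 0 carries the SSID. The function name and the sample bytes are invented for illustration; a real analyzer would of course read frames from a radio in monitor mode.

```python
def beacon_ssid(frame_body: bytes) -> str:
    """Extract the SSID information element from an 802.11 beacon body."""
    i = 12  # skip 8-byte timestamp + 2-byte interval + 2-byte capability
    while i + 2 <= len(frame_body):
        element_id, length = frame_body[i], frame_body[i + 1]
        value = frame_body[i + 2:i + 2 + length]
        if element_id == 0:          # element ID 0 = SSID
            return value.decode("ascii", "replace")
        i += 2 + length
    return ""

# Synthetic beacon body: zeroed fixed fields, then an SSID element
# advertising the Netgear default name mentioned in the text.
body = bytes(12) + bytes([0, 8]) + b"Wireless"
```

Because the SSID rides in every beacon in the clear, any passive listener recovers it; this is why the network name cannot serve as a secret.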



Exhibit 11-1. Setting the value of the SSID or network name to “ANY”.

Authentication

A second security mechanism included within all three IEEE wireless LAN specifications is authentication. Authentication represents the process of verifying the identity of a wireless station. Under the IEEE 802.11 standard, including the two addenda, authentication can be either open or shared key. Open authentication in effect means that the identity of a station is not checked. The second method of authentication, referred to as shared key, assumes that when encryption is used, each station that has the correct key and is operating in a secure mode represents a valid user. Unfortunately, as noted shortly, shared key authentication is vulnerable because the WEP key can be learned by snooping on the radio frequency.

Encryption

The third security mechanism associated with IEEE 802.11 networks is encryption. The encryption used under the 802.11 series of specifications



Exhibit 11-2. WEP settings.

is referred to as Wired Equivalent Privacy (WEP). The initial goal of WEP is reflected by its name: its use is designed to provide a level of privacy equivalent to that of a wired LAN. Thus, some of the vulnerabilities uncovered concerning WEP should not be shocking, because the goal of WEP is not to bulletproof a network but simply to make over-the-air transmission difficult for a third party to understand. However, as we will note, there are several problems associated with the use of WEP that make it relatively easy for a third party to determine the composition of network traffic flowing on a network.

Exhibit 11-2 illustrates the pull-down menu of the WEP settings from the SMC Networks' Wireless LAN Configuration Utility program. Note in the exhibit that the highlighted entry of "Disabled" represents the default setting. This means that, by default, WEP is disabled; and unless you alter the configuration on your client stations and access points, any third party within transmission range could use a wireless LAN protocol analyzer to easily record all network activity. In fact, during the


year 2001, several articles appeared in The New York Times and The Wall Street Journal concerning the travel of two men in a van from one parking lot to another in Silicon Valley. Using a directional antenna focused at each building from a parking lot and a notebook computer running a wireless protocol analyzer program, these men were able to easily read most network traffic because most networks were set up with WEP disabled.

Although enabling WEP makes it more difficult to decipher traffic, the manner by which WEP encryption occurs has several shortcomings. Returning to Exhibit 11-2, note that the two WEP settings are shown as "64 Bit" and "128 Bit." Although the use of 64- and 128-bit encryption keys may appear to represent a significant barrier to decryption, the manner by which WEP encryption occurs creates several vulnerabilities. An explanation follows.

WEP encryption occurs via the creation of a key that is used to generate a pseudo-random binary string that is modulo-2 added to plaintext to create ciphertext. The algorithm that uses the WEP key is a stream cipher, meaning it uses the key to create an infinite pseudo-random binary string. Exhibit 11-3 illustrates the use of SMC Networks' Wireless LAN Configuration Utility program to create a WEP key. SMC Networks simplifies the entry of a WEP key by allowing the user to enter a passphrase. Other vendors may allow the entry of hex characters or alphanumeric characters. Regardless of the manner by which a WEP key is entered, the total key length consists of two elements: an initialization vector (IV) that is 24 bits in length and the entered WEP key. Because the IV is part of the key, this means that a user constructing a 64-bit WEP key actually specifies 40 bits in the form of a passphrase or ten hex digits, or 104 bits in the form of a passphrase or 26 hex digits for a 128-bit WEP key.
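The per-frame key construction just described (24-bit IV prepended to the 40- or 104-bit secret key, fed to the RC4 stream cipher) can be sketched as follows. This is a simplified model for illustration; real WEP frames also carry an integrity check value, which is omitted here, and the function names are invented. The final lines demonstrate the IV-reuse weakness discussed next: two frames encrypted under the same IV share a keystream, so XORing the ciphertexts cancels the keystream entirely.

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream (the stream cipher WEP uses)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, wep_key: bytes, plaintext: bytes) -> bytes:
    """Per-frame key = 24-bit IV prepended to the secret WEP key."""
    ks = rc4_keystream(iv + wep_key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

iv = b"\xaa\xbb\xcc"                 # 24-bit IV, sent in the clear
key = b"\x01\x02\x03\x04\x05"        # 40-bit secret portion
c1 = wep_encrypt(iv, key, b"attack at dawn")
c2 = wep_encrypt(iv, key, b"attack at dusk")
# Same IV => same keystream => c1 XOR c2 == p1 XOR p2, leaking structure.
leak = bytes(a ^ b for a, b in zip(c1, c2))
```

Because the XOR of the two ciphertexts equals the XOR of the two plaintexts, an eavesdropper who captures repeated IVs never needs the key to begin recovering traffic, which is the basis of the frequency analysis described below.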
Because wireless LAN transmissions can easily be reflected off surfaces and moving objects, multiple signals can flow to a receiver. Referred to as multipath transmission, the receiver needs to select the best transmission and ignore the other signals. As one might expect, this can be a difficult task, resulting in a transmission error rate considerably higher than that encountered on wired LANs. Due to this higher error rate, it would not be practical to use a WEP key by itself to create a stream cipher that continues indefinitely, because a single bit received in error would adversely affect the decryption of all subsequent data. Recognizing this fact, the IV is used along with the digits of the WEP key to produce a new WEP key on a frame-by-frame basis. While this is a technically sound action, unfortunately the 24-bit length of the IV used in conjunction with a 40- or 104-bit fixed-length WEP key causes several vulnerabilities. First, the IV is transmitted in the clear, allowing anyone with appropriate equipment to record its composition along with the encrypted


TELECOMMUNICATIONS AND NETWORK SECURITY

Exhibit 11-3. Creating a WEP encryption key.

frame data. Because the IV is only 24 bits in length, it will periodically repeat. Thus, capturing two or more of the same IVs and the encrypted text makes it possible to perform a frequency analysis of the encrypted text that can be used as a mechanism to decipher the captured data. For example, assume one has captured several frames that had the same IV. Because "e" is the most common letter used in the English language, followed by the letter "t," one would begin a frequency analysis by searching for the most common letter in the encrypted frames. If the letter "x" was found to be the most frequent, there would be a high probability that the plaintext letter "e" was encrypted as the letter "x." Thus, the IV represents a serious weakness that compromises encryption.

During mid-2001, researchers at Rice University and AT&T Laboratories discovered that by monitoring approximately five hours of wireless LAN traffic, it became possible to determine the WEP key through a series of mathematical manipulations, regardless of whether a 64-bit or 128-bit key was used. This research was used by several software developers to produce


programs such as AirSnort, whose use enables a person to determine the WEP key in use and to become a participant on a wireless LAN. Thus, the weakness of the WEP key results in shared key authentication being compromised as a mechanism to validate the identity of wireless station operators. Given an appreciation for the vulnerabilities associated with wireless LAN security, one can now focus on the tools and techniques that can be used to minimize or eliminate such vulnerabilities.

MAC ADDRESS CHECKING

One of the first methods used to overcome the vulnerabilities associated with the use of the network name or SSID, as well as shared key authentication, was MAC address checking. Under MAC address checking, the LAN manager programs the MAC address of each client station into an access point. The access point only allows frames whose source address field contains an authorized MAC address to use its facilities. Although the use of MAC address checking provides a significant degree of improvement over the use of a network name for accessing the facilities of an access point, by itself it does nothing to alter the previously mentioned WEP vulnerabilities. To attack the vulnerability of WEP, several wireless LAN equipment vendors introduced the use of dynamic WEP keys.

Dynamic WEP Keys

Because WEP becomes vulnerable when a third party accumulates a significant amount of traffic that flows over the air using the same key, it becomes possible to enhance security by dynamically changing the WEP key. Several vendors have recently introduced dynamic WEP key capabilities as a mechanism to enhance wireless security. Under a dynamic key capability, a LAN administrator, depending on the product used, may be able to configure equipment to exchange WEP keys either on a frame-by-frame basis or at predefined intervals.
The end result of this action is to limit the ability of a third party to monitor a sufficient amount of traffic to either perform a frequency analysis of encrypted data or determine the WEP key in use. While dynamic WEP keys eliminate the vulnerability of continued use of a single WEP key, readers should note that each vendor supporting this technology does so on a proprietary basis. This means that if one anticipates using products from multiple vendors, one may have to forego the use of dynamic WEP keys unless the vendors selected have cross-licensed their technology to provide compatibility between products. Having an appreciation for the manner by which dynamic WEP keys can enhance encryption security, this discussion of methods to minimize wireless security vulnerabilities concludes with a brief discussion of the emerging IEEE 802.1x standard.
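The MAC address checking and interval-based key rotation described above can be sketched together as a toy access point model. The class, its method names, and the rotation logic are illustrative inventions, not any vendor's actual firmware behavior; the sketch simply shows the two checks side by side: frames from unprogrammed source MACs are dropped, and the shared key advances every rotation interval so that only a bounded amount of traffic is ever encrypted under one key.

```python
import time

class AccessPoint:
    """Toy model: MAC allow-list plus interval-based WEP key rotation."""

    def __init__(self, allowed_macs, rotation_interval=300):
        self.allowed = {m.lower() for m in allowed_macs}
        self.interval = rotation_interval      # seconds per key generation
        self.generation = 0
        self.last_rotated = time.monotonic()

    def accept_frame(self, source_mac):
        # Only frames whose source address is pre-programmed are serviced.
        return source_mac.lower() in self.allowed

    def current_key_generation(self, now=None):
        # Advance the key generation once per elapsed interval, bounding
        # how much traffic a passive listener sees under any single key.
        now = time.monotonic() if now is None else now
        while now - self.last_rotated >= self.interval:
            self.last_rotated += self.interval
            self.generation += 1
        return self.generation
```

Note what the sketch also makes plain: MAC checking is a frame-header comparison, so it does nothing for confidentiality, and (as the text notes for real products) source addresses can be observed and forged; it raises the bar without fixing WEP.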


THE IEEE 802.1X STANDARD

The IEEE 802.1x standard is being developed to control access to both wired and wireless LANs. Although the standard was not officially completed during early 2002, Microsoft added support for the technology in its Windows XP operating system released in October 2001. Under the 802.1x standard, a wireless client station attempting to access a wired infrastructure via an access point will be challenged by the access point to identify itself. The client will then transmit its identification to the access point. The access point will forward the challenge response to an authentication server located on the wired network. Upon authentication, the server will inform the access point that the wireless client can access the network, resulting in the access point allowing frames generated by the client to flow onto the wired network. While the 802.1x standard can be used to enhance authentication, by itself it does not enhance encryption. Thus, one must consider the use of dynamic WEP keys as well as proprietary MAC address checking or an 802.1x authentication method to fully address wireless LAN security vulnerabilities.

Additional Reading

Held, G., "Wireless Application Directions," Data Communications Management (April/May 2002).
Lee, D.S., "Wireless Internet Security," Data Communications Management (April/May 2002).

ABOUT THE AUTHOR Gilbert Held is an award-winning author and lecturer. Gil is the author of over 40 books and 450 technical articles. Some of Gil’s recent book titles include Building a Wireless Office and The ABCs of IP Addressing, published by Auerbach Publications. Gil can be reached via e-mail at [email protected]


Chapter 12

Voice Security Chris Hare, CISSP, CISA

Most security professionals in today's enterprise spend much of their time working to secure access to corporate electronic information. However, voice and telecommunications fraud still costs the corporate business communities millions of dollars each year. Most losses in the telecommunications arena stem from toll fraud, which is perpetrated by many different methods. Millions of people rely upon the telecommunication infrastructure for their voice and data needs on a daily basis. This dependence has resulted in the telecommunications system being classed as a critical infrastructure component. Without the telephone, many of our daily activities would be more difficult, if not almost impossible.

When many security professionals think of voice security, they automatically think of encrypted telephones, fax machines, and the like. However, voice security can be much simpler and start right at the device to which your telephone is connected. This chapter looks at how the telephone system works, toll fraud, voice communications security concerns, and applicable techniques for any enterprise to protect its telecommunication infrastructure. Explanations of commonly used telephony terms are found throughout the chapter.

POTS: PLAIN OLD TELEPHONE SERVICE

Most people refer to it as "the phone." They pick up the receiver, hear the dial tone, and make their calls. They use it to call their families, conduct business, purchase goods, and get help or emergency assistance. And they expect it to work all the time. The telephone service we use on a daily basis in our homes is known in the telephony industry as POTS, or plain old telephone service. POTS is delivered to the subscriber through several components (see Exhibit 12-1):

• The telephone handset
• Cabling



Exhibit 12-1. Components of POTS.

• A line card
• A switching device

The telephone handset, or station, is the component with which the public is most familiar. When the customer picks up the handset, the circuit is closed and established to the switch. The line card signals to the processor in the switch that the phone is off the hook, and a dial tone is generated. The switch collects the digits dialed by the subscriber, whether the subscriber is using a pulse phone or Touch-Tone®. A pulse phone alters the voltage on the phone line, which opens and closes a relay at the switch. This is the cause of the clicks or pulses heard on the line. With Touch-Tone dialing, a tone generator at the switch creates the tones for dialing the call. The processor in the switch accepts the digits and determines the best way to route the call to the receiving subscriber. The receiving telephone set may be attached to the same switch, or connected to another halfway around the world. Regardless, the routing of the call happens in a heartbeat due to a very complex network of switches, signaling, and routing. However, the process of connecting the telephone to the switching device, or of connecting switching devices together to increase calling capabilities, uses lines and trunks.

Connecting Things Together

The problem with most areas of technology is terminology. The telephony industry is no different. Trunks and lines both refer to the same thing — the circuitry and wiring used to deliver the signal to the subscriber. The fundamental difference between them is where they are used. Both trunks and lines can be digital or analog. The line is primarily associated with the wiring from the telephone switch to the subscriber (see Exhibit 12-2). This can be either the residential or business subscriber,



Exhibit 12-2. Trunks and lines.

connected directly to the telephone company's switch, or to a PBX. Essentially, the line typically is associated with carrying the communications of a single subscriber to the switch. The trunk, on the other hand, is generally the connection from the PBX to the telephone carrier's switch, or from one switch to another. A trunk performs the same function as the line; the only difference is the amount of calls or traffic the two can carry. Because the trunk is used to connect switches together, the trunk can carry much more traffic and many more calls than the line. The term circuit is often used to describe the connection from one device to the other, without regard for the type of connection, analog or digital, or the devices on either end (station or device).

Analog versus Digital

Both the trunk and the line can carry either analog or digital signals, although they can only carry one type at a time. Conceptually, the connection from origin to destination is called a circuit, and there are two principal circuit types. Analog circuits are used to carry voice traffic and digital signals after conversion to sounds. While analog is traditionally associated with voice circuits, many voice calls are made and processed through digital equipment. However, the process of analog/digital conversion is an intense technical discussion and is not described here. An analog circuit uses the variations in amplitude (volume) and frequency to transmit the information from one caller to the other. The circuit has an available bandwidth of 64K, although 8K of the available bandwidth is used for signaling between the handset and the switch, leaving 56K for the actual voice or data signals.
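The 64K/8K/56K figures line up with standard DS0 channel arithmetic, sketched below. Note the derivation is an assumption on my part; the chapter itself does not break the numbers down, and attributing the 8K overhead to in-band (robbed-bit) signaling is the common explanation rather than something the text states.

```python
# DS0 channel arithmetic behind the 64K / 8K / 56K figures.
samples_per_second = 8000        # voice sampled 8000 times per second
bits_per_sample = 8              # 8-bit samples
ds0_bandwidth = samples_per_second * bits_per_sample   # 64,000 bps total

# With in-band signaling, effectively one bit per sample is unavailable
# to the subscriber, so only 7 of the 8 bits carry voice or data.
signaling_overhead = samples_per_second * 1            # 8,000 bps
usable = ds0_bandwidth - signaling_overhead            # 56,000 bps

print(ds0_bandwidth, signaling_overhead, usable)
```

Running the sketch prints 64000 8000 56000, matching the figures in the text and, as the next paragraph notes, the ceiling of a 56K modem.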


Think about connecting a computer modem to a phone line. The maximum available speed the modem can function at is 56K. The rationale for the 56K modem should be obvious now. However, most people know a modem connection is rarely made at 56K due to the quality of the circuit, line noise, and the distance from the subscriber to the telephone carrier's switch. Modems are discussed again later in the chapter.

Because analog lines carry the actual voice signals for the conversation, they can be easily intercepted. Anyone with more than one phone in his or her house has experienced the problem of eavesdropping: anyone who can access the phone circuit can listen to the conversation. A phone tap is not really required — only knowledge of which wires to attach to and a telephone handset. However, despite the problem associated with eavesdropping, many people do not concern themselves too much with the possibility that someone may be listening to their phone call.

The alternative to analog is digital. While the analog line uses sound to transmit information, the digital circuit uses digital signals to represent data. Consequently, digital circuit technologies are capable of carrying significantly higher speeds as the bandwidth increases on the circuit. Digital circuits offer a number of advantages. They can carry higher amounts of data traffic and more simultaneous telephone calls than an analog circuit. They offer better protection from eavesdropping and wiretapping due to their design. However, despite the digital signal, any telephone station sharing the same circuit can still eavesdrop on the conversation without difficulty.

The circuits are not the principal cause of security problems. Rather, the concern for most enterprises and individuals arises from the unauthorized and inappropriate use of those circuits. Lines and trunks can be used in many different ways and configurations to provide the required level of service.
Typically, the line connected to a station offers both incoming and outgoing calls. However, this does not have to be the case in all situations.

Direct Inward Dial (DID)

If an outside caller must be connected with an operator before reaching their party in the enterprise, the system is generally called a key switch PBX. However, many PBX systems offer direct inward dial, or DID, where each telephone station is assigned a telephone number that connects the external caller directly to the call recipient. Direct inward dial makes reaching the intended recipient easier because no operator is involved. However, DID also has disadvantages. Modems


connected to DID services can be reached by authorized and unauthorized persons alike. It also makes it easier for individuals to call and solicit information from the workforce without being screened through a central operator or attendant.

Direct Outward Dial (DOD)

Direct outward dial is exactly the opposite of DID. Some PBX installations require the user to select a free line on their phone or access an operator to place an outside call. With DOD, the caller picks up the phone, dials an access code, such as the digit 9, and then the external phone number. The call is routed to the telephone carrier and connected to the receiving party.

The telephone carrier assembles the components described here to provide service to its subscribers. The telephone carriers then interconnect their systems through gateways to provide the public switched telephone network.

THE PUBLIC SWITCHED TELEPHONE NETWORK (PSTN)

The public switched telephone network is a collection of telephone systems maintained by telephone carriers to provide a global communications infrastructure. It is called the public switched network because it is accessible to the general public and it uses circuit-switching technology to connect the caller to the recipient. The goal of the PSTN is to connect the two parties as quickly as possible, using the shortest possible route. However, because the PSTN is dynamic, it can often configure and route the call over a more complex path to achieve the call connection on the first attempt. While this is extremely complex on a national and global scale, enterprises use a smaller version of the telephone carrier switch called a PBX (or private branch exchange).

THE PRIVATE AREA BRANCH EXCHANGE (PABX)

The private area branch exchange, or PABX, is also commonly referred to as a PBX, and you will see the terms used interchangeably. The PBX is effectively a telephone switch for an enterprise; and, like the enterprise, it comes in different sizes.
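The DOD access-code convention described above can be sketched in a few lines. This is a hedged illustration — the prefixes, extension length, and function name are invented, not taken from any real PBX:

```python
# Illustrative sketch: how a PBX with direct outward dial might classify
# a dialed digit string, where "9" selects an outside trunk.
def classify_dialed(digits, outside_access="9", extension_len=4):
    if digits.startswith(outside_access):
        # strip the access code and hand the rest to the carrier
        return ("external", digits[len(outside_access):])
    if len(digits) == extension_len:
        return ("internal", digits)
    return ("invalid", digits)

print(classify_dialed("95551234"))  # ('external', '5551234')
print(classify_dialed("4321"))      # ('internal', '4321')
```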
The PBX provides the line card, call processor, and some basic routing. The principal difference is how the PBX connects to the telephone carrier's network. If we compare the PBX to a router in a data network connecting to the Internet, both devices know only one route to send information, or telephone calls, to points outside the network (see Exhibit 12-3).



[Diagram: an enterprise PBX connected to the telephone company switch.]

Exhibit 12-3. PBX connection.

Exhibit 12-4. Network class-of-service levels.

Level   Internal   Local Seven-    Local Ten-      Domestic Long   International
                   Digit Dialing   Digit Dialing   Distance        Long Distance
  1        X
  2        X            X               X
  3        X            X               X                X
  4        X            X               X                X                X

The PBX has many telephone stations connected to it, like the telephone carrier's switch. The PBX knows how to route calls to the stations connected directly to the same PBX. A call for an external telephone number is routed to the carrier's switch, which then processes the call and routes it to the receiving station. Both devices have similar security issues, although the telephone carrier has specific concerns: the telephone communications network is recognized as a critical infrastructure element, and there is liability associated with failing to provide service. The enterprise rarely has to deal with these issues; however, the enterprise that fails to provide sufficient controls to prevent the compromise of its PBX may also face specific liabilities.

Network Class of Service (NCOS)

Each station on the PBX can be configured with a network class of service, or NCOS. The NCOS defines the type of calls the station can make. Exhibit 12-4 illustrates different NCOS levels. Examining the table, we can see that each class of service offers new abilities for the user at the phone station. Typically, class of service is assigned to the station and not the individual, because few phone systems require user authentication before placing a call.
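The cumulative permissions in Exhibit 12-4 can be modeled in a few lines. This is only a sketch of the NCOS idea — the names and the table encoding are mine, not any vendor's configuration syntax:

```python
# Modeling the NCOS levels of Exhibit 12-4: each level permits a
# cumulative prefix of the call types, and a station inherits the
# permissions of its assigned level.
CALL_TYPES = ["internal", "local_7digit", "local_10digit",
              "domestic_ld", "international_ld"]

# Level N permits the first k call types, mirroring the exhibit's X marks.
NCOS = {1: 1, 2: 3, 3: 4, 4: 5}

def may_place(ncos_level, call_type):
    allowed = CALL_TYPES[:NCOS[ncos_level]]
    return call_type in allowed

# A lobby phone gets the most restrictive class of service.
print(may_place(1, "internal"))          # True
print(may_place(1, "domestic_ld"))       # False
print(may_place(4, "international_ld"))  # True
```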


NOTE: Blocking specific phone numbers or area codes, such as 976, 900, or 809, is not done at the NCOS level but through other call-blocking methods available in the switch.

Through assigning NCOS to various phones, some potential security problems can be avoided. For example, if your enterprise has a phone in the lobby, it should be configured with a class of service low enough to allow calls to internal extensions or local calls only. Long distance should not be permitted from any open-area phone due to the cost associated with those calls. In some situations, it may be desirable to limit the ability of a phone station to receive calls while still allowing outgoing calls. This can be defined as another network class of service, without affecting the capabilities of the other stations. However, not all PBX systems have this feature. If your enterprise's systems have it, it should be configured to allow employees to make only the calls required for their specific job responsibilities.

VOICEMAIL

Voicemail is ubiquitous in communications today. However, voicemail is often used as the path to the telephone system and free phone calls for the attacker — and toll fraud for the system owner. Voicemail is used for recording telephone messages for users who are not available to answer their phones. The user accesses messages by entering an identifier, which is typically their phone extension number, and a password.

Voicemail problems typically revolve around password management. Because voicemail must work with the phone, the password can only contain digits. This means attacking the password is relatively trivial from the attacker's perspective. Consequently, the traditional password and account management issues exist here as in other systems:

• Passwords the same as the account name
• No password complexity rules
• No password aging or expiry
• No account lockout
• Other voicemail configuration issues

A common configuration problem is through-dialing. With through-dialing, the system accepts a phone number and places the call. The feature can be restricted to allow only internal or local numbers, or it can be disabled entirely. If through-dialing is allowed and not properly configured, the enterprise pays the bills for the long-distance or other toll calls made.
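A restriction of through-dialing to internal numbers might look like the following sketch. The prefixes, extension length, and function name are invented for illustration only:

```python
# Hedged sketch: restricting voicemail through-dialing to internal
# extensions only, refusing trunk access codes and toll numbers.
def allow_through_dial(number, internal_prefixes=("2", "3"), extension_len=4):
    # permit only digit strings that look like internal extensions
    return (number.isdigit()
            and len(number) == extension_len
            and number.startswith(internal_prefixes))

print(allow_through_dial("3402"))          # True -- internal extension
print(allow_through_dial("914165551212"))  # False -- external toll call
```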


Attackers use stale mailboxes — those that have not been accessed in a while — to attempt to gain access to the mailbox. If the mailbox password is obtained, and the voicemail system is configured to allow through-dialing, the attackers are now making free calls.

One trick is for the attacker to change the greeting on the mailbox to a simple "yes." Now, any collect call made through an automated system expecting the word response "yes" is automatically accepted, and the enterprise pays the cost of the call.

For through-dialing, the attacker enters the account identifier, typically the phone extension for the mailbox, and the password. Once authenticated by the voicemail system, the attacker enters the appropriate code and phone number for the external through-call. If there are no restrictions on the digits available, the attacker can dial any phone number anywhere in the world.

The scenario depicted here can be avoided using simple techniques applicable to most systems:

• Change the administrator and attendant passwords.
• Do not use the extension number as the initial password.
• Disable through-dialing.
• Configure voicemail to use a minimum of six digits for the password.
• Enable password history options if available.
• Enable password expiration if available.
• Remove stale mailboxes.
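Parts of this checklist can be automated in an audit script. A hedged sketch — the function and field names are mine, and a real audit would pull these values from the voicemail platform:

```python
# Audit-style check of a voicemail mailbox PIN against the guidance above.
def audit_mailbox(extension, pin, min_length=6):
    findings = []
    if not pin.isdigit():
        findings.append("PIN must be numeric (phone keypad only)")
    if pin == extension:
        findings.append("PIN equals extension (trivial to guess)")
    if len(pin) < min_length:
        findings.append(f"PIN shorter than {min_length} digits")
    if len(set(pin)) == 1:
        findings.append("PIN is a single repeated digit")
    return findings

print(audit_mailbox("4321", "4321"))    # flags PIN == extension and length
print(audit_mailbox("4321", "908273"))  # []
```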

Properly configured, voicemail is a powerful tool for the enterprise, as are the data network and voice conferencing.

VOICE CONFERENCING

Many enterprises use conference calls to conduct business regularly. In the current economic climate, many enterprises use conference calls as the cost-efficient alternative to travel for meetings across disparate locations. The conference call uses a "bridge," which accepts the calls and determines which conference the caller is to be routed to, based upon the phone number and the conference call password.

The security options available to the conference call bridge are technology dependent. Regardless, participants on the conference call should be reminded not to discuss enterprise-sensitive information because anyone who acquires or guesses the conference call information could join the call. Consequently, conference call participant information should be protected to limit participation.

Conference bridges are used for single-time, repetitive, and ad hoc calls using various technologies. Some conference call vendors provide services allowing anyone in the enterprise to have an on-demand conference bridge. These conference bridges use a "host" or chairperson who must be


present to start the conference call. The chairperson has a second passcode, used to initiate the call. Any user who learns the host or chairperson code can use the bridge at any time. Security issues regarding conference bridges include:

• Loss of the chairperson code
• Unauthorized use of the bridge
• Inappropriate access to the bridge
• Loss of sensitive information on the bridge

All of these issues are addressed through proper user awareness — which is fortunate because few enterprises actually operate their own conference bridge, relying instead upon the telephone carrier to maintain the configurations. If possible, the conference bridge should be configured with the following settings and capabilities:

• The conference call cannot start until the chairperson is present.
• All participants should be disconnected when the chairperson disconnects from the bridge.
• The chairperson should have the option of specifying a second security access code to enter the bridge.
• The chairperson should have commands available to manipulate the bridge, including counting the number of ports in use, muting or un-muting the callers, locking the bridge, and reaching the conference operator.

The chairperson's commands are important for the security of the conference call. Once all participants have joined, the chairperson should verify everyone is there and then lock the bridge. This prevents anyone else from joining the conference call.

SECURITY ISSUES

Throughout the chapter, we have discussed technologies and security issues. However, regardless of the specific configuration of the phone system your enterprise is using, there are some specific security concerns you should be aware of.

Toll Fraud

Toll fraud is a major concern for enterprises, individuals, and the telephone carriers. Toll fraud occurs when toll-based or chargeable telephone calls are fraudulently made. There are several methods of toll fraud, including inappropriate use by authorized users, theft of services, calling cards, and direct inward dialing to the enterprise's communications system.


According to a 1998 Consumer News report, about $4 billion is lost to toll fraud annually. The report is available online at http://www.fcc.gov/Bureaus/Common_Carrier/Factsheets/ttf&you.pdf. The cost of the fraud is eventually passed on to businesses and consumers through higher communications costs. In some cases, the telephone carrier holds the subscriber responsible for the charges, which can be devastating. Consequently, enterprises can pay for toll fraud insurance, which pays the telephone carrier after the enterprise pays the deductible. While toll fraud insurance sounds appealing, it is expensive and the deductibles are generally very high.

It is not impossible to identify toll fraud within your organization. If you have a small enterprise, simply monitoring the phone usage of the various people should be enough to identify calling patterns. For larger organizations, it may be necessary to get calling information from the PBX for analysis. For example, if you can capture the call records from each telephone call, it is possible to assign a cost to each telephone call.

Inappropriate Use of Authorized Access

Every employee in an enterprise typically has a phone on the desk, or access to a company-provided telephone. Most employees have the ability to make long-distance toll calls from their desks. While most employees make long-distance calls on a daily basis as part of their jobs, many will not think twice about making personal long-distance calls at the enterprise's expense. Monitoring and preventing this type of usage is difficult for the enterprise. Calling patterns, frequently-called-number analysis, and advising employees of their monthly telecommunications costs are a few ways to combat this problem. Additionally, corporate policies regarding the use of corporate telephone services and penalties for inappropriate use should be established if your enterprise does not have them already.
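The call-record costing described above can be sketched as a small analysis pass over call detail records. The record layout, rate table, and threshold here are invented for illustration; real CDR formats are PBX-specific:

```python
# Minimal sketch of CDR (call detail record) analysis: cost each call,
# total per extension, and flag extensions over a monthly limit.
RATE_PER_MIN = {"local": 0.00, "domestic_ld": 0.05, "international_ld": 0.50}

def call_cost(record):
    # record: (extension, call_type, minutes)
    _, call_type, minutes = record
    return RATE_PER_MIN[call_type] * minutes

def flag_heavy_callers(records, monthly_limit=100.0):
    totals = {}
    for rec in records:
        totals[rec[0]] = totals.get(rec[0], 0.0) + call_cost(rec)
    return {ext for ext, cost in totals.items() if cost > monthly_limit}

records = [("x1001", "domestic_ld", 30),
           ("x1002", "international_ld", 300),
           ("x1001", "local", 120)]
print(flag_heavy_callers(records))  # {'x1002'}
```

Even this crude pass surfaces the kind of outlier — one extension running up international toll charges — that manual review of a large PBX would miss.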
Finally, many organizations use billing or authorization codes when making long-distance phone calls to track the usage and bill the charges to specific departments or clients. However, if your enterprise has its own PBX with conditional toll deny (CTD) as a feature, you should consider enabling this on phone stations where long-distance or toll calls are not permitted. For example, users should not be able to call specific phone numbers or area codes. Alternatively, a phone station may be denied toll-call privileges altogether. However, in Europe, CTD is more difficult to implement because it is not uncommon to call many different countries in a single day; consequently, management of the CTD parameters becomes very difficult. CTD can be configured as a specific option in an NCOS definition, as discussed earlier in the chapter.


Calling Cards

Calling cards are the most common form of toll fraud. Calling-card numbers are stolen and sold on a daily basis around the world. Calling-card theft typically occurs when an individual observes the subscriber entering the number into a public phone. The card number is then recorded by the thief and sold to make other calls. Calling-card theft is a major problem for telephone carriers, who often have dedicated fraud units for tracking thieves and calling-pattern software that monitors usage and alerts the fraud investigators to unusual calling patterns.

In some cases, hotels will print the calling-card number on the invoices provided to their guests, making the numbers available to a variety of people. Additionally, if the PBX is not configured correctly, the calling-card information is shown on the telephone display, making it easy for anyone nearby to see the digits and use the number. Other PBX-based problems include last number redial: if the PBX supports last number redial, any employee can recall the last number dialed and obtain the access and calling-card numbers.

Employees should be aware of the problems and costs associated with the illegitimate use of calling cards. Proper protection while using a calling card includes:

• Shielding the number with your hands when entering it
• Memorizing the number so you do not have a card visible when making the call
• Ensuring your enterprise PBX does not store the digits for last number redial
• Ensuring your enterprise PBX does not display the digits on the phone for an extended period of time

Calling cards provide a method for enterprise employees to call any number from any location. However, some enterprises may decide this is not appropriate for their employees. Consequently, they may offer DISA access to the enterprise phone network as an alternative.

DISA

Direct inward system access, or DISA, is a service available on many PBX systems.
DISA allows a user to dial an access number, enter an authorization code, and appear to the PBX as an extension. This allows callers to make calls as if they were in the office building, whether the calls are internal to the PBX or external to the enterprise. DISA offers some distinct advantages. For example, it removes the need to provide calling cards for your employees because they can call a number and


be part of the enterprise voice network. Additionally, long-distance calls placed through DISA services are billed at the corporate rate because the telephone carrier sees the calls as originating from the enterprise.

DISA's advantages also represent problems. If the DISA access number becomes known, an unauthorized user only needs to try random numbers to find an authorization code. Given enough time, they will eventually find one and start making calls that are free from their perspective; however, your enterprise pays the bill. DISA authorization codes, which must be considered passwords, are numeric only because there is no way to enter alphabetic characters on the telephone keypad. Consequently, even an eight-digit authorization code is easily defeated. If your organization does use DISA, there are some things you can do to help prevent fraudulent use of the service:

• Frequent analysis of calling patterns
• Monthly "invoices" to the DISA subscribers to keep them aware of the service they are using
• Using a minimum of eight-digit authorization codes
• Forcing changes of the authorization codes every 30 days
• Disabling inactive DISA authorization codes if they are not used for a prescribed period of time or a usage limit is reached
• Enabling authorization code alarms to indicate attempts to defeat or guess DISA authorization codes

The methods discussed are often used by attackers to gain access to the phone system and make unauthorized telephone calls. However, technical aspects aside, some of the more skillful attacks occur through social engineering techniques.

SOCIAL ENGINEERING

The most common ploy from a social engineering perspective is to call an unsuspecting person, indicate the attacker is from the phone company, and request an outside line. The attacker then makes the phone call to the desired location, talks for as long as required, and hangs up.
As long as they can find numbers to dial and do not have to go through a central operator, this can go on for months. Another social engineering attack occurs when a caller claims to be a technical support person. The attacker will solicit confidential information, such as passwords, access numbers, or ID information, all under the guise of providing support or maintenance to ensure the user's service is not disrupted. In actuality, the attacker is gathering sensitive


information to better understand the enterprise environment and enable an attack.

OTHER VOICE SERVICES

There are other voice services that also create issues for the enterprise, including modems, fax, and wireless services.

Modems

Modems are connected to the enterprise through traditional technologies using the public switched telephone network. Modems provide a method of connectivity through the PSTN to the enterprise data network. When installed on a DID circuit, the modem answers the phone when an incoming call is received. Attackers have regularly looked for these modems using war-dialing techniques. If your enterprise must provide modems to connect to the enterprise data network, these incoming lines should be outside the enterprise's normal dialing range, making them more difficult for the attacker to find. However, because many end stations are analog, a user could connect a modem to the desktop phone without anyone's knowledge. This is another advantage of digital circuits. While digital-to-analog converters exist to connect a modem to a digital circuit, the technology is not infallible. Should your enterprise use digital circuits to the desktop, you should implement a program to document and approve all incoming analog circuits and their purpose. This is very important for modems due to their connectivity to the data network.

Fax

The fax machine is still used in many enterprises to send information not easily communicated through other means. The fax transmission sends information such as scanned documents to the remote fax system. The principal concern with fax is the lack of control over the document at the receiving end. For example, if a document is sent to a fax machine in a shared area, anyone who checks the machine can read the message. If the information in the fax is sensitive, private, or otherwise classified, control of the information should be considered lost.
A second common problem is misdirected faxes. That is, the fax is successfully transmitted, but to the wrong telephone number. Consequently, the intended recipient does not receive the fax. However, fax can be controlled through various means such as dedicated fax machines in controlled areas. For example,


• Contact the receiver prior to sending the fax.
• Use a dedicated and physically secure fax if the information requires it.
• Use a cover page asking for immediate delivery to the recipient.
• Use a cover page asking for notification if the fax is misdirected.

Fax requires the use of analog lines because it uses a modem to establish the connection. Consequently, the inherent risks of the analog line are applicable here. If an attacker can monitor the line, he may be able to intercept the modem tones from the fax machine and read the fax. Addressing this problem is achieved through encrypted fax if document confidentiality is an ultimate concern. Encrypted fax requires a common or shared key between the two fax machines. Once the connection is established, the document is sent using the shared encryption key and subsequently decoded and printed on the receiving fax machine. If the receiving fax machine does not have the shared key, it cannot decode the fax. Given the higher cost of the encrypted fax machine, it is only a requirement for the most highly classified documents.

Cellular and Wireless Access

Cellular and wireless access to the enterprise is also a problem due to the issues associated with cellular technology. Wireless access in this case does not refer to wireless access to the data network, but rather wireless access to the voice network. This type of access should concern the security professional because the phone user will employ services such as calling cards and DISA to access the enterprise's voice network. Because cellular and wireless access technologies are often subject to eavesdropping, the DISA access codes or calling-card numbers could potentially be retrieved from the wireless caller. The same is true for conversations: if the conversation between the wireless caller and the enterprise user is of a sensitive nature, it should not be conducted over wireless.
Additionally, the chairperson for a conference call should find out if anyone on the call is using a cell phone and determine whether that level of access is appropriate for the topic to be discussed.

VOICE-OVER-IP: THE FUTURE

The next set of security challenges for the telecommunications industry is Voice-over-IP. The basis for the technology is to convert the voice signals to packets, which are then routed over the IP network. Unlike the traditional circuit-switched voice network, Voice-over-IP is a packet-switched


network. Consequently, the same types of problems found in a data network are found in Voice-over-IP technology. There are a series of problems in Voice-over-IP technologies, on which the various vendors are collaborating to establish the appropriate standards to protect the privacy of the Voice-over-IP telephone call. Some of those issues include:

• No authentication of the person making the call
• No encryption of the voice data, allowing anyone who can intercept the packet to reassemble it and hear the voice data
• Quality of service, because the data network has not traditionally been designed to provide the quality-of-service levels associated with the voice network

The complexities in the Voice-over-IP arena, for both the technology and the related security issues, will continue to develop and resolve themselves over the next few years.

SUMMARY

This chapter introduced the basics of telephone systems and security issues. The interconnection of the telephone carriers to establish the public switched telephone network is a complex process. Every individual demands there be a dial tone when they pick up the handset of their telephone. Such is the nature of this critical infrastructure. However, enterprises often consider the telephone their critical infrastructure as well, whether they get their service directly from the telephone carrier or use a PBX, connected to the public network, to provide internal services. The exact configurations and security issues are generally very specific to the technology in use. This chapter has presented some of the risks and prevention methods associated with traditional voice security. The telephone is the easiest way to obtain information from a company, and the fastest method of moving information around in a nondigital form.
Aside from implementing the appropriate configurations for your technologies, the best defense is ensuring your users understand their role in limiting financial and information losses through the telephone network.

Acknowledgments

The author wishes to thank Beth Key, a telecommunications security and fraud investigator from Nortel Networks' voice service department. Ms. Key provided valuable expertise and support during the development of this chapter.


Mignona Cote of Nortel Networks' security vulnerabilities team provided her experiences as an auditor in a major U.S. telecommunications carrier prior to joining Nortel Networks. Both of these remarkable women contributed to the content of this chapter and are examples of the quality and capabilities of the women in our national telecommunications industry.

References

PBX Vulnerability Analysis: Finding Holes in Your PBX before Someone Else Does, U.S. Department of Commerce, NIST Special Publication 800-24, http://csrc.nist.gov/publications/nistpubs/800-24/sp800-24pbx.pdf.
Security for Private Branch Exchange Systems, http://csrc.nist.gov/publications/nistbul/itl00-08.txt.

ABOUT THE AUTHOR

Chris Hare, CISSP, CISA, is an information security and control consultant with Nortel Networks in Dallas, Texas. A frequent speaker and author, his experience includes application design, quality assurance, systems administration and engineering, network analysis, and security consulting, operations, and architecture.



Chapter 13

Secure Voice Communications (VoI)

Valene Skerpac, CISSP

Voice communications is in the midst of an evolution toward network convergence. Over the past several decades, the coalescence of voice and data through the circuit-based, voice-centric public switched telephone network (PSTN) has been limited. Interconnected networks exist today, each maintaining its own set of devices, services, service levels, skill sets, and security standards. These networks anticipate the inevitable and ongoing convergence onto packet- or cell-based, data-centric networks primarily built for the Internet. Recent deregulation changes and cost savings, as well as the potential for new media applications and services, are now driving a progressive move toward voice over some combination of ATM, IP, and MPLS. This new generation network aims to include novel types of telephony services that utilize packet-switching technology to gain transmission efficiencies while also allowing voice to be packaged in more standard data applications. New security models that include encryption and security services are necessary in telecommunication devices and networks.

This chapter reviews architectures, protocols, features, quality-of-service (QoS), and security issues associated with traditional circuit-based landline and wireless voice communication. The chapter then examines convergence architectures, the effects of evolving standards-based protocols, new quality-of-service methods, and related security issues and solutions.

CIRCUIT-BASED PSTN VOICE NETWORK

The PSTN has existed in some form for over 100 years. It includes telephones, local and interexchange trunks, transport equipment, and



exchanges; and it represents the whole traditional public telephone system. The foundation of the PSTN is dedicated 64-kbps circuits. Two kinds of 64-kbps pulse code modulation techniques are used to encode human analog voice signals into digital streams of 0s and 1s: mu-law, the North American standard, and a-law, the European standard.

The PSTN consists of the local loop that physically connects buildings via landline copper wires to an end office switch called the central office or Class 5 switch. Communication between central offices connected via trunks is performed through a hierarchy of switches related to call patterns. Many signaling techniques are utilized to perform call control functions. For example, analog connections to the central office use dual-tone multifrequency (DTMF) signaling, an in-band signaling technique transmitted over the voice path. Central office connections through a T1/E1 or T3/E3 use in-band signaling techniques such as MF or robbed bit.

After World War II, the PSTN experienced high demand for greater capacity and increased function. This initiated new standards efforts, which eventually led to the organization in 1956 of the CCITT, the Comité Consultatif International Télégraphique et Téléphonique, now known as the ITU-T, the International Telecommunication Union Telecommunication Standardization Sector. Recommendations known as Signaling System 7 (SS7) were created, and in 1980 a version was completed for implementation. SS7 is a means of sending messages between switches for basic call control and for custom local area signaling services (CLASS). The move to SS7 represented a change to common-channel signaling from its predecessor, per-trunk signaling. SS7 is fundamental to today's networks. Essential architectural aspects of SS7 include, first, a packet data network that controls and operates on top of the underlying voice networks.
Second, a completely separate transmission path is utilized for signaling than for voice and data traffic. The signaling system is a packet network optimized to speedily manage many signaling messages over one channel; it supports required functions such as call establishment, billing, and routing. Architecturally, the SS7 network consists of three components, as shown in Exhibit 13-1: service switch points (SSPs), service control points (SCPs), and signal transfer points (STPs). SSP switches originate and terminate calls, communicating with customer premise equipment (CPE) to process calls for the user. SCPs are centralized nodes that interface with the other components through the STP to perform functions such as digit translation, call routing, and verification of credit cards. SCPs manage the network configuration and call-completion database to perform the required service logic. STPs translate and route SS7 messages to the appropriate network nodes and databases. In addition to the SS7 signaling data link, there are a number of other SS7

AU1518Ch13Frame Page 193 Thursday, November 14, 2002 6:20 PM

Secure Voice Communications (VoI)

Exhibit 13-1. Diagram of SS7 key components and links.

links between the SS7 components; certain of these links provide the redundancy that helps ensure a reliable SS7 network. Functional benefits of SS7 networks include reduced post-dialing delay, increased call completion, and connection to the intelligent network (IN). SS7 supports databases shared among switches, providing the groundwork for IN network-based services such as 800 services and advanced intelligent networks (AINs). SS7 enables interconnection and enhanced services, making the whole next generation and convergence possible. The PSTN assigns a unique number to each telephone line. There are two numbering plans: the North American numbering plan (NANP) and the ITU-T international numbering plan. NANP is an 11-digit (1+10) dialing plan, whereas an ITU-T international number is no more than 15 digits, depending on the needs of the country. Commonly available PSTN features are call waiting, call forwarding, and three-way calling. With SS7 end to end, CLASS features such as ANI, call blocking, calling line ID blocking, automatic callback, and call return (*69) become available. Interexchange carriers (IXCs) sell business features including circuit-switched long distance, calling cards, 800/888/877 numbers, VPNs (where the telephone company manages a private dialing plan), private leased lines, and virtual circuits (Frame Relay or ATM). Security features may include line restrictions, employee authorization codes, virtual access to private networks, and detailed call records to track unusual activity. The PSTN is mandated to perform emergency services. Basic U.S. 911 service relays the calling party's telephone number to public safety answering points (PSAPs). Enhanced 911 requirements include the location of the calling party, with some mandates as stringent as locating the handset within 50 meters. The traditional enterprise private branch exchange (PBX) is crucial to the delivery of high availability, quality voice, and associated features to the end user.
It is a sophisticated proprietary computer-based switch that operates as a small, in-house phone company with many features and external access and control. The PBX architecture separates switching and administrative functions, is designed for 99.999 percent reliability, and often integrates with a proprietary voicemail system. Documented PBX threats and baseline security methods are well known and can be referenced in PBX Vulnerability Analysis, NIST Special Publication 800-24. Threats to the PBX include toll fraud, eavesdropping on conversations, unauthorized access to routing and address data, alteration of billing information and system tables to gain additional services, denial-of-service attacks, and passive traffic analysis. Voice messages are also prone to eavesdropping and accidental or deliberate forwarding. Baseline security policies and control methods, which to a certain extent depend on the proprietary equipment, need to be implemented. Control methods include manual assurance of database integrity, physical security, operations security, management-initiated controls, PBX system control, and PBX system terminal access control such as password control. Many telephone and system configuration practices need to be developed and adhered to. These include blocking well-known non-call areas or numbers, restart procedures, software update protection using strong cryptographic error detection, proper routing through the PBX, disabling open ports, and configuration of each of the many PBX features. User quality-of-service (QoS) expectations of basic voice service are quite high in the area of availability. When people pick up the telephone, they expect a dial tone. Entire businesses depend on basic phone service, making availability of service critical. Human voice interaction requires delays of no more than 250 milliseconds.
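One of the configuration practices above, blocking well-known non-call areas or numbers, can be illustrated with a simple deny-list screen. The prefixes below are hypothetical examples only; a real PBX applies such rules in its proprietary routing tables rather than in application code.

```python
# Hypothetical deny-list screen for outbound dialing, illustrating the
# "block well-known non-call areas or numbers" practice. The prefixes are
# illustrative examples (premium-rate and international), not a recommended list.
BLOCKED_PREFIXES = ("1900", "1976", "011")

def is_call_allowed(dialed: str) -> bool:
    digits = "".join(ch for ch in dialed if ch.isdigit())
    return not any(digits.startswith(p) for p in BLOCKED_PREFIXES)

print(is_call_allowed("1-900-555-0199"))  # premium-rate prefix: blocked
print(is_call_allowed("1-212-555-0100"))  # ordinary call: allowed
```

The same pattern, a small, auditable rule set consulted on every call attempt, underlies the toll-fraud controls and detailed call-record review mentioned above.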
Carriers experienced fraud prior to the proliferation of SS7 out-of-band signaling for communicating call establishment and billing information between switches. Thieves attached a box that generated the appropriate signaling tones, permitting a perpetrator to take control of signaling between switches and defeat billing. SS7 enhanced security and prevented this unauthorized use. Within reasonable limitations, PSTN carriers have maintained closed circuit-based networks that are not open to public protocols except under legal agreements with specified companies. In the past, central offices depended on physical security, password-protected system access, a relatively small set of trained individuals working with controlled network information, network redundancy, and deliberate change control. U.S. telephone carriers are subject to the Communications Assistance for Law Enforcement Act (CALEA) and need to provide access points and certain information when a warrant has been issued for authorized wiretapping.


The network architecture and central office controls described above minimized security exposures, ensuring that high availability and QoS expectations were essentially met. Securing the entire PSTN is not affordable, yet certain government and commercial users require it. The only method to implement a secure path between two telephones at arbitrary locations is to encrypt the words spoken into one telephone and decrypt them as they come out of the other. Such a secure path has never been broadly available to commercial users at reasonable cost. PSTN voice scramblers have existed since the 1930s, but the equipment was large, complicated, and costly. By the 1960s, the KY-3 came to market as one of the first practical voice encryption devices. The secure telephone unit, first generation (STU-I) was introduced in 1970, followed in 1975 by the STU-II, used by approximately 10,000 users. In 1987, the U.S. National Security Agency (NSA) approved the STU-III and made secure telephone service available to defense contractors, with multiple vendors such as AT&T, GE, and Motorola offering user-friendly deskset telephones for less than U.S. $2,000. During the 1990s, systems came to market such as STE (an ISDN version of STU offered by L3 Communications), the AT&T Clipper phone, the Australian Speakeasy, and the British Brent telephone. Also available today are commercial security telephones, or devices inserted between the handset and telephone, that provide encryption at costs ranging from U.S. $100 to $2,000, depending on overall capability.

WIRELESS VOICE COMMUNICATION NETWORKS

Wireless technology in radio form is more than 100 years old. Radio transmission is the induction of an electrical current at a remote location, intended to communicate information, whereby the current is produced via the propagation of an electromagnetic wave through space.
The wireless spectrum is a space that the world shares, and there are several methods for efficient spectrum reuse. First, the space is partitioned into smaller coverage areas, or cells, for the purpose of reuse. Second, a multiple access technique is used to allow the sharing of the spectrum among many users. After the space has been specified and multiple users can share a channel, spread spectrum, duplexing, and compression techniques are applied to use the bandwidth with even better efficiency. Digital cellular systems use time division multiple access (TDMA) and code division multiple access (CDMA) techniques. TDMA first splits the frequency spectrum into a number of channels and then applies time division multiplexing to serve multiple users interleaved in time. TDMA standards include the Global System for Mobile Communications (GSM), Universal Wireless Communications (UWC), and Japanese Digital Cellular (JDC). CDMA employs universal frequency reuse, whereby everybody utilizes the

Exhibit 13-2. Digital cellular architecture.

same frequency at the same time and each conversation is uniquely encoded, providing greater capacity than other techniques. First-generation CDMA standards and second-generation wideband CDMA (WCDMA) both use a unique code for each conversation and a spread spectrum method. WCDMA uses wider channels, providing greater call capacity, and longer encoding strings than CDMA, increasing security and performance. Multiple generations of wireless WANs have evolved in a relatively short period of time. The first-generation network used analog transmission and was launched in Japan in 1979. By 1992, second-generation (2G) digital networks were operational at speeds primarily up to 19.2 kbps. Cellular networks are categorized as analog and digital cellular, whereas PCS, a shorter-range, low-power technology, was digital from its inception. Today, cellular networks have evolved to the intermediate 2.5G network, which provides enhanced data services on present 2G digital platforms. The third-generation (3G) network is fully digital; it also provides an always-on per-user and terminal connection that supports multimedia broadband applications and data speeds of 144 kbps to 384 kbps, potentially up to 2 Mbps in certain cases. The 3G standards are being developed in Europe and Asia, but worldwide deployment has been slow due to large licensing and build costs. Many competing cellular standards impede the overall proliferation and interoperability of cellular networks. The digital cellular architecture, illustrated in Exhibit 13-2, resembles the quickly disappearing analog cellular network yet is expanded to provide greater capacity, improved security, and roaming capability. A base transceiver station (BTS), which services each cell, is the tower that transmits signals to and from the mobile unit. Given the large number of cells required to meet today's capacity needs, a base station controller (BSC) is used to control a set of base transceiver stations. The base station controllers provide information to the mobile switching center (MSC), which accesses databases that enable roaming, billing, and interconnection. The mobile switching center interfaces with a gateway mobile switching center that interconnects with the PSTN. The databases that make roaming and security possible are the home location register, visitor location register, authentication center, and equipment identity register. The home location register maintains subscriber information, with more extensive management required for subscribers registered to that mobile switching center area. The visitor location register logs and periodically forwards information about calls made by roaming subscribers for billing and other purposes. The authentication center is associated with the home location register; it protects the subscriber from unauthorized access, delivering security features including encryption and customer identification. The equipment identity register manages a database of equipment, keeping track of stolen or blacklisted devices. Prior to digital cellular security techniques, there was a great deal of toll fraud. Thieves stood on busy street corners, intercepted electronic identification numbers and phone numbers, and then cloned chips. The digitization of identification information allowed it to be encrypted, enhancing security. Policies and control methods are required to further protect against cellular phone theft. Methods include requiring an encrypted PIN code for telephone access and blocking areas or numbers.
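The CDMA spreading described earlier, in which every conversation shares the same frequency but carries a unique code, can be illustrated with a toy example using orthogonal Walsh codes. This is a sketch only; real CDMA systems add long pseudo-noise scrambling, power control, and rake reception.

```python
# Toy illustration of CDMA direct-sequence spreading: each user's bits are
# spread by an orthogonal Walsh code, and the receiver despreads the summed
# channel by correlating with its own code.
WALSH = {  # length-4 Walsh codes, expressed as +/-1 chips
    "A": [1, 1, 1, 1],
    "B": [1, -1, 1, -1],
}

def spread(bits, code):
    # Map bit 0 -> -1 and bit 1 -> +1, then multiply by each chip of the code.
    return [(1 if b else -1) * c for b in bits for c in code]

def despread(chips, code):
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(x * c for x, c in zip(chips[i:i + n], code))
        out.append(1 if corr > 0 else 0)
    return out

# Two users transmit simultaneously; the air interface sums their chips.
tx_a, tx_b = spread([1, 0, 1], WALSH["A"]), spread([0, 0, 1], WALSH["B"])
channel = [a + b for a, b in zip(tx_a, tx_b)]
print(despread(channel, WALSH["A"]))  # -> [1, 0, 1] (user A recovered)
print(despread(channel, WALSH["B"]))  # -> [0, 0, 1] (user B recovered)
```

Because the codes are orthogonal, each receiver's correlation cancels the other user's signal, which is the property that gives CDMA both its capacity and the privacy advantage noted below.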
Privacy across the air space is improved using digital cellular compression and encoding techniques; CDMA encoding offers the greatest protection of the techniques discussed. Despite security improvements in commercial cellular networks, end-to-end security remains a challenge. Pioneering efforts for many of the digital communication, measurement, and data techniques available today were performed in a successful attempt to secure voice communication using FSK-FDM radio transmission during World War II. The SIGSALY system was first deployed in 1943 by Bell Telephone Laboratories, which had begun investigating encoding techniques in 1936 to change voice signals into digital signals and then reconstruct the signals into intelligible voice. The effort was spurred on by the U.K. and U.S. allies, who needed a replacement for the vulnerable transatlantic high-frequency analog radio voice system called A-3. SIGSALY was a twelve-channel system; ten channels each measured the power of the voice signal in a portion of the voice frequency spectrum between 250 and 3000 Hz, and two channels provided information about the pitch of the speech and the presence of unvoiced (hiss) energy. Encryption keys were generated from thermal noise (the output of mercury-vapor rectifier vacuum tubes) sampled every 20 milliseconds and quantized into six levels of equal probability. The level information was converted into channels of a frequency-shift-keyed audio tone signal, which represented the encryption key, and was then recorded on three hard vinyl phonograph records. The physical transportation and distribution of the records provided key distribution. In the 1970s, U.S. Government wireless analog solutions for high-grade end-to-end encryption and authentication became available, though still at a high cost compared to commercial offerings. Secure telephone solutions included STU-III-compatible devices, Motorola products, and CipherTac2K. STU-III experienced compatibility problems with 2G and 3G networks. This led to the future narrow-band digital terminal (FNBDT), a digital secure voice protocol operating at the transport layer and above for most data/voice network configurations across multiple media, and the mixed excitation linear prediction (MELP) vocoder, an interoperable 2400-bps vocoder specification. Most U.S. Government personnel use commercial off-the-shelf solutions for sensitive-but-unclassified communications that rely on the commercial wireless cellular infrastructure.

NETWORK CONVERGENCE

Architecture

Large cost-saving potentials and the promise of future capabilities and services drive the move to voice over a next-generation network. New SS7 switching gateways are required to support legacy services and signaling features and to handle a variety of traffic over a data-centric infrastructure. In addition to performing popular IP services, the next-generation gateway switch needs to support interoperability between PSTN circuits and packet-switching networks such as IP backbones, ATM networks, Frame Relay networks, and emerging Multi-Protocol Label Switching (MPLS) networks.
A number of overlapping multimedia standards exist, including H.323, the Session Initiation Protocol (SIP), and the Media Gateway Control Protocol (MGCP). In addition to the telephony-signaling protocols encompassed within these standards, network elements that facilitate VoIP include VoIP gateways, the Internet telephony directory, media gateways, and softswitches. An evolution and blending of protocols and of gateway and switch functions continues in response to vendors' competitive searches for market dominance. Consider a standard voice call initiated by a user located in a building connected to the central office. The central office links to an SS7 media gateway switch that can use the intelligence within the SS7 network to add the information required to place the requested call. The call then continues on a packet basis through switches or routers until it reaches a

Exhibit 13-3. VoIP network architecture.

destination media gateway switch, where the voice is unpackaged, converted back from digital form, and sent to the called phone. Voice-over-IP (VoIP) changes voice into packets for transmission over a TCP/IP network. VoIP gateways connect the PSTN and the packet-switched Internet and manage the addressing across networks so that PCs and phones can talk to each other. Exhibit 13-3 illustrates the major VoIP network components. The VoIP gateway performs packetization and compression of the voice, enhancement of the voice through voice techniques, DTMF signaling capability, voice packet routing, user authentication, and call detail recording for billing purposes. Many solutions exist, such as enterprise VoIP gateway routers, IP PBXs, service-provider VoIP gateways, VoIP access concentrators, and SS7 gateways. The overlapping functionality of the different types of gateways will blur further as mergers and acquisitions continue. When the user dials a number from a VoIP telephone, the VoIP gateway communicates the number to the server; the call-agent software (softswitch) determines the IP address for the destination call number and presents the IP address back to the VoIP gateway. The gateway converts the voice signal to IP format, adds the address of the destination node, and sends the signal. The softswitch can be used again if enhanced services are required for additional functions. Media gateways interconnect with the SS7 network, enabling interoperability between the PSTN and packet-switched domains. They handle IP services and support various telephony-signaling protocols and Class 4 and Class 5 services. Media servers include categories of VoIP trunking gateways, VoIP access gateways, and network access service devices. Vocoders compress and transmit audio over the network; they are another evolving area of standards for Voice-over-the-Internet (VoI). Vocoders used for VoI such as G.711 (a high-bit-rate coder at 48, 56, and 64 kbps) and G.723 (a low-bit-rate coder at 5.3 and 6.3 kbps) are based on existing standards created for digital telephony applications, limiting the telephony signal band to 200-3400 Hz with 8-kHz sampling. This toll-quality audio is geared to the minimum a human ear needs to recognize speech and is not nearly that of face-to-face communication. With VoIP in a wideband IP end-to-end environment, better vocoders are possible that can achieve more transparent communication and better speaker recognition. Newer ITU vocoders, such as G.722.1 operating at 24-kbps and 32-kbps rates with a 16-kHz sampling rate, are now used in some IP phone applications. The Third-Generation Partnership Project (3GPP)/ETSI (for GSM and WCDMA) converged on the adaptive multi-rate wideband (AMR-WB) coder at 50-7000 Hz bandwidth to form the newly approved ITU G.722.2 standard, which provides better voice quality at reduced bit rates and allows a seamless interface between VoIP systems and wireless base stations. This eliminates the normal degradation of voice quality between vocoders of different systems.

Numbering

The Internet telephony directory, an IETF RFC known as ENUM services, is an important piece in the evolving VoI solution. ENUM is a standard for mapping telephone numbers to IP addresses, a scheme wherein DNS maps PSTN phone numbers to appropriate URLs based on the E.164 standard. To enable a faster time to market, VoIP continues to introduce new features and service models supporting the PSTN and associated legacy standards. For example, in response to DTMF tone issues, the IETF RFC RTP Payload for DTMF Digits, Telephony Tones and Telephony Signals evolved, which specifies how to carry and format tones and events using RTP. In addition to incorporating traditional telephone features and new integrated media features, VoIP networks need to provide emergency services and comply with law enforcement surveillance requirements.
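The ENUM mapping from an E.164 number into the DNS follows a simple convention defined in the IETF ENUM RFC: strip non-digits, reverse the digit order, separate with dots, and append the e164.arpa suffix. A minimal sketch:

```python
def enum_domain(e164_number: str) -> str:
    """Map an E.164 telephone number to its ENUM DNS domain:
    strip non-digits, reverse the digits, dot-separate, append e164.arpa."""
    digits = [ch for ch in e164_number if ch.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+1-202-555-0100"))
# -> 0.0.1.0.5.5.5.2.0.2.1.e164.arpa
```

A resolver would then query that domain for NAPTR records, which map the number to service URLs (for example, SIP or mailto addresses), letting the IP network route a call placed to an ordinary telephone number.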
The requirements, as well as various aspects of the technical standards and solutions, are still evolving. The move toward IP PBXs is under way. Companies that cost-effectively integrate voice and data between locations can use IP PBXs on their IP networks, gaining additional advantages from simple moves and changes. Challenges remain regarding nonproprietary telephony-grade server reliability (built for 99.99 percent availability) and power distribution compared to traditional PBXs. Complete solutions for voice quality, QoS, missing features, and cabling distance limitations are still evolving. A cost-effective, phased approach to an IP converged system (for example, an IP card in a PBX) enables the enterprise to make IP migration choices, support new applications such as messaging, and maintain the traditional PBX investment where appropriate. The move toward computer telephony greatly increases the types of PBX security threats discussed previously and is explored further in the "VoI Security" section of this chapter.

Quality of Service (QoS)

Network performance requirements are dictated by both the ITU SS7/C7 standards and user expectations. The standards require that the end-to-end call-setup delay not exceed 20 to 30 seconds after the ISDN User Part (ISUP) initial address message (IAM) is sent; users expect much faster response times. Human beings do not like delays when they communicate; acceptable end-to-end delays usually need to meet the recommended 150 milliseconds. QoS guarantees, at very granular levels of service, are a requirement of next-generation voice networks. QoS is the ability to deliver various levels of service to different kinds of traffic or traffic flows, providing the foundation for tiered pricing based on class of service (CoS) and QoS. QoS methods fall into three major categories: first, an architected approach such as ATM; second, a per-flow or per-session method such as the reservation protocol of the IETF IntServ definitions and the MPLS specifications; and third, a packet-labeling approach using a QoS priority mark, as specified in 802.1p and IETF DiffServ. ATM is a cell-based (small, fixed-size cell), wide area network (WAN) transport that came from the carrier environment for streaming applications. It is connection oriented, providing a way to set up a predetermined path between source and destination, and it allows control of network resources in real time. ATM network resource allocation for CoS and QoS provisioning is well defined; there are four service classes based on traffic characteristics. Further options include the definition of QoS and traffic parameters at the cell level that establish service classes and levels. ATM transmission-path virtual circuits include virtual paths and their virtual channels.
The ATM virtual path groups the virtual channels that share the same QoS definitions, easing network management and administration. IP is a flexible, efficient, connectionless, packet-based network transport that extends all the way to the desktop. Packet-switching methods have certain shortcomings, including delays due to store-and-forward packet-switching mechanisms, jitter, and packet loss. Jitter is the variation in the delay of delivering bits between two switches; it results in both end-to-end delay and delay differences between switches that adversely affect certain applications. As congestion occurs at packet switches or routers, packets are lost, hampering real-time applications. Losses of 30 or 40 percent in the voice stream can produce speech with missing syllables that sounds like gibberish.
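The jitter that degrades voice can be quantified the way RTP receivers do it: the RTP specification (RFC 3550) defines interarrival jitter as an exponentially smoothed estimate of the variation in packet transit time. A sketch, with times in milliseconds:

```python
# Sketch of the RTP interarrival jitter estimator (RFC 3550): the receiver
# smooths the difference between successive packets' transit times.
def jitter_estimate(send_times, recv_times):
    j = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            j += (d - j) / 16.0   # gain of 1/16 per the RFC
        prev_transit = transit
    return j

# A perfectly paced 20-ms stream shows zero jitter...
print(jitter_estimate([0, 20, 40, 60], [50, 70, 90, 110]))      # -> 0.0
# ...while variable queueing delay shows up immediately.
print(jitter_estimate([0, 20, 40, 60], [50, 75, 88, 112]) > 0)  # -> True
```

Receivers report this value back to senders in RTCP reception reports, giving the network a running measure of exactly the delay variation described above.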


IntServ and DiffServ are two IP schemes for QoS. IntServ extends the best-effort service model, enabling the management of end-to-end packet delays. IntServ reserves resources on a per-flow basis and requires the Resource Reservation Protocol (RSVP) as a setup protocol that guarantees bandwidth and a limit on packet delay using router-to-router signaling. Participating protocols include the Real-time Transport Protocol (RTP), in which receivers sequence information through packet headers, and the Real-time Transport Control Protocol (RTCP), which gives senders feedback on reception status. RTP and RTCP are also incorporated in ITU standard H.225. The Real-Time Streaming Protocol (RTSP) runs on top of IP Multicast, UDP, RTP, and RTCP. RSVP supports both IPv4 and IPv6 and is important to scalability and security; it provides a way to ensure that policy-based decisions are followed. DiffServ is a follow-on QoS approach to IntServ. DiffServ is based on a CoS model; it uses a specified set of building blocks from which many services can be built. DiffServ implements a prioritization scheme that differentiates traffic using certain bits in each packet (the IPv4 type-of-service [ToS] byte or the IPv6 traffic class byte) that designate how a packet is to be forwarded at each network node. The move to IPv6 is advantageous because the ToS field has limited functionality and varying interpretations. DiffServ uses traffic classification to prioritize the allocation of resources. The IETF DiffServ draft specifies a management information base, which would allow DiffServ products to be managed via the Simple Network Management Protocol (SNMP). Multi-Protocol Label Switching (MPLS) is an evolving protocol, with standards originally out of the IETF, that designates static IP paths. It provides the traffic engineering capability essential to QoS control and network optimization, and it forms a basis for VPNs.
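The label MPLS pushes in front of each packet is a fixed 4-byte shim header (defined in the IETF label stack encoding RFC): a 20-bit label, a 3-bit traffic class field (originally "EXP"), a bottom-of-stack flag, and an 8-bit TTL. A sketch of packing and unpacking it:

```python
# Sketch of the 4-byte MPLS shim header (RFC 3032) that LSRs push in front
# of the IP packet: 20-bit label, 3-bit traffic class, 1-bit bottom-of-stack
# flag, 8-bit TTL.
def pack_mpls(label: int, tc: int, bottom: bool, ttl: int) -> bytes:
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return word.to_bytes(4, "big")

def unpack_mpls(raw: bytes):
    word = int.from_bytes(raw, "big")
    return (word >> 12, (word >> 9) & 0x7, bool((word >> 8) & 1), word & 0xFF)

shim = pack_mpls(label=100, tc=5, bottom=True, ttl=64)
print(shim.hex())         # -> 00064b40
print(unpack_mpls(shim))  # -> (100, 5, True, 64)
```

Because an LSR forwards on this short fixed-length label rather than a full IP route lookup, swapping labels hop by hop is what lets operators pin traffic to engineered paths, the capability the surrounding text describes.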
Unlike plain IP routing, MPLS can direct traffic through different paths to overcome congested IP routes that adversely affect network availability. To steer IPv4 or IPv6 packets over a particular route through the Internet, MPLS adds a label to the packet. To enable routers to direct classes of traffic, MPLS also labels the type of traffic, path, and destination information. A packet on an MPLS network is transmitted through a web of MPLS-enabled routers or ATM switches called label-switching routers (LSRs). At each hop in the MPLS network, the LSR uses the incoming label to index a forwarding table, which assigns a new label to the packet and sends it to an output port. Routes can be defined manually or via RSVP-TE (RSVP with traffic engineering extensions) or the MPLS Label Distribution Protocol (LDP). MPLS supports the desired qualities of circuit-switching technology, such as bandwidth reservation and bounded delay variation, as well as best-effort hop-by-hop routing. Using MPLS, service providers can build VPNs with the benefits of both ATM-like QoS and the flexibility of IP. The potential capabilities of the encapsulating label-based protocol continue to grow; however, there are a number of issues between the IETF and the MPLS Forum that need full resolution, such as the transfer of ToS markings from IP headers to MPLS labels and standard LSR interpretation when using MPLS with DiffServ. The management of voice availability and quality issues is performed through policy-based networking, in which information about individual users and groups is associated with network services or classes of service. Network protocols, methods, and directories used to enable the granular, time-sensitive requirements of policy-based QoS include Common Open Policy Services (COPS), Directory Enabled Networking (DEN), and the Lightweight Directory Access Protocol (LDAP).

VoI Security

Threats to voice communication systems increase with the move to the inherently open Internet. The voice security policies, procedures, and methods discussed previously reflect the legacy closed voice network architecture; they are not adequate for IP telephony networks, which are essentially wide open and require little or no authentication to gain access. New-generation networks require protection from attacks across the legacy voice network, wireless network, WAN, and LAN. Should invalid signaling occur on the legacy network, trunk groups could be taken out of service, calls placed to invalid destinations, resources locked up without proper release, and switches directed to incorrectly reduce the flow of calls. As new IP telephony security standards and vendor functions continue to evolve, service providers and enterprises can make use of voice-oriented firewalls as well as many of the same data security techniques to increase voice security. Inherent characteristics of Voice-over-IP protocols and multimedia security schemes conflict with many current methods used by firewalls and network address translation (NAT). Although no official standards exist, multiple security techniques are available to operate within firewall and NAT constraints.
These methods typically use some form of dynamic mediation of ports and addresses, where each scheme has certain advantages depending on the configuration and overall requirements of the network. Security standards, issues, and solutions continue to evolve as security extensions to signaling protocols, related standards, and products likewise evolve and proliferate. The SIP, H.323, MGCP, and Megaco/H.248 signaling protocols use TCP as well as UDP for call setup and transport. Transport addresses are embedded in the protocol messages, creating a conflict of interest: secure firewall rules that specify static ports for desirable data block H.323, because the signaling protocol uses dynamically allocated port numbers. Related issues trouble NAT devices. Consider an SIP user on an internal network behind a NAT who sends an INVITE message to another user outside the network. The outside user extracts the FROM address from the INVITE message and sends a 200 (OK) response back. Because the INVITE message comes from behind the NAT, the FROM address is not correct; the call never connects because the 200 response message does not succeed. Examples of H.323 and SIP security solutions available today are described below. H.323, an established ITU standard designed to handle real-time voice and videoconferencing, has been used successfully for VoIP. The standard is based on the IETF Real-time Transport Protocol (RTP) and RTP Control Protocol (RTCP), in addition to other protocols for call signaling and data and audiovisual communications. This standard is applied to peer-to-peer applications where the intelligence is distributed throughout the network. The network can be partitioned into zones, each under the control of an intelligent gatekeeper. One voice firewall solution in an H.323 environment makes use of a mediating element that intervenes in the logical process of call setup and tear-down, handles billing capabilities, and provides high-level policy control. In this solution, the mediating element is the H.323 gatekeeper; it is call-state aware and trusted to make networkwide policy decisions. The data ports of the voice firewall device connect to the output of the H.323 gateway device. The gatekeeper incorporates firewall management capabilities via API calls; it controls connections to the voice firewall device, which opens dynamic "pinholes" that permit the relevant traffic through the voice firewall. Voice firewalls are configured with the required pinholes and policy for the domain, and no other traffic can flow through the firewall. For each call setup, additional pinholes are configured dynamically to permit precisely the traffic required to carry that call, and no other traffic is allowed.
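The pinhole bookkeeping just described can be sketched as follows. The class and its API are hypothetical and illustrative only, not a vendor interface: static policy pinholes stay open, while per-call pinholes exist only between call setup and tear-down.

```python
# Hypothetical sketch of the dynamic "pinhole" bookkeeping that a
# gatekeeper-driven voice firewall performs.
class VoiceFirewall:
    def __init__(self, static_pinholes):
        self.static = set(static_pinholes)  # e.g., signaling ports, always open
        self.calls = {}                     # call_id -> set of (ip, port) pinholes

    def call_setup(self, call_id, media_endpoints):
        # Drill pinholes for exactly the media endpoints this call needs.
        self.calls[call_id] = set(media_endpoints)

    def call_teardown(self, call_id):
        # Seal the per-call pinholes as soon as the call ends.
        self.calls.pop(call_id, None)

    def permits(self, endpoint):
        return endpoint in self.static or any(
            endpoint in holes for holes in self.calls.values())

fw = VoiceFirewall(static_pinholes={("10.0.0.5", 1720)})  # H.225 signaling port
fw.call_setup("call-1", [("10.0.0.9", 30000)])            # RTP media pinhole
print(fw.permits(("10.0.0.9", 30000)))   # -> True while the call is up
fw.call_teardown("call-1")
print(fw.permits(("10.0.0.9", 30000)))   # -> False after tear-down
```

The design point is that the default posture is deny: media ports open only on the gatekeeper's instruction and close on tear-down, which is what keeps the dynamically allocated RTP ports from requiring permanently open firewall ranges.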
Because of its simplicity, a voice firewall using stateless packet filtering can perform faster at lower cost than a traditional application firewall, with vendor claims of 100 calls per second to drill and seal pinholes and a chassis that supports hundreds of simultaneous calls with less than one millisecond of latency.

SIP, an increasingly popular approach, operates at the application layer of the OSI model and is based on IETF RFC 2543. SIP is a peer-to-peer signaling protocol controlling the creation, modification, and termination of sessions with one or more participants. SIP establishes a temporary call to the server, which performs required, enhanced service logic. The SIP stack consists of SIP using Session Description Protocol (SDP), RTCP, and RTP. Recent announcements — a Windows XP® SIP telephony client and the designation of SIP as the signaling and call control standard for IP 3G mobile networks — have accelerated service providers' deployments of SIP infrastructures.

Comprehensive firewall and NAT security solutions for SIP service providers combine several technologies, including an edge proxy, a firewall control proxy, and a media-enabled firewall. An edge proxy acts as a guard, serving the incoming and outgoing SIP signaling traffic. It performs


authentication and authorization of services through transport layer security (TLS) and hides the downstream proxies from the outside network. The edge proxy forwards calls from trusted peers to the next internal hop. The firewall control proxy works in conjunction with the edge proxy and firewall. For each authorized media stream, it dynamically opens and closes pinhole pairs in the firewall. The firewall control proxy also operates closely with the firewall to perform NAT, and it remotely manages firewall policy and message routing. The dynamic control and failover functions of these firewall control proxies provide the additional reliability required in the service provider network. The media-enabled firewall is a transparent, non-addressable VoIP firewall that does not allow access to the internal network except from the edge proxy. Carrier-class, high-performance firewalls can limit entering traffic to the edge proxy, require a secure TLS connection, and admit media traffic only for authorized calls.

Enterprise IP Telephony Security

Threats associated with conversation eavesdropping, call recording and modification, and voicemail forwarding or broadcasting are greater in a VoIP network, where voice files are stored on servers and control and media flows reside on the open network. Threats related to fraud increase given the availability on the network of control information such as billing and call routing. Given the minimal authentication functionality of voice systems, threats related to rogue devices or users increase, and it can also be more difficult to track the hacker of a compromised system if an attack is initiated from a phone system. Protection needs to be provided against denial-of-service (DoS) conditions, malicious software that performs a remote boot, TCP SYN flooding, ping-of-death, UDP fragment flooding, and ICMP flooding attacks.
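One simple building block for spotting the flooding conditions just listed is a per-source packet counter over a capture window. The sketch below is a toy illustration; the threshold and addresses are arbitrary examples, not values from the chapter.

```python
from collections import Counter

def flag_flood_sources(packets, threshold=1000):
    """packets: iterable of (src_ip, dst_ip) tuples seen in one time window.

    Returns source addresses whose packet count exceeds the threshold,
    a crude indicator of a UDP or ICMP flood against the voice segment.
    """
    counts = Counter(src for src, _ in packets)
    return sorted(src for src, n in counts.items() if n > threshold)

# One hypothetical window: one source floods an IP phone, another behaves.
window = [("10.9.9.9", "10.20.0.11")] * 1500 + [("10.20.0.12", "10.20.0.11")] * 40
print(flag_flood_sources(window))  # ['10.9.9.9']
```

Real DoS mitigation is done in-line (rate limiting, SYN cookies, stateful inspection); counting like this is only the detection half of the picture.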
Control and data flows are prone to eavesdropping and interception given the availability of packet sniffers and tools to capture and reassemble generally unencrypted voice streams. Virus and Trojan horse attacks are possible against PC-based phones that connect to the voice network. Other attacks include a caller identity attack on the IP phone system to gain access as a legitimate user or administrator. Attacks on user registration at the gatekeeper could result in redirected calls. IP spoofing attacks using trusted IP addresses could fool the network into treating a hacker's conversation as that of a trusted computer such as the IP-PBX, resulting in a UDP flood of the voice network.

Although attack mitigation is a primary consideration in VoIP designs, issues of QoS, reliability, performance, scalability, authentication of users and devices, availability, and management are crucial to security. VoIP security requirements differ from data security requirements for several reasons. VoIP applications are under no-downtime, high-availability requirements; operate in a badly behaved manner using dynamically


negotiated ports; and are subject to extremely sensitive performance needs. VoIP security solutions are comprehensive; they include signaling protocols, operating systems, and administration interfaces; and they need to fit into existing security environments consisting of firewalls, VPNs, and access servers. Security policies must be in place because they form the basis for an organization's acceptance of the benefits and risks associated with VoIP. Certain signaling protocol security recommendations exist and are evolving. For example, the ITU-T H.235 Recommendation, under the umbrella of H.323, provides for authentication, privacy, and integrity within the current H-Series protocol framework. Vendor products, however, do not necessarily fully implement such protection. In the absence of widely adopted standards, today's efforts rely on securing the surrounding network and its components.

Enterprise VoIP security design makes use of segmentation and the switched infrastructure for QoS, scalability, manageability, and security. Today, layer 3 segmentation of IP voice from the traditional IP data network aids in the mitigation of attacks. A combination of virtual LANs (VLANs), access control, and stateful firewalls provides voice and data segmentation at the network access layer. Data devices on a separate segment from the voice segment cannot instigate call monitoring, and the use of a switched infrastructure sufficiently baffles devices on the same segment to prevent call monitoring and maintain confidentiality. Not all IP phones with data ports, however, support more than basic layer 2 connectivity, which acts as a hub and combines the data and voice segments. Enhanced layer 2 support for VLAN technology (such as 802.1Q) is required in the IP phone; it is one aspect needed to perform network segmentation today.
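The voice/data segmentation just described amounts to a default-deny policy between the two segments, with a short list of permitted cross-segment flows. A minimal sketch follows; the subnets and port numbers are hypothetical stand-ins, not values from the chapter.

```python
import ipaddress

# Hypothetical segment addressing and permitted cross-segment TCP services
# (e.g., call establishment and voicemail access).
VOICE_NET = ipaddress.ip_network("10.20.0.0/24")
DATA_NET = ipaddress.ip_network("10.30.0.0/24")
ALLOWED_TCP_PORTS = {2000, 8080}

def permit(src, dst, proto, dst_port):
    """Default-deny check for traffic crossing the voice/data boundary."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    crossing = ((src in DATA_NET and dst in VOICE_NET) or
                (src in VOICE_NET and dst in DATA_NET))
    if not crossing:
        return True  # intra-segment traffic is out of scope for this check
    return proto == "tcp" and dst_port in ALLOWED_TCP_PORTS

print(permit("10.30.0.5", "10.20.0.11", "tcp", 2000))   # allowed service
print(permit("10.30.0.5", "10.20.0.11", "udp", 16384))  # blocked flood path
```

A real deployment enforces this with VLAN ACLs and a stateful firewall, which additionally tracks connection state rather than judging each packet in isolation.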
The use of PC-based IP phones provides an avenue for attacks such as a UDP flood DoS attack on the voice segment, making a stateful firewall that brokers the data-voice interaction a requirement. PC-based IP phones are more susceptible to attack than closed, custom operating system IP phones because they are open and sit within the data network, which is prone to network attacks such as worms or viruses. Controlling access between the data and voice segments requires a strategically located stateful firewall. The voice firewall provides host-based DoS protection against connection starvation and fragmentation attacks, dynamic per-port-granular access through the firewall, spoof mitigation, and general filtering. Typical authorized connections, such as voicemail connections in the data segment, call establishment, voice browsing via the voice segment proxy server, IP phone configuration setting, and voice proxy server data resource access, generally use well-known TCP ports or a combination of well-known TCP ports and UDP. The VoIP firewall handles known TCP traffic traditionally and opens port-level-granular access for UDP between segments. If higher-risk PC-based IP phones are utilized, it is possible to implement a private address space for IP telephony devices as provided by RFC 1918. Separate


address spaces reduce potential traffic communication outside the network and keep hackers from being able to scan a properly configured voice segment for vulnerabilities.

The main mechanism for device authentication of IP phones is the MAC address. Assuming automatic configuration has been disabled, an IP phone that tries to download a network configuration from an IP-PBX must exhibit a MAC address known to the IP-PBX to proceed with the configuration process. This precludes the insertion of a rogue phone into the network and subsequent call placement unless a MAC address is spoofed. User log-on is supported on some IP phones for device setup as well as identification of the user to the IP-PBX, although this could be inconvenient in certain environments.

To prevent rogue device attacks, traditional best practices regarding locking down switch ports, segments, and services still hold. In an IP telephony environment, several additional methods can be deployed to further guard against such attacks. Assignment of static IP addresses to known MAC addresses, rather than Dynamic Host Configuration Protocol (DHCP), could be used so that an unknown device plugged into the network does not receive an address. Also, assuming segmentation, separate voice and data DHCP servers mean that a DoS attack on the data segment's DHCP server has little chance of affecting the voice segment. The commonly available automatic phone registration feature, which bootstraps an unknown phone with a temporary configuration, should be enabled only temporarily, when needed. A MAC address monitoring tool on the voice network that tracks changes in MAC-to-IP address pairings could be helpful, given that voice MAC addresses are fairly static. Assuming network segmentation, filtering could be used to limit devices from unknown segments as well as to keep unknown devices within the segment from connecting to the IP-PBX.
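Two of the controls above, static MAC-to-IP assignment and MAC pairing monitoring, can be sketched with a simple lookup table. All addresses here are fabricated for illustration.

```python
# Hypothetical static table: known phone MAC addresses and their fixed IPs.
KNOWN_PHONES = {
    "00:1b:54:aa:bb:01": "10.20.0.11",
    "00:1b:54:aa:bb:02": "10.20.0.12",
}

def assign_address(mac):
    """Static assignment: unknown devices receive no address at all."""
    return KNOWN_PHONES.get(mac.lower())

def detect_pairing_changes(observed):
    """Flag known MACs whose observed IP no longer matches the static table."""
    return [mac for mac, ip in observed.items()
            if KNOWN_PHONES.get(mac) not in (None, ip)]

print(assign_address("00:1B:54:AA:BB:01"))  # 10.20.0.11
print(assign_address("de:ad:be:ef:00:00"))  # None — rogue device gets nothing
print(detect_pairing_changes({"00:1b:54:aa:bb:02": "10.20.0.99"}))
```

As the chapter notes, this only raises the bar: a spoofed MAC address defeats the lookup, which is why the monitoring half matters.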
Voice servers are prone to attacks similar to those against data servers and therefore could require tools such as an intrusion detection system (IDS) to alarm, log, and perhaps react to attack signatures found in the voice network. There are no voice control protocol attack signatures today, but an IDS could be used for the UDP DoS attacks and HTTP exploits that apply to a voice network. Protection of servers also includes best practices such as disabling unnecessary services, applying OS patches, turning off unused voice features, and limiting the number of applications running on the server. Traditional best practices should be followed for the variety of voice server management techniques, such as HTTP, SSL, and SNMP.

Wireless Convergence

Wireless carriers look to next-generation networks to cost-effectively accommodate increased traffic loads and to form the basis for a pure packet


network as they gradually move toward 3G networks. The MSCs in a circuit-switched wireless network, as described earlier in this chapter, interconnect in a meshed architecture that lacks easy scaling or cost-effective expansion; a common packet infrastructure to interconnect MSCs could overcome these limitations and aid in the move to 3G networks. In this architecture, the common packet framework uses packet tandems consisting of centralized MGCs, or softswitches, that control distributed MGs deployed and located with MSCs. TDM trunks from each MSC are terminated on an MG that performs IP or ATM conversion under the management of the softswitch. Because point-to-point connections no longer exist between MSCs, a less complicated network emerges that requires less bandwidth. MSCs can now be added to the network with one softswitch connection instead of multiple MSC connections. Using media gateways negates the need to upgrade software at each MSC to deploy next-generation services, and it offloads precious switching center resources. Centrally located softswitches with gateway intelligence can perform lookups and route calls directly to the serving MSC, versus the extensive routing otherwise required among MSCs or gateway MSCs to perform lookups at the home location register.

With the progression of this and other IP-centric models, crucial registration, authentication, and equipment network databases need to be protected. Evolving new-generation services require real-time metering and integration of session management with the data transfer. Service providers look to support secure virtual private networks (VPNs) between subscribers and providers of content, services, and applications. While the emphasis of 2.5G and 3G mobile networks is on the delivery of data and new multimedia applications, current voice services must be sustained and new integrated voice capabilities exploited.
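The scaling argument for replacing the MSC mesh with a softswitch core reduces to link counting: a full mesh of n switches needs n(n-1)/2 point-to-point trunk groups, while a hub-and-spoke core needs only one connection per MSC. A quick check:

```python
def mesh_links(n: int) -> int:
    """Point-to-point trunk groups needed to fully mesh n MSCs."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Connections needed when each MSC homes to one softswitch core."""
    return n

for n in (4, 10, 20):
    print(f"{n} MSCs: mesh={mesh_links(n)}, softswitch core={hub_links(n)}")
```

The gap widens quadratically, which is why adding "one softswitch connection instead of multiple MSC connections" matters as the network grows.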
Regardless of specific implementations, it is clear that voice networks and systems will continue to change along with new-generation networks.

References

Goleniewski, Lillian, Telecommunications Essentials, Addison-Wesley, 2002.
Davidson, Jonathan and Peters, James, Voice over IP Fundamentals, Cisco Press, 2002.
SS8 Networks, SS7 Tutorial, Network History, 2001.
Dumas, David K., CISSP, Securing future IP-based phone networks, ISSA Password, Sept./Oct. 2001.
Halpern, Jason, SAFE: IP Telephony Security in Depth, Cisco Press, 2002.
Roedig, Utz, Security Analysis of IP-Telephony Scenarios, Darmstadt University of Technology, KOM — Industrial Process and System Communications, 2001.
Molitor, Andrew, Deploying a Dynamic Voice over IP Firewall with IP Telephony Applications, Aravox Technologies, 2001.
Giesa, Erik and Lazaro, Matt, Building a strong foundation for SIP-based networks, Internet Telephony, February 2002.
Traversal of IP Voice and Video Data through Firewalls and NATs, RADVision, 2001.


PBX Vulnerability Analysis: Finding Holes in Your PBX Before Someone Else Does, U.S. Department of Commerce, National Institute of Standards and Technology, Special Publication 800-24.
Boone, J.V. and Peterson, R.R., The Start of the Digital Revolution: SIGSALY Secure Digital Voice Communications in World War II, The National Security Agency (NSA).
Ravishankar, Ravi, Wireless carriers address network evolution with packet technology, Internet Telephony, November 2001.

GLOSSARY OF TERMS

AIN (Advanced Intelligent Network) — The second generation of intelligent networks, pioneered by Bellcore (later spun off as Telcordia). A common service-independent network architecture geared to quickly produce customizable telecommunication services.

ATM (Asynchronous Transfer Mode) — A cell-based international packet-switching standard in which each packet has a uniform cell size of 53 bytes. It is a high-bandwidth, fast packet-switching and multiplexing method that enables end-to-end communication of multimedia traffic. ATM is an architected quality-of-service solution that facilitates multi-service and multi-rate connections using a high-capacity, low-latency switching method.

CCITT (Comité Consultatif International de Télephonie et de Télégraphie) — Advisory committee to the ITU, now known as the ITU-T, that influences engineers, manufacturers, and administrators.

CoS (Class-of-Service) — Categories of subscribers or traffic corresponding to priority levels that form the basis for network resource allocation.

CPE (Customer Premise Equipment) — Equipment owned and managed by the customer and located on the customer premise.

DTMF (Dual-Tone Multi-Frequency Signaling) — A signaling technique for pushbutton telephone sets in which a matrix combination of two frequencies, each from a set of four, is used to send numerical address information. The two sets of four frequencies are (1) 697, 770, 852, and 941 Hz; and (2) 1209, 1336, 1477, and 1633 Hz.

IP (Internet Protocol) — A protocol that specifies data format and performs routing functions and path selection through a TCP/IP network. These functions provide techniques for handling unreliable data and specify the way network nodes process data, how to perform error processing, and when to discard unreliable data.

IN (Intelligent Network) — An advanced services architecture for telecommunications networks.
ITU-T (International Telecommunication Union — Telecommunication Standardization Sector) — A telecommunications advisory committee of the ITU that influences engineers, manufacturers, and administrators.

MPLS (Multi-Protocol Label Switching) — An IETF effort designed to simplify and improve IP packet exchange and provide network operators with a flexible way to engineer traffic during link failures and congestion. MPLS integrates information


about network links (layer 2), such as bandwidth, latency, and utilization, with IP (layer 3) in one system.

NIST (National Institute of Standards and Technology) — A U.S. national standards group, referred to as the National Bureau of Standards prior to 1988.

PBX (Private Branch Exchange) — A telephone switch residing at the customer location that sets up and manages voice-grade circuits between telephone users and the switched telephone network. Customer premise switching is usually performed by the PBX, as well as a number of additional enhanced features such as least-cost routing and call-detail recording.

PSTN (Public Switched Telephone Network) — The entire legacy public telephone network, which includes telephones, local and interexchange trunks, communication equipment, and exchanges.

QoS (Quality-of-Service) — A network service methodology in which network applications specify their requirements to the network prior to transmission, either implicitly by the application or explicitly by the network manager.

RSVP (Resource Reservation Protocol) — An Internet protocol that enables QoS; an application can reserve resources along a path from source to destination. RSVP-enabled routers then schedule and prioritize packets in support of specified levels of QoS.

RTP (Real-Time Transport Protocol) — A protocol that transmits real-time data on the Internet. Sending and receiving applications use RTP mechanisms to support streaming data such as audio and video.

RTSP (Real-Time Streaming Protocol) — A protocol that runs on top of IP multicasting, UDP, RTP, and RTCP.

SCP (Service Control Point) — A centralized node that holds service logic for call management.

SSP (Service-Switching Point) — An origination or termination call switch.

STP (Signal Transfer Point) — A switch that translates SS7 messages and routes them to the appropriate network nodes and databases.
SS7 (Signaling System 7) — An ITU-defined common channel signaling protocol that offloads PSTN data traffic congestion onto a wireless or wireline digital broadband network. SS7 signaling can occur between any two SS7 nodes, not only between switches that are immediately connected to one another.

ABOUT THE AUTHOR

Valene Skerpac, CISSP, is past chairman of the IEEE Communications Society. Over the past 20 years, she has held positions at IBM and entrepreneurial security companies. Valene is currently president of iBiometrics, Inc.


Chapter 14

Packet Sniffers: Use and Misuse

Steve A. Rodgers, CISSP

A packet sniffer is a tool used to monitor and capture data traveling over a network. The packet sniffer is similar to a telephone wiretap; but instead of listening to phone conversations, it listens to network packets and conversations between hosts on the network. The word sniffer is generically used to describe packet capture tools, much as crescent wrench is used to describe an adjustable wrench. The original Sniffer was a product created by Network General (now a division of Network Associates called Sniffer Technologies). Packet sniffers were originally designed to assist network administrators in troubleshooting their networks. Packet sniffers have many other legitimate uses, but they also have an equal number of sinister uses. This chapter discusses some legitimate uses for sniffers, as well as several ways an unauthorized user or hacker might use a sniffer to compromise the security of a network.

HOW DO PACKET SNIFFERS WORK?

The idea of sniffing, or packet capturing, may seem very high-tech. In reality, it is a very simple technology. First, a quick primer on Ethernet. Ethernet operates on a principle called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). In essence, the network interface card (NIC) attempts to communicate on the wire (or Ethernet). Because Ethernet is a shared technology, the NIC must wait for an "opening" on the wire before communicating. If no other host is communicating, the NIC simply sends the packet. If, however, another host is already communicating, the network card waits for a random, short period of time and then tries to retransmit. Normally, a host is only interested in packets destined for its own address; but because Ethernet is a shared technology, all the packet sniffer needs to do is turn the NIC on in promiscuous mode and "listen" to the packets on


TELECOMMUNICATIONS AND NETWORK SECURITY

Exhibit 14-1. Summary window with statistics about the packets as they are being captured.

the wire. The network adapter can capture packets from the data-link layer all the way through the application layer of the OSI model. Once these packets have been captured, they can be summarized in reports or viewed individually. In addition, filters can be set up either before or after a capture session. A filter allows the capturing or displaying of only those protocols defined in the filter.

ETHEREAL

Several software packages exist for capturing and analyzing packets and network traffic. One of the most popular is Ethereal. This network protocol analyzer can be downloaded from http://www.ethereal.com/ and installed in a matter of minutes. Various operating systems are supported, including Sun Solaris, HP-UX, BSD (several distributions), Linux (several distributions), and Microsoft Windows (95/98/ME, NT4/2000/XP). At the time of this writing, Ethereal was open-source software licensed under the GNU General Public License. After download and installation, the security practitioner can simply click on "Capture" and then "Start," choose the appropriate network adapter, and then click on "OK." The capture session begins, and a summary window displays statistics about the packets as they are being captured (see Exhibit 14-1).
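The data-link-layer view a sniffer starts from can be made concrete by decoding the 14-byte Ethernet II header at the front of every captured frame. The frame bytes below are fabricated for illustration.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Decode the 14-byte Ethernet II header a sniffer sees first."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return {"dst": fmt(dst), "src": fmt(src), "ethertype": hex(ethertype)}

# A fabricated frame: broadcast destination, example source, IPv4 payload
# (EtherType 0x0800).
frame = bytes.fromhex("ffffffffffff" "001b54aabb01" "0800") + b"...payload..."
print(parse_ethernet_header(frame))
# {'dst': 'ff:ff:ff:ff:ff:ff', 'src': '00:1b:54:aa:bb:01', 'ethertype': '0x800'}
```

Tools like Ethereal repeat this kind of decoding layer by layer — IP header next, then TCP or UDP, then the application protocol — which is what its middle "parsed packet" window displays.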


Exhibit 14-2. The Ethereal capture session.

Simply click on "Stop" to end the capture session. Exhibit 14-2 shows an example of what the Ethereal capture session looks like. The top window of the session displays the individual packets in the capture session. The information displayed includes the packet number, the time the packet arrived since the capture was started, the source address of the packet, the destination address of the packet, the protocol, and other information about the packet. The second window parses and displays the individual packet in an easily readable format, in this case packet number one. Further detail regarding the protocol and the source and destination addresses is displayed in summary format. The third window shows a data dump of the packet, displaying both the hex and ASCII values of the entire packet. Further packet analysis can be done by clicking on the "Tools" menu. Clicking on "Protocol Hierarchy Statistics" generates a summary report of the protocols captured during the session. Exhibit 14-3 shows an example of what the protocol hierarchy statistics look like.
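The kind of summary a protocol hierarchy report presents is easy to model: count the protocols seen in a capture and express each as a share of the total. The capture list below is fabricated for illustration.

```python
from collections import Counter

# A fabricated capture: the top-level protocol of each captured packet.
capture = ["TCP", "TCP", "HTTP", "UDP", "DNS", "TCP", "HTTP", "ARP"]

def protocol_statistics(packets):
    """Return each protocol's percentage share of the capture, largest first."""
    counts = Counter(packets)
    total = len(packets)
    return {proto: round(100 * n / total, 1) for proto, n in counts.most_common()}

print(protocol_statistics(capture))
# {'TCP': 37.5, 'HTTP': 25.0, 'UDP': 12.5, 'DNS': 12.5, 'ARP': 12.5}
```

Ethereal's real report is hierarchical (HTTP is counted under TCP, which is counted under IP), but the flat version above conveys the idea.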


Exhibit 14-3. The protocol hierarchy statistics.

The security practitioner can also get overall statistics on the session, including total packets captured, elapsed time, average packets per second, and the number of dropped packets. Ethereal is a very powerful tool that is freely available over the Internet. While it may take an expert to fully understand the capture sessions, it does not take an expert to download and install the tool. Certainly the aspiring hacker would have no trouble with the installation and configuration. The security practitioner should understand the availability, features, and ease of use of packet sniffers like Ethereal. Awareness of these tools allows the security practitioner to better understand how a packet sniffer could be used to exploit weaknesses and how to mitigate the associated risk.

LEGITIMATE USES

Because the sniffer was invented to help network administrators, many legitimate uses exist for it. Troubleshooting was the first use for the sniffer, but performance analysis quickly followed. Now many uses for sniffers exist, including intrusion detection.

Troubleshooting

The most obvious use for a sniffer is to troubleshoot a network or application problem. From a network troubleshooting perspective, capture tools can tell the network administrator how many computers are communicating on a network segment, what protocols are used, who is sending or receiving the most traffic, and many other details about the network and its hosts. For example, some network-centric applications are very complex


and have many components. Here is a list of some components that play a role in a typical client/server application:

• Client hardware
• Client software (OS and application)
• Server hardware
• Server software (OS and application)
• Routers
• Switches
• Hubs
• Ethernet network, T1s, T3s, etc.

This complexity often makes the application extremely difficult to troubleshoot from a network perspective. A packet sniffer can be placed anywhere along the path of the client/server application and can unravel the mystery of why an application is not functioning correctly. Is it the network? Is it the application? Perhaps it has to do with lookup issues in a database. The sniffer, in the hands of a skilled network analyst, can help determine the answers to these questions.

A packet sniffer is a powerful troubleshooting tool for several reasons. It can filter traffic based on many variables. For example, suppose the network administrator is trying to troubleshoot a slow client/server application. He knows the server name is slopoke.xyzcompany.com and the client host's name is impatient.xyzcompany.com. The administrator can set up a filter to watch only traffic between the server and client.

The placement of the packet sniffer is critical to the success of the troubleshooting. Because the sniffer only sees packets on the local network segment, it must be placed in the correct location. In addition, when analyzing the capture, the analyst must keep the location of the packet sniffer in mind in order to interpret the capture correctly. If the analyst suspects the server is responding slowly, the sniffer could be placed on the same network segment as the server to gather as much information about the server traffic as possible. Conversely, if the client is suspected of being the cause, the sniffer should be placed on the same network segment as the client. It may also be necessary to place the tool somewhere between the two endpoints. In addition to placement, the network administrator may need to set up a filter to watch only certain protocols. For instance, if a Web application using HTTP on port 80 is having problems, it may be beneficial to create a filter to capture only HTTP packets on port 80.
This filter will significantly reduce the amount of data the troubleshooter needs to sift through to find the problem. Keep in mind, however, that setting this filter can cause the sniffer to miss important packets that could be the root cause of the problem.
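Filters like those described are commonly written in Berkeley Packet Filter (BPF) syntax, which capture tools such as tcpdump and Ethereal accept. The sketch below only assembles the expression string, using the chapter's example host names:

```python
def build_capture_filter(server, client, port=None):
    """Assemble a BPF capture-filter expression limiting capture to two hosts."""
    expr = f"host {server} and host {client}"
    if port is not None:
        expr += f" and tcp port {port}"
    return expr

# The chapter's troubleshooting scenario, with and without the port filter.
print(build_capture_filter("slopoke.xyzcompany.com", "impatient.xyzcompany.com"))
print(build_capture_filter("slopoke.xyzcompany.com", "impatient.xyzcompany.com", 80))
```

The first expression captures all traffic between the two hosts; the second narrows it to HTTP on port 80, with the trade-off the text warns about: the narrower the filter, the more likely the root cause is filtered out too.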


Performance and Network Analysis

Another legitimate use of a packet sniffer is network performance analysis. Many packet sniffer tools can also provide a basic level of network performance analysis. They can display the general health of the network, network utilization, error rates, a summary of protocols, etc. Specialized performance management tools use specialized packet sniffers called RMON probes to capture and forward information to a reporting console. These systems collect and store network performance and analysis information in a database so that the information can be displayed on an operator console or presented in graphs or summary reports.

Network-Based Intrusion Detection

Network-based intrusion detection systems (IDSs) use a sniffer-like packet capture tool as the primary means of capturing data for analysis. A network IDS captures packets and compares the packet signatures to its database of known attack signatures. If it sees a match, it logs the appropriate information to the IDS logs. The security practitioner can then go back and review these logs to determine what happened. If the attack was in fact successful, this information can later be used to determine how to mitigate the attack or vulnerability to prevent it from happening in the future.

Verifying Security Configurations

Just as the network administrator may use the sniffer to troubleshoot a network problem, so too can the security practitioner use the sniffer to verify security configurations. A security practitioner may use a packet sniffer to review a VPN application to see whether data is being transferred between gateways or hosts in encrypted form. The packet sniffer can also be used to verify a firewall configuration. For example, if a security practitioner has recently installed a new firewall, it would be prudent to test the firewall to make sure its configuration is stopping the protocols it is intended to stop.
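The signature-matching loop at the heart of a network IDS can be sketched as follows. The signatures shown are simplified stand-ins for illustration, not real IDS rules.

```python
# Simplified stand-ins for attack signatures: byte patterns to look for in
# captured payloads. Real IDSs use far richer rule languages and state.
SIGNATURES = {
    "web-cgi probe": b"/cgi-bin/phf",
    "shellcode NOP sled": b"\x90" * 16,
}

def inspect(packet_payload: bytes):
    """Return the names of signatures matched by one captured payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in packet_payload]

captured = b"GET /cgi-bin/phf?Qalias=x HTTP/1.0\r\n"
print(inspect(captured))           # ['web-cgi probe']
print(inspect(b"GET / HTTP/1.0"))  # []
```

Matches are what the IDS writes to its logs; the security practitioner's review of those logs, as described above, is where detection turns into response.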
The security practitioner can place a packet sniffer on the network behind the firewall and then use a separate host to scan the firewall's ports or to open connections to hosts that sit behind the firewall. If the firewall is configured correctly, it will only allow ports and connections to be established according to its rule set. Any discrepancies can be reviewed to determine whether the firewall is misconfigured or whether there is simply an underlying problem with the firewall architecture.

MISUSE

Sniffing has long been one of the most popular forms of passive attack by hackers. The ability to "listen" to network conversations is very powerful and


intriguing. A hacker can use the packet sniffer for a variety of attacks and information-gathering activities. Sniffers may be installed to capture usernames and passwords, gather information on other hosts attached to the same network, read e-mail, or capture other proprietary information or data. Hackers are notorious for installing root kits on their victim hosts. These root kits contain various programs designed to circumvent security on a host and allow a hacker to access it without the administrator's knowledge. Most modern root kits, or backdoor programs, include tools such as stealth backdoors, keystroke loggers, and often specialized packet sniffers that can capture sensitive information. The SubSeven backdoor for Windows even includes a remotely accessible GUI (graphical user interface) packet sniffer. The GUI makes the packet sniffer easily accessible and simple to use. The packet sniffer can be configured to collect network traffic, save this information into a log, and relay these logs.

Network Discovery

Information gathering is one of the first steps hackers take when attacking a host. In this phase of the attack, they try to learn as much about a host or network as they can. If the attackers have already compromised a host and installed a packet sniffer, they can quickly learn more about the compromised host as well as other hosts with which it communicates. Hosts are often configured to trust one another. This trust can quickly be discovered using a packet sniffer. In addition, the attacker can quickly learn about other hosts on the same network by monitoring the network traffic and activity. Network topology information can also be gathered. By reviewing the IP addresses and subnets in the captures, the attacker can quickly get a feel for the layout of the network. What hosts exist on the network and are critical? What other subnets exist on the network?
Are there extranet connections to other companies or vendors? All of these questions can be answered by analyzing the network traffic captured by the packet sniffer.

Credential Sniffing

Credential sniffing is the act of using a packet capture tool to specifically look for usernames and passwords. Several programs exist only for this specific purpose. One such UNIX program, Esniff.c, captures only the first 300 bytes of all Telnet, FTP, and rlogin sessions. This particular program can capture username and password information very quickly and efficiently. In the Windows environment, L0phtcrack is a program that contains a sniffer that can capture hashed passwords used by Windows systems using LAN Manager authentication. Once the hash has been captured, the


Exhibit 14-4. Protocols vulnerable to packet sniffing.

Protocol           Vulnerability
Telnet and rlogin  Credentials and data are sent in cleartext
HTTP               Basic authentication sends credentials in a simple
                   encoded form, not encrypted; easily readable if SSL or
                   other encryption is not used
FTP                Credentials and data are sent in cleartext
POP3 and IMAP      Credentials and data are sent in cleartext
SNMP               Community strings for SNMPv1 (the most widely used) are
                   sent in cleartext, including both public and private
                   community strings
L0phtcrack program runs a dictionary attack against the password. Depending on the length and complexity of the password, it can be cracked in a matter of minutes, hours, or days. Another popular and powerful password-sniffing program is dsniff. This tool's primary purpose is credential sniffing, and it can be used on a wide range of protocols including, but not limited to, HTTP, HTTPS, POP3, and SSH.

Use of a specific program like Esniff.c, L0phtcrack, or dsniff is not even necessary, depending on the application or protocol. A simple packet sniffer in the hands of a skilled hacker can be very effective because of the very insecure nature of the various protocols. Exhibit 14-4 lists some of the protocols that are susceptible to packet sniffing.

E-Mail Sniffing

How many network administrators or security practitioners have sent or received a password via e-mail? Most, if not all, have at some point in time. Very few e-mail systems are configured to use encryption, and they are therefore vulnerable to packet sniffers. Not only is the content of the e-mail vulnerable, but the usernames and passwords are often vulnerable as well. POP3 (Post Office Protocol version 3) is a very popular way to access Internet e-mail. POP3 in its basic form uses usernames and passwords that are not encrypted. In addition, the data can be easily read.

Security is always a balance of what is secure and what is convenient. Accessing e-mail via a POP3 client is very convenient. It is also very insecure. One of the risks security practitioners must be aware of is that, by allowing POP3 e-mail into their enterprise network, they may also be giving hackers both a username and a password to access their internal network. Many systems within an enterprise are configured with the same usernames, and users often synchronize their passwords across multiple systems for simplicity's sake or possibly use a single


sign-on system. For example, say John Smith has a username of "JSMITH" and a password of "FvYQ-6d3." His username would not be difficult to guess, but his password is fairly complex and contains a random string of characters and numbers. The enterprise network that John is accessing has decided to configure its e-mail server to accept POP3 connections because several users, including John, wanted to use a POP3 client to remotely access their e-mail. The enterprise also has a VPN device configured with the same username and password as the e-mail system. If attackers compromise John's password via a packet sniffer watching the POP3 authentication sequence, they may quickly learn they now have access directly into the enterprise network using the same username and password on the Internet-accessible host called "VPN."

This example demonstrates the vulnerability associated with allowing certain insecure protocols and system configurations. Although the password may not have been accessible through brute force, the attackers were able to capture the password in the clear along with its associated username. In addition, they were able to capitalize on the vulnerability by applying the same username and password to a completely separate system.

ADVANCED SNIFFING TOOLS

Switched Ethernet Networks

"No need to worry. I have a switched Ethernet network." Wrong! It used to be common for network administrators to refer to a switched network as secure. While it is true that switched networks are more secure, several vulnerabilities and techniques have surfaced over the past several years that make them less so.

Reconfigure SPAN/Mirror Port. The most obvious way to capture packets

in a switched network is to reconfigure the switch to send all packets to the port into which the packet sniffer is plugged. This can be done with a simple command line on a Cisco switch. Once configured, the switch will send all packets for a port, group of ports, or even an entire VLAN directly to the specified port.

This emphasizes the need for increased switch security in today's environments. A single switch without a password, or with a simple password, can allow an intruder access to a plethora of data and information. Incidentally, this is an excellent reason why a single Ethernet switch should not be used both inside and outside a firewall. Ideally, the outside, inside, and DMZ segments should have their own separate physical switches. Also, use a stronger form of authentication on the network devices than passwords alone. If passwords must be used, make sure they are very complex, and do not use the same password for the outside, DMZ, and inside switches.
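For reference, on many Cisco Catalyst switches the mirror configuration alluded to above looks roughly like the following. This is a sketch only: the session number and interface names are placeholders, and the exact syntax varies by platform and software version.

```
Switch(config)# monitor session 1 source interface fastethernet0/1
Switch(config)# monitor session 1 destination interface fastethernet0/2
```

Anyone who can enter configuration mode can add such a session, which is exactly why the surrounding text stresses strong switch authentication.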


Switch Jamming. Switch jamming involves overflowing the address table of a switch with a flood of false MAC addresses. For some switches, this will cause the switch to change from "bridging" mode into "repeating" mode, where all frames are broadcast to all ports. When the switch is in repeating mode, it acts like a hub and allows an attacker to capture packets as if all hosts were on the same local area network.

ARP Redirect. An ARP redirect is where a host is configured to send a false ARP request to another host or router. This false request essentially tricks the target host or router into sending traffic destined for the victim host to the attack host. Packets are then forwarded from the attacker's computer back to the victim host, so the victim cannot tell the communication is being intercepted. Several programs exist that allow this to occur, such as ettercap, angst, and dsniff.

ICMP Redirect. An ICMP redirect is similar to the ARP redirect, but in this case the victim's host is told to send packets directly to an attacker's host, regardless of how the switch thinks the information should be sent. This too would allow an attacker to capture packets to and from a remote host.

Fake MAC Address. Switches forward information based on the MAC (Media Access Control) addresses of the various hosts to which they are connected. The MAC address is a hardware address that is supposed to uniquely identify each node of a network. This MAC address can be faked or forged, which can result in the switch forwarding packets (originally destined for the victim's host) to the attacker's host. It is possible to intercept this traffic and then forward it back to the victim computer, so the victim host does not know the traffic is being intercepted.

Other Switch Vulnerabilities.
Several other vulnerabilities related to switched networks exist, but the important thing to remember is that a network built entirely of switches is not thereby immune to packet sniffing. Even without exploiting a switch vulnerability, an attacker could simply install a packet sniffer on a compromised host.
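The ARP and fake-MAC attacks above all come down to handing a host a forged frame that binds another machine's IP address to the attacker's MAC address. As a rough illustration (not drawn from the chapter; all addresses are fabricated, and actually transmitting such a frame would require a raw socket and administrative privileges, which tools like ettercap and dsniff handle internally), such a frame can be assembled by hand:

```python
import struct

def build_arp_reply(attacker_mac: bytes, victim_ip: bytes,
                    target_mac: bytes, target_ip: bytes) -> bytes:
    """Build a forged Ethernet/ARP reply mapping victim_ip to attacker_mac.

    MAC arguments are 6 raw bytes; IP arguments are 4 raw bytes.
    """
    # Ethernet header: destination MAC, source MAC, EtherType 0x0806 (ARP)
    eth = target_mac + attacker_mac + struct.pack("!H", 0x0806)
    # ARP header: hw type 1 (Ethernet), proto 0x0800 (IPv4),
    # hw len 6, proto len 4, opcode 2 (reply)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
    # Sender pair = hijacked IP with the attacker's MAC;
    # target pair = the host being deceived.
    arp += attacker_mac + victim_ip + target_mac + target_ip
    return eth + arp

frame = build_arp_reply(
    attacker_mac=bytes.fromhex("0a0a0a0a0a0a"),
    victim_ip=bytes([192, 168, 1, 1]),       # e.g., the default gateway
    target_mac=bytes.fromhex("0c0c0c0c0c0c"),
    target_ip=bytes([192, 168, 1, 50]),
)
assert len(frame) == 42              # 14-byte Ethernet + 28-byte ARP
assert frame[20:22] == b"\x00\x02"   # opcode 2: ARP reply
```

Once the target caches this bogus mapping, traffic meant for the gateway flows through the attacker, who can sniff it and forward it on unnoticed.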

Wireless Networks

Wireless networks add a new dimension to packet sniffing. In the wired world, an attacker must either remotely compromise a system or gain physical access to the network in order to capture packets. The advent of the wireless network has allowed attackers to gain access to an enterprise without ever setting foot inside the premises. For example, with a simple setup including a laptop, a wireless network card, and software packages downloaded over the Internet, an attacker has the ability to detect, connect to, and monitor traffic on a victim's network.
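The detection step usually works by listening for the beacon frames that access points broadcast; the network name (SSID) travels in a tagged information element with element ID 0. A simplified, hypothetical parsing sketch (it assumes the radio and MAC headers have already been stripped, and the sample bytes are fabricated):

```python
def parse_ssid(tagged_params: bytes):
    """Extract the SSID from the tagged-parameter portion of an
    802.11 beacon frame (element ID 0 is the SSID element)."""
    i = 0
    while i + 2 <= len(tagged_params):
        elem_id, length = tagged_params[i], tagged_params[i + 1]
        value = tagged_params[i + 2:i + 2 + length]
        if elem_id == 0:                     # SSID element
            return value.decode("utf-8", errors="replace")
        i += 2 + length                      # skip to the next element
    return None

# A fabricated parameter blob: SSID element followed by a rates element.
params = bytes([0, 8]) + b"CorpWLAN" + bytes([1, 4, 0x82, 0x84, 0x8B, 0x96])
assert parse_ssid(params) == "CorpWLAN"
```

War-driving tools do essentially this at scale, pairing each recovered SSID with signal data and, as noted below, GPS coordinates.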


Exhibit 14-5. Suggestions for mitigating risk associated with insecure protocols.

Insecure Protocol  Secure Solution
Telnet and rlogin  Replace Telnet or rlogin with Secure Shell (SSH)
HTTP               Run the HTTP session over a Secure Sockets Layer (SSL)
                   or Transport Layer Security (TLS) connection
FTP                Replace with secure copy (SCP) or create an IPSec VPN
                   between the hosts
POP3 and IMAP      Replace with S/MIME or use PGP encryption
SNMP               Increase the security by using SNMPv2 or SNMPv3, or
                   create a management IPSec VPN between the host and the
                   network management server

The increase in the popularity of wireless networks has also been accompanied by an increase in war-driving. War-driving is the act of driving around in a car searching for wireless access points and networks with wireless sniffer-like tools. The hacker can even configure a GPS device to log the exact location of the wireless network. Information on these wireless networks and their locations can be added to a database for future reference. Several sites on the Internet even compile information that people have gathered from around the world on wireless networks and their locations.

REDUCING THE RISK

There are many ways to reduce the risk associated with packet sniffers. Some of them are easy to implement, while others require complete reengineering of systems and processes.

Use Encryption

The best way to mitigate the risk associated with packet sniffers is to use encryption. Encryption can be deployed at the network level, in the applications, and even at the host level. Exhibit 14-5 lists the "insecure" protocols discussed in the previous section and suggests a "secure" solution that can be deployed.

Security practitioners should be aware of the protocols in use on their networks. They should also be aware of the protocols used to connect to and transfer information outside their network (either over the Internet or via extranet connections). A quick way to determine whether protocols vulnerable to sniffing are being used is to check the rule set on the Internet or extranet firewalls. If insecure protocols are found, the security practitioner should investigate each instance and determine exactly what information is being transferred and how sensitive it is. If the information is sensitive and a more secure alternative exists, the practitioner should recommend and implement it. Often, this requires the


TELECOMMUNICATIONS AND NETWORK SECURITY security practitioner to educate the users on the issues associated with using insecure means to connect to and send information to external parties. IPSec VPNs. A properly configured IPSec VPN can significantly reduce the risk associated with insecure protocols as well. The VPN can be configured from host to host, host to gateway, or gateway to gateway, depending on the environment and its requirements. The VPN “tunnels” the traffic in a secure fashion that prevents an attacker from sniffing the traffic as it traverses the network. Keep in mind, however, that even if a VPN is installed, an attack could still compromise the endpoints of the VPN and have access to the sensitive information directly on the host. This highlights the increased need for strong host security on the VPN endpoint, whether it is a Windows client connecting from a home network or a VPN router terminating multiple VPN connections.

Use Strong Authentication

Because passwords are vulnerable to brute-force attacks or to outright sniffing off the network, an obvious risk mitigation is to stop relying on passwords alone and use a stronger authentication mechanism. This could involve using Kerberos, token cards, smart cards, or even biometrics. The security practitioner must take into consideration the business requirements and the costs associated with each solution before determining which authentication method suits a particular system, application, or enterprise as a whole. By configuring a system to use a strong authentication method, the vulnerability of discovered passwords is no longer an issue.

Patches and Updates

To capture packets on the network, a hacker must first compromise a host (assuming the hacker does not have physical access). If all the latest patches have been applied to the hosts, the risk of someone compromising a host and installing a capture tool will be significantly reduced.

Secure the Wiring Closets

Because physical access is one way to access a network, make sure your wiring closets are locked. It is a very simple process to ensure the doors to the wiring closets are secured. A good attack and penetration test will often begin with a check of the physical security and of the security of the wiring closets. If access to a closet is gained and a packet sniffer is set up, a great deal of information can be obtained in short order.

There is an obvious reason why an attack and penetration test might begin this way. If the perimeter network and the remote access into a company


are strong, the physical security may well be the weak link in the chain. A hacker who is intent on gaining access to the network goes through the same thought process. Also, keep in mind that, with the majority of attacks originating from inside the network, you can mitigate the risk of an internal employee using a packet sniffer in a wiring closet by simply locking the doors.

Detecting Packet Sniffers

Another way to reduce the risk associated with packet sniffers is to monitor the monitors, so to speak. This involves running a tool that can detect a host's network interface card running in promiscuous mode. Several tools exist, from simple command-line utilities that tell whether or not a NIC on the local host is running in promiscuous mode, to more elaborate programs such as AntiSniff, which actively scans the network segment looking for other hosts with NICs running in promiscuous mode.

SUMMARY

The sniffer can be a powerful tool in the hands of the network administrator or security practitioner. Unfortunately, it can be equally powerful in the hands of the hacker. Not only are these tools powerful, they are relatively easy to download off the Internet, install, and use. Security practitioners must be aware of the dangers of packet sniffers and must design and deploy security solutions that mitigate the associated risks. Keep in mind that credentials gathered with a packet sniffer on one system can often be used to access other, unrelated systems that share the same username and password.

ABOUT THE AUTHOR

Steve A. Rodgers, CISSP, has been assisting clients in securing their information assets for over six years. Rodgers specializes in attack and penetration testing, security policy and standards development, and security architecture design. He is the co-founder of Security Professional Services (www.securityps.com) and can be reached at [email protected].



AU1518Ch15Frame Page 225 Thursday, November 14, 2002 6:19 PM

Chapter 15

ISPs and Denial-of-Service Attacks

K. Narayanaswamy, Ph.D.

A denial-of-service (DoS) attack is any malicious attempt to deprive legitimate customers of their ability to access services, such as a Web server. DoS attacks fall into two broad categories:

1. Server vulnerability DoS attacks: attacks that exploit known bugs in operating systems and servers. These attacks typically use the bugs to crash programs that users routinely rely upon, thereby depriving those users of their normal access to the services provided by those programs. Examples of vulnerable systems include all operating systems, such as Windows NT or Linux, and various Internet-based services such as DNS, Microsoft's IIS servers, Web servers, etc. All of these programs, which have important and useful purposes, also have bugs that hackers exploit to bring them down or hack into them. This kind of DoS attack usually comes from a single location and searches for a known vulnerability in one of the programs it is targeting. Once it finds such a program, the DoS attack will attempt to crash the program to deny service to other users. Such an attack does not require high bandwidth.

2. Packet flooding DoS attacks: attacks that exploit weaknesses in the Internet infrastructure and its protocols. Floods of seemingly normal packets are used to overwhelm the processing resources of programs, thereby denying users the ability to use those services. Unlike the previous category of DoS attacks, which exploit bugs, flood attacks require high bandwidth in order to succeed. Rather than use the attacker's own infrastructure to mount the attack (which might be easier to detect), the attacker is increasingly likely

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



to carry out attacks through intermediary computers (called zombies) that the attacker has earlier broken into. Zombies are coordinated by the hacker at a later time to launch a distributed DoS (DDoS) attack on a victim. Such attacks are extremely difficult to trace and defend against on the present-day Internet. Most zombies come from home computers, universities, and other vulnerable infrastructures. Often, the owners of the computers are not even aware that their machines are being co-opted in such attacks. The hacker community has invented numerous scripts to make it convenient for those interested in mounting such attacks to set up and orchestrate the zombies. Many references are available on this topic.1–4

Throughout, we will use the term DoS attacks to mean all denial-of-service attacks, and DDoS to mean flood attacks as described above. As with most things in life, there is good news and bad news in regard to DDoS attacks. The bad news is that there is no "silver bullet" in terms of technology that will make the problem disappear. The good news, however, is that with a combination of common-sense processes and practices and, in due course, appropriate technology, the impact of DDoS attacks can be greatly reduced.

THE IMPORTANCE OF DDoS ATTACKS

Many wonder why network security problems, and DDoS problems in particular, have seemingly increased suddenly in seriousness and importance. The main reason, ironically, is the unanticipated growth and success of ISPs. The rapid growth of affordable, high-bandwidth connection technologies (such as DSL, cable modems, etc.) offered by various ISPs has brought every imaginable type of customer to the fast Internet access arena: corporations, community colleges, small businesses, and the full gamut of home users. Unfortunately, people who upgrade their bandwidth do not necessarily upgrade their knowledge of network security at the same time; all they see is what they can accomplish with speed. Few foresee the potential security dangers until it is too late.

As a result, the Internet has rapidly become a high-speed network with depressingly low per-site security expertise. Such a network is almost an ideal platform to exploit in various ways, including the mounting of DoS attacks. Architecturally, ISPs are ideally situated to play a crucial role in containing the problem, although they have traditionally not been proactive on security matters.

A recent study by the University of San Diego estimates that there are over 4000 DDoS attacks every week.5 Financial damages from the infamous February 2000 attacks on Yahoo, CNN, and eBay were estimated to be around $1 billion.6 Microsoft, Internet security watchdog CERT, the Department of Defense, and even the White House have been targeted by attackers.


Of course, these are high-profile installations with some options when it comes to responses. Stephen Gibson documents, at www.scr.com, how helpless the average enterprise might be to ward off DDoS attacks. There is no doubt that DoS attacks are becoming more numerous and deadly.

WHY IS DDoS AN ISP PROBLEM?

When major corporations suffer the kind of financial losses just described, and given the litigious culture and the fanatically deterministic American psyche that requires a scapegoat (if not a reasonable explanation) for every calamity, someone, rightly or wrongly, is eventually going to pay dearly. The day is not far off when, in the wake of a devastating DDoS attack, an enterprise will pursue litigation against the owner of the infrastructure that could (arguably) have prevented the attack with due diligence. A recent article explores this issue further from the legal perspective of an ISP.7

Our position is not so much that you need to handle DDoS problems proactively today; however, we do believe you would be negligent not to examine the issue immediately from a cost/benefit perspective. Even if you have undertaken such an assessment already, you may need to revisit the topic in light of new developments and the state of the computing world after September 11, 2001.

The Internet has a much-ballyhooed, beloved, open, chaotic, laissez-faire philosophical foundation. This principle permeates the underlying Internet architecture, which is optimized for speed and ease of growth and which, in turn, has facilitated the spectacular explosion and evolution of this infrastructure. For example, thus far, the market has prioritized issues of privacy, speed, and cost over other considerations such as security. However, changes may be afoot, and ISPs should pay attention. Most security problems at various enterprise networks are beyond the reasonable scope of ISPs to fix.
However, the DDoS problem is technically different. Individual sites cannot effectively defend themselves against DDoS attacks without some help from their infrastructure providers. When under DDoS attack, the enterprise cannot by itself block out the attack traffic or clear upstream congestion to allow some of its desirable traffic to get through. Thus, the very nature of the DDoS problem virtually compels the involvement of ISPs. The best possible outcome for ISPs is to jump in and shape the emerging DDoS solutions voluntarily, with dignity and concern, rather than being perceived as having been dragged, kicking and screaming, into a dialog they do not want.

Uncle Sam is weighing in heavily on DDoS as well. In December 2001, the U.S. Government held a DDoS technology conference in Arlington, Virginia, sponsored by the Defense Advanced Research Projects Agency (DARPA)


and the Joint Task Force–Central Network Operations. Fourteen carefully screened companies were selected to present their specific DDoS solutions to the government. Newly designated cybersecurity czar Richard Clarke, who keynoted the conference, stressed the critical importance of DDoS, noting that the administration views this problem as a threat to the nation's infrastructure and that protecting the Internet infrastructure is indeed part of the larger problem of homeland security. The current Republican administration, one might safely assume, is disposed toward deregulation and letting the market sort out the DDoS problem. In the reality of post-September 11 thinking, however, it is entirely conceivable that ISPs will eventually be forced to contend with government regulations mandating what they should provide by way of DDoS protection.

WHAT CAN ISPs DO ABOUT DDoS ATTACKS?

When it comes to DDoS attacks, security becomes a two-way street. Not only must the ISP focus on providing as much protection as possible against incoming DDoS attacks on its customers, but it must also do as much as possible to prevent outgoing DDoS attacks from being launched from its own infrastructure against others. All these measures are feasible and cost very little in today's ISP environment. Minimal measures such as these can significantly reduce the impact of DDoS attacks on the infrastructure, perhaps staving off more draconian measures mandated by the government.

An ISP today must have the ability to contend with the DDoS problem at different levels:

• Understand and implement the best practices to defend against DDoS attacks.
• Understand and implement the necessary procedures to help customers during DDoS attacks.
• Assess DDoS technologies to see if they can help.

We address each of these major areas below.
Defending against DDoS Attacks

In discussing what an ISP can do, it is important to distinguish the ISP's own infrastructure (its routers, hosts, servers, etc.), which it fully controls, from the infrastructure of the customers who lease its Internet connectivity, which the ISP cannot, and should not, control. Most of the measures we recommend for ISPs are also appropriate for their customers to carry out. The extent to which ISPs can encourage or enable their customers to follow these practices will directly affect the number of DDoS attacks.


Step 1: Ensure the Integrity of the Infrastructure. An ISP plays a critical role in the Internet infrastructure. It is, therefore, very important for ISPs to ensure that their own routers and hosts are resistant to hacker compromise. This means following all the necessary best practices to protect these machines from break-ins and intrusions of any kind. Passwords for user and root accounts must be protected with extra care, and old accounts must be rendered null and void as soon as possible.

In addition, ISPs should ensure that their critical servers (DNS, Web, etc.) are always current on software patches, particularly security-related ones. These programs will typically have bugs that the vendor eliminates through new patches. When providing services such as Telnet, FTP, etc., ISPs should consider the secure versions of these protocols, such as SSH and SCP. The latter use encryption to set up secure connections, making it more difficult for hackers using packet sniffing tools to acquire usernames and passwords, for example. ISPs can do little to ensure that their users are as conscientious about these matters as they ought to be. However, providing users with the knowledge and tools necessary to follow good security practices themselves will be very helpful.

Step 2: Resist Zombies in the Infrastructure. Zombies are created by hackers who break into computers. Although by no means a panacea, tools such as intrusion detection systems (IDSs) provide some amount of help in detecting when parts of an infrastructure have become compromised. These tools vary widely in functionality, capability, and cost, and they have utility in securing computing assets beyond DDoS protection. (A good source on this topic is Note 8.) Certainly, larger customers of the ISP with significant computing assets should also consider such tools.

Where possible, the ISP should provide users (e.g., home users or small businesses) with the necessary software (e.g., downloadable firewalls) to help them. Many ISPs are already providing free firewalls, such as ZoneAlarm, with their access software. Such firewalls can be set up to maximize restrictions on the customers' computers (e.g., blocking services that typical home computers are never likely to provide). Simple measures like these can greatly improve the ability of these computers to resist hackers.

Most zombies can now be discovered and removed from a computer by traditional virus scanning software from McAfee, Symantec, and other vendors. It is important to scan not just programs but also any documents with executable content (such as macros). In other words, everything on a disk requires scanning. The only major problem with all virus scanning regimes is that they currently use databases that have signatures


of known viruses, and these databases require frequent updates as new viruses get created. As with firewalls, at least in cases where users clearly can use the help, the ISP could try bundling its access software, if any, with appropriate virus scanning software and make it something the user has to contend with before getting on the Internet.

Step 3: Implement Appropriate Router Filters. Many DDoS attacks (e.g., Trinoo, Tribal Flood, etc.) rely on source address spoofing, an underlying vulnerability of the Internet protocols whereby the sender of a packet can conjure up a source address other than its actual address. In fact, the protocols allow packets to have completely fabricated, nonexistent source addresses, and several attacks rely on this weakness. This makes attacks much more difficult to trace, because one cannot determine the true source just by examining the packet contents; the attacker controls them.

There is no legitimate reason why an ISP should forward outgoing packets that do not have source addresses from its known legitimate range of addresses. It is relatively easy, given present-day routers, to filter outgoing packets at the border of an ISP that do not have valid source addresses. This is called ingress filtering, described in more detail in RFC 2267. Routers can also implement egress filtering at the point where traffic enters the ISP to ensure that source addresses are valid to the extent possible (e.g., source addresses cannot be from the ISP, packets from specific interfaces must match expected IP addresses, etc.). Note that such filters do not eliminate all DDoS attacks; however, they do force attackers to use methods that are more sophisticated and that do not rely on ISPs forwarding packets with obviously forged source addresses.

Many ISPs also have blocks of IP addresses set aside that will never be the source or destination of Internet traffic (see RFC 1918). These are addresses for traffic that should never reach the Internet. The ISP should not accept traffic with these addresses as a destination, nor should it allow outbound traffic from the IP addresses set aside in this manner.

Step 4: Disable Facilities You May Not Need. Every port that you open (albeit to provide a legitimate service) is a potential gate for hackers to exploit. Therefore, ISPs, like all enterprises, should ensure they block any and all services for which there is no need. Customer sites should certainly be provided with the same recommendations.
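The address checks behind Step 3 amount to simple membership tests on address blocks. A sketch with made-up ranges (the "customer block" here is an arbitrary illustrative prefix, not a real allocation):

```python
import ipaddress

# Hypothetical ISP allocation, plus the RFC 1918 private ranges that
# should never appear as sources on the public Internet.
CUSTOMER_BLOCK = ipaddress.ip_network("203.0.113.0/24")
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def forward_outbound(src: str) -> bool:
    """RFC 2267-style check: forward an outbound packet only if its
    source address belongs to the ISP's own legitimate range."""
    return ipaddress.ip_address(src) in CUSTOMER_BLOCK

def accept_inbound(src: str) -> bool:
    """Drop inbound traffic claiming an RFC 1918 source, or a source
    inside the ISP's own block (an obvious spoof)."""
    addr = ipaddress.ip_address(src)
    if addr in CUSTOMER_BLOCK:
        return False
    return not any(addr in net for net in RFC1918)

assert forward_outbound("203.0.113.45")       # legitimate customer source
assert not forward_outbound("198.51.100.7")   # spoofed: not our block
assert not accept_inbound("192.168.1.9")      # RFC 1918 bogon
assert accept_inbound("198.51.100.7")
```

In practice these tests are expressed as access lists on the border routers rather than in software, but the logic is the same.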

You should evaluate the following features to see if they are enabled and what positive value you get from their being enabled in your network:


• Directed broadcast. Some DDoS attacks rely on the ability to broadcast packets to many different addresses to amplify the impact of their handiwork. Directed broadcast is a feature that should not be needed for inbound traffic on border routers at the ISP.

• Source routing. This feature enables the sender of a packet to specify an ISP address through which the packet must be routed. Unless there is a compelling reason not to, this feature should be disabled, because compromised computers within the ISP infrastructure can exploit it to become more difficult to locate during attacks.

Step 5: Impose Rate Limits on ICMP and UDP Traffic. Many DDoS attacks exploit the vulnerability of the Internet whereby the entire bandwidth can be filled with undesirable packets of different descriptions. ICMP (Internet Control Message Protocol, used by ping) packets and User Datagram Protocol (UDP) packets are examples of this class. You cannot completely eliminate these kinds of packets, but neither should you allow the entire bandwidth to be filled with them.

The solution is to use your routers to specify rate limits for such packets. Most routers come with simple mechanisms, such as class-based queuing (CBQ), which you can use to specify the bandwidth allocation for different classes of packets. You can use these facilities to limit the rates allocated to ICMP, UDP, and other kinds of packets that have no legitimate reason to hog all available bandwidth.

Assisting Customers during a DDoS Attack

It is never wise to test a fire hydrant during a deadly blaze. In a similar manner, every ISP will do well to think through its plans should one of its customers become the target of DDoS attacks. In particular, this will entail full understanding and training of the ISP's support personnel in as many (preferably all) of the following areas as possible:

• Know which upstream providers forward traffic to the ISP. ISP personnel need to be familiar with the various providers with whom the ISP has Internet connections and the specific service level agreements (SLAs) with each, if any. During a DDoS attack, bad traffic will typically flow from one or more of these upstream providers, and the options an ISP has to help its customers will depend on the specifics of its agreements with them.
• Be able to identify and isolate traffic to a specific provider. Once a customer calls during a DDoS attack directed at his infrastructure, the ISP should be able to determine the source of the bad traffic. All personnel should be trained in the necessary diagnostics to do so. Customers will typically call with the IP addresses they see on the attack traffic. While these might not identify the actual source of the attack, because of source spoofing, they should help the ISP locate which provider is forwarding the bad traffic.
• Be able to filter or limit the rate of traffic from a given provider. Often, the ISP will be able to contact the upstream provider to either filter or limit the rate of attack traffic. If the SLA does not allow for this, the ISP can consider applying such a filter at its own router to block the attack traffic.
• Have reliable points of contact with each provider. The DDoS response of an ISP is only as good as its personnel and their knowledge of what to do and whom to contact at their upstream providers. Once again, such contacts cannot be cultivated after an attack has occurred; it is better to have this information in advance.

Holding DDoS attack exercises to ensure that people can carry out their duties during such attacks is the best way to make sure that everyone knows what to do to help the customer.

Assessing DDoS Technologies

Technological solutions to the DDoS problem are intrinsically complex. DDoS attacks are a symptom of the vulnerabilities of the Internet, and a single site is impossible to protect without cooperation from the upstream infrastructure. New products are indeed emerging in this field; however, if you are looking to eliminate the problem by buying an affordable rack-mountable panacea that keeps you in a safe cocoon, you are fresh out of luck. Rather than give you a laundry list of all the vendors, I am going to categorize these products by the problems they solve, their features, and their functionality so that you can compare apples to apples. Still, the comparison can be a difficult one because various products do different things, and more vendors are continually entering this emerging niche market.

Protection against Outgoing DDoS Attacks. Unlike virus protection tools, which are very general in focus, these tools are geared specifically to finding DoS worms and scripts.
There are basically two kinds of products in this category.

Host-Based DDoS Protection. Such protection typically prevents hosts from being taken over as zombies in a DDoS attack. These tools work in one of two major ways: (1) signature analysis, which, like traditional virus scanners, stores a database of known scripts and patterns and scans for known attack programs; and (2) behavior analysis, which monitors key system parameters for the behavior underlying the attacks (rather than for the specific attack programs) and aborts the programs and processes that induce the underlying bad behavior.
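The two approaches can be contrasted in a toy sketch. The signature list below names trinoo, TFN2K, and Stacheldraht, well-known DDoS agents of that era; the behavioral thresholds are invented for illustration.

```python
# (1) Signature analysis: match running processes against known attack tools.
KNOWN_ATTACK_SIGNATURES = {"trinoo", "tfn2k", "stacheldraht"}

def signature_scan(process_names):
    """Return the processes whose names match a known DDoS-agent signature."""
    return [p for p in process_names if p.lower() in KNOWN_ATTACK_SIGNATURES]

# (2) Behavior analysis: flag the symptoms of flooding, whatever the tool.
def behavior_flag(pkts_per_sec: int, distinct_dests: int, spoofed_src_seen: bool,
                  pps_limit: int = 5000, dest_limit: int = 500) -> bool:
    """True if the host's traffic pattern looks like an outgoing flood."""
    return (pkts_per_sec > pps_limit
            or distinct_dests > dest_limit
            or spoofed_src_seen)

print(signature_scan(["sshd", "Stacheldraht", "cron"]))  # ['Stacheldraht']
print(behavior_flag(20000, 10, False))                   # True: packet flood
```

Signature scanning misses tools it has never seen; behavior analysis can catch those but risks false positives, which is why products typically combine the two.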

Established vendors of virus-scanning products, such as McAfee, Symantec, and others, have extended their purview to include DoS attacks. Other vendors provide behavior-analytic DDoS protection that essentially detects and prevents DDoS behavior emanating from a host. The major problem with host-based DDoS protection, from an ISP's perspective, is that the ISP cannot force its customers to use such tools or to scan their disks for zombies.

Damage-Control Devices. A few recent products (such as Captus' Captio and Cs3, Inc.'s Reverse Firewall9,10) focus on containing the harm that DDoS attacks can do in the outgoing direction. They restrict the damage from DDoS to the smallest possible network. These devices can be quite useful in conjunction with host-based scanning tools. Note that damage-control devices do not actually prevent an infrastructure from becoming compromised; however, they do provide notification that bad traffic is leaving your network, along with its precise origin. Moreover, they give you time to act by throttling the attack at the perimeter of your network and sending you a notification. ISPs could consider using these devices as insurance to insulate themselves from the damage bad customers can do to them as infrastructure providers.

Protection against Incoming Attacks. As mentioned before, defending against incoming attacks at a particular site requires cooperation from the upstream infrastructure. This makes DDoS protection products quite complex. Moreover, various vendors have realized the necessary cooperation in very different ways. A full treatment of all of these products is well beyond the scope of this chapter. However, here are several issues you need to consider as an ISP when evaluating these products:

• Are the devices inline or offline? An inline device will add, however minimally, to latency. Some of the devices are built using hardware in an effort to reduce latency. Offline devices, while they do not have that problem, do not have the full benefit of looking at all of the traffic in real time. This could affect their ability to defend effectively.
• Do the devices require infrastructure changes, and where do they reside? Some of the devices either replace, or deploy alongside, existing routers and firewalls. Other technologies require replacement of the existing infrastructure. Some of the devices need to be close to the core routers of the network, while most require placement along upstream paths from the site being protected.
• How do the devices detect DDoS attacks, and what is the likelihood of false positives? The degree of sophistication of the detection mechanism and its effectiveness in indicating real attacks is all-important in any security technology. After all, a dog that barks the entire day does protect you from some burglars — but you just might stop listening to its warnings! Most of the techniques compare actual traffic to stored profiles of attacks or of "normal" traffic, and a variety of signature-based heuristics are applied to detect attacks. The jury is still out on how effective such techniques will be in the long run.
• How do the devices know where the attack is coming from? A major problem in dealing effectively with DDoS attacks is knowing, with any degree of certainty, the source of the attacks. Because of source address spoofing, packets on the Internet do not necessarily originate where they say they do. What the technologies have to figure out is where in the upstream infrastructure the attack traffic is flowing from, because it is the routers along the attack path that must cooperate to defend against the attack. Some of the approaches require that their devices communicate in real time to form an aggregate picture of where the attack is originating.
• What is the range of responses the devices will take, and are you comfortable with them? Any DDoS defense must minimally stop the attack from reaching the intended victim, thereby preventing the victim's computing resources from deteriorating or crashing. However, the real challenge of any DDoS defense is to find ways for legitimate customers to get through while penalizing only the attackers. This turns out to be the major technical challenge in this area. The most common response is to install appropriate filters and rate limits that push the attack traffic to the outer edge of the realm of control of these devices. At the present time, all the devices that provide DDoS defense fall into this category. How effective they will be remains to be seen.

The products mentioned here are quite pricey even though the technologies are still being tested under fire. DDoS will have to be a very important threat in order for smaller ISPs to feel justified in investing their dollars in these devices.
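The profile-comparison style of detection described above can be illustrated with a toy deviation test; the baseline figures and threshold below are invented, and real products build far richer, per-protocol profiles.

```python
import statistics

# Hypothetical baseline: packets/sec observed in normal five-minute intervals.
baseline_pps = [980, 1020, 1010, 995, 1005, 990, 1015, 1000]

mean = statistics.mean(baseline_pps)
stdev = statistics.pstdev(baseline_pps)

def is_anomalous(current_pps: float, k: float = 5.0) -> bool:
    """Flag traffic that deviates from the stored profile by more than k std-devs."""
    return abs(current_pps - mean) > k * stdev

print(is_anomalous(1010))    # False: within normal variation
print(is_anomalous(250000))  # True: flood-scale deviation
```

The hard part in practice is not this arithmetic but choosing profiles and thresholds that flag real floods without barking at every legitimate traffic spike.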
Finally, many of the approaches are proprietary in nature, so side-by-side technical comparisons are difficult to conduct. Some industry publications do seem to have tested some of these devices in various ways. A sampling of vendors and their offerings, applying the above yardsticks, is provided here:

• Arbor Networks (www.arbornetworks.com): offline devices, near core routers, anomaly-based detection; the source is tracked by communication between devices, and the defense is typically the positioning of a filter at the router where the bad traffic enters the network.
• Asta Networks (www.astanetworks.com): offline devices that work alongside routers within a network and upstream, signature-based detection; the source is tracked by upstream devices, and the defense is to use filters at upstream routers.
• Captus Networks (www.captusnetworks.com): inline device used to throttle incoming or outgoing attacks; uses windowing to detect non-TCP traffic and does not provide ways for legitimate customers to get in; works as a damage-control device for outgoing attacks.
• Cs3, Inc. (www.cs3-inc.com): inline devices, modified routers, and firewalls; routers mark packets with path information to provide fair service, and firewalls throttle attacks; the source of the attack is provided by the path information, and upstream neighbors are used to limit attack traffic when requested; the Reverse Firewall is a damage-control device for outgoing attacks.
• Mazu Networks (www.mazunetworks.com): inline devices at key points in the network; deviations from a stored historical traffic profile indicate an attack; the source of the attack is pinpointed by communication between devices, and defense is provided by using filters to block out the bad traffic.
• Okena (www.okena.com): host-based system that has extended intrusion detection facilities to provide protection against zombies; it is a way to keep one's infrastructure clean but is not intended to protect against incoming attacks.

IMPORTANT RESOURCES

Finally, the world of DoS, as indeed the world of Internet security, is dynamic. If your customers are important to you, you should have people who stay on top of the latest threats and countermeasures. Excellent resources in the DoS security arena include:

• Computer Emergency Response Team (CERT) (www.cert.org): a vast repository of wisdom about all security-related problems, with a growing section on DoS attacks; you should monitor this site regularly to find out what you need to know about this area. This site has a very independent and academic flavor. Funded by the Department of Defense, this organization is likely to play an even bigger role in putting out alerts and other information on DDoS.
• System Administration, Networking and Security (SANS) Institute (www.sans.org): a cooperative forum in which you can instantly access the expertise of over 90,000 professionals worldwide. Unlike CERT, it is an organization of industry professionals with a decidedly practical orientation. It offers courses, conferences, seminars, and white papers on various topics that are well worth the investment. It also provides alerts and analyses of security incidents through incidents.org, a related facility.


Notes

1. Houle, K. and Weaver, G., "Trends in Denial of Service Technology," CERT Coordination Center, October 2001, http://www.cert.org/archive/pdf/DOS_trends.pdf.
2. Myers, M., "Securing against Distributed Denial of Service Attacks," Client/Server Connection, Ltd., http://www.cscl.com/techsupp/techdocs/ddossamp.html.
3. Paul, B., "DDOS: Internet Weapons of Mass Destruction," Network Computing, Jan. 1, 2001, http://www.networkcomputing.com/1201/1201f1c2.html.
4. Harris, S., "Denying Denial of Service," Internet Security, Sept. 2001, http://www.infosecuritymag.com/articles/september01/cover.shtml.
5. Lemos, R., "DoS Attacks Underscore Net's Vulnerability," CNETnews.com, June 1, 2001, http://news.cnet.com/news/0-1003-200-6158264.html?tag=mn_hd.
6. Yankee Group News Releases, Feb. 10, 2000, http://www.yankeegroup.com/webfolder/yg21a.nsf/press/384D3C49772576EF85256881007DC0EE?OpenDocument.
7. Radin, M.J. et al., "Distributed Denial of Service Attacks: Who Pays?," Mazu Networks, http://www.mazunetworks.com/radin-es.html.
8. SANS Institute Resources, Intrusion Detection FAQ, Version 1.52, http://www.sans.org/newlook/resources/IDFAQ/ID_FAQ.htm.
9. Savage, M., "Reverse Firewall Stymies DDOS Attacks," Computer Reseller News, Dec. 28, 2001, http://www.crn.com/sections/BreakingNews/breakingnews.asp?ArticleID=32305.
10. Desmond, P., "Cs3 Mounts Defense against DDOS Attacks," eComSecurity.com, Oct. 30, 2001, http://www.ecomsecurity.com/News_2001-10-30_DDos.cfm.

Further Reading

Singer, A., "Eight Things that ISPs and Network Managers Can Do to Help Mitigate DDOS Attacks," San Diego Supercomputer Center, http://security.sdsc.edu/publications/ddos.shtml.

ABOUT THE AUTHOR

Dr. K. Narayanaswamy, Ph.D., Chief Technology Officer and co-founder, Cs3, Inc., is an accomplished technologist who has successfully led the company's research division since inception. He was the principal investigator of several DARPA and NSF research projects that resulted in the company's initial software product suite, and he leads the company's current venture into DDoS and Internet infrastructure technology. He holds a Ph.D. in Computer Science from the University of Southern California.


AU1518Ch16Frame Page 237 Thursday, November 14, 2002 6:18 PM

Domain 3

Security Management Practices


This domain is typically one of the largest in previous Handbooks, and this volume is no exception. It is often said that information security is far more an infrastructure of people and process than of technology. The chapters found here truly reflect this reality.

In this domain, we find chapters that address the security function within an organization. Much if not all of the success of a security program can be attributed to organizational effectiveness, which spans the continuum from how much support executive management lends to the program to how well each employee acts on his or her individual accountability to carry out the program's intent. However, we also see that there is no one-size-fits-all solution. Within this domain, we read various opinions on where the security function should report, strategies for partnering with other risk management functions, how to develop and protect an information security budget, methods for encouraging the adoption of security throughout the enterprise, and ways in which people — the most critical resource — can be leveraged to achieve security success.

An effective security program must be grounded in clearly stated and communicated policy; however, as is pointed out here, policy development cannot be treated as a one-time, finished effort. Policies have life cycles; and, as the center posts of high-quality security programs, they articulate the fundamental principles on which the organization stands. Policies also set expectations for critical overarching issues such as ownership, custodianship, and classification of information; for people issues such as employees' expectation of privacy and the appropriate use of computing resources; and for technical matters such as virus protection and electronic mail security.

This volume of the Handbook certainly reflects a sign of the times.
Although in the past outsourcing security was considered taboo, many organizations now acknowledge that good security professionals are rare, security is far from easy, economies of scale can be realized, and synergies can be gained. Outsourcing some or all of security administration and security operations is doable — a successful strategy when done properly. Therefore, we feature several viewpoints on contracting with external organizations to manage all or parts of the security function.



Chapter 16

The Human Side of Information Security

Kevin Henry, CISA, CISSP

We often hear that people are the weakest link in any security model. That statement brings to mind the old adage that a chain is only as strong as its weakest link. Both of these statements may very well be true; however, they can also be false and misleading. Throughout this chapter we are going to define the roles and responsibilities of people, especially in relation to information security. We are going to explore how people can become our strongest asset and even act as a compensating strength for areas where mechanical controls are ineffective. We will look briefly at the training and awareness programs that can give people the tools and knowledge to increase security effectiveness rather than being regarded as a liability and a necessary evil.

THE ROLE OF PEOPLE IN INFORMATION SECURITY

First, we must always remember that systems, applications, products, etc., were created for people — not the other way around. As marketing personnel know, the end of any marketing plan is when a product or service is purchased for, and by, a person. All of the intermediate steps are only support and development for the ultimate goal of providing a service that a person is willing, or needs, to purchase. Even though many systems in development are designed to reduce labor costs, streamline operations, automate repetitive processes, or monitor behavior, the system itself will still rely on effective management, maintenance upgrades, and proper use by individuals. Therefore, one of the most critical and useful shifts in perspective is to understand how to get people committed to and knowledgeable about their roles and responsibilities, as well as the importance of creating, enforcing, and committing to a sound security program.

Properly trained and diligent people can become the strongest link in an organization's security infrastructure. Machines and policy tend to be static and limited by historical perspectives.
People can respond quickly, absorb new data and conditions, and react in innovative and emotional ways to new situations. However, while a machine will enforce a rule it does not understand, people will not support a rule they do not believe in. The key to strengthening the effectiveness of security programs lies in education, flexibility, fairness, and monitoring.

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC

THE ORGANIZATION CHART

A good security program starts with a review of the organization chart. From this administrative tool, we learn hints about the structure, reporting relationships, segregation of duties, and politics of an organization. When we map out a network, it is relatively easy to slot each piece of equipment into its proper place, show how data flows from one place to another, show linkages, and expose vulnerabilities. It is the same with an organization chart. Here we can see the structure of an organization, who reports to whom, whether authority is distributed or centralized, and who has the ability or placement to make decisions — both locally and throughout the enterprise.

Why is all of this important? In some cases, it is not. In rare cases, an ideal person in the right position is able to overcome some of the weaknesses of a poor structure through strength of personality. However, in nearly all cases, people fit into their relative places in the organizational structure and are constrained by the limitations and boundaries placed around them. For example, a security department or an emergency planning group may be buried deep within one silo or branch of an organization. Unable to speak directly with decision makers or financial approval teams, or to influence other branches, their efforts become more or less philosophical and ineffective. In such an environment the true experts often leave in frustration and are replaced by individuals who thrive on meetings and may have limited vision or goals.

DO WE NEED MORE POLICY?
Many recent discussions have centered on whether the information security community needs more policy or should simply get down to work. Is all of this talk about risk assessment, policy, roles and responsibilities, disaster recovery planning, and the other soft issues that are part of an information security program merely expending time and effort with few results? In most cases, this is probably true. Information security must be a cohesive, coordinated effort, much like planning any other large project. A house can be built without a blueprint, but endless copies of blueprints and modifications will not build a house. Proper planning and methodologies, however, will usually result in a project that is on time, meets customer needs, has a clearly defined budget, stays within that budget, and is almost always run at a lower stress level. As when a home is built, the blueprints almost always change and modifications are made; together with the physical work, the administrative effort keeps the project on track and schedules the various events and subcontractors properly.

Many firms have information security programs that are floundering for lack of vision, presentation, and coordination. For most senior managers, information security is a gaping dark hole into which vast amounts of cash are poured with few outcomes except further threats, fear-mongering, and unseen results. To build an effective program requires vision, delegation, training, technical skills, presentation skills, knowledge, and often a thick skin — not necessarily in that order.

The program starts with a vision. What do we want to accomplish? Where would we like to be? Who can lead and manage the program? How can we stay up to date, and how can we do it with limited resources and skills? A vision is the perception we have of the goal we want to reach. A vision is not a fairy tale but a realistic and attainable objective with clearly defined parameters. A vision is not necessarily a roadmap or a listing of each component and tool we want to use; rather, it is a strategy and picture of the functional benefits and results that would be provided by an effective implementation of the strategic vision.

How do we define our vision? This is a part of policy development, adherence to regulations, and risk assessment. Once we understand our security risks, objectives, and regulations, we can begin to define a practical approach to addressing these concerns.

A recent seminar was held with security managers and administrators from numerous agencies and organizations. The facilitator asked the group to define four major technical changes on the horizon that would affect their agencies. Even among this knowledgeable group, the responses indicated that most were unaware of the emerging technologies.
They were knowledgeable about current developments and new products but were unaware of dramatic changes to existing technologies that would certainly have a major impact on their operations and technical infrastructures within the next 18 months. This is a weakness in many organizations: strategic planning has been totally overwhelmed by the need to do operational and tactical planning.

Operational, or day-to-day, planning is primarily a response mechanism — how to react to today's issues. This is kindly referred to as crisis management; however, in many cases the debate is whether the managers are managing the crisis or the crisis is managing the managers.

Tactical planning is short- to medium-term planning, sometimes defined as covering a period of up to six months. Tactical planning means forecasting developments to existing strategies, upgrades, and operational process changes, and it involves understanding the growth, use, and risks of the environment. Good tactical plans prevent performance impacts from over-utilization of hardware resources, loss of key personnel, and market changes. Once tactical planning begins to falter, the impact is felt on operational activity and planning within a short time frame.

Strategic planning was once called long-term planning, but that term is relative to the pace of change and volatility of the environment. Strategic planning is preparing for totally new approaches and technologies. New projects, marketing strategies, new risks, and economic conditions are all part of a good strategic plan. Strategic planning looks ahead to entirely new solutions for current and future challenges — seeing the future and how the company or organization can poise itself to be ready to adopt new technologies. A failure to have a strategic plan results in investment in technologies that are outdated, have a short life span, are ineffective, and do not meet the expectations of the users, and it often results in a lack of confidence in the information technology or security department on the part of senior management (and especially the user groups).

An information security program should not be merely a fire-fighting exercise; yet for many companies, that is exactly what they are busy with. Many system administrators are averaging more than five patch releases a week for the systems for which they are responsible. How can they possibly keep up and test each new patch to ensure that it does not introduce other problems? Numerous patches have been found to contain errors or weaknesses that affect other applications or systems.
In October 2001, anti-virus companies were still reporting that the LoveLetter virus accounted for 2.5 percent of all help desk calls — more than a year after patches were available to prevent infection.1 What has gone wrong? How did we end up in the position we are in today? The problem is that no one person can keep up with this rapidly growing and developing field. Here, therefore, is one of the most critical reasons for delegation: the establishment of the principles of responsibility and accountability in the correct departments and with the proper individuals.

Leadership and placement of the security function is an ongoing and never-to-be-resolved debate. There is no one-size-fits-all answer; the core concern, however, is whether the security function has the influence and authority it needs to fulfill its role in the organization. The role of security is to inform, monitor, lead, and enforce best practice. As we look further at each individual role and responsibility in this chapter, we will define some methods of passing on information or awareness training.

SECURITY PLACEMENT

The great debate is where the security department should reside within an organization. Several historical factors apply to this question. Until recently, physical security was often either outsourced or considered a less-skilled department. That was suitable when security consisted primarily of locking doors and patrolling hallways. Should this older physical security function be merged into the technical and cyber-security group? To use our earlier analogy of security as a chain, with the risk that one weak link may have a serious impact on the entire chain, it is probable that combining the functions of physical and technical security is appropriate. Physical access to equipment presents a greater risk than almost any other vulnerability.

The trend to incorporate security, risk management, business continuity, and sometimes even audit under one group led by a chief risk officer is recognition both of the importance of these various functions and of the need for these groups to work collaboratively to be effective. The chief risk officer (CRO) is usually a member of the senior management team. From this position, the CRO can ensure that all areas of the organization are included in risk management and disaster recovery planning. This is an extremely accountable position. The CRO must have a team of diligent and knowledgeable leaders who can identify, assess, analyze, and classify risks, data, legislation, and regulation. They must be able to convince, facilitate, coordinate, and plan so that results are obtained, workable strategies become tactical plans, and all areas and personnel are aware, informed, and motivated to adhere to ethics, best practices, policy, and emergency response.
As with so many positions of authority, and especially in an area where most of the work is administrative, such as audit, business continuity planning, and risk management, the risk of gathering a team of paper pushers and "yes men" is significant. The CRO must resist this risk by encouraging the leaders of the various departments to keep each other sharp, continue raising the bar, and strive for greater value and benefits.

THE SECURITY DIRECTOR

The security director should be able to coordinate the two areas of physical and technical security. This person has traditionally had a law enforcement background, but these days it is important that he or she also have a good understanding of information systems security. This person should ideally hold a certification such as the CISSP (Certified Information Systems Security Professional, administered by (ISC)2 [www.isc2.org]) and have experience in investigation and interviewing techniques. Courses provided by companies like John E. Reid and Associates can be an asset for this position.

ROLES AND RESPONSIBILITIES

The security department must have a clearly defined mandate and reporting structure. All of its work should be coordinated with the legal and human resources departments. In extreme circumstances it should have direct access to the board of directors or another responsible body so that it can operate confidentially anywhere within the organization, including within the executive management team. All work performed by security should be kept confidential in order to protect information about ongoing investigations and to avoid erroneously damaging the reputation of an individual or a department. Security should also be a focal point to which all employees, customers, vendors, and the public can refer questions or threats. When employees receive an e-mail that they suspect may contain a virus, or that alleges a virus is on the loose, they should know to contact security for investigation — and not to send the e-mail to everyone they know to warn them of the perceived threat.

The security department enforces organizational policy and is often involved in the crafting and implementation of policy. As such, it needs to ensure that policy is enforceable, understandable, comprehensive, up-to-date, and approved by senior management.

TRAINING AND AWARENESS

The security director has the responsibility of promoting education and awareness as well as staying abreast of new developments, threats, and countermeasures. Association with organizations such as SANS (www.sans.org), ISSA (www.issa.org), and CSI (www.gocsi.org) can be beneficial. There are many other groups and forums, and the director must ensure that the most valuable resources are used to provide alerts, trends, and product evaluations.
The security department must work together with the education and training departments of the organization to target training programs in the most effective manner possible. Training needs to be relevant to the job functions and risks of the attendees. If the training can be imparted in such a way that the attendees learn the concepts and principles without even realizing how much they have learned, it is probably ideal. Training is not a "do not do this" activity — ideally, training does not merely define rules and regulations; rather, it is an activity designed to instill a concept of best practice and understanding in others. Once people realize the reasons behind a guideline or policy, they will be more inclined toward better standards of behavior than they would be if only pressured into a firm set of rules.

Training should be creative, varied, related to real life, and frequent. Incorporating security training into a ten-minute segment of existing management and staff meetings, and including it as a portion of the new-employee orientation process, is often more effective than a day-long seminar once a year.

Using examples can be especially effective. The effectiveness of the training is increased when an actual incident known to the staff can be used as an example of the risks, actions, retribution, and reasoning associated with an action undertaken by the security department. This is often called dragging the wolf into the room: when a wolf has been taking advantage of the farmer, bringing the carcass of the wolf into the open can be a vivid demonstration of the effectiveness of the security program. When there has been an incident or employee misuse, bringing it into the open (in a tactful manner) can be a way to prevent others from making the same mistakes. Training is not fear mongering. The attitude of the trainers should be to raise the awareness and behavior of the attendees to a higher level, not to explain the rules as if to criminals who had "better behave or else."

This is perhaps the greatest strength of the human side of information security. Machines can be programmed with a set of rules, which the machine then enforces mechanically. If someone is able to slightly modify an activity or use a totally new attack strategy, they may be able to circumvent the rules and attack the machine or network.
Also — because machines are controlled by people — when employees feel unnecessarily constrained by a rule, they may well disable it or find a way to bypass the constraint, leaving a large hole in the rule base. Conversely, a security-conscious person may be able to detect an aberration in behavior or even attitude that could be a precursor to an attack, yet is well below the detection level of a machine.

REACTING TO INCIDENTS

Despite our best precautions and controls, incidents will arise that test the strength of our security programs. Many incidents may be false alarms that can be resolved quickly; however, one of the greatest risks with false alarms is the tendency to become immune to them and turn off the alarm trigger. All alarms should be logged and resolved. This may be done electronically, but it should not be overlooked. Alarm rates can be critical indicators of trends or other types of attacks that may be emerging; they can also be indicators of additional training requirements or of employees attempting to circumvent security controls.


One of the tools used by security departments to reduce nuisance or false alarms is the establishment of clipping levels, or thresholds, for alarm activation. The clipping level is the acceptable level of error before the alarm is triggered. Clipping levels are often used for password lockout thresholds and other low-level activity. The establishment of the correct clipping level depends on historical events, the sensitivity of the system, and the granularity of the system security components. Care must be exercised to ensure that clipping levels are not set so high that a low-level attack can be performed without triggering an alarm condition.

Many corporations use a tiered approach to incident response. The initial incident or alarm is recognized by a help desk or low-level technical person, who logs the alarm and attempts to resolve the alarm condition. If the incident is too complex or risky to be resolved at this level, the technician refers the alarm to a higher-level technical expert or to management. It is important for the experts to routinely review the logs of the alarms captured at the initial point of contact so that they can be assured the alarms are being handled correctly, and so that they can detect relationships between alarms that may indicate further problems.

Part of good incident response is communication. To ensure that an incident is handled properly and risk to the corporation is minimized, a manner of distributing information about the incident needs to be established. Pagers, cell phones, and e-mail can all be effective tools for alerting key personnel. The personnel who need to be informed of an incident include senior management, public relations, legal, human resources, and security. Incident handling is the expertise of a good security team.
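The clipping-level idea described above can be sketched in a few lines of code. This is an illustrative example only — class and method names are invented for the sketch, and a real implementation would add time windows and persistent logging:

```python
# Hypothetical sketch of a "clipping level" (alarm threshold) of the kind
# used for password lockouts: errors below the threshold are tolerated but
# still logged; once the count exceeds the clipping level, an alarm fires
# for the tiered incident-response process to pick up.

from collections import defaultdict


class AlarmMonitor:
    def __init__(self, clipping_level=3):
        self.clipping_level = clipping_level  # acceptable errors before alarm
        self.failures = defaultdict(int)
        self.log = []  # every event is logged, even below the threshold

    def record_failure(self, user_id):
        """Log a failed attempt; return True if the alarm should trigger."""
        self.failures[user_id] += 1
        self.log.append((user_id, self.failures[user_id]))
        return self.failures[user_id] > self.clipping_level

    def reset(self, user_id):
        """Clear the counter after a successful login or a resolved alarm."""
        self.failures.pop(user_id, None)
```

Because every failure is appended to the log even when no alarm fires, the review of below-threshold activity recommended above remains possible.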
Proper response will contain the damage; assure customers, employees, and shareholders of adequate preparation and response skills; and provide feedback to prevent future incidents. When investigating an incident, proper care must be taken to preserve the information and evidence collected. The victims or reporting persons should be advised that their report is under investigation. The security team is also responsible for reviewing past incidents and making recommendations for improvements or better controls to prevent future damage. Whenever a business process is affected and the business continuity plan is enacted, security should ensure that all assets are protected and that controls are in place to prevent disruption of recovery efforts.

Many corporations today use managed security service providers (MSSPs) to monitor their systems. The MSSP accumulates the alarms and notifies the corporation when an alarm or event of sufficient seriousness occurs. When using an MSSP, the corporation should still have contracted measurement tools to evaluate the appropriateness and effectiveness of the MSSP's response mechanisms. A competent internal resource must be designated as the contact for the MSSP.

If an incident occurs that requires external agencies or other companies to become involved, a procedure for contacting external parties should be followed. An individual should not contact outside groups without the approval and notification of senior management. Policy must be developed and monitored regarding recent laws that require an employee to alert police forces to certain types of crimes.

THE IT DIRECTOR — THE CHIEF INFORMATION OFFICER (CIO)

The IT director is responsible for the strategic planning and structure of the IT department. Plans for future systems development, equipment purchases, technological direction, and budgets all start from the office of the IT director. In most cases, the help desk, system administrators, development departments, production support, operations, and sometimes even telecommunications departments fall within his or her jurisdiction.

The security department should not report to the IT director, because this can create a conflict between the need for secure processes and the push to develop new systems. Security can often be perceived as a roadblock by operations and development staff, and having both groups report to the same manager can cause conflict and jeopardize security provisioning.

The IT director usually requires a degree in electrical engineering or computer programming and extensive experience in project planning and implementation. This is important for an understanding of the complexities and challenges of new technologies, project management, and staffing concerns. The IT director or CIO should sit on the senior management team and be a part of the strategic planning process for the organization.
Facilitating business operations and requirements, and understanding the direction and technology needs of the corporation, are critical to ensuring that a gulf does not develop between IT and the sales, marketing, or production shops. In many cases, corporations have been limited in their flexibility due to the cumbersome nature of legacy systems or poor communications between IT development and other corporate areas.

THE IT STEERING COMMITTEE

Many corporations, agencies, and organizations spend millions of dollars per year on IT projects, tools, staff, and programs and yet do not realize adequate benefits or return on investment (ROI) for the money spent. In many cases this is related to poor project planning, lack of a structured development methodology, poor requirements definition, lack of foresight for future business needs, or lack of close interaction between the IT area and the business units.

The IT steering committee is composed of leaders from the various business units of the organization and the director of IT. The committee has final approval for any IT expenditures and project prioritization. All proposed IT projects should be presented to the committee along with a thorough business case and forecast expenditure requirements. The committee then determines which projects are most critical to the organization according to risk, opportunities, staffing availability, costs, and alignment with business requirements, and approval for the projects is then granted. One of the challenges for many organizations is that the IT steering committee does not follow up on ongoing projects to ensure that they meet their initial requirements, budgets, time frames, and performance targets.

IT steering committee members need to be aware of business strategies, technical issues, legal and administrative requirements, and economic conditions. They need the ability to overrule the IT director and to cancel or suspend any project that does not provide the functionality required by the users or adequate security, or that is seriously over budget. In such cases the IT steering committee may require a detailed review of the status of the project and reevaluate whether the project is still feasible. Especially in times of weakening IT budgets, all projects should undergo periodic review and rejustification. Projects that were started due to hype or the proverbial bandwagon — "everyone must be E-business or they are out of business" — and that do not show a realistic return on investment should be cancelled. Projects that can save money must be accelerated — including, in many cases, a piecemeal approach that gets the most beneficial portions implemented rapidly.
Projects that will result in future savings, better technology, and more market flexibility need to be continued, including projects to simplify and streamline the IT infrastructure.

CHANGE MANAGEMENT — CERTIFICATION AND ACCREDITATION

Change management is one of the greatest concerns for many organizations today. In our fast-paced world of rapid development, short time to market, and technological change, change management is the key to ensuring that a "sober second thought" is taken before a change to a system goes into production. Many times, the pressure to make a change rapidly and without a formal review process has resulted in a critical system failure due to inadequate testing or unanticipated technical problems.

There are two sides to change management. The most common definition is that change management is concerned with the certification and accreditation process. This is a set of controls put in place to ensure that all changes proposed to an existing system are properly tested, approved, and structured (logically and systematically planned and implemented).


The other aspect of change management comes from the project management and systems development world. When an organization is preparing to purchase or deploy a new system, or to modify an existing one, it will usually follow a project management framework to control the budget, training, timing, and staffing requirements of the project. It is common (and often expected, depending on the type of development life cycle employed) that such projects will undergo significant changes or decision points throughout the project lifetime. The decision points are times when evaluations of the project are made and a choice to either continue or halt the project may be required. Other changes may be made to a project due to external factors — economic climate, marketing forces, and availability of skilled personnel — or to internal factors such as the identification of new user requirements. These changes will often affect the scope of the project (the amount of work required and the deliverables) or its timing and budget.

Changes made to a project in midstream may cause the project to become unwieldy, subject to large financial penalties — especially when dealing with an outsourced development company — or delayed to the point of impacting business operations. In this instance, change management is the team of personnel that reviews proposed changes to a project and determines the cutoff for modifications to the project plan. Almost everything we do can be improved, and as the project develops, more ideas and opportunities arise. If uncontrolled, the organization may well find itself developing a perfect system that never gets implemented. The change control committee must ensure that a time comes when the project timeline and budget are set and followed, and must refuse to allow further modifications to the project plan — often saving these ideas for a subsequent version or release.
Change management requires that all changes to hardware, software, documentation, and procedures be reviewed by a knowledgeable third party prior to implementation. Even the smallest change to a configuration table, or the attachment of a new piece of equipment, can cause catastrophic failures in a system. In some cases a change may open a security hole that goes unnoticed for an extended period of time. Changes to documentation should also be subject to change management so that all documents in use are the same version, the documentation is readable and complete, and all programs and systems have adequate documentation. Furthermore, copies of critical documentation need to be kept off site in order to be available in the event of a major disaster or loss of access to the primary location.

Certification

Certification is the review of the system from a user perspective. The users review the changes and ensure that the changes will meet the original business requirements outlined at the start of the project, or that they will be compatible with existing policy, procedures, and business objectives.


The other user group involved is the security department, which needs to review the system to ensure that it is adequately secured from threats or risks. In this review, the security department will need to consider the sensitivity of the data within the system (or that the system protects), the reliance of the business process on the system (availability), regulatory requirements such as data protection or storage (archival) time, and documentation and user training.

Accreditation

Once a system has been certified by the users, it must undergo accreditation. This is the final approval by management to permit the system, or the changes to a component, to move into production. Management must review the changes to the system in the context of its operational setting. They must evaluate the certification reports and the recommendations from security regarding whether the system is adequately secured and meets user requirements, as well as the proposed implementation timetable. This may include accepting the residual risks that could not be addressed in a cost-effective manner.

Change management is often handled by a committee of business analysts, business unit directors, and security and technical personnel who meet regularly to approve implementation plans and schedules. Ideally, no change goes into production unless it has been thoroughly inspected and approved by this committee. The main exceptions, of course, are changes required to correct system failures. To repair a major failure, a process of emergency change management must be established. The greatest concern with emergency changes is ensuring that the correct follow-up is done to confirm that the changes are complete, documented, and working correctly.
In the case of volatile information such as marketing programs, inventory, or newsflashes, the best approach is to keep the information stored in tables or other logically separated areas so that these changes (which may not be subject to change management procedures) do not affect the core system or critical functionality.

TECHNICAL STANDARDS COMMITTEE

Total cost of ownership (TCO) and keeping up with new or emerging tools and technologies are areas of major expenditure for most organizations today. New hardware and software are continuously marketed. In many cases a new operating system may be introduced before the organization has completed the rollout of the previous version, which often means supporting three versions of software simultaneously. This has frequently resulted in personnel still using the older version of the software being unable to read internal documents generated under the newer version. Configurations of desktops or other hardware can differ, making support and maintenance complex. Decisions have to be made about which new products to purchase — laptops instead of desktops, the minimum standards for a new machine, or the type of router or network component. All of these decisions are expensive and require a long-term view of what is coming over the horizon.

The technical standards committee is an advisory committee that should provide recommendations (usually to the IT steering committee or another executive-level committee) for the purchase, strategy, and deployment of new equipment, software, and training. The members of the technical standards committee must be aware of the products currently available as well as the emerging technologies that may affect the viability of current products or purchases. No organization wants to make a major purchase of a software or hardware product that will be incompatible with other products the organization already has or will require within the next few months or years. The members of the technical standards committee should consist of a combination of visionaries, technical experts, and strategic business planners. Care should be taken to ensure that the members of this committee do not become unreasonably influenced by, or restricted to, one particular vendor or supplier.

Central procurement is a good principle of security management. When an organization is spread out geographically, there is often a tendency for each department to purchase equipment independently. Organizations then lose control over standards and may end up with incompatible VPNs, difficult maintenance and support, loss of savings that would have been available through bulk purchases, cumbersome disaster recovery planning due to the need to communicate with many vendors, and loss of inventory control. Printers and other equipment become untraceable and may be subject to theft or misuse by employees.
One organization recently found that tens of thousands of dollars' worth of equipment had been stolen by an employee, without the organization ever realizing the equipment was missing. Unfortunately for the employee, a relationship breakdown caused an angry partner to report the theft to corporate security.

THE SYSTEMS ANALYST

There are several definitions of a systems analyst. Some organizations may use the term senior analyst when the person works in the IT development area; other organizations use the term to describe the person responsible for systems architecture or configuration.

In the IT development shop, the systems analyst plays a critical role in the development and leadership of IT projects and the maintenance of IT systems. The systems analyst may be responsible for chairing or sitting on project development teams, working with business analysts to determine the functional requirements for a system, writing high-level project requirements for use by programmers writing code, enforcing coding standards, coordinating the work of a team of programmers and reviewing their work, overseeing production support efforts, and working on incident handling teams. The systems analyst is usually trained in computer programming and project management, and must have the ability to review a system and determine its capabilities, weaknesses, and workflow processes.

The systems analyst should not have access to change production data or programs. This is important to ensure that he or she cannot inadvertently or maliciously change a program or organizational data. Without such controls, the analyst might introduce a Trojan horse, circumvent change control procedures, and jeopardize data integrity.

Systems analysts in a network or overall systems environment are responsible for ensuring that secure and reliable networks or systems are developed and maintained. They are responsible for ensuring that the networks or systems are constructed with no unknown gaps or backdoors, that there are few single points of failure, that configurations and access control procedures are properly set up, and that audit trails and alarms are monitored for violations or attacks. This type of systems analyst usually requires a technical college diploma and extensive in-depth training. Knowledge of system components — such as the firewalls in use by the organization, tools, and incident handling techniques — is required.

Most often, the systems analyst in this environment will have the ability to set up user profiles, change permissions, change configurations, and perform high-level utilities such as backups or database reorganizations. This creates a control weakness that is difficult to overcome. In many cases the only option an organization has is to trust the person in this position.
Periodic reviews of the analyst's work and proper management controls are among the only compensating controls available. The critical problem for many organizations is ensuring that this position is properly backed up with trained personnel and thorough documentation, and that this person does not become technically stagnant or sloppy about security issues.

THE BUSINESS ANALYST

The business analyst fills one of the most critical roles in the information management environment. A good business analyst has an excellent understanding of the business operating environment — including new trends, marketing opportunities, technological tools, and current process strengths, needs, and weaknesses — and is a good team member. The business analyst is responsible for representing the needs of the users to the IT development team, and must clearly articulate the functional requirements of a project early in the project life cycle to ensure that information technology resources, money, personnel, and time are expended wisely, and that the final result of an IT project meets user needs, provides adequate security and functionality, and embraces controls and separation of duties. Once the requirements are outlined, the business analyst must ensure that they are addressed and documented in the project plan.

The business analyst is then responsible for setting up test scenarios to validate the performance of the system and to verify that the system meets the original requirements definitions. When testing, the business analyst should ensure that test scenarios and test cases have been developed to address all recognized risks. Test data should be sanitized to prevent disclosure of private or sensitive information, and test runs of programs should be carefully monitored to prevent test data and reports from being introduced into the real-world production environment. Tests should include out-of-range tests, in which numbers larger or smaller than the data fields allow are attempted and invalid data formats are tried. The purpose of the tests is to see whether it is possible to make the system fail. Proper test data is designed to stress the limitations of the system, the edit checks, and the error handling routines so that the organization can be confident the system will not fail or handle data incorrectly once in production.

The business analyst is often responsible for providing training and documentation to the user groups. In this regard, all methods of access, use, and functionality of the system from a user perspective should be addressed.
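The out-of-range and invalid-format tests described above can be sketched briefly. The function and field limits below are invented for illustration; a real edit check would reflect the actual field definitions of the system under test:

```python
# Illustrative boundary testing of the kind a business analyst designs:
# validate_quantity stands in for a system edit check. The test cases
# deliberately push past the field's range and try invalid formats,
# attempting to make the check fail.

def validate_quantity(raw, lo=1, hi=9999):
    """Edit check: parse a quantity field and verify it is within range."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return False  # invalid data format
    return lo <= value <= hi

# Out-of-range and invalid-format cases, with the expected verdicts.
boundary_cases = {
    "0": False,      # below the field minimum
    "10000": False,  # above the field maximum
    "42": True,      # in range: should be accepted
    "abc": False,    # invalid format
    "": False,       # empty input
}
```

The point of the table of cases is the testing discipline itself: every recognized risk gets a case with a stated expected outcome, so a failure is unambiguous.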
One area that has often been overlooked is the assignment of error handling and security functionality. The business analyst must ensure that these functions are assigned to reliable and knowledgeable personnel once the system has gone into production.

The business analyst is responsible for reviewing system tests and approving the change as the certification portion of the change management process. If a change needs to be made to production data, the business analyst will usually be responsible for preparing or reviewing the change and approving its timing and acceptability prior to implementation. This is a proper segregation of duties, whereby the person actually making the change in production — whether the operator, a programmer, or another user — is not the same person who reviews and approves it. This may prevent both human error and malicious changes.

Once the system is in production, the business analyst is often the second tier of support for the user community, responsible for checking on inconsistencies, errors, or unreliable processing by the system. Business analysts will often have a method of creating trouble tickets or system failure notices for the development and production support groups to investigate or act upon.

Business analysts are commonly chosen from the user groups. They must be knowledgeable in the business operations and should have good communication and teamwork skills. Several colleges offer courses in business analysis, and education in project management can also be beneficial. Because business analysts are involved in defining the original project functional requirements, they should also be trained in security awareness and requirements. Through a partnership with security, business analysts can play a key role in ensuring that adequate security controls are included in the system requirements.

THE PROGRAMMER

This chapter is not intended to outline all of the responsibilities of a programmer; instead, it focuses on the security components and risks associated with this job function. The programmer — whether in a mainframe, client/server, or Web development area — is responsible for preparing the code that will fulfill the requirements of the users. In this regard, the programmer needs to adhere to principles that will produce reliable, secure, and maintainable programs without compromising the integrity, confidentiality, or availability of the data.

Poorly written code is the source of almost all buffer overflow attacks. Because of inadequate bounds checking, parameter checking, or error handling, a program can accept data that exceeds its acceptable range or size, thereby creating a memory or privilege overflow condition. This is a potential hole for an attacker to exploit, and it can also cause system problems through simple human error during data input. Programs need to be properly documented so that they are maintainable, and so that the users (usually business analysts) reviewing the output can have confidence that the program handles the input data in a consistent and reliable manner.
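The bounds and parameter checking discussed above can be sketched as follows. In languages such as C, omitting these checks is what permits buffer overflows; the same discipline, applied at every input point, keeps data within the sizes and ranges the program was written to handle. The function and field names here are invented for illustration:

```python
# A sketch of defensive input handling: every field is checked against its
# declared size and valid range before the program accepts it, so oversized
# or out-of-range data is rejected at the boundary rather than propagated.

MAX_NAME_LEN = 64  # declared field size, analogous to a fixed buffer length


def accept_input(name, age):
    """Reject input that exceeds the declared field bounds or valid range."""
    if not isinstance(name, str) or len(name) > MAX_NAME_LEN:
        raise ValueError("name exceeds field bounds")
    if not isinstance(age, int) or not (0 <= age <= 150):
        raise ValueError("age out of range")
    return {"name": name, "age": age}
```

Raising an explicit error at the point of entry, rather than letting bad data flow onward, is the behavior the bounds-checking principle asks for.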
Programmers should never have access to production data or libraries. Several firms have experienced problems due to a disgruntled programmer introducing logic bombs into programs or manipulating production data for personal benefit. Any changes to a program should be reviewed and approved by a business analyst and moved into production by another group or department (such as operations), not by the programmer directly. This practice was established during the mainframe era but has been slow to take hold on newer Web-based development projects, and several businesses have learned the hard way about proper segregation of duties and the protection it provides a firm. When a program requires frequent updating, such as a Web site, it is often desirable to place the changeable data into tables that can be updated by the business analysts or user groups.

One of the greatest challenges for a programmer is to include security requirements in the programs. A program is primarily written to address functional requirements from a user perspective, and security can often be perceived as a hindrance or obstacle to the fast execution and accessibility of the program. The programmer needs to consider the sensitivity of the data collected or generated by the program and provide secure program access, storage, and audit trails. Access controls are usually set up at the initiation of the program; user IDs, passwords, and privilege levels are checked when the user first logs on to the system or program. Most programs these days have multiple access paths to information — text commands, GUI icons, and drop-down menus are some of the common access methods. A programmer must ensure that all access methods are protected and that the user is unable to circumvent security by accessing the data through another channel or method. The programmer needs training in security and risk analysis, and the work of a programmer should be subject to peer review by other systems analysts or programmers to ensure that quality and standard programming practices have been followed.

THE LIBRARIAN

The librarian was a job function established in the mainframe environment. In many cases the duties of the librarian have now been incorporated into the job functions of other personnel, such as system administrators or operators. It remains important, however, to describe the functions performed by a librarian and to ensure that these tasks are still performed and included in the performance criteria and job descriptions of the individuals now responsible for them.
The librarian is responsible for the handling of removable media — tapes, disks, and microfiche; the control of backup tapes and their movement to off-site or near-line storage; the movement of programs into production; and source code control. In some instances the librarian is also responsible for system documentation and report distribution. The librarian's duties need to be described, assigned, and followed. Movement of tapes to off-site storage should be done systematically, with proper handling procedures, secure transport methods, and proper labeling. When reports are generated, especially those containing sensitive data, the librarian must ensure that the reports are distributed to the correct individuals and that no pages are attached in error to other print jobs. For this reason, it is good practice to restrict other personnel's access to the main printers.

AU1518Ch16Frame Page 256 Thursday, November 14, 2002 6:18 PM

SECURITY MANAGEMENT PRACTICES

The librarian accepts the certified and accredited program changes and moves them into production. These changes should always include a backout plan in case of program or system problems. The librarian should take a backup copy of all programs or tables subject to change prior to moving the new code into production. A librarian should always ensure that all changes are properly approved before making a change. Librarians should not be permitted to make changes to programs or tables themselves; they should only enact the changes prepared and approved by other personnel. Librarians also need to be inoculated against social engineering or pressure from personnel attempting to make changes without going through the proper approval process.

THE OPERATOR

The operator plays a key role in information systems security. No one has greater access or privileges than the operator. The operator can be a key contributor to system security or a gaping hole in a security program. The operator is responsible for day-to-day operations and job flow, and often for scheduling system maintenance and backup routines. As such, the operator is in a position to seriously impact system performance or integrity through human error, job-sequencing mistakes, processing delays, or mistimed backups. The operator also plays a key role in incident handling and error recovery. The operator should log all incidents, abnormal conditions, and job completions so that they can be tracked and acted upon, and so that they provide input for corrective action. Proper tracking of job performance, storage requirements, file size, and database activity provides valuable input for forecasting new equipment requirements and for identifying system performance issues and job inefficiencies before they become serious processing impairments.
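The logging discipline described above can be sketched in a few lines. This is a minimal illustration, not anything prescribed by the chapter; the class, field, and status names are hypothetical.

```python
# Illustrative sketch only: a minimal operator log that records job
# completions and abnormal conditions so they can be tracked and acted
# upon, as described above. Names and statuses are hypothetical.
from datetime import datetime, timezone

class OperatorLog:
    """Append-only log of job outcomes, reviewed for corrective action."""

    def __init__(self):
        self.records = []

    def record(self, job_name, status):
        # Timestamp each entry so delays and sequencing problems show up.
        self.records.append({
            "job": job_name,
            "status": status,
            "at": datetime.now(timezone.utc),
        })

    def abnormal(self):
        """Entries needing follow-up: anything but a clean completion."""
        return [r for r in self.records if r["status"] != "completed"]

log = OperatorLog()
log.record("nightly-backup", "completed")
log.record("payroll-batch", "abend")
print([r["job"] for r in log.abnormal()])  # prints ['payroll-batch']
```

Even a sketch this small captures the chapter's point: the value is not in the data structure but in the habit of recording every completion and abnormality so trends can be reviewed later.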
The operator should never make changes to production programs or tables except where the changes have been properly approved and tested by other personnel. In the event of a system failure, the operator should have a response plan in place to notify key personnel.

THE SYSTEM OWNER AND THE DATA OWNER

History has taught us that information systems are owned not by the information technology department but by the user group that depends on the system. The system owner is therefore usually the senior manager in the user department. For a financial system this may be the vice president of finance; for a customer support system, the vice president of sales. The IT department then plays the role of supporting the user group and responding to the needs of the users. Proper ownership and control of systems may prevent the development of systems that are technically sound but of little use to the users. Recent studies have shown that


the gap between user requirements and system functionality was a serious detriment to business operations. In fact, several government departments have had to discard costly systems that took years to develop because they were found inadequate to meet business needs.2

The roles of system owner and data owner may be separate or combined, depending on the size and complexity of the system. The system owner is responsible for all changes and improvements to a system, including decisions regarding the overall replacement of the system. The system owner sits on the IT steering committee, usually as chair, and provides input, prioritization, budgeting, and high-level resource allocation for system maintenance and development. This should not conflict with the role of the IT director and project leaders, who are responsible for the day-to-day operations of production support activity, development projects, and technical resource hiring and allocation. The system owner also oversees the accreditation process that determines when a system change is ready for implementation. This means the system owner must be knowledgeable about new technologies, risks, threats, regulations, and market trends that may impact the security and integrity of a system.

The responsibility of the data owner is to monitor the sensitivity of the data stored or processed by a system. This includes determining the appropriate levels of information classification, access restrictions, and user privileges. The data owner should establish or approve the process for granting access to new users, increasing access levels for existing users, and removing access in a timely manner for users who no longer require it as part of their job duties. The data owner should require an annual report of all system users and determine whether the level of access each user has is appropriate.
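The annual access review described above is, at its core, a reconciliation of granted access against what each job role actually needs. A minimal sketch follows; the role names, access levels, and user records are invented purely for illustration.

```python
# Hypothetical sketch of a data owner's annual access review: flag any
# user whose granted access level exceeds what their job role requires.
# Roles, levels, and names are illustrative, not from the chapter.
REQUIRED_BY_ROLE = {"clerk": "read", "analyst": "read-write", "dba": "admin"}
LEVEL_RANK = {"none": 0, "read": 1, "read-write": 2, "admin": 3}

def review_access(users):
    """Return the names of users holding more access than their role needs."""
    flagged = []
    for user in users:
        allowed = REQUIRED_BY_ROLE.get(user["role"], "none")
        if LEVEL_RANK[user["granted"]] > LEVEL_RANK[allowed]:
            flagged.append(user["name"])
    return flagged

annual_report = [
    {"name": "alice", "role": "clerk", "granted": "read"},
    {"name": "bob", "role": "clerk", "granted": "admin"},       # excessive
    {"name": "carol", "role": "analyst", "granted": "read-write"},
]
print(review_access(annual_report))  # prints ['bob']
```

The design point is that the data owner supplies the "required" side of the comparison; the mechanics of producing the flagged list can then be delegated without delegating the accountability.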
The annual review should also cover special access methods such as remote access and wireless access, reports received, and ad hoc requests for information. Because these duties are incidental to the main functions of the persons acting as data or system owners, it is incumbent upon these individuals to closely monitor these responsibilities even while delegating certain functions to other persons. The ultimate responsibility for accepting the risks associated with a system rests with the system and data owners.

THE USER

All of the systems development, changes, modifications, and daily operations are completed with the objective of addressing user requirements. The user is the person who must interact with the system daily and who relies on it to continue business operations. A system that is not designed correctly may lead to a high incidence of user errors, high training costs or extended learning curves, poor performance and frustration, and overly restrictive controls or security measures. Once


users notice these types of problems, they will often either attempt to circumvent security controls or other functionality that they find unnecessarily restrictive, or abandon the use of the system altogether. Training for a user must include the proper use of the system and the reasons for the various controls and security parameters built into it. Without divulging the details of the controls, explaining the reasons for them may help users accept and adhere to the security restrictions built into the system.

GOOD PRINCIPLES: EXPLOITING THE STRENGTHS OF PERSONNEL IN REGARD TO A SECURITY PROGRAM

A person should never be disciplined for following correct procedures. This may sound obvious, but it is a common weakness exploited by people as part of social engineering. Millions of dollars' worth of security will be worthless if our staff is not trained to resist and report all social engineering attempts. Investigators have found that the easiest way to gather corporate information is through bribery of, or relationships with, employees.

There are four main types of social engineering: intimidation, helpfulness, technical attacks, and name-dropping. The principle of intimidation is the threat of punishment or ridicule for following correct procedures. The person being "engineered" is bullied by the attacker into granting an exception to the rules, perhaps due to the attacker's position within the company or force of character. In many instances the security-minded person is berated by the attacker, threatened with discipline or loss of employment, or otherwise intimidated for just trying to do his or her job. Some of the most serious breaches of secure facilities have been accomplished through these techniques. In one instance the chief financial officer of a corporation refused to comply with the procedure of wearing an ID card.
When challenged by a new security person, the executive explained in a loud voice that he should never again be challenged to display an ID card. Such intimidation unnerved the security person to the point of making the entire security procedure ineffective and arbitrary. Such a "tone at the top" indicates a lack of concern for security that will soon permeate the entire organization.

Helpfulness is another form of social engineering, appealing to the natural instinct of most people to provide help or assistance to another person. One of the most vulnerable areas for this type of manipulation is the help desk. Help desk personnel are responsible for password resets, remote access problem resolution, and system error handling. Improper handling of these tasks may result in an attacker obtaining a password reset for another legitimate user's account, creating either a security gap or a denial of service for the legitimate user.


Despite the desire of users, the help desk, and administrators to facilitate the access of legitimate users to the system, they must be trained to recognize social engineering and to follow established secure procedures.

Name-dropping is another form of social engineering, often facilitated by press releases, Web page ownership or administrator information, discarded corporate documentation, or other means by which an attacker can learn the names of individuals responsible for research, business operations, administrative functions, or other key roles. By using the names of these individuals in conversation, an attacker can appear to be a legitimate user or to have a legitimate affiliation with the corporation. It has been said that "the greater the lie, the easier it is to convince someone that it is true." This especially applies to a name-dropping attack. Despite prior knowledge of a manager's behavior, a subordinate may be influenced into performing some task at the request of an attacker even though the manager would never have contemplated or approved such a request.

Technology has provided new forms of social engineering. An attacker may now e-mail or fax a request for information to a corporation and receive a response that compromises security. The request may come from a person claiming to represent law enforcement or some other government department demanding cooperation or assistance. The correct response is to have an established manner of contact for outside agencies and to train all personnel to route requests for information from an outside source through proper channels.
All in all, the key to immunizing personnel against social engineering attacks is to emphasize the importance of procedure, the correctness of following and enforcing security protocols, and the support of management for personnel who resist actions that attempt to circumvent proper controls and may be instances of social engineering. All employees must know that they will never lose their jobs for enforcing corporate security procedures.

JOB ROTATION

Job rotation is an important principle from a security perspective, although it is often seen as a detriment by project managers. Job rotation moves key personnel through the various functional roles in a department, or even between departments. This provides several benefits, such as cross-training of key personnel and reducing the risk to a system of having no trained personnel available during vacations or illnesses. Job rotation also serves to identify possible fraudulent activity or shortcuts taken by personnel who have been in the job for an extended period. In one instance, a corporation needed to take disciplinary action against an employee who was the administrator of a system critically important not only to the business but also to the community. Because this administrator


had sole knowledge of the system and the system administrator password, the corporation was unable to act in a timely manner and was forced to delay any action until the administrator left for vacation and gave the password to a backup person. When people stay in a position too long, they may become more attached to the system than to the corporation, and their activity and judgment may become impaired.

ANTI-VIRUS AND WEB-BASED ATTACKS

The connectivity of systems and the proliferation of Web-based attacks have resulted in significant damage to corporate systems, expenses, and productivity losses. Many people recognize the impact of Code Red and Nimda; however, even when these attacks are taken out of the calculations, the incidence of Web-based attacks rose more than 79 percent in 2001.3 Some studies have documented more attacks in the first two months of 2002 than were detected in the previous year and a half.4 Users have been told many times not to open e-mail attachments; however, this has not prevented many infections and security breaches. More sophisticated attacks, all of which can appear to come from trusted sources, are appearing, and today's firewalls and anti-virus products are not able to protect an organization adequately. Instead, users need to be more diligent and confirm with a sender whether an attachment was intended before opening it. The use of instant messaging, file sharing, and other products, many of which exploit open ports or VPN tunnels through firewalls, is creating even more vulnerabilities. The use of any new technology or product should be subject to analysis and review by security before users adopt it. This requires the security department to react swiftly to requests from users and to stay aware of the new trends, technologies, and threats that are emerging.
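The advice above to confirm an attachment with its sender before opening can be paired with a crude first filter of the kind a mail gateway might apply. The sketch below is illustrative only: the extension list and function name are invented, and real anti-virus products inspect content, not just filenames.

```python
# Illustrative only: a naive, extension-based check a mail gateway might
# use to hold an attachment until the sender confirms it was intentional.
# Real products inspect the content itself, not just the filename.
RISKY_EXTENSIONS = {".exe", ".vbs", ".scr", ".bat", ".js"}

def needs_sender_confirmation(filename):
    """True if the attachment should be quarantined pending confirmation."""
    dot = filename.rfind(".")
    extension = filename[dot:].lower() if dot != -1 else ""
    return extension in RISKY_EXTENSIONS

# Double extensions were a common trick of the era's e-mail worms.
print(needs_sender_confirmation("invoice.jpg.vbs"))   # prints True
print(needs_sender_confirmation("quarterly.pdf"))     # prints False
```

Note that such a filter only buys time for the human step the chapter recommends; the decision about whether the attachment was intended still rests with the sender and recipient.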
SEGREGATION OF DUTIES

The principle of segregation of duties breaks an operation into separate functions so that no one person can control a process from initiation through completion. Instead, a transaction would require one person to input the data, a second person to review and reconcile the batch totals, and another person (or perhaps the first individual) to confirm the final portion of the transaction. This is especially critical in financial transactions and error-handling procedures.

SUMMARY

This is neither a comprehensive list of all the security concerns and ways to train and monitor the people in our organizations, nor is it a full list


of all job roles and functions. Hopefully it is a tool that managers, security personnel, and auditors can use to review some of the procedures they have in place and to create a better security infrastructure. The key objective of this chapter has been to identify the primary roles that people play in the information security environment. A security program is only as good as the people implementing it, and a key realization is that tools and technology are not enough when it comes to protecting our organizations. We need to enlist the support of every member of our companies. We need to see the users, administrators, managers, and auditors as partners in security. Much of this is accomplished through understanding. When the users understand why we need security, the security people understand the business, and everyone respects the role of the other departments, the atmosphere and environment will lead to greater security, confidence, and trust.

References

1. www.viruslist.com, as reported in SC Infosecurity magazine, December 2001, p. 12.
2. www.oregon.gov, Secretary of State audit of the Public Employees Benefit Board; see also the California Department of Motor Vehicles report on abandoning its new system.
3. Claudia Flisi, "Cyber Security," Newsweek, March 18, 2002.
4. Etisalat Academy, March 2002.

ABOUT THE AUTHOR

Kevin Henry, CISA, CISSP, has over 20 years of experience in telecommunications, computer programming and analysis, and information systems auditing. Kevin is an accomplished and highly respected presenter at many conferences and training sessions, and he serves as a lead instructor for the (ISC)2 Common Body of Knowledge Review for candidates preparing for the CISSP examination.




Chapter 17

Security Management

Ken Buszta, CISSP

It was once said, "Information is king." In today's world, this statement has never rung more true. Information is now viewed as an asset, and organizations are willing to invest large sums of money toward its protection. Unfortunately, organizations appear to be overlooking one of the weakest links in protecting their information: the information security management team. The security management team is the one component in our strategy that can ensure our security plan is working properly and take corrective action when necessary. In this chapter, we will address the benefits of an information security team, the various roles within the team, job separation, job rotation, and performance metrics for the team, including certifications.

SECURITY MANAGEMENT TEAM JUSTIFICATION

Information technology departments have always had to justify their budgets. With the recent global economic changes, the pressures of maintaining stockholder value have brought IT budgets under even more intense scrutiny. Migrations, new technology implementations, and even staff spending have been delayed, reduced, or removed from budgets. So how can an organization justify the expense, much less the existence, of an information security management team? While most internal departments lack the necessary skill sets to address security, there are three compelling reasons to establish this team:

1. Maintain competitive advantage. An organization exists to provide a specialized product or service for its clients. The methodologies and trade secrets used to provide these services and products are the assets that establish its competitive advantage. An organization's failure to properly protect and monitor these assets can result not only in the loss of competitive advantage but also in lost revenues and the possible failure of the organization.

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



2. Protection of the organization's reputation. In early 2000, several high-profile organizations' Web sites were attacked. As a result, the public's confidence in their ability to adequately protect their clients was shaken. A security management team cannot guarantee or fully prevent this from happening, but a well-constructed team can minimize the opportunities your organization makes available to an attacker.

3. Mandates by governmental regulations. Regulations within the United States, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA), and those abroad, such as the European Convention on Cybercrime, have mandated that organizations protect their data. An information security management team, working with the organization's legal and auditing teams, can focus on ensuring that proper safeguards are in place for regulatory compliance.

EXECUTIVE MANAGEMENT AND THE IT SECURITY MANAGEMENT RELATIONSHIP

The first and foremost requirement for the success of an information security management team is its relationship with the organization's executive board. Commencing with the CEO and working downward, it is essential for the executive board to support the efforts of the information security team. Failure of the executive board to actively demonstrate its support for this group will gradually be reflected in the rest of the organization. Apathy toward the information security team will become apparent, and the team will be rendered ineffective. The executive board can easily avoid this pitfall by publicly signing and adhering to all major information security initiatives, such as security policies.
INFORMATION SECURITY MANAGEMENT TEAM ORGANIZATION

Once executive management has committed its support to an information security team, a decision must be made as to whether the team should operate within a centralized or decentralized administration environment. In a centralized environment, a dedicated team is assigned sole responsibility for the information security program. These team members report directly to the information security manager. Their responsibilities include promoting security throughout the organization, implementing new security initiatives, and providing daily security administration functions such as access control. In a decentralized environment, the members of the team have information security responsibilities in addition to those assigned by their departments. These individuals may be network administrators or reside in departments such as finance, legal, human resources, or production.


This decision will be unique to each organization. Organizations that have identified higher risks tend to deploy a centralized administration function. A growing trend is to implement a hybrid solution utilizing the best of both worlds: a smaller dedicated team ensures that new security initiatives are implemented and oversees the overall security plan of the organization, while decentralized team members are charged with promoting security throughout their departments and possibly handling the daily department-related administrative tasks.

The next issue to be addressed is how the information security team fits into the organization's reporting structure. This decision should not be taken lightly, because it will have a long-lasting effect on the organization, and it is important that the organization's decision makers fully understand its ramifications. The information security team should be placed where its function has significant power and authority. For example, if the information security manager reports to management that does not support the information security charter, the manager's group will be rendered ineffective. Likewise, if personal agendas are placed ahead of the information security agenda, the team will be rendered ineffective. An organization may place the team directly under the CIO, or it may create an additional executive position separate from any particular department. Either way, it is critical that the team be placed in a position that allows it to perform its duties.

ROLES AND RESPONSIBILITIES

When planning a successful information security team, it is essential to identify the roles, rather than the titles, that each member shall perform. Within each role, the responsibilities and authority must be clearly communicated and understood by everyone in the organization. Most organizations can define a single process, such as finance, under one umbrella.
There is a manager, and there are direct reports for every phase of the financial life cycle within that department. The information security process requires a different approach. Regardless of how centralized we try to make it, we cannot place it under a single umbrella. The success of the information security team is therefore based on a layered approach. As demonstrated in Exhibit 17-1, the core of any information security team lies with executive management, because they are ultimately responsible to the investors for the organization's success or failure. As we move outward through the other layers, we see roles for which the information security manager has no direct reports, such as auditors, technology providers, and the end-user community, but with whom the manager still has an accountability relationship, reporting to or receiving reports from each of these members.


[Exhibit 17-1 is a diagram of concentric layers: Executive Management at the core, surrounded by Information Security Management, then IS Professionals, Data Owners, Custodians, Process Owners, Technology Providers, and IS Auditors, with Users in the outermost layer.]

Exhibit 17-1. Layers of information security management team.

It is difficult to provide a generic approach to fit everyone's needs. However, regardless of the structure, organizations need to assign security-related functions corresponding to the selected employees' skill sets. Over time, eight different roles have been identified to effectively serve an organization:

1. Executive management. The executive management team is ultimately responsible for the success (or failure) of any information security program. As stated earlier, without their active support, the information security team will struggle and, in most cases, fail to achieve its charter.

2. Information security professionals. These are the members actually trained and experienced in the information security arena. They are responsible for the design, implementation, management, and review of the organization's security policy, standards, measures, practices, and procedures.

3. Data owners. Everyone within the organization can serve in this role. For example, the creator of a new or unique data spreadsheet or document can be considered the data owner of that file. As such, they are responsible for determining the sensitivity or classification level of the data as well as maintaining the accuracy and integrity of the data while it resides in the system.

4. Custodians. This role may very well be the most under-appreciated of all. Custodians act as the owner's delegate, with their primary

focus on backing up and restoring the data. The data owners dictate the schedule at which the backups are performed. Additionally, custodians run the system for the owners and must ensure that the required security controls are applied in accordance with the organization's security policies and procedures.

5. Process owners. These individuals ensure that appropriate security, consistent with the organization's security policy, is embedded in the information systems.

6. Technology providers. These are the organization's subject matter experts for a given set of information security technologies, and they assist the organization with its implementation and management.

7. Users. As almost every member of the organization is a user of the information systems, users are responsible for adhering to the organization's security policies and procedures. Their most vital responsibility is maintaining the confidentiality of all usernames and passwords, including the program upon which these are established.

8. Information systems auditor. The auditor is responsible for providing independent assurance to management on the appropriateness of the security objectives and on whether the security policies, standards, measures, practices, and procedures are appropriate and comply with those objectives. Because of the responsibility this role has in the information security program, organizations may shift this role's reporting structure directly to the auditing department rather than placing it within the information security department.

SEPARATION OF DUTIES AND THE PRINCIPLE OF LEAST PRIVILEGE

While it may be necessary for some organizations to have a single individual serve in multiple security roles, each organization should consider the possible effects of this decision. By empowering one individual, it becomes possible for that person to manipulate the system for personal reasons without the organization's knowledge. A standard information security practice, therefore, is to maintain a separation of duties, under which the pieces of a task are assigned to several people. By clearly identifying roles and responsibilities, an organization will also be able to implement the principle of least privilege: the idea that users and processes in a system should hold the fewest privileges, for the shortest time, needed to perform their tasks. For example, the system administrator's role may be broken into several different functions to limit the number of people with complete control: one person may become responsible for system administration, a second person for security administration, and a third person for operator functions.
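The three-way split just described can be sketched as a role-to-privilege mapping. The role and privilege names below are illustrative only, loosely echoing the chapter's function lists.

```python
# A sketch of the three-way administrative split described above.
# Role and privilege names are illustrative, not any product's actual API.
ROLE_PRIVILEGES = {
    "system_admin":   {"install_software", "start_stop_system", "add_remove_users"},
    "security_admin": {"set_clearances", "set_labels", "review_audit_data"},
    "operator":       {"run_jobs", "perform_backups", "handle_printers"},
}

def authorized(role, action):
    """Least privilege: a role may do only what is explicitly granted."""
    return action in ROLE_PRIVILEGES.get(role, set())

# The operator can run backups but cannot alter security clearances.
print(authorized("operator", "perform_backups"))  # prints True
print(authorized("operator", "set_clearances"))   # prints False

# Separation of duties: no single role holds every privilege, so
# controlling the whole process would require collusion between roles.
all_privileges = set().union(*ROLE_PRIVILEGES.values())
assert all(privs != all_privileges for privs in ROLE_PRIVILEGES.values())
```

The final assertion is the structural property the chapter is after: however the privilege sets are named, no one row of the table may equal the union of all rows.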


Typical system administrator/operator functions include:

• Installing system software
• Starting up and shutting down the system
• Adding and removing system users
• Performing backups and recovery
• Mounting disks and tapes
• Handling printers

Typical security administrator functions include:

• Setting user clearances, initial passwords, and other security characteristics for new users, and changing security profiles for existing users
• Setting or changing file sensitivity labels
• Setting the security characteristics of devices and communication channels
• Reviewing audit data

The major benefit of both of these principles is to provide a two-person control process that limits the potential damage to an organization: personnel would be forced into collusion in order to manipulate the system.

JOB ROTATION

Arguably, training may provide the biggest challenge to management, and many view it as a double-edged sword. On one edge, training is viewed as an expense and is one of the first areas cut when budget reductions are required. This may leave the organization with stale skill sets and disgruntled employees. On the other edge, it is not unusual for an employee to absorb as much training from an organization as possible and then leave for a better opportunity. Where does management draw the line?

One method to address this issue is job rotation. By routinely rotating the job a person is assigned to perform, we provide cross-training to employees. This process gives team members higher skill sets and increased self-esteem, and it provides the organization with backup personnel in the event of an emergency. From the information security point of view, job rotation has further benefits. Because no individual performs the same job functions for an extended period, job rotation disrupts the sustained collusion that would be needed to circumvent the separation of duties. Further, the pool of additionally trained workers adds to the personnel readiness of the organization's disaster recovery plan.

PERFORMANCE METRICS

Each department within an organization is created with a charter or mission statement.
While the goals for each department should be clearly defined and communicated, the tools that we use to measure a department's


performance against these goals are not always as clearly defined, particularly in the case of information security. It is vital to determine a set of metrics by which to measure the team's effectiveness. Depending upon the metrics collected, the results may be used for several different purposes, such as:

• Financial. Results may be used to justify existing budget levels or increases to future ones.
• Team competency. A metric such as certification may be employed to demonstrate to management and the end users the knowledge of the information security team members. Additional metrics may include authorship and public speaking engagements.
• Program efficiency. As the department's responsibilities increase, its ability to handle these demands while limiting new hiring can be beneficial in times of economic uncertainty.

While in the metric planning stages, the information security manager may consider asking for assistance from the organization's auditing team. The auditing team can provide independent verification of the metric results to both the executive management team and the information security department. Additionally, by getting the auditing department involved early in the process, it can assist the information security department in defining its metrics and the tools used to obtain them.

Determining performance metrics is a multi-step process. In the first step, the department must identify its process for metric collection. Among the questions an organization may consider in this identification process are:

• Why do we need to collect the statistics?
• What statistics will we collect?
• How will the statistics be collected?
• Who will collect the statistics?
• When will these statistics be collected?

The second step is for the organization to identify the functions that will be affected. These functions are measured in terms of time, money, and resources; resources can be quantified as personnel, equipment, or other assets of the organization. The third step requires the department to determine the drivers behind the collection process. In the information security arena, the two drivers that most affect the department's ability to respond in a timely manner are the number of system users and the number of systems within its jurisdiction. The more systems and users an organization has, the larger the information security department must be.
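As an illustration of tracking these two drivers, the sketch below normalizes them by staff size; all figures are invented for the example and are not the chapter's data.

```python
# Hypothetical yearly driver data: users administered, systems managed,
# and information security staff headcount.
yearly = {
    1999: {"users": 400, "systems": 35, "staff": 4},
    2000: {"users": 650, "systems": 50, "staff": 4},
    2001: {"users": 900, "systems": 70, "staff": 5},
}


def efficiency(stats: dict) -> dict:
    """Users and systems handled per security staff member."""
    return {
        "users_per_staff": stats["users"] / stats["staff"],
        "systems_per_staff": stats["systems"] / stats["staff"],
    }


for year in sorted(yearly):
    e = efficiency(yearly[year])
    print(year, round(e["users_per_staff"], 1), round(e["systems_per_staff"], 1))
```

A rising users-per-staff ratio is one way to express the "program efficiency" purpose listed earlier.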


SECURITY MANAGEMENT PRACTICES

[Chart: number of users administered, by year, 1995–2001]

Exhibit 17-2. Users administered by information security department.

With these drivers in mind, executive management could rely on the following metrics with a better understanding of the department's accomplishments and budget justifications:

• Total systems managed
• Total remote systems managed
• User administration, including additions, deletions, and modifications
• User awareness training
• Average response times

For example, Exhibit 17-2 shows an increase in the number of system users over time. This chart alone could demonstrate the efficiency of the department as it handles more users with the same number of resources. Exhibit 17-3 shows an example of the average information security response times. Upon review, we can clearly see an upward trend in the response times. Taken by itself, this chart may raise concerns with senior management about the information security team's abilities. However, when this metric is used in conjunction with those in Exhibit 17-2, a justification could be made to increase the information security personnel budget. While it is important for these metrics to be gathered on a regular basis, it is even more important for this information to be shared with the appropriate parties. For example, by sharing performance metrics within the department, the department will be able to identify its strong and weak areas. The information security manager will also want to share these results with the executive management team in a formal annual review and evaluation of the metrics.
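The argument for reading the two exhibits together can be sketched numerically; the figures below are invented for illustration, not taken from the charts. Average response time rises in absolute terms, yet response time normalized by user count falls, which supports the budget justification described above.

```python
# Invented figures: average response time (hours) and users administered,
# per year. Absolute response time trends up, but the per-user view does not.
users = {1999: 400, 2000: 650, 2001: 900}
avg_response_hours = {1999: 8.0, 2000: 11.0, 2001: 14.0}


def hours_per_100_users(year: int) -> float:
    """Average response time normalized per 100 administered users."""
    return avg_response_hours[year] / (users[year] / 100)


for year in sorted(users):
    print(year, round(hours_per_100_users(year), 2))
# The normalized series declines even though raw response times rise.
```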


[Chart: average response time in hours, ART versus SLA, by year, 1995–2001]

Exhibit 17-3. Average information security response times.

CERTIFICATIONS

The various certification programs available are an effective tool for management to enhance confidence in its security program while providing the team with recognition for its experience and knowledge. While there are both vendor-centric and vendor-neutral certifications available in today's market, we will focus only on the latter. (Note: The author does not endorse any particular certification program.)

Presently there is quite a debate about which certification is best. This is a hard question to answer directly. Perhaps the more important question is, "What does one want to accomplish in one's career?" On that premise, certification should be tailored to a set of objectives and is therefore a personal decision.

Certified Information Systems Security Professional (CISSP)

The CISSP certification is an independent and objective measure of professional expertise and knowledge within the information security profession. Many regard it as an information security management certification. The credential, established over a decade ago, requires the candidate to have three years' verifiable experience in one or more of the ten domains of the Common Body of Knowledge (CBK) and to pass a rigorous exam. The CBK, developed by the International Information Systems Security Certification Consortium (ISC)2, established an international standard for IS security professionals. The CISSP multiple-choice certification examination covers the following ten domains of the CBK:

Domain 1: Access Control Systems and Methodology
Domain 2: Telecommunications and Network Security
Domain 3: Security Management Practices
Domain 4: Applications and Systems Development Security


Domain 5: Cryptography
Domain 6: Security Architecture and Models
Domain 7: Operations Security
Domain 8: Business Continuity Planning (BCP) and Disaster Recovery Planning (DRP)
Domain 9: Law, Investigations and Ethics
Domain 10: Physical Security

More information on this certification can be obtained by contacting (ISC)2 through its e-mail address, [email protected]

Systems Security Certified Practitioner (SSCP)

The SSCP certification focuses on information systems security practices, roles, and responsibilities defined by experts from major industries. Established in 1998, it provides network and systems security administrators with an independent and objective measure of competence and recognition as a knowledgeable information systems security practitioner. Certification is available only to those individuals who have at least one year's experience in the CBK, subscribe to the (ISC)2 Code of Ethics, and pass the 125-question SSCP certification examination, which is based on seven CBK knowledge areas:

1. Access Controls
2. Administration
3. Audit and Monitoring
4. Risk, Response and Recovery
5. Cryptography
6. Data Communications
7. Malicious Code/Malware

GIAC

In 1999, the SANS (System Administration, Networking, and Security) Institute founded the Global Information Assurance Certification (GIAC) Program to address the need to validate the skills of security professionals. The GIAC certification provides assurance that a certified individual holds an appropriate level of knowledge and skill necessary for a practitioner in key areas of information security. This is accomplished through a twofold process: practitioners must pass a multiple-choice exam and then complete a practical exam to demonstrate their ability to apply their knowledge. GIAC certification programs include:

• GIAC Security Essentials Certification (GSEC). GSEC graduates have the knowledge, skills, and abilities to incorporate good information security practice in any organization. The GSEC tests the essential knowledge and skills required of any individual with security responsibilities within an organization.


• GIAC Certified Firewall Analyst (GCFW). GCFWs have the knowledge, skills, and abilities to design, configure, and monitor routers, firewalls, and perimeter defense systems.
• GIAC Certified Intrusion Analyst (GCIA). GCIAs have the knowledge, skills, and abilities to configure and monitor intrusion detection systems and to read, interpret, and analyze network traffic and related log files.
• GIAC Certified Incident Handler (GCIH). GCIHs have the knowledge, skills, and abilities to manage incidents; to understand common attack techniques and tools; and to defend against or respond to such attacks when they occur.
• GIAC Certified Windows Security Administrator (GCWN). GCWNs have the knowledge, skills, and abilities to secure and audit Windows systems, including add-on services such as Internet Information Server and Certificate Services.
• GIAC Certified UNIX Security Administrator (GCUX). GCUXs have the knowledge, skills, and abilities to secure and audit UNIX and Linux systems.
• GIAC Information Security Officer (GISO). GISOs have demonstrated the knowledge required to handle the security officer responsibilities, including overseeing the security of information and information resources. This combines basic technical knowledge with an understanding of threats, risks, and best practices. Alternately, this certification suits those new to security who want to demonstrate a basic understanding of security principles and technical concepts.
• GIAC Systems and Network Auditor (GSNA). GSNAs have the knowledge, skills, and abilities to apply basic risk analysis techniques and to conduct a technical audit of essential information systems.

Certified Information Systems Auditor (CISA)

CISA is sponsored by the Information Systems Audit and Control Association (ISACA) and tests a candidate's knowledge of IS audit principles and practices, as well as technical content areas. It is based on the results of a practice analysis.
The exam tests one process area and six content areas (domains) covering tasks that are routinely performed by a CISA. The process area, which existed in the prior CISA practice analysis, has been expanded to provide the CISA candidate with a more comprehensive description of the full IS audit process. These areas are as follows:

Process-based area (domain)
• The IS audit process

Content areas (domains)
• Management, planning, and organization of IS
• Technical infrastructure and operational practices
• Protection of information assets


• Disaster recovery and business continuity
• Business application system development, acquisition, implementation, and maintenance
• Business process evaluation and risk management

For more information, contact ISACA via e-mail: [email protected]

CONCLUSION

The protection of assets may be driven by financial concerns, reputation protection, or government mandate. Regardless of the reason, well-constructed information security teams play a vital role in ensuring that organizations adequately protect their information assets. Depending upon the organization, an information security team may operate in a centralized or decentralized environment; either way, the roles must be clearly defined and implemented. Furthermore, it is crucial to develop a set of performance metrics for the information security team. The metrics should identify issues such as budgets, efficiencies, and proficiencies within the team.

References

Hutt, Arthur E. et al., Computer Security Handbook, 3rd ed., John Wiley & Sons, New York, 1995.
International Information Systems Security Certification Consortium (ISC)2, www.isc2.org.
Information Systems Audit and Control Association (ISACA), www.isaca.org.
Kabay, Michel E., The NCSA Guide to Enterprise Security: Protecting Information Assets, McGraw-Hill, New York, 1996.
Killmeyer Tudor, Jan, Information Security Architecture: An Integrated Approach to Security in the Organization, Auerbach Publications, Boca Raton, FL, 2001.
Kovacich, Gerald L., Information Systems Security Officer's Guide: Establishing and Managing an Information Protection Program, Butterworth-Heinemann, Massachusetts, 1998.
Management Planning Guide for Information Systems Security Auditing, National State Auditors Association and the United States General Accounting Office, 2001.
Russell, Deborah and Gangemi, G.T., Sr., Computer Security Basics, O'Reilly & Associates, California, 1991.
System Administration, Networking, and Security (SANS) Institute, www.sans.org.
Stoll, Clifford, The Cuckoo's Egg, Doubleday, New York, 1989.
Wadlow, Thomas A., The Process of Network Security: Designing and Managing a Safe Network, Addison-Wesley, Massachusetts, 2000.

ABOUT THE AUTHOR

Ken Buszta, CISSP, has more than ten years of IT experience and six years of InfoSec experience. He served in the U.S. Navy's intelligence community before entering the consulting field in 1994. Should you have any questions or comments, he can be reached at [email protected]

AU1518Ch18Frame Page 275 Thursday, November 14, 2002 6:17 PM

Chapter 18

The Common Criteria for IT Security Evaluation

Debra S. Herrmann

This chapter introduces the Common Criteria (CC) by:

• Describing the historical events that led to their development
• Delineating the purpose and intended use of the CC and, conversely, situations not covered by the CC
• Explaining the major concepts and components of the CC methodology and how they work
• Discussing the CC user community and stakeholders
• Looking at the future of the CC

HISTORY

The Common Criteria, referred to as "the standard for information security,"1 represent the culmination of a 30-year saga involving multiple organizations from around the world. The major events are discussed below and summarized in Exhibit 18-1.

A common misperception is that computer and network security began with the Internet. In fact, the need for and interest in computer security, or COMPUSEC, have been around as long as computers. Likewise, the Orange Book is often cited as the progenitor of the CC; actually, the foundation for the CC was laid a decade earlier. One of the first COMPUSEC standards, DoD 5200.28-M,2 Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems, was issued in January 1973. An amended version was issued in June 1979.3 DoD 5200.28-M defined the purpose of security testing and evaluation as:2

• To develop and acquire methodologies, techniques, and standards for the analysis, testing, and evaluation of the security features of ADP systems

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC


Exhibit 18-1. Timeline of events leading to the development of the CC.

1/73 | U.S. DoD | DoD 5200.28-M, ADP Computer Security Manual — Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems
6/79 | U.S. DoD | DoD 5200.28-M, with 1st Amendment
8/83 | U.S. DoD | CSC-STD-001-83, Trusted Computer System Evaluation Criteria, National Computer Security Center (TCSEC or Orange Book)
12/85 | U.S. DoD | DoD 5200.28-STD, Trusted Computer System Evaluation Criteria, National Computer Security Center (TCSEC or Orange Book)
7/87 | U.S. DoD | NCSC-TG-005, Version 1, Trusted Network Interpretation of the TCSEC, National Computer Security Center (TNI, part of Rainbow Series)
8/90 | U.S. DoD | NCSC-TG-011, Version 1, Trusted Network Interpretation of the TCSEC, National Computer Security Center (part of Rainbow Series)
1990 | ISO/IEC | JTC1 SC27 WG3 formed
3/91 | U.K. CESG | UKSP01, UK IT Security Evaluation Scheme: Description of the Scheme, Communications–Electronics Security Group
4/91 | U.S. DoD | NCSC-TG-021, Version 1, Trusted DBMS Interpretation of the TCSEC, National Computer Security Center (part of Rainbow Series)
6/91 | European Communities | Information Technology Security Evaluation Criteria (ITSEC), Version 1.2, Office for Official Publications of the European Communities
11/92 | OECD | Guidelines for the Security of Information Systems, Organization for Economic Cooperation and Development
12/92 | U.S. NIST and NSA | Federal Criteria for Information Technology Security, Version 1.0, Volumes I and II (Federal Criteria)
1/93 | Canadian CSE | The Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), Canadian System Security Centre, Communications Security Establishment, Version 3.0e
6/93 | CC Sponsoring Organizations | CC Editing Board (CCEB) established
12/93 | ECMA | Secure Information Processing versus the Concept of Product Evaluation, Technical Report ECMA TR/64, European Computer Manufacturers' Association
1/96 | CCEB | CC committee draft 1.0 released
1/96 to 10/97 | — | Public review, trial evaluations
10/97 | CCIMB | CC committee draft 2.0 beta released
11/97 | CEMEB | CEM-97/017, Common Methodology for Information Technology Security, Part 1: Introduction and General Model, Version 0.6
10/97 to 12/99 | CCIMB with ISO/IEC JTC1 SC27 WG3 | Formal comment resolution and balloting
8/99 | CEMEB | CEM-99/045, Common Methodology for Information Technology Security Evaluation, Part 2: Evaluation Methodology, v1.0
12/99 | ISO/IEC | ISO/IEC 15408, Information technology — Security techniques — Evaluation criteria for IT security, Parts 1–3 released
12/99 forward | CCIMB | Respond to requests for interpretations (RIs), issue final interpretations, incorporate final interpretations
5/00 | Multiple | Common Criteria Recognition Agreement (CCRA) signed
8/01 | CEMEB | CEM-2001/0015, Common Methodology for Information Technology Security Evaluation, Part 2: Evaluation Methodology, Supplement: ALC_FLR — Flaw Remediation, v1.0


Exhibit 18-2. Summary of Orange Book trusted computer system evaluation criteria (TCSEC) divisions.

Evaluation Division | Evaluation Classes | Degree of Trust
A — Verified protection | A1 — Verified design | Highest
B — Mandatory protection | B3 — Security domains; B2 — Structured protection; B1 — Labeled security protection |
C — Discretionary protection | C2 — Controlled access protection; C1 — Discretionary security protection |
D — Minimal protection | D1 — Minimal protection | Lowest

• To assist in the analysis, testing, and evaluation of the security features of ADP systems by developing factors for the Designated Approval Authority concerning the effectiveness of measures used to secure the ADP system in accordance with Section VI of DoD Directive 5200.28 and the provisions of this Manual
• To minimize duplication and overlapping effort, improve the effectiveness and economy of security operations, and provide for the approval and joint use of security testing and evaluation tools and equipment

As shown in the next section, these goals are quite similar to those of the Common Criteria. The standard stated that the security testing and evaluation procedures "will be published following additional testing and coordination."2 The result was the publication of CSC-STD-001-83, the Trusted Computer System Evaluation Criteria (TCSEC),4 commonly known as the Orange Book, in 1983. A second version of this standard was issued in 1985.5

The Orange Book proposed a layered approach for rating the strength of COMPUSEC features, similar to the layered approach used by the Software Engineering Institute (SEI) Capability Maturity Model (CMM) to rate the robustness of software engineering processes. As shown in Exhibit 18-2, four evaluation divisions composed of seven classes were defined. Division A class A1 was the highest rating, while division D class D1 was the lowest. The divisions measured the extent of security protection provided, with each class and division building upon and strengthening the provisions of its predecessors. Twenty-seven specific criteria were evaluated. These criteria were grouped into four categories: security policy, accountability, assurance, and documentation. The Orange Book also introduced the concepts of a reference monitor, formal security policy model, trusted computing base, and assurance.
The Orange Book was oriented toward custom software, particularly defense and intelligence applications, operating on a mainframe computer


that was the predominant technology of the time. Guidance documents were issued; however, it was difficult to interpret or apply the Orange Book to networks or database management systems. When distributed processing became the norm, additional standards were issued to supplement the Orange Book, such as the Trusted Network Interpretation and the Trusted Database Management System Interpretation. Each standard had a different color cover, and collectively they became known as the Rainbow Series. In addition, the Federal Criteria for Information Technology Security was issued by NIST and NSA in December 1992, but it was short-lived.

At the same time, similar developments were proceeding outside the United States. Between 1990 and 1993, the Commission of the European Communities, the European Computer Manufacturers Association (ECMA), the Organization for Economic Cooperation and Development (OECD), the U.K. Communications–Electronics Security Group, and the Canadian Communications Security Establishment (CSE) all issued computer security standards or technical reports. These efforts and the evolution of the Rainbow Series were driven by three main factors:6

1. The rapid change in technology, which led to the need to merge communications security (COMSEC) and computer security (COMPUSEC)
2. The more universal use of information technology (IT) outside the defense and intelligence communities
3. The desire to foster a cost-effective commercial approach to developing and evaluating IT security that would be applicable to multiple industrial sectors

These organizations decided to pool their resources to meet the evolving security challenge. ISO/IEC Joint Technical Committee One (JTC1) Subcommittee 27 (SC27) Working Group Three (WG3) was formed in 1990.
Canada, France, Germany, the Netherlands, the United Kingdom, and the United States, which collectively became known as the CC Sponsoring Organizations, initiated the CC Project in 1993, while maintaining a close liaison with ISO/IEC JTC1 SC27 WG3. The CC Editing Board (CCEB), with the approval of ISO/IEC JTC1 SC27 WG3, released the first committee draft of the CC for public comment and review in 1996. The CC Implementation Management Board (CCIMB), again with the approval of ISO/IEC JTC1 SC27 WG3, incorporated the comments and observations gained from the first draft to create the second committee draft, which was released for public comment and review in 1997. Following a formal comment resolution and balloting period, the CC were issued as ISO/IEC 15408 in three parts:

• ISO/IEC 15408-1(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 1: Introduction and general model


• ISO/IEC 15408-2(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 2: Security functional requirements
• ISO/IEC 15408-3(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 3: Security assurance requirements

Parallel to this effort was the development and release of the Common Evaluation Methodology, referred to as the CEM, by the Common Evaluation Methodology Editing Board (CEMEB):

• CEM-97/017, Common Methodology for Information Technology Security Evaluation, Part 1: Introduction and General Model, v0.6, November 1997
• CEM-99/045, Common Methodology for Information Technology Security Evaluation, Part 2: Evaluation Methodology, v1.0, August 1999
• CEM-2001/0015, Common Methodology for Information Technology Security Evaluation, Part 2: Evaluation Methodology, Supplement: ALC_FLR — Flaw Remediation, v1.0, August 2001

As the CEM matures, it too will become an ISO/IEC standard.

PURPOSE AND INTENDED USE

The goal of the CC project was to develop a standardized methodology for specifying, designing, and evaluating IT products that perform security functions, one that would be widely recognized and yield consistent, repeatable results. In other words, the goal was to develop a full life-cycle, consensus-based security engineering standard. Once this was achieved, it was thought, organizations could turn to commercial vendors for their security needs rather than relying solely on custom products that had lengthy development and evaluation cycles with unpredictable results. The quantity, quality, and cost-effectiveness of commercially available IT security products would increase, and the time needed to evaluate them would decrease, especially given the emergence of the global economy.

There has been some confusion that the term IT product refers only to plug-and-play commercial off-the-shelf (COTS) products. In fact, the CC interprets the term IT product quite broadly, to include a single product or multiple IT products configured as an IT system or network. The standard lists several items that are not covered and are considered out of scope:7

• Administrative security measures and procedural controls
• Physical security
• Personnel security


• Use of evaluation results within a wider system assessment, such as certification and accreditation (C&A)
• Qualities of specific cryptographic algorithms

Administrative security measures and procedural controls, generally associated with operational security (OPSEC), are not addressed by the CC/CEM. Likewise, the CC/CEM does not define how risk assessments should be conducted, even though the results of a risk assessment are required as an input to a Protection Profile (PP).7 Physical security is addressed in a very limited context — that of restrictions on unauthorized physical access to security equipment and prevention of and resistance to unauthorized physical modification or substitution of such equipment.6 Personnel security issues are not covered at all; instead, they are generally handled by assumptions made in the PP. The CC/CEM does not address C&A processes or criteria; this was specifically left to each country and/or government agency to define. However, it is expected that CC/CEM evaluation results will be used as input to C&A. The robustness of cryptographic algorithms, or even which algorithms are acceptable, is not discussed in the CC/CEM. Rather, the CC/CEM limits itself to defining requirements for key management and cryptographic operation. Many issues not handled by the CC/CEM are covered by other national and international standards.

MAJOR COMPONENTS OF THE METHODOLOGY AND HOW THEY WORK

The three-part CC standard (ISO/IEC 15408) and the CEM are the two major components of the CC methodology, as shown in Exhibit 18-3. Part 1 of ISO/IEC 15408 provides a brief history of the development of the CC and identifies the CC sponsoring organizations. Basic concepts and terminology are introduced, and the CC methodology and how it corresponds to a generic system development life cycle is described. This information forms the foundation necessary for understanding and applying Parts 2 and 3.
Four key concepts are presented in Part 1:

• Protection Profiles (PPs)
• Security Targets (STs)
• Targets of Evaluation (TOEs)
• Packages

A Protection Profile, or PP, is a formal document that expresses an implementation-independent set of security requirements, both functional and assurance, for an IT product that meets specific consumer needs.7 The process of developing a PP helps a consumer to elucidate, define, and validate its security requirements, the end result of which is used to (1) communicate these requirements to potential developers and (2) provide a foundation


I. The Common Criteria

ISO/IEC 15408 Part 1
- Terminology and concepts
- Description of CC methodology
- History of development
- CC sponsoring organizations

ISO/IEC 15408 Part 2
- Catalog of security functional classes, families, components, and elements

ISO/IEC 15408 Part 3
- Catalog of security assurance classes, families, components, and elements
- Definition of standard EAL packages

II. The Common Evaluation Methodology

CEM-97/017 Part 1
- Terminology and concepts
- Description of CEM
- Evaluation principles and roles

CEM-99/045 Part 2
- Standardized application and execution of CC Part 3 requirements
- Evaluation tasks, activities, and work units

CEM-2001/015 Part 2 Supplement
- Flaw remediation

Exhibit 18-3. Major components of the CC/CEM.

from which a security target can be developed and an evaluation conducted. A Security Target, or ST, is an implementation-dependent response to a PP that is used as the basis for developing a TOE. In other words, the PP specifies security functional and assurance requirements, while an ST provides a design that incorporates security mechanisms, features, and functions to fulfill those requirements. A Target of Evaluation, or TOE, is an IT product, system, or network and its associated administrator and user guidance documentation that is the subject of an evaluation.7-9 A TOE is the physical implementation of an ST. There are three types of TOEs: monolithic, component, and composite. A monolithic TOE is self-contained; it has no higher or lower divisions. A


component TOE is the lowest-level TOE in an IT product or system; it forms part of a composite TOE. In contrast, a composite TOE is the highest-level TOE in an IT product or system; it is composed of multiple component TOEs. A package is a set of components that are combined together to satisfy a subset of identified security objectives.7 Packages are used to build PPs and STs. Packages can be a collection of functional or assurance requirements. Because they are a collection of low-level requirements or a subset of the total requirements for an IT product or system, packages are intended to be reusable. Evaluation assurance levels (EALs) are examples of predefined packages.

Part 2 of ISO/IEC 15408 is a catalog of standardized security functional requirements, or SFRs. SFRs serve many purposes. They7-9 (1) describe the security behavior expected of a TOE, (2) meet the security objectives stated in a PP or ST, (3) specify security properties that users can detect by direct interaction with the TOE or by the TOE's response to stimulus, (4) counter threats in the intended operational environment of the TOE, and (5) cover any identified organizational security policies and assumptions. The CC organizes SFRs in a hierarchical structure of security functionality:

• Classes
• Families
• Components
• Elements

Eleven security functional classes, 67 security functional families, 138 security functional components, and 250 security functional elements are defined in Part 2. Exhibit 18-4 illustrates the relationship between classes, families, components, and elements. A class is a grouping of security requirements that share a common focus; members of a class are referred to as families.7 Each functional class is assigned a long name and a short three-character mnemonic beginning with an "F." The purpose of the functional class is described and a structure diagram is provided that depicts the family members. ISO/IEC 15408-2 defines 11 security functional classes. These classes are lateral to one another; there is no hierarchical relationship among them. Accordingly, the standard presents the classes in alphabetical order. Classes represent the broadest spectrum of potential security functions that a consumer may need in an IT product. Classes are the highest-level entity from which a consumer begins to select security functional requirements. It is not expected that a single IT product will contain SFRs from all classes. Exhibit 18-5 lists the security functional classes.


[Diagram: a class contains one or more families; each family contains one or more components; each component contains one or more elements]

Exhibit 18-4. Relationship between classes, families, components, and elements.

Exhibit 18-5. Functional security classes.

Short Name | Long Name | Purpose8
FAU | Security audit | Monitor, capture, store, analyze, and report information related to security events
FCO | Communication | Assure the identity of originators and recipients of transmitted information; non-repudiation
FCS | Cryptographic support | Management and operational use of cryptographic keys
FDP | User data protection | Protect (1) user data and the associated security attributes within a TOE and (2) data that is imported, exported, and stored
FIA | Identification and authentication | Ensure unambiguous identification of authorized users and the correct association of security attributes with users and subjects
FMT | Security management | Management of security attributes, data, and functions and definition of security roles
FPR | Privacy | Protect users against discovery and misuse of their identity
FPT | Protection of the TSF | Maintain the integrity of the TSF management functions and data
FRU | Resource utilization | Ensure availability of system resources through fault tolerance and the allocation of services by priority
FTA | TOE access | Controlling user session establishment
FTP | Trusted path/channels | Provide a trusted communication path between users and the TSF and between the TSF and other trusted IT products


Exhibit 18-6. Standard notation for classes, families, components, and elements.

A functional family is a grouping of SFRs that share security objectives but may differ in emphasis or rigor. The members of a family are referred to as components.7 Each functional family is assigned a long name and a three-character mnemonic that is appended to the functional class mnemonic. Family behavior is described, and any hierarchies or ordering between family members are explained. Suggestions are made about potential OPSEC management activities and security events that are candidates to be audited.

Components are a specific set of security requirements that are constructed from elements; they are the smallest selectable set of elements that can be included in a Protection Profile, Security Target, or a package.7 Components are assigned a long name and described. Hierarchical relationships between one component and another are identified. The short name for components consists of the class mnemonic, the family mnemonic, and a unique number.

An element is an indivisible security requirement that can be verified by an evaluation; it is the lowest-level security requirement from which components are constructed.7 One or more elements are stated verbatim for each component. Each element has a unique number that is appended to the component identifier. If a component has more than one element, all of them must be used. Dependencies between elements are listed. Elements are the building blocks from which functional security requirements are specified in a protection profile.

Exhibit 18-6 illustrates the standard CC notation for security functional classes, families, components, and elements.

Part 3 of ISO/IEC 15408 is a catalog of standardized security assurance requirements, or SARs. SARs define the criteria for evaluating PPs, STs, and TOEs, as well as the security assurance responsibilities and activities of developers and evaluators. The CC organizes SARs in a hierarchical structure of security assurance classes, families, components, and elements.
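The notation summarized in Exhibit 18-6 is regular enough to be split mechanically: class mnemonic, family mnemonic, component number, and element number, with assurance elements (Part 3) additionally carrying a one-character action code. A hypothetical helper (the function name and sample identifiers are ours; only the notational pattern comes from the standard):

```python
import re

# Class and family mnemonics are each three letters; component and
# element are one-digit numbers; assurance elements may carry an
# action-code letter (D, C, or E) defined in Part 3.
_CC_ID = re.compile(
    r"^(?P<cls>[A-Z]{3})_(?P<family>[A-Z]{3})"
    r"\.(?P<component>\d+)(?:\.(?P<element>\d+)(?P<action>[DCE])?)?$"
)

def parse_cc_id(identifier: str) -> dict:
    """Split a CC short name such as 'FAU_GEN.1.2' into its parts."""
    m = _CC_ID.match(identifier)
    if not m:
        raise ValueError(f"not a CC identifier: {identifier}")
    # Drop parts that are absent (e.g., no element, no action code).
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_cc_id("FAU_GEN.1.2"))
# {'cls': 'FAU', 'family': 'GEN', 'component': '1', 'element': '2'}
```

A component-level identifier such as "FAU_GEN.1" parses the same way, simply without the element and action fields.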
Ten security assurance classes, 42 security assurance families, and 93 security assurance components are defined in Part 3.


A class is a grouping of security requirements that share a common focus; members of a class are referred to as families.7 Each assurance class is assigned a long name and a short three-character mnemonic beginning with an "A." The purpose of the assurance class is described and a structure diagram is provided that depicts the family members.

There are three types of assurance classes: (1) those that are used for Protection Profile or Security Target validation, (2) those that are used for TOE conformance evaluation, and (3) those that are used to maintain security assurance after certification. ISO/IEC 15408-3 defines ten security assurance classes. Two classes, APE and ASE, evaluate PPs and STs, respectively. Seven classes verify that a TOE conforms to its PP and ST. One class, AMA, verifies that security assurance is maintained between certification cycles. These classes are lateral to one another; there is no hierarchical relationship among them. Accordingly, the standard presents the classes in alphabetical order.

Classes represent the broadest spectrum of potential security assurance measures that a consumer may need to verify the integrity of the security functions performed by an IT product. Classes are the highest-level entity from which a consumer begins to select security assurance requirements. Exhibit 18-7 lists the security assurance classes in alphabetical order and indicates their type.

An assurance family is a grouping of SARs that share security objectives. The members of a family are referred to as components.7 Each assurance family is assigned a long name and a three-character mnemonic that is appended to the assurance class mnemonic. Family behavior is described. Unlike functional families, the members of an assurance family exhibit only linear hierarchical relationships, with an increasing emphasis on scope, depth, and rigor.
Some families contain application notes that provide additional background information and considerations concerning the use of a family or the information it generates during evaluation activities.

Components are a specific set of security requirements that are constructed from elements; they are the smallest selectable set of elements that can be included in a Protection Profile, Security Target, or a package.7 Components are assigned a long name and described. Hierarchical relationships between one component and another are identified. The short name for components consists of the class mnemonic, the family mnemonic, and a unique number. Again, application notes may be included to convey additional background information and considerations.

An element is an indivisible security requirement that can be verified by an evaluation; it is the lowest-level security requirement from which components are constructed.7 One or more elements are stated verbatim for each component. If a component has more than one element, all of them must be used. Dependencies between elements are listed. Elements are the building blocks from which a PP or ST is created.

Exhibit 18-7. Security assurance classes.

Short Name | Long Name | Type | Purpose
APE | Protection profile evaluation | PP/ST | Demonstrate that the PP is complete, consistent, and technically sound
ASE | Security target evaluation | PP/ST | Demonstrate that the ST is complete, consistent, technically sound, and suitable for use as the basis for a TOE evaluation
ACM | Configuration management | TOE | Control the process by which a TOE and its related documentation is developed, refined, and modified
ADO | Delivery and operation | TOE | Ensure correct delivery, installation, generation, and initialization of the TOE
ADV | Development | TOE | Ensure that the development process is methodical by requiring various levels of specification and design and evaluating the consistency between them
AGD | Guidance documents | TOE | Ensure that all relevant aspects of the secure operation and use of the TOE are documented in user and administrator guidance
ALC | Lifecycle support | TOE | Ensure that methodical processes are followed during the operations and maintenance phase so that security integrity is not disrupted
ATE | Tests | TOE | Ensure adequate test coverage, test depth, and functional and independent testing
AVA | Vulnerability assessment | TOE | Analyze the existence of latent vulnerabilities, such as exploitable covert channels; misuse or incorrect configuration of the TOE; and the ability to defeat, bypass, or compromise security credentials
AMA | Maintenance of assurance | AMA | Assure that the TOE will continue to meet its security target as changes are made to the TOE or its environment

PP/ST — Protection Profile or Security Target evaluation. TOE — TOE conformance evaluation. AMA — Maintenance of assurance after certification.

Each assurance element has a unique number that is appended to the component identifier, plus a one-character code. A "D" indicates assurance actions to be taken by the TOE developer. A "C" explains the content and presentation criteria for assurance evidence, that is, what must be demonstrated.7 An "E" identifies actions to be taken or analyses to be performed by the evaluator to confirm that evidence requirements have been met. Exhibit 18-8 illustrates the standard notation for assurance classes, families, components, and elements.

Exhibit 18-8. Standard notation for assurance classes, families, components, and elements.

Part 3 of ISO/IEC 15408 also defines seven hierarchical evaluation assurance levels, or EALs. An EAL is a grouping of assurance components that represents a point on the predefined assurance scale.7 In short, an EAL is an assurance package. The intent is to ensure that a TOE is not over- or underprotected by balancing the level of assurance against cost, schedule, technical, and mission constraints. Each EAL has a long name and a short name, which consists of "EAL" and a number from 1 to 7. The seven EALs add new and higher assurance components as security objectives become more rigorous. Application notes discuss limitations on evaluator actions and/or the use of information generated. Exhibit 18-9 cites the seven standard EALs.

Exhibit 18-9. Standard EAL packages.

Short Name | Long Name | Level of Confidence
EAL 1 | Functionally tested | Lowest
EAL 2 | Structurally tested |
EAL 3 | Methodically tested and checked |
EAL 4 | Methodically designed, tested, and reviewed | Medium
EAL 5 | Semi-formally designed and tested |
EAL 6 | Semi-formally verified design and tested |
EAL 7 | Formally verified design and tested | Highest

THE CEM

The Common Methodology for Information Technology Security Evaluation, known as the CEM (or CM), was created to provide concrete guidance to evaluators on how to apply and interpret SARs and their developer, content and presentation, and evaluator actions, so that evaluations are consistent and repeatable. To date, the CEM consists of two parts and a supplement. Part 1 of the CEM defines the underlying principles of evaluations and delineates the roles of sponsors, developers, evaluators, and national evaluation authorities. Part 2 of the CEM specifies the evaluation methodology in terms of evaluator tasks, subtasks, activities, subactivities, actions, and work units, all of which tie back to the assurance classes. A supplement issued to Part 2 in 2001 provides evaluation guidance for the ALC_FLR family. Like the CC, the CEM will become an ISO/IEC standard in the near future.

CC USER COMMUNITY AND STAKEHOLDERS

The CC user community and stakeholders can be viewed from two different constructs: (1) generic groups of users, and (2) formal organizational entities that are responsible for overseeing and implementing the CC/CEM worldwide. (See Exhibit 18-10.) ISO/IEC 15408-1 defines the CC/CEM generic user community to consist of:

• Consumers
• Developers
• Evaluators

Consumers are organizations and individuals who are interested in acquiring a security solution that meets their specific needs. Consumers state their security functional and assurance requirements in a PP. This mechanism is used to communicate with potential developers by conveying requirements in an implementation-independent manner, along with information about how a product will be evaluated.

Developers are organizations and individuals who design, build, and sell IT security products. Developers respond to a consumer's PP with an implementation-dependent detailed design in the form of an ST. In addition, developers prove through the ST that all requirements from the PP have been satisfied, including the specific activities levied on developers by SARs.

Evaluators perform independent evaluations of PPs, STs, and TOEs using the CC/CEM, specifically the evaluator activities stated in SARs. The results are formally documented and distributed to the appropriate entities. Consequently, consumers do not have to rely only on a developer's claims; they are privy to independent assessments from which they can evaluate and compare IT security products.
As the standard7 states: The CC is written to ensure that evaluations fulfill the needs of consumers — this is the fundamental purpose and justification for the evaluation process.

The Common Criteria Recognition Agreement (CCRA),10 signed by 15 countries to date, formally assigns roles and responsibilities to specific organizations:

• Customers or end users
• IT product vendors


Exhibit 18-10. Roles and responsibilities of CC/CEM stakeholders.

I. Generic Usersa

Consumers: Specify requirements. Inform developers how the IT product will be evaluated. Use PP, ST, and TOE evaluation results to compare products.
Developers: Respond to consumer's requirements. Prove that all requirements have been met.
Evaluators: Conduct independent evaluations using standardized criteria.

II. Specific Organizationsb

Customer or end user: Specify requirements. Inform vendors how the IT product will be evaluated. Use PP, ST, and TOE evaluation results to compare IT products.
IT product vendor: Respond to customer's requirements. Prove that all requirements have been met. Deliver evidence to sponsor.
Sponsor: Contract with CCTL for IT product to be evaluated. Deliver evidence to CCTL. Request accreditation from National Evaluation Authority.
Common Criteria Testing Laboratory (CCTL): Receive evidence from sponsor. Conduct evaluations according to CC/CEM. Produce Evaluation Technical Reports. Make certification recommendation to National Evaluation Authority.
National Evaluation Authority: Define and manage national evaluation scheme. Accredit CCTLs. Monitor CCTL evaluations. Issue guidance to CCTLs. Issue and recognize CC certificates. Maintain Evaluated Products Lists and PP Registry.
Common Criteria Implementation Management Board (CCIMB): Facilitate consistent interpretation and application of the CC/CEM. Oversee National Evaluation Authorities. Render decisions in response to Requests for Interpretations (RIs). Maintain the CC/CEM. Coordinate with ISO/IEC JTC1 SC27 WG3 and CEMEB.

a ISO/IEC 15408-1(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 1: Introduction and general model; Part 2: Security functional requirements; Part 3: Security assurance requirements.
b Arrangement on the Recognition of Common Criteria Certificates in the Field of Information Technology Security, May 23, 2000.


• Sponsors
• Common Criteria Testing Laboratories (CCTLs)
• National Evaluation Authorities
• Common Criteria Implementation Management Board (CCIMB)

Customers or end users perform the same role as consumers in the generic model. They specify their security functional and assurance requirements in a PP. By defining an assurance package, they inform developers how the IT product will be evaluated. Finally, they use PP, ST, and TOE evaluation results to compare IT products and determine which best meets their specific needs and will work best in their particular operational environment.

IT product vendors perform the same role as developers in the generic model. They respond to customer requirements by developing an ST and corresponding TOE. In addition, they provide proof that all security functional and assurance requirements specified in the PP have been satisfied by their ST and TOE. This proof and related development documentation is delivered to the Sponsor.

A new role introduced by the CCRA is that of the Sponsor. A Sponsor locates an appropriate CCTL and makes contractual arrangements with it to conduct an evaluation of an IT product. The Sponsor is responsible for delivering the PP, ST, or TOE and related documentation to the CCTL and for coordinating any pre-evaluation activities. A Sponsor may represent the customer or the IT product vendor, or be a neutral third party such as a system integrator.

The CCRA divides the generic evaluator role into three hierarchical functions: Common Criteria Testing Laboratories (CCTLs), National Evaluation Authorities, and the Common Criteria Implementation Management Board (CCIMB). CCTLs must meet accreditation standards and are subject to regular audit and oversight activities to ensure that their evaluations conform to the CC/CEM. CCTLs receive the PP, ST, or TOE and the associated documentation from the Sponsor and conduct a formal evaluation of it according to the CC/CEM and the assurance package specified in the PP.
If missing, ambiguous, or incorrect information is uncovered during the course of an evaluation, the CCTL issues an Observation Report (OR) to the sponsor requesting clarification. The results are documented in an Evaluation Technical Report (ETR), which is sent to the National Evaluation Authority along with a recommendation that the IT product be certified (or not).

Each country that is a signatory to the CCRA has a National Evaluation Authority, which is the focal point for CC activities within its jurisdiction. A National Evaluation Authority may take one of two forms: that of a Certificate Consuming Participant or that of a Certificate Authorizing Participant.

A Certificate Consuming Participant recognizes CC certificates issued by other entities but, at present, does not issue any certificates itself. It is not uncommon for a country to sign on to the CCRA as a Certificate Consuming Participant and then switch to a Certificate Authorizing Participant later, after it has established its national evaluation scheme and accredited some CCTLs.

A Certificate Authorizing Participant is responsible for defining and managing the evaluation scheme within its jurisdiction. This is the administrative and regulatory framework by which CCTLs are initially accredited and subsequently maintain their accreditation. The National Evaluation Authority issues guidance to CCTLs about standard practices and procedures and monitors evaluation results to ensure their objectivity, repeatability, and conformance to the CC/CEM. The National Evaluation Authority issues official CC certificates, if it agrees with the CCTL recommendation, and recognizes CC certificates issued by other National Evaluation Authorities. In addition, the National Evaluation Authority maintains the Evaluated Products List and PP Registry for its jurisdiction.

The Common Criteria Implementation Management Board (CCIMB) is composed of representatives from each country that is a party to the CCRA. The CCIMB has the ultimate responsibility for facilitating the consistent interpretation and application of the CC/CEM across all CCTLs and National Evaluation Authorities. Accordingly, the CCIMB monitors and oversees the National Evaluation Authorities. The CCIMB renders decisions in response to Requests for Interpretations (RIs). Finally, the CCIMB maintains the current version of the CC/CEM and coordinates with ISO/IEC JTC1 SC27 WG3 and the CEMEB concerning new releases of the CC/CEM and related standards.
FUTURE OF THE CC

As mentioned earlier, the CC/CEM is the result of a 30-year evolutionary process. The CC/CEM and the processes governing it have been designed so that the CC/CEM will continue to evolve and not become obsolete when technology changes, as the Orange Book did. Given that, and the fact that 15 countries have signed the CC Recognition Agreement (CCRA), the CC/CEM will be with us for the long term. Two near-term events to watch for are the issuance of both the CEM and the SSE-CMM as ISO/IEC standards.

The CCIMB has set in place a process to ensure consistent interpretations of the CC/CEM and to capture any needed corrections or enhancements to the methodology. Both situations are dealt with through what is known as the Request for Interpretation (RI) process. The first step in this process is for a developer, sponsor, or CCTL to formulate a question. This question, or RI, may be triggered by four different scenarios. The organization submitting the RI:10

• Perceives an error in the CC or CEM
• Perceives the need for additional material in the CC or CEM
• Proposes a new application of the CC and/or CEM and wants this new approach to be validated
• Requests help in understanding part of the CC or CEM

The RI cites the relevant CC and/or CEM reference and states the problem or question.

The ISO/IEC has a five-year reaffirm, update, or withdrawal cycle for standards. This means that the next version of ISO/IEC 15408, which will include all of the final interpretations in effect at that time, should be released near the end of 2004. The CCIMB has indicated that it may issue an interim version of the CC or CEM, prior to the release of the new ISO/IEC 15408 version, if the volume and magnitude of final interpretations warrant such an action. However, the CCIMB makes it clear that it remains dedicated to supporting the ISO/IEC process.1

Acronyms

ADP — Automatic Data Processing equipment
C&A — Certification and Accreditation
CC — Common Criteria
CCEB — Common Criteria Editing Board
CCIMB — Common Criteria Implementation Management Board
CCRA — Common Criteria Recognition Agreement
CCTL — accredited CC Testing Laboratory
CEM — Common Evaluation Methodology
CESG — U.K. Communication Electronics Security Group
CMM — Capability Maturity Model
COMSEC — Communications Security
COMPUSEC — Computer Security
CSE — Canadian Computer Security Establishment
DoD — U.S. Department of Defense
EAL — Evaluation Assurance Level
ECMA — European Computer Manufacturers Association


ETR — Evaluation Technical Report
IEC — International Electrotechnical Commission
ISO — International Organization for Standardization
JTC — ISO/IEC Joint Technical Committee
NASA — U.S. National Aeronautics and Space Administration
NIST — U.S. National Institute of Standards and Technology
NSA — U.S. National Security Agency
OECD — Organization for Economic Cooperation and Development
OPSEC — Operational Security
OR — Observation Report
PP — Protection Profile
RI — Request for Interpretation
SAR — Security Assurance Requirement
SEI — Software Engineering Institute at Carnegie Mellon University
SFR — Security Functional Requirement
SSE-CMM — System Security Engineering CMM
ST — Security Target
TCSEC — Trusted Computer Security Evaluation Criteria
TOE — Target of Evaluation

References

1. www.commoncriteria.org; centralized resource for current information about the Common Criteria standards, members, and events.
2. DoD 5200.28M, ADP Computer Security Manual — Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems, U.S. Department of Defense, January 1973.
3. DoD 5200.28M, ADP Computer Security Manual — Techniques and Procedures for Implementing, Deactivating, Testing, and Evaluating Secure Resource-Sharing ADP Systems, with 1st Amendment, U.S. Department of Defense, June 25, 1979.
4. CSC-STD-001-83, Trusted Computer System Evaluation Criteria (TCSEC), National Computer Security Center, U.S. Department of Defense, August 15, 1983.
5. DoD 5200.28-STD, Trusted Computer System Evaluation Criteria (TCSEC), National Computer Security Center, U.S. Department of Defense, December 1985.
6. Herrmann, D., A Practical Guide to Security Engineering and Information Assurance, Auerbach Publications, Boca Raton, FL, 2001.
7. ISO/IEC 15408-1(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 1: Introduction and general model.
8. ISO/IEC 15408-2(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 2: Security functional requirements.


9. ISO/IEC 15408-3(1999-12-01), Information technology — Security techniques — Evaluation criteria for IT security — Part 3: Security assurance requirements.
10. Arrangement on the Recognition of Common Criteria Certificates in the Field of Information Technology Security, May 23, 2000.

ABOUT THE AUTHOR

Debra Herrmann is the ITT manager of security engineering for the FAA Telecommunications Infrastructure program. Her special expertise is in the specification, design, and assessment of secure mission-critical systems. She is the author of Using the Common Criteria for IT Security Evaluation and A Practical Guide to Security Engineering and Information Assurance, both from Auerbach Publications.


AU1518Ch19Frame Page 297 Thursday, November 14, 2002 6:17 PM

Chapter 19

The Security Policy Life Cycle: Functions and Responsibilities Patrick D. Howard, CISSP

Most information security practitioners normally think of security policy development in fairly narrow terms. Use of the term policy development usually connotes writing a policy on a particular topic and putting it into effect. If practitioners happen to have recent, hands-on experience in developing information security policies, they may also include in their working definition the staffing and coordination of the policy, security awareness tasks, and perhaps policy compliance oversight. But is this an adequate inventory of the functions that must be performed in the development of an effective security policy?

Unfortunately, many security policies are ineffective because of a failure to acknowledge all that is actually required in developing policies. Limiting the way security policy development is defined also limits the effectiveness of policies resulting from this flawed definition. Security policy development goes beyond simple policy writing and implementation. It is also much more than activities related to staffing a newly created policy, making employees aware of it, and ensuring that they comply with its provisions.

A security policy has an entire life cycle that it must pass through during its useful lifetime. This life cycle includes research, getting policies down in writing, getting management buy-in, getting them approved, getting them disseminated across the enterprise, keeping users aware of them, getting them enforced, tracking them and ensuring that they are kept current, getting rid of old policies, and other similar tasks. Unless an organization recognizes the various functions involved in the policy development task, it runs the risk of developing policies that are poorly thought out, incomplete, redundant, not fully supported by users or management, superfluous, or irrelevant.

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC


Use of the security policy life cycle approach to policy development can ensure that the process covers all functions necessary for effective policies. It leads to a greater understanding of the policy development process through the definition of discrete roles and responsibilities, through enhanced visibility of the steps necessary in developing effective policies, and through the integration of disparate tasks into a cohesive process that aims to generate, implement, and maintain policies.

POLICY DEFINITIONS

It is important to be clear on terms at the beginning. What do we mean when we say policy, standard, baseline, guideline, or procedure? These are terms information security practitioners hear and use every day in the performance of their security duties. Sometimes they are used correctly, and sometimes they are not. For the purpose of this discussion, these terms are defined in Exhibit 19-1.

Exhibit 19-1 provides generally accepted definitions for a security policy hierarchy. A policy is defined as a broad statement of principle that presents management's position for a defined control area. A standard is defined as a rule that specifies a particular course of action or response to a given situation and is a mandatory directive for carrying out policies. Baselines establish how security controls are to be implemented on specific technologies. Procedures define specifically how policies and standards will be implemented in a given situation. Guidelines provide recommendations on how other requirements are to be met.

An example of interrelated security requirements at each level might begin with an electronic mail security policy for the entire organization at the highest policy level. This would be supported by various standards, including perhaps a requirement that e-mail messages be routinely purged 90 days following their creation.
A baseline in this example would relate to how security controls for the e-mail service are to be configured on a specific type of system (e.g., ACF2, VAX VMS, UNIX, etc.). Continuing the example, procedures would be specific requirements for how the e-mail security policy and its supporting standards are to be applied in a given business unit. Finally, guidelines in this example would include guidance to users on best practices for securing information sent or received via electronic mail.

It should be noted that many times the term policy is used in a generic sense to apply to security requirements of all types. When used in this fashion, it is meant to comprehensively include policies, standards, baselines, guidelines, and procedures. In this document, the reader is reminded to consider the context of the word's use to determine if it is used in a general way to refer to policies of all types or to specific policies at one level of the hierarchy.


Exhibit 19-1. Definition of terms.

Policy: A broad statement of principle that presents management's position for a defined control area. Policies are intended to be long-term and guide the development of more specific rules to address specific situations. Policies are interpreted and supported by standards, baselines, procedures, and guidelines. Policies should be relatively few in number, should be approved and supported by executive management, and should provide overall direction to the organization. Policies are mandatory in nature, and an inability to comply with a policy should require approval of an exception.

Standard: A rule that specifies a particular course of action or response to a given situation. Standards are mandatory directives to carry out management's policies and are used to measure compliance with policies. Standards serve as specifications for the implementation of policies. Standards are designed to promote implementation of high-level organization policy rather than to create new policy in themselves.

Baseline: A platform-specific security rule that is accepted across the industry as providing the most effective approach to a specific security implementation. Baselines are established to ensure that the security features of commonly used systems are configured and administered uniformly so that a consistent level of security can be achieved throughout the organization.

Procedure: Procedures define specifically how policies, standards, baselines, and guidelines will be implemented in a given situation. Procedures are either technology or process dependent and refer to specific platforms, applications, or processes. They are used to outline steps that must be taken by an organizational element to implement security related to these discrete systems and processes. Procedures are normally developed, implemented, and enforced by the organization owning the process or system. Procedures support organization policies, standards, baselines, and guidelines as closely as possible, while addressing specific technical or procedural requirements within the local organization to which they apply.

Guideline: A general statement used to recommend or suggest an approach to implementation of policies, standards, and baselines. Guidelines are essentially recommendations to consider when implementing security. While they are not mandatory in nature, they are to be followed unless there is a documented and approved reason not to.

POLICY FUNCTIONS

There are 11 functions that must be performed throughout the life of security policy documentation, from cradle to grave. These can be grouped into four fairly distinct phases of a policy's life. During its development, a policy is created, reviewed, and approved. This is followed by an implementation phase, in which the policy is communicated and either complied with or granted an exception. Then, during the maintenance phase, the policy must be kept up-to-date, awareness of it must be maintained, and compliance with

AU1518Ch19Frame Page 300 Thursday, November 14, 2002 6:17 PM

SECURITY MANAGEMENT PRACTICES

[Exhibit 19-2. Policy functions. The figure shows the 11 functions in chronological flow, grouped by phase: development tasks (creation, review, approval); implementation tasks (communication, compliance, exceptions); maintenance tasks (awareness, monitoring, enforcement, maintenance); and disposal tasks (retirement).]
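As a concrete illustration, the phase-to-function grouping shown in Exhibit 19-2 can be expressed as a small lookup structure. The following Python sketch is illustrative only; the identifiers are mine, not the chapter's:

```python
# The four life-cycle phases and the 11 functions they group, per Exhibit 19-2.
LIFE_CYCLE = {
    "development": ["creation", "review", "approval"],
    "implementation": ["communication", "compliance", "exceptions"],
    "maintenance": ["awareness", "monitoring", "enforcement", "maintenance"],
    "disposal": ["retirement"],
}

def phase_of(function: str) -> str:
    """Return the life-cycle phase to which a given policy function belongs."""
    for phase, functions in LIFE_CYCLE.items():
        if function in functions:
            return phase
    raise ValueError(f"unknown policy function: {function}")
```

A structure like this makes it easy to verify that all 11 functions are accounted for across the four phases.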

it must be monitored and enforced. Finally, during the disposal phase, the policy is retired when it is no longer required. Exhibit 19-2 shows all of these security policy development functions by phase and the chronological flow in which they are performed over the life cycle. The following paragraphs expand on each of these policy functions within these four phases.

Creation: Plan, Research, Document, and Coordinate the Policy

The first step in the policy development phase is the planning for, research, and writing of the policy; taken together, these constitute the creation function. The policy creation function includes identifying why there is a need for the policy (for example, the regulatory, legal, contractual, or operational requirement for the policy); determining the scope and applicability of the policy; defining the roles and responsibilities inherent in implementing the policy; and assessing the feasibility of implementing it. This function also includes conducting research to determine organizational requirements for developing policies (e.g., approval authorities, coordination requirements, and style or formatting standards), and researching industry-standard best practices for their applicability to the current organizational policy need. This function results in the documentation of the policy in accordance with organization standards and procedures, as well as coordination as necessary with internal and external organizations that it affects


to obtain input and buy-in from these elements. Overall, policy creation is probably the most easily understood function in the policy development life cycle because it is the one most often encountered and it normally has readily identifiable milestones.

Review: Get an Independent Assessment of the Policy

Policy review is the second function in the development phase of the life cycle. Once the policy document has been created and initial coordination has been effected, it must be submitted to an independent individual or group for assessment prior to its final approval. An independent review yields several benefits: a more viable policy through the scrutiny of individuals who have a different or wider perspective than the writer of the policy; broadened support for the policy through an increase in the number of stakeholders; and increased policy credibility through the input of a variety of specialists on the review team. Inherent to this function is the presentation of the policy to the reviewer(s), either formally or informally; addressing any issues that may arise during the review; explaining the objective, context, and potential benefits of the policy; and providing justification for why the policy is needed. As part of this function, the creator of the policy is expected to address comments and recommendations for changes to the policy, and to make all necessary adjustments and revisions resulting in a final policy ready for management approval.

Approval: Obtain Management Approval of the Policy

The final step in the policy development phase is the approval function. The intent of this function is to obtain management support for the policy and endorsement of the policy, through signature, by a company official in a position of authority. Approval permits and, hopefully, launches the implementation of the policy.
The approval function requires the policy creator to make a reasoned determination as to the appropriate approval authority; to coordinate with that official; to present the recommendations stemming from the policy review; and then to make a diligent effort to obtain broader management buy-in to the policy. Also, should the approving authority hesitate to grant full approval of the policy, the policy creator must address issues regarding interim or temporary approval as part of this function.

Communication: Disseminate the Policy

Once the policy has been formally approved, it passes into the implementation phase of the policy life cycle. Communication of the policy is the first function to be performed in this phase. The policy must be initially disseminated to organization employees or others who are affected by it (e.g., contractors, partners, and customers). This function entails determining the extent and method of the initial distribution of the policy,


addressing issues of geography, language, and culture; prevention of unauthorized disclosure; and the extent to which the supervisory chain will be used in communicating the policy. This is most effectively accomplished through the development of a policy communication, implementation, or rollout plan, which addresses these issues as well as the resources required for implementation, resource dependencies, documenting employee acknowledgment of the policy, and approaches for enhancing the visibility of the policy.

Compliance: Implement the Policy

Compliance encompasses activities related to the initial execution of the policy to comply with its requirements. This includes working with organizational personnel and staff to interpret how the policy can best be implemented in various situations and organizational elements; ensuring that the policy is understood by those required to implement, monitor, and enforce it; monitoring, tracking, and reporting on the pace, extent, and effectiveness of implementation activities; and measuring the policy's immediate impact on operations. This function also includes keeping management apprised of the status of the policy's implementation.

Exceptions: Manage Situations Where Implementation Is Not Possible

Because of timing, personnel shortages, and other operational requirements, not every policy can be complied with as originally intended. Therefore, exceptions to the policy will probably need to be granted to organizational elements that cannot fully meet its requirements. There must be a process in place to ensure that requests for exception are recorded, tracked, evaluated, submitted for approval or disapproval to the appropriate authority, documented, and monitored throughout the approved period of noncompliance. The process must also accommodate permanent exceptions to the policy as well as temporary waivers of requirements based on short-term obstacles.
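The exception process described above (record, track, approve, and monitor through an approved period of noncompliance) can be sketched as a simple tracked record. This minimal Python illustration uses field names and logic that are my assumptions, not a design prescribed by the chapter:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PolicyException:
    """One recorded request for exception to a policy (fields are illustrative)."""
    policy: str
    requester: str
    justification: str
    approved: bool = False
    permanent: bool = False          # permanent exception vs. temporary waiver
    expires: Optional[date] = None   # set only for temporary waivers

    def is_active(self, today: date) -> bool:
        """An exception remains in force only while approved and unexpired."""
        if not self.approved:
            return False
        if self.permanent:
            return True
        return self.expires is not None and today <= self.expires
```

Checking `is_active` on each record supports the monitoring requirement: an expired waiver automatically falls back to mandatory compliance.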
Awareness: Assure Continued Policy Awareness

Following implementation of the policy, the maintenance phase of the policy development life cycle begins. The awareness function of the maintenance phase comprises continuing efforts to ensure that personnel are aware of the policy in order to facilitate their compliance with its requirements. This is done by defining the awareness needs of the various audience groups within the organization (executives, line managers, users, etc.); determining the most effective awareness methods for each audience group (e.g., briefings, training, messages); and developing and disseminating awareness materials (presentations, posters, mailings, etc.) regarding the need for adherence to the policy. The awareness function also includes efforts to integrate up-to-date policy compliance and enforcement feedback


as well as current threat information to make awareness information as topical and realistic as possible. The final task is measuring employees' awareness of the policy and adjusting awareness efforts based on the results of measurement activities.

Monitoring: Track and Report Policy Compliance

During the maintenance phase, the monitoring function is performed to track and report on the effectiveness of efforts to comply with the policy. This information results from observations by employees and supervisors; from formal audits, assessments, inspections, and reviews; and from violation reports and incident response activities. This function includes continuing activities to monitor compliance or noncompliance with the policy through both formal and informal methods, and the reporting of deficiencies to appropriate management authorities for action.

Enforcement: Deal with Policy Violations

The compliance muscle behind the policy is effective enforcement. The enforcement function comprises management's response to acts or omissions that result in violations of the policy, with the purpose of preventing or deterring their recurrence. This means that once a violation is identified, appropriate corrective action must be determined and applied to the people (disciplinary action), processes (revision), and technologies (upgrade) affected by the violation to lessen the likelihood of its happening again. As stated previously, inclusion of information on these corrective actions in awareness efforts can be highly effective.

Maintenance: Ensure the Policy Is Current

Maintenance addresses the process of ensuring the currency and integrity of the policy. This includes tracking drivers for change (e.g., changes in technology, processes, people, organization, or business focus)
that may affect the policy; recommending and coordinating policy modifications resulting from these changes; and documenting policy changes and recording change activities. This function also ensures the continued availability of the policy to all parties affected by it, as well as maintaining the integrity of the policy through effective version control. When changes to the policy are required, several previously performed functions need to be revisited: review, approval, communication, and compliance in particular.

Retirement: Dispense with the Policy When No Longer Needed

After the policy has served its useful purpose (e.g., the company no longer uses the technology to which it applies, or it has been superseded by another policy), it must be retired. The retirement function makes up the disposal phase of the life cycle and is the final function in the policy


development life cycle. This function entails removing a superfluous policy from the inventory of active policies to avoid confusion, archiving it for future reference, and documenting information about the decision to retire it (justification, authority, date, etc.). These four life-cycle phases, comprising 11 distinct functions, must be performed in their entirety over the complete life cycle of a given policy. One cannot rule out the possibility of combining certain functions to suit current operational requirements. Nevertheless, regardless of the manner in which they are grouped, or the degree to which they are abbreviated by immediate circumstances, each function needs to be performed. In the development phase, organizations often attempt to develop policy without an independent review, resulting in policies that are not well conceived or well received. Shortsighted managers often fail to appropriately address the exception function of the implementation phase, mistakenly thinking there can be no circumstances for noncompliance. Many organizations fail to continually evaluate the need for their established policies during the maintenance phase, discounting the importance of maintaining the integrity and availability of the policies. One often finds inactive policies on the books of major organizations, indicating that the disposal function is not being applied. Not only do all the functions need to be performed, but several of them must be performed iteratively. In particular, maintenance, awareness, compliance monitoring, and enforcement must be continually exercised over the full life of the policy.

POLICY RESPONSIBILITIES

In most cases the organization's information security function (either a group or an individual) performs the vast majority of the functions in the policy life cycle and acts as the proponent for most policy documentation related to the protection of information assets.
By design, the information security function exercises both long-term responsibility and day-to-day tasks for securing information resources and, as such, should own and exercise centralized control over security-related policies, standards, baselines, procedures, and guidelines. This is not to say, however, that the information security function and its staff should be the proponent for all security-related policies or perform all policy development functions. For instance, owners of information systems should have responsibility for establishing the requirements necessary to implement organization policies for their own systems. While requirements such as these must comport with higher-level policy directives, their proponent should be the organizational element that has the greatest interest in ensuring the effectiveness of the policy. While the proponent or owner of a policy exercises continuous responsibility for the policy over its entire life cycle, several factors have a significant bearing on deciding what individual or element should


have direct responsibility for performing specific policy functions in an organization. These factors include the following:

• The principle of separation of duties should be applied in determining responsibility for a particular policy function to ensure that necessary checks and balances are applied. To provide a different or broader perspective, an official or group that is independent of the proponent should review the policy, and an official who is senior to the proponent should be charged with approving it. Similarly, to lessen the potential for conflicts of interest, the audit function, as an independent element within the organization, should be tasked with monitoring compliance with the policy, and external audit groups or organizations should be relied upon to provide an independent assessment of policy compliance.
• Additionally, for reasons of efficiency, organizational elements other than the proponent may need to be assigned responsibility for certain security policy development life-cycle functions. For instance, dissemination and communication of the policy is best carried out by the organizational element normally charged with performing these functions for the entire organization (e.g., knowledge management or corporate communications). On the other hand, awareness efforts are often assigned to the organization's training function on the basis of efficiency, even though the training staff is not particularly well suited to perform the policy awareness function.
While the training department may render valuable support during the initial dissemination of the policy and in measuring the effectiveness of awareness efforts, the organization's information security function is better suited to perform continuing awareness efforts because it is well positioned to monitor policy compliance and enforcement activities and to identify requirements for updating the program, each of which is an essential ingredient of effective employee awareness of the policy.
• Limits on the span of control that the proponent exercises affect who should perform a given policy function. Normally, the proponent can play only a limited role in compliance monitoring and enforcement of the policy because the proponent cannot be in all places where the policy has been implemented at all times. Line managers, because of their close proximity to the employees who are affected by security policies, are in a much better position to effectively monitor and enforce them and should therefore assume responsibility for these functions. These managers can provide the policy owner assurance that the policy is being adhered to and can ensure that violations are dealt with effectively.
• Limits on the authority that an individual or element exercises may determine the ability to successfully perform a policy function. The effectiveness of a policy may often be judged by its visibility and the emphasis


that organizational management places on it. The effectiveness of a policy in many cases depends on the authority on which the policy rests. For a policy to have organization-wide support, the official who approves it must have some recognized degree of authority over a substantial part of the organization. Normally, the organization's information security function does not enjoy that level of recognition across an entire organization and requires the support of upper-level management in accomplishing its mission. Consequently, acceptance of and compliance with information security policies is more likely when based on the authority of executive management.
• The proponent's placement in the organization may cause a lack of knowledge of the environment in which the policy will be implemented, thus hindering its effectiveness. Employment of a policy evaluation committee can provide a broader understanding of the operations that will be affected by the policy. A body of this type can help ensure that the policy is written so as to promote its acceptance and successful implementation, and it can be used to forecast implementation problems and to effectively assess situations where exceptions to the policy may be warranted.
• Finally, the applicability of the policy also affects the responsibility for policy life-cycle functions. What portion of the organization is affected by the policy? Does it apply to a single business unit, all users of a particular technology, or the entire global enterprise? This distinction can be significant. If the applicability of a policy is limited to a single organizational element, then management of that element should own the policy. However, if the policy is applicable to the entire organization, then a higher-level entity should exercise ownership responsibilities for the policy.
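The applicability factor above amounts to a simple ownership rule of thumb. The following Python sketch encodes it; the function name and the returned labels are illustrative assumptions, not terms from the chapter:

```python
def policy_owner(applies_to: set, all_units: set) -> str:
    """Assign ownership by applicability: a policy limited to one element
    is owned by that element's management; one spanning the whole
    organization is owned by a higher-level entity."""
    if applies_to == all_units:
        return "higher-level entity"
    if len(applies_to) == 1:
        (unit,) = applies_to
        return f"management of {unit}"
    # Partial scope: ownership sits at the lowest level spanning all affected units.
    return "lowest management level common to all affected elements"
```

For example, a password standard for a single finance application would be owned by finance management, while an enterprise acceptable-use policy would be owned at the executive level.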
THE POLICY LIFE-CYCLE MODEL

To ensure that all functions in the policy life cycle are appropriately performed and that responsibilities for their execution are adequately assigned, organizations should establish a framework that facilitates ready understanding, promotes consistent application, establishes a hierarchical structure of mutually supporting policy levels, and effectively accommodates frequent technological and organizational change. Exhibit 19-3 provides a reference for assigning responsibilities for each policy development function according to policy level. In general, this model proposes that responsibilities for functions related to security policies, standards, baselines, and guidelines are similar in many respects. As the element charged with managing the organization's overall information security program, the information security function should normally serve as the proponent for most related policies, standards,

Exhibit 19-3. Policy function–responsibility model.

• Creation. Policies, standards and baselines, and guidelines: information security function. Procedures: proponent element.
• Review. Policies, standards and baselines, and guidelines: policy evaluation committee. Procedures: information security function and proponent management.
• Approval. Policies: chief executive officer. Standards and baselines, and guidelines: chief information officer. Procedures: department vice president.
• Communication. Policies, standards and baselines, and guidelines: communications department. Procedures: proponent element.
• Compliance. Policies, standards and baselines, and guidelines: managers and employees organization-wide. Procedures: managers and employees of proponent element.
• Exceptions. Policies, and standards and baselines: policy evaluation committee. Guidelines: not applicable. Procedures: department vice president.
• Awareness. Policies, standards and baselines, and guidelines: information security function. Procedures: proponent management.
• Monitoring. Policies, standards and baselines, and guidelines: managers and employees, information security function, and audit function. Procedures: managers and employees assigned to proponent element, information security function, and audit function.
• Enforcement. Policies, and standards and baselines: managers. Guidelines: not applicable. Procedures: managers assigned to proponent element.
• Maintenance. Policies, standards and baselines, and guidelines: information security function. Procedures: proponent element.
• Retirement. Policies, standards and baselines, and guidelines: information security function. Procedures: proponent element.
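The function-responsibility model of Exhibit 19-3 lends itself naturally to a lookup table. This Python sketch encodes three representative rows (a full rendering would carry all 11 functions); the structure is illustrative, not part of the chapter:

```python
# A lookup-table rendering of part of Exhibit 19-3:
# function -> policy level -> responsible party.
RESPONSIBILITY = {
    "approval": {
        "policies": "chief executive officer",
        "standards_baselines": "chief information officer",
        "guidelines": "chief information officer",
        "procedures": "department vice president",
    },
    "review": {
        "policies": "policy evaluation committee",
        "standards_baselines": "policy evaluation committee",
        "guidelines": "policy evaluation committee",
        "procedures": "information security function and proponent management",
    },
    "enforcement": {
        "policies": "managers",
        "standards_baselines": "managers",
        "guidelines": "not applicable",
        "procedures": "managers assigned to proponent element",
    },
}

def who(function: str, level: str) -> str:
    """Return the responsible party for a function at a given policy level."""
    return RESPONSIBILITY[function][level]
```

Encoding the model this way also makes gaps visible: a function with no assigned party at some level raises a `KeyError` rather than passing silently.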


baselines, and guidelines related to the security of the organization's information resources. In this capacity, the information security function should perform the creation, awareness, maintenance, and retirement functions for security policies at these levels. There are exceptions to this general principle, however. For instance, even though it has a substantial impact on the security of information resources, it is more efficient for the human resources department to serve as the proponent for employee hiring policy and standards. Responsibilities for functions related to security procedures, on the other hand, are distinctly different from those for policies, standards, baselines, and guidelines. Exhibit 19-3 shows that proponency for procedures rests outside the organization's information security function and is decentralized, based on procedures' limited applicability by organizational element. Although procedures are created and implemented (among other functions) on a decentralized basis, they must be consistent with higher-level organization security policy; therefore, they should be reviewed by the organization's information security function as well as by the next-higher official in the proponent element's management chain. Additionally, the security and audit functions should provide feedback to the proponent on compliance with procedures when conducting reviews and audits. The specific rationale for the assignment of responsibilities shown in the model is best understood through an exploration of the model according to life-cycle functions, as noted below.

• Creation. In most organizations, the information security function should serve as the proponent for all security-related policies that extend across the entire enterprise and should be responsible for creating these policies, standards, baselines, and guidelines.
However, security procedures necessary to implement higher-level security requirements and guidelines should be created by each proponent element to which they apply because they must be specific to that element's operations and structure.
• Review. The establishment of a policy evaluation committee provides a broad-based forum for reviewing and assessing the viability of security policies, standards, baselines, and guidelines that affect the entire organization. The policy evaluation committee should be chartered as a group of policy stakeholders drawn from across the organization who are responsible for ensuring that security policies, standards, baselines, and guidelines are well written and understandable, are fully coordinated, and are feasible in terms of the people, processes, and technologies that they affect. Because of their volume, and the number of organizational elements involved, it will probably not be feasible for the central policy evaluation committee to review all procedures developed by proponent elements. However, security procedures require a

similar review, and the proponent should seek to establish a peer review or management review process to accomplish this, or request review by the information security function within its capability.
• Approval. The most significant differences between the responsibilities for policies vis-à-vis standards, baselines, and guidelines are the level of approval required for each and the extent of implementation. Security policies affecting the entire organization should be signed by the chief executive officer to provide the necessary level of emphasis and visibility to this most important type of policy. Because information security standards, baselines, and guidelines are designed to elaborate on specific policies, this level of policy should be approved with the signature of the executive official subordinate to the CEO who has overall responsibility for implementation of the policy. The chief information officer will normally be responsible for approving these types of policies. Similarly, security procedures should bear the approval of the official exercising direct management responsibility for the element to which the procedures apply. The department vice president or department chief will normally serve in this capacity.
• Communication. Because it has the apparatus to efficiently disseminate information across the entire organization, the communications department should exercise the policy communication responsibility for enterprisewide policies. The proponent should assume responsibility for communicating security procedures but, as much as possible, should seek the assistance of the communications department in executing this function.
• Compliance. Managers and employees to whom security policies apply play the primary role in implementing and ensuring initial compliance with newly published policies.
In the case of organization-wide policies, standards, baselines, and guidelines, this responsibility extends to all managers and employees to whom they apply. As for security procedures, this responsibility is limited to managers and employees of the organizational element to which the procedures apply.
• Exceptions. At all levels of an organization, there is the potential for situations that prevent full compliance with the policy. It is important that the proponent of the policy, or an individual or group with equal or higher authority, review exceptions. The policy evaluation committee can be effective in screening requests for exceptions received from elements that cannot comply with policies, standards, and baselines. Because guidelines are, by definition, recommendations or suggestions and are not mandatory, formal requests for exceptions to them are not necessary. In the case of security procedures, the lower-level official who approves the procedures should also serve as the authority for approving exceptions to them. The department vice president typically performs this function.


• Awareness. For most organizations, the information security function is ideally positioned to manage the security awareness program and should therefore have responsibility for this function in the case of security policies, standards, baselines, and guidelines that are applicable to the entire organization. However, the information security function should perform this function in coordination with the organization's training department to ensure unity of effort and optimum use of resources. Proponent management should exercise responsibility for employee awareness of the security procedures that it owns. Within capability, this can be accomplished with the advice and assistance of the information security function.
• Monitoring. The responsibility for monitoring compliance with security policies, standards, baselines, and guidelines that are applicable to the entire organization is shared among employees, managers, the audit function, and the information security function. Every employee who is subject to security requirements should assist in monitoring compliance by reporting deviations that they observe. Although they should not be involved in enforcing security policies, the information security function and the organization's audit function can play a significant role in monitoring compliance. This includes monitoring compliance with security procedures owned by lower-level organizational elements by reporting deviations to the proponent for appropriate enforcement action.
• Enforcement. The primary responsibility for enforcing security requirements of all types falls on the managers of employees affected by the policy. Of course, this does not apply to guidelines, which by design are not enforceable in strict disciplinary terms. Managers assigned to proponent elements to which procedures are applicable must be responsible for their enforcement.
The general rule is that the individual granted the authority for supervising employees should be the official who enforces the security policy. Hence, in no case should the information security function or audit function be granted enforcement authority in lieu of or in addition to the manager. Although the information security function should not be directly involved in enforcement actions, it is important that it be privy to reports of corrective action so that this information can be integrated into ongoing awareness efforts.
• Maintenance. With its overall responsibility for the organization's information security program, the information security function is best positioned to maintain security policies, guidelines, standards, and baselines having organization-wide applicability, ensuring that they remain current and available to those affected by them. At lower levels of the organization, proponent elements, as owners of security procedures, should perform the maintenance function for the procedures that they develop for their organizations.


• Retirement. When a policy, standard, baseline, or guideline is no longer necessary, its proponent should have the responsibility for retiring it. Normally, the organization's information security function will perform this function for organization-wide security policies, standards, baselines, and guidelines, while the proponent element that serves as the owner of a security procedure should have responsibility for retiring that procedure. Although this methodology is presented as an approach for developing information security policies specifically, its potential utility should be fairly obvious to an organization in the development, implementation, maintenance, and disposal of the full range of its policies, both security related and otherwise.

CONCLUSION

The life cycle of a security policy is far more complex than simply drafting written requirements to correct a deviation or respond to a newly deployed technology, and then posting them on the corporate intranet for employees to read. Employment of a comprehensive policy life cycle as described here will provide a framework to help an organization ensure that these interrelated functions are performed consistently over the life of a policy, through the assignment of responsibility for the execution of each policy development function according to policy type. Utilization of the security policy life-cycle model can result in policies that are timely, well written, current, widely supported and endorsed, approved, and enforceable for all levels of the organization to which they apply.
National Institute of Standards and Technology, An Introduction to Computer Security: The NIST Handbook, Special Publication 800-12, October 1995.

Peltier, Thomas R., Information Security Policies and Procedures: A Practitioner’s Reference, Auerbach Publications, New York, 1999.

Tudor, Jan Killmeyer, Information Security Architecture: An Integrated Approach to Security in the Organization, Auerbach Publications, New York, 2001.

ABOUT THE AUTHOR

Patrick D. Howard, CISSP, a senior information security consultant with QinetiQ-TIM, has more than 20 years of experience in information security. Pat has been an instructor for the Computer Security Institute, conducting CISSP Prep for Success Workshops across the United States.



Chapter 20

Security Assessment
Sudhanshu Kairab, CISSP, CISA

During the past decade, businesses have become increasingly dependent on technology. IT environments have evolved from mainframes running selected applications and independent desktop computers to complex client/server networks running a multitude of operating systems with connectivity to business partners and consumers. Technology trends indicate that IT environments will continue to become more complicated and connected.

With this trend in technology, why is security important? With advances in technology, security has become a central part of strategies to deploy and maintain technology. For companies pursuing E-commerce initiatives, security is a key consideration in developing the strategy. In the business-to-consumer markets, customers cite security as the main reason for buying or not buying online. In addition, most of the critical data resides on various systems within the IT environment of most companies. Loss or corruption of data can have devastating effects on a company, ranging from regulatory penalties stemming from laws such as HIPAA (Health Insurance Portability and Accountability Act) to loss of customer confidence.

In evaluating security in a company, it is important to keep in mind that managing security is a process much like any other process in a company. Like any other business process, security has certain technologies that support it. In the same way that an ERP (enterprise resource planning) package supports various supply-chain business processes such as procurement and manufacturing, technologies such as firewalls and intrusion detection systems support the security process. However, unlike some other business processes, security touches virtually every part of the business, from human resources and finance to core operations. Consequently, security must be looked at as a business process and not a set of tools.
The best security technology will not yield a secure environment if it is without sound processes and properly defined business requirements. One of the issues in companies today is that, as they have raced to address the numerous security concerns, security processes and technology have not always been implemented with a full understanding of the business and, as a result, have not always been aligned with the needs of the business.

When securing a company’s environment, management must consider several things. In deciding what security measures are appropriate, some considerations include:

• What needs to be protected?
• How valuable is it?
• How much does downtime cost the company?
• Are there regulatory concerns (e.g., HIPAA, GLBA [Gramm-Leach-Bliley Act])?
• What is the potential damage to the company’s reputation if there is a security breach?
• What is the probability that a breach can occur?

Depending on the answers to these and other questions, a company can decide which security processes make good business sense for it. The security posture must balance:

• The security needs of the business
• The operational concerns of the business
• The financial constraints of the business

The answers to the questions stated earlier can be ascertained by performing a security assessment. An independent third-party security assessment can help a company define what its security needs are and provide a framework for enhancing and developing its information security program. Like an audit, it is important for an assessment to be independent so that results are not (or do not have the appearance of being) biased in any way. An independent security assessment using an internal auditor or a third-party consultant can facilitate open and honest discussion that will provide meaningful information.

If hiring a third-party consultant to perform an assessment, it is important to properly evaluate the firm’s qualifications and set up the engagement carefully. The results of the security assessment will serve as the guidance for short- and long-term security initiatives; therefore, it is imperative to perform the appropriate due diligence evaluation of any consulting firm considered. In evaluating a third-party consultant, some attributes that management should review include:

• Client references. Determine where they have previously performed security assessments.
• Sample deliverables. Obtain a sense of the type of report that will be provided. Clients sometimes receive boilerplate documents or voluminous reports from security software packages that are difficult to decipher, not always accurate, and fail to adequately define the risks.


• Qualifications of the consultants. Determine if the consultants have technical or industry certifications (e.g., CISSP, CISA, MCSE, etc.) and what type of experience they have.
• Methodology and tools. Determine if the consultants have a formal methodology for performing the assessment and what tools are used to do some of the technical pieces of the assessment.

Because the security assessment will provide a roadmap for the information security program, it is critical that a quality assessment be performed. Once the selection of who is to do the security assessment is finalized, management should define or put parameters around the engagement. Some things to consider include:

• Scope. The scope of the assessment must be very clear, that is, network, servers, specific departments or business units, etc.
• Timing. One risk with assessments is that they can drag on. The people who will offer input should be identified as soon as possible, and a single point of contact should be appointed to work with the consultants or auditors performing the assessment to ensure that the work is completed on time.
• Documentation. The results of the assessment should be presented in a clear and concise fashion so management understands the risks and recommendations.

STANDARDS

The actual security assessment must measure the security posture of a company against standards. Security standards range from ones that address high-level operational processes to more technical and sometimes technology-specific standards. Some examples include:

• ISO 17799: Information Security Best Practices. This standard was developed by a consortium of companies and describes best practices for information security in the areas listed below. This standard is very process driven and is technology independent.
  — Security policy
  — Organizational security
  — Asset classification and control
  — Personnel security
  — Physical and environmental security
  — Communications and operations management
  — Access control
  — Systems development and maintenance
  — Business continuity management
  — Compliance


• Common Criteria (http://www.commoncriteria.org). “Represents the outcome of a series of efforts to develop criteria for evaluation of IT security products that are broadly useful within the international community.”1 The Common Criteria are broken down into the three parts listed below:
  — Part 1: Introduction and general model. Defines general concepts and principles of IT security evaluation and presents a general model for evaluation.
  — Part 2: Security functional requirements.
  — Part 3: Security assurance requirements.
• SANS/FBI Top 20 Vulnerabilities (http://www.sans.org/top20.htm). This is an updated list of the 20 most significant Internet security vulnerabilities, broken down into three categories: General, UNIX related, and NT related.
• Technology-specific standards. For instance, best practices for locking down Microsoft products can be found on the Microsoft Web site.

When performing an assessment, parts or all of the standards listed above or other known standards can be used. In addition, the consultant or auditor should leverage past experience and their knowledge of the company.

UNDERSTANDING THE BUSINESS

To perform an effective security assessment, one must have a thorough understanding of the business environment. Some of the components of the business environment that should be understood include:

• What are the inherent risks for the industry in which the company operates?
• What is the long- and short-term strategy for the company?
  — What are the current business requirements, and how will they change over the short term and the long term?
• What is the organizational structure, and how are security responsibilities handled?
• What are the critical business processes that support the core operations?
• What technology is in place?

To answer these and other questions, the appropriate individuals, including business process owners, technology owners, and executives, should be interviewed.

INHERENT RISKS

As part of obtaining a detailed understanding of the company, an understanding of the inherent risks in the business is required. Inherent risks are those risks that exist in the business without considering any controls. These risks are a result of the nature of the business and the environment in which it operates. Inherent risks can be related to a particular industry or to general business practices, and can range from regulatory concerns as a result of inadequate protection of data to risks associated with disgruntled employees within an information technology (IT) department. These risks can be ascertained by understanding the industry and the particular company. Executives are often a good source of this type of information.

BUSINESS STRATEGY

Understanding the business strategy can help identify what is important to a company. This will ultimately be a factor in the risk assessment and the associated recommendations. To determine what is important to a company, it is important to understand the long- and short-term strategies. To take this one step further, how will IT support the long- and short-term business strategies? What will change in the IT environment once the strategies are implemented? The business strategy gives an indication of where the company is heading and what is or is not important. For example, if a company were planning to consolidate business units, the security assessment might focus on integration issues related to consolidation, which would be valuable input in developing a consolidation strategy.

One example of a prevalent business strategy for companies of all sizes is facilitating employee telecommuting. In today’s environment, employees are increasingly accessing corporate networks from hotels or their homes during business hours as well as off hours. Executives as well as lower-level personnel have become dependent on the ability to access company resources at any time. From a security assessment perspective, the key objective is to determine if the infrastructure supporting remote access is secure and reliable.

Some questions that an assessment might address in evaluating a remote access strategy include:

• How will remote users access the corporate network (e.g., dial-in, VPN, etc.)?
• What network resources do remote users require (e.g., e-mail, shared files, certain applications)?
  — Based on what users must access, what kind of bandwidth is required?
• What is the tolerable downtime for remote access?

Each of the questions above has technology and process implications that need to be considered as part of the security assessment.

In addition to the business strategies, it is also helpful to understand security concerns at the executive level. Executives offer the “big-picture” view of the business, which others in the business sometimes do not. This high-level view can help prioritize the findings of a security assessment according to what is important to senior management. Interfacing with executives also provides an opportunity to make them more aware of security exposures that may potentially exist.

ORGANIZATIONAL STRUCTURE

For an information security program to be effective, the organizational structure must adequately support it. Where the responsibility for information security resides in an organization is often an indication of how seriously management views information security. In many companies today, information security is the responsibility of a CISO (chief information security officer), who might report to either the CIO (chief information officer) or the CEO (chief executive officer). The CISO position has risen in prominence since the September 11 attacks. According to a survey done in January 2002 by Booz Allen Hamilton, “firms with more than $1 billion in annual revenues … 54 percent of the 72 chief executive officers it surveyed have a chief security officer in place. Ninety percent have been in that position for more than two years.”2 In other companies, either middle- or lower-level management within an IT organization handles security.

Having a CISO can be an indication that management has a high level of awareness of information security issues. Conversely, information security responsibility at a lower level might mean a low level of awareness of information security. While this is not always true, a security assessment must ascertain management and company attitude regarding the importance of information security. Any recommendations made in the context of a security assessment must consider the organizational impact and, more importantly, whether the current setup of the organization is conducive to implementing the recommendations of the security assessment in the first place.

Another aspect of where information security resides in an organization is whether roles and responsibilities are clearly defined. As stated earlier, information security is a combination of process and technology. Roles and responsibilities must be defined such that there is a process owner for the key information security-related processes. In evaluating any part of an information security program, one of the first questions to ask is: “Who is responsible for performing the process?” Oftentimes, a security assessment may reveal that, while the process is very clearly defined and adequately addresses the business risk, no one owns it. In this case, there is no assurance that the process is being done. A common example of this is the process of ensuring that terminated employees are adequately processed. When employees are terminated, some things that are typically done include:


• Payroll is stopped.
• All user access is eliminated.
• All assets (i.e., computers, ID badges, etc.) are returned.
• Common IDs and passwords that the employee was using are changed.

Each of the steps above requires coordination among various departments, depending on the size and structure of a given company. Ensuring that terminated employees are processed correctly might mean coordination among departments such as human resources, IT, finance, and others. To ensure the steps outlined above are completed, a company might have a form or checklist to help facilitate communication among the relevant departments and to provide a record that the process has been completed. However, without someone in the company owning the responsibility of ensuring that the items on the checklist are completed, there is no assurance that a terminated employee is adequately processed; each department might assume someone else is responsible. Too often, processing of terminated employees is incomplete because of a lack of ownership of the process, which presents significant risk for any company.

Once there are clear roles and responsibilities for security-related processes, the next step is to determine how the company ensures compliance. Compliance with security processes can be checked using two methods. First, management controls can be built into the processes to ensure compliance. Building on the example of terminated employees, one of the significant elements in the processing is ensuring that the relevant user IDs are removed. If the user IDs of terminated employees are not removed by mistake, the error can still be caught during periodic reviews of user IDs. This periodic review is a management control to ensure that only valid user IDs are active, while also providing a measure of security compliance.

The second method of checking compliance is an audit. As information security grows in importance, many internal audit departments include it as part of their scope. The role of internal audit in an information security program is twofold. First, audits check compliance with key security processes.
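The periodic user ID review described above lends itself to partial automation: reconciling active system accounts against the HR roster of current employees flags accounts that should have been removed. The following is a minimal sketch; all employee and account names are invented for illustration:

```python
# Reconcile active system accounts against the HR roster of current
# employees. Accounts with no matching employee are candidates for
# removal. All names below are illustrative.

hr_active_employees = {"asmith", "bjones", "cdoe"}
system_accounts = {"asmith", "bjones", "cdoe", "tmartin"}  # tmartin was terminated

orphaned_accounts = system_accounts - hr_active_employees  # review for removal
unprovisioned = hr_active_employees - system_accounts      # employees with no account

print("Accounts to review for removal:", sorted(orphaned_accounts))
print("Employees without accounts:", sorted(unprovisioned))
```

In practice the two sets would come from the HR system and from each platform’s account listing, and any orphaned account would be investigated before removal rather than deleted automatically.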
Internal audits focus on different processes and related controls on a rotating basis over a period of time, based on risk. The auditors gain an understanding of the processes and associated risks and ensure that internal controls are in place to reasonably mitigate those risks. Essentially, internal audit is in a position to perform a continuous security assessment. Second, internal audits provide a company with an independent evaluation of the business processes, associated risks, and security policies. Because of their experience with and knowledge of the business and technology, internal auditors can evaluate and advise on security processes and related internal controls.


While many internal audit departments do not have an adequate level of focus on information security, its inclusion within the scope of internal audit activities is an important indication of the level of importance placed on it. Internal audit is in a unique position to raise the level of awareness of information security because of its independence and its access to senior management and the audit committee of the board of directors.

BUSINESS PROCESSES

In conjunction with understanding the organization, the core business processes must be understood when performing a security assessment. The core business processes are those that support the main operations of a company. For example, the supply-chain management process is a core process for a manufacturing company. In this case, the security related to the systems supporting supply-chain management would warrant a close examination.

A good example of where core business processes have resulted in increased security exposures is business-to-business (B2B) relationships. One common use of a B2B relationship is where business partners manage supply-chain activities using various software packages. In such a relationship, business partners might have access to each other’s manufacturing and inventory information. Some controls for potential security exposures resulting from such an arrangement include ensuring that:

• Business partners have access on a need-to-know basis.
• Communication of information between business partners is secure.
• The B2B connection is reliable.

These controls have information security implications and should be addressed in an information security program. For example, ensuring that business partners have access on a need-to-know basis might be accomplished using the access control features of the software as well as strict user ID administration procedures.
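Need-to-know access for business partners can be pictured as a deny-by-default allow-list per partner. The sketch below is a simplified illustration, not a description of any particular software package’s access control features; the partner and resource names are invented:

```python
# Deny-by-default, need-to-know access for business partners: each
# partner is granted an explicit set of resources, and anything not
# listed is refused. Partner and resource names are illustrative.

partner_access = {
    "partner_a": {"inventory_levels", "purchase_orders"},
    "partner_b": {"purchase_orders"},
}

def can_access(partner: str, resource: str) -> bool:
    """Allow only resources explicitly granted to the partner."""
    return resource in partner_access.get(partner, set())

print(can_access("partner_a", "inventory_levels"))      # True
print(can_access("partner_b", "inventory_levels"))      # False
print(can_access("unknown_partner", "purchase_orders")) # False
```

The design choice worth noting is the default: an unknown partner or an unlisted resource is denied, so omissions fail safe rather than open.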
The reliability of the B2B connection might be accomplished with a combination of hardware and software measures as well as SLAs (service level agreements) establishing acceptable downtime requirements.

In addition to the core business processes, security assessments must consider other business processes in place to support the operations of a company, including:

• Backup and recovery
• Information classification
• Information retention
• Physical security
• User ID administration
• Personnel security
• Business continuity and disaster recovery
• Incident handling
• Software development
• Change management
• Noncompliance

The processes listed above are the more traditional security-related processes that are common across most companies. In some cases, these processes might be discussed in conjunction with the core business processes, depending on the environment. In evaluating these processes, guidelines such as ISO 17799 and the Common Criteria can be used as benchmarks.

It is important to remember that understanding any of the business processes means understanding the manual processes as well as the technology used to support them. Business process owners and technology owners should be interviewed to determine exactly how the process is performed. Sometimes, a walk-through is helpful in gaining this understanding.

TECHNOLOGY ENVIRONMENT

As stated in the previous section, the technology supporting business processes is an important part of the security assessment. The technology environment ranges from industry-specific applications, to network operating systems, to security software such as firewalls and intrusion detection systems. Some of the more common areas to focus on in a security assessment include:

• Critical applications
• Local area network
• Wide area network
• Server operating systems
• Firewalls
• Intrusion detection systems
• Anti-virus protection
• Patch levels

When considering the technology environment, it is important not only to identify the components but also to determine how they are used. For example, firewalls are typically installed to filter traffic going in and out of a network. In a security assessment, one must understand what the firewall is protecting and whether the rule base is configured around business requirements. Understanding whether the technology environment is set up in alignment with business requirements will enable a more thoughtful security assessment.
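One way to picture checking a rule base against business requirements is to flag allow rules for services outside a business-approved list. The sketch below is illustrative only; the rule-base layout and service names are invented and not tied to any firewall product:

```python
# Flag "allow" rules that permit services outside the business-approved
# list. The rule base and service names are illustrative and not drawn
# from any particular firewall product.

approved_services = {"https", "smtp"}

rule_base = [
    {"id": 1, "action": "allow", "service": "https"},
    {"id": 2, "action": "allow", "service": "telnet"},  # not business approved
    {"id": 3, "action": "deny", "service": "any"},
]

exceptions = [
    rule for rule in rule_base
    if rule["action"] == "allow" and rule["service"] not in approved_services
]

for rule in exceptions:
    print(f"Rule {rule['id']} allows unapproved service: {rule['service']}")
```

Each flagged rule is then a question for the business owner: is there a documented requirement for this service, or is it a candidate for removal?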


RISK ASSESSMENT

Once there is a good understanding of the business, its critical processes, and the technology supporting the business, the actual risk assessment can be done — that is, what is the risk as a result of the security exposures? While gaining an understanding of the business and the risk assessment are listed as separate steps, it is important to note that both of these steps will tend to happen simultaneously in the context of an audit, and this process will be iterative to some extent. Due to the nature of how information is obtained and the dynamic nature of a security assessment, the approach to performing the assessment must be flexible.

The assessment of risk takes the understanding of the critical processes and technology one step further. The critical business processes and the associated security exposures must be evaluated to determine the risk to the company. Some questions to think about when determining risk include:

• What is the impact to the business if the business process cannot be performed?
• What is the monetary impact?
  — Cost to restore information
  — Regulatory penalties
• What is the impact to the reputation of the company?
• What is the likelihood of an incident due to the security exposure?
• Are there any mitigating controls that reduce the risk?

It is critical to involve business process and technology owners when determining risks. Depending on how the assessment is performed, some of the questions will come up or be answered as the initial information is gathered. In addition, other more detailed questions will come up that will provide the necessary information to properly assess the risk.

In addition to evaluating the business processes, the risk assessment should also be done relative to security exposures in the technology environment. Some areas on which to focus here include:

• Perimeter security (firewalls, intrusion detection, etc.)
• Servers
• Individual PCs
• Anti-virus software
• Remote access

Security issues relating to the specific technologies listed above may come up during the discussions about the critical business processes. For example, locking down servers may arise because it is likely that there are servers that support some of the critical business processes.
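The impact and likelihood questions above are often combined into a simple qualitative score, likelihood multiplied by impact, to rank exposures for remediation. The following is a sketch using an invented 1-to-3 scale; the findings are made up for illustration:

```python
# Qualitative risk scoring: score = likelihood x impact, each rated on
# an illustrative 1 (low) to 3 (high) scale. The findings are made up.

findings = [
    {"exposure": "Unpatched public web server", "likelihood": 3, "impact": 3},
    {"exposure": "Shared administrator password", "likelihood": 2, "impact": 3},
    {"exposure": "Unlocked wiring closet", "likelihood": 2, "impact": 1},
]

for finding in findings:
    finding["score"] = finding["likelihood"] * finding["impact"]

# Highest-scoring exposures are the remediation priorities.
for finding in sorted(findings, key=lambda f: f["score"], reverse=True):
    print(f"{finding['score']:>2}  {finding['exposure']}")
```

A score of this kind is only as good as the ratings behind it, which is why the business process and technology owners must be involved in assigning likelihood and impact.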


Once all the security risks have been determined, the consultant or auditor must identify what measures are in place to mitigate the risks. Some of the measures to look for include:

• Information security policies
• Technical controls (e.g., servers secured according to best-practice standards)
• Business process controls (e.g., review of logs and management reports)

The controls may be identified while the process is reviewed and the risk is determined. Again, a security assessment is an iterative process in which information may not be uncovered in a structured manner. It is important to differentiate and organize the information so that risk is assessed properly.

The combination of security exposures and controls (or lack thereof) to mitigate the associated risks should then be used to develop the gap analysis and recommendations. The gap analysis is essentially a detailed list of security exposures, along with controls to mitigate the associated risks. Those areas where there are inadequate controls or no controls to mitigate the security exposure are the gaps, which potentially require remediation of some kind.

The final step in the gap analysis is to develop recommendations to close the gaps. Recommendations could range from writing a security policy, to changing the technical architecture, to altering how the current business process is performed. It is very important that the recommendations consider the business needs of the organization. Before a recommendation is made, a cost/benefit analysis should be done to ensure that it makes business sense. It is possible that, based on the cost/benefit analysis and operational or financial constraints, the organization might find it reasonable to accept certain security risks. Because the recommendations must be sold to management, they must make sense from a business perspective.

The gap analysis should be presented in an organized format that management can use to understand the risks and implement the recommendations. An effective way to present the gap analysis is a risk matrix with the following columns represented:

• Finding
• Risk
• Controls in place
• Recommendation
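The matrix itself can be represented as structured records keyed by the four columns, which makes it easy to sort, filter, or render findings. A minimal sketch with one invented finding:

```python
# Represent the four-column risk matrix as a list of records so it can
# be sorted, filtered, or rendered. The sample finding is illustrative.

risk_matrix = [
    {
        "finding": "Terminated users retain active accounts",
        "risk": "Unauthorized access to company systems and data",
        "controls_in_place": "None identified",
        "recommendation": "Add account removal to the termination checklist",
    },
]

for row in risk_matrix:
    for column in ("finding", "risk", "controls_in_place", "recommendation"):
        print(f"{column:>18}: {row[column]}")
```

Keeping the matrix as data rather than a static document also allows the risk scores discussed earlier to be attached to each row for prioritization.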

This format provides a simple and concise presentation of the security exposures, controls, and recommendations. The presentation of the gap analysis is very important because management will use it to understand the security exposures and associated risks. In addition, the gap analysis can be used to prioritize short- and long-term security initiatives.

CONCLUSION

For many companies, the security assessment is the first step in developing an effective information security program because many organizations do not know where they stand from a security perspective. An independent security assessment and the resulting gap analysis can help determine what the security exposures are, as well as provide recommendations for additional security measures that should be implemented. The gap analysis can also help management prioritize the tasks in the event that not all of the recommendations can be implemented immediately.

The gap analysis reflects the security position at a given time, and the recommendations reflect current and future business requirements to the extent they are known. As business requirements and technologies change, security exposures will invariably change. To maintain a sound information security program, the cycle of assessments, gap analysis, and implementation of recommendations should be performed on a continuous basis to effectively manage security risk.

References

1. Common Criteria Web page: http://www.commoncriteria.org/docs/origins.html.
2. Flash, Cynthia, Rise of the chief security officer, Internet News, March 25, 2002, http://www.internetnews.com/ent-news/article/0,7_997111,00.html.

ABOUT THE AUTHOR

Sudhanshu Kairab, CISSP, CISA, is an information security consultant with a diverse background, including security consulting, internal auditing, and public accounting across different industries. His recent projects include security assessments and development of security policies and procedures.



Chapter 21

Evaluating the Security Posture of an Information Technology Environment: The Challenges of Balancing Risk, Cost, and Frequency of Evaluating Safeguards
Brian R. Schultz, CISSP, CISA

The elements that could affect the integrity, availability, and confidentiality of the data contained within an information technology (IT) system must be assessed periodically to ensure that the proper safeguards have been implemented to adequately protect the resources of an organization. More specifically, the security that protects the data contained within IT systems should be evaluated regularly. Without assurance that the data contained within the system has integrity and is therefore accurate, the system is useless to the stakeholders who rely on the accuracy of such data.

Historically, safeguards over a system have been evaluated as a function of compliance with laws, regulations, or guidelines driven by an external entity. External auditors such as financial statement auditors might assess security over a system to understand the extent of security controls implemented and whether these controls are adequate to allow them to rely on the data processed by the systems. Potential partners for a merger might assess the security of an organization’s systems to determine the effectiveness of security measures and to gain a better understanding of the systems’ condition and value. See Exhibit 21-1 for a list of common IT evaluation methodologies.



Exhibit 21-1. Common IT evaluation types.

Type of Evaluation: Financial Statement Audit
Stakeholders: All professionals who work for the organization or who own a company that undergoes an annual financial statement audit.
Description: Financial statement auditors review the financial data of an organization to determine whether the financial data is accurately reported. As a component of performing the financial statement audit, they also review the controls (safeguards) used to protect the integrity of the data. Financial statement auditors are not concerned with the confidentiality or availability of data as long as it has no impact on the integrity of the data. This work will be conducted in accordance with American Institute of Certified Public Accountants (AICPA) standards for public organizations and in accordance with the Federal Information System Controls Audit Manual (FISCAM) for all U.S. federal agency financial statement audits.

Type of Evaluation: Due Diligence Audit before the Purchase of a Company
Stakeholders: Potential buyers of a company.
Description: Evaluation of the safeguards implemented and the condition of an IT system prior to the purchase of a company.

Type of Evaluation: SAS 70 Audit
Stakeholders: The users of a system that is processed by a facility run by another organization.
Description: The evaluation of data centers that process (host) applications or complete systems for several organizations. The data center will frequently obtain the services of a third-party organization to perform an IT audit over the data center. The report, commonly referred to as an SAS 70 Report, provides an independent opinion of the safeguards implemented at the shared data center. The SAS 70 Report is generally shared with each of the subscribing organizations that uses the services of the data center. Because the SAS 70 audit and associated report are produced by an independent third-party organization, most subscribing organizations readily accept the results as sufficient, eliminating the need to initiate their own audits of the data center.

Type of Evaluation: Federal Financial Institutions Examination Council (FFIEC) Information Systems Examination
Stakeholders: All professionals in the financial industry and their customers.
Description: Evaluation of the safeguards affecting the integrity, reliability, and accuracy of data and the quality of the management information systems supporting management decisions.

Type of Evaluation: Health Insurance Portability and Accountability Act (HIPAA) Compliance Audit
Stakeholders: All professionals in health care and patients.
Description: Evaluation of an organization’s compliance with HIPAA, specifically in the area of security and privacy of healthcare data and data transmissions.



Evaluating the Security Posture of an IT Environment

Exhibit 21-1. Common IT evaluation types (continued).

Type of Evaluation: U.S. Federal Government Information Security Reform Act (GISRA) Review
Stakeholders: All U.S. federal government personnel and American citizens.
Description: Evaluation of safeguards of federal IT systems, with a final summary report of each agency's security posture provided to the Office of Management and Budget.

Type of Evaluation: U.S. Federal Government Risk Assessment in compliance with Office of Management and Budget Circular A-130
Stakeholders: All federal government personnel and those who use the data contained within those systems.
Description: Evaluation of U.S. government major applications and general support systems every three years to certify and accredit that the system is properly secured to operate and process data.

Evaluations of IT environments generally are not performed proactively by the IT department of an organization. This is primarily due to a performance-focused culture within the ranks of chief information officers and other executives, who have been driven to achieve performance over the necessity of security. As more organizations experience performance problems that result from a lack of effective security, there will be more proactive efforts to integrate security into the development of IT infrastructures and the applications that reside within them. In the long run, incorporating security from the beginning is significantly more effective and results in a lower cost over the life cycle of a system.

Internal risk assessments should be completed by the information security officer or an internal audit department annually, and more often if frequent hardware and software changes warrant. Before a major launch of a new application or major platform, a pre-implementation review (before placing the system into production) should be performed. If an organization does not have the capacity or expertise to perform its own internal risk assessment or pre-implementation evaluation, a qualified consultant should be hired to perform the work. The use of a contractor offers many advantages:

• Independent evaluators have a fresh approach and will not rely on previously formed assumptions.
• Independent evaluators are not restricted by internal politics.
• Systems personnel are generally more forthright with an outside consultant than with internal personnel.
• Outside consultants have been exposed to an array of systems of other organizations and can offer a wider perspective on how the security posture of the system compares with systems of other organizations.


Exhibit 21-2. Security life-cycle model (figure): security strategy and policy at the core, surrounded by the recurring cycle of the design, test, implement, and assess phases.

• Outside consultants might have broader technology experience based on their exposure to multiple technologies and are therefore likely to be in a position to offer recommendations for improving security.

When preparing for an evaluation of the security posture of an IT system, the security life-cycle model should be used to examine the organization's security strategy, policies, procedures, architecture, infrastructure design, testing methodologies, implementation plans, and prior assessment findings.

SECURITY LIFE-CYCLE MODEL

The security life-cycle model contains all of the elements of security for a particular component of an IT environment, as seen in Exhibit 21-2. Security elements tend to work in cycles. Ideally, the security strategy and policy are determined with a great deal of thought and vision, followed by the sequential phases of design, test, implement, and, finally, assess. In the design phase, the risk analyst examines the design of safeguards and the chosen methods of implementation. In the second phase, the test phase, the risk assessment examines the testing procedures and processes used before safeguards are placed into production. In the following phase, the implementation phase, the risk assessment analyzes the effectiveness of the technical safeguard settings within the operating system, multilevel security, database management system, application-level security, public key infrastructure, intrusion detection system, firewalls, and routers. These safeguards are evaluated using technical vulnerability tools as well as a manual review of security settings provided on printed reports. Assessing security is the last phase of the security life-cycle model; in this phase, the actions taken during the previous phases are assessed. The assess phase is the feedback mechanism that provides the organization with the condition of the security posture of an IT environment.

The risk assessment first focuses on the security strategy and policy component of the model. This component is the core of the model, and many information security professionals would argue that it is the most important element of a successful security program: the success or failure of an organization's security hinges on a well-formulated, risk-based security strategy and policy. Used in the appropriate context, the security life-cycle model is an effective framework for the evaluation of IT security risks.

ELEMENTS OF RISK ASSESSMENT METHODOLOGIES

A risk assessment is an active process used to evaluate the security of an IT environment. Contained within each security assessment methodology are the elements that permit the identification and categorization of the components of the security posture of a given IT environment. These identified elements provide the language necessary to identify, communicate, and report the results of a risk assessment; they consist of threats, vulnerabilities, safeguards, countermeasures, and residual risk analysis. As seen in Exhibit 21-3, each of these elements is dynamic and, in combination, they constitute the security posture of the IT environment.

Exhibit 21-3. Elements of an organization's security posture (figure): threats and vulnerabilities surrounding the organization's data.


THREATS

A threat is a force that could affect an organization or an element of an organization. Threats can be either external or internal to an organization and, by themselves, are not harmful; they merely have the potential to be harmful. Threats are also classified as either man-made (those that mankind generates) or natural (those that occur naturally). For a threat to affect an organization, it must exploit an existing vulnerability. Every organization is vulnerable to threats. The number, frequency, severity, type, and likelihood of each threat depend on the environment of the IT system. Threats can be ranked on a relative scale of low, medium, and high, based on the potential risk to an asset or group of assets:

• Low indicates a relatively low probability that the threat would have a significant effect.
• Medium indicates a moderate probability that the threat would have a significant effect if not mitigated by an appropriate safeguard.
• High indicates a relatively high probability that the threat could have a significant effect if not mitigated by an appropriate safeguard or series of safeguards.

VULNERABILITY

A vulnerability is a weakness or condition of an organization that could permit a threat to take advantage of that weakness to affect the organization's performance. The absence of a firewall protecting an organization's network from external attack is an example of a vulnerability. All organizations have and will continue to have vulnerabilities. However, each organization should identify the potential threats that could exploit its vulnerabilities and properly safeguard against those threats that could have a dramatic effect on performance.
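The relative low/medium/high scale above can be sketched in code. A minimal illustration; the numeric probability cut-offs (0.3 and 0.7) are assumptions for the example, not values from the text:

```python
def rank_threat(probability: float, significant_effect: bool) -> str:
    """Rank a threat low/medium/high from its estimated probability of
    occurring against an asset and whether its effect would be significant.
    The 0.3 / 0.7 thresholds are illustrative assumptions."""
    if not significant_effect or probability < 0.3:
        return "low"
    if probability < 0.7:
        return "medium"
    return "high"

# e.g., rank_threat(0.8, True) ranks the threat "high"
```

In practice such a function would sit behind whatever estimation process the assessment team uses; the value of encoding the scale is consistency across evaluators.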
SAFEGUARDS

Safeguards, also called controls, are measures designed to prevent, detect, protect, or sometimes react in order to reduce the likelihood — or completely mitigate the possibility — of a threat exploiting an organization's vulnerabilities. A safeguard may perform several of these functions at the same time, or only one of them. A firewall that is installed and configured properly is an example of a safeguard that prevents external attacks on the organization's network. Ideally, a "defense-in-depth" approach should be deployed, implementing multiple layers of safeguards to establish the appropriate level of protection for the given environment. Layered protection places several obstacles in an attacker's way, consuming the attacker's time and money and increasing the risk of continuing the attack. For instance, a medical research firm should safeguard its product research from theft by implementing a firewall on its network to prevent someone from obtaining unauthorized access to the network. In addition, the firm might implement a network intrusion detection system to create an effective defense-in-depth approach to external network safeguards.

A countermeasure is a type of safeguard that is triggered by an attack and is reactive in nature; its primary goal is to defend by launching an offensive action. Countermeasures should be deployed with caution because, if activated by an attack, they could have a profound effect on numerous systems.

RESIDUAL RISK ANALYSIS

As a risk assessment is completed, all of the identified vulnerabilities should be documented and a residual risk analysis performed. In this process, each individual vulnerability is examined along with the existing safeguards (if any), and the residual risk is then determined. The final step is the development of recommendations to strengthen existing safeguards, or to implement new safeguards, to mitigate the identified residual risk.

RISK ASSESSMENT METHODOLOGIES

Several risk assessment methodologies are available to the information security professional to evaluate the security posture of an IT environment. The selection of a methodology is based on a combination of factors, including the purpose of the risk assessment, the available budget, and the required frequency. The primary consideration, however, is the organization's need for performing the risk assessment. The depth of the risk assessment required is driven by the level of risk attributed to the continued and accurate performance of the organization's systems. An organization that could be put out of business by a systems outage of a few days holds a much higher level of risk than an organization that could survive weeks or months without its systems.
For example, an online discount stockbroker would be out of business without the ability to execute timely stock transactions, whereas a construction company might continue operations for several weeks without access to its systems and suffer no significant impact.

An organization's risk management approach should also be considered before selecting a risk assessment methodology. Some organizations are proactive in addressing risk and have a well-established risk management program. Before proceeding with the selection of a risk assessment methodology, it is helpful to determine whether the organization has such a program and the extent of its depth and breadth. A highly developed risk management program will already have several layers of safeguards deployed and requires a much different assessment approach than an undeveloped program with few safeguards designed and deployed. Gaining an understanding of the design of the risk management program, or of its absence, enables the information security professional conducting the risk assessment to quickly identify the layers of controls that should be considered when scoping the work.

The risk assessment methodologies available to the information security professional are general rather than platform specific. There are several methodologies available, and the inexperienced information security professional, or one unfamiliar with the risk assessment process, will quickly become frustrated with the vast array of methodologies and opinions on how to conduct an IT risk assessment. It is the author's opinion that all IT risk assessment methodologies should operate at the platform level; this is the only real way to thoroughly address the risk of a given IT environment. Some of the highest risks within an IT environment are technology specific; therefore, each risk assessment should include a technical-level evaluation. However, the lack of technology-specific vulnerability and safeguard information makes a technically driven risk assessment a challenge for the information security professional. Hardware and software change frequently, and each new version opens new vulnerabilities. In an ideal world, a centralized repository of vulnerabilities and associated safeguards would be available to the security professional; in the meantime, the professional must rely on decentralized sources of information regarding technical vulnerabilities and associated safeguards.
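The residual risk analysis described earlier, in which each vulnerability is examined against its layered safeguards, can be sketched as follows. The data model and the multiplicative treatment of safeguard effectiveness are illustrative assumptions, not a method prescribed by the text:

```python
def residual_risk(inherent_risk, safeguard_effectiveness):
    """Reduce an inherent risk score (0 to 1) by each layered safeguard's
    effectiveness (0 to 1); defense-in-depth layers compound."""
    risk = inherent_risk
    for effectiveness in safeguard_effectiveness:
        risk *= (1.0 - effectiveness)
    return risk

def needs_new_safeguard(inherent_risk, safeguards, acceptable=0.1):
    """True when residual risk still exceeds the acceptable level,
    i.e., a stronger or additional safeguard should be recommended."""
    return residual_risk(inherent_risk, safeguards) > acceptable

# A 0.8 inherent risk behind one 50%-effective safeguard leaves 0.4
# residual risk; adding a 90%-effective layer brings it down to 0.04.
```

The acceptable-risk threshold is exactly the organizational risk tolerance discussed above; the code simply makes the recommendation step mechanical once the estimates exist.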
Although the task is daunting, the information security professional can be quite effective in achieving the primary goal, which is to reduce risk to the greatest extent possible. This might be accomplished by prioritizing mitigation efforts on the vulnerabilities that represent the highest risk and then diligently eliminating lower-risk vulnerabilities until the risk has been reduced to an acceptable level.

Several varieties of risk assessment are available to the information security professional, each with unique qualities, timing, and cost. In addition, risk assessments can be scoped to fit both the organization's need to address risk and the budget available to do so. The lexicon and standards of risk assessments vary greatly; while this provides a great deal of flexibility, it also adds frustration when trying to scope an evaluation and determine the associated cost. Listed below are several of the most common types of risk assessments.


QUALITATIVE RISK ASSESSMENT

A qualitative risk assessment is subjective, based on best practices and the experience of the professional performing it. Generally, its findings result in a list of vulnerabilities with a relative ranking of risk (low, medium, or high). Standards exist for some specific industries, as listed in Exhibit 21-1; however, qualitative risk assessments tend to be open and flexible, giving the evaluator a great deal of latitude in determining the scope of the evaluation. Given that each IT environment potentially represents a unique combination of threats, vulnerabilities, and safeguards, this flexibility is helpful in obtaining quick, cost-effective, and meaningful results. Because of this flexibility, the scope and cost of a qualitative risk assessment can vary greatly, and evaluators are therefore able to scope evaluations to fit an available budget.

QUANTITATIVE RISK ASSESSMENT

A quantitative risk assessment follows many of the same methodologies as a qualitative risk assessment, with the added task of determining the cost associated with the occurrence of a given vulnerability or group of vulnerabilities. These costs are calculated from asset value, threat frequency, threat exposure factors, safeguard effectiveness, safeguard cost, and uncertainty calculations. This is a highly effective methodology for communicating risk to an audience that prefers to interpret risk in terms of cost. For example, an information systems security officer of a large oil company who wants to increase the department's information security budget must present the proposed budget to the board of directors for approval.
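The cost elements above feed the standard quantitative measures used in the security literature: single loss expectancy (SLE, asset value times exposure factor) and annualized loss expectancy (ALE, SLE times the annualized rate of occurrence). A minimal sketch with hypothetical figures; the text itself does not name these formulas:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: expected cost of one occurrence of a threat, computed as
    asset value times the fraction of the asset exposed to loss."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annualized_rate_of_occurrence):
    """ALE: expected yearly loss, the figure typically used to justify
    safeguard spending to a cost-oriented audience such as a board."""
    return sle * annualized_rate_of_occurrence

# Hypothetical figures: a $2,000,000 asset, 25 percent damaged per
# incident, with an incident expected once every two years.
sle = single_loss_expectancy(2_000_000, 0.25)  # 500,000
ale = annualized_loss_expectancy(sle, 0.5)     # 250,000
```

Expressed this way, a proposed safeguard is justified whenever its annual cost is less than the reduction in ALE it delivers.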
The best way for this professional to communicate the need for additional funding to improve safeguards, and the associated budget increase, is to report the cost of the risk in terms with which the board members are comfortable. In this particular case, the members of the board are very familiar with financial terms, so the expression of risk as financial cost provides a compelling case for action. For such an audience, a budget increase is much more likely to be approved if the presenter indicates that not increasing the budget has a high likelihood of resulting in a "two billion dollar loss of revenue" rather than that "the risk represents a high operational cost." Although the risk represented is the same, the ability to communicate it in financial terms is very compelling.

A quantitative risk assessment requires a professional, or team of professionals, of exceptional skill to obtain meaningful and accurate results. They must be well seasoned in performing both qualitative and quantitative risk assessments, as the old GIGO (garbage-in, garbage-out) rule applies: if the persons performing the quantitative risk assessment do not properly estimate asset costs and loss frequencies, the assessment will yield meaningless results. In addition to requiring a more capable professional, a quantitative approach necessitates the use of a risk assessment tool such as RiskWatch or CORA (Cost of Risk Analysis). The advanced skills required and the use of a quantitative risk assessment tool significantly increase the cost above that of a qualitative risk assessment; for many organizations, a qualitative risk assessment is more than adequate to identify risk for appropriate mitigation. A word of caution when using a quantitative approach: much as statistics are used in politics to influence an audience's opinion, the cost information that results from a quantitative risk assessment could be manipulated to lead an audience to a variety of conclusions.

INFORMATION TECHNOLOGY AUDIT

IT audits are primarily performed by external entities and internal audit departments charged with determining the effectiveness of the security posture of an IT environment and, in the case of a financial statement audit, the reliability (integrity) of the data contained within the system. They essentially focus on the adequacy of, and compliance with, existing policies, procedures, technical baseline controls, and guidelines. The primary purpose of an IT audit is therefore to report the condition of the system, not to improve security; however, IT auditors are usually more than willing to share their findings and recommendations with the IT department. In addition, IT auditors are required to document their work in sufficient detail to permit another competent IT auditor to perform the same audit procedure (test) and reach the same conclusion. This level of documentation is time-consuming and therefore usually affects the depth and breadth of the evaluation.
Thus, IT audits may not be as technically deep in scope as a non-audit type of evaluation.

TECHNICAL VULNERABILITY ASSESSMENT

A technical vulnerability assessment is a type of risk assessment that focuses primarily on the technical safeguards at the platform and network levels; it does not include an assessment of physical, environmental, configuration management, and management safeguards.

NETWORK TECHNICAL VULNERABILITY ASSESSMENT

The safeguards employed at the network level support all systems contained within its environment; these collective systems are sometimes referred to as a general support system. Most networks are connected to the Internet, which requires protection from exterior threats. Accordingly, a network technical vulnerability assessment should include an evaluation of the safeguards implemented to protect the network and its infrastructure, including the routers, load balancers, firewalls, virtual private networks, public key infrastructure, single sign-on solutions, network-based operating systems (e.g., Windows 2000), and network protocols (e.g., TCP/IP). Several automated tools can be used to assist the vulnerability assessment team; see Exhibit 21-4 for a list of some of the more common ones.

Exhibit 21-4. Automated technical vulnerability assessment tools.
• Nessus. Free system security scanning software that remotely evaluates security within a given network and determines the vulnerabilities an attacker might exploit.
• ISS Internet Scanner. A security scanner that provides comprehensive network vulnerability assessment for measuring online security risk; it performs scheduled and selective probes of communication services, operating systems, applications, and routers to uncover and report system vulnerabilities.
• Shadow Security Scanner. Identifies known and unknown vulnerabilities, suggests fixes for identified vulnerabilities, and reports possible security holes within a network's Internet, intranet, and extranet environments. It employs an engine designed to probe the network as a hacker or network security analyst attempting to penetrate it would.
• NMAP. NMAP (Network Mapper) is an open-source utility for network exploration and security auditing. It rapidly scans large networks using raw IP packets to determine which hosts are available on the network, which services (ports) they offer, which operating system (and OS version) they run, and which packet filters or firewalls are in use. NMAP is free software available under the terms of the GNU GPL.
• Snort. A packet-sniffing utility that monitors, displays, and logs network traffic.
• L0phtCrack. Cracks captured password files by comparing passwords against dictionaries of words; where users have devised unique passwords, it resorts to brute-force guessing to reveal them.

PLATFORM TECHNICAL VULNERABILITY ASSESSMENT

The safeguards employed at the platform level support the integrity, availability, and confidentiality of the data contained within the platform. A platform is defined as the combination of hardware, operating system software, communications software, security software, database management system, and application security that supports a set of data (see Exhibit 21-5 for an example of a mainframe platform diagram). The combination of these distinctly separate components presents a unique set of risks, so each platform must be evaluated on that unique combination. Unless the evaluator is able to examine the safeguards at the platform level, the integrity of the data cannot be properly and completely assessed and therefore cannot be considered reliable. Several automated tools can be used by the vulnerability assessment team.
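Scanners such as NMAP (Exhibit 21-4) work by probing hosts for listening services. The core idea can be sketched as a simple TCP connect scan; a real scanner adds raw-packet techniques, OS fingerprinting, and timing controls, so this is only an illustration:

```python
import socket
from contextlib import closing

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP
    connection. connect_ex() returns 0 when the connection succeeds."""
    open_ports = []
    for port in ports:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

As with any of the tools above, such probes should be run only against systems the assessment team is explicitly authorized to test.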


Exhibit 21-5. Mainframe platform diagram (layers, top to bottom): Data; Application; Database System; Security Software (RACF); Operating System (OS/390); Hardware (IBM mainframe).
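The stacked platform of Exhibit 21-5 can be modeled as a simple checklist to ensure that every layer's safeguards are evaluated. The layer names follow the exhibit; the checklist structure itself is an illustrative assumption:

```python
# Layers of the example mainframe platform, top to bottom (Exhibit 21-5).
MAINFRAME_PLATFORM = [
    "Data",
    "Application",
    "Database System",
    "Security Software (RACF)",
    "Operating System (OS/390)",
    "Hardware (IBM mainframe)",
]

def unassessed_layers(platform, assessed):
    """Layers whose safeguards have not yet been evaluated; per the text,
    data integrity cannot be certified until this list is empty."""
    assessed = set(assessed)
    return [layer for layer in platform if layer not in assessed]
```

A distributed platform would carry a different layer list, but the same completeness check applies.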

PENETRATION TESTING

A penetration test, also known as a pen test, is a type of risk assessment, but its purpose is quite different. A pen test is designed to test the security of a system after an organization has implemented all designed safeguards, performed a risk assessment, implemented all recommended improvements, and implemented all new recommended safeguards. It is the final test of whether enough layered safeguards have been implemented to prevent a successful attack against the system. This form of ethical hacking attempts to find vulnerabilities that were overlooked in prior risk assessments. Frequently, a successful penetration results from the penetration team, otherwise known as a tiger team, discovering multiple vulnerabilities that individually are not considered high risk but that, in combination, create a backdoor the team can successfully exploit. A pen test has several potential components that can be selected based on the organization's needs:

• External penetration testing is performed from outside the organization's network, usually from the Internet. The organization can either provide the pen team with its range of IP addresses or ask the evaluators to perform a blind test. Blind tests are more expensive because it takes the penetration team time to discover the organization's IP addresses. While a blind test might seem more realistic, the team will inevitably find the IP addresses, so the blind approach may be considered a waste of time and funds.

AU1518Ch21Frame Page 337 Thursday, November 14, 2002 6:15 PM

• Internal penetration testing is performed within the organization's internal network. The penetration team attempts to gain unauthorized access to sensitive areas of the system. The internal penetration test is valuable, especially in light of the estimate that 80 percent of unauthorized-access incidents are committed by employees.
• Social engineering can be used by the pen testers to extract from the organization's personnel vital information that might be helpful in launching an attack. For instance, a pen tester might drive up to the organization's building, write down the name posted on an empty reserved parking space, and then call the help desk impersonating the absent employee to report a forgotten password and request a reset. Unless the help desk personnel have a way (an employee number, etc.) to verify the caller's identity, they will reset the password, giving the attacker the opportunity to set a new password for the absent employee and gain unauthorized access to the network.
• War dialing tools can be used to automatically dial every phone number in a given exchange in an attempt to identify a line with a modem connected. Once a line with an active modem has been discovered, the penetration team will attempt to gain access to the system.
• Dumpster diving is the practice of searching through trash cans and recycling bins in an attempt to obtain information that will allow the penetration team to gain access to the system.

Penetration testing is the most exciting of the risk assessments because it is an all-out attempt to gain access to the system, and it is the only risk assessment methodology that proves the existence of a vulnerability or series of vulnerabilities. The excitement of penetration testing is also sometimes perpetuated by those who perform it: some pen testers, also known as ethical hackers or "white hats," are retired hackers who at one time were "black hats." Some organizations might be tempted to skip the detailed risk assessment and risk remediation plan and go straight to a penetration test. While pen testing is an enthralling process, the results will be meaningless if the organization has not done its homework beforehand. In all likelihood, a good penetration team will gain access to the systems of an organization that has not gone through the rigors of risk assessment and safeguard improvement.

EVALUATING IDENTIFIED VULNERABILITIES

After the vulnerabilities have been identified through a risk assessment, a vulnerability analysis should be performed to rank each vulnerability according to its risk level:


• Low. The risk of this vulnerability is not considered significant; however, several low-risk vulnerabilities in combination might together constitute a medium or high risk. Recommended safeguards should be reviewed to determine whether they are practical and cost-effective relative to the risk of the vulnerability.
• Medium. This risk is potentially significant. If the vulnerability could be exploited more readily in combination with another vulnerability, the risk could be ranked higher. Corrective action for a medium-risk vulnerability should be taken within a short period of time, after careful consideration of the cost-effectiveness of implementing the recommended safeguard.
• High. The risk of this vulnerability is significant and, if exploited, could have profound effects on the viability of the organization. Immediate corrective action should be taken to mitigate the risk.

ANALYZING PAIRED VULNERABILITIES

In addition to ranking individual vulnerabilities, all of the vulnerabilities should be analyzed to determine whether any combination of them, considered together, represents a higher level of risk. These potentially higher-risk combinations should be documented and action taken to mitigate the risk. This is particularly important for the low-risk items, because a combination of lower-risk items could create the backdoor that permits an attacker to gain access to the system. To determine the relative risk level of the identified vulnerabilities, the information security professional should identify the potential layers of safeguards that mitigate each risk and then determine the residual risk. A residual risk mitigation plan should then be developed to reduce the residual risk to an acceptable level.

CONCLUSION

Unfortunately, security assessments are usually the last action that an IT department initiates as part of its security program.
Other priorities, such as application development, infrastructure building, or computer operations, typically take precedence. Many organizations do not take security past the initial implementation because of the rush to build system functionality — until an IT auditor or a hacker forces them to take security seriously. The "pressures to process" sometimes lead organizations to ignore prudent security design and security assessment, leaving security as an afterthought. In these circumstances, security is not considered a critical element in serving users, and so it is left behind. The reality is that information contained within a system cannot be relied upon as having integrity unless security has been assessed and adequate protection of the data has been provided for the entire time the data has resided on the system.

AU1518Ch21Frame Page 339 Thursday, November 14, 2002 6:15 PM

Evaluating the Security Posture of an IT Environment

Evaluating the security posture of an IT environment is a challenge that involves balancing the risk, frequency of evaluation, and cost. Security that is designed, tested, and implemented based on a strong security strategy and policy will be highly effective and, in the long run, cost-effective. Unfortunately, there are no clear-cut answers regarding how often a given IT environment should be evaluated. The answer may be found by defining how long the organization may viably operate without the systems. Such an answer will define the level of risk the organization is willing, or is not willing, to accept. A security posture that is built with the knowledge of this threshold of risk can lead to a system of safeguards that is both risk-based and cost-effective.

ABOUT THE AUTHOR

Brian Schultz, CISSP, CISA, is chairman of the board of INTEGRITY, a nonprofit organization dedicated to assisting the federal government with implementation of information security solutions. An expert in the field of information security assessment, Mr. Schultz has, throughout his career, assessed the security of numerous private and public organizations. He is a founding member of the Northern Virginia chapter of the Information Systems Security Association (ISSA).

Copyright 2003. INTEGRITY. All Rights Reserved. Used with permission.




Chapter 22

Cyber-Risk Management: Technical and Insurance Controls for Enterprise-Level Security

Carol A. Siegel, CISSP
Ty R. Sagalow
Paul Serritella

Traditional approaches to security architecture and design have attempted to achieve the goal of eliminating risk factors — the complete prevention of system compromise through technical and procedural means. Insurance-based solutions to risk long ago admitted that a complete elimination of risk is impossible and, instead, have focused more on reducing the impact of harm through financial avenues — providing policies that indemnify the policyholder in the event of harm. It is becoming increasingly clear that early models of computer security, which focused exclusively on the risk-elimination model, are not sufficient in the increasingly complex world of the Internet. There is simply no magic bullet for computer security; no amount of time or money can create a perfectly hardened system. However, insurance cannot stand alone as a risk mitigation tool — the front line of defense must always be a complete information security program and the implementation of security tools and products. It is only through leveraging both approaches in a complementary fashion that an organization can reach the greatest degree of risk

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC



reduction and control. Thus, today, the optimal model requires a program of understanding, mitigating, and transferring risk through integrating technology, processes, and insurance — that is, a risk management approach. The risk management approach starts with a complete understanding of the risk factors facing an organization. Risk assessments allow security teams to design appropriate control systems and leverage the necessary technical tools; they also are required for insurance companies to properly draft and price policies for the remediation of harm. Complete risk assessments must take into account not only the known risks to a system but also the possible exploits that may be developed in the future. The completeness of cyber-risk management and assessment is the backbone of any secure computing environment. After a risk assessment and mitigation effort has been completed, insurance needs to be procured from a specialized insurance carrier of top financial strength and global reach. The purpose of the insurance is threefold: (1) assistance in the evaluation of the risk through products and services available from the insurer, (2) transfer of the financial costs of a successful computer attack or threat to the carrier, and (3) the provision of important post-incident support funds to reduce the potential reputation damage after an attack.

THE RISK MANAGEMENT APPROACH

As depicted in Exhibit 22-1, risk management requires a continuous cycle of assessment, mitigation, insurance, detection, and remediation.

Assess

An assessment means conducting a comprehensive evaluation of the security in an organization. It usually covers diverse aspects, ranging from physical security to network vulnerabilities. Assessments should include penetration testing of key enterprise systems and interviews with security and IT management staff.
Because there are many different assessment formats, an enterprise should use a method that conforms to a recognized standard (e.g., ISO 17799, InfoSec — see Exhibit 22-2). Regardless of the model used, however, the assessment should evaluate people, processes, technology, and financial management. The completed assessment should then be used to determine what technology and processes should be employed to mitigate the risks it exposes. Assessments should be performed periodically to identify new vulnerabilities and to develop a baseline for future analysis, creating consistency and objectivity.
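The baseline idea above — each periodic assessment is compared against the last to show what is new and what has been fixed — can be sketched minimally. The finding names are hypothetical placeholders, not drawn from any standard.

```python
# Sketch: comparing a new assessment against a stored baseline to surface
# new and resolved findings, supporting the consistency/objectivity goal
# of periodic assessment. Finding identifiers are hypothetical.

def diff_assessments(baseline, current):
    """Return (new_findings, resolved_findings) between two assessments."""
    baseline, current = set(baseline), set(current)
    return sorted(current - baseline), sorted(baseline - current)

baseline_2001 = {"weak-passwords", "open-smtp-relay", "unpatched-web-server"}
current_2002 = {"weak-passwords", "missing-ids", "unpatched-web-server"}

new, resolved = diff_assessments(baseline_2001, current_2002)
print(new)       # ['missing-ids']
print(resolved)  # ['open-smtp-relay']
```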


Cyber-Risk Management

Exhibit 22-1. Risk management cycle.

• Assess: evaluate the organization’s security framework, including penetration testing and interviews with key personnel; use standard methodology and guidelines for assessment (e.g., ISO 17799, InfoSec, etc.).
• Mitigate: create and implement policies and procedures that ensure high levels of security; implement financial risk mitigation and transfer mechanisms; review periodically to ensure maintenance of the security posture.
• Insure: choose the right insurance carrier based on expertise, financial strength, and global reach; choose the right policy, including both first-party and third-party coverage; implement insurance as a risk transfer solution alongside risk-evaluation-based security solutions; work with the carrier to determine potential loss and business impact due to a security breach.
• Detect: monitor assets to discover any unusual activity; implement a 24x7 monitoring system that includes intrusion detection, antivirus, etc., to immediately identify and stop any potential intrusion; analyze logs to determine any past events that were missed.
• Remediate: understand the report that the assessment yields; determine areas of vulnerability that need immediate attention; establish a recurring procedure to address these vulnerabilities; recover lost data from backup systems; execute at an alternative hot site until the primary site is available.

Mitigate

Mitigation is the series of actions taken to reduce risk, minimize chances of an incident occurring, or limit the impact of any breach that does occur. Mitigation includes creating and implementing policies that ensure high levels of security. Security policies, once created, require procedures that ensure compliance. Mitigation also includes determining and using the right set of technologies to address the threats that the organization faces and implementing financial risk mitigation and transfer mechanisms.

Insure

Insurance is a key risk transfer mechanism that allows organizations to be protected financially in the event of loss or damage. A quality insurance program can also provide superior loss prevention and analysis recommendations, often providing premium discounts for the purchase of certain security products and services from companies known to the insurer that dovetail into a company’s own risk assessment program. Initially,


Exhibit 22-2. The 11 domains of risk assessment.

• Security Policy: During the assessment, the existence and quality of the organization’s security policy are evaluated. Security policies should establish guidelines, standards, and procedures to be followed by the entire organization. These need to be updated frequently.
• Organizational Security: One of the key areas any assessment looks at is the organizational aspect of security. This means ensuring that adequate staff has been assigned to security functions, that hierarchies are in place for security-related issues, and that people with the right skill sets and job responsibilities are in place.
• Asset Classification and Control: Any business will be impacted if its software and hardware assets are compromised. In evaluating the security of the organization, the existence of an inventory management system and a risk classification system has to be verified.
• Personnel Security: The hiring process of the organization needs to be evaluated to ensure that adequate background checks and legal safeguards are in place. Also, employee awareness of security and usage policies should be determined.
• Physical and Environmental Security: Ease of access to the physical premises needs to be tested, making sure that adequate controls are in place to allow access only to authorized personnel. Also, the availability of redundant power supplies and other essential services has to be ensured.
• Communication and Operations Management: Operational procedures need to be verified to ensure that information processing occurs in a safe and protected manner. These should cover standard operating procedures for routine tasks as well as procedures for change control for software, hardware, and communication assets.
• Access Control: This domain demands that access to systems and data be determined by a set of criteria based on business requirement, job responsibility, and time period. Access control needs to be constantly verified to ensure that access is available only on a need-to-know basis with strong justification.
• Systems Development and Maintenance: If a company is involved in development activity, assess whether security is a key consideration at all stages of the development life cycle.
• Business Continuity Management: Determining the existence of a business continuity plan that minimizes or eliminates the impact of business interruption is a part of the assessment.
• Compliance: The assessment has to determine if the organization is in compliance with all regulatory, contractual, and legal requirements.
• Financial Considerations: The assessment should include a review to determine if adequate safeguards have been implemented to ensure that any security breach results in minimal financial impact. This is achieved through risk transfer mechanisms, primarily insurance that covers the specific needs of the organization.
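As a rough illustration, the assessment domains of Exhibit 22-2 can be rolled up into a single posture summary. The 1-to-5 scoring scale, the weakness threshold, and the roll-up rule below are illustrative assumptions for this sketch only; they are not part of ISO 17799 or any other standard named in this chapter.

```python
# Sketch: rolling up per-domain assessment scores (1 = poor, 5 = strong)
# into an overall posture rating plus a list of domains needing attention.
# Scale and threshold are illustrative, not standardized.

DOMAINS = [
    "Security Policy", "Organizational Security",
    "Asset Classification and Control", "Personnel Security",
    "Physical and Environmental Security",
    "Communication and Operations Management", "Access Control",
    "Systems Development and Maintenance", "Business Continuity Management",
    "Compliance", "Financial Considerations",
]

def posture(scores):
    """Average the domain scores and flag domains scoring 2 or below."""
    avg = sum(scores[d] for d in DOMAINS) / len(DOMAINS)
    weak = [d for d in DOMAINS if scores[d] <= 2]
    return round(avg, 2), weak

scores = {d: 4 for d in DOMAINS}
scores["Access Control"] = 2
print(posture(scores))  # (3.82, ['Access Control'])
```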

determining potential loss and business impact due to a security breach allows organizations to choose the right policy for their specific needs. The insurance component then complements the technical solutions, policies, and procedures. A vital step is choosing the right insurance carrier by


seeking companies with specific underwriting and claims units with expertise in the area of information security, top financial ratings, and global reach. The right carrier should offer a suite of policies from which companies can choose to provide adequate coverage.

Detect

Detection implies constant monitoring of assets to discover any unusual activity. Usually this is done by implementing a 24/7 monitoring system that includes intrusion detection to immediately identify and stop any potential intrusion. Additionally, anti-virus solutions allow companies to detect new viruses or worms as they appear. Detection also includes analyzing logs to determine any past events that were missed and specifying actions to prevent future misses. Part of detection is the appointment of a team in charge of incident response.

Remediate

Remediation is the tactical response to vulnerabilities that assessments discover. This involves understanding the report that the assessment yields and prioritizing the areas of vulnerability that need immediate attention. The right tactic and solution for the most efficient closing of these holes must be chosen and implemented. Remediation should follow an established recurring procedure to address these vulnerabilities periodically.

In the cycle above, most of the phases focus on the assessment and implementation of technical controls. However, no amount of time or money spent on technology will eliminate risk. Therefore, insurance plays a key role in any risk management strategy. When properly placed, the insurance policy will transfer the financial risk of unavoidable security exposures from the balance sheet of the company to that of the insurer. As part of this basic control, companies need to have methods of detection (such as intrusion detection systems, or IDS) in place to catch the cyber-attack when it takes place.
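The log-analysis side of detection can be sketched in miniature. The log format, field layout, and failed-login threshold below are hypothetical; a production IDS or SIEM would consume far richer event streams.

```python
# Sketch: a minimal log-analysis pass of the kind a detection program might
# run, flagging source addresses with repeated failed logins. Log format
# and threshold are hypothetical.
from collections import Counter

THRESHOLD = 3  # failed attempts before an address is flagged

def flag_suspects(log_lines):
    failures = Counter()
    for line in log_lines:
        # assumed format: "<timestamp> <src-ip> LOGIN <ok|fail>"
        _, src, _, status = line.split()
        if status == "fail":
            failures[src] += 1
    return sorted(ip for ip, n in failures.items() if n >= THRESHOLD)

log = [
    "09:01:02 10.0.0.5 LOGIN fail",
    "09:01:04 10.0.0.5 LOGIN fail",
    "09:01:07 10.0.0.5 LOGIN fail",
    "09:02:11 10.0.0.9 LOGIN ok",
]
print(flag_suspects(log))  # ['10.0.0.5']
```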
Post-incident, the insurer will then remediate any damage done, including financial and reputation impacts. The remediation function includes recovery of data, insurance recoveries, and potential claims against third parties. Finally, the whole process starts again with an assessment of the company’s vulnerabilities, including an understanding of any previously unknown threat.

TYPES OF SECURITY RISKS

The CSI 2001 Computer Crime and Security Survey2 confirms that the threat from computer crime and other information security breaches continues unabated and that the financial toll is mounting. According to the survey, 85 percent of respondents had detected computer security breaches within the past 12 months, and the total amount of financial loss


reported by those who could quantify the loss amounted to $377,828,700 — that is, over $2 million per event. One logical method for categorizing financial loss is to separate loss into three general areas of risk:

1. First-party financial risk: direct financial loss not arising from a third-party claim (called first-party security risks)
2. Third-party financial risk: a company’s legal liabilities to others (called third-party security risks)
3. Reputation risk: the less quantifiable damages, such as those arising from a loss of reputation and brand identity

These risks, in turn, arise from particular cyber-activities. Cyber-activities can include a Web site presence, e-mail, Internet professional services such as Web design or hosting, network data storage, and E-commerce (i.e., the purchase or sale of goods and services over the Internet). First-party security risks include financial loss arising from damage, destruction, or corruption of a company’s information assets — that is, data. Information assets — whether in the form of customer lists and privacy information, business strategies, competitor information, product formulas, or other trade secrets vital to the success of a business — are the real assets of the 21st century. Their proper protection and quantification are key to a successful company. Malicious code transmissions and computer viruses — whether launched by a disgruntled employee, overzealous competitor, cyber-criminal, or prankster — can result in enormous costs of recollection and recovery. A second type of first-party security risk is the risk of revenue loss arising from a successful denial-of-service (DoS) attack. According to the Yankee Group, in February 2000 a distributed DoS attack was launched against some of the most sophisticated Web sites, including Yahoo, Buy.com, and CNN, resulting in $1.2 billion in lost revenue and related damages.
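Loss figures like these are the raw material of risk quantification. One common formula in risk management (a general convention, not something specific to the survey above) is annualized loss expectancy: ALE = SLE × ARO, the single-loss expectancy times the annualized rate of occurrence. The dollar amounts and rate below are purely illustrative.

```python
# Annualized loss expectancy (ALE) = single-loss expectancy (SLE) x
# annualized rate of occurrence (ARO). Figures here are illustrative,
# not taken from the CSI survey.

def ale(sle, aro):
    return sle * aro

# e.g., a DoS outage expected to cost $2,000,000 per event,
# expected roughly once every four years (ARO = 0.25):
print(ale(2_000_000, 0.25))  # 500000.0
```

An ALE of this kind is also the natural quantity to compare against an insurance premium or the cost of a proposed safeguard.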
Finally, first-party security risk can arise from the theft of trade secrets. Third-party security risk can manifest itself in a number of different types of legal liability claims against a company, its directors, officers, or employees. Examples of these risks can arise from the company’s presence on the Web, its rendering of professional services, the transmission of malicious code or a DoS attack (whether or not intentional), and theft of the company’s customer information. The very content of a company’s Web site can result in allegations of copyright and trademark infringement, libel, or invasion of privacy claims. The claims need not even arise from the visual part of a Web page but can, and often do, arise out of the content of a site’s metatags — the invisible part of a Web page used by search engines.


If a company renders Internet-related professional services to others, this too can be a source of liability. Customers or others who allege that such services, such as Web design or hosting, were rendered in a negligent manner or in violation of a contractual agreement may find relief in the court system. Third-party claims can directly arise from a failure of security. A company that negligently or through the actions of a disgruntled employee transmits a computer virus to its customers or other e-mail recipients may be open to allegations of negligent security practices. The accidental transmission of a DoS attack can pose similar legal liabilities. In addition, if a company has made itself legally obligated to keep its Web site open on a 24/7 basis to its customers, a DoS attack shutting down the Web site could result in claims by its customers. A wise legal department will make sure that the company’s customer agreements specifically permit the company to shut down its Web site for any reason at any time without incurring legal liability. Other potential third-party claims can arise from the theft of customer information such as credit card information, financial information, health information, or other personal data. For example, theft of credit card information could result in a variety of potential lawsuits, whether from the card-issuing companies that then must undergo the expense of reissuing, the cardholders themselves, or even the Web merchants who later become the victims of the fraudulent use of the stolen credit cards. As discussed later, certain industries such as financial institutions and healthcare companies have specific regulatory obligations to guard their customer data. Directors and officers (D&Os) face unique, and potentially personal, liabilities arising out of their fiduciary duties.
In addition to case law or common-law obligations, D&Os can have obligations under various statutory laws such as the Securities Act of 1933 and the Securities Exchange Act of 1934. Certain industries may also have specific statutory obligations, such as those imposed on financial institutions under the Gramm-Leach-Bliley Act (GLBA), discussed in detail later. Perhaps the most difficult and yet one of the most important risks to understand is the intangible risk of damage to the company’s reputation. Will customers give a company their credit card numbers once they read in the paper that the company’s database of credit card numbers was violated by hackers? Will top employees remain at a company so damaged? And what will be the reaction of the company’s shareholders? Again, the best way to analyze reputation risk is to attempt to quantify it. What is the expected loss of future business revenue? What is the expected loss of market capitalization? Can shareholder class or derivative actions be foreseen? And, if so, what would the expected financial cost of those actions be in terms of legal fees and potential settlement amounts?


Exhibit 22-3. First- and third-party risks.

• Web site presence. First-party risk: damage or theft of data (assumes database is connected to network) via hacking. Third-party risk: allegations of trademark, copyright, libel, invasion of privacy, and other Web content liabilities.
• E-mail. First-party risk: damage or theft of data (assumes database is connected to network) via computer virus; shutdown of network via DoS attack. Third-party risk: transmission of malicious code (e.g., NIMDA) or DoS due to negligent network security; DoS customer claims if site is shut down due to DoS attack.
• E-commerce. First-party risk: loss of revenue due to successful DoS attack. Third-party risk: customer suits.
• Internet professional services. Third-party risk: customer suits alleging negligent performance of professional services.
• Any activity. Third-party risk: claims against directors and officers for mismanagement.

The risks just discussed are summarized in Exhibit 22-3.

Threats

The risks defined above do not exist in a vacuum. They are the product of specific threats, operating in an environment featuring specific vulnerabilities that allow those threats to proceed uninhibited. A threat may be any person or object, from a disgruntled employee to an act of nature, that may lead to damage or value loss for an enterprise. While insurance may be used to minimize the costs of a destructive event, it is not a substitute for controls on the threats themselves. Threats may arise from external or internal entities and may be the product of intentional or unintentional action. External entities comprise the well-known sources — hackers, virus writers — as well as less obvious ones such as government regulators or law enforcement entities. Attackers may attempt to penetrate IT systems through various means, including exploits at the system, server, or application layers. Whether the intent is to interrupt business operations or to directly acquire confidential data or access to trusted systems, the cost in system downtime, lost revenue, and system repair and redesign can be crippling to any enterprise. The collapse of the British Internet service provider (ISP) Cloud-Nine in January 2002, due to irreparable damage caused by distributed DoS attacks launched against its infrastructure, is only the most recent example of the enterprise costs of cyber-attacks.3


Exhibit 22-4. Enterprise resource threats. [Diagram: enterprise resources at the center, surrounded by layers of procedural controls, technical controls, and financial controls, with internal threats, external threats, indirect threats, and government/regulatory threats arrayed outside the control layers.]

Viruses and other malicious code frequently use the same exploits as human attackers to gain access to systems. However, as viruses can replicate and spread themselves without human intervention, they have the potential to cause widespread damage across an internal network or the Internet as a whole. Risks may arise from non-human factors as well. For example, system outages through failures at the ISP level, power outages, or natural disasters may create the same loss of service and revenue as attackers conducting DoS attacks. Therefore, technical controls should be put in place to minimize those risks. These risks are diagrammed in Exhibit 22-4. Threats that originate from within an organization can be particularly difficult to track. These may include threats from disgruntled employees (or ex-employees) as well as mistakes made by well-meaning employees. Many standard technical controls — firewalls, anti-virus software, or intrusion detection — assume that the internal users are working actively to support the security infrastructure. However, such controls are hardly sufficient against insiders working actively to subvert a system. Other types of risks — for example, first-party risks of intellectual property violations —


may be created by internal entities without their knowledge. Exhibit 22-5 describes various threats by type. As noted, threats combine motive, access, and opportunity — outsiders must have a desire to cause damage as well as a means of affecting the target system. While an organization’s exposure to risk can never be completely eliminated, all steps should be taken to minimize exposure and limit the scope of damage. Vulnerabilities may take a number of forms. Technical vulnerabilities include exploits against systems at the operating system, network, or application level. Given the complexity and scope of many commercial applications, vulnerabilities within code become increasingly difficult to detect and eradicate during the testing and quality assurance (QA) processes. Examples range from the original Internet Worm to recently documented vulnerabilities in commercial instant messaging clients and Web servers. Such weaknesses are an increasing risk in today’s highly interconnected environments. Weaknesses within operating procedures may also expose an enterprise to risk not controlled by technology; proper change management processes, security administration processes, and human resources controls and oversight, for example, are necessary. Such procedural weaknesses can be especially damaging in highly regulated environments, such as financial services or healthcare, in which regulatory agencies require complete sets of documentation as part of periodic auditing requirements.

GLBA/HIPAA

Title V of the Gramm-Leach-Bliley Act (GLBA) has imposed new requirements on the ways in which financial services companies handle consumer data. The primary focus of Title V, and the area that has received the most attention, is the sharing of personal data among organizations and their unaffiliated business partners and agencies.
Consumers must be given notice of the ways in which their data is used and must be given notice of their right to opt out of any data-sharing plan. However, Title V also requires financial services organizations to provide adequate security for systems that handle customer data. Security guidelines require the creation and documentation of detailed data security programs addressing physical and logical access to data, risk assessment and mitigation programs, and employee training in the new security controls. Third-party contractors of financial services firms are also bound to comply with the GLBA regulations. On February 1, 2001, the Department of the Treasury, Federal Reserve System, and Federal Deposit Insurance Corporation issued interagency regulations, in part requiring financial institutions to:

Exhibit 22-5. Threat matrix.

• System penetration (external source). External threat. Description: attempts by external parties to penetrate corporate resources to modify or delete data or application systems. Security risk: moderate. Controls: strong authentication; strong access control; ongoing system support and tracking.
• Regulatory action. External threat. Description: regulatory action or investigation based on corporate noncompliance with privacy and security guidelines. Security risk: low to moderate. Controls: data protection; risk assessment and management programs; user training; contractual controls.
• Virus penetration. External threat. Description: malicious code designed to self-replicate. Security risk: moderate. Controls: technological (anti-virus) controls.
• Power loss or connectivity loss. External threat. Description: loss of Internet connectivity, power, or cooling system; may result in large-scale system outages. Security risk: low. Controls: redundant power and connectivity; contractual controls with ISP/hosting facilities.
• Intellectual property violation. Internal threat. Description: illicit use of third-party intellectual property (images, text, code) without appropriate license arrangements. Security risk: low to moderate. Controls: procedural and personnel controls; financial controls mitigating risk.
• System penetration (internal source). Internal threat. Description: malicious insiders attempting to access restricted data. Security risk: moderate. Controls: strong authentication; strong access control; use of internal firewalls to segregate critical systems.


• Develop and execute an information security program.
• Conduct regular tests of key controls of the information security program. These tests should be conducted by an independent third party or by staff independent of those who develop or maintain the program.
• Protect against destruction, loss, or damage to customer information, including encrypting customer information while in transit or in storage on networks.
• Involve the board of directors, or an appropriate committee of the board, to oversee and execute all of the above.

Because the responsibility for developing specific guidelines for compliance was delegated to the various federal and state agencies that oversee commercial and financial services (and some are still in the process of being issued), it is possible that different guidelines for GLBA compliance will develop between different states and different financial services industries (banking, investments, insurance, etc.). The Health Insurance Portability and Accountability Act (HIPAA) will force similar controls on data privacy and security within the healthcare industry. As part of HIPAA regulations, healthcare providers, health plans, and clearinghouses are responsible for protecting the security of client health information. As with GLBA, customer medical data is subject to controls on distribution and usage, and controls must be established to protect the privacy of customer data. Data must also be classified according to a standard classification system to allow greater portability of health data between providers and health plans. Specific guidelines on security controls for medical information have not yet been issued. HIPAA regulations are enforced through the Department of Health and Human Services. As GLBA and HIPAA regulations are finalized and enforced, regulators will be auditing those organizations that handle medical or financial data to confirm compliance with their security programs.
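Title V’s opt-out requirement, described earlier, is ultimately a data-handling control: customer records may only flow to an unaffiliated partner if the customer has not opted out. A minimal sketch, with hypothetical record fields:

```python
# Sketch: honoring GLBA Title V opt-out preferences before sharing
# customer data with an unaffiliated partner. The record structure and
# field names are hypothetical.

def shareable_records(customers):
    """Return only the records of customers who have not opted out."""
    return [c for c in customers if not c["opted_out"]]

customers = [
    {"name": "A. Jones", "opted_out": False},
    {"name": "B. Smith", "opted_out": True},
]
print([c["name"] for c in shareable_records(customers)])  # ['A. Jones']
```

In practice this filter would sit inside a larger data governance layer, alongside the access, audit, and training controls the regulations call for.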
Failure to comply can be classified as an unfair trade practice and may result in fines or criminal action. Furthermore, firms that do not comply with privacy regulations may leave themselves vulnerable to class-action lawsuits from clients or third-party partners. These regulations represent an entirely new type of exposure for certain organizations as they increase the scope of their IT operations.

Cyber-Terrorism

The potential for cyber-terrorism deserves special mention. After the attacks of September 11, 2001, it is clear that no area of the world is protected from a potential terrorist act. The Internet plays a critical role in the economic stability of our national infrastructure. Financial transactions, the running of utilities and manufacturing plants, and much more depend upon a working Internet. Fortunately, companies are coming together in newly


formed entities such as ISACs (Information Sharing and Analysis Centers) to determine their interdependency vulnerabilities and plan for the worst. It is also fortunate that the weapons used by a cyber-terrorist do not differ much from those of a cyber-criminal or other hacker. Thus, the same risk management formula discussed above should be applied to the risk of cyber-terrorism.

INSURANCE FOR CYBER-RISKS

Insurance, when properly placed, can serve two important purposes. First, it can provide positive reinforcement for good behavior by adjusting the availability and affordability of insurance depending upon the quality of an insured’s Internet security program. It can also condition the continuation of such insurance on the maintenance of that quality. Second, insurance will transfer the financial risk of a covered event from a company’s balance sheet to that of the insurer. The logical first step in evaluating potential insurance solutions is to review the company’s traditional insurance program, including its property (including business interruption) insurance, comprehensive general liability (CGL), directors and officers insurance, professional liability insurance, and crime policies. These policies should be examined in connection with a company’s particular risks (see above) to determine whether any gaps exist. Given that these policies were written for a world that no longer exists, it is not surprising that traditional insurance policies are almost always found to be inadequate to address today’s cyber-needs. This is due not to any defect in these time-honored policies but simply to the fact that, with the advent of new-economy risks, there comes a need for specialized insurance to meet those new risks. One of the main reasons why traditional policies such as property and CGL do not provide much coverage for cyber-risks is their approach that property means tangible property and not data.
Property policies also focus on physical perils such as fire and windstorm. Business interruption insurance is sold as part of a property policy and covers, for example, lost revenue when your business burns down in a fire. It will not, however, cover E-revenue loss due to a DoS attack. Even computer crime policies usually do not cover loss other than for money, securities, and other tangible property. This is not to say that traditional insurance can never be helpful with respect to cyber-risks. A mismanagement claim against a company's directors and officers arising from cyber-events will generally be covered under the company's directors' and officers' insurance policy to the same extent as a non-cyber claim. For companies that render professional services to others for a fee, such as financial institutions, those that fail to reasonably render those services due to a cyber-risk may find customer claims to be covered under their professional liability policy. (Internet professional companies should still seek to purchase a specific Internet professional liability insurance policy.)

Specific Cyber-Liability and Property Loss Policies

The inquiry detailed above illustrates the extreme dangers of relying upon traditional insurance policies to provide broad coverage for 21st-century cyber-risks. Regrettably, at present there are only a few specific policies providing express coverage for all the risks of cyberspace listed at the beginning of this chapter. Buyers should be counseled against purchasing an insurance product simply because it has the word Internet or cyber in its name. So-called Internet insurance policies vary widely, with some providing relatively little real coverage. A properly crafted Internet risk program should contain multiple products within a suite concept, permitting a company to choose which risks to cover depending upon where it is in its Internet maturity curve.4 A suite should provide at least six areas of coverage, as shown in Exhibit 22-6.

SECURITY MANAGEMENT PRACTICES

Exhibit 22-6. First- and third-party coverage.

  Media E&O
    Third-party coverage: Web content liability; professional liability
  Network security
    First-party coverage: Cyber-attack caused damage, destruction, and corruption of data; theft of trade secrets; or E-revenue business interruption
    Third-party coverage: Transmission of a computer virus or DoS liability; theft of customer information liability; DoS customer liability
  Cyber extortion
    First-party coverage: Payment of cyber-investigator
    Third-party coverage: Payment of extortion amount where appropriate
  Reputation
    First-party coverage: Payment of public relations fees up to $50,000
  Criminal reward
    First-party coverage: Payment of criminal reward fund up to $50,000

These areas of coverage may be summarized as follows:

• Web content liability provides coverage for claims arising out of the content of your Web site (including the invisible metatag content), such as libel, slander, copyright, and trademark infringement.
• Internet professional liability provides coverage for claims arising out of the performance of professional services. Coverage usually includes both Web publishing activities and pure Internet services such as acting as an ISP, host, or Web designer. Any professional service conducted over the Internet can usually be added to the policy.
• Network security coverage comes in two basic types:


— Third-party coverage provides liability coverage arising from a failure of the insured's security to prevent unauthorized use of or access to its network. This important coverage would apply, subject to the policy's full terms, to claims arising from the transmission of a computer virus (such as the Love Bug or Nimda viruses), theft of a customer's information (most notably including credit card information), and so-called denial-of-service liability. In the last year alone, countless cases of this type of misconduct have been reported.
— First-party coverage provides, upon a covered event, reimbursement for loss arising out of the altering, copying, misappropriating, corrupting, destroying, disrupting, deleting, damaging, or theft of information assets, whether or not criminal. Typically the policy will cover the cost of replacing, reproducing, recreating, restoring, or recollecting the assets. In case of theft of a trade secret (a broadly defined term), the policy will either pay in full or be capped at an endorsed, negotiated amount. First-party coverage also provides reimbursement for E-revenue lost as a result of a covered event. Here, the policy will provide coverage for the period of recovery plus an extended business interruption period. Some policies also provide coverage for dependent business interruption, meaning loss of E-revenue as a result of a computer attack on a third-party business (such as a supplier) upon which the insured's business depends.
• Cyber-extortion coverage provides reimbursement of investigation costs, and sometimes the extortion demand itself, in the event of a covered cyber-extortion threat. These threats, which usually take the form of a demand for "consulting fees" to prevent the release of hacked information or to prevent the extortionist from carrying out a threat to shut down the victim's Web site, are all too common.
• Public relations or crisis communication coverage provides reimbursement up to $50,000 for the use of public relations firms to rebuild an enterprise's reputation with customers, employees, and shareholders following a computer attack.
• Criminal reward funds coverage provides reimbursement up to $50,000 for information leading to the arrest and conviction of a cyber-criminal. Given that many cyber-criminals hack into sites for "bragging rights," this unique insurance provision may create a most welcome chilling effect.

Loss Prevention Services

Another important feature of a quality cyber-risk insurance program is its loss prevention services. Typically these services could include anything from free online self-assessment programs and free educational CDs to a full-fledged, on-site security assessment, usually based on ISO 17799.


Exhibit 22-7. Finding the right insurer. (Quality: preferred or minimum threshold)

  Financial strength: Triple-A from Standard & Poor's
  Experience: At least two years in a dedicated, specialized unit composed of underwriters, claims, technologists, and legal professionals
  Capacity: Defined as the amount of limits a single carrier can offer; minimum acceptable: $25,000,000
  Territory: Global presence with employees and law firm contacts throughout the United States, Europe, Asia, the Middle East, and South America
  Underwriting: Flexible, knowledgeable
  Claims philosophy: Customer focused; willing to meet with the client both before and after a claim
  Policy form: Suite permitting the insured to choose the right coverage, including the eight coverages described above
  Loss prevention: Array of services, most importantly including free on-site security assessments conducted by well-established third-party (worldwide) security assessment firms

Some insurers may also add other services such as an internal or external network scan. The good news is that these services are valuable, costing up to $50,000. The bad news is that the insurance applicant usually has to pay for the services, sometimes regardless of whether or not it ends up buying the policy. Beginning in 2001, one carrier has arranged to pay for these services as part of the application process. This is welcome news, and it can only be hoped that more insurers will follow this lead.

Finding the Right Insurer

As important as finding the right insurance product is finding the right insurer. Financial strength, experience, and claims philosophy are all important. In evaluating insurers, buyers should take into consideration the factors listed in Exhibit 22-7.

In summary, traditional insurance is not up to the task of dealing with today's cyber-risks. To yield the full benefits, companies should purchase and implement a combination of traditional and specific cyber-risk insurance.

TECHNICAL CONTROLS

Beyond insurance, standard technical controls must be put in place to manage risks. First of all, the basic physical infrastructure of the IT data center should be secured against service disruptions caused by environmental threats. Organizations that plan to build and manage their own data


centers should implement fully redundant and modular systems for power, Internet access, and cooling. For example, data centers should consider backup generators in case of area-wide power failures, and Internet connectivity from multiple ISPs in case of service outages from one provider. Where the customer does not wish to directly manage its data center, the above controls should be verified before contracting with an ASP or ISP. These controls should be guaranteed contractually, as should failover controls and minimum uptime requirements.

Physical Access Control

Access control is an additional necessity for a complete data center infrastructure. Physical access control is more than simply securing entrances and exits with conventional locks and security guards. Secure data centers should rely on alarm systems and approved locks for access to the most secure areas, with motion detectors throughout. More complex security systems, such as biometric5 or dual-factor authentication (authentication requiring more than one proof of identity, e.g., card and biometric), should be considered for highly secure areas. Employee auditing and tracking for entrances and exits should be put in place wherever possible, and visitor and guest access should be limited. A summary of potential controls is provided in Exhibit 22-8.

If it is feasible to do so, outside expertise in physical security, like logical security, should be leveraged wherever possible. Independent security audits may provide insight regarding areas of physical security that are not covered by existing controls. Furthermore, security reports may be required by auditors, regulators, and other third parties. Audit reports and other security documentation should be kept current and retained in a secure fashion. Again, if an organization uses outsourced facilities for application hosting and management, it should look for multilevel physical access control.
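The dual-factor requirement mentioned above can be illustrated with a toy sketch. All names and factor data below are invented for illustration; real access control products integrate badge readers and biometric hardware, but the decision logic reduces to requiring that every mandated factor independently verify:

```python
# Toy sketch of a dual-factor access decision for a high-security zone.
# Factor names and data are illustrative, not from any real product.

REQUIRED_FACTORS = {"badge", "biometric"}

def grant_access(presented_factors: dict[str, bool]) -> bool:
    """Grant access only if every required factor is present and verified."""
    return all(presented_factors.get(f, False) for f in REQUIRED_FACTORS)

# A badge alone is insufficient for the secure area:
print(grant_access({"badge": True}))                     # False
print(grant_access({"badge": True, "biometric": True}))  # True
```

The design point is that factors are conjunctive: losing a badge does not compromise the zone, because the second, independent proof of identity is still required.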
Third-party audit reports should be made available as part of the vendor search process, and security controls should be made part of the evaluation criteria. As with environmental controls, access controls should also be addressed within the final service agreement, such that major modifications to the existing access control infrastructure require advance knowledge and approval. Organizations should insist on periodic audits or third-party reviews to ensure compliance.

Network Security Controls

A secure network is the first layer of defense against risk within an E-business system. Network-level controls are instrumental in preventing unauthorized access from within and without, and tracking sessions internally will detect and alert administrators in case of system penetration.


Exhibit 22-8. Physical controls.

  Access control
    Description: Grants access to physical resources through possession of keys, cards, biometric indicators, or key combinations; multi-factor authentication may be used to increase authentication strength, and access control systems that require multiple-party authentication provide higher levels of access control
    Role: Securing data center access in general, as well as access to core resources such as server rooms; media — disks, CD-ROMs, tapes — should be secured using appropriate means as well; organizations should model their access control requirements on the overall sensitivity of their data and applications
  Intrusion detection
    Description: Detection of attempted intrusion through motion sensors, contact sensors, and sensors at standard access points (doors, windows, etc.)
    Role: At all perimeter access points to the data center, as well as in critical areas
  24/7 monitoring
    Description: Any data center infrastructure should rely on round-the-clock monitoring, through on-premises personnel and off-site monitoring
    Role: Validation of existing alarm and access control systems

Exhibit 22-9. Demilitarized zone architecture. [Figure: an Internet router and Internet firewall, backed by intrusion detection, front an Internet DMZ containing the Internet Web server and DNS; an intranet firewall then separates an intranet DMZ containing the intranet Web servers and application server.]

Exhibit 22-9 conceptually depicts the overall architecture of an E-business data center. Common network security controls include the following features.


Firewalls. Firewalls are critical components of any Internet-facing system. Firewalls filter network traffic based on protocol, destination port, or packet content. As firewall systems have become more advanced, the range of attack types they can recognize has continued to grow. Firewalls may also be upgraded to filter questionable content or scan incoming traffic for attack signatures or illicit content.
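The filtering behavior described here — allow or deny by protocol, destination port, and address — can be reduced to a first-match rule evaluator. This is a simplification (the rule set and addresses are invented, and real firewalls also track connection state), but it shows the core logic:

```python
# Minimal sketch of first-match packet filtering by protocol, destination
# network, and destination port. Rules and addresses are illustrative only.
from ipaddress import ip_address, ip_network

RULES = [
    # (action, protocol, destination network, destination port)
    ("allow", "tcp", ip_network("203.0.113.0/28"), 443),  # DMZ Web servers
    ("allow", "udp", ip_network("203.0.113.0/28"), 53),   # DMZ DNS
    ("deny",  "any", ip_network("0.0.0.0/0"),      None), # default deny
]

def filter_packet(protocol: str, dst_ip: str, dst_port: int) -> str:
    """Return the action of the first rule matching the packet."""
    for action, proto, net, port in RULES:
        if proto in ("any", protocol) and ip_address(dst_ip) in net \
                and port in (None, dst_port):
            return action
    return "deny"

print(filter_packet("tcp", "203.0.113.5", 443))  # allow
print(filter_packet("tcp", "203.0.113.5", 23))   # deny (Telnet blocked)
```

The trailing catch-all rule implements the default-deny posture: anything not explicitly permitted is dropped.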

For any infrastructure that requires access to business data, a multiple-firewall configuration should be used. An Internet demilitarized zone (DMZ) should be created for all Web-accessible systems — Web servers or DNS servers — while an intranet DMZ, separated from the Internet, contains application and database servers. This architecture prevents external entities from directly accessing application logic or business data.

Network Intrusion Detection Systems. Networked IDSs track internal sessions at major network nodes and look for attack signatures — a sequence of instructions corresponding to a known attack. These systems generally are also tied into monitoring systems that can alert system administrators in the case of detected penetration. More advanced IDSs accept only "correct" sequences of packets and use real-time monitoring capabilities to identify suspicious but unknown sequences.

Anti-virus Software. Anti-virus gateway products can provide a powerful second level of defense against worms, viruses, and other malicious code. Anti-virus gateway products, provided by vendors such as Network Associates, Trend Micro, and Symantec, can scan incoming HTTP, SMTP, and FTP traffic for known virus signatures and block the virus before it infects critical systems.
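Signature matching of the sort performed by a network IDS can be sketched in a few lines. The signatures below are made up for illustration; production systems use large, vendor-maintained signature databases and stateful protocol analysis rather than plain substring search:

```python
# Toy network-IDS sketch: scan a payload for known attack signatures and
# report any matches. Signatures here are illustrative only.
SIGNATURES = {
    b"/cmd.exe?": "IIS directory traversal attempt",
    b"\x90\x90\x90\x90": "possible NOP sled (buffer overflow)",
}

def inspect(payload: bytes) -> list[str]:
    """Return the names of all signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

alerts = inspect(b"GET /scripts/..%255c../winnt/system32/cmd.exe?/c+dir")
print(alerts)  # ['IIS directory traversal attempt']
```

In a real deployment the match would feed a monitoring console so that administrators are alerted as the penetration attempt occurs.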

As described in Exhibit 22-10, specific design principles should be observed in building a stable and secure network. Exhibit 22-11 provides a summary of the controls in question.

Exhibit 22-10. Secure network design principles.

Redundancy. Firewall systems, routers, and critical components such as directory servers should be fully redundant to reduce the impact of a single failure.

Currency. Critical network tools must be kept up-to-date with respect to patch level and core system operations. Vulnerabilities are discovered frequently, even within network security devices such as firewalls or routers.

Scalability. An enterprise's network security infrastructure should be able to grow as business needs require. Service outages caused by insufficient bandwidth from an ISP, or server outages due to system maintenance, can be fatal for growing applications. The financial restitution provided by cyber-risk coverage might cover business lost during the service outage but cannot address the greater issues of loss of future business, consumer goodwill, or reputation.

Simplicity. Complexity of systems, rules, and components can create unexpected vulnerabilities in commercial systems. Where possible, Internet-facing infrastructures should be modularized and simplified so that no single component is called upon to perform multiple services. For example, an organization with a complex E-business infrastructure should separate that network environment from its own internal testing and development networks, with only limited points of access between the two environments. A more audited and restricted set of rules may then be enforced in the former without affecting the productivity of the latter.

Exhibit 22-11. Network security controls.

  Firewall
    Description: Blocks connections to internal resources by protocol, port, and address; also provides stateful packet inspection
    Role: Behind Internet routers; also within corporate networks, to segregate systems into DMZs
  IDS
    Description: Detects signatures of known attacks at the network level
    Role: At high-throughput nodes within networks, and at the perimeter of the network (at the firewall level)
  Anti-virus
    Description: Detects malicious code at network nodes
    Role: At Internet HTTP and SMTP gateways

Increasingly, organizations are moving toward managed network services rather than supporting these systems internally. Such a solution saves the organization from having to build a staff for managing security devices, or to maintain a 24/7 administration center for monitoring critical systems. This buy (or, in this case, hire) versus build decision should be seriously considered in planning your overall risk management framework. Organizations looking to outsource security functions can certainly save money, resources, and time; however, they should look closely at the financial as well as the technical soundness of any such vendor.

Application Security Controls

A successful network security strategy is only useful as a backbone to support the development of secure applications. These controls entail security at the operating system level for enterprise systems, as well as trust management, encryption, data security, and audit controls at the application level.

Operating systems should be treated as among the most vulnerable components of any application framework. Too often, application developers create strong security controls within an application but have no control over lower-level exploits. Furthermore, system maintenance and administration over time are frequently overlooked as necessary components of security. Therefore, the following controls should be observed:


• Most major OS suppliers — Microsoft, Sun, Hewlett-Packard, etc. — provide guidelines for operating system hardening. Implement those guidelines on all production systems.
• Any nonessential software should be removed from production systems.
• Administer critical servers from the system console wherever possible. Remote administration should be disabled; if this is not possible, secure log-in shells should be used in place of less secure protocols such as Telnet.
• Host-based intrusion detection software should be installed on all critical systems. A host-based IDS is similar to the network-based variety, except that it scans only traffic intended for the target server. Known attack signatures may be detected and blocked before they reach the target application, such as a Web or application server.

Application-level security is based on maintaining the integrity and confidentiality of the system as well as the data managed by the system. A Web server that provides promotional content and brochures to the public, for example, has little need for confidentiality controls. However, a compromise of that system resulting in vandalism or server downtime could prove costly; therefore, system and data integrity should be closely controlled. These controls are partially provided by security at the operating system and network levels, as noted above; additional controls, however, should be provided within the application itself.

Authentication and authorization are necessary components of application-level security. Known users must be identified and allowed access to the system, and system functions must be categorized so that users are presented only with access to the data and procedures that correspond to their defined privilege level. The technical controls around authentication and authorization are only as useful as the procedural controls around user management.
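The authorization principle described above — users see only the data and functions matching their privilege level — is commonly implemented as a role-to-permission mapping. The roles and permissions below are invented for illustration:

```python
# Toy role-based authorization check; roles and permission names are
# illustrative, not from any particular product.
ROLE_PERMISSIONS = {
    "customer": {"view_own_account"},
    "support":  {"view_own_account", "view_customer_accounts"},
    "admin":    {"view_own_account", "view_customer_accounts", "manage_users"},
}

def authorized(role: str, permission: str) -> bool:
    """True only if the role's permission set includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(authorized("support", "manage_users"))  # False
print(authorized("admin", "manage_users"))    # True
```

An unknown role maps to the empty permission set, so the check fails closed — consistent with the default-deny posture urged throughout this section.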
The enrollment of new users, management of personal user information and usage profiles, password management, and the removal of defunct users from the system are all required for an authentication engine to provide real risk mitigation. Exhibit 22-12 provides a summary of these technologies and procedures.

Data Backup and Archival

In addition to technologies to prevent or detect unauthorized system penetration, controls should be put in place to restore data in the event of loss. System backups — onto tape or permanent media — should be in place for any business-critical application.


Exhibit 22-12. Application security controls.

  System hardening
    Description: Processes, procedures, and products to harden the operating system against exploitation of network services
    Role: Should be performed for all critical servers and internal systems
  Host-based intrusion detection
    Description: Monitors connections to servers and detects malicious code or attack signatures
    Role: On all critical servers and internal systems
  Authentication
    Description: Allows for identification and management of system users through identities and passwords
    Role: For any critical systems; authentication systems may be leveraged across multiple applications to provide single sign-on for the enterprise
  Access control
    Description: Maps users, by identity or by role, to system resources and functions
    Role: For any critical application
  Encryption
    Description: Critical business data or nonpublic client information should be encrypted (i.e., obscured) while in transit over public networks
    Role: For all Internet-based transactional connectivity; encryption should also be considered for securing highly sensitive data in storage
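As a concrete illustration of the encryption-in-transit control in Exhibit 22-12, the sketch below builds a TLS client context with Python's standard `ssl` module. The hostname in the commented usage is a placeholder; the point is that the context both encrypts the channel and authenticates the server before any nonpublic data is sent:

```python
import ssl

# Sketch: enforce encryption and server authentication for data in transit.
# create_default_context() enables certificate verification and hostname
# checking by default; here we additionally refuse pre-TLS-1.2 protocols.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.verify_mode == ssl.CERT_REQUIRED and context.check_hostname

# Illustrative usage (hostname is a placeholder):
# import socket
# with socket.create_connection(("www.example.com", 443)) as raw:
#     with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
#         tls.sendall(b"nonpublic client information ...")
```

Verifying the certificate is as important as encrypting: an encrypted session to an unauthenticated endpoint still exposes the data to an interposed attacker.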

Backups should be made regularly — as often as daily, depending on the requirements of the business — and should be stored off-site to prevent loss or damage. Test restores should also be performed regularly to ensure the continued viability of the backup copies. Backup retention should extend to at least a month, with one backup per week retained for a year and monthly backups retained for several years. Backup data should always be created and stored in a highly secure fashion.

Finally, to ensure system availability, enterprise applications should plan on at least one tier of redundancy for all critical systems and components. Redundant systems can increase the load-bearing capacity of a system as well as provide increased stability. The use of enterprise-class multiprocessor machines is one solution; multiple systems can also be consolidated into server farms. Network devices such as firewalls and routers can also be made redundant through load balancers.

Businesses may also wish to consider maintaining standby systems in the event of a critical data center failure. Standby systems, like backups, should be housed in a separate storage facility and should be tested periodically to ensure stability. These backup systems should be able to be brought online within 48 hours of a disaster and should be restored with the most recently available system backups.

CONCLUSION

The optimal model to address the risks of Internet security must combine technology, process, and insurance. This risk management approach permits companies to successfully address a range of different risk exposures, from direct attacks on system resources to unintentional acts of copyright infringement. In some cases, technical controls have been devised that help address these threats; in others, procedural and audit controls must be implemented. Because these threats cannot be completely removed, however, cyber-risk insurance coverage represents an essential tool in providing such nontechnical controls and a major innovation in the conception of risk management in general. A comprehensive policy backed by a specialized insurer with top financial marks and global reach allows organizations to lessen the damage caused by a successful exploit and better manage costs related to loss of business and reputation. It is only through merging the two types of controls that an organization can best minimize its security threats and mitigate its IT risks.

Notes

1. The views and policy interpretations expressed in this work by the authors are their own and do not necessarily represent those of American International Group, Inc., or any of its subsidiaries, business units, or affiliates.
2. See http://www.gocsi.com for additional information.
3. Coverage provided in ISPreview, ZDNet.
4. One carrier's example of this concept can be found at www.aignetadvantage.com.
5. Biometric authentication comprises many different measures, including fingerprint scans, retinal or iris scans, handwriting dynamics, and facial recognition.

ABOUT THE AUTHORS

Carol A. Siegel, CISSP, is the chief security officer of American International Group. Siegel is a well-known expert in the field of information security and has been in the field for more than ten years. She holds a B.S. in systems engineering from Boston University and an M.B.A. in computer applications from New York University, and is a CISA. She can be reached at [email protected]

Ty R. Sagalow is executive vice president and chief operating officer of American International Group eBusiness Risk Solutions, the largest Internet risk insurance organization. Over the past 18 years, he has held several executive and legal positions within AIG. He graduated summa cum laude from Long Island University, cum laude from Georgetown University Law Center, and holds a Master of Law from New York University. He can be reached at [email protected]


Paul Serritella is a security architect at American International Group. He has worked extensively in the areas of secure application design, encryption, and network security. He received a B.A. from Princeton University in 1998.


AU1518Ch23Frame Page 365 Thursday, November 14, 2002 6:14 PM

Chapter 23

How to Work with a Managed Security Service Provider Laurie Hill McQuillan, CISSP

Throughout history, the best way to keep information secure has been to hide it from those without a need to know. Before there was written language, the practice of information security arose when humans used euphemisms or code words to refer to communications they wanted to protect. With the advent of the computer in modern times, information was often protected by its placement on mainframes locked in fortified rooms, accessible only to trusted employees capable of communicating in esoteric programming languages.

The growth of networks and the Internet has made hiding sensitive information much more difficult. Where it was once sufficient to provide a key to those with a need to know, now any user with access to the Internet potentially has access to every node on the network and every piece of data sent through it. So while technology has enabled huge gains in connectivity and communication, it has also complicated the ability of networked organizations to protect their sensitive information from hackers, disgruntled employees, and other threats. Faced with a lack of resources, a need to recover from an attack, or little understanding of secure technology, organizations are looking for creative and effective ways to protect the information and networks on which their success depends.

OUTSOURCING DEFINED

One way of protecting networks and information is to hire someone with security expertise that is not available in-house. Outsourcing is an arrangement whereby one business hires another to perform tasks it cannot (or does not want to) perform for itself. In the context of information security,


outsourcing means that the organization turns over responsibility for the security of its information or assets to professional security managers. In the words of one IT manager, outsourcing "represents the possibility of recovering from the awkward position of trying to accomplish an impossible task with limited resources."1 This promising possibility is embodied in a new segment of the information security market, managed system security providers (MSSPs), which has arisen to provide organizations with an alternative to investing in their own systems security.

INDUSTRY PERSPECTIVE

With the exception of a few large companies that have offered security services for many years, providing outsourced security is a relatively new phenomenon. Until the late 1990s, no company described itself exclusively as a provider of security services, while in 2001 several hundred service and product providers are listed in MSSP directories. One company has estimated that companies spent $140 million on security services in 1999, and by 2001 managed security firms had secured almost $1 billion in venture capital.2 Another has predicted that the demand for third-party security services will exceed $17.2 billion by the end of 2004.3

The security products and services industry can be segmented in a number of different ways. One view looks at how the outsourced service relates to the security program it supports. These services include performance of short-term or one-time tasks (such as risk assessments, policy development, and architecture planning); mid-term tasks (including integration of functions into an existing security program); and long-range tasks (such as ongoing management and monitoring of security devices or incidents). By far the majority of MSSPs fall into the last category and seek to establish ongoing, long-term relationships with their customers.

A second type of market segmentation is based on the type of information protected or on the target customer base. Some security services focus on particular vertical markets such as the financial industry, the government, or the defense industry. Others focus on particular devices and technologies, such as virtual private networks or firewalls, and provide implementation and ongoing support services. Still others offer combinations of services or partnerships with vendors and other providers outside their immediate expertise.

The outsourcing of security services is not growing only in the United States or the English-speaking world, either in terms of the organizations that choose to outsource their security or those that provide the outsourced services. Although many U.S. MSSP companies have international branches, MSSP directories turn up as many Far Eastern and European companies as American or British. In fact, these global companies grow because they understand the local requirements of their customer base.


This is particularly evident in Europe, where International Security Standard (ISO) 17799 has gained acceptance much more rapidly than in the United States, providing guidance on good security practices to both client and vendor organizations. This, in turn, has contributed to a reduction in the risk of experiencing some of the outsourcing performance issues described below.

Future Prospects

Many MSSPs were formed during the dot.com boom of the mid-1990s, in conjunction with the rapid growth of E-commerce and the Internet. Initially, dot.com companies preferred to focus on their core businesses but neglected to secure that business, providing quick opportunity for those who understood newly evolving security requirements. Later, as the boom turned to bust, dot.coms took their expertise in security and new technology and evolved into MSSPs. However, as this chapter is being written in early 2002, while the number of MSSPs is growing, a rapid consolidation and fallout among MSSPs is taking place — particularly among those that never achieved financial stability or a strong market niche. Some analysts "expect this proliferation to continue, but vendors over the next year will be sharply culled by funding limits, acquisition, and channel limits. Over the next three years, we expect consolidation in this space, first by vendors attempting multifunction aggregation, then by resellers through channel aggregation."4

OUTSOURCING FROM THE CORPORATE PERSPECTIVE

On the surface, the practice of outsourcing appears to run contrary to the ancient tenet of hiding information from those without a need to know. If the use of networks and the Internet has become central to the corporate business model, then exposing that model to an outside entity would seem inimical to good security practice. So why, then, would any organization want to undertake an outsourcing arrangement?
Relationship to the Life Cycle

The answer to this question lies in the pace at which the networked world has evolved. It is rare to read a discussion of the growth of the Internet without seeing the word exponential used to describe the rate of expansion. While this exponential growth has led to rapid integration of the Internet with corporate business models, businesses have moved more slowly to protect their information, whether through lack of knowledge, immature security technology, or a misplaced confidence in a vendor's ability to provide secure IT products. Most automated organizations have 20 or more years of experience with IT management and operations, and their IT departments know how to build systems and integrate them. What they


[Exhibit 23-1 (diagram): the model's three components, Foundation, Trust, and Control, surround the label "Security Management Practices." The annotations read, approximately: "Use to derive technical requirements and select vendor"; "Use to define roles and responsibilities and contractual requirements"; "Mechanisms for Management and Control"; "Security Requirements (Confidentiality, Availability, Integrity)"; and "Use to establish metrics and derive performance requirements."]

Exhibit 23-1. Using a security model to derive requirements.

have not known, and have been slow to learn, is how to secure them, because the traditional IT security model has been to hide secret information; and in a networked world, it is no longer possible to do that easily.

One of the most commonly cited security models is that documented by Glen Bruce and Rob Dempsey.5 This model defines three components: foundation, control, and trust. The foundation layer includes security policy and principles, criteria and standards, and the education and training systems. The trust layer includes the environment's security, availability, and performance characteristics. The control layer includes the mechanisms used to manage and control each of the required components. In deciding whether to outsource its security, and in planning for a successful outsourcing arrangement, this model can serve as a useful reference for ensuring that all aspects of security are considered in the requirements. As shown in Exhibit 23-1, each of the model's components can drive aspects of the arrangement.

THE FOUR PHASES OF AN OUTSOURCING ARRANGEMENT

Phase 1 of an outsourcing arrangement begins when an organization perceives a business problem; in the case of IT, this is often a vulnerability or threat that the organization cannot address. The organization then decides that an outside entity may be better equipped to solve the problem than its own staff. The reasons behind this decision are discussed below; but once it is made, the organization must put an infrastructure in place to manage the arrangement. In


Phase 2, a provider of services is selected and hired. In Phase 3, the arrangement must be monitored and managed to ensure that the desired benefits are being realized. Finally, in Phase 4, the arrangement comes to an end, and the organization must ensure a smooth and nondisruptive transition out.

Phase 1: Identify the Need and Prepare to Outsource

It is axiomatic that no project can succeed unless the requirements are well defined and the expectations of all participants are clearly articulated. This is especially true of a security outsourcing project when the decision to bring in an outside concern is made under pressure during a security breach. In fact, one of the biggest reasons outsourcing projects fail is that the business does not understand what lies behind the decision to outsource, or why it is believed that the work cannot (or should not) be done in-house. Organizations that decide to outsource after careful consideration, and that plan carefully to avoid its potential pitfalls, will benefit most from the decision.

The goal of Phase 1 is to articulate, in writing if possible, the reasons for the decision to outsource. As discussed below, this means spelling out the products or services to be acquired, the advantages expected, the legal and business risks inherent in the decision, and the steps to be taken to minimize those risks.

Consider Strategic Reasons to Outsource. Many of the reasons to outsource are strategic in nature. These promise advantages beyond a solution to the immediate need and allow the organization to seek long-term, strategic advantages for the business as a whole:

• Free up resources to be used for other mission-critical purposes.
• Maintain flexibility of operations by allowing peak requirements to be met while avoiding the cost of hiring new staff.
• Accelerate process improvement by bringing in subject-matter expertise to train corporate staff or to teach by example.
• Obtain current technology or capability that would otherwise have to be hired or acquired through retraining, both at potentially high cost.
• Avoid infrastructure obsolescence by giving the responsibility for technical currency to someone else.
• Overcome strategic stumbling blocks by bringing in third-party objectivity.
• Control operating costs, or turn fixed costs into variable ones, through the use of predictable fees, presuming the MSSP has superior performance and a lower cost structure.
• Enhance organizational effectiveness by focusing on what is known best, leaving more difficult security tasks to someone else.
• Acquire innovative ideas from experts in the field.


Organizations that outsource for strategic reasons should be cautious. The decision to refocus on strategic objectives is a good one, but turning to an outside organization for assistance with key strategic security functions is not. If security is an inherent part of the company's corporate mission, and strategic management of this function is not working, the company should consider whether outsourcing will actually correct those issues. The problems may be deeper than a vendor can fix.

Consider Tactical Reasons. The tactical reasons for outsourcing security functions are those that deal with day-to-day functions and issues. When the organization is looking for a short-term benefit, an immediate response to a specific issue, or improvement in a specific aspect of its operations, these tactical advantages of outsourcing are attractive:

• Reduce response times when dealing with security incidents.
• Improve customer service to those being supported.
• Allow IT staff to focus on day-to-day or routine support work.
• Avoid an extensive capital outlay by obviating the need to invest in new equipment such as firewalls, servers, or intrusion detection devices.
• Meet short-term staffing needs by bringing in staff that is not needed on a full-time basis.
• Solve a specific problem that existing staff does not have the expertise to address.

While the tactical decision to outsource might promise quick or more focused results, this does not necessarily mean that the outsourcing arrangement must be short-term. Many successful long-term outsourcing arrangements are viewed as just one part of a successful information security program, or are selected for a combination of strategic and tactical reasons.

Anticipate Potential Problems. The prospect of realizing these advantages can be seductive to an organization that is troubled by a business problem. But for every potential benefit, there is a potential pitfall as well. During Phase 1, after the decision to outsource is made, the organization must put in place an infrastructure to manage the arrangement. This requires fully understanding, and taking steps to avoid, the many problems that can arise with outsourcing contracts:

• Exceeding expected costs, either because the vendor failed to disclose them in advance or because the organization did not anticipate them
• Experiencing contract issues that lead to difficulties in managing the arrangement or to legal disputes
• Losing control of basic business resources and processes that now belong to someone else


• Failing to maintain mechanisms for effective provider management
• Losing in-house expertise to the provider
• Suffering degradation of service if the provider cannot perform adequately
• Discovering conflicts of interest between the organization and the outsourcer
• Disclosing confidential data to an outside entity that may not have a strong incentive to protect it
• Experiencing declines in productivity and morale among staff who believe they are no longer important to the business or no longer control key resources
• Becoming dependent on inadequate technology if the vendor does not maintain technical currency
• Becoming a "hostage" to the provider, who now controls key resources

Document Requirements and Expectations. As discussed above, the goal of Phase 1 is to fully understand why the decision to outsource is being made, to justify the rationale for the decision, and to ensure that the arrangement's risks are minimized. Minimizing risk is best accomplished through careful preparation for the outsourced arrangement.

Thus, the organization's security requirements must be clearly defined and documented. In the best case, this takes the form of a comprehensive security policy that has been communicated and agreed to throughout the organization. However, companies that are just beginning to implement a security program may be hiring expertise to help with first steps and consequently have no such policy. In these cases, the security requirements should be defined in business terms: a description of the information or assets to be protected, their level of sensitivity, their relationship to the core business, and the requirement for maintaining the confidentiality, availability, and integrity of each.

One of the most common issues arising from outsourcing arrangements is financial: costs may not be fully understood, or unanticipated costs surface after the fact. It is important that the organization understand the potential costs of the arrangement, which requires a complete understanding of the internal costs before the outsourcing contract is established. A cost/benefit analysis should be performed and should include a calculation of return on investment. As with any cost/benefit analysis, there may be costs and benefits that are not quantifiable in financial terms, and these should be considered and included as well. They may include additional overhead in terms of staffing, financial obligations, and management requirements.

Outsourcing will add new risks to the corporate environment and may exacerbate existing risks. Many organizations that outsource perform a


complete risk analysis before undertaking the arrangement, including a description of the residual risk expected after the outsourcing project begins. Such an analysis can be invaluable during preparation of the formal specification, because it will point to requirements for ameliorating those risks. Because risk can be avoided or reduced by the implementation of risk management strategies, a full understanding of residual risk will also aid in managing the vendor's performance once the work begins, and it will suggest areas where management must pay stronger attention in assessing the project's success.

Prepare the Organization. To ensure the success of the outsourcing arrangement, the organization should be sure that it can manage the provider's work effectively. This requires internal corporate knowledge of the work or service outsourced. Even if this knowledge is not deeply technical (if, for example, the business is networking its services for the first time), the outsourcing organization must understand the business value of the work or service and how it supports the corporate mission. This includes an understanding of the internal cost structure, because without it the financial value of the outsourcing arrangement cannot be assessed.

Assign Organizational Roles. As with any corporate venture, management and staff acceptance are important to the success of the outsourcing project. Acceptance is best achieved by involving all affected corporate staff in the decision-making process from the outset, and by ensuring that everyone agrees with, or is willing to support, the decision to go ahead.

With general support for the arrangement in place, the organization should clearly articulate each affected party's role in working with the vendor. Executives and management-level staff who are ultimately responsible for the success of the arrangement must be supportive and must communicate the importance of the project's success throughout the organization. System owners and content providers must be helped to view the vendor as an IT partner and must not feel their ownership threatened by the assistance of an outside entity. These individuals should be given responsibility for establishing the project's metrics and desired outcomes, because they are in the best position to understand the organization's information requirements. The organization's IT staff is in the best position to gauge the vendor's technical ability and should be given a role in bringing the vendor up to speed on the technical requirements to be met. The IT staff should also be encouraged to view the vendor as a partner in providing IT services to the organization's customers. Finally, if there are internal security employees, they should be responsible for establishing the security policies and



[Exhibit 23-2 (diagram): three levels of management control. At the top, strategy formulation, performed by CIOs, senior management, and representatives of the outsourcing company, produces security goals and policies; contract management links this level to management control, which governs implementation of security strategies and technologies; at the bottom, task control, exercised by system administrators and end users, yields efficient and effective performance of security tasks.]

Exhibit 23-2. Management control for outsourcing contracts.

procedures to be followed by the vendor throughout the term of the contract.

The most important part of establishing organizational parameters is assigning accountability for the project's success. Although the vendor will be held accountable for the effectiveness of its work, the outsourcing organization should not give away accountability for management success. Where to lodge this accountability in the corporate structure will vary with the organization and its requirements, but the chances for success are greatly enhanced by ensuring that those responsible for managing the effort are also directly accountable for its results. A useful summary of organizational responsibilities for the outsourcing arrangement is shown in Exhibit 23-2, which illustrates the level of management control for various activities.6

Prepare a Specification and RFP. If the foregoing steps have been completed correctly, the process of documenting requirements and preparing a specification should be a simple formality. A well-written request for proposals (RFP) will include a complete and thorough description of the organizational, technical, management, and performance requirements, and of the products and services to be provided by the vendor. Every corporate expectation articulated during the exploration stage should be covered by a performance requirement in the RFP, and the metrics that will be used to assess the vendor's performance should be included in a service level agreement (SLA). The SLA can be a separate document, but it should be legally incorporated into the resulting contract.

The RFP and resulting contract should specify the provisions for the use of hardware and software that are part of the outsourcing arrangement.


This might include, for example, the type of software that is acceptable or its placement, so that the provider does not modify the client's technical infrastructure or remove assets from the customer premises without advance approval. Some MSSPs want to install their own hardware or software at the customer site; others prefer to use customer-owned technical resources; and still others perform the work on their own premises using their own resources. Regardless, the contract should spell out the provisions for ownership of all resources that support the arrangement and for the eventual return of any assets whose control or possession is outsourced. If intellectual property is involved, as might be the case with a custom-developed security solution, the contract should also specify how licensing of the property works and who will retain ownership of it at the end of the arrangement.

During the specification process, the organization should have determined what contractual provisions it will apply for nonperformance or substandard performance. The SLA contract should clearly define items considered performance infractions or errors, including requirements for their correction, and any financial or nonfinancial penalties for noncompliance or failure to perform.

The contract need not be restricted to technical requirements and contractual terms; it may also address human resources and business management issues. Requirements might govern access to vendor staff by the customer (and vice versa) and provisions for day-to-day management of the staff performing the work. In addition, requirements for written deliverables, regular reports, and the like should be specified in advance.

The final section of the RFP and contract should govern the end of the outsourcing arrangement and provisions for terminating the relationship with the vendor.
The terms that govern the transition out should be designed to reduce exit barriers for both the vendor and the client, particularly because these terms may need to be invoked during a dispute or otherwise under less-than-optimal circumstances. One key provision is to require that the vendor cooperate fully with any vendor that succeeds it in performance of the work.

Specify Financial Terms and Pricing. Some basic financial considerations for the RFP are to request that the vendor provide evidence that its pricing and terms are competitive and that they present an acceptable cost/benefit business case. The RFP should also request that the vendor propose incentives and penalties based on performance and warrant the work it performs.

The specific cost and pricing sections of the specification depend on the nature of the work outsourced. Historically, many outsourcing contracts were priced in terms of unit prices for units provided, and may have been


measured by staff (such as hourly rates for various skill levels), resources (such as workstations supported), or events (such as calls answered). The unit prices may have been fixed or varied with rates of consumption, may have included guaranteed levels of consumption, and may have been calculated based on cost or on target profits. However, these types of arrangements have become less common over the past few years. The cost-per-unit model tends to encourage the selling organization to increase the units sold, driving up the quantity consumed by the customer regardless of the benefit to the customer. By the same token, this drives the customer to seek alternative arrangements with lower unit costs; at some point the two competing pressures diverge enough that the arrangement must end.

So it has become more popular to craft contracts that tie costs to expected results and give both vendor and customer incentives to perform according to expectations. Some arrangements provide increased revenue to the vendor each time a performance threshold is met; others are tied to customer satisfaction measures; and still others provide for gain-sharing, wherein the customer and vendor share in any savings from a reduction in customer costs. Whichever model is used, both vendor and customer are given incentives to meet the requirements that apply to each.

Anticipate Legal Issues. The RFP and resulting contract should spell out clear requirements for liability and culpability. For example, if the MSSP is providing security alerting and intrusion detection services, who is responsible in the event of a security breach? No vendor can provide a 100 percent guarantee that such breaches will not occur, and organizations should be wary of anyone who makes such a claim. However, it is reasonable to expect that the vendor can prevent predefined, known, and quantified events from occurring.
If there is damage to the client’s infrastructure, who is responsible for paying the cost of recovery? By considering these questions carefully, the client organization can use the possibility of breaches to provide incentives for the vendor to perform well.
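The incentive mechanics described above (performance thresholds, gain-sharing, and penalties) reduce to simple arithmetic. The sketch below is illustrative only: the base fee, uptime target, penalty rate, and gain-share fraction are invented assumptions, not terms drawn from any real MSSP contract.

```python
# Hypothetical incentive-based fee calculation; all figures are invented
# for illustration and are not drawn from any actual MSSP agreement.

def monthly_fee(base_fee: float,
                uptime_pct: float,
                uptime_target: float = 99.5,
                penalty_per_tenth: float = 250.0,
                customer_savings: float = 0.0,
                gain_share: float = 0.30) -> float:
    """Base fee, minus an SLA penalty for each 0.1 point of missed
    uptime, plus the vendor's share of documented customer savings."""
    fee = base_fee
    if uptime_pct < uptime_target:
        shortfall_tenths = (uptime_target - uptime_pct) / 0.1
        fee -= shortfall_tenths * penalty_per_tenth
    fee += gain_share * customer_savings
    return max(fee, 0.0)

# A month that misses the uptime target by 0.5 points but produces
# $10,000 in documented savings for the customer:
print(monthly_fee(20_000.0, 99.0, customer_savings=10_000.0))  # prints 21750.0
```

Structuring the fee this way gives both parties the incentives the chapter describes: the vendor loses revenue when it misses service levels and shares in savings it helps create.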

In any contractual arrangement, the client is responsible for performing due diligence. The RFP and contract should spell out the standards of care that will be followed and assign accountability for technical and management due diligence. This includes the requirements to maintain the confidentiality of protected information and the nondisclosure of sensitive, confidential, and secret information. There may be legislative and regulatory issues that affect the outsourcing arrangement, and both the client and the vendor should be aware of these. An organization should be wary of outsourcing responsibilities for which it is legally responsible, unless it can legally assign those responsibilities to


another party. In fact, outsourcing such services may be prohibited by regulation or law, particularly for government entities. Existing protections may not automatically carry over into an outsourced environment. For example, certain requirements for compliance with the Privacy Act or the Freedom of Information Act may not apply to employees of an MSSP or service provider.

Preparing a good RFP for security services is no different from preparing any RFP. The proposing vendors should be obligated to respond with clear, measurable responses to every requirement, including, if possible, client references demonstrating successful prior performance.

Phase 2: Select a Provider

During Phase 1, the organization defined the scope of work and the services to be outsourced. With the RFP and specification created, the organization must now evaluate the proposals received and select a vendor. The selection process includes determining the appropriate characteristics of an outsourcing supplier, choosing a suitable vendor, and negotiating requirements and contractual terms.

Determine Vendor Characteristics. Among the most common security services outsourced are those that include installation, management, or maintenance of equipment and services for intrusion detection, perimeter scanning, VPNs and firewalls, and anti-virus and content protection. These arrangements, when successfully acquired and managed, tend to be long-term and ongoing in nature. Shorter-term outsourcing arrangements might include testing and deployment of new technologies, such as encryption services and PKI in particular, because it is often difficult and expensive to hire expertise in these arenas. Hiring an outside provider for one-time or short-term tasks, such as security assessments, policy development and implementation, or audit, enforcement, and compliance monitoring, is also becoming popular.

One factor to consider during the selection process is the breadth of services offered by the prospective provider. Some vendors have expertise in a single product or service, which can bring superior performance and focus, although it can also mean that the vendor has not been able to expand beyond a small core offering. Other vendors sell a product or set of products and then provide ongoing support and monitoring of the offering. This, too, can mean superior performance due to focus on a small set of offerings, but the potential drawback is that the customer becomes hostage to a single technology and is later unable to change vendors. One relatively new phenomenon in the MSSP market is to hire a vendor-neutral service broker who can perform an independent assessment of requirements and recommend the best providers.
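One way to make such a comparison concrete, whether performed in-house or by a vendor-neutral broker, is a weighted scoring sheet. The criteria, weights, and ratings below are hypothetical assumptions chosen only to show the mechanics, not recommended values.

```python
# Hypothetical weighted-scoring comparison of candidate providers.
# Criteria, weights (summing to 1.0), and 0-5 ratings are invented.

CRITERIA = {
    "breadth_of_services": 0.20,
    "financial_stability": 0.25,
    "technical_expertise": 0.30,
    "client_references":   0.15,
    "cultural_fit":        0.10,
}

def weighted_score(ratings: dict) -> float:
    """Collapse per-criterion ratings (0-5) into one comparable score."""
    return sum(weight * ratings.get(criterion, 0)
               for criterion, weight in CRITERIA.items())

vendor_a = {"breadth_of_services": 4, "financial_stability": 3,
            "technical_expertise": 5, "client_references": 4,
            "cultural_fit": 3}
vendor_b = {"breadth_of_services": 3, "financial_stability": 5,
            "technical_expertise": 3, "client_references": 4,
            "cultural_fit": 4}

print(round(weighted_score(vendor_a), 2))  # prints 3.95
print(round(weighted_score(vendor_b), 2))  # prints 3.75
```

The value of the exercise is less the final number than the forced agreement, before proposals arrive, on which characteristics matter and how much.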


A number of terms have become synonymous with outsourcing or describe various aspects of the arrangement. Insourcing is the opposite of outsourcing, referring to the decision to manage services in-house. Midsourcing refers to a decision to outsource a specific selection of services. Smartsourcing means a well-managed outsourcing (or insourcing) project and is sometimes used by vendors to refer to their own set of offerings.

Choose a Vendor. Given that the MSSP market is relatively new and immature, organizations must pay particular attention to due diligence during the selection process, and should select a vendor that not only has expertise in the services to be performed but also shows financial, technical, and management stability. There should be evidence of an appropriate level of investment in the infrastructure necessary to support the service. In addition to assessing the vendor's ability to perform well, the organization should consider less tangible factors that indicate the degree to which the vendor can act as a business partner. Some of these characteristics are:

• Business culture and management processes. Does the vendor share the corporate values of the client? Does it agree with the way in which projects are managed? Will staff members be able to work successfully with the vendor's staff?
• Security methods and policies. Will the vendor disclose what these are? Are they similar to, or compatible with, the customer's?
• Security infrastructure, tools, and technology. Do these demonstrate the vendor's commitment to maintaining a secure environment? Do they reflect the sophistication expected of the vendor?
• Staff skills, knowledge, and turnover. Is turnover low? Does the staff appear confident and knowledgeable? Does the offered set of skills meet or exceed what the vendor has promised?
• Financial and business viability. How long has the vendor provided these services? Does the vendor have sufficient funding to remain in the business for at least two years?
• Insurance and legal history. Have there been prior claims against the vendor?

Negotiate the Arrangement. With a well-written specification, the negotiation process will be simple because expectations and requirements are spelled out in the contract and can be fully understood by all parties. The specific legal aspects of the arrangement will depend on the client's industry or core business, and they may be governed by regulation (as with government and many financial entities). It is important to establish in advance whether the contract will include subcontractors and, if so, to include them in any final negotiations prior to signing a contract.


Doing so avoids the potential inability to hold subcontractors as accountable for performance as their prime contractor. Negotiation of pricing, delivery terms, and warranties should also be governed by the specification, and the organization should ensure that the terms and conditions of the specification are carried over into the resulting contract.

Phase 3: Manage the Arrangement

Once a provider has been selected and a contract is signed, the SLA will govern the management of the vendor. If the SLA was not included in the specification, it should be documented before the contract is signed and included in the final contract.

Address Performance Factors. For every service or resource being outsourced, the SLA should address the following factors:

• The expectations for successful service delivery (service levels)
• Escalation procedures
• The business impact of failure to meet service levels
• Turnaround times for delivery
• Service availability, such as for after-hours support
• Methods for measurement and monitoring of performance
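Factors like those above can also be captured as structured data, so that compliance is checked mechanically rather than by rereading the contract. The following is a minimal sketch; the field names, sample targets, and contact addresses are assumptions for illustration only.

```python
# Hypothetical SLA records; names and figures are illustrative only.
from dataclasses import dataclass

@dataclass
class ServiceLevel:
    name: str
    target: float            # required level, e.g., percent availability
    measured: float          # what monitoring actually observed
    escalation_contact: str  # who is notified when the level is missed
    turnaround_hours: int    # committed delivery/response turnaround

    def met(self) -> bool:
        return self.measured >= self.target

sla = [
    ServiceLevel("firewall availability", 99.9, 99.95, "noc@example.com", 4),
    ServiceLevel("incident acknowledgement", 95.0, 91.0, "secops@example.com", 1),
]

for item in sla:
    if item.met():
        print(f"{item.name}: met")
    else:
        print(f"{item.name}: MISSED, escalate to {item.escalation_contact}")
```

Recording each service level with its escalation path and turnaround commitment keeps the SLA's terms and the monitoring of those terms in one place.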

Use Metrics. To manage the vendor effectively, the customer must be able to measure compliance with contractual terms and the results and benefits of the provider's work. The SLA should set a baseline for all items to be measured during the contract term; these will necessarily depend on which services are provided. For example, a vendor providing intrusion detection services might be assessed in part by the number of intrusions repelled, as documented in IDS logs.
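As a sketch of how such a metric might be computed, the fragment below counts blocked events per day from log lines. The log format and field positions are invented assumptions; real IDS products use their own formats.

```python
# Count "intrusions repelled" per day from hypothetical IDS log lines.
# The log format (date time ACTION ...) is an invented assumption.
from collections import Counter

SAMPLE_LOG = """\
2002-01-07 03:12:44 BLOCKED src=10.0.0.7 sig=port-scan
2002-01-07 09:30:02 ALLOWED src=192.168.1.4 sig=none
2002-01-07 11:45:51 BLOCKED src=10.0.0.8 sig=cgi-probe
2002-01-08 22:15:10 BLOCKED src=10.0.0.9 sig=cgi-probe
"""

def intrusions_repelled(log_text: str) -> Counter:
    """Tally BLOCKED events per day as a simple, auditable SLA metric."""
    per_day = Counter()
    for line in log_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "BLOCKED":
            per_day[fields[0]] += 1
    return per_day

print(dict(intrusions_repelled(SAMPLE_LOG)))  # prints {'2002-01-07': 2, '2002-01-08': 1}
```

Because the provider controls the source data, the client would pair a metric like this with safeguards against modification of the logs, such as retaining backup copies.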

To motivate the vendor to behave appropriately, the organization must measure the right things: results over which the provider has control. However, care should be taken to ensure that the vendor cannot directly influence the outcome of the collection process. In the example above, the logs should be monitored to ensure that they are not modified manually, or backup copies should be turned over to the client on a regular basis. The SLA metrics should be reasonable, in that they can be measured without introducing a burdensome data collection requirement. The frequency of measurement and audits should be established in advance, as should expectations for how the vendor will respond to security issues and whether the vendor will participate in disaster recovery planning and rehearsals. Even if the provider is responsible for monitoring equipment such as firewalls or intrusion detection devices, the organization may want


to retain control of the incident response process, particularly if the possibility of future legal action exists. In these cases, the client may specify that the provider is to identify, but not act on, suspected security incidents; the client may ask the provider for recommendations but manage or staff the response process itself. Other organizations distinguish between internal and external threats or intrusions, to avoid having an outside organization respond to incidents caused by the client's own employees.

Monitor Performance. Once the contract is in place and the SLA is active, managing the ongoing relationship with the service provider becomes the same as managing any other contractual arrangement. The provider is responsible for performing the work to specifications, and the client is responsible for monitoring performance and managing the contract.

Monitoring and reviewing the outsourced functions are critically important. Although accountability for the success of the arrangement remains with the client organization, the responsibility for monitoring can be shared, or it may be assigned to an independent group inside or outside the organization. Throughout the life of the contract, there should be clear, single points of contact identified by the client and the vendor, and both should fully understand and support the provisions for coordinating emergency response during a security breach or disaster.

Phase 4: Transition Out

In an ideal world, the outsourcing arrangement would continue to the parties' mutual satisfaction, and the client organization should indeed include provisions in the contract for renewal, for technical refresh, and for adjustment of terms and conditions as the need arises. In practice, however, most arrangements end sooner or later. It is important to define in advance, in the contract and SLA, the terms that will govern the parties if the client decides to bring the work in-house or to use another contractor, along with provisions for penalties should either party not comply. Should the arrangement end, the organization should continue to monitor vendor performance during the transition out. The following tasks should be completed to the satisfaction of both vendor and client:

• All property is returned to its original owner (with reasonable allowance for wear and tear).
• Documentation is fully maintained and up-to-date.
• Outstanding work is complete and documented.
• Data owned by each party is returned, along with documented settings for security controls. This includes backup copies.

AU1518Ch23Frame Page 380 Thursday, November 14, 2002 6:14 PM

SECURITY MANAGEMENT PRACTICES

Exhibit 23-3. Customer satisfaction with security outsourcing. (Survey results: Outstanding, 17 percent; Satisfactory, 55 percent; Needs Work, 25 percent; Not Working, 3 percent.)

• If there is to be staff turnover, the hiring organization has completed the hiring process.
• Requirements for confidentiality and nondisclosure continue to be followed.
• If legally required, the parties are released from any indemnities, warranties, etc.

CONCLUSION

The growth of the MSSP market clearly demonstrates that outsourcing of security services can be a successful venture for both the client and the vendor. While the market is undergoing some consolidation and refocusing as this chapter is being written, in the final analysis outsourcing security services is not much different from outsourcing any other IT service, and the IT outsourcing industry is established and mature. The lessons learned from one clearly apply to the other, and organizations that choose to outsource are in fact applying those lessons. As Exhibit 23-3 shows, the majority of companies that outsource their security describe their level of satisfaction as outstanding or satisfactory.7

Outsourcing the security of an organization’s information assets may be the antithesis of the ancient “security through obscurity” model. However, in today’s networked world, with solid planning in advance, a sound rationale, and good due diligence and management, any organization can outsource its security with satisfaction and success.

References

1. Gary Kaiser, quoted by John Makulowich, In government outsourcing, Washington Technology, Vol. 12, No. 3, May 13, 1997, http://www.washingtontechnology.com/news/12_3/news/12940-1.html.
2. George Hulme, Security’s best friend, InformationWeek, July 16, 2001, http://www.informationweek.com/story/IWK20010713S0009.
3. Jaikumar Vijayan, Outsourcers rush to meet security demand, ComputerWorld, February 26, 2001, http://www.computerworld.com/cwi/story/0,1199,NAV47_STO57980,00.html.



4. Chris King, META report: are managed security services ready for prime time?, Datamation, July 13, 2002, http://itmanagement.earthweb.com/secu/article/0,,11953_801181,00.html.
5. Glen Bruce and Rob Dempsey, Security in Distributed Computing, Hewlett-Packard Professional Books, Saddle River, NJ, 1997.
6. V. Govindarajan and R.N. Anthony, Management Control Systems, Irwin, Chicago, 1995.
7. Forrester Research, cited in When Outsourcing the Information Security Program Is an Appropriate Strategy, http://www.hyperon.com/outsourcing.htm.

ABOUT THE AUTHOR

Laurie Hill McQuillan, CISSP, has been a technology consultant for 25 years, providing IT support services to commercial and federal government organizations. Ms. McQuillan is vice president of KeyCrest Enterprises, a national security consulting company. She has a Master’s degree in technology management and teaches graduate-level classes on the uses of technology for research and the impact of technology on culture. She is treasurer of the Northern Virginia Chapter of the Information Systems Security Association (ISSA) and a founding member of CASPR, an international project that plans to publish Commonly Accepted Security Practices and Recommendations. She can be contacted at [email protected]

Copyright 2003. Laurie Hill McQuillan. All Rights Reserved.




Chapter 24

Considerations for Outsourcing Security

Michael J. Corby

Outsourcing computer operations is not a new concept. Since the 1960s, companies have been in the business of providing computer operations support for a fee. The risks and challenges of providing a reliable, confidential, and responsive data center operation have increased, leading many organizations to consider retaining an outside firm to manage the data center in a way that minimizes the risks associated with these challenges.

Let me say at the outset that there is no one solution for all environments. Each organization must decide for itself whether to build and staff its own IT security operation or hire an organization to do it for them. This discussion will help clarify the factors most often used in deciding whether outsourcing security is a good move for your organization.

HISTORY OF OUTSOURCING IT FUNCTIONS

Data Center Operations

Computer facilities have traditionally been very expensive undertakings. The equipment alone often cost millions of dollars, and the room to house the computer equipment required extensive and expensive special preparation. For that reason, many companies in the 1960s and 1970s seriously considered the ability to provide the functions of an IT (or EDP) department without the expense of building the computer room, hiring computer operators, and, of course, acquiring the equipment.

Computer service bureaus and shared facilities sprang up to serve the banking, insurance, manufacturing, and service industries. Through shared costs, these outsourced facilities were able to offer cost savings to their customers and also turn a pretty fancy profit in the process.

In almost all cases, the reasons for justifying the outsourcing decision were based on financial factors. Many organizations viewed the regular



monthly costs associated with the outsource contract as far more acceptable than the need to justify and depreciate a major capital expense. In addition to the financial reasons for outsourcing, many organizations also saw the opportunity to off-load the risk of having to replace equipment and software long before it had been fully depreciated due to increasing volume, software and hardware enhancements, and training requirements for operators, system programmers, and other support staff. The technical landscape at the time was changing rapidly; there was an aura of special knowledge held by those who knew how to manage the technology, and that knowledge was shared with only a few individuals outside the “inner circle.”

Organizations that offered this service were grouped according to their market, which was dictated by the size, location, or support needs of the customer:

• Size was measured in the number of transactions per hour or per day, the quantity of records stored in various databases, and the size and frequency of printed reports.
• Location was important because, in the pre-data-communications era, the facility often accepted transactions delivered by courier in paper batches and delivered reports directly to the customer in paper form. To take advantage of the power of automating the business process, quick turnaround was a big factor.
• The provider’s depth of expertise and special areas of competence were also a factor for many organizations. Banks wanted to deal with a service that knew the banking industry, its regulations, its need for detailed audits, and its intense control procedures. Application software products designed for specific industries were factors in deciding which service could support those industries. In most instances, the software most often used for a particular industry could be found running in a particular hardware environment.
Services were oriented around IBM, Digital, Hewlett-Packard, NCR, Burroughs, Wang, and other brands of computer equipment. Along with the hardware type came the technical expertise to operate, maintain, and diagnose problems in that environment. Few services would be able to support multiple brands of hardware.

Of course, selecting a data center service was a time-consuming and emotional process. The expense was still a major financial factor, and there was the added risk of putting the organization’s competitive edge and customer relations in the hands of a third party. Consumers and businesses cowered when they were told that their delivery was postponed or that their payment was not credited because of a computer problem. Nobody wanted to be forced to go through a file conversion process and


learn how to deal with a new organization any more than necessary. The ability to provide a consistent and highly responsive “look and feel” to the end customer was important, and the vendor’s perceived reliability and long-term capabilities to perform in this area were crucial factors in deciding which service and organization would be chosen.

Contracting Issues

There were very few contracting issues in the early days of outsourced data center operations. Remember that almost all applications involved batch processing and paper exchange. Occasionally, limited file inquiry was provided, but price was the basis for most contract decisions. If the reports could be delivered within hours, or at least within the same day, the service was acceptable. If errors or problems were noted in the results, the obligation of the service was to rerun the process.

Computer processing has always been bathed in the expectation of confidentiality. Organizations recognized the importance of keeping their customer lists, employee ranks, financial operations, and sales information confidential, and contracts were respectful of that factor. If any violations of this expectation of confidentiality occurred in those days, they were isolated incidents that were dealt with privately, probably in the courts.

Whether processing occurred in a contracted facility or in-house, the expectation of an independent oversight or audit process was the same. EDP auditors focused on the operational behavior of the servicer, designed specific procedures, and usually communicated expectations clearly. Disaster recovery planning, document storage, tape and disk archival procedures, and software maintenance procedures were reviewed and expected to meet generally accepted practices.
Overall, the performance targets were communicated, contracts were structured around meeting those targets, and companies were fairly satisfied with the level of performance they were getting for their money; they also had the benefit of not dealing with the technology changes or the huge capital costs associated with their IT operations.

Control of Strategic Initiatives

The dividing line in whether an organization elected to acquire the services of a managed data center operation or do the work in-house was the control of its strategic initiatives. For most regulated businesses, the operations were not permitted to get too creative. The most aggressive organizations generally did not use data center operations as an integral component of their strategy, and those that did deploy new or creative computer processing initiatives generally did not outsource that part of their operation to a shared service.


NETWORK OPERATIONS

The decision to outsource network operations came later in the evolution of the data center. The change from a batch, paper processing orientation to an online, electronically linked operation brought about many of the same decisions that organizations had faced years before when deciding whether to “build or buy” their computer facilities.

The scene began to change when organizations decided to look into the cost, technology, and risk involved with network operations. New metrics of success were part of this concept. Gone was the almost single-minded focus on cost as the basis of a decision to outsource or develop an inside data communication facility. Reliability, culminating in the concept we now know as continuous availability, became the biggest reason to hire a data communications servicer. The success of the business often came to depend on the success of the data communications facility. Imagine the effect on today’s banking environment if ATMs had very low reliability or were fraught with security problems or theft of cash or data. We frequently forget how different our personal banking was in the period before the proliferation of ATMs. A generation of young adults has been transformed by the direct ability to communicate electronically with a bank, much in the same way that, years ago, credit cards opened up a new relationship between consumers and retailers.

The qualifications expected of the network operations provider were also very different from those of the batch-processing counterpart. Because the ability to work extra hours to catch up when things fell behind was gone, new expectations had to be set for successful network operators. Failures to provide the service were clearly and immediately obvious to the organization and its clients. Several areas of technical qualification were established. One of the biggest questions used to gauge qualified vendors was bandwidth.
How much data could be transmitted to and through the facility? This was reviewed from both a micro and a macro perspective. From the micro perspective, the question was, “How fast could data be sent over the network to the other end?” The higher the speed, the higher the cost. On a larger scale, what was the capacity of the network provider to transfer data over a 24-hour period? This included downtime, retransmissions, and recovery.

This demand gave rise to the 24/7 operation, in which staples of a sound operation, such as daily backups and software upgrades, were considered impediments to the totally available network. From this demand came the design and proliferation of dual processors and totally redundant systems. Front-end processors and network controllers were designed to be failsafe: if anything happened to any of the components, a second copy of that component was ready to take over. For


the most advanced network service provider, this included dual data processing systems at the back end executing every transaction twice, sometimes in different data centers, to achieve total redundancy. Late delivery and slow delivery became unacceptable failures and were a prime cause for seeking a new network service provider.

After the technical capability of the hardware/software architecture was considered, the competence of the staff directing the facility was evaluated. How smart, how qualified, how experienced were the people who ran and directed the network provider? Did the people understand the mission of the organization, and could they appreciate the need for a solid and reliable operation? Could they upgrade operating systems with total confidence? Could they implement software fixes and patches to assure data integrity and security? Could they properly interface with the applications software developers without requiring additional people in the organization to duplicate their design and research capabilities?

In addition to pushing bits through the wires, the network service provider took on the role of front-end manager of the organization’s strategy. Competence was a huge factor in building the level of trust that executives demanded. Along with this swing toward strategic issues, organizations became very concerned about long-term viability. Often, huge companies were the only ones that could demonstrate this promise of longevity. The mainframe vendors, global communications companies, and large, well-funded network servicers were the most successful at offering these services universally. As the world of commerce began to shrink, the most viable providers were the ones who could offer services in any country, in any culture, at any time. The data communications world became a nonstop, “the store never closes” operation.
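The redundancy and continuous-availability goals described above can be quantified with standard reliability arithmetic: a system of independent redundant copies is down only when every copy is down at once, and each additional “nine” of availability shrinks the permitted downtime tenfold. A minimal sketch (the formulas are textbook reliability math; the function names are our own):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960

def redundant_availability(a: float, copies: int = 2) -> float:
    """Availability of N independent redundant copies of a component.
    The combined system is unavailable only when every copy fails at once."""
    return 1 - (1 - a) ** copies

def allowed_downtime_minutes(availability_percent: float) -> float:
    """Minutes of downtime per year permitted by an availability guarantee."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

# Duplicating a component that is up 99 percent of the time yields
# roughly four nines for the pair.
print(round(redundant_availability(0.99), 6))  # 0.9999
for pct in (99.0, 99.99, 99.999):
    print(f"{pct}% uptime allows {allowed_downtime_minutes(pct):,.1f} min/year of downtime")
```

Two nines permits more than three days of outage a year, while five nines permits only a handful of minutes, which is why such guarantees carried real weight once they were written into contracts.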
Contracting Issues

With this new demand for qualified providers with global reach came new demands for contracts that reflected the growing importance of this outsourcing decision to the lifeblood of the organization. Quality-of-service expectations were explicitly defined and put into contracts. Response time would be measured in seconds or even milliseconds. Uptime was measured by the number of nines in the guaranteed percentage. Two nines, or 99 percent, was not good enough; four nines (99.99 percent) or even five nines (99.999 percent) became the common expectation of availability.

A new emphasis developed regarding the extent to which data would be kept confidential. Questions were asked, and a response was expected in the


contract regarding access to the data while in transit. Private line networks were expected for most data communications facilities because of the perceived vulnerability of public telecommunications facilities. In some high-sensitivity areas, encryption was requested. Modems were developed that would encrypt data while in transit, and software tools were designed to help ensure that unauthorized people would not be able to see the data sent.

Independent auditors reviewed data communications facilities periodically. This review expanded to include a picture of the data communications operation over time, using logs and transaction monitors. Management of the data communications provider was frequently retained by the organization so that it could attest to the data integrity and confidentiality issues that were part of the new expectations levied by external regulators, reviewers, and investors. If the executives were required to increase security and reduce response time to maintain a competitive edge, the data communications manager was expected to place that demand on the outsourced provider.

Control of Strategic Initiatives

As the need to integrate this technical capability became more important to the overall organization mission, more and more companies opted to retain their own data communications management. Nobody other than the communications carriers and utilities actually started hanging wires on poles, but data communications devices were bought and managed by employees, not contractors. Alternatives to public networks were considered; microwave, laser, and satellite communications were evaluated in an effort to make sure that the growth plan was not derailed by dependence on outside organizations. The daily operating cost of this communications capability was large, but in comparison to the computer room equipment and software, the capital outlay was small.
With the right people directing the data communications area, there was less need for outsourced data communications facilities as a stand-alone service. In many cases the function was rolled into an existing managed data center, but in probably just as many instances, the managed data center sat at the end of the internally controlled data communications facility. The ability to deliver reliable communications to customers, constituents, providers, and partners was considered a key strategy of many forward-thinking organizations.

APPLICATION DEVELOPMENT

While the data center operations and data communications outsourcing industries have been fairly easy to isolate and identify, the application development outsourcing business is more subtle. First, there are usually


many different application software initiatives going on concurrently within any large organization, each with a different corporate mission, different metrics for success, and a very different user focus. Software for customer relationship management is very different from software for human resources management, manufacturing planning, investment management, or general accounting. In addition, outsourced application development can be carried out by general software development professionals, by software vendors, or by targeted software enhancement firms. Take, for instance, the well-known IBM manufacturing product Mapics®. Many companies that acquired the software contracted directly with IBM to provide enhancements; many others employed the services of software development organizations specifically oriented toward Mapics enhancements; and some simply added their Mapics product to the list of products supported or enhanced by their general application design and development servicer.

Despite the difficulty of forming a clear picture of application development outsourcing, the justification was always quite clear. Design and development of new software, or of features to be added to software packages, required skills that differed greatly from general data center or communications operations. Hiring people with those skills was often expensive and posed the added challenge that designers were motivated by new, creative design projects. Many companies did not want to pay the salary of good design and development professionals, train and orient them, and give them a one- or two-year design project that they would simply add to their resumes when they went shopping for their next job.
By outsourcing application development, organizations could employ business and project managers who had long careers doing many things related to application work, on a variety of platforms and for a variety of business functions, and simply roll the coding or database expertise in and out as needed.

In many instances, outsourced application developers were also used for another type of activity: routine software maintenance. Good designers hate mundane program maintenance and start looking for new employment if forced to do too much of it. People who are motivated by quick response and by the variety of tasks that can be juggled at the same time are well suited to maintenance work, but they are often less enthusiastic about working on creative designs and user-interactive activities where total immersion is preferred. Outsourcing the maintenance function is a great way to avoid the career dilemma posed by these conflicting needs.

Y2K gave maintenance programmers a whole new universe of opportunities to demonstrate their value. Aside from that once-in-a-millennium opportunity, program language conversions, operating system upgrades, and new software


releases are a constant source of engagements for application maintenance organizations.

Qualifications for this type of service were fairly easy to determine. Knowledge of the hardware platform, the programming language, and related applications were key factors in selecting an application development firm. Beyond those specifics, a key factor in selecting an application developer was actual experience with the specific application in question. A financial systems analyst or programmer was designated to work on financial systems, a manufacturing specialist on manufacturing systems, and so on. Word quickly spread about which organizations were the application and program development leaders. Companies opened offices across the United States and around the world offering contract application services. Inexpensive labor was available for some programming tasks if contracted through international job shops, but the majority of application development outsourcing took place close to the organization that needed the work done. Often, to ensure proper qualifications, programming tests were given to the application coders. Certifications and test-based credentials supplement extensive experience and intimate language knowledge, and both methods are cited as meritorious in determining the credentials of the technical development staff assigned to the contract.

Along with the measurable criteria of syntax knowledge, a key ingredient was the maintainability of the results. Often, one of the great fears was that the program code would be so obscure that only the original developer could maintain it. This is not a good thing. The flexibility to absorb the application development at the time the initial development is completed, or when the contract expires, is a significant factor in selecting a provider. To ensure code maintainability, standards are developed and code reviews are frequently undertaken by the hiring organization.
Perhaps the most complicated part of the agreement is the process by which errors, omissions, and problems are resolved. Often, differences of opinion, interpretations of what is required, and the definitions of things like “acceptable response time” and “suitable performance” were subject to debate and dispute. The chief way this factor was assessed was by contacting reference clients. It probably goes without saying that no application development organization has registered 100 percent satisfaction with 100 percent of its customers 100 percent of the time. Providing the right reference account, one that gives a true representation of the experience, particularly in the application area being evaluated, is a critical credential.

Contracting Issues

Application development outsourcing contracts generally took one of two forms: pay by product or pay by production.


• Pay by product is basically the fixed-price contract: hiring a developer to develop the product and, upon acceptance, paying a certain agreed amount. There are obvious derivations of this concept: phased payments, or payment upon acceptance of work completed at each of several checkpoints, for example, payment upon approval of design concept, code completion, code unit testing, system integration testing, user documentation acceptance, or a determined number of cycles of production operation. This was done to avoid a huge balloon payment at the end of the project, a factor that crushed the cash flow of the provider and crippled the ability of the organization to develop workable budgets.

• Pay by production is the time-and-materials method. The expectation is that the provider works a prearranged schedule and, periodically, the hours worked are invoiced and paid. The presumption is that the hours worked are productive and that the project scope is fixed. Failure of either of these presumptions most often results in projects that never end or that exceed their budgets by huge amounts.

The control against either type of project running amok is qualified approval oversight and audit. Project managers who can determine progress and assess completion targets are generally part of the organization’s review team. In many instances, a third party is retained to advise the organization’s management of the status of the developers and to recommend changes to the project or the relationship if necessary.

Control of Strategic Initiatives

Clearly, the most sensitive aspect of outsourced service is the degree to which the developer is invited into the inner sanctum of the customer’s strategic planning.
Obviously, some projects, such as Y2K upgrades, software upgrades, and platform conversions, do not require anyone to sit in an executive strategy session; but they can offer a glimpse into specifics of product pricing, engineering, investment strategy, and employee and partner compensation that are quite private. Almost always, application development contracts are accompanied by assurances of confidentiality and nondisclosure, with stiff penalties for violation.

OUTSOURCING SECURITY

The history of the various components of outsourcing plays an important part in defining the security outsourcing business issue and how it is addressed by those seeking or providing the service. In many ways, outsourced security service is like a combination of its hardware operation, communications, and application development counterparts all together. Outsourced is the general term; managed security services, or MSS, is the industry name for the operational component of an organization’s total


data facility, viewed solely from the security perspective. As with any broad-reaching component, the best place to start is with a scope definition.

Defining the Security Component to Be Outsourced

Outsourcing security can be a vast undertaking. To delineate each of the components, security outsourcing can be divided into six specific areas or domains:

1. Policy development
2. Training and awareness
3. Security administration
4. Security operations
5. Network operations
6. Incident response

Each area represents a significant opportunity to improve security; the domains are listed in increasing order of complexity. Let us look at each of these domains and define it a bit further.

Security Policies. These are the underpinning of an organization’s entire security profile. Poorly developed policies, or policies that are not kept current with the technology, are a waste of time and space. Often, such policies can work against the organization in that they invite unscrupulous employees or outsiders to violate the intent of the policy and to do so with impunity. The policies must be designed from the perspectives of legal awareness, effective communication skills, and confirmed acceptance on the part of those invited to use the secured facility. (Remember: unless the organization intends to invite the world to enjoy the benefits of the facility, as with a public Web site, the facility is restricted and thereby should be operated as a secured facility.)
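The “confirmed acceptance” requirement lends itself to simple tooling: a record of who has acknowledged which policy version, and who has not. The sketch below is purely illustrative; every class, field, and name in it is a hypothetical assumption, not a structure described in this chapter.

```python
# Hypothetical sketch of policy records with confirmed-acceptance tracking.
# All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Policy:
    title: str
    version: int
    acknowledged_by: set = field(default_factory=set)

    def acknowledge(self, person: str) -> None:
        """Record a user's confirmed acceptance of this policy version."""
        self.acknowledged_by.add(person)

def unacknowledged(policy: Policy, audience) -> set:
    """People who have not yet confirmed acceptance of the policy.
    Since failure to know the policy cannot be a defense for violating it,
    these gaps are the ones that need follow-up."""
    return set(audience) - policy.acknowledged_by

web_policy = Policy("Personal Web Browsing", version=2)
web_policy.acknowledge("alice")
print(sorted(unacknowledged(web_policy, ["alice", "bob"])))  # ['bob']
```

Bumping the version would naturally reset the acknowledgment set, forcing re-acceptance whenever the policy is updated or replaced.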

The unique skills needed to develop policies that can withstand the challenges of these perspectives are frequently a good reason to contract with an outside organization to develop and maintain the policies. Being an outside provider, however, does not lessen the obligation to intimately connect each policy with the internal organization. Buying a book of policies is not sufficient. The policies must present and define the organization’s philosophy regarding the security of its facility and data assets. Policies that are strict about the protection of data on a computer should not be excessively lax regarding the same data in printed form. Similarly, a personal Web browsing policy should reflect the same organization’s policy regarding personal telephone calls, and so on. Good policy developers know this.

Policies cannot put the company in a position of inviting legal action but must be clearly worded to protect its interests. Personal privacy is a good thing, but using company assets for personal tasks and sending correspondence that is attributed to the organization are clear reasons to allow some


level of supervisory review or periodic usage auditing. Again, good policy developers know this.

Finally, policies must be clearly communicated, remain apropos, and carry with them appropriate means for reporting and handling violations and for being updated and replaced. Printed policy books are being replaced with intranet-based, easily updated policies that can be adapted to meet new security demands and rapidly sent to all subject parties. Policy developers need to display a good command of the technology in all its forms: data communication, printed booklets, posters, memos, video graphics, and other nontraditional means of bringing the policy to its intended audience’s attention. Even hot air balloons and skywriting are fair game if they accomplish the intent of getting the policy across. Failure to know the security policy cannot be a defense for violating it. Selecting a security policy developer must take all of these factors into consideration.

Training and Awareness. Training and awareness are also frequently assigned to an outside servicer. Some organizations establish guidelines for the amount and type of training an employee or partner should receive. This can take the form of attending lectures, seminars, and conferences; reading books; enrolling in classes at local educational facilities; or taking correspondence courses. Some organizations will hire educators to provide training in a specific subject matter. This can be done using standard course material suitable for anyone, or it can be a custom-designed session targeted specifically to the particular security needs of the organization.

The most frequent topics of general education that anyone can attend are security awareness, asset protection, data classification, and, recently, business ethics. Anyone at any level is usually responsible to some degree for ensuring that his or her work habits and general knowledge are within the guidance provided by this type of education. Usually conducted by the human resources department at orientation, upon promotion, or periodically, the objective is to make sure that everyone knows the baseline of security expectations. Each attendee is expected to learn what everyone in the organization must do to provide for a secure operation. It should be clearly obvious to anyone who successfully completes such training what constitutes unacceptable behavior. Often, the provider of this service has a list of several dozen standard points that are made in an entertaining and informative manner, with a few custom points where the organization's name or business mission is plugged into the presentation; but it is often 90 percent boilerplate. Selecting an education provider for this type of training is generally based on creative entertainment value — holding the student's attention — and the way in which students register their acknowledgment that


they have heard and understood their obligations. Some use the standard signed acknowledgment form; some even go so far as to administer a digitally signed test. Either is perfectly acceptable but should fit the corporate culture and general tenor.

Some additional requirements are often specified in selecting a training vendor to deal with technical specifics. Usually some sort of hands-on facility is required to ensure that the students know the information and can demonstrate their knowledge in a real scenario. Most often, this education will require a test for mastery or even a supervised training assignment. Providers of this type of education will often deliver these services in their own training center, where equipment is configured and can be monitored to meet the needs of the requesting organization.

In either the general or the specific areas, organizations that outsource their security education generally elect to do a bit of both on an annual basis, with scheduled events and an expected level of participation. Evaluation of the educator is by way of performance feedback forms completed by all attendees. Some advanced organizations will also collect metrics to show that the education has rendered the desired results — for example, fewer password resets, lost files, or system crashes.

Security Administration. Outsourcing security administration begins to get a bit more complicated. Whereas security policies and security education are both essential elements of a security foundation, security administration is part of the ongoing security "face" that an organization puts on every minute of every day, and it demands a higher level of expectations and credentials than the other domains.

First, let us identify what the security administrator is expected to do. In general terms, security administration is the routine adds, changes, and deletes that go along with authorized account administration. This can include verification of identity and creation of a subsequent authentication method, which can be a password, a token, or even a biometric pattern of some sort. Once this authentication method has been established, it needs to be maintained. That means password resets, token replacement, and biometric alternatives (this last one gets a bit tricky, or messy, or both). Another significant responsibility of the security administrator is the assignment of approved authorization levels. Read, write, create, execute, delete, share, and other authorizations can be assigned to objects ranging from an entire computer down to an individual data item, if the organization's authorization schema reaches that level. In most instances, the tools to do this are provided to the administrator, but occasionally there is a need to devise and manage the authority assignment in whatever platform and at whatever level is required by the organization.
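The adds, changes, and deletes and the per-object authorization levels just described can be sketched in miniature. Everything here (the class name, the permission set, the path-style object names) is hypothetical and illustrative, not any particular platform's API:

```python
# Hypothetical sketch of per-object authorization assignment: rights can
# attach to anything from a whole computer down to a single data item.

PERMISSIONS = {"read", "write", "create", "execute", "delete", "share"}

class AccessControl:
    def __init__(self):
        # (user_id, object_path) -> set of granted permissions
        self._grants = {}

    def grant(self, user_id, obj, permissions):
        bad = set(permissions) - PERMISSIONS
        if bad:
            raise ValueError(f"unknown permissions: {bad}")
        self._grants.setdefault((user_id, obj), set()).update(permissions)

    def revoke(self, user_id, obj):
        # The "deletes" half of adds/changes/deletes.
        self._grants.pop((user_id, obj), None)

    def is_allowed(self, user_id, obj, permission):
        # Walk from the most specific object up through its parents, so a
        # grant on "server1/payroll" also covers items beneath it.
        parts = obj.split("/")
        for i in range(len(parts), 0, -1):
            prefix = "/".join(parts[:i])
            if permission in self._grants.get((user_id, prefix), set()):
                return True
        return False

acl = AccessControl()
acl.grant("jsmith", "server1/payroll", {"read"})
print(acl.is_allowed("jsmith", "server1/payroll/salary_field", "read"))  # True
print(acl.is_allowed("jsmith", "server1/payroll", "write"))              # False
```

In practice the administrator's tools hide this mechanism, but the model — subjects, objects at varying granularity, and a bounded permission vocabulary — is the same one the text describes.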


A major responsibility of security administrators that is often overlooked is reporting on their activities. If a security policy is to be deemed effective, the workload should diminish over time if the population of users remains constant. I once worked with an organization that had outsourced the security administration function and paid a fee based on the number of transactions handled. Interestingly, the frequency of authorization reassignments, password resets, and adds, changes, and deletes increased as time went on. The rate of increase was double the rate of user population expansion. We soon discovered that the number of user IDs had mushroomed to two or three times the total number of employees in the company. What is wrong with that picture? Nothing if you are the provider, but a lot if you are the contracting organization.

The final crucial responsibility of the security administrator is making sure that the procedures designed to assure data confidentiality, availability, and integrity are carried out according to plan. Backup logs, incident reports, and other operational elements — although not exactly part of most administrators' responsibilities — are to be monitored by the administrator, with violations or exceptions reported to the appropriate person.

Security Operations. The security operations domain has become another recent growth area in outsourced security services. Physical security was traditionally separate from data security or computer security; each had its own set of credentials and its own objectives. Hiring a company because it has a well-established physical security reputation does not qualify it as a good data security or computer security operations provider. As has been said, "Guns, guards, and dogs do not make a good data security policy"; but recently they have been called upon to help.
The ability to track the location of people with access cards and even facial recognition has started to blend into the data and operational end of security so that physical security is vastly enhanced and even tightly coupled with security technology.

Many organizations, particularly since September 11, have started to employ security operations specialists to assess and minimize the threat of physical access and damage in many of the same terms that used to be reserved for data access and computer log-in authentication. Traditional security operations such as security software installation and monitoring (remember ACF2, RACF, Top Secret, and others), disaster recovery and data archival (Comdisco, Sunguard, Iron Mountain, and others), and a whole list of application-oriented control and assurance programs and procedures have not gone away. Skills are still required in these areas, but the whole secure operations area has been expanded to include protection of tangible assets as well as data assets. Watch this area for more developments, including the ability to use the GPS location of the


input device, together with the location of the person, as an additional factor in transaction authentication.

Network Operations. The most recent articles on outsourcing security have looked at network operations as the most highly vulnerable and therefore the most sensitive of the security domains. Indeed, much work has been done in this area, and industry analysts are falling over themselves to assess and evaluate the vendors that can provide a managed security operation center, or SOC.

It is important to define the difference between a network operation center (NOC) and a security operation center (SOC). The difference can be explained with an analogy. The NOC is like a pipe that carries and routes data traffic to where it needs to go; the pipe must be wide enough in diameter to ensure that the data is not significantly impeded in its flow. The SOC, on the other hand, is not the pipe but rather a window in the pipe. It does not need to carry the data, but it must be placed at a point where the data flowing through the pipe can be carefully observed. Whereas a NOC that is not wide enough constrains the flow, a SOC that is not fast enough cannot observe the flow carefully enough.

Network operations have changed from their earlier counterparts in terms of the tools and components used to perform the function. Screens are larger and flatter, software is more graphically oriented, and hardware is quicker and provides more control than earlier generations of the NOC; but the basic function is the same. Security operation centers, however, are totally new. In their role of maintaining a close watch on data traffic, significant new software developments have been introduced to stay ahead of the volume. This software architecture generally takes two forms: data compression and pattern matching.

• Data compression usually involves stripping out all the inert traffic (which is usually well over 90 percent) and presenting the data that appears to be interesting to the operator. The operator then decides whether the interesting data is problematic, indicating a security violation or intrusion attempt, or simply a new form of routine inert activity such as the connection of a new server or the introduction of a new user.
• Pattern matching (also known as data modeling) is a bit more complex and much more interesting.
In this method, the data is fitted to known patterns of how intrusion attempts are frequently constructed. For example, there may be a series of pings and several other probing commands, followed by a brief period of analysis, and then an attempt to use the data obtained to gain access or cause denial of service. In its ideal state,


Considerations for Outsourcing Security this method can actually predict intrusions before they occur and give the operator or security manager a chance to take evasive action. Most MSS providers offer data compression, but the ones that have developed a comprehensive pattern-matching technique have more to offer in that they can occasionally predict and prevent intrusions — whereas the data compression services can, at best, inform when an intrusion occurs. Questions to ask when selecting an MSS provider include first determining if they are providing a NOC or SOC architecture (the pipe or the window). Second, determine if they compress data or pattern match. Third, review very carefully the qualifications of the people who monitor the security. In some cases they are simply a beeper service. (“Hello, Security Officer? You’ve been hacked. Have a nice day. Goodbye.”) Other providers have well-trained incident response professionals who can describe how you can take evasive action or redesign the network architecture to prevent future occurrences. There are several cost justifications for outsourcing security operations: • The cost of the data compression and modeling tools is shared among several clients. • The facility is available 24/7 and can be staffed with the best people at the most vulnerable time of day (nights, weekends, and holidays). • The expensive technical skills that are difficult to keep motivated for a single network are highly motivated when put in a position of constant activity. This job has been equated to that of a military fighter pilot: 23 hours and 50 minutes of total boredom followed by ten minutes of sheer terror. The best operators thrive on the terror and are good at it. • Patterns can be analyzed over a wide range of address spaces representing many different clients. This allows some advanced warning on disruptions that spread (like viruses and worms), and also can be effective at finding the source of the disruption (perpetrator). 
Incident Response. The last area of outsourced security involves the response to an incident. A perfectly legitimate and popular planning assumption is that every organization will at some time experience an incident. The ones that respond successfully will consider that incident a minor event. The ones that fail to respond, or respond incorrectly, can experience a disaster. Incident response involves four specialties:

1. Intrusion detection
2. Employee misuse
3. Crime and fraud
4. Disaster recovery


Intrusion Detection. Best depicted by the previous description of the SOC, intrusion detection involves the identification and isolation of an intrusion attempt. This can be from the outside or, in the case of server-based probes, can identify attempts by authorized users to go places they are not authorized to access. It includes placing sensors (certain firewalls, routers, or IDSs) at various points in the network and having those sensors report activity to a central monitoring place. Some of these devices perform a simple form of data compression and can even issue an e-mail or dial a wireless pager when a situation occurs that requires attention.

Employee Misuse. Many attempts to discover employee abuse have been made over the last several years, especially since Internet access became a universal staple of the desktop. Employees have been playing "cat and mouse" with employers over the use of Internet search capabilities for personal research, viewing pornography, gift shopping, participation in unapproved chat rooms, etc. Employers attempt to monitor or prevent such use with filters and firewalls, and employees find new, creative ways to circumvent the restrictions. In the United States, this is a game with huge legal consequences. Employees claim that their privacy has been violated; employers claim that employees are wasting company resources and decreasing their effectiveness. Many legal battles have been waged over this issue.
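The employer-side filtering in this cat-and-mouse game can be sketched as a simple category lookup with an audit trail. The category map, hostnames, and blocking policy here are invented for illustration; real filtering products rely on large, continuously updated category databases:

```python
# Toy Web filter: classify the requested host, apply policy, and log the
# decision either way - the audit trail matters if evidence is needed later.

from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"adult", "gambling", "chat"}
CATEGORY_MAP = {                        # hostname -> category (hypothetical)
    "example-casino.test": "gambling",
    "example-chat.test": "chat",
    "example-news.test": "news",
}

def check_request(url, log):
    host = urlparse(url).hostname
    category = CATEGORY_MAP.get(host, "uncategorized")
    allowed = category not in BLOCKED_CATEGORIES
    log.append((host, category, "allowed" if allowed else "blocked"))
    return allowed

audit_log = []
print(check_request("http://example-news.test/story", audit_log))   # True
print(check_request("http://example-casino.test/bet", audit_log))   # False
```

Note that the log records allowed requests too; a one-sided log invites exactly the legal disputes the text describes.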

Outsourcing the monitoring of employee misuse ensures that independently defined measures are applied across the board, for all employees in all areas and at all levels. Using proper techniques for evidence collection and corroboration makes it far more likely that misuse can be curtailed and that offenders can be successfully dismissed or otherwise punished.

Crime and Fraud. The ultimate misuse is the commission of a crime or fraud using the organization's systems and facilities. Unless there is already a significant legal group tuned in to prosecuting this type of abuse, the forensic analysis and evidence preparation are almost always left to an outside team of experts. Successfully identifying and prosecuting, or seeking retribution from, these individuals depends very heavily on the skills of the first responder to the situation.
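One foundation of the evidence-collection and forensic-preparation skills discussed here is provable integrity: hash each collected item at the moment of collection so that later tampering is detectable. A minimal sketch follows; the file name and contents are invented, and a real chain of custody records far more (who handled what, when, and why):

```python
# Minimal evidence-integrity sketch for a first responder: record a SHA-256
# digest at collection time, verify it whenever the item is produced again.

import hashlib
from datetime import datetime, timezone

def record_evidence(name, data, custody_log):
    digest = hashlib.sha256(data).hexdigest()
    custody_log.append({
        "item": name,
        "sha256": digest,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    })
    return digest

def verify_evidence(entry, data):
    """True if the item still matches the hash recorded at collection."""
    return hashlib.sha256(data).hexdigest() == entry["sha256"]

log = []
original = b"suspicious mail spool contents"
record_evidence("mail_spool.dat", original, log)
print(verify_evidence(log[0], original))                  # True
print(verify_evidence(log[0], original + b" tampered"))   # False
```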

Professionals trained in data recovery, forensic analysis, legal interviewing techniques, and collaboration with local law enforcement and the judiciary are crucial to achieving success when outsourcing this component.

Disaster Recovery. Finally, one of the oldest security specialties is disaster recovery. The proliferation of backup data centers, records archival facilities, and site recovery experts has made this task easier, but most organizations still find it highly beneficial to retain outside services in several areas:


• Recovery plan development: including transfer and training of the organization's recovery team
• Recovery plan testing: usually periodic, with reports to the executives and, optionally, the independent auditors or regulators
• Recovery site preparation: retained in advance but deployed when needed to ensure that the backup facility is fully capable of accepting the operation and, equally important, that the restored original site can resume operation as quickly as possible

All of these functions require special skills for which most organizations cannot justify full-time employment, so outsourcing these services makes good business sense. In many cases, the cost of this service can be recovered in reduced business interruption insurance premiums. Look for a provider that meets insurance company specifications for a risk class reduction.

Establishing the Qualifications of the Provider

For all these different types of security providers, there is no one standard measure of qualifications. Buyers will need to fall back on standard ways to determine their vendor of choice. Here are a few important questions to ask that may help:

• What are the skills and training plan of the people actually providing the service?
• Is the facility certified under a quality or standards-based program (ISO 9000/17799, BS7799, NIST Common Criteria, HIPAA, EU Safe Harbors, etc.)?
• Is the organization large enough, or backed by enough capital, to sustain operation for the duration of the contract?
• How secure is the monitoring facility (for MSS providers)? If anyone can walk through it, be concerned.
• Is there a redundant monitoring facility? Redundant is different from a follow-the-sun or backup site in that essentially no downtime is experienced if the primary monitoring site is unavailable.
• Are there service level agreements (SLAs) that are acceptable to the mission of the organization?
Can they be raised or lowered for an appropriate price adjustment?
• Can the provider perform all of the required services with its own resources, or must it obtain third-party subcontractor agreements for some components of the plan?
• Can the provider prove that its methodology works, with either client testimonials or anecdotal case studies?

Protecting Intellectual Property

Companies in the security outsourcing business all have a primary objective of being a critical element of an organization's trust initiative. To


achieve that objective, strategic information may very likely be included in the security administration, operation, or response domains. Protecting an organization's intellectual property is essential to successfully providing those services. Review the methods that help preserve restricted and confidential data from disclosure or discovery.

In the case of incident response, a preferred contracting method is to have a pre-agreed contract between the investigative team and the organization's attorney to conduct investigations. That way, the response can begin immediately when an event occurs, without protracted negotiation, and any data collected during the investigation (e.g., password policies, intrusion or misuse monitoring methods) is protected by attorney–client privilege from subpoena and disclosure in open court.

Contracting Issues

Contracts for security services can be as different as night and day. When dealing with security services, providers have usually developed standard terms and conditions and contract prototypes that make sure they do not commit to more risk than they can control. In most cases there is some "wiggle room" to insert specific expectations; but because the potential for misunderstanding is high, I suggest supplementing the standard contract with an easy-to-read memo of understanding that defines, in as clear a language as possible, what is included and what is excluded in the agreement. Often, this clear statement of intent can take precedence over "legalese" in the event of a serious misunderstanding or error that could lead to legal action. Attorneys are comfortable with one style of writing; technicians are comfortable with another. Neither is understandable to most business managers. Make sure that all three groups agree on what is going to be done, and at what price.
Most engagements involve payment for services rendered, either time and materials (with an optional maximum) or a fixed periodic amount (in the case of MSS). Occasionally there are special conditions. For example, a prepaid retainer is a great way to ensure that incident response services are deployed immediately when needed. "Next plane out" timing is a good measure of immediacy for incident response teams that may need to travel to reach the site. Obviously, a provider with a broad geographic reach will be able to reach any given site more easily than an organization with only a local presence. Expect a higher rate for court testimony, immediate incident response, and evidence collection.
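The fee structures just described (time and materials with an optional maximum, and premium rates for testimony or emergency response) reduce to simple arithmetic. The rates, hours, and cap below are invented for illustration:

```python
# Sketch of outsourced-security billing: time and materials with an optional
# cap, plus a premium multiplier for court testimony or immediate response.

def invoice(hours, rate, materials=0.0, cap=None, premium=1.0):
    total = hours * rate * premium + materials
    return min(total, cap) if cap is not None else total

# 80 hours at $150/hr with $2,000 of materials, capped at $12,000:
print(invoice(80, 150.0, materials=2000.0, cap=12_000.0))   # 12000.0
# Emergency incident response billed at a 1.5x premium, no cap:
print(invoice(10, 150.0, premium=1.5))                      # 2250.0
```

Working the numbers both ways before signing makes the "optional maximum" clause concrete: in the first case the cap saves the buyer $2,000 against the uncapped total.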


Quality of Service Level Agreements

The key to a successful managed security agreement is negotiating a reasonable service level agreement. Response time is one measure. Several companies will commit to an expected measure of operational improvement, such as fewer password resets, reduced downtime, etc. Try to work out an agreeable set of QoS factors and tie a financial penalty, or an additional time credit, to responses outside acceptable parameters. Be prudent and accept what is attainable, and do not try to make the provider responsible for more than it can control. Aggressively driving a deal past acceptable criteria will result in no contract, or in a contract with a servicer that may fail to thrive.

Retained Responsibilities

Regardless of which domain of service is selected or the breadth of activities to be performed, there are certain elements that should be held within the organization if at all possible.

Management. The first of these is management. Remember that management is responsible for presenting and determining the culture of the organization. Internal and external expectations of performance are almost always carried forth by management style, measurement, and communications, both formal and informal. The risk of losing that culture or identity is considerably increased if the management responsibility for the outsourced functions is not retained by someone in the organization who is ultimately accountable for their performance. If success is based on presenting a trusted image to partners, customers, and employees, help to ensure that success by maintaining close control over the management style and responsibility of the services that are acquired.

Operations. Outsourcing security is not outsourcing business operation. Many companies can help run the business, including operating the data center, the financial operations, legal, shipping, etc.
The same company that provides the operational support should not, as a rule, provide the security of that operation. Keep the old separation-of-duties principle in effect: people other than those who perform the operations should be selected to provide the security direction or security response.

Audit and Oversight. Finally, applying the same principle, invite and encourage frequent audit and evaluation activities. Outsourced services should always be viewed like a yo-yo: whenever necessary, an easy pull on the string should be all that is needed to bring them back into range for a check and a possible redirection. Outsourcing security, or any other business service, should not be treated as a "sign the contract and forget it" project.


Building an Escape Clause. But what if all this is done and it still looks like we made a mistake? Easy. If possible, build an escape clause into the outsourcing contract that allows for a change in scope, direction, or implementation. If these changes (within reason) cannot be accommodated, most professional organizations will allow an escape from the contract. Setup and equipment charges may be incurred, but those are typically small compared to the lost time and expense involved in a misunderstanding or in hiring the wrong service. No security service organization wants a reference client that had to be dragged, kicking and screaming, through a contract simply because its name is on the line, when everyone can agree that the service does not fit.

THE FUTURE OF OUTSOURCED SECURITY

Industries Most Likely to Outsource

The first category of industries most likely to outsource security comprises companies whose key asset is access to reliable data or information services. Financial institutions, especially banks, securities brokers, and insurance, health, or property claims operations, are traditional buyers of security services.

Recent developments in privacy have created a second category: healthcare providers and associated industries. Hospitals, medical care providers, pharmaceutical companies, and health-centered industries have a new need to protect the privacy of personal health information. Reporting on the success of that protection is often a new function that neither fits the existing operation nor justifies full-time staffing. HIPAA compliance will likely initiate a rise in the need for security (privacy) compliance providers.

The third category consists of industries that cannot suffer any downtime or show any compromise of security. Railroads, cargo ships, and air traffic control are obvious examples of industries where continuous availability is a crucial element of success. They may outsource the network operation or a periodic review of their response and recovery plan. Internet retailers that process transactions with credit cards or against credit accounts also fit into this category: release of credit card data, or access to or changes made to purchasing history, is often fatal to continued successful operation.

The final category of industry that may need security services comprises those that base their success on an extraordinary level of trust in the confidentiality of their data. Taken to the extreme, this can include military or national defense organizations.
More routinely, it includes technology research, legal, marketing, and other industries that would suffer severe image loss if it were revealed that their security had been compromised or otherwise rendered ineffectual.


Measurements of Success

I once worked on a fairly complex application project that could easily have suffered from "scope creep." To offset this risk, we encouraged the user to continually ask the team, "How do we know we are done?" This simple question can help identify quite clearly what the expectations are for the security service and how success is measured. What comes to mind is the familiar trade-off among the three milestones of project success: "scope, time, and cost — pick two out of three." A similar principle applies to measuring the success of security services: they provide savings in risk, cost, or effort. Pick two out of three. It is impractical to expect that everything can be completely solved at a low cost with total confidence. Security servicers operate along the same principles. They can explain how you can experience success, but only in two of the three areas. Either they save money, reduce risk, or take on the complexity of securing the enterprise; only rarely can they do all three. Most can address two of these measures, but it falls to the buying organization to determine which two are the most important.

Response of MSS (Managed Security Service) Providers to New World Priorities

After September 11, 2001, the security world moved substantially. What was secure was no longer secure. What was important was no longer important. The world focused on the risk to personal safety and physical security and anticipated the corresponding loss of privacy and confidentiality. In the United States, the constitutional guarantee of freedom was challenged by the collective need for personal safety, and previously guaranteed rights were brought into question. Security providers have started to address physical safety issues in a new light. What was previously deferred to the physical security people is now accepted as part of a holistic approach to risk reduction and trust.
Look for traditional physical security concepts to be enhanced with new technologies such as digital facial imaging, integrated with logical security components. New authentication methods will reliably validate "who did what where," not merely that something was done on a certain device at a certain time. Look also for an increase in the sophistication of pattern matching for intrusion management services. Data compression can tell you faster that something has happened, but sophisticated modeling will soon be able to predict, with good reliability, that an event is forming, in enough time to take appropriate defensive action. We will soon look back on today as the primitive era of security management.


Response of the MSS Buyers to New World Priorities

The servicers are in business to respond quickly to new priorities, but managed security service buyers will also respond to emerging priorities. Creative solutions are nice, but practicality demands that enhanced security prove itself in terms of financial viability. I believe we will see a new emphasis on risk management and image enhancement. Organizations have taken a new tack on the meaning of trust in their industries. Whether it is confidentiality, accuracy, or reliability, the new mantra of business success is the ability to depend on the service or product that is promised. Security in all its forms is key to delivering on that promise.

SUMMARY AND CONCLUSIONS

Outsourced security, or managed security services (MSS), will continue to command the spotlight. Providers of these services will be successful if they can translate technology into real business metrics. Buyers of these services will be successful if they focus on measuring the defined objectives that managed services can deliver. Avoid the attraction offered simply by a recognized name, and get down to real specifics. Building on several old and tried methods, there are new opportunities to use effectively the skills and economies of scale offered by competent MSS providers. Organizations can then refocus on what made them viable or successful in the first place: products and services that can be trusted to deliver on the promise of business success.

ABOUT THE AUTHOR

Michael J. Corby is president of QinetiQ Trusted Information Management, Inc. He was most recently vice president of the Netigy Global Security Practice, CIO for Bain & Company and the Riley Stoker division of Ashland Oil, and founder of M. Corby & Associates, Inc., a regional consulting firm in continuous operation since 1989.
He has more than 30 years of experience in the information security field and has been a senior executive in several leading IT and security consulting organizations. He was a founding officer of (ISC)2 Inc., developer of the CISSP program, and was named the first recipient of the CSI Lifetime Achievement Award. A frequent speaker and prolific author, Corby graduated from WPI in 1972 with a degree in electrical engineering.



Chapter 25

Roles and Responsibilities of the Information Systems Security Officer

Carl Burney, CISSP

Information is a major asset of an organization. As with any major asset, its loss can harm the organization's competitive advantage in the marketplace, cause a loss of market share, and create a potential liability to shareholders or business partners. Protecting information is as critical as protecting other organizational assets, such as plant assets (i.e., equipment and physical structures) and intangible assets (i.e., copyrights or intellectual property). It is the information systems security officer (ISSO) who establishes a program of information security to help ensure the protection of the organization's information.

The information systems security officer is the main focal point for all matters involving information security. Accordingly, the ISSO will:

• Establish an information security program, including:
  — Information security plans, policies, standards, guidelines, and training
• Advise management on all information security issues
• Provide advice and assistance on all matters involving information security

0-8493-1518-2/03/$0.00+$1.50 © 2003 by CRC Press LLC


THE ROLE OF THE INFORMATION SYSTEMS SECURITY OFFICER

There can be many different security roles in an organization in addition to the information systems security officer, such as:

• Network security specialist
• Database security specialist
• Internet security specialist
• E-business security specialist
• Public key infrastructure specialist
• Forensic specialist
• Risk manager

Each of these roles is in a unique, specialized area of the information security arena and has specific but limited responsibilities. However, it is the role of the ISSO to be responsible for the entire information security effort in the organization. As such, the ISSO has many broad responsibilities, crossing all organizational lines, to ensure the protection of the organization's information.

RESPONSIBILITIES OF THE INFORMATION SYSTEMS SECURITY OFFICER

As the individual with the primary responsibility for information security in the organization, the ISSO will interact with other members of the organization in all matters involving information security, to include:

• Develop, implement, and manage an information security program.
• Ensure that there are adequate resources to implement and maintain a cost-effective information security program.
• Work closely with different departments on information security issues, such as:
  — The physical security department on physical access, security incidents, security violations, etc.
  — The personnel department on background checks, terminations due to security violations, etc.
  — The audit department on audit reports involving information security and any resulting corrective actions
• Provide advice and assistance concerning the security of sensitive information and the processing of that information.
• Provide advice and assistance to the business groups to ensure that information security is addressed early in all projects and programs.
• Establish an information security coordinating committee to address organization-wide issues involving information security matters and concerns.
• Serve as a member of technical advisory committees.

• Consult with and advise senior management on all major information security-related incidents or violations.
• Provide senior management with an annual state of information security report.

Developing, implementing, and managing an information security program is the ISSO's primary responsibility. The information security program will cross all organizational lines and encompass many different areas to ensure the protection of the organization's information. Exhibit 25-1 contains a noninclusive list of the different areas covered by an information security program.

Exhibit 25-1. An information security program will cover a broad spectrum.

• Policies, standards, guidelines, and rules
• Access controls
• Audits and reviews
• Configuration management
• Contingency planning
• Copyright
• Incident response
• Personnel security
• Physical security
• Reports
• Risk management
• Security software/hardware
• Testing
• Training
• Systems acquisition
• Systems development
• Certification/accreditation
• Exceptions

Policies, Standards, Guidelines, and Rules

• Develop and issue security policies, standards, guidelines, and rules.
• Ensure that the security policies, standards, guidelines, and rules appropriately protect all information that is collected, processed, transmitted, stored, or disseminated.
• Review (and revise if necessary) the security policies, standards, guidelines, and rules on a periodic basis.
• Specify the consequences for violations of established policies, standards, guidelines, and rules.
• Ensure that all contracts with vendors, contractors, etc. include a clause that the vendor or contractor must adhere to the organization's security policies, standards, guidelines, and rules, and be liable for any loss due to violation of these policies, standards, guidelines, and rules.

Access Controls

• Ensure that access to all information systems is controlled.
• Ensure that the access controls for each information system are commensurate with the level of risk, as determined by a risk assessment.

• Ensure that access controls cover access by workers at home, dial-in access, connection from the Internet, and public access.
• Ensure that additional access controls are added for information systems that permit public access.

Audits and Reviews

• Establish a program for reviewing and evaluating the security controls in each system, both periodically and when systems undergo significant modifications.
• Ensure audit logs are reviewed periodically and all audit records are archived for future reference.
• Work closely with the audit teams in required audits involving information systems.
• Ensure the extent of audits and reviews involving information systems is commensurate with the level of risk, as determined by a risk assessment.

Configuration Management

• Ensure that configuration management controls monitor all changes to information systems software, firmware, hardware, and documentation.
• Monitor the configuration management records to ensure that implemented changes do not compromise or degrade security and do not violate existing security policies.

Contingency Planning

• Ensure that contingency plans are developed, maintained in an up-to-date status, and tested at least annually.
• Ensure that contingency plans provide for enough service to meet the minimal needs of users of the system and provide for adequate continuity of operations.
• Ensure that information is backed up and stored off-site.

Copyright

• Establish a policy against the illegal duplication of copyrighted software.
• Ensure inventories are maintained of each information system's authorized/legal software.
• Ensure that all systems are periodically audited for illegal software.

Incident Response

• Establish a central point of contact for all information security-related incidents or violations.

• Disseminate information concerning common vulnerabilities and threats.
• Establish and disseminate a point of contact for reporting information security-related incidents or violations.
• Respond to and investigate all information security-related incidents or violations, maintain records, and prepare reports.
• Report all major information security-related incidents or violations to senior management.
• Notify and work closely with the legal department when incidents are suspected of involving criminal or fraudulent activities.
• Ensure guidelines are provided for incidents suspected of involving criminal or fraudulent activities, to include:
  — Collection and identification of evidence
  — Chain of custody of evidence
  — Storage of evidence

Personnel Security

• Implement personnel security policies covering all individuals with access to information systems or to data from such systems. Clearly delineate responsibilities and expectations for all individuals.
• Ensure all information systems personnel and users have the proper security clearances, authorizations, and need-to-know, if required.
• Ensure each information system has an individual, knowledgeable about information security, assigned responsibility for the security of that system.
• Ensure all critical processes employ separation of duties so that no one person can subvert a critical process.
• Implement periodic job rotation for selected positions to ensure that present job holders have not subverted the system.
• Ensure users are given only those access rights necessary to perform their assigned duties (i.e., least privilege).

Physical Security

• Ensure adequate physical security is provided for all information systems and all of their components.
• Ensure all computer rooms and network/communications equipment rooms are kept physically secure, with access by authorized personnel only.
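The incident-response guidelines above call for collecting evidence with a documented chain of custody. A minimal sketch of how such a custody log might be kept is shown below, using cryptographic hashes so that later tampering with evidence can be detected; the function names, log fields, and workflow here are illustrative assumptions, not anything prescribed by the handbook.

```python
import hashlib
from datetime import datetime, timezone

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody(log, evidence_path, custodian, action):
    """Append one chain-of-custody entry: who handled which evidence, when,
    and what the evidence hashed to at that moment."""
    log.append({
        "evidence": evidence_path,
        "sha256": sha256_of(evidence_path),
        "custodian": custodian,
        "action": action,
        "utc_time": datetime.now(timezone.utc).isoformat(),
    })

def verify_integrity(log_entry):
    """Re-hash the evidence file and compare with the recorded digest."""
    return sha256_of(log_entry["evidence"]) == log_entry["sha256"]
```

In practice such a log would itself need to be protected (e.g., stored on write-once media or signed), since an attacker who can alter both the evidence and the log defeats the check; the sketch only shows the hashing discipline.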
Reports

• Implement a reporting system, to include:
  — Informing senior management of all major information security-related incidents or violations

  — An annual State of Information Security Report
  — Other reports as required (e.g., for federal organizations, OMB Circular No. A-130, Management of Federal Information Resources)

Risk Management

• Establish a risk management program to identify and quantify all risks, threats, and vulnerabilities to the organization's information systems and data.
• Ensure that risk assessments are conducted to establish the appropriate levels of protection for all information systems.
• Conduct periodic risk analyses to maintain proper protection of information.
• Ensure that all security safeguards are cost-effective and commensurate with the identifiable risk and the resulting damage if the information were lost, improperly accessed, or improperly modified.

Security Software/Hardware

• Ensure security software and hardware (e.g., anti-virus software, intrusion detection software, firewalls) are operated by trained personnel, properly maintained, and kept updated.

Testing

• Ensure that all security features, functions, and controls are periodically tested, and that the test results are documented and maintained.
• Ensure new information systems (hardware and software) are tested to verify that they meet the documented security specifications and do not violate existing security policies.

Training

• Ensure that all personnel receive mandatory, periodic training in information security awareness and accepted information security practices.
• Ensure that all new employees receive an information security briefing as part of the new-employee indoctrination process.
• Ensure that all information systems personnel are provided appropriate information security training for the systems with which they work.
• Ensure that all information security training is tailored to what users need to know about the specific information systems with which they work.
• Ensure that information security training stays current by periodically evaluating and updating the training.
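The earlier risk management bullets call for quantifying risk and verifying that safeguards are cost-effective. The chapter does not prescribe a formula, but a standard formulation in risk analysis is annualized loss expectancy: ALE = SLE × ARO, where single loss expectancy (SLE) is asset value times exposure factor and ARO is the annualized rate of occurrence. The sketch below encodes that arithmetic; the figures in the usage note are invented for illustration.

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = SLE * ARO, where SLE = asset value * exposure factor.

    exposure_factor is the fraction of the asset's value lost per incident
    (0.0 to 1.0); annual_rate_of_occurrence is expected incidents per year.
    """
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

def safeguard_is_cost_effective(ale_before, ale_after, annual_safeguard_cost):
    """A safeguard is worthwhile if the annualized risk it removes
    exceeds its annualized cost."""
    return (ale_before - ale_after) > annual_safeguard_cost
```

For example, a $1,000,000 asset with a 20 percent exposure factor and an expected incident every two years (ARO 0.5) carries an ALE of $100,000; a $30,000-per-year control that cuts the ALE to $20,000 would pass the cost-effectiveness test above.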

Systems Acquisition

• Ensure that appropriate security requirements are included in specifications for the acquisition of information systems.
• Ensure that all security features, functions, and controls of a newly acquired information system are tested, prior to system implementation, to verify that the system meets the documented security specifications and does not violate existing security policies.
• Ensure all default passwords are changed when installing new systems.

Systems Development

• Ensure information security is part of the design phase.
• Ensure that a design review of all security features is conducted.
• Ensure that all information systems security specifications are defined and approved prior to programming.
• Ensure that all security features, functions, and controls are tested, prior to system implementation, to verify that the system meets the documented security specifications and does not violate existing security policies.

Certification/Accreditation

• Ensure that all information systems are certified/accredited, as required.
• Act as the central point of contact for all information systems that are being certified/accredited.
• Ensure that all certification requirements have been met prior to accreditation.
• Ensure that all accreditation documentation is properly prepared before submission for final approval.

Exceptions

• If an information system is not in compliance with established security policies or procedures, and cannot or will not be corrected:
  — Document:
    • The violation of the policy or procedure
    • The resulting vulnerability
    • Any corrective action that would correct the violation
    • A risk assessment of the vulnerability
  — Have the manager of the noncompliant information system document and sign the reasons for noncompliance.
  — Send these documents to the CIO for signature.
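One of the systems-acquisition items above, changing default passwords on newly installed systems, lends itself to a simple automated check during a post-install review. The sketch below is a hypothetical illustration: the default-credential list, the account-inventory format, and the function name are all invented for the example, and a real review would draw on vendor-specific default lists.

```python
# Illustrative sample of well-known vendor default (username, password)
# pairs; a real check would use a maintained, vendor-specific list.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "toor"),
}

def find_default_credentials(accounts):
    """Flag accounts still using a known default credential pair.

    `accounts` is an iterable of (hostname, username, password) tuples,
    e.g. gathered during a post-installation configuration review.
    Returns the (hostname, username) pairs that need their passwords changed.
    """
    return [
        (host, user)
        for host, user, password in accounts
        if (user, password) in KNOWN_DEFAULTS
    ]
```

Running such a check before a new system goes live gives the ISSO a concrete, repeatable way to enforce the "change all default passwords" requirement rather than relying on installer diligence alone.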

THE NONTECHNICAL ROLE OF THE INFORMATION SYSTEMS SECURITY OFFICER

As mentioned, the ISSO is the main focal point for all matters involving information security in the organization, and the ISSO will:

• Establish an information security program.
• Advise management on all information security issues.
• Provide advice and assistance on all matters involving information security.

Although information security may be considered technical in nature, a successful ISSO is much more than a "techie." The ISSO must be a businessman, a communicator, a salesman, and a politician.

The ISSO (the businessman) needs to understand the organization's business, its mission, its goals, and its objectives. With this understanding, the ISSO can demonstrate to the rest of the management team how information security supports the business of the organization. The ISSO must be able to balance the needs of the business with the needs of information security. When the two conflict, the ISSO (the businessman, the politician, and the communicator) must translate the technical side of information security into terms that business managers can understand and appreciate, thus building consensus and support. Without this management support, the ISSO will not be able to implement an effective information security program.

Unfortunately, information security is sometimes viewed as unnecessary, as something that gets in the way of "real work," and as an obstacle most workers try to circumvent. Perhaps the biggest challenge is to implement information security into the working culture of an organization. Anybody can stand up in front of a group of employees and talk about information security, but the ISSO (the communicator and the salesman) must "reach" the employees and instill in them the value and importance of information security.
Otherwise, the information security program will be ineffective.

CONCLUSION

It is readily understood that information is a major asset of an organization. Protection of this asset is the daily responsibility of all members of the organization, from top-level management to the most junior workers. However, it is the ISSO who carries