Lecture Notes in Computer Science 2506
Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
Metin Feridun Peter Kropf Gilbert Babin (Eds.)
Management Technologies for E-Commerce and E-Business Applications 13th IFIP/IEEE International Workshop on Distributed Systems: Operations and Management, DSOM 2002 Montreal, Canada, October 21-23, 2002 Proceedings
Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors
Metin Feridun
IBM Research, Zurich Research Laboratory
Säumerstr. 4, 8803 Rueschlikon, Switzerland
E-mail: [email protected]
Peter Kropf
University of Montreal, Department of Computer Science and Operations Research
C.P. 6128, succursale Centre-ville, Montréal (Québec), Canada H3C 3J7
E-mail: [email protected]
Gilbert Babin
Department of Information Technologies, École des Hautes Études Commerciales
3000, chemin de la Côte-Sainte-Catherine, Montréal (Québec), Canada H3T 2A7
E-mail: [email protected]

Cataloging-in-Publication Data applied for
Bibliographic information published by Die Deutsche Bibliothek
Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available in the Internet at http://dnb.ddb.de
CR Subject Classification (1998): C.2, K.6, D.1.3, D.4.4, K.4.4 ISSN 0302-9743 ISBN 3-540-00080-1 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany Typesetting: Camera-ready by author, data conversion by PTP Berlin, Stefan Sossna e. K. Printed on acid-free paper SPIN: 10870863 06/3142 543210
Preface
This volume of the Lecture Notes in Computer Science series contains all the papers accepted for presentation at the 13th IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (DSOM 2002), which was held at the University of Montreal, Canada, from 21 to 23 October 2002. DSOM 2002 was the thirteenth workshop in a series of annual workshops and it follows in the footsteps of highly successful previous meetings, the most recent of which were held in Nancy, France (DSOM 01), Austin, USA (DSOM 00), Zurich, Switzerland (DSOM 99), Delaware, USA (DSOM 98), and Sydney, Australia (DSOM 97). The goal of the DSOM workshops is to bring together researchers in the areas of network, systems, and services management, from both industry and academia, to discuss recent advances and foster future growth in this field. In contrast to the larger management symposia, such as IM (Integrated Management) and NOMS (Network Operations and Management Symposium), the DSOM workshops are organized as single-track programs in order to stimulate interaction among participants.

The focus of DSOM 2002 is "Making e-commerce and e-business applications successful through management." As e-commerce and e-business applications grow to be a prominent part of day-to-day business, management technologies that support them gain in importance. The papers presented at the workshop address the underlying technologies that are key to the success of e-commerce and e-business applications, assuring quality of service, security and trust, end-to-end service management, and ubiquity. This year we were fortunate to receive 40 high-quality papers from 15 countries, of which 16 were selected for the 5 technical sessions. The technical sessions covered the topics "Managing Quality of Service," "Measuring Quality of Service," "Service Architectures," "Policy and Process," and "Fault Analysis." This year we also included in the program a panel session titled "Enforcing QoS: Myth or Reality?", to provide a forum for open discussion of the state of the art and requirements for quality-of-service configuration, monitoring, and enforcement.

This workshop owes its success to all the members of the technical program committee, who did an excellent job of encouraging their colleagues in the field to submit high-quality papers, and who devoted a lot of their time to help create an outstanding technical program. We thank them sincerely. We are also very grateful to the volunteer reviewers who gave generously of their time to make the review process effective.

October 2002
Gilbert Babin Metin Feridun Peter Kropf
Organization
The 13th IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (DSOM 2002) was sponsored by IFIP (TC 6, Communication Systems; WG 6.6, Management of Networks and Distributed Systems), the IEEE Communications Society, the Ministère de la Recherche, de la Science et de la Technologie du Québec, IBM, CIRANO (Center for Interuniversity Research and Analysis on Organizations), CRT (Center of Research on Transportation), and Bombardier.
Conference Chairs Metin Feridun, IBM Research, Switzerland Peter Kropf, University of Montreal, Canada
Local Arrangements Chair Gilbert Babin, HEC, Montreal, Canada
Technical Program Committee
Sebastian Abeck, University of Karlsruhe, Germany
Nikos Anerousis, Voicemate, USA
Gilbert Babin, HEC Montreal, Canada
Raouf Boutaba, University of Waterloo, Canada
Torsten Braun, University of Bern, Switzerland
Marcus Brunner, NEC Europe, Germany
Mark Burgess, University College Oslo, Norway
Omar Cherkaoui, University of Quebec in Montreal, Canada
Alexander Clemm, Cisco Systems, USA
Theodor Crainic, University of Montreal, Canada
Markus Debusmann, FH Wiesbaden, Germany
Gabi Dreo-Rodosek, LRZ Munich, Germany
Olivier Festor, LORIA/INRIA, France
Kurt Geihs, Technical University Berlin, Germany
Heinz-Gerd Hegering, University of Munich, Germany
Joseph Hellerstein, IBM Research, USA
Gabriel Jakobson, Gabriel Jakobson Associates, USA
Brigitte Jaumard, University of Montreal, Canada
Alexander Keller, IBM Research, USA
Yoshiaki Kiriha, NEC, Japan
Lundy Lewis, Aprisma Management Technologies, USA
Antonio Liotta, University of Surrey, UK
Emil Lupu, Imperial College, UK
Hanan Lutfiyya, University of Western Ontario, Canada
Jean-Philippe Martin-Flatin, CERN, Switzerland
George Pavlou, University of Surrey, UK
Aiko Pras, University of Twente, The Netherlands
Danny Raz, Technion, Israel
Juergen Schoenwaelder, Technical University of Braunschweig, Germany
Adarshpal Sethi, University of Delaware, USA
Morris Sloman, Imperial College, UK
Rolf Stadler, KTH Stockholm, Sweden
Burkhard Stiller, ETH Zurich, Switzerland
Robert Weihmayer, Verizon E-Business, USA
Carlos B. Westphall, Federal University of Santa Catarina, Brazil
Reviewers
Hamid Asgari, Thales Research, UK
Chris Bohoris, University of Surrey, UK
Markus Debusmann, FH Wiesbaden, Germany
Paris Flegkas, University of Surrey, UK
Klaus Herrmann, Technical University Berlin, Germany
Sye-Loong Keoh, Imperial College, London, UK
Remco van de Meent, University of Twente, The Netherlands
Thomas Schwotzer, Technical University Berlin, Germany
Martin Stiemerling, NEC Europe, Germany
Andreas Tanner, Technical University Berlin, Germany
Alvin Yew, University of Surrey, UK
Table of Contents
Keynote Speakers
More Research Is Indeed Needed in E-commerce; Where Were Business Academicians When We Needed Them? . . . . . 1
Jacques Nantel (HEC Montreal)
Cool to Critical: Managing Web Services Now . . . . . 2
Ellen Stokes (IBM/Tivoli Systems Management)

Panel Session
Enforcing QoS: Myth or Reality? . . . . . 3
Organizers: Gabi Dreo Rodosek (Leibniz Supercomputing Center), Metin Feridun (IBM Research)

Managing Quality of Service
Modeling of Service-Level Agreements for Composed Services . . . . . 4
David Daly (University of Illinois at Urbana-Champaign), Gautam Kar (IBM T.J. Watson Research Center), William H. Sanders (University of Illinois at Urbana-Champaign)
The Architecture of NG-MON: A Passive Network Monitoring System for High-Speed IP Networks . . . . . 16
Se-Hee Han (POSTECH), Myung-Sup Kim (POSTECH), Hong-Taek Ju (Keimyung University), James Won-Ki Hong (POSTECH)
Automated SLA Monitoring for Web Services . . . . . 28
Akhil Sahai (HP Laboratories), Vijay Machiraju (HP Laboratories), Mehmet Sayal (HP Laboratories), Aad van Moorsel (HP Laboratories), Fabio Casati (HP Laboratories)
Optimizing Quality of Service Using Fuzzy Control . . . . . 42
Yixin Diao (IBM T.J. Watson Research Center), Joseph L. Hellerstein (IBM T.J. Watson Research Center), Sujay Parekh (IBM T.J. Watson Research Center)
Measuring Quality of Service
Interaction Translation Methods for XML/SNMP Gateway . . . . . 54
Yoon-Jung Oh (POSTECH), Hong-Taek Ju (Keimyung University), Mi-Jung Choi (POSTECH), James Won-Ki Hong (POSTECH)
Measuring Application Response Times with the CIM Metrics Model . . . . . 66
Alexander Keller (IBM T.J. Watson Research Center), Andreas Köppel (SAP AG), Karl Schopmeyer (The Open Group)
Quality Aspects in IT Service Management . . . . . 82
Gabi Dreo Rodosek (Leibniz Supercomputing Center)

Service Architectures
Replication and Notification Management in a Knowledge Delivery Network . . . . . 94
Nikos Anerousis (Voicemate)
Delivering Service Adaptation with 3G Technology . . . . . 108
Antonio Liotta (University of Surrey), Alvin Yew (University of Surrey), Chris Bohoris (University of Surrey), George Pavlou (University of Surrey)
Remote Code Browsing, a Network Based Computation Utility . . . . . 121
Chris Giblin (IBM Zurich Research Laboratory), Sean Rooney (IBM Zurich Research Laboratory), Anthony Bussani (IBM Zurich Research Laboratory)

Policy and Process
Performance Study of COPS over TLS and IPsec Secure Session . . . . . 133
Yijun Zeng (Université du Québec à Montréal), Omar Cherkaoui (Université du Québec à Montréal)
A Criteria Catalog Based Methodology for Analyzing Service Management Processes . . . . . 145
Michael Brenner (University of Munich), Igor Radisic (University of Munich), Martina Schollmeyer (BMW Group)
A Comparative Study of Policy Specification Languages for Secure Distributed Applications . . . . . 157
Sandrine Duflos (Université P. & M. Curie), Gladys Diaz (Université Paris 13), Valérie Gay (Université P. & M. Curie), Eric Horlait (Université P. & M. Curie)
Fault Analysis
Two Dimensional Time-Series for Anomaly Detection and Regulation in Adaptive Systems . . . . . 169
Mark Burgess (Oslo University College)
A Hot-Failover State Machine for Gateway Services and Its Application to a Linux Firewall . . . . . 181
Harald Roelle (University of Munich)
Distributed Fault Localization in Hierarchically Routed Networks . . . . . 195
Malgorzata Steinder (University of Delaware), Adarsh Sethi (University of Delaware)
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
More Research Is Indeed Needed in E-commerce; Where Were Business Academicians When We Needed Them?

Jacques Nantel
Director, RBC Financial Group Chair in E-commerce
HEC Montreal, 3000 Ch. Côte-Ste-Catherine, Montréal, Canada, H3T 2A7
[email protected]
An exhaustive survey of the literature published between 1995 and 2001 concerning the impact of the Web on business practices suggests that a majority of articles and books, generally published by practitioners but also often by academicians, were too often unfounded, not empirically supported, and even misleading. Four areas of business research are explored: finance, Web usability, advertising on the Web, and the distribution of digital products. In the area of finance and economics, very few articles were published between 1995 and 2002 that suggested that the market valuation of .com companies was grossly inflated and explained why; yet similar speculative episodes had happened before in history. In the area of Web design and Web usability, several articles and books have been published, but very few of them are based on sound corroborative methodologies. Everywhere you read that, for a site to be efficient, consumers need to find what they want in three clicks, or that a sample of only five respondents is required in order to test the usability of a site. Although appealing, these claims have not received much attention from the academic community. The same holds true for the efficiency of advertising practices on the Web, for which we have a profusion of anecdotal evidence but very little rigorous analysis. Finally, another area that suffers from a serious lack of grounded research is the marketing and distribution of digital products. Although it has become obvious to everyone that the Web, even in a "post-Napster" environment, will change forever the way music and other digital products are distributed, very little research has emerged in the business literature suggesting ways in which the economics of this industry could be reshaped. In this dot-bomb era, the gap between IT and engineering research on the one hand and business research on the other is widening. The comprehension we have about the ways in which technological advancements change business, and especially marketing practices, has not evolved at the same pace as technological developments. More business research is indeed needed.
Cool to Critical: Managing Web Services Now Ellen Stokes Senior Technical Staff Member IBM/Tivoli Systems Management 9442 Capitol of Texas Highway North, Austin, TX 78759, USA [email protected]
Web services are no longer just hype — they are being sanctioned by the industry on two fronts, standards and products. IBM is investing time, talent, and money on both these fronts, establishing itself as an industry leader. Web services are being developed as the foundation of a new generation of business-to-business and application integration architectures. This places Web services technologies in a business-critical role within most enterprises. The corollary to this is that the Web services and the applications that use them must be manageable, from end to end, through the firewalls. Business-to-business Web services applications require management solutions in kind. They must be platform- and technology-agnostic, available through firewalls, Internet-friendly, and flexible. This presents new challenges and opportunities to management vendors. This session will explain IBM's view of the challenges and vision for managing Web services as well as leveraging Web services for management purposes. The session will also talk about how the same management challenges and solutions might be applied to Grid Services. In summary, this session will address IBM's current work in the Web services management area and how IBM's products address the needs of e-business and management through Web services.
Enforcing QoS: Myth or Reality?

Organizers: Gabi Dreo Rodosek and Metin Feridun

Munich Network Management Team, Leibniz Supercomputing Center, Barer Str. 21, 80333 Munich, Germany, [email protected]
IBM Research, Zurich Research Laboratory, Säumerstr. 4, 8803 Rueschlikon, Switzerland, [email protected]
In recent years, the research community has investigated how quality of service (QoS) can be integrated into services such that it can be managed, i.e., configured, monitored, and enforced. The gradual growth in network-based services is making it necessary to implement some notion of quality of service into products so that service providers can differentiate their services, for example by providing different classes of end-to-end service. Are current research results and approaches sufficient to address this need? Do we know how to provide customer-oriented quality of service? Do we know which quality-of-service parameters are demanded by customers, and which of them provide useful and relevant information? How can we specify and measure these QoS parameters? What are the appropriate approaches?
Modeling of Service-Level Agreements for Composed Services

David Daly, Gautam Kar, and William H. Sanders

Center for Reliable and High-Performance Computing, Coordinated Science Laboratory, and Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
{ddaly, whs}@crhc.uiuc.edu, http://www.crhc.uiuc.edu/PERFORM
IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598
[email protected]
Abstract. As Web services are increasingly accepted and used, the next step for them is the development of hierarchical and distributed services that can perform more complex tasks. In this paper, we focus on how to develop guarantees for the performance of an aggregate service based on the guarantees provided by the lower-level services. In particular, we demonstrate the problem with an example of an e-commerce Web site implemented using Web services. The example is based on the Transaction Processing Performance Council (TPC) TPC-W Benchmark [8], which specifies an online store complete with a description of all the functionality of the site as well as a description of how customers use the site. We develop models of the site’s performance based on the performance of two sub-services. The model’s results are compared to experimental data and are used to predict the performance of the system under varying conditions.
1 Introduction
Web services are increasingly being referred to as the future of outsourcing on the Internet, because they allow remote services to be discovered and accessed in a uniform manner to execute some functionality. The infrastructure for Web services is already being developed in the form of open standards such as SOAP,
UDDI, and WSDL, as well as other closed standards. A Web-service customer can query a UDDI [10] (Universal Description, Discovery, Integration) server to find needed services, and access WSDL [2] (Web Service Description Language) descriptions of the services using the SOAP [1] (Simple Object Access Protocol) protocol. The customer and service provider negotiate a contract, including service-level agreements (SLAs), once the customer finds the appropriate service. The development of outsourced Web services allows service providers to be more specialized and efficient, and to provide improved, more flexible services. A logical result of this outsourcing is the development of hierarchical services (which are themselves made of outsourced services) as well as service aggregators that can develop a complete Web service for a customer using outsourced services. With little overhead, a service aggregator would be able to quickly develop and deliver a service for a customer; it would contract only for the needed levels of service, and would increase those levels as needed. We focus on the modeling of such services to determine the service levels that can be guaranteed.

This material is based upon work supported in part by the National Science Foundation under Grant No. 9975019 and IBM. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or IBM. This work was done in part while Mr. Daly was an intern at the IBM T.J. Watson Research Center.
2 Problem Overview
Hierarchical and aggregated services are also Web services, and require that contracts, including SLAs, be negotiated. SLAs stipulate minimum standards to be provided by the service and usage constraints for the customer of the service. The agreements also generally include penalties if the service levels do not meet the guarantees (that is, if there is an SLA violation). It is a relatively straightforward (although not necessarily easy) problem to determine what levels of service can be guaranteed when the service is provided entirely by one entity (no outsourcing of subcomponents); the prediction of performance of individual services has been examined in many areas. However, it is more difficult to determine the level of service that can be offered for a service composed of multiple services. The service provider may know only the guarantees offered in the SLAs of the component services, and therefore will need a method to compute the overall SLA of the composite service. Thus, the problem is to develop contracts between the customer, the service integrator, and all of the service providers. The service integrator guarantees overall performance to the customer, while the service providers guarantee the performance of their services to the service integrator. The problem is complicated by the fact that the service level of the composite service may not be a simple combination of the service levels of the sub-services. This situation requires a simultaneous analysis of all the relevant outsourced services, which is the focus of this paper. Before we can address the problem of relating SLA terms, we first must understand the type of guarantees offered by an SLA. Providers commonly guarantee that service will be completed within a certain time, a certain percentage of the time, when the load is below a certain value. This will normally be worded as “X% of requests respond in under Y seconds, when the load is less than Z requests per second.” We used SLAs of that form for the component services.
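To make this SLA form concrete, the following is a minimal sketch of how such a guarantee could be represented and checked against one interval of measured response times. The class and function names are our own illustration, not part of the paper's models.

```python
from dataclasses import dataclass

@dataclass
class ComponentSLA:
    pct: float        # X: percentage of requests that must meet the bound
    resp_time: float  # Y: response-time bound, in seconds
    max_load: float   # Z: load ceiling, in requests per second

def sla_met(sla: ComponentSLA, response_times, offered_load: float) -> bool:
    """Check one measurement interval against the SLA.

    The guarantee only binds while the offered load stays below Z;
    above that, the interval falls outside the contract's scope.
    """
    if offered_load >= sla.max_load or not response_times:
        return True  # guarantee not applicable, so no violation
    fast = sum(1 for t in response_times if t <= sla.resp_time)
    return 100.0 * fast / len(response_times) >= sla.pct

# Example: "95% of requests respond in under 2 s when load < 60 req/s"
db_sla = ComponentSLA(pct=95.0, resp_time=2.0, max_load=60.0)
print(sla_met(db_sla, [0.3, 1.1, 2.5, 0.8], offered_load=12.0))
```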
2.1 Focus Area: E-commerce and TPC-W
For this paper, we focused on a situation in which a service integrator implements an e-commerce Web site for a client. The integrator may use a combination of traditional service providers and outsourced Web services to implement the e-commerce site, providing the integrator with the flexibility to scale the services as needed with low overhead. Whether the service integrator is an external vendor or an internal organization is immaterial. It will still need to provide performance guarantees to the client. The e-commerce example we specifically examine is the TPC-W benchmark [8], which was developed by the Transaction Processing Performance Council (TPC) [9]. It is a benchmark based on an online bookstore. Users may search for books, view bestsellers, track orders, and perform other functions. All of the pages are dynamically generated using an application server, and all product and customer data are stored in a back-end database. The TPC-W benchmark, therefore, requires two main services: the application server and the back-end database. While the benchmark is intended to test unified system offerings from a vendor, there is no reason why both services could not be outsourced. The standard SLA guarantee based on the percent of satisfied requests, while satisfactory for simple services, is not sufficient for an e-commerce site such as our example. A shift towards business metrics is necessary to properly meet the requirements of the client. The client does not inherently care about response time, but about the satisfaction of the customers using the site. Ultimately, the client is concerned with revenue and profit, but the service provider cannot make guarantees on revenue and profit, as there are many factors other than service level that influence those metrics. However, if a customer becomes dissatisfied with the site and leaves, the client loses potential revenue. Therefore, we propose to relate the business metrics that a client is interested in to a service-related metric that the provider is able to measure and report: the number of customers who leave a transaction prematurely. We therefore suggest SLA terms for an e-commerce site of the form "less than X% of the customers leave the site prematurely because of the service level." The fraction of customers who leave the site prematurely will be dependent on the response time and the service levels of the component services, but cannot be represented by a single response-time guarantee. We use the original SLA form presented earlier for the component services, since they are simple services, but the more sophisticated SLA is required for the complete e-commerce site.
2.2 Related Work
Menascé et al. [6] have performed some relevant related work on modeling e-commerce systems. They use revenue as the key metric, and determine what effect several options have on revenue. In [5] Menascé and Almeida develop several queuing models of e-commerce systems to determine the resources needed to meet the demands on the system. Adjustments are made for variability in workload, and for multiple classes of requests. The demand for the system is
generated using a Customer Behavior Model Graph (CBMG) that is solved to determine arrival rates in the queuing systems. The authors have extended the work to make it business-focused by concentrating on revenue, with metrics such as revenue throughput and potential lost revenue [6]. The work assumes that all resources (application server, database, and so forth) are controlled by the company hosting the e-commerce site. Based on that assumption, it is valid to focus directly on the revenue and profits of the site. Our work deals with similar systems, but in the context of outsourced Web services. The use of outsourced Web services invalidates the assumption that all resources are under the direct control of any one service provider. In addition, since a Web service aggregator is merely developing and implementing the e-commerce site for the company, and does not control the products offered by the site, it is not reasonable to expect revenue guarantees from the aggregator. Therefore, instead of focusing on revenue, we measure other factors that impact revenue, and can be controlled by the aggregator.
3 E-commerce SLA Models
To determine the SLA guarantees that a service integrator can offer to a customer based on the SLA guarantees of outsourced services, we develop a model of the service using many submodels. The model has two major components that it uses to determine SLA guarantees. The first is the workload model, which models the load applied to the system. It incorporates the behavior of the users to determine how many requests are made. In addition, since our SLA for an e-commerce site is based on user behavior (how many users leave the site prematurely because of poor service), the workload model must explicitly model users leaving the site both prematurely and after completion of normal activity. The second component is the system model. The system model captures the performance of the services as they process the user requests. The workload model needs to determine the load offered to the services, and must also predict how many customers will leave the site prematurely. We must therefore understand what makes a user leave a Web site. For that reason, the workload model includes models of several users accessing the services. We have used a simple model of user behavior in the workload model. The users wait a certain amount of time for each page. If the page takes longer than the allowed time to load, the user attempts to reload the page. After he/she attempts to reload a page a certain number of times, he/she becomes frustrated and leaves the site. We model this by tracking how many times a user reloads a page, and when the number of reloads gets above a preset threshold, the modeled user leaves the site. We could construct a slightly more complicated model in which the user leaves the site only if he/she must retry too many pages in a certain span of time. That scenario is more complex and will be dealt with in future research.
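The reload-and-abandon rule lends itself to a compact sketch. The following is an illustrative rendering of this user behavior in plain Python, not the paper's actual Möbius model; all names and parameter values are hypothetical.

```python
import random

def user_session(page_delays, timeout, max_retries):
    """Walk one user through a sequence of page requests.

    page_delays yields the (random) end-to-end delay of each attempt;
    a user retries a slow page until the retry budget is exhausted,
    then abandons the session. Returns True if the user was lost.
    """
    for next_delay in page_delays:
        retries = 0
        while next_delay() > timeout:   # page took too long: reload it
            retries += 1
            if retries > max_retries:   # frustration threshold reached
                return True             # user leaves the site prematurely
    return False                        # session completed normally

# Example: exponential page delays with a 4 s mean, 10 s patience, 1 retry
delays = [lambda: random.expovariate(1 / 4.0) for _ in range(20)]
lost = sum(user_session(delays, timeout=10.0, max_retries=1)
           for _ in range(1000))
print(f"loss rate: {lost / 1000:.2%}")
```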
We need a model of the system to use with the workload model. The system is made up of the individual services. The model of the services has a latency and a throughput component to determine the total delay experienced by a user request. The latency component in the model represents the network latency in sending the request to the server and getting the response from the server. All the network latencies are combined into an aggregate latency for each service, which is not affected by the load on the server. If the local network itself is expected to be a bottleneck, it should also be modeled as a service. We represent the latency as a constant time delay. This corresponds well to a low network load condition, in which all requests of the same size take the same amount of time to traverse the network. However, if the network is the Internet, it may experience local bottlenecks and varying delays. We do not attempt to account for that factor at this time. The second factor in the total delay a request experiences in accessing a service (over and above the latency) is the service time. The service time is the total time to process a request, once that request has arrived at the service. A number of factors affect the service time of a request, including the size of the request, the speed of the service, and other requests at the service. If the service exhibits parallelism, it can process multiple requests at the same time with no degradation of service to the requests. The number of requests that can be processed at the same time with no degradation of service to each request is the degree of parallelism of the service. If the number of requests is greater than the degree of parallelism, then all requests are processed at the same time, and they all experience a slowdown in processing. The service time, combined with the level of parallelism, determines the throughput of the service. The throughput is important to the system operator, as the operators will want to process as many requests as possible on the given hardware. However, the user ultimately does not care about the throughput of the service, only the delay experienced in accessing it. The parallelism and service time allow us to determine the delay, which we would be unable to compute solely from the throughput.
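Read this way, the total delay of a request decomposes into a constant latency plus a load-dependent service time. The sketch below assumes a linear, processor-sharing-style slowdown once the number of requests in service exceeds the degree of parallelism; the function names and the specific slowdown law are our reading of the description, not code from the paper.

```python
def effective_service_time(base_time, in_service, parallelism):
    """Service time of a request under the parallelism model.

    Up to `parallelism` concurrent requests are served with no
    degradation; beyond that, all requests share the capacity and
    slow down proportionally (a processor-sharing approximation).
    `base_time` is the request's service time on an unloaded server.
    """
    if in_service <= parallelism:
        return base_time
    return base_time * (in_service / parallelism)

def total_delay(latency, base_time, in_service, parallelism):
    # Constant network latency plus load-dependent processing time.
    return latency + effective_service_time(base_time, in_service, parallelism)

# Example: 50 ms latency, 100 ms base service time, 5 requests in
# service on a server with degree of parallelism 2 -> 300 ms total.
print(total_delay(0.050, 0.100, in_service=5, parallelism=2))
```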
3.1 Parameters Based on SLA Values
The server processing delay is selected to match the SLA guarantee on loss for that service if no more detailed information is available. Recall that the service SLA was defined in Section 2 to have the form “no more than X% of requests take longer than Y seconds to complete when the load is less than Z requests per second.” Values for the service delay and for the parallelism of the service need to be determined from those SLA values. We select the average service delay to meet the SLA guarantee on an unloaded machine. This delay is specified by the delay distribution of the service; the specification includes both the type of the delay distribution (e.g., normal or negative-exponential) and any parameters used to describe the particular distribution type. We use a negative-exponential delay distribution if no other information is available. The negative-exponential distribution requires only the average service time as a parameter. It is a simple matter to determine the parameter for a negative exponential distribution such that X% of the time a
sample value will be greater than Y. The service time of a request on a loaded server is more complicated, since the request may need to compete for resources, leading to longer service delays. The effect that load has on a request is reflected in the degree of parallelism of the service; by setting the parallelism to Z times the service delay, we ensure that the server can process more than Z requests per second without experiencing any slowdown.
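For the negative-exponential case, this fitting is a one-line computation: solving exp(-Y/mu) = X/100 for the mean mu, and (via Little's law) sizing the parallelism so that Z requests per second fit without slowdown. The helper names below are ours; the paper does not prescribe an implementation.

```python
import math

def exp_mean_from_sla(x_pct, y_secs):
    """Mean service time so that x_pct% of exponential samples exceed y_secs.

    For an exponential delay with mean mu, P(T > y) = exp(-y / mu);
    solving exp(-y / mu) = x / 100 gives mu = y / ln(100 / x).
    """
    return y_secs / math.log(100.0 / x_pct)

def parallelism_from_sla(mean_service_time, z_load):
    """Degree of parallelism so that z_load req/s cause no slowdown.

    With mean service time mu, Z requests per second keep Z * mu
    requests in service on average (Little's law), so the server
    needs at least that many parallel slots.
    """
    return z_load * mean_service_time

# Example SLA: "no more than 5% of requests take longer than 2 s
# when the load is under 60 requests per second"
mu = exp_mean_from_sla(5.0, 2.0)           # ~0.668 s mean service time
print(mu, parallelism_from_sla(mu, 60.0))  # ~40 parallel slots
```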
4 Experimental Validation
We demonstrate the models and ideas developed in the previous section by simulating experiments in the Möbius tool [3,4] and running experiments on an experimental TPC-W configuration to determine the lost-user rate for the e-commerce system represented in TPC-W. We do not explicitly develop SLAs for the two services, but instead use the models developed for Web services and perform measurements on experiments to determine the model parameters (as suggested in the previous section). Therefore, our experiment is unusually detailed, and should demonstrate the accuracy of the models. In addition, the measurements show how the SLA guarantees for the component services could be determined.
4.1 Simulation Environment: Möbius
The models were simulated using the Möbius tool. Möbius is a multi-formalism, multi-solution extensible modeling tool for discrete event stochastic models [3,4]. Using Möbius, a user can develop models of parts of a system using different formalisms (or ways of describing a model) and combine those models to form a complete model. For this study we used the Stochastic Activity Network (SAN) [7] formalism, because of its generality. SANs consist of places (represented by circles), which contain tokens; activities (represented by bars), which remove and place tokens in places; and gates (represented by triangles), which control the behavior of activities, allowing for more complex behavior. Figure 1 shows the SAN model of a user accessing the home page of the site. The process starts when a token is placed in Home by another submodel. (That submodel is composed with other submodels to form the complete model. Composition is explained below.) When the user generates requests, the Request activity fires, removing the token from Home and placing one token apiece in Home in and Req in Prog. The user waits for a token to be placed in place Home out, which represents a response. When the token is placed in Home out, that token and the token in Req in Prog are removed, and a token is placed in viewing. After that occurs, some time will pass (representing the time the user spends reading the page) before the Done Viewing activity fires, removing the token from viewing and putting it in a place that determines which page to visit next. Alternatively, if the response to the request takes too long, the activity Timeout will fire, and the user will retry the request. The Drain Lost activity ensures that the lost request will eventually be removed from the system.
Fig. 1. SAN Model of a User Accessing the Home Page
Fig. 2. SAN Model of Application Server
Fig. 3. SAN Model of DB Access for Home Page
The model does not explain how a token goes from Home in to Home out. The home service model, shown in Fig. 2, controls that. The Home in and Home out places will be shared between the two models, so both models will always have the same number of tokens in each place. The Latency activity will remove a token from the Home in place and place one token apiece in Processing and IIS Queue. The delay of the Latency activity represents the network latency experienced by the request. The IIS Queue is shared by all the services that use the application server. The processing time of the home page is scaled by the IIS Queue value combined with the degree of parallelism in the server. The home page also requires a DB access, and the time for this access is modeled after the application server processing time. A token is placed in DB in, and when the DB request is done, a token is placed in DB out. Figure 3 shows the model of the DB access. It is similar to the home page access. When the token is placed in DB in, there is a delay for the firing of Latency, and then some processing time, which is scaled by the number of other requests currently in the DB. Similarly, there are models that represent the user access to each of the pages in the site and the processing of those requests. In our earlier description of the user behavior model, we stated that the user leaves if too many requests are retried. That is modeled in a separate model that we do not describe here because of space considerations. When a user retries too many pages, he/she ends the session prematurely. A user session can also end normally. According to the TPC-W definition, a session ends on the home page request after a random timer expires. This behavior is included in our model. Möbius allows multiple models to be composed together, with certain places held in common or shared, through the Rep/Join formalism. A join node combines multiple different models, while a rep node creates multiple replications of one model. In both cases, some places are held in common between the submodels.
Our models are joined together, sharing all identically named places, to create a model of one user accessing the site. The model is then replicated to represent a population of users accessing the site. The replicas in the final composed model share only specific places, such as IIS Queue and DB Queue. The measure of interest for this model is the percentage of user sessions that end prematurely. Therefore, measures are defined to determine the number of sessions that end prematurely, and the total number of sessions. The ratio of those two numbers is the user loss rate.
4.2 Experimental Environment: TPC-W Running in Lab
In our lab we set up an application server and a database to run the TPC-W benchmark on a local network. The application server ran using IIS and Jakarta on an IBM RS/6000 workstation, while the back-end database resided on an AIX box, using DB2. The two servers and the client were connected using 100 Mbit Ethernet. The client ran under Linux on a Pentium III workstation. In TPC-W, users wait for each request to complete and end sessions normally. We adjusted our TPC-W client emulators to retry requests if too much time elapses, and to leave the site if the client has to retry too many requests. With that adjustment, the model should match the behavior of the experimental setup after one additional step: the model needs to be parameterized to match the experimental system. Since we did not have actual SLAs from which we could determine the parameters, we instead attempted to experimentally determine the response times and parallelism of the services. We calibrated these values by performing experiments without user timeouts; in other words, the users would never retry requests. Single-user experiments were run to determine the response time of the servers, while multiple-user experiments were run to determine the effect of load on the services, and therefore the parallelism. A service provider would perform similar experiments to determine the SLA terms that could be offered for a service. We analyzed each page on the TPC-W, since each one makes different demands on the outsourced services.
5 Results
We start with the calibration results. Calibration was needed to determine the service delay and parallelism of the two services (application server and database server), as well as the size of the requests made by each page. The step can be thought of as determining what terms could be offered for an SLA on those services, and translating them into the needed parameters. Indeed, this calibration was equivalent to having SLA terms that were very accurate plus the distribution information. We had to calibrate the pages individually. Each page had multiple database accesses and varying usage of the application server. Figure 4 shows the inverse cumulative distribution function for the delay for accessing the buy confirm page and the home page. The simulated results closely reflect the experimental results.
Fig. 4. Inverse Cumulative Delay Distributions for Page Accesses (experimental and simulated response-time distributions for the Home and Buy Confirm pages; percentage of interactions vs. time in seconds)
The services had a low variance on the completely unloaded systems; we accommodated that by using an Erlang distribution instead of a negative exponential for those services. The Buy Confirm page had a large outlying probability density; it was around 10 seconds for our experiments. To accommodate that we adjusted the DB access to occasionally require larger amounts of resources. The other pages performed similarly well, with the Buy Confirm page being one of the slowest pages, and the Home page being one of the quickest pages. A service integrator would determine how many requests each page made to the outsourced services and the size of the requests, and would then determine the model parameters by combining that information with the performance guarantees offered by the service providers for the outsourced services. We determined relative request sizes and performance for the service based on measurements of the delays of the pages and an understanding of the requests generated for each page.
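The swap from a negative exponential to an Erlang distribution keeps the mean while dividing the variance by the Erlang shape parameter k, which is what the low measured variance calls for. A small sketch, with illustrative parameter values only:

```python
import random

def erlang_sample(k, mean):
    """Draw one Erlang-k sample with the given mean.

    An Erlang-k variable is the sum of k exponentials with mean
    mean/k, so it keeps the same mean as a single exponential but
    has variance mean**2 / k, lower for every k > 1.
    """
    return sum(random.expovariate(k / mean) for _ in range(k))

# Same 0.5 s mean, decreasing variance as k grows:
for k in (1, 4, 16):  # k = 1 is the plain negative exponential
    samples = [erlang_sample(k, 0.5) for _ in range(10000)]
    m = sum(samples) / len(samples)
    var = sum((s - m) ** 2 for s in samples) / len(samples)
    print(f"k={k:2d}  mean={m:.3f}  variance={var:.4f}")
```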
5.1 Validation of Model Predictions
With the models calibrated, we ran complete experiments to compare the loss rate of the modeled and experimental systems. We found that the two systems gave similar overall loss rates and component loss rates, as shown in Table 1. The table shows user retries and the number of lost users when there are 30, 45, or 60 users, with users willing to wait 10, 12, or 15 seconds. All experiments were for 15 minutes after a 90-second warmup period. The simulations solved the model several times to generate 95% confidence intervals, whereas each experiment was run once. We note two things about the results: 1) at lower loss rates, our model reports more losses and timeouts, and 2) at higher loss rates, there appears to be a higher probability that a retried request will take too long, leading to loss that was not captured by our models. Therefore we expect our models to be conservative (to overestimate loss) at low loss rates. We might be
able to account for the higher loss rate when there are more retried requests by making retried requests have a higher demand on the services, especially since requests that need to be retried are likely larger than average to begin with.

Table 1. Simulation and Experimental Results

Timeout  Users  Experimental       Simulation
                Timeouts  Loss     Timeouts       Loss
10 s     30     12        7        9.65 ± 0.55    1.76 ± 0.18
10 s     45     13        7        15.19 ± 0.81   3.09 ± 0.28
10 s     60     26        13       20.92 ± 1.21   4.68 ± 0.47
12 s     30     0         0        2.30 ± 0.27    0.20 ± 0.07
12 s     45     4         0        4.35 ± 0.40    0.52 ± 0.12
12 s     60     7         1        7.67 ± 0.71    2.30 ± 0.27
15 s     30     0         0        0.46 ± 0.10    0.02 ± 0.02
15 s     45     0         0        1.47 ± 0.20    0.12 ± 0.05
15 s     60     1         0        3.49 ± 0.40    0.42 ± 0.10
From the results, we could determine the SLA that could be offered by the service integrator, based on the amount of time that a user will wait for a page. For instance, if a user could be expected to wait 12 seconds, the service integrator could guarantee that there would be less than 1% loss when the load is less than 60 users. One problem we discovered was that the system could become unstable at high loss rates. The TPC-W benchmark starts a new user session immediately when another one ends, in order to maintain a constant number of users. That can lead to an increased load when loss is considered. Normally, a user would leave, causing the load and loss rate to drop; but in our experiment, the new user will also keep retrying requests, increasing the load instead of decreasing it.
5.2 Analyzing Results from Varying Parameters in the Simulation Models
Once we verified that the model performed well for the base case, we conducted some studies using the model to better understand the dynamics of the system and their ramifications for the performance that could be guaranteed. Some of the studies could not have been done with the experimental setup, while others would have been time-consuming. The studies focused on ways to lower the overall loss rate, in order to improve the service guarantees that could be offered. The first two studies focused on the user model and on determining the effect of timeout value and loss threshold on overall loss rate. We varied the two parameters separately in two studies. Figure 5 shows that increasing the timeout value did decrease the loss rate, and increasing the timeout value to 14 seconds would ensure a loss of no more than 1%. Similarly, Fig. 6 shows that if the users are willing to retry three requests before leaving, instead of one, the loss rate also drops below 1%. However, since the timeout length and the loss threshold are given values, we cannot change them. Instead, we have to focus on ways to speed up the site to
lower the loss rate. Since the two services we were evaluating were the application server and the back-end database, we also varied the performance of those two services. Minimal speedup was observed from increasing the application server performance, but performance degradation was observed if the application server was slowed down. However, increasing the performance of the database has a dramatic effect on the overall loss rate, as shown in Fig. 7. Therefore, the database is the key bottleneck, and we should investigate ways to lower the response time of requests to the database. Figure 8 shows that increasing the parallelism of the DB will also lower the loss rate. It should be easier to increase the parallelism of the DB than to reduce the response time. The two graphs show that the service integrator would want to negotiate to improve the guaranteed level of service in the SLA for the database service, thus increasing the number of requests that can be processed without changing the response time criterion. The baseline results also showed that the Buy Confirm page was the dominant source of timeouts in the site. Further results (not shown) from solving the model showed that decreasing the processing requirements for the Buy Confirm page would lower the loss significantly.

Fig. 5. Loss Rate as a Function of How Long a User Will Wait for a Request
Fig. 6. Loss Rate as a Function of How Many Times a User Is Willing to Retry a Request
Fig. 7. Loss Rate as a Function of DB Performance (baseline 100)
Fig. 8. Loss Rate as a Function of DB Parallelism (baseline 2)
6 Conclusion
In this paper we have framed the problem of coming up with SLA guarantee terms for a Web service that is composed of a collection of Web services. The problem is that of relating the SLA terms of the sub-services to the aggregate service in a useful manner. Focusing on Web commerce, we showed how the aggregated service might need to provide types of guarantees that are different from those obtained from the sub-services. We have proposed a model of the Web services to relate the different guarantees to each other. We realized our model using a TPC-W benchmark implementation, and set up the same implementation experimentally to relate the performance and SLA guarantees of the sub-services to the performance and SLA guarantees of the complete Web-commerce site. The results from the model agreed closely with the experimental values and also allowed us to answer questions that could not have been answered through experimentation alone. For instance, we determined that speeding up the database response time in our example would significantly improve performance. We did so by varying the database response time in the model, which would have been very difficult to do with an actual database; this demonstrates the usefulness of the model.
References
1. D. Box, D. Ehnebuske, G. Kakivaya, A. Layman, N. Mendelsohn, H. F. Nielsen, S. Thatte, and D. Winer, "Simple Object Access Protocol (SOAP) 1.1," Tech. Rep., W3C, 2000.
2. E. Christensen, F. Curbera, G. Meredith, and S. Weerawarana, "Web Services Description Language (WSDL) 1.1," Tech. Rep., W3C, 2001.
3. G. Clark, T. Courtney, D. Daly, D. D. Deavours, S. Derisavi, J. M. Doyle, W. H. Sanders, and P. G. Webster, "The Möbius modeling tool," in Proceedings of the 9th International Workshop on Petri Nets and Performance Models, September 2001, pp. 241-250.
4. D. D. Deavours, G. Clark, T. Courtney, D. Daly, S. Derisavi, J. M. Doyle, W. H. Sanders, and P. G. Webster, "The Möbius framework and its implementation," IEEE Transactions on Software Engineering, vol. 28, no. 10, October 2002.
5. D. A. Menascé and V. A. F. Almeida, Scaling for E-Business: Technologies, Models, Performance, and Capacity Planning, Prentice Hall, 2000.
6. D. A. Menascé, V. A. F. Almeida, R. Fonseca, and M. A. Mendes, "Business-oriented resource management policies for e-commerce servers," Performance Evaluation, vol. 42, pp. 223-239, 2000.
7. J. F. Meyer, A. Movaghar, and W. H. Sanders, "Stochastic activity networks: Structure, behavior and application," in Proc. International Conference on Timed Petri Nets, 1985, pp. 106-115.
8. Transaction Processing Performance Council (TPC), TPC Benchmark W (Web Commerce), August 2001.
9. Transaction Processing Performance Council (TPC) Web Page.
10. UDDI Executive White Paper, November 2001.
The Architecture of NG-MON: A Passive Network Monitoring System for High-Speed IP Networks

Se-Hee Han, Myung-Sup Kim, Hong-Taek Ju, and James Won-Ki Hong

Department of Computer Science and Engineering, POSTECH, Korea
Department of Computer Engineering, Keimyung University, Korea
{sehee, mount, juht, jwkhong}@postech.ac.kr
Abstract. This paper presents the design of a next generation network traffic monitoring and analysis system, called NG-MON (Next Generation MONitoring), for high-speed networks such as 10 Gbps and above. Packet capturing and analysis on such high-speed networks is very difficult using traditional approaches. Using distributed, pipelining, and parallel processing techniques, we have designed a flexible and scalable monitoring and analysis system, which can run on off-the-shelf, cost-effective computers. The monitoring and analysis task in NG-MON is divided into five phases: packet capture, flow generation, flow store, traffic analysis, and presentation. Each phase can be executed on separate computer systems and cooperates with adjacent phases using pipeline processing. Each phase can be composed of a cluster of computers wherever the load of the phase exceeds the capacity of a single computer system. We have defined efficient communication methods and message formats between phases. Numerical analysis results of our design for 10 Gbps networks are also provided.
1 Introduction
Today, multi-gigabit networks are becoming common within or between ISP networks. The bandwidth of ISP’s backbone networks is evolving from OC-48 (2.5Gbps) to OC-192 (10Gbps) to support rapidly increasing Internet traffic. Also, Ethernet networks are evolving from gigabit to 10 Gbps. Further, the types of traffic on these links are changing from simple text and image based traffic to more sophisticated and higher volume (such as streaming rich media, peer-to-peer). Monitoring and analyzing such high-speed, high-volume and complex network traffic is needed, but it lies beyond the boundaries of most traditional monitoring systems. Sampling is a popular method that most monitoring systems adopted to overcome this problem [1]. However, the sampling method is neither accurate nor adequate for 1
some applications (e.g., usage-based billing or intrusion detection systems). Another approach is the adoption of purpose-built hardware [2]. Unfortunately, the development cost of such a hardware approach is very high, and the hardware can get outdated quickly; ISPs would be required to replace it as the network bandwidth increases. Therefore, we need a solution that is flexible, scalable, and cost-effective. This paper suggests the design of such a solution. In our earlier work, we had developed a passive network traffic monitoring system, called WebTrafMon [3, 4]. It could monitor 100 Mbps links and was able to capture packets without any loss. When we used it to monitor links faster than 100 Mbps, we encountered several problems. The amount of incoming packets was beyond the processing capacity of the probe, and the required storage space for flow data increased linearly as the link speed increased. Also, the analyzer took a long time to complete its tasks. So we had to come up with a new approach to solve these problems. First, we subdivided the monitoring process into multiple phases, and distributed the processing load over them by allocating a system to each phase. If the distributed load in a phase is still beyond the capability of a single system, the phase can be composed of a cluster of systems. By using this approach, we have designed a flexible and scalable network traffic monitoring and analysis system, called NG-MON (Next Generation MONitoring). NG-MON uses the passive monitoring method. The organization of this paper is as follows. The requirements of NG-MON are enumerated in Section 2 and the design of NG-MON is described in Section 3. Numerical analysis results of our design for 10 Gbps networks are provided in Section 4. In Section 5, we compare our approach with other approaches proposed thus far. Finally, concluding remarks are given and possible future work is mentioned in Section 6.

(The authors would like to thank the Ministry of Education of Korea for its financial support toward the Electrical and Computer Engineering Division at POSTECH through its BK21 program.)
2 Requirements
The following are the requirements we have considered in designing NG-MON.

Distributed architecture: With a single general-purpose PC system, it is hard to monitor and analyze all the packets on a multi-gigabit network. It is therefore necessary to divide the monitoring task into several functional units and to distribute the processing load among them. For the distribution method, we considered pipelined and parallel processing, as well as packet distribution using functions provided by network devices.

Lossless packet capture: We need to capture all packets on the link without any loss, so as to provide the required information to various applications.

Flow-based analysis: When analyzing, it is better to aggregate packet information into flows for efficient processing (a minimal aggregation sketch follows this list). By doing this, packets can be compressed without any loss of information.
Consideration of limited storage: The amount of traffic captured on high-speed networks reaches hundreds of megabytes per minute even after aggregation into flows [2]. An efficient method is needed for storing these large amounts of flow and analysis data in limited storage.

Support for various applications: The system should be flexible enough to provide data to various applications in diverse forms. When a new application needs to use the system, it should be easy to support it without changing the structure of the system.
3 Design of NG-MON
In the design of NG-MON, the key features we have employed are pipelined distribution and load-balancing techniques. In Fig. 1, traffic monitoring and analysis tasks are divided into five phases: packet capture, flow generation, flow store, traffic analysis, and presentation of analyzed data.

[Figure: Network Device → Packet Capturer → Flow Generator → Flow Store → Traffic Analyzer → Presenter / Web Server → User Interface]

Fig. 1. Pipelined Architecture of NG-MON
These five phases are serially interconnected using a pipelined architecture. One or more systems may be used in each phase to distribute and balance the processing load; each phase performs its defined role in the manner of a pipeline system. This architecture improves overall performance, and configuring each phase as a cluster provides good scalability. We have also defined a communication method between the phases, so each phase can be replaced with more optimized modules as long as they provide and use the same defined interface; this gives flexibility. Rather than using expensive server-level computers, we use inexpensive, off-the-shelf PC-level computers. Since our solution is entirely software-based, as more processing power is needed one can simply replace existing hardware or add more wherever it is needed. We believe this is a very cost-effective and scalable approach. In the following sections, we describe each phase in detail.
3.1 Packet Capture
In the packet capture phase, one or more probe machines (packet capturers) collect all the raw packets passing through the network link. By using the splitting function
provided by an optical splitter [5], all the packets on the link are directed toward the probe systems, as illustrated in Fig. 2. We can also use the mirroring function provided by network devices such as switches and routers to distribute traffic to multiple probes. Each probe processes incoming packets and keeps only the minimal packet header information that we are interested in. In Fig. 2, the output of each probe is a collection of packet header information derived from the raw packets.
[Figure: a Splitting Device on the Network Link divides the raw packets among several Probes, each of which emits packet header messages]
Fig. 2. Packet Distribution and Packet Capture
A single off-the-shelf PC cannot process all the packets coming from high-speed links such as 10 Gbps networks, due to performance limitations [6]. It is essential to use multiple systems to capture all the packets without loss. Although this distributes the processing load, packets belonging to the same flow may be scattered across probes. Each probe therefore has as many export buffer-queues as there are flow generators, one buffer-queue per flow generator. The probe distributes header information over the export buffer-queues using 5-tuple-based hashing. When a buffer-queue is full, the probe constructs a message containing the captured packet headers and sends it to the next phase, the flow generator; the destination of the message is the flow generator associated with that buffer-queue. In this way, temporally scattered packets belonging to the same flow are put together and sent to the same flow generator. One message contains up to 50 packet header records. The format of the raw packet header data is given in Fig. 3.
Fig. 3. Packet Header Data Format
The size of the packet header data kept is 28 bytes per packet. All fields except Timestamp and Capture ID are extracted from the IP and TCP/UDP headers of each packet. The Timestamp indicates the time when a packet was captured by a probe, and the Capture ID identifies the system that captured the packet, for later use.
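To make the distribution mechanism concrete, the following sketch shows how a probe might pack header records and hash the 5-tuple over its export buffer-queues. The field layout, the queue count, and all names are our own illustrative assumptions; the paper specifies only the 28-byte record size, the 5-tuple hashing, and the 50-records-per-message batching, and the exact layout of Fig. 3 is not reproduced here.

    import struct
    import zlib

    NUM_FLOW_GENERATORS = 2     # assumed cluster size
    HEADERS_PER_MESSAGE = 50    # the paper: up to 50 header records per message
    # 28 bytes: ts_sec, ts_usec, src_ip, dst_ip, sport, dport, proto,
    # capture_id, pkt_len, spare -- an illustrative layout, not Fig. 3 verbatim
    RECORD_FMT = "!IIIIHHBBHI"
    assert struct.calcsize(RECORD_FMT) == 28

    def queue_index(src_ip, dst_ip, proto, sport, dport):
        """Hash the 5-tuple so every packet of a flow maps to one flow generator."""
        key = struct.pack("!IIBHH", src_ip, dst_ip, proto, sport, dport)
        return zlib.crc32(key) % NUM_FLOW_GENERATORS

    class Probe:
        def __init__(self):
            # one export buffer-queue per flow generator
            self.queues = [[] for _ in range(NUM_FLOW_GENERATORS)]

        def on_packet(self, ts_sec, ts_usec, capture_id,
                      src_ip, dst_ip, proto, sport, dport, pkt_len):
            record = struct.pack(RECORD_FMT, ts_sec, ts_usec, src_ip, dst_ip,
                                 sport, dport, proto, capture_id, pkt_len, 0)
            q = queue_index(src_ip, dst_ip, proto, sport, dport)
            self.queues[q].append(record)
            if len(self.queues[q]) == HEADERS_PER_MESSAGE:
                self.flush(q)

        def flush(self, q):
            # in NG-MON this message would be sent by UDP to flow generator q
            message = b"".join(self.queues[q])
            self.queues[q].clear()
            return message

Because the hash depends only on the 5-tuple, every probe independently maps the packets of one flow to the same flow generator, which is what makes the later per-flow aggregation possible.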
3.2 Flow Generation
There are various definitions of a flow [7, 8, 9]. In this paper, we use the traditional one, which defines a flow as a sequence of packets with the same 5-tuple: source IP address, destination IP address, protocol number, source port, and destination port.
[Figure: packet header messages from the probes are distributed among several Flow Generators, which emit flow messages]
Fig. 4. Load Distribution in the Flow Generation Phase
In Fig. 4, the messages from the packet capture phase are distributed over the flow generators by assigning their destinations to the corresponding flow generators. A flow generator keeps flow data in memory for processing. When a message containing raw packet header data arrives, the flow generator looks up the corresponding flow in its flow table and updates it, or creates a new flow if one does not already exist. Packets belonging to the same flow are aggregated into the same table entry by incrementing the packet count and adding the packet length to the total packet size. The flow generator exports flow data to the flow store when one of the following conditions is satisfied: the flow is finished (for TCP, when a FIN packet is received), the flow times out, or the flow table is full.
Fig. 5. Flow Data Format
The flow data format is given in Fig. 5. Besides the 5-tuple information, the flow data has several other fields, such as the flow start time and the flow end time. The flow start time indicates when the first packet of a flow was captured by a probe, and the flow end time is the capture time of the last packet of the flow. The size of the flow data format is 32 bytes; our flow generator can send up to 40 flows in a single message of approximately 1350 bytes.
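As a sketch of the aggregation logic described above, the following hypothetical flow generator updates its table per header record and exports under the paper's three conditions (TCP FIN, timeout, full table). The timeout value, the table capacity, and all names are assumptions; only the 40-flows-per-message batching and the update rules come from the text.

    FLOW_TIMEOUT = 60.0      # assumed expiry period (seconds)
    MAX_FLOWS = 100_000      # assumed flow table capacity
    FLOWS_PER_MESSAGE = 40   # the paper: up to 40 flows (~1350 bytes) per message

    class FlowGenerator:
        def __init__(self):
            self.table = {}  # 5-tuple -> flow record

        def on_header(self, ts, five_tuple, pkt_len, tcp_fin=False):
            flow = self.table.setdefault(
                five_tuple, {"start": ts, "end": ts, "packets": 0, "bytes": 0})
            flow["end"] = ts                 # capture time of the latest packet
            flow["packets"] += 1
            flow["bytes"] += pkt_len
            if tcp_fin:                          # condition 1: flow finished
                self.export([five_tuple])
            elif len(self.table) >= MAX_FLOWS:   # condition 3: table full
                self.export(list(self.table))

        def expire(self, now):                   # condition 2: timeout
            stale = [k for k, f in self.table.items()
                     if now - f["end"] > FLOW_TIMEOUT]
            if stale:
                self.export(stale)

        def export(self, keys):
            # 40 flow records (32 bytes each) would go into one UDP message
            for i in range(0, len(keys), FLOWS_PER_MESSAGE):
                batch = [(k, self.table.pop(k))
                         for k in keys[i:i + FLOWS_PER_MESSAGE]]
                # ... serialize the batch and send it to a flow store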
3.3 Flow Store
In our earlier work [4], we found that one of the bottlenecks of the monitoring process is the storing of flow data. Therefore, load balancing must be considered when flow data is stored to the flow stores. In Fig. 6, the destination of the exported messages is assigned among the flow stores in turn by a round-robin algorithm. The assignment period is determined by the transfer rate of the exported flow data, the capabilities of the flow stores, and the number of flow stores. In this way, the processing load of storing flow data is distributed over the flow stores.
[Figure: flow messages are assigned round-robin to Flow Stores over time slots t1, t2, t3; while one Flow Store receives flow data, the Traffic Analyzers issue database queries to, and receive responses from, the others]
Fig. 6. Load Distribution in the Flow Store Phase
When several flow generators assign a destination flow store to their messages, a time synchronization problem can arise. However, the components of NG-MON will typically be deployed within a local area network, so the required degree of time synchronization is not very high; a time synchronization protocol such as NTP [10] can be used to synchronize the system clocks of the components. In our system, we separate write (i.e., insert) operations from the database query operations performed by the analyzers: insertion does not occur at the same time as other operations in a single flow store. Thus, traffic analyzers query the databases of flow stores only when those stores are not receiving flow data. An example is illustrated in Fig. 6: at time t1, flow store #1 receives flow data from the flow generators while flow stores #2 and #3 process queries from the traffic analyzers. That is, a flow store concentrates on one kind of request at a time. Flow stores discard a flow data table once the traffic analyzers have finished analyzing it. Only the most recent flow data is kept, so a flow store requires only a small, fixed amount of disk space. Various traffic analyzers can follow the flow store phase to support various applications; this means that the flow store should provide an analysis API to the analyzers.
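The rotation with separated insert and query slots could look like the following sketch, where the slot length is an assumed parameter (the paper derives the period from the export rate, the store capabilities, and the number of stores). It relies on the NTP-synchronized clocks mentioned above so that all flow generators independently agree on the current insert target.

    NUM_FLOW_STORES = 3
    SLOT_SECONDS = 60        # assumed rotation period

    def insert_target(now_epoch):
        """Flow store currently accepting inserts; all generators agree on it."""
        return int(now_epoch // SLOT_SECONDS) % NUM_FLOW_STORES

    def query_targets(now_epoch):
        """Flow stores free to answer analyzer queries in this slot."""
        busy = insert_target(now_epoch)
        return [s for s in range(NUM_FLOW_STORES) if s != busy]

    # e.g. in the first slot store #0 takes inserts while #1 and #2 answer queries
    assert query_targets(0) == [1, 2]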
3.4 Traffic Analysis
In this phase, the traffic analyzer queries the flow data stored in the flow stores according to the various analysis scopes. The analyzer sends query messages to the
flow stores and builds various matrices and tables from the responses. If the whole scope of analysis were put into one table, that single table would be too large to manage. Therefore, we maintain several reduced sets of tables, one set per analysis scope. For example, the analyzer in Fig. 7 provides details on network throughput with protocol and temporal analysis. To provide temporal analysis, the analyzer keeps a set of tables for each time unit: minute, hour, day, month, and year. It is impractical to store all the flow data in the time-series tables because of the volume of data and the limited storage space. To reduce the storage requirement, we keep only the most significant N entries in each table; thus, the total size of the database is bounded. The analyzer fills the tables simultaneously in a pipelined manner. If the reception period of the flow stores is 5 minutes, there can be 20 tables for storing every 5 minutes' analyzed flow data. After updating a 5-minute table, the corresponding hour table gets updated. Such time-series tables exist for each scope of analysis in the analyzer.
[Figure: Flow Stores feed the NG-MON Analyzer, which maintains data_sent tables per minute, hour, day, month, and year and serves the Presenter / Web Server]
Fig. 7. Traffic Analyzer and Various Applications
The presentation phase provides analysis of the traffic to users in various forms via a Web interface. Before designing an analyzer, we have to determine the analysis items to be shown in this phase; the traffic analyzer then generates the corresponding DB tables based on these items. That is, a different analyzer is required to support an application with a different purpose. Because the tables contain analyzed data that is ready to be shown, the time needed to create reports and HTML pages is very short, typically less than a second.
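A minimal sketch of the bounded, top-N time-series tables might look as follows; plain dictionaries stand in for the analyzer's DB tables, and TOP_N, the per-protocol aggregation key, and the simultaneous roll-up are our assumptions rather than the authors' schema.

    from collections import defaultdict

    TOP_N = 100                                    # assumed table bound
    TIME_UNITS = ("5min", "hour", "day", "month", "year")

    class ThroughputAnalyzer:
        def __init__(self):
            # one bounded table per time unit; dicts stand in for DB tables
            self.tables = {u: defaultdict(int) for u in TIME_UNITS}

        def add_period(self, flows):
            """flows: iterable of (protocol, byte_count) for one 5-min period."""
            for proto, nbytes in flows:
                for unit in TIME_UNITS:    # the 5-min update rolls up to hour, etc.
                    self.tables[unit][proto] += nbytes
            self.trim()

        def trim(self):
            # keep only the most significant N entries, bounding the DB size
            for unit, table in self.tables.items():
                if len(table) > TOP_N:
                    top = sorted(table.items(), key=lambda kv: -kv[1])[:TOP_N]
                    self.tables[unit] = defaultdict(int, top)

Because every table is pre-aggregated and trimmed as data arrives, a report page only has to read a small, fixed-size table, which is consistent with the sub-second report generation claimed above.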
3.5 Presentation
This phase provides analyzed data to the corresponding applications. Because the header information of all packets is stored in the flow store, compressed into flows, NG-MON can provide information to applications in a flexible and efficient way. For example, NG-MON can provide the necessary information to billing applications for IP networks, intrusion detection systems, and so on.
4 Design Analysis
We have validated our design of NG-MON analytically for monitoring high-speed networks. We have already described the flexibility and scalability of our design: at each phase, we can assign a number of systems for load distribution, or merge several phases into one system. The appropriate number of systems is determined by the following analysis.
4.1 Assumptions
The monitored network link speed is 10 Gbps, and our system captures all packets, inbound and outbound. The size of a single packet header data record is 28 bytes, and that of a single flow data record is 32 bytes. From these we calculate the amount of data to be processed per second in a probe, a flow generator, and a flow store. The average number of packets per flow (Cavg) is 16, derived from a flow generator test on our campus network; the average packet size (Pavg) is 550 bytes, from the same test. In this numerical analysis we use the values shown in Table 1.

Table 1. Symbols and their values

  Symbol   Description                        Value
  L        Link speed                         10 Gbps
  d        Full duplex factor                 2
  Hp       A single packet header data size   28 bytes
  Hf       A single flow data size            32 bytes
  Cavg     Average packet count per flow      16
  Pavg     Average packet size                550 bytes

4.2 The Amount of Data to Be Processed
The total raw packet data (Tp) in the packet capture phase, the total raw packet header data (Th) in the flow generation phase, and the total flow data (Tf) in the flow store phase, processed in one second, are as follows:

  Total raw packets: Tp = L × d / 8 = (10 × 2) / 8 = 2.5 Gbytes/sec

  Total raw packet header data:
  Th = (Tp / Pavg) × (Hp + (MAC header + IP header + UDP header) / (raw packet headers per export UDP packet))
     = (2500 / 550) × (28 + (14 + 20 + 8) / 50) = 131.1 Mbytes/sec

  Total flow data: Tf = (Tp / Pavg) × (Hf / Cavg) = (2500 / 550) × (32 / 16) = 9 Mbytes/sec
One second is too short to be used as an exporting period in the flow generation phase, so instead of the per-second flow data volume (Tf) we consider the flow data accumulated over a minute (1 min-Tf), five minutes (5 min-Tf), and an hour (1 hour-Tf):

  1 min-Tf = Tf × 60 = 540 Mbytes
  5 min-Tf = 2.7 Gbytes
  1 hour-Tf = 32.4 Gbytes

If we choose one minute as the exporting period, each flow store requires only 540 Mbytes of disk space; if we choose 5 minutes, 2.7 Gbytes are required per flow store.
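The arithmetic of this section can be replayed directly from Table 1; the following snippet is only a numerical check of the formulas above, not code from the paper.

    L = 10e9           # link speed, bits/sec
    d = 2              # full duplex factor
    Hp, Hf = 28, 32    # bytes per header record / flow record
    Cavg, Pavg = 16, 550
    ENCAP = 14 + 20 + 8            # MAC + IP + UDP headers per export message
    N = 50                         # header records per export message

    Tp = L * d / 8                              # 2.5e9 bytes/sec
    Th = (Tp / Pavg) * (Hp + ENCAP / N)         # ~131.1e6 bytes/sec
    # ~9.09e6 bytes/sec; the paper rounds Tf to 9 Mbytes/sec before scaling,
    # which yields its 540 Mbytes and 32.4 Gbytes figures
    Tf = (Tp / Pavg) * (Hf / Cavg)

    print(f"Tp = {Tp / 1e9:.1f} Gbytes/sec")
    print(f"Th = {Th / 1e6:.1f} Mbytes/sec")
    print(f"1 min-Tf  = {Tf * 60 / 1e6:.0f} Mbytes")
    print(f"1 hour-Tf = {Tf * 3600 / 1e9:.1f} Gbytes")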
4.3 Allocation of Systems to Each Phase
The amount of data to be processed at each phase is typically beyond the processing capacity of a single off-the-shelf general-purpose PC. For example, a single probe cannot process the 2.5 Gbytes of raw packets per second because of the limited network interface card (NIC) and PCI bus capacity of a PC. Several subsystems limit the monitoring capacity of a single system: NIC bandwidth, PCI bus bandwidth, memory, CPU processing power, storage, and so on. Consider a computer system with a gigabit Ethernet NIC, a 66 MHz/64-bit PCI bus, 1 Gbyte of dual-channel RDRAM (800 MHz/16-bit), and a 2 GHz Pentium 4. The theoretical maximum transfer rate of the gigabit Ethernet NIC is 125 Mbytes/sec, of the PCI bus 533 Mbytes/sec, and of the dual-channel RDRAM 3.2 Gbytes/sec [6, 12]. A probe can have multiple NICs: one for sending raw packet header data to the flow generators, the others for capturing. Four NICs fit within the bandwidth of one PCI bus, as far as the number of PCI slots permits. Therefore, 7 probe machines are required to receive the 2.5 Gbytes of raw packets per second on a full-duplex 10 Gbps link. The flow generation phase receives 131.1 Mbytes of raw packet header data per second (Th), so theoretically one flow generator with 2 NICs suffices. In the flow store phase, although one system is sufficient to receive the flow data, the execution time of queries affects the required number of flow stores. This execution time varies with the database system, the DB schema, the query construction, and so on, so the number of flow stores depends on these factors. In our previous work [4], it took about 4 seconds to insert 20 Mbytes of raw packet headers into a MySQL database running on an 800 MHz Pentium 3 with 256 Mbytes of memory. If we assume the runtime of an insert is O(N), it will take 150 seconds to insert the 1 min-Tf of data into the database. Here we assume the analysis system takes about 2 minutes for querying; it will then take 4 minutes and 30 seconds to insert and analyze one minute of flow data. As we have to finish all these
operations within 1 minute, 3 systems are required for inserting and 2 systems for analyzing in the flow store phase.

Table 2. The required number of systems in each phase

  Link speed   Packet Capture   Flow Generation   Flow Store   Total
  100 Mbps     1 (all three phases merged in one system)       1
  1 Gbps       1 (capture and generation merged)   2           3
  10 Gbps      7                1                  5           13
Therefore, approximately 13 systems (7 in the packet capture phase, 1 in the flow generation phase, and 5 in the flow store phase) are required to provide flow data to analysis systems for a fully-utilized, full-duplex 10 Gbps network. In a 100 Mbps network, the amount of flow data per minute is less than 10 Mbytes, so all three phases can be merged into one system. In a 1 Gbps network, the packet capture and flow generation phases can be merged into one system with 3 NICs, and the flow store phase can consist of 2 systems, provided the time to insert and query the 1 min-Tf of a 1 Gbps network is less than a minute per operation. Table 2 summarizes the required number of systems in each phase for 100 Mbps, 1 Gbps, and 10 Gbps links.
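The 10 Gbps row of Table 2 can be re-derived from the bandwidth figures of this section. The ceiling arithmetic below is our reading of the authors' reasoning: three capture NICs per probe (the fourth reserved for exporting), two receiving NICs per flow generator, and the paper's 150-second insert and assumed 2-minute query times.

    import math

    NIC_BW = 125e6        # gigabit Ethernet NIC, bytes/sec
    CAPTURE_NICS = 3      # per probe: 4 NICs, one reserved for exporting
    Tp, Th = 2.5e9, 131.1e6

    probes = math.ceil(Tp / (CAPTURE_NICS * NIC_BW))   # -> 7
    generators = math.ceil(Th / (2 * NIC_BW))          # 2 receiving NICs -> 1

    insert_sec = 150      # the paper's estimate for inserting 1 min-Tf
    query_sec = 120       # the assumed analyzer query time
    stores = math.ceil(insert_sec / 60) + math.ceil(query_sec / 60)  # 3 + 2 = 5

    print(probes, generators, stores)   # 7 1 5, total 13 as in Table 2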
5 Related Work
Table 3 compares NG-MON with other passive network monitoring systems. Ntop [13] is a monitoring software system that provides detailed protocol analysis and a graphical user interface. However, from our experience deploying it in an operational network, Ntop is not suitable for monitoring high-speed networks: it cannot scale beyond monitoring a fully-utilized 100 Mbps link. FlowScan [14] is a NetFlow analysis software system that uses cflowd [15], RRD Tool [16], and arts++ [17]. It can only process the NetFlow [18] data format and is not suitable for monitoring high-speed networks either. Ntop and FlowScan are appropriate for analyzing relatively low-speed networks such as a WAN-LAN junction or a LAN segment; our approach of distributing the processing load could be applied to both to improve their processing capabilities. CoralReef [19] is a package of libraries, device drivers, classes, and applications for network monitoring developed by CAIDA [20]; it can monitor network links of up to OC-48. With respect to load distribution, only CoralReef suggests a separation of flow generation and traffic analysis, but without considering clustering of the processing systems in each stage. Sprint's IPMon project [21] developed a probe system that collects traffic traces, which are transferred to a laboratory for off-line analysis. Their approach uses purpose-built hardware to assist packet capturing and processing.
Table 3. A Comparison of NG-MON with Related Work
[Table 3 compares the systems along the rows Input, Output, Speed, Solution, Sampling used, and Analysis; only the beginning of the Ntop column is recoverable here: Input = raw traffic, NetFlow; Output = throughput. The remaining cells are missing.]

Handover Main State. When the service monitoring procedures in backup main state indicate a failure, a backup host needs to become the new master. As the failure indication might be provoked not only by a service failure, but also by a failure in the backup host's own network connectivity, these cases have to be separated before the service takeover can finally be completed. This distinction is the main purpose of the handover main state (Fig. 6). In handover start state, the downlink is checked by downlinkChk() to ensure that communication with the other hosts is functional. On success, in downstream takeover state, a DnstrTO message is sent to announce the beginning takeover to the other hosts in the redundancy cluster, especially to the master host. The master may now reply with a DnstrTODeny message (see service main state), leading to a fall back to backup main state. If none is received, in handover uplink check state the uplink is first activated by activateUplink() and then tested by uplinkChk(). If it succeeds, both links are proven to be functional, so taking over the upstream is signaled to the other hosts by sending an UpstrTO message in upstream takeover state. If the current master host raises its veto by sending an UpstrTODeny message (see service main state), the uplink is shut down by deactivateUplink() and the host remains in backup main state. Otherwise the takeover is completed and service main state is reached. The takeover is interrupted if a DnstrTO, UpstrTO, or SvcTO message with a priority higher than the host's own is received, since this indicates that another host is going to take over the service. When any of the link checks fails, the host is considered inoperable for backup use and the state machine quits.

[Figure: state machine with states handover start, downstream takeover, handover uplink check, upstream takeover, and handover interrupt; transitions guarded by downlinkChk(), uplinkChk(), DnstrTO/UpstrTO/SvcTO messages, deny timeouts, and priority comparisons]

Fig. 6. Handover Main State
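Read as code, the handover procedure above might be sketched as follows. The messaging layer, timers, and check functions are stubs; the names mirror the figure's labels rather than any published implementation, and the priority-based interruption of a running takeover is omitted for brevity.

    class HandoverError(Exception):
        """Raised when a link check fails: host unusable for backup duty."""

    class BackupHost:
        def __init__(self, cluster):
            self.cluster = cluster   # stub for link checks and cluster messaging

        def handover(self):
            """Triggered by a failure indication in backup main state."""
            if not self.cluster.downlink_chk():          # handover start
                raise HandoverError("downlink check failed")
            self.cluster.send("DnstrTO")                 # downstream takeover
            if self.cluster.deny_received("DnstrTODeny"):
                return "backup main"                     # master vetoed
            self.cluster.activate_uplink()               # handover uplink check
            if not self.cluster.uplink_chk():
                raise HandoverError("uplink check failed")
            self.cluster.send("UpstrTO")                 # upstream takeover
            if self.cluster.deny_received("UpstrTODeny"):
                self.cluster.deactivate_uplink()
                return "backup main"
            return "service main"                        # takeover complete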
Service Main State. A host in service main state is the master host that currently executes the service. It is furthermore responsible for maintaining the status table; to avoid inconsistencies, the table is changed only on the master host. In addition, local monitoring of the service is performed, and on-demand checks are carried out.
[Figure: service main state diagram; visible state labels include update status table, activate service / activateSvc(), svc full check, service active, deny upstream takeover (send UpstrTODeny), and down decide, with transitions on rcv. SvcCheck, rcv. SvcTO, rcv. UpstrTO, and SvcReqChkTimer / SelfChkTimer events]

When entering service main state, first the status table is updated to reflect the new situation, which is announced to the other hosts in the redundancy cluster